Morality and Artificial Intelligence

A Pair of Projects

The title of this post might sound very serious, but really I just wanted to take a look at two projects that play with the relationship between morality and artificial intelligence.

The first is Delphi, operated by the Allen Institute for Artificial Intelligence.

Delphi

Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. You enter a question with a moral aspect, and the website offers a judgment on whether what you are proposing is right or wrong.

There are lots of suggestions for question ideas, such as whether it is OK to kill a bear, or to ignore a call from your boss during working hours, among many others. Or you can invent your own.

I asked whether it was OK to lie to your children about your own alcohol intake, and the answer given was that this is not right. You can then submit an argument, which I hope the machine analyzes and uses for future decisions. I suggested that such lies could perhaps be justified, for example if the aim was to prevent the children from becoming attracted to alcohol while a parent secretly fights addiction.
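Just to make the shape of the interaction concrete, here is a minimal sketch of how you might query a Delphi-style service from code. The endpoint URL and the "judgment" field are placeholders I invented for illustration; they are not the real Delphi API.

```python
import requests

# Purely illustrative: the endpoint URL and the "judgment" field are
# invented placeholders, not the real Delphi API.
DELPHI_URL = "https://example.org/delphi/judge"

def moral_judgment(situation: str) -> str:
    """Ask a Delphi-style service whether a situation is right or wrong."""
    response = requests.get(DELPHI_URL, params={"action": situation}, timeout=10)
    response.raise_for_status()
    return response.json()["judgment"]  # e.g. "it's wrong"

print(moral_judgment("lying to your children about your own alcohol intake"))
```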

The creators have written an academic paper that describes their work. I have taken the following from it:

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

The paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present COMMONSENSE NORM BANK, a moral textbook customized for machines, which compiles 1.7M examples of people’s ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.
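The accuracy figures in that abstract are, at heart, just the fraction of model verdicts that human annotators agreed with. Here is a minimal sketch of that kind of check, using toy data I made up:

```python
# Toy example of vetting a model's moral verdicts against human judgments.
# Both lists are invented for illustration.
human_labels = ["wrong", "okay", "wrong", "okay", "wrong"]
model_labels = ["wrong", "okay", "okay", "okay", "wrong"]

correct = sum(h == m for h, m in zip(human_labels, model_labels))
accuracy = correct / len(human_labels)
print(f"accuracy: {accuracy:.1%}")  # -> accuracy: 80.0% on this toy data
```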

Moral Machine

The second website is Moral Machine, also a university-led research project (in this case run by a consortium).

On this website you are asked to judge a series of scenarios related to driverless car technology. You are shown two possible courses of action in the event of an accident, and you choose which you would take.

At the end your answers are analyzed in terms of your preferences, and you can take a survey to participate in the research.

This is also quite challenging and fun. Do you hit the young or the old, the overweight or the fit?
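For a rough idea of how such a preference summary could be computed, here is an illustrative sketch that simply tallies which attributes you chose to spare across scenarios. The data structure and attribute names are my own invention, not Moral Machine's actual implementation.

```python
from collections import Counter
from dataclasses import dataclass

# Invented model of a Moral Machine-style dilemma: each choice records
# the attributes of the people the player decided to spare.
@dataclass
class Outcome:
    spared: list[str]  # e.g. ["young", "fit"]

def tally_preferences(choices: list[Outcome]) -> Counter:
    """Count which attributes the player spared across all scenarios."""
    counts = Counter()
    for outcome in choices:
        counts.update(outcome.spared)
    return counts

# A player who twice spared the young and once the fit:
choices = [Outcome(["young"]), Outcome(["young", "fit"]), Outcome(["old"])]
print(tally_preferences(choices))  # Counter({'young': 2, 'fit': 1, 'old': 1})
```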

There is a link to a cartoon series and a book, summarized as follows:

The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don’t find themselves facing such moral dilemmas as “should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?” Human brains aren’t fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision—to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people—eventually, millions of them, from 233 countries and territories—to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It’s up to humans to decide how many fatal accidents we will allow these cars to have.

10 minutes of thought-provoking fun. You might want to follow up with a look at this little booklet prepared by the Bassetti Foundation about the self-driving society. I wrote some of it!

Artificial Intelligence for a Better Future

Why not join Bernd Carsten Stahl for the launch of his new Open Access book on Artificial Intelligence for a Better Future on 28 April, at 16:00 CET?

In his new book Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies, Bernd Carsten Stahl raises the question of how we can harness the benefits of artificial intelligence (AI) while addressing potential ethical and human rights risks.

As many of you will know, this question is shaping current policy debate, exercising the minds of researchers and companies and occupying citizens and the media alike.

The book provides a novel answer. Drawing on the work of the EU project SHERPA, the book suggests that using the theoretical lens of innovation ecosystems, we can make sense of empirical observations regarding the role of AI in society. This perspective allows for drawing practical and policy conclusions that can guide action to ensure that AI contributes to human flourishing.

The one-hour book launch, co-organised by the SHERPA project, Springer (the publisher) and De Montfort University, features a critical discussion between author Prof. Bernd Stahl and a high-profile panel comprising Prof. Katrin Amunts, Prof. Stéphanie Laulhé Shaelou and Prof. Mark Coeckelbergh, moderated by Prof. Doris Schroeder.

The panel discussion will include a question and answer session open to members of the audience.

You can find more information about the launch event and register here, and the book can be downloaded here.
If you would like to know more about the author’s work, you can find an introduction to some of his earlier work here.

The Edge of Knowledge

“Life is a travelling to the edge of knowledge, then a leap taken.” – David Herbert Lawrence

What do you think about machines that think? Or should I say machines who think? That is the 2015 EDGE question.

Edge of Knowledge

Edge.org was launched in 1996 as the online version of “The Reality Club” and as a living document on the Web to display the activities of “The Third Culture”. What is the third culture? To put it in their words: ‘the third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are’.

And members (Edgies) have been responding to an annual question now for some time.

What should we be worried about? What is your favourite explanation? How is the Internet changing the way you think? What will change everything? You know, just regular questions. OK, big questions.

And this group contains a lot of famous names. Well, “made up of famous people from many walks of life” is a better description, and so the answers are extremely interesting. Go and check a few out on the website.

Artificial Intelligence

As the preamble on the website puts it:

“In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can “really” think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These “AIs”, if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “Our Final Hour” (Martin Rees). And Stephen Hawking recently made international headlines when he noted “The development of full artificial intelligence could spell the end of the human race.”

But wait! Should we also ask what machines that think, or, “AIs”, might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is “their” society “our” society? Will we, and the AIs, include each other within our respective circles of empathy?”

So how close are we to these predictions, dreams and nightmares? There is plenty of material on the web to feed the interested, and developments will surely continue to move in that direction. Last week we learned that a computer can work out aspects of your personality from your social media use (see the post here). But intelligence? Computing variables is not intelligence.

And can we say that learning is intelligence? Computers can certainly learn, but can they think? Can they reason? What does it mean to think? To make a decision based on what? If the decision is based on experience, then to some extent it is a calculation, or a computation, and if that is the case, then a computer can think.
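To make that last thought concrete, here is “a decision based on experience is a computation” in its most literal form: a toy decision rule that does nothing but consult stored past cases. The data is invented.

```python
# Invented "experience": pairs of (observed risk level, decision taken).
experience = [
    (0.9, "stop"),
    (0.8, "stop"),
    (0.2, "go"),
    (0.1, "go"),
]

def decide(risk: float) -> str:
    """Decide by recalling the most similar past situation."""
    nearest = min(experience, key=lambda memory: abs(memory[0] - risk))
    return nearest[1]

print(decide(0.75))  # -> "stop": the "decision" is just a calculation
```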

So back to the question. What Do You Think About Machines That Think?