Morality and Artificial Intelligence

A Pair of Projects

The title of this post might sound very serious, but really I wanted to take a look at two projects that play with the relationship between morality and artificial intelligence.

The first is Delphi, operated by the Allen Institute for Artificial Intelligence.

Delphi

Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. You enter a question with a moral aspect, and the website responds with a judgment on whether what you are proposing is right or wrong.

There are lots of suggested question ideas, such as whether it is OK to kill a bear or to ignore a call from your boss during working hours, among many others. Or you can invent your own.
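If you are curious, you can even imagine poking at a service like this from code. The short Python sketch below is purely illustrative: Delphi is presented as a web demo, and the endpoint URL, the action1 parameter and the response shape I use here are my own assumptions rather than a documented API, so check the site itself before relying on any of it.

    # A minimal, hypothetical sketch of querying a Delphi-like service.
    # The URL, the "action1" parameter, and the response structure are
    # assumptions for illustration -- Delphi is a web demo, and any real
    # API may differ or may not be publicly supported at all.
    import requests

    DELPHI_URL = "https://delphi.allenai.org/api/demo"  # assumed endpoint

    def ask_delphi(situation: str) -> str:
        """Send a free-text situation and return the model's verdict."""
        resp = requests.get(DELPHI_URL, params={"action1": situation}, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        # Assumed response shape: {"answer": {"text": "It's wrong"}}
        return data["answer"]["text"]

    if __name__ == "__main__":
        # One of the site's own suggested prompts
        print(ask_delphi("Ignoring a call from my boss during working hours"))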

I asked whether it was OK to lie to your children about your own alcohol intake, and the answer given was that this is not right. You can then submit an argument, which I hope the machine analyzes and uses for future decisions. I suggested that such lies might sometimes be justified, for example if the aim was to stop children being drawn to alcohol while a parent was secretly fighting addiction.

The creators have written an academic paper that describes their work. I have taken the following from it:

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

The paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present COMMONSENSE NORM BANK, a moral textbook customized for machines, which compiles 1.7M examples of people’s ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.

Moral Machine

The second website is Moral Machine, also a university-led research project (in this case run by a consortium).

On this website you are asked to judge a series of scenarios related to driverless car technology. You are shown two possible courses of action in the event of an accident, and you choose which you would take.

At the end your answers are analyzed in terms of your preferences, and you can take a survey to participate in the research.
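As a rough idea of what that end-of-session analysis might look like, here is a toy sketch. The dimensions, data and scoring are my own invented simplification for illustration, not the project’s actual methodology.

    # A toy illustration of the kind of preference summary Moral Machine
    # shows at the end of a session. The dimensions and scoring here are
    # a simplification of my own, not the project's actual method.
    from collections import defaultdict

    # One entry per scenario: which dimension the scenario contrasted and
    # which side the respondent chose to spare.
    answers = [
        ("age", "young"), ("age", "old"), ("age", "young"),
        ("fitness", "fit"), ("fitness", "fit"),
    ]

    def summarize(answers):
        """For each dimension, the fraction of choices favoring each side."""
        tallies = defaultdict(lambda: defaultdict(int))
        for dimension, spared in answers:
            tallies[dimension][spared] += 1
        return {
            dim: {side: round(n / sum(sides.values()), 2) for side, n in sides.items()}
            for dim, sides in tallies.items()
        }

    print(summarize(answers))
    # {'age': {'young': 0.67, 'old': 0.33}, 'fitness': {'fit': 1.0}}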

This is also quite challenging and fun. Do you hit the young or the old, the overweight or the fit?

There is a link to a cartoon series and a book, which is summarized as follows:

The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don’t find themselves facing such moral dilemmas as “should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?” Human brains aren’t fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision—to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people—eventually, millions of them, from 233 countries and territories—to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It’s up to humans to decide how many fatal accidents we will allow these cars to have.

Ten minutes of thought-provoking fun. You might want to follow up with a look at this little booklet prepared by the Bassetti Foundation about the self-driving society. I wrote some of it!

Ninth Annual Winter School on Emerging Technologies

Do you fit the requirements for the Ninth Annual Winter School on Emerging Technologies: Accelerating Impactful Scholarship, supported by the National Nanotechnology Coordinated Infrastructure and running January 3-10, 2022?

The National Nanotechnology Coordinated Infrastructure Coordinating Office is now supporting the winter school, run by the School for the Future of Innovation in Society at Arizona State University, covering fees and accommodation costs.

The Winter School will give junior scholars and scientists an introduction to, and practical experience with, methods and theory for better understanding the social dimensions of emerging technologies. It focuses on the broad notion of impact, with the aim of exploring ways for participants to increase and diversify the impact of their work.

This year’s program will begin with a series of interactive sessions with faculty members to explore a variety of ways in which research can have a positive impact beyond the specific studies involved. The program will conclude with a multi-day immersive “sandpit” experience, where participants will form teams and pitch projects aimed at increasing the impact of scholarship. Successful teams will be awarded funding to help them implement their ideas over the year following the program.

Ample work time and breaks are built into the Winter School schedule to encourage participants to guide their own learning experience throughout the week. Mentorship sessions with attending faculty will also be offered.

The Winter School is an immersive experience for scholars to share their own unique research and learn from peers and experts. The faculty at the Winter School will offer theoretical framings, analytical tools and hands-on lessons in how social science, natural science, and engineering research on emerging technologies can have a greater impact on the world.

Participating in the Winter School will enrich your networks and provide ample opportunities to share ideas, collaborate with peers, and develop proposals to enhance the impact of your work.

Applicants should be advanced graduate students or recent PhDs (post-doc or untenured faculty within three years of completing a PhD at time of application) with an expressed interest in studying emerging technologies such as nanotechnology, robotics, synthetic biology, geoengineering, artificial intelligence, etc.

Applicants may come from any discipline and must be demonstrably proficient in English.

The program will spend its ninth consecutive year at Saguaro Lake Ranch in Mesa, AZ, with access to Sonoran Desert hiking, kayaking on Saguaro Lake, horseback riding and relaxing by the Salt River.

The program fees for accepted students will be covered by the NNCI, including seven nights at Saguaro Lake Ranch, meals and local transportation from Tempe, Arizona. Participants will be responsible for their own travel to Phoenix, Arizona, and should arrive before 1pm on January 3rd.

To access an application and learn more about the 2022 Winter School program, visit the dedicated website. Participants are requested to be fully vaccinated before they arrive at the ranch.

DEADLINE FOR APPLICATIONS IS MONDAY, NOVEMBER 8, 2021. Spread the word!

Art in Responsible Innovation, Maurizio Montalti in Conversation

Long ago, back in February of 2015, I wrote this post about Maurizio Montalti and his work with fungus.

Montalti produces various materials through what he calls a collaboration with living organisms: compostable materials that can be used to replace plastics and chemical-based products.

Since I first met him, he has begun to produce a host of materials on an industrial scale with the founding of his company MOGU. Earlier this summer I was fortunate enough to catch up with him again and record the video interview you find below, part of my Art in Responsible Innovation series for the Bassetti Foundation.

Maurizio is a designer, scientist and artist whose work is extremely innovative, research- and experiment-based, and perched on the border between art, design and biology. He has been active in promoting responsibility within innovation throughout his career, with lots of ideas around sustainability, science communication and the role of science in society.

Learn more about this intriguing character and his work through the video and podcast below.