OECD Conference on Technology in and for Society

In this post I would like to offer some takeaways and personal thoughts on the recent OECD Conference on Technology in and for Society, held on the 6th and 7th of December 2021.

Innovating Well for Inclusive Transitions

The conference rationale was Innovating Well for Inclusive Transitions, based on the argument that the world faces unprecedented challenges in health, food, climate change and biodiversity, and that solutions will require system transition or transformation. The technologies involved may bring fear of negative consequences and problems with public acceptance, as well as raise real issues of social justice (primarily of equal access; COVID vaccination inequality is an obvious current example).

Good governance and ethics will therefore be necessary to harness technology for the common good.

Towards a framework for the responsible development of emerging technologies

The following is taken from the rationale page of the conference website:

The conference will explore values, design principles, and mechanisms that operate upstream and at different stages of the innovation value chain. Certain policy design principles are increasingly gaining traction in responsible innovation policies, and provide an organising structure for the panels in the conference:  

Inclusivity, diversity and stakeholder engagement

Stakeholder and broader public engagement can be means to align science and technology with societal values, goals and needs. This includes the involvement of stakeholders, citizens, and actors typically excluded from the innovation process (e.g. small firms, remote regions, and certain social groups such as minorities). The private sector too has a critical role to play in governance.

Goal orientation

Policy can play a role in better aligning research, commercialisation and societal needs. This implies investing in public and private sector research and development (R&D) and promoting “mission-oriented” technological transformations that better connect innovation impacts to public policy needs. At the same time, such innovation and industrial policies need to be transparent, open and well-designed so they foster deliberation, produce value for money, and do not distort competition.

Anticipatory governance

From an innovation perspective, governance approaches that engage at a late stage of the innovation process can be inflexible, inadequate and even stifling. More anticipatory kinds of governance, like new technology assessment methods, foresight strategies and ethics-by-design, can enhance the capacity to govern well.

The conference included round-table and panel events alongside institutional presentations, introductions, scene setting and wrap-ups. Video of each event is available via the conference website, supported by an introductory paragraph and a series of questions.

One of the roundtables I attended may be of particular interest to Technology Bloggers readers as it was all about carbon neutrality:

Realising Net Carbon Neutrality: The Role of Carbon Management Technologies

Description

Reaching net carbon neutrality is one of the central global challenges we face, and technological development will play a key role. A carbon transition will necessitate policies that promote sustainable management of the carbon stored in biomass, but not exclusively so: technology is increasingly making it possible to recycle industrial sources of carbon, thus making them renewable. The idea of “carbon management” may capture the different facets of the answer: reduce the demand for carbon; reuse and recycle the carbon in the bio- and technosphere; and remove carbon from the atmosphere. But a reliance on technologies for carbon capture and usage (CCU) and carbon capture and storage (CCS) may present barriers for other more radical transformations.

● What knowledge is necessary to better guide national and international policy communities as they manage emerging technology portfolios for carbon management?

● What can more holistic approaches to carbon management offer for developing technology pathways to net carbon neutrality?

● What policies could ensure that one technology is not a barrier for implementation of another?

I took a lot of notes, including the following points:

● What kind of technology and knowledge is necessary when steering the development of emerging technology?

● There are both opportunities and challenges in finding the right mix between technology and policy.

● Carbon capture alone will not be viable; we have to reduce emissions.

● The energy transition will have to be dramatic, but there is no international agreement on the phasing out of carbon fuels.

● There is an immediate need for investment, social acceptance and political will.

● Use technology that is available today rather than relying on the language of future innovation.

● Policy-makers have to see the whole picture; cutting carbon from just some of the big emitters will not be enough.

● Real structural change is necessary.

● The old economic sectors and the poor should not be the ones who pay.

● Success requires not only information, but communication.

● The truth about both economic and social costs should be made available.

Why not watch the video here? It’s just over an hour long.

Plastic Recycling in the Netherlands

Last week I put my plastic, can and carton recycling wheelie bin out for collection for the last time. The cities of Utrecht and Amsterdam have decided to let us put our plastic, cans and cartons in with the regular waste, rather than separating them into their own special bin.

This might sound strange, even a backward step, but that is not the case. Over the last two years, Utrecht City Council has conducted a study into plastic waste recycling and discovered something unexpected: recycling percentages can be improved by separating the waste mechanically.

The research found that when the population is asked to separate plastic, cans and cartons from their household waste, the recycling percentage sits at about 26%, but if the process is conducted mechanically on all household waste, this rises to 51%.

I should add at this point that paper, glass and organics will still be collected separately.

There is a huge plastic separation system currently in operation in Rotterdam; take a look at this video. It's impressive, although it works with materials that have already been separated at home. Also of note to me is that the waste is transported by boat.

The system uses magnets and infrared cameras to identify and separate the different types of material, and appears to be so precise that it can be used on regular, unsorted waste, as described in this video (in Dutch).

I would also like to add that here plastic bottles carry a deposit that is refunded at the supermarket. 25 cents is added to the price of your water or cola; you take the empty bottle back to the supermarket and feed it into a machine (along with your glass), which prints out a receipt that is deducted from your shopping bill. As the photo at the top of this post shows, such an approach seems to work: fewer bottles are left on the streets, and fewer are thrown away.

I first came across this idea in Norway more than a decade ago. Collecting bottles that tourists had thrown away in the city centres was a good source of income for university students.

Morality and Artificial Intelligence

A Pair of Projects

The title of this post might sound very serious, but really I want to take a look at two projects that play with the relationship between morality and artificial intelligence.

The first is Delphi, operated by the Allen Institute for Artificial Intelligence.

Delphi

Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. You enter a question with a moral aspect, and the website offers you a response on whether what you are proposing is right or wrong.

There are lots of suggested questions, such as whether it is OK to kill a bear or to ignore a call from your boss during working hours, among many others. Or you can invent your own.

I asked whether it was OK to lie to your children about your own alcohol intake, and the answer given was that this is not right. You can then submit an argument, which I hope the machine analyses and uses for future decisions. I suggested that such lies could perhaps be justified, for example if the aim was to prevent the children from becoming attracted to alcohol while their parents were secretly fighting addiction.

The creators have written an academic paper that describes their work. I have taken the following from it:

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

The paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present COMMONSENSE NORM BANK, a moral textbook customized for machines, which compiles 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.
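For anyone curious to experiment, the sketch below is not Delphi itself (the authors' model and dataset are their own). It is a minimal illustration, assuming the Hugging Face transformers library and a generic instruction-tuned model (google/flan-t5-base is just an example choice), of what a "zero-shot" moral judgment from an off-the-shelf language model looks like: the kind of baseline the paper contrasts Delphi with.

```python
# Illustrative sketch only: this is NOT the Delphi model or its API.
# It shows a "zero-shot" moral judgment from a generic, off-the-shelf
# language model, the kind of baseline the paper compares against.
from transformers import pipeline  # assumes the Hugging Face transformers library is installed

# Any general-purpose instruction-tuned model will do for the sketch;
# google/flan-t5-base is used here purely as an example.
judge = pipeline("text2text-generation", model="google/flan-t5-base")

def moral_judgment(situation: str) -> str:
    """Ask the model whether a described action is morally acceptable."""
    prompt = (
        "Is the following action morally acceptable? "
        f"Action: {situation}. Answer yes or no, with a short reason."
    )
    result = judge(prompt, max_new_tokens=40)
    return result[0]["generated_text"]

print(moral_judgment("lying to your children about your own alcohol intake"))
```

As the paper's GPT-3 comparison suggests, answers from a generic model like this should be taken with a large pinch of salt; teaching machines anything like human moral judgment evidently takes much more than scale.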

Moral Machine

The second website is Moral Machine, also a university-led research project (in this case run by a consortium).

On this website you are asked to judge a series of scenarios related to driverless car technology. You are shown two possible courses of action in the event of an accident and you choose which you would take.

At the end your answers are analysed in terms of your preferences, and you can take a survey to participate in the research.

This is also quite challenging and fun. Do you hit the young or the old, the overweight or the fit?

There is a link to a cartoon series and a book, summarised as follows:

The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don’t find themselves facing such moral dilemmas as “should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?” Human brains aren’t fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision—to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people—eventually, millions of them, from 233 countries and territories—to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It’s up to humans to decide how many fatal accidents we will allow these cars to have.

10 minutes of thought-provoking fun. You might want to follow up with a look at this little booklet prepared by the Bassetti Foundation about the self-driving society. I wrote some of it!