Responsibility-by-design

Throughout last year I worked with CEN on CWA 17796, Responsibility-by-design – Guidelines to develop long-term strategies (roadmaps) to innovate responsibly. It is now available to download and use.

CEN is an association that brings together the National Standardization Bodies of 34 European countries, providing a platform for the development of European Standards and other technical documents in relation to various kinds of products, materials, services and processes.

This is what they say about themselves: The CEN works together with national standards bodies to create documents established by consensus and approved by a recognized body that provide, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context.

This document is a workshop agreement that provides guidelines to develop long-term strategies (roadmaps) for innovating responsibly, thereby helping organizations to achieve socially desirable outcomes from their innovation processes. The roadmaps encourage a “responsibility-by-design” approach that integrates considerations of technical, ethical, social, environmental, and economic aspects all along the research, development, and design process leading to an innovation.

After an introduction, the agreement offers an overview of the principles of Responsible Research and Innovation (reflection, anticipation, inclusion and responsiveness), before moving on to a section detailing the proposed framework.

The agreement closes with a series of annexes in which easy-to-interpret tables offer examples of RRI actions, tools, guideline applications, a SWOT analysis for implementation in industry, tools for stakeholder analysis, methods for stakeholder engagement, criteria for impact analysis and key performance indicators, before concluding with resources from other initiatives and a bibliography.

The idea is that it serves as a guide, offering suggestions on possible approaches that might help make innovation strategies more responsive and responsible, building on years of research and policy work promoted by the European Commission.

Practical rather than abstract, it can be downloaded here for ten euros.

Some Thoughts on Bias

A Little Story of Bias

A father was driving his two children to watch a football match when they were involved in a terrible accident. The driver was killed immediately, as was one of the boys. The youngest child, who was sitting in the back in his car seat, survived the accident but was seriously injured.

The young child was taken to hospital and rushed into an operating theatre, where the team hoped they could save his life.

The doctor entered the room, looked at the patient, froze and said: “I cannot operate on this boy, he is my son!”

Bias within Data

If you found yourself asking how the boy could be the doctor’s son, you have fallen into the trap of bias. The doctor in the story is the child’s mother (obviously), but that may not be the first solution that comes to mind. In many societies we are brought up to see doctors as male and nurses as female. This has really big implications if we use computers to search for information, though: a search engine that uses content generated by humans will reproduce the bias that unintentionally sits within that content.

The bias can also come from how the system itself is built. For example, if a company offers a face recognition service and trains it on photos posted on the internet (categorized in some way by Google, for example), the dataset will contain far more white men than, say, young women of Asian background. The results will be more accurate for the category with the largest presence in the database.
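To make the mechanism a little more concrete, here is a minimal sketch in Python of how a classifier trained mostly on one group tends to end up more accurate for that group. It is not any vendor’s actual system; the groups, features and numbers are entirely made up for illustration.

```python
# A minimal, self-contained sketch (not any vendor's actual pipeline) of how
# a model trained on imbalanced data ends up more accurate for the group it
# has seen most. All groups, features and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-class data whose true decision boundary differs slightly per group.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is under-represented.
Xa, ya = make_group(9000, shift=0.2)
Xb, yb = make_group(1000, shift=-1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh samples from each group: accuracy is typically higher for the
# majority group, because the fitted boundary mostly follows that group.
Xa_test, ya_test = make_group(2000, shift=0.2)
Xb_test, yb_test = make_group(2000, shift=-1.5)
print("accuracy, majority group:", model.score(Xa_test, ya_test))
print("accuracy, minority group:", model.score(Xb_test, yb_test))
```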

If a banking system takes the case of a couple who declare an income together, it will presume that the man’s income is higher than the woman’s and treat the individuals accordingly, because from experience the data shows that men’s incomes are higher than women’s, and this generalization will become part of the structure.

The problem with language is also easy to see. If the pattern behind the doctor story above can in some way be ‘seen’ in the vast amount of text analyzed and used to build an algorithm, then proposals and offers will differ according to gender.

Let’s take how we describe ourselves for a moment. A male manager will use a set of descriptive terms to describe himself that differ from those used by a woman: he might be assertive, while she is more likely to be understanding and supportive. A system that unwittingly uses a dataset based upon (or even referring to) language used in job adverts and profiles of successful candidates will replicate a gender bias, because more proposals will be sent to people who use language that reflects the current make-up of the employment situation.

In short: more men will be using the language that the system picks up on, because more men than women in powerful positions use that type of language. The bias will be recreated and reinforced.
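To picture that loop, imagine a naive matching tool that scores new applicants against the words harvested from past successful profiles. The sketch below is hypothetical, with invented profiles and vocabulary, but it shows how the echo chamber forms.

```python
# A made-up sketch of the feedback loop described above: a naive matcher
# scores new applicants against the vocabulary of past "successful" profiles.
# The profiles and words below are invented purely for illustration.
from collections import Counter

# Language harvested from previously successful candidates, who historically
# came mostly from one group, so the vocabulary itself encodes that history.
past_successful_profiles = [
    "assertive decisive driven leader",
    "assertive competitive results driven",
    "supportive understanding collaborative",
]
vocabulary = Counter(word for p in past_successful_profiles for word in p.split())

def score(profile: str) -> int:
    # Higher score = closer to the historical "successful" vocabulary.
    return sum(vocabulary[word] for word in profile.split())

# Two equally capable applicants who simply describe themselves differently:
print(score("assertive driven leader"))          # matches the history, ranks higher
print(score("understanding supportive mentor"))  # matches it less, ranks lower
# Whoever echoes the historical language gets surfaced first, and the next
# generation of "successful" profiles looks even more like the last one.
```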

In 2018 the State of New York proposed a law on accountability in algorithms (take a look at this short description), and in 2020 the European Commission released a white paper, Artificial Intelligence – A European approach to excellence and trust. The argument may be more important than it first appears.

There is a lot of literature about this problem if you are interested; a quick online search will offer you plenty of food for thought.

Problems Counting People?

Implementing COVID Regulations

Here in the Netherlands the university system has just reopened after a short lockdown (again). There are still restrictions, however, on how many people are allowed into rooms: a maximum of 75 in any single space. This ruling was introduced last year, and it led to some developments that might be of interest to technology fans (and privacy fans, I should add).

Counting people manually as they enter and leave a room is a time-consuming and expensive approach, so two universities had the idea of using cameras and artificial intelligence to check how many people are in buildings and individual spaces.

Utrecht University ran a trial, while Leiden placed 371 cameras on the walls above the doors to each space.

The Leiden approach, however, caused a bit of a stink. The cameras were all placed and set up while the students were locked out of the buildings: the ideal time, we might say, to have people up ladders in front of doors. But such an approach can also be seen as trying to do something without too many people noticing.

And that is how some of the students saw their arrival, and a couple of them started to investigate for an article in the weekly university student magazine.

Counting entries and exits: well, nobody could be against that! The university has to do it by law. So discussion grew around the methods, the cameras and the data.

The university had bought 371 cameras from the Swiss manufacturer Xovis, at 600 euros apiece. So the question is: what can (and do) they register?

According to company spec, the system is capable of:

Counting students

Following their individual routes

Calculating an individual’s height

Estimating age

Suggesting mood (is an individual happy or angry)

Determining who is a staff member

Counting numbers in groups

Detecting incorrect facemask use.

Now these types of cameras are already in use in airports and shopping centres, to minimize waiting times (among other things), to calibrate advertising, and to work out the actual moment at which someone chooses to buy something. So such data does offer broad possibilities for analysis.

The slogan used by the manufacturer maybe lets the cat out of the bag a bit: ‘Way more than people counting.’

The cameras can of course be set to different levels of data collection and privacy, from level 3, fully anonymous (just numbers of people), down to level 0, which is a live feed of the images.
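To get a feel for what such a setting might mean in practice, here is a purely hypothetical sketch of a filter that decides how much of an observation ever leaves the device. It is not Xovis’s actual software, and the intermediate levels are my own assumption based on the description above.

```python
# A purely hypothetical sketch (not Xovis's actual software; the intermediate
# levels are my own assumption) of what a "privacy level" can mean in practice:
# the sensor observes a rich record, and a filter decides how much of it
# leaves the device.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    people_count: int
    heights_cm: Optional[List[int]] = None   # per-person estimates
    moods: Optional[List[str]] = None        # e.g. "happy" / "angry"
    raw_frame: Optional[bytes] = None        # the live image itself

def export(obs: Observation, privacy_level: int) -> dict:
    if privacy_level == 3:
        # Fully anonymous: only the count ever leaves the camera.
        return {"count": obs.people_count}
    if privacy_level >= 1:
        # Intermediate levels (assumed here): per-person attributes, no images.
        return {"count": obs.people_count,
                "heights_cm": obs.heights_cm,
                "moods": obs.moods}
    # Level 0: everything, including the live feed.
    return {"count": obs.people_count,
            "heights_cm": obs.heights_cm,
            "moods": obs.moods,
            "raw_frame": obs.raw_frame}
```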

Some Questions

Now I am no expert, but one problem seems to me to be that the system records lots of data which, at some point, someone filters before providing the dataset to the customer. Who, when, under which circumstances, who manages security of access: there are a lot of issues here. But they are not all negative. Such a system may be of use in a terrorist incident, for example, or in other sorts of emergency, and you can see why something more expansive might be chosen over a system that just counts movement. But there is a moral as well as a practical dilemma in choosing such an overkill solution to a simple problem.

The report the student investigators published in the weekly university magazine showed lots of security issues, and there were protests from the students who wanted the system taken down. Both Utrecht and Leiden have now stopped using the cameras.

But that is not a good result from a responsible innovation perspective. Lots of money was wasted, many people got upset, and two sides of an argument were constructed that are now at loggerheads with each other.

A change in public participation techniques might have avoided all of this. A lesson to be learned, I feel: informing without debate doesn’t work.

You can read the student report here and a local newspaper report here. All in Dutch though, so you might have to use some translation software.