
If artificial intelligence “works” today, it is not only for technical reasons, but because there is a sort of general belief in its usefulness in different contexts. There is a future, and this future works better, is more efficient, progresses, thanks to AI. In my work this is known as a sociotechnical imaginary (a term developed by Harvard University’s Sheila Jasanoff).
Jasanoff raises questions about the relationship between society and technological development. It is a kind of co-production: society and technology co-produce the future. The values held in a society are reflected in the technology it co-produces, so a future imagined with AI will probably end up including AI. If we think about the 20th century, we can find an easy example. In the 1880s the four-stroke petrol engine took off, and by 1886 the first petrol-powered motor car had been built: the start of the petrol-engine world that we know so well today.
Yet some 50 years earlier, Jacobi had already built an electric vehicle. The vision of the 20th century, however, included petrol, and visions lead to practices: research and funding flow toward the favoured vision, and it develops at the expense of other, less successful visions.
Today those in governance argue about how much money to invest in AI, which type of AI to develop, and whether to regulate it or leave it alone. Different visions, we might say, but they all share something: a belief in the importance of AI.
This leads me to a few questions:
What is the role of societal values in AI?
Who holds the intellectual property rights to the process, the data used and the results?
Is AI becoming a game for the ‘big boys?’
Opting in, or opting out?
Do you have the right to have your data excluded from calculations?
Privacy (not only data privacy, but also the use of AI to identify individuals from the way they walk, and so on).
Do institutions of governance inform the population about their use of AI? Do they have the technicians (and ethicists) necessary to implement it correctly? How effectively does the City of Amsterdam Algorithm Directory inform its residents? Or the Dutch national government's register? Could I understand anything within their databases?
Can AI lead to the amplification of prejudice?
Are we not entitled to an explanation of the decision-making process? The AI Act (a process that began in 2017) calls for a human-centred approach and explainable AI (XAI). According to the GDPR (the EU privacy regulation), residents have the right to an explanation of how a model reached its decision, and the AI service provider has the obligation “to make the logic behind a recommendation transparent and humanly understandable” (not only for yes/no decisions but also, for example, for travel planning).
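To make that requirement a little more concrete, here is a minimal sketch of what a per-decision explanation could look like. Everything in it is illustrative: the feature names, the toy data and the simple logistic regression stand in for whatever a real provider actually runs, and real XAI systems are far more involved.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a hypothetical recommendation; purely illustrative.
feature_names = ["income", "debt_ratio", "years_at_address"]

# Toy training data, only so the example runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Return a human-readable breakdown of one decision."""
    contributions = model.coef_[0] * applicant          # per-feature contribution
    score = contributions.sum() + model.intercept_[0]   # decision score
    decision = "approved" if score > 0 else "refused"
    lines = [f"Decision: {decision} (score {score:+.2f})"]
    # List features from most to least influential for this particular person.
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        direction = "up" if c > 0 else "down"
        lines.append(f"  {name}: pushed the decision {direction} by {abs(c):.2f}")
    return "\n".join(lines)

print(explain(np.array([1.2, -0.4, 0.3])))

The point of the sketch is simply that an explanation can be expressed in terms the affected person recognises (their own attributes and how each one pushed the outcome), rather than as an opaque score.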
How environmentally and economically sustainable is AI?
OpenAI requires ~3,617 HGX A100 servers (28,936 graphics processing units) to serve ChatGPT, so the cost per query is about 0.36 cents. Running ChatGPT also carries an environmental cost: it uses around 500 ml of water for every 5 to 50 prompts it answers, water that cools the supercomputers, which generate heat as they work (taken from this blog).
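A quick back-of-the-envelope check of those figures, assuming the standard 8-GPU configuration of an HGX A100 server (the 8-GPU count is my assumption; the rest are the numbers quoted above):

# Rough sanity check of the figures quoted above.
servers = 3_617
gpus_per_server = 8            # assumed: HGX A100 in its 8-GPU configuration
print(servers * gpus_per_server)   # 28,936 GPUs, matching the quoted total

# "500 ml per 5 to 50 prompts" translates to roughly 10-100 ml per prompt,
# depending on where in that range a given workload falls.
for prompts in (5, 50):
    print(f"{500 / prompts:.0f} ml of water per prompt at {prompts} prompts per 500 ml")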
Deskilling and reskilling: what are the effects on the world of work? And what about diversity and inclusion if AI is used in the pre-selection of candidates (will it favour the typical profile)?
And how should we think about human-technology-AI interfaces: AI-enabled medical devices and software, AI diagnosis? It might be quicker, but will it make all the same mistakes, miss all the same people?
Can we open the black box?