Neuralink’s brain-computer interfaces: medical innovations and ethical challenges

I have just read a short article called Neuralink’s brain-computer interfaces: medical innovations and ethical challenges, authored by Andrea Lavazza, Michela Balconi, Marcello Ienca, Francesca Minerva, Federico Gustavo Pizzetti, Massimo Reichlin, Francesco Samorè, Vittorio A. Sironi, Marta Sosa Navarro and Sarah Songhorian.

You can find it in the open-access journal Frontiers in Human Dynamics, where it is part of a 10-article collection titled Socio-Legal, Ethical, Technical and Medical Considerations on Neuroprivacy and Brain-Machine Interaction Technologies in the era of A.I.

The abstract summarizes the authors’ positioning:

Neuralink’s advancements in brain-computer interface (BCI) technology have positioned the company as a leader in this emerging field. The first human implant in 2024, followed by subsequent developments such as the Blindsight implant for vision restoration, marks a significant milestone in neurotechnology. Neuralink’s innovations, including miniaturized devices and robotic implantation techniques, promise transformative applications for individuals with neurological conditions. However, these advancements raise critical clinical, ethical, and regulatory questions. From a clinical perspective, BCIs show potential in addressing severe disabilities, but the long-term effects, safety, and usability of these devices remain uncertain. Ethical concerns focus on informed consent, patient autonomy, and the implications of integrating BCIs into human identity. The bidirectional nature of Neuralink’s devices introduces privacy risks, highlighting the need for stringent oversight to safeguard sensitive neural data. Furthermore, the company’s initial lack of transparency, such as delayed trial registration, has drawn criticism from the scientific community for deviating from established norms of research ethics. Regulatory challenges also emerge as BCIs intersect with frameworks governing data privacy, medical devices, and artificial intelligence. The lack of a cohesive legal framework for neurotechnology underscores the importance of developing comprehensive standards to balance innovation with the protection of fundamental rights. Finally, philosophical questions about human identity and agency arise as BCIs blur the boundaries between mind, body, and technology. As BCI technology advances, it is imperative for the scientific community, policymakers, and society to collaborate in addressing the opportunities and risks posed by this transformative innovation.

Beginning with a brief history, the article describes clinical, bioethical, neuroethical, legal, psychological, philosophical, and enhancement aspects of neurotechnological development, with questions of responsibility in innovation running throughout. The authors guide the reader through discussions of risk, cost and benefit, privacy, transparency, the protection and advancement of human rights, regulation, the influence of AI on neurotechnological development, the primacy of thought over action, posthumanism (the creation of human-machine hybrids), human enhancement, and universal access to such technologies.

This well-rounded and easy-to-read document is an ideal starting point for anyone interested in these fast-moving developments, and it is free to read and download here.

A few more thoughts about AI

If artificial intelligence “works” today, it is not only for technical reasons, but because there is a sort of general belief in its usefulness in different contexts. There is a future, and this future works better, is more efficient, progresses, thanks to AI. In my work this is known as a sociotechnical imaginary (a term developed by Harvard University’s Sheila Jasanoff).

Jasanoff raises questions about the relationship between society and technological development. It's a kind of co-production: society and technology co-produce the future, and the values reflected in society are reflected in the technology it co-produces. A vision of the future that includes AI will probably produce a future that includes AI. The 20th century offers an easy example. In the 1880s the four-stroke petrol engine took off, and by 1886 the first motor coach had been built: the start of the petrol-engine world that we know so well today.

Yet 50 years earlier, Jacobi had already built an electric vehicle. The vision of the 20th century, however, included petrol, and visions lead to practices: research and funding go toward the dominant vision, and it develops at the expense of other, less successful visions.

Today, some in governance are arguing about how much money to invest in AI, which type of AI they want to develop, and whether to take rule-based or no-rule approaches. Different visions, we might say, but they share something: the belief in the importance of AI.

This leads me to a few questions:

What is the role of societal values in AI?

Who holds the intellectual property rights to the process, the data used and the results?

Is AI becoming a game for the ‘big boys’?

Opting in, or opting out?

Do you have the right to have your data excluded from calculations?

Privacy (not only data privacy, but also AI used to identify individuals from the way they walk, and so on).

Do institutions of governance inform the population about their use of AI? Do they have the technicians (and ethicists) necessary to implement it correctly? How effectively does the City of Amsterdam Algorithm Register inform its residents? Or the Dutch national governmental register? Could I understand anything within their databases?

Can AI amplify existing criteria of prejudice?

Are we not entitled to an explanation of the decision-making process? The AI Act (a process that began in 2017) calls for a human-centred approach and explainable AI (XAI). Under the GDPR (the EU’s privacy regulation), residents have the right to an explanation of how a model made its decision, and the AI service provider has the obligation “to make the logic behind a recommendation transparent and humanly understandable” (not only for yes-or-no decisions but also, for example, for travel organization).
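What such an explanation could look like can be made concrete. Below is a minimal, purely illustrative sketch (not anything prescribed by the AI Act or the GDPR): a toy scikit-learn decision tree whose decision path is read back as plain-language rules. The features, data and thresholds are all invented, and real XAI tooling (SHAP, LIME, counterfactual explanations) goes much further.

```python
# A toy "right to explanation": trace a yes/no decision back to the
# rules that produced it. All features, data and thresholds are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical applicants: [income in k-euro, years at current address]
X = np.array([[20, 1], [35, 4], [50, 2], [60, 10], [25, 8], [80, 3]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = reject, 1 = approve

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

feature_names = ["income", "years_at_address"]
applicant = np.array([[40, 5]])
decision = model.predict(applicant)[0]
print("decision:", "approve" if decision == 1 else "reject")

# Walk the decision path and restate each split as a plain-language rule,
# so the logic behind the recommendation is humanly understandable.
tree = model.tree_
for node in model.decision_path(applicant).indices:
    if tree.children_left[node] == tree.children_right[node]:
        continue  # leaf node: no rule to report
    name = feature_names[tree.feature[node]]
    value = applicant[0, tree.feature[node]]
    threshold = tree.threshold[node]
    op = "<=" if value <= threshold else ">"
    print(f"because {name} = {value} {op} {threshold:.1f}")
```

For this invented applicant the output is a decision plus the rule(s) that drove it, which is roughly the level of transparency the quoted obligation seems to demand.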

How environmentally and economically sustainable is AI?

OpenAI is estimated to require ~3,617 HGX A100 servers (28,936 graphics processing units) to serve ChatGPT, putting the cost per query at about 0.36 cents. Running ChatGPT also carries an environmental cost: it uses some 500 ml of water for every 5 to 50 prompts it answers, water that cools the servers, which generate heat under computational load (taken from this blog).
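For anyone who wants to probe those figures, they reduce to simple arithmetic. In the sketch below the server and water numbers are the estimates quoted above, while the daily prompt volume is purely my assumption, included only to make the scale tangible.

```python
# Back-of-the-envelope sustainability figures for serving a large model.
# Server and water numbers are the blog's estimates; the prompt volume
# is an assumption made up for illustration.
servers = 3_617
gpus_per_server = 8                      # one HGX A100 chassis holds 8 GPUs
total_gpus = servers * gpus_per_server   # 28,936, matching the quoted figure

# 500 ml of cooling water per 5-50 prompts => 10-100 ml per prompt
ml_per_prompt_low, ml_per_prompt_high = 500 / 50, 500 / 5

assumed_prompts_per_day = 10_000_000     # hypothetical daily volume
litres_low = assumed_prompts_per_day * ml_per_prompt_low / 1_000
litres_high = assumed_prompts_per_day * ml_per_prompt_high / 1_000

print(f"{total_gpus:,} GPUs")
print(f"cooling water: {litres_low:,.0f} to {litres_high:,.0f} litres/day")
```

Even at the low end, the cooling water runs to six figures of litres per day under these assumptions, which is what makes the sustainability question worth asking.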

Deskilling and reskilling: what are the effects on the world of work? And what of diversity and inclusion if AI is used in the pre-selection of candidates (will it favour the ‘typical’ candidate)?

And how should we think about human-technology-AI interfaces: AI-enabled medical devices and software, AI diagnosis? It might be quicker, but will it make all the same mistakes, miss all the same people?

Can we open the black box?

Responsible Algorithm Use: The Dutch National and Amsterdam City Algorithm Registers

Artificial intelligence systems rely on algorithms that instruct them how to analyze data, perform tasks, predict patterns, evaluate trends, calculate accuracy, optimize processes and make decisions. The Dutch government wants its own departments to use algorithms responsibly: people must be able to trust that algorithms comply with society’s values and norms, and there must be an explanation of how algorithms work.

The government does this by checking algorithms before use, both for how they work and for possible discrimination and arbitrariness, in the belief that when it is open about algorithms and their application, citizens, organizations and the media can follow and check whether the algorithms (and their use) follow the law and the rules.

According to the government, the following processes, among others, contribute to responsible algorithm use:

  1. The Algorithm Register helps to make algorithms findable, to explain them better and to make their application and results understandable.
  2. The Algorithm Supervisor (the Dutch Data Protection Authority) coordinates the control of algorithms: do the government’s algorithms comply with all the rules that apply to them? Learn more about the regulator.
  3. The Ministry of the Interior and Kingdom Relations is working on the ‘Use of Algorithms’ Implementation Framework. This makes it clear to government bodies what requirements apply to algorithms and how they can ensure that their algorithms meet those requirements.
  4. Legislation: there will be a legal framework for the transparency of algorithms. This was announced in the letter to parliament dated December 2022.

Find out more at The Algorithm Register of the Dutch government.

The City of Amsterdam also has an AI Algorithm Register 

The Algorithm Register offers a window onto the artificial intelligence systems and algorithms used by the City of Amsterdam. Through the register, anyone can browse quick overviews of the city’s algorithmic systems or examine more detailed information according to their interests. Individuals can also give feedback and thus participate in building human-centred AI in Amsterdam. At the moment the register is still under development and does not yet contain all the algorithms that the City of Amsterdam uses.

Find out more at Algorithmic systems of Amsterdam.