Societal aspects of neurotechnology

When it comes to regulation of neurotechnology, especially in applications for human enhancement, there are still many blind spots. Recently, the Rathenau Institute advocated for two new human rights for the robot age in the report Human rights in the robot age: Challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality.

The question is whether citizens are sufficiently protected when it comes to engineering of the brain. We identified these regulatory gaps in 2012, and little has changed since then. In neuroscientific research, too, we believe that the safeguarding of human rights must remain central. Ienca and Andorno (2017), for example, argue for a right to cognitive liberty and a right to mental privacy.

In 2012 we produced the report Making Perfect Life for the European Parliament, on the ethical, legal and social aspects of bioengineering in the 21st century. We showed that these developments are characterized by two major trends: biology is becoming technology, and technology is becoming biology. On the one hand, living organisms, including humans, are increasingly seen as measurable, analysable and producible; germline technology, for example, fits squarely into this trend. On the other hand, we are developing more biology-inspired technology by building in typical qualities of living beings, such as self-repair, reproduction, cognition and the ability to learn. Examples of this second trend are artificial intelligence and social robots. We observed both trends in neurotechnological developments, where we can distinguish between neurostimulation, which involves intervening in and on the brain, and neurosimulation, in which the brain is digitally mimicked in order to better understand it.

Five years after the publication of this report, we are again looking specifically at developments in brain research. The brain and behaviour will remain in the limelight in 2018. The underlying scientific worldview, in which explanatory mechanisms are sought in the functioning of our brain, has permeated more and more practices, including those outside the healthcare sector. Still, the real neurorevolution has not yet happened; the big questions about consciousness and free will, for example, remain unanswered. There is no shortage of funding for neuroscience research, and neurostimulation and neurosimulation technologies have become more advanced. It is striking, however, that in these developments little attention is paid to the consequences of this type of research, and its applications, for citizens.

For example, we see the emergence of new technologies, such as tDCS (transcranial direct current stimulation), which can be used at home. Commercial parties such as Elon Musk's Neuralink are becoming increasingly interested in brain-computer interfaces. Adjusting a brain implant no longer requires human intervention: a computer can do it in real time, intervening in the brain on the basis of data read from the brain. This may sound like a small update, but from an ethical point of view it is a far-reaching development. Yet we still see a lack of regulation around applications of neurotechnology, especially in applications for human enhancement.

The largest scale of research funding can be seen in the field of neurosimulation, nowadays also known as 'big neuroscience'. An important promise is that large amounts of brain data will contribute to more personalized care. The large Human Brain Project, financed with European money, is an example of this. Neuroethicists associated with this project focus mainly on research ethics: is neuroscience sufficiently developed for such a modern, hypothesis-free approach? How can we conduct responsible research on such a large scale? Can we simulate morality? These questions are important, but not broad enough to safeguard public values around neurotechnological applications. Neuroscientific research has a direct impact on citizens, both through the scientific worldview it endorses and through its applications, especially while the brain remains largely a black box. Take personalized care: what are the consequences for the individual citizen when self-learning algorithms determine how someone is treated, and what happens if this technology is applied outside of healthcare?