Author : Siddharth Yadav

Expert Speak Digital Frontiers
Published on Sep 26, 2024

BCIs hold great potential for treating neuromuscular disorders but raise ethical and safety concerns, particularly as governments and companies explore commercial and military uses.

The emerging safety and ethical landscape of Brain-Computer Interfaces


Brain-Computer Interfaces (BCIs) are devices and systems that bypass the body's neuromuscular pathways and allow direct communication between the brain and external devices. Typically, a BCI acquires signals from the central nervous system, analyses them, and translates them into commands for an external device such as a computer. The earliest tests for BCI development were carried out on monkeys in 1969 and 1970, followed by tests on human subjects in the 1990s. In 1991, scientists demonstrated for the first time that a cursor on a computer screen could be controlled using brain waves. Early research focused on using BCIs to help patients with severe motor disabilities caused by conditions like neuromuscular disorders, brain stem strokes or spinal cord injuries. For example, one of the first clinical tests with an invasive BCI was conducted on a person with ALS. Over the decades, advancements in technologies like machine learning, signal processing and neuroimaging have enhanced the efficacy of BCIs.
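The acquire-analyse-translate loop described above can be illustrated with a toy sketch. Everything in it is an illustrative assumption rather than any real BCI system: the 8-12 Hz "alpha" band, the power threshold, and the command names are invented for demonstration.

```python
import math

def band_power(samples, fs, lo, hi):
    """Analyse: naive DFT power of the signal within a frequency band (Hz)."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def translate(samples, fs, threshold=1.0):
    """Translate: map one window of acquired samples to a device command."""
    return "MOVE_CURSOR" if band_power(samples, fs, 8, 12) > threshold else "IDLE"

fs = 128  # assumed sampling rate, Hz
alpha = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # strong 10 Hz rhythm
flat = [0.0] * fs                                                 # no activity

print(translate(alpha, fs))  # strong alpha-band power -> MOVE_CURSOR
print(translate(flat, fs))   # no band power -> IDLE
```

Real systems replace the naive DFT with optimised signal-processing libraries and trained classifiers, but the three-stage structure (acquire, analyse, translate) is the same.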


The commercial popularity of BCIs has increased due to innovations in non-invasive BCIs, which are more cost-effective and scalable than their invasive counterparts. Non-invasive BCIs employ electroencephalography (EEG) to monitor, acquire and translate brain activity. EEG-based BCIs have greater commercial viability because the devices are more portable and the process involves placing sensors on the scalp, a modality that can be implemented using headsets. Invasive BCIs, by contrast, are costlier and require surgical intervention, but they offer far more reliable data owing to the high fidelity of the acquired signals. The BCI company Neuralink has already used its invasive implantable BCI system to treat a quadriplegic patient. The company's success has begun to signal the arrival of a new age of human-machine interaction, with the marketing of BCIs that can not only treat sensory-motor disabilities and degenerative neuromuscular conditions but also facilitate direct brain-to-brain communication.

Government interest in BCIs

Since BCIs are categorised as dual-use platforms, their promise should be weighed against their risks. The risks associated with BCIs stem from their potential to facilitate cognitive enhancement in individuals and to enable novel surveillance capabilities. It should not come as a surprise that governments have already started funding research into military applications of neurotechnological platforms. Two notable examples are the neuroscience-focused government projects underway in China and the United States (US). The China Brain Science and Brain-Inspired Intelligence Project, also known simply as the China Brain Project (CBP), was launched in 2016 as part of a broader strategy to accelerate scientific and technological development in the country. It shares several similarities with the 2013 US Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, particularly the goal of deepening the understanding of neurological functions and mechanisms. Both initiatives focus on the diagnosis, treatment and prevention of neurological and psychiatric disorders. However, the CBP distinguishes itself from the BRAIN Initiative through its emphasis on developing brain-inspired artificial intelligence platforms like “cognitive robotics” and “brain-inspired chips.” To develop such platforms, the CBP seeks to leverage insights from fields such as neuroscience, Artificial Intelligence (AI) and advanced machine learning, marking a point of divergence from its US counterpart, which concentrates primarily on developing technologies to understand brain function and neurological diseases while relegating AI to a secondary consideration. Nevertheless, given that the BRAIN Initiative receives substantial financial support from US defence agencies like the Defense Advanced Research Projects Agency (DARPA), it is likely that the US is interested in military applications of emerging neurotechnological platforms.


Unsurprisingly, the CBP also receives substantial financial support from the Chinese government and facilitates collaboration among research institutions, universities, and the private sector in China. Although the project leadership ostensibly encourages international collaboration, it simultaneously emphasises its long-term goal of promoting domestic research and national self-reliance. In terms of modality, the CBP invests in extensive brain-mapping methods similar to the US BRAIN initiative. However, it integrates these efforts with AI research and the development of neuroinformatics tools to handle large datasets. The ethical and regulatory landscape in China will likely influence the implementation and direction of BCI and neurotechnology development differently than in other countries. Mu-ming Poo, the director of CBP, has stated that the environment of permissiveness in China will likely attract international scientists from other countries where research guidelines are more prohibitive.

The sudden rise of AI applications like generative AI in recent years has demonstrated to governments around the world the importance of investing in emerging technologies and has prompted them to expedite the introduction of new laws and regulations. Global competition involving neurotechnology applications will likely intensify in the future, given the geopolitical and economic importance of avoiding a “technology surprise” and being a “first mover” in innovation. Government investment in BCIs will also likely increase as their security-related and military potential becomes more apparent. NATO has already begun discussing the potential of BCIs to “improve decision-making, increase situational awareness, facilitate faster communication, and even provide control over unmanned systems,” and to enhance cognition at the population level. The medical, commercial and military potential of BCIs has led other countries and blocs, including Japan, South Korea, Australia and the European Union, to start investing in national and international neurotechnology projects.

Risks of commercialised BCIs

Beyond military applications, the commercialisation of BCIs presents a range of ethical and legal concerns that require careful consideration by policymakers, researchers and developers. Addressing risks associated with emerging BCI platforms, the Organisation for Economic Co-operation and Development (OECD) released its Recommendation on Responsible Innovation in Neurotechnology in 2019. A crucial concern highlighted in the report is the issue of privacy and data security: BCIs can collect and potentially transmit sensitive neural and behavioural data that could be vulnerable to unauthorised access if inadequately protected. The report also highlights novel risks such as the subversion of “cognitive liberty” and the right to mental self-determination of individuals. While such scenarios are usually situated in science fiction stories, the possibility of neural and behavioural data being harvested by BCI developers raises profound questions about the ownership, legal status and confidentiality of neural data and how it should be protected. In 2021, the UNESCO International Bioethics Committee (IBC) also released a report on the ethical implications of neurotechnology platforms. The report raised the question of whether existing human rights frameworks are sufficient to address the ethical and legal concerns latent in BCI platforms, or whether there is a need for a new framework of “neurorights.” In terms of medical risks, it should be noted that the long-term impact of invasive BCIs on cognitive function, behaviour and mental health is not adequately understood. These gaps in knowledge require rigorous long-term studies and monitoring to understand the full spectrum of the possible impact of BCIs.


Going forward

Tailored data privacy regulations: Taking the OECD Recommendation on Responsible Innovation in Neurotechnology as a starting point, governments should establish data privacy and security regulations specifically tailored to BCIs. These regulations should mandate robust encryption standards for behavioural and neural data, define clear ownership rights over such data, and require informed consent for data collection and use. Additionally, policies should enforce regular audits and assessments of BCI systems to ensure compliance with privacy standards, thereby protecting individuals from unauthorised access and potential misuse of their neural information.


Promote long-term health and safety research: As the 2021 UNESCO IBC report has highlighted, it is crucial to fund long-term research to understand the physical and mental health consequences of continuous BCI use. Testing and regulatory standards need to be stricter for invasive BCI platforms and applications. Strict compliance standards need to be published for clinics and practitioners offering BCIs requiring surgical intervention. Governments should also establish a regulatory framework that requires pre-release safety testing and post-release evaluation and feedback for neurotechnology applications, particularly invasive BCIs, which will help mitigate potential risks.

Establish ethical standards and neurorights protections: To mitigate the potential risks of BCIs, governments should work with international bodies such as UNESCO and non-profits such as the Switzerland-based Neurorights Foundation to develop and adopt universal ethical standards and legal protections for individuals’ neural data. These standards should include the rights to mental privacy, cognitive liberty and mental integrity, ensuring that individuals have control over their mental states and freedom from unauthorised manipulation. Additionally, establishing oversight bodies to monitor the ethical deployment of BCIs in various sectors, including criminal justice and healthcare, will help uphold these rights and prevent negligence and misuse.


Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies.

The views expressed above belong to the author(s).
