Neuroscientist Rafael Yuste started Columbia University’s Neuro-Rights Initiative to promote an ethical framework that preserves a set of human rights within neuro-technologies. He co-authored a 2017 Nature paper titled “Four ethical priorities for neurotechnologies and AI” after creating the “Morningside Group” of over 20 neuroscientists who were also concerned about the potential ethical harms of neuro-technologies.
Another neuro-right was added in the latest neuro-rights paper, titled “It’s Time for Neuro-Rights.” This brings the list up to five: the right to identity, the right to agency, the right to mental privacy, the right to equitable access to neural augmentation, and the right to be free from algorithmic bias. Ultimately, Yuste hopes to gain momentum within the United Nations to add these fundamental neuro-rights to the Universal Declaration of Human Rights, which could then put pressure on regional legislators to change their laws to stay in compliance with these neural rights.
On May 26th, there was a day-long symposium called Non-Invasive Neural Interfaces: Ethical Considerations, featuring cutting-edge neuroscientists working to decode the brain, EMG specialists, and other companies working on commercial-grade neuro-technologies. The gathering was co-sponsored by the Columbia Neuro-Rights Initiative and Facebook Reality Labs, as both sponsors wanted to bring scientists and ethicists together to debate the ethical and privacy implications of these neuro-technologies.
I covered the Non-Invasive Neural Interfaces: Ethical Considerations event extensively in this Twitter thread:
1/ THREAD covering a #NeuroRights gathering co-sponsored by @FBRealityLabs & @Columbia_NTC's @neuro_rights called "Noninvasive Neural Interfaces: Ethical Considerations Conference."
They'll be Looking at the ethical implications of XR technologies from neuroscience POV. pic.twitter.com/gXzvl3h1UC
— Kent Bye VoicesOfVR (@kentbye) May 26, 2021
Part of the concern about these neuro-technologies is that there is already a large amount of data from the brain that can be decoded, and this is only going to increase over time. Yuste also brought up that there are existing methods to stimulate the brain in a way that could violate our right to agency. Whether it’s reading from or writing to our brains, Yuste says that we can’t be walking around with the metaphoric hoods of our brains opened up for any outside actor to measure or stimulate.
In the end, there was a lot more science shared at the Non-Invasive Neural Interfaces gathering than meaty ethical debates. There was not enough diversity of speaker backgrounds to hold a true multi-stakeholder gathering that included perspectives from privacy advocates, philosophers, or privacy lawyers. Part of what makes the question of how to preserve mental privacy so challenging is that it requires a multi-disciplinary approach representing a critical mass of stakeholders with differing and competing interests in order to have robust debates on all of the risks and benefits across different contexts. Dealing with the complexity of these emerging technologies may also require new conceptual frameworks around the philosophy of privacy, such as Dr. Helen Nissenbaum’s theory of Contextual Integrity or Dr. Anita Allen’s approach of treating privacy as a human right (see my talk for more context on this).
There was some interesting resistance to one of Yuste’s proposed strategies for preserving our right to mental privacy: treating non-invasive neural interfaces as medical devices and their data as medical data. This would regulate data that could be used to decode what’s happening within the body, but it would also limit how the variety of different brain stimulation devices could be used.
Neuro-tech start-ups like OpenBCI and Kernel resisted this suggested classification and regulation of neuro-tech as medical devices, since their companies probably wouldn’t exist in their current form had there been additional medical regulations that they’d have to follow. But Yuste argues that the use of neural data could have profound impacts on the integrity of our body, and so there is a compelling argument that it’s a type of sensitive data most analogous to medical data.
After listening to Yuste at the “Non-Invasive Neural Interfaces: Ethical Considerations” conference, I reached out to invite him onto the Voices of VR podcast so that he could elaborate on the state-of-the-art neuroscience of neuro-tech, what he sees as the most viable strategy for protecting our right to mental privacy, why looking at these issues through the lens of human rights is so compelling, where the future of neuro-rights is headed, and why he’s so excited about the revolutionary and humanistic potential of neuro-technologies to help us understand our brains and ourselves better.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s my 22-minute talk on “State of Privacy in XR & Neuro-Tech: Conceptual Frames” presented at the VR/AR Global Summit on June 2, 2021
This is a listener-supported podcast through the Voices of VR Patreon.