#1184: “Battle for Your Brain” Author Nita Farahany on Establishing Cognitive Liberty as a Human Right for Limits on Neurotechnologies & XR

Nita Farahany argues that we need to establish a new human right of Cognitive Liberty in order to address the threats of neurotech like VR, AR, BCIs, and non-invasive neural interfaces in her new book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (releasing on March 14th). Cognitive Liberty is an umbrella term that encompasses a complex of other human rights, including mental privacy, freedom of thought, and self-determination. Clearly defining this novel concept will hopefully provide a pathway for creating the philosophical, legal, and ethical foundations for viable regulations around these new forms of sensitive brain data, physiological data, and biometric data, as well as the inferences that can be made from them.

Farahany’s book is a tour de force of scholarship that catches everyone up on the last decade’s worth of neurotechnology research and enterprise industry developments. It was actually Meta’s acquisition of CTRL-Labs and their wrist-based EMG interface for VR and AR (see my previous coverage in episodes #814 & #987) that was a catalyst for her to write this book, as we’re on the threshold of these neurotechnologies starting to be deployed at a much broader consumer scale. She does a comprehensive audit of both the promises and perils of these neurotech applications across multiple contexts in order to develop the novel concept of a new human right of Cognitive Liberty and its associated rights to address the threats from these technologies.

I started this series on XR privacy after attending the Stanford Cyber Policy Center’s gathering on Existing Law and Extended Reality organized by Brittan Heller (who also coined the term “biometric psychography”). The talk that I gave there focused on XR privacy because, aside from the NeuroRights approach, there haven’t been many folks who have comprehensively combined all of the philosophical, legal, and technological components, up until Farahany’s book The Battle for Your Brain, which fills a much-needed gap at the right moment. It tries to address some of the issues that I’ve been covering for many years while arguing that new regulation is needed to fill the gaps in covering the privacy threats from immersive XR and neurotechnologies.

I was able to sit down with Farahany and explore the full spectrum of her human rights approach of Cognitive Liberty, with mental privacy, freedom of thought, and self-determination; the next steps in getting this adopted and diffused throughout different human rights organizations; joint biometrics scoping efforts by the American Law Institute and European Law Institute; and the various regulatory bodies around the world. We also talk about her process of researching the emerging technologies of neurotech with such rigor and depth, keeping up to speed on the wide range of developments and potential moral dilemmas while simultaneously pushing the edge of philosophical and legal understanding on the topic. In this series on XR privacy, I’ve covered how there still remain many gaps in the GDPR, the AI Act, and other EU regulations when it comes to dealing with XR data and the types of biometric inferences that can be made, and I feel like Farahany’s human rights approach, with Cognitive Liberty defined as she has, may be one of the most effective tools for closing some of these gaps so that we don’t sleepwalk into a dystopia of Surveillance Capitalism 2.0: Brain Data Edition.

If you’ll be at SXSW, then come check out my talk on the Ultimate Potential of VR: Promises and Perils on Sunday, March 12 at 4 PM, where I’ll no doubt be giving Farahany’s concept of Cognitive Liberty a shout-out.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that's looking at the future of spatial computing and the ethics of extended reality. You can support the podcast at patreon.com slash voices of VR. So I started this podcast series after going to the Existing Law and Extended Reality symposium that was organized by Brittan Heller. And I was really trying to get a sense of, OK, where is privacy at when it comes to the future of spatial computing? And one of the biggest things that I was screaming from the mountaintops is that we need to have a human rights approach to some of the unique types of data that are coming from XR. I was really focused on neuro rights. But at the same time, how do you define and regulate philosophical concepts like cognitive liberty, agency and free will, identity and mental privacy? And I found in this conversation with Nita Farahany that she's actually, like, literally answering all of these biggest questions. She's written a tour de force book that's called The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, which is actually getting released March 14th, 2023. I had a chance to take an early look at this book and to do an interview with Nita, and boy, she is at this crucial intersection of law, philosophy, emerging technology, and neuroscience, tracking the issues and the evolution of what's happening with neurotechnology for the last decade. She's really on the bleeding edge of tracking what's happening in the academic literature, reaching out to different researchers, following what the companies are doing and keeping close tabs, and she found that she needed to define a completely new concept of what she calls cognitive liberty, which includes aspects of mental privacy, freedom of thought, and self-determination.
It was actually CTRL-Labs and their non-invasive neural interface, an electromyography wristband for controlling the future of spatial computing, that she saw in a presentation by Thomas Reardon. It basically lit a fire underneath her to finish this book, because she felt like, from all these things that she was seeing over the last decade, we're going to start to see some of this technology go into the mainstream, and we're woefully unprepared for trying to deal with the implications of that. We need all sorts of new regulatory approaches to even conceive of what this data means and the sensitivity of it. So that's what she's trying to do in this book: to both identify these concepts, but also be working at the forefront of trying to come up with some of these different solutions, blending together her backgrounds in philosophy and law and neuroscience and emerging technologies to kind of figure out how to put some guardrails on some of this stuff before we kind of sleepwalk into Surveillance Capitalism 2.0: Brain Edition. So that's what we're covering on today's episode of the Voices of VR Podcast. This interview with Nita happened on Wednesday, March 8th, 2023. So with that, let's go ahead and dive right in.

[00:03:04.525] Nita Farahany: So hi, I'm Nita Farahany. I am a professor of law and philosophy at Duke University and the author of the book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. And I study the ethics and legal implications of emerging technologies. In particular, I focus on neurotechnology and artificial intelligence, and I'm increasingly interested in the kinds of ways in which the brain will interact with XR, VR, AR, and the kind of coming digital changes that are coming through generative AI.

[00:03:39.889] Kent Bye: Awesome. Yeah, maybe you could give a bit more context as to your background and your journey into this intersection of both law, philosophy and emerging technology.

[00:03:48.541] Nita Farahany: It's funny because students often will be like, wow, how did you know to combine all of those things to end up in this area and this degree? And I wish it was that intentional. But what I can say is that my background is I started as a pre-med major back in the day in college. And I did an undergraduate degree in genetics, but I found that while I was doing that and while I was pre-med, I would seek out every possible policy internship that I could do and ended up taking a lot of classes in government and international relations because I was so interested in the broader implications. At the same time, I was a policy debater in college and was really interested in the bigger questions that the science and the technology were raising. When it came time to apply for medical school, I was like, maybe I'll just take a little bit of time and see what else is out there. I did strategy consulting for a few years, and that's really what turned me on to futurism, because I did a lot of forecasting of the biotech industry at the time and where things were going, and learned a lot of the skills of modeling and future forecasting, basing it on data, looking at data and data trends, and how to analyze those trends. In that process, while I found that to be an incredible experience, it wasn't quite the right fit. I didn't think that business was going to be the place in which I was going to end up doing the things that I wanted to do. So I applied to grad school. While I was at it and was living in Cambridge, Massachusetts, I did a master's degree in biology focusing on neuroscience and behavioral genetics, and found there the intersection of those things with law and philosophy. There was this great chapter in a book about the role of behavioral genetics in criminal cases and how it was being used to try to genetically identify linkages to things like criminal behavior.
So I decided to go to law school because the questions I was asking weren't going to be just answered by law. I also applied for and was accepted into a PhD program. And so I did a JD and a PhD in philosophy and in particular philosophy of science. And my dissertation focused on this intersection of behavioral genetics and neuroscience and the legal system, kind of laying the foundation for what would ultimately become a career that has focused on the intersection of the philosophical and the legal issues that arise from emerging technologies and biosciences. And so that's what brought me here. It is kind of like the long path to here, I guess.

[00:06:21.757] Kent Bye: Yeah, well, I guess it's really the perfect combination for this moment in time, where we have all these emerging technologies with virtual and augmented reality. And I've been talking about the threats of biometric data since I went to the Neurotech conference in like 2017 and did an interview with a behavioral neuroscientist who was talking about the ambiguous threshold between persuasion and manipulation when it came to things like this biometric data that was going to be available, moving what was normally in a medical context into the consumer context. And as I look through your book and all the research that you're drawing forth, it's a tour de force, comprehensive overview of not only what's happening in the realms of neuroscience, but neuromarketing and this whole neurotechnology field, and all the different ways that you can see what's theoretically possible from what the academics are doing. And so I'm really excited. I'd love to hear a little bit more about how you first came across this as an issue, because you said you've been working on this book for about 10 years, back to like 2012, 2013 or so. And so what was happening back then that really caught your attention to focus in on this future of neurotechnology and the ethical and moral implications of that?

[00:07:31.693] Nita Farahany: It wasn't consumer neurotech at that point. At that point, the earliest headsets were coming out that were noisy and let people do some really simple applications. But at that point, I was still building out this focus on the ways in which neuroscience was being used in criminal law. One of the fascinating applications of it was, there started to be these commercial companies that were touting the use of neurotechnology as lie detection. There were No Lie MRI and Cephos, which were using functional magnetic resonance imaging studies. These are much more cumbersome studies of the brain, but very high resolution. You can peer deeply into the brain. They were taking the early findings from neuroscience that were in the realm of fMRI and thinking like, oh, there's got to be commercial applications of this and it's got to be relevant for the criminal justice system. And so they were offering their services to have criminal defendants put into an fMRI machine and then to ask them a series of yes, no questions and saying like, and we can tell with, you know, 99% accuracy whether they're telling the truth or lying. And that is what really sent me down the rabbit hole of neurotechnology, because I thought, wow, that's fascinating. And those early cases made me start to ask questions about the criminal justice system, like, do we have in the US a right against self-incrimination that protects against having our brains interrogated? Do we have a right under the Fourth Amendment, the right against unreasonable searches and seizures, to protect against, you know, searching our brains? And at that point, it was more like a thought experiment. So I was laying down the foundation of what would ultimately become my conception of what cognitive liberty is, the right against these things. And I was finding that there were gaps in the law, that the Fifth Amendment was unlikely to protect against those applications.
The Fourth Amendment was unlikely to do so, and the First Amendment, which protects freedom of speech, didn't protect freedom of thought, so it was unlikely to do so either. But then I was looking at all the kind of futuristic applications that could come, and as I was seeing these legal gaps, I started to really try to flesh out what a set of rights around cognitive liberty would look like. What's missing and what do we need? And at the same time, at that point, I was looking at, well, fMRI is probably never really going to go mainstream because it's so difficult to run the studies and it's so expensive. And so I had my eye on these consumer neurotech devices that were starting to be developed. And since I was already asking those big questions, I started to ask those same questions of the consumer devices and really watch and track them over time. And I played it out in a lot of different settings within the United States as I was building toward this idea of cognitive liberty. And then even in 2012, I actually wrote my first proposal for this book around the concepts that I was seeing. But then I sat on it for a while. I sat on it for a long while because, you know, it still seemed like a future threat to me. It wasn't like an immediate, urgent thing that I needed to turn my attention to. I gave a TED talk about it in 2018, and then my literary agent called me up and he was like, hey, remember that proposal? What happened? Are you going to write the book? Still, that didn't light a fire under me. It was seeing a presentation by CTRL-Labs and being like, that's it. It's arrived. The moment has arrived. Brain sensors are going to be embedded into everyday devices. They're going to be the controller for VR and AR and for our interaction with the rest of our technology. They're not going to be niche products anymore. They're not going to be the products that people use just to meditate anymore.
It's going to be like the sensors that are in your watch for heart rate, the rings that you're wearing: people are going to have brain sensors. Of course, like, aha. And when I got that, when the light bulb went off, then I was like, yeah, now is the time. And I really, like, if I had been a betting person, I would have put a lot of money down on Apple acquiring CTRL-Labs. And so I was just startled when Meta bought them a year later. And that really motivated me to get writing and get this book done.

[00:11:40.232] Kent Bye: Yeah. And I first came across, well, there's the Myo armband that was kind of the precursor to CTRL-Labs that I'd seen demos of at GDC a number of years ago. And then I was at the Canadian Institute for Advanced Research in 2019, where there was someone from CTRL-Labs that I did an interview with. And then after that, you know, I had a chance to actually do an interview with Thomas Reardon about the implications of what it means for an EMG device to be able to locate the firing of an individual motor neuron. So you basically have the intention to move being able to be tracked and monitored by these different companies. And so during the pandemic, there was a workshop on the ethical considerations of non-invasive neural technologies that was put on by Rafael Yuste and Columbia's NeuroRights Initiative in partnership with what was at the time Facebook Reality Labs, but is now Meta Reality Labs. And, you know, Rafael was putting forth these neural rights of the right to mental privacy, the right to identity, and the right to agency, with some other aspects of both the right to be free from algorithmic bias as well as the right to equitable access. But I feel like those first three are kind of the human rights approach that he was trying to take. And I noticed that in your approach, you had a little bit of a different definition of mental privacy: whereas maybe Rafael Yuste sees it as more of an absolute right, you see it as more of a fluid and contextual right, and then maybe an umbrella right of cognitive liberty with other aspects, say, the freedom of thought, which is inalienable, like you can't sort of encroach on these. And so I'd love to hear, as you tried to come up with the landscape of your human rights approach, how you start to create this cartography of cognitive liberty, which has all these other sub-aspects to it.

[00:13:18.293] Nita Farahany: Yeah. So, I mean, I'd say a lot of it starts again from that foundational work that I started writing about in 2011 and 2012. So, you know, I kind of laid down these ideas of mental privacy and freedom of thought and self-determination being core aspects of cognitive liberty. And I should say, I think of cognitive liberty as something that applies to much more than just neurotechnology, right? I don't really frame it as being about neurotech, even though neurotech helps us to best understand it. I think cognitive liberty, which is the right to self-determination of our brains and mental experiences, is something fundamental across so many different domains. Whether or not you're using neurotech devices, as we start to navigate a world of VR, whether we're using neural interfaces or not, our brains and mental experiences are fundamentally being shaped and reshaped by the technologies that we interact with. And so cognitive liberty to me is really an update to the concept of liberty itself in the digital age, rather than liberty before we had the technologies that surround us. For the component parts, privacy is not an absolute right. It never has been recognized as an absolute right. And so the idea that mental privacy would be an absolute right would be quite a departure from how we understand privacy rights to begin with. So I should say, like, I'm a lawyer first and a law professor. And so I'm probably going to take a much more legalistic and legally nuanced position that maybe ties more to doctrine than just concepts. But I'm also a philosopher. So I'm trying to provide the normative foundation to understand those things. And so I see these three rights, which is freedom of thought, mental privacy, and self-determination, overlapping and helping to protect the domain of mental experiences. Mental privacy, I think, covers a lot more than freedom of thought does.
Mental privacy really pertains to the broad set of experiences that we have from our brain: our emotions, our automatic reactions in our brain, you know, things that are subconscious that we have no awareness of, to the things that are very conscious. Freedom of thought, as I understand it, is a much narrower concept, and it is an absolute protection, right? So, as an absolute protection, I think we need to be careful about what it covers. And in that way, there are all sorts of ways that we try to read each other's minds and change each other's minds every day. And if we were to say any encroachment upon another person's mind, reading their mind or changing their mind, violates your absolute right to freedom of thought, we'd be in a lot of trouble, right? We would all be violating each other's freedom of thought all day long, every day. And so I believe that we should conceive of freedom of thought as being about the complex inner monologue, the complex thoughts that you have, thought in a robust sense, the visual images and the thoughts that you have in your mind, whereas mental privacy covers everything. And because mental privacy covers a bigger swath of things, some of which are going to be far less fundamental to our sense of self and identity and far less sensitive, there will be times when I think that society will have a right to encroach upon our mental privacy. And an example that I give of that in my book, The Battle for Your Brain, is a commercial driver who is having their brain activity monitored for fatigue levels. That violates their mental privacy in the sense that it is actually intruding into that mental space. But I don't think their interest in whether or not they're sleepy or awake outweighs the very real interest in society of knowing whether or not they're sleepy or awake while they're driving a commercial vehicle that poses a grave threat of injury or danger to other people.
But I don't think we should be mining their brain for a whole bunch of other information that's irrelevant to the societal interest. And so I think, if you understand the existing backdrop of law, which is that privacy, from which mental privacy would be derived and updated, is an international human right, we can interpret it accordingly. Because international human rights law is meant to evolve and the interpretation of it changes over time, we can update it to make explicit that mental privacy is included within it. But if we were suddenly to say that mental privacy is an absolute right, it'd be inconsistent with the rest of the way in which we understand privacy law to begin with and the way that it's interpreted at the international human rights law level. I think when the neuro rights groups say that it's absolute, what they probably are doing is conceptually mixing up the part of freedom of thought that people most care about, that, like, complex thought piece, which belongs in freedom of thought, as opposed to, like, the real legal history of what privacy law is. And, you know, I offer that in the book. What I'm trying to do in the book is not just, like, throw out a set of hero guiding principles, but to develop the normative foundation that we need to build these rights and to build the scaffolding and hopefully adopt them. I do, however, propose a new right. And that new right is the right to cognitive liberty. And I really went back and forth as to whether or not we should just update those three rights and be done with it, or recognize a new right. And I consulted with a lot of my human rights colleagues who've thought a lot about this question of rights expansion and problems like that. And I came down, you know, with their support, on the side of the idea that there is significant power, expressive as well as legal power, in codifying it as a new right.
The mechanism isn't like adding to the UN Declaration of Human Rights, the Human Rights Committee who oversees the treaty that implements the UN Declaration of Human Rights, the International Convention on Civil and Political Rights. they could recognize it in a comment and an opinion and say there is a right to cognitive liberty. And that directs the updating of these three rights. But the significant power that would have, I think, in the digital age to recognize there is a set of rights that are included within this concept of cognitive liberty and that liberty, which is the foundation and the fundamental aspect of what we think of as democratic ideals, requires updating. And it requires updating by recognizing liberty includes cognitive liberty.

[00:19:22.607] Kent Bye: I've been covering virtual and augmented reality for long enough to know that for every ultimate potential of these technologies, there are some dystopic aspects as well. And I'd love to have you maybe explore some of the more dystopic potentials that people will read through in this book, you know, both from what the Chinese government is doing, or I guess through the wider principles that you have of cognitive liberty, self-determination, mental privacy, and also the freedom of thought. Like, what does it look like when each of these different concepts is transgressed on those boundaries? And what are some examples you can provide of deployments of some of these technologies that may not be taking into consideration the umbrella concept of cognitive liberty?

[00:20:03.509] Nita Farahany: Yeah. So, you know, I go through the use of it at work, which is, you know, kind of your brain at work: both the example we've already talked about, which is the use of it to do things like fatigue management, but also the growing use of it for attention monitoring, its use in wellness programs, and some of the upside, but also insidious misuse of data out of wellness programs, given that they don't have the same kind of protections that other programs have. I look at the government use of it: everything from functional biometrics from the brain to brain interrogation techniques using things like P300 or N400 and the actual applications of that in a lot of settings that would probably surprise people. I know some of it surprised me, even as I would look for examples and find ever new ones that would emerge. There's the misuse of it, I think, in authoritarian regimes like China, where, you know, if there is a possibility of doing things like testing for political ideology, it seems like they're on board and willing to try it out on, you know, people who are mandated and required to wear sensors, whether in an educational setting or in a workplace or military setting. I look at the upside for individuals, right? So a lot of individual use cases. You know, I think a lot of people focus rights protections on protections from interference, but I think cognitive liberty is a right both from interference, but also to use technologies and to access our own brains and to enhance and change our own mental experiences. And so I go through a lot of that in the kind of middle section of the book, and then I turn to the concept of mental manipulation and look at corporate mental manipulation, good and bad, right? I mean, like, everything from neuromarketing techniques to try to understand how people behave, to dream incubation, which I think is pretty creepy.
And then, you know, the really scary dystopian weaponization of the brain. So, you know, China's cognitive warfare program, where they're really, apparently, purportedly making weapons that are targeted to the brain, and the U.S. is anxious enough about it that we've sanctioned them and are having active conversations about export controls on whether or not we should have any of our technology related to brain-computer interfaces go to them. To the, you know, really icky history in the United States of MKUltra and other attempts at trying to engage in mind control of individuals. And these concepts I lay out both for people to think about in a context-specific way, but also to build out the normative foundation of mental privacy and freedom of thought, and to show, like, how do these things fit in each of these examples that I've given: how the right to mental privacy would operate in that particular context and what the normative foundation for it would be. Where does self-determination come from, how does it come into play, and what are the limits of it? So when I go into the kind of transhumanist potential applications of this technology, does self-determination give you the right to change everything about yourself, even if it had implications for the future of humanity? And then freedom of thought, where, you know, there are harder questions, I think, in the category of mental manipulation, and easy questions when it comes to weapons that are targeted to the brain that clearly go over the line. And so, you know, I really try to lay all of that out to help people think about where they come out on those rights, but also, you know, to make very clear and to show my cards on where I come out on how we ought to update those rights to apply to those contexts.

[00:23:45.230] Kent Bye: Yeah, like I said, it's a really comprehensive overview of the whole industry. And I'd love to have you explain a little bit of your process of keeping track of all these things. One of the things that I use for making sense of these landscapes is how Simon Wardley talks about there being these evolutionary phases: there are academic publications that are testing out what's possible, then you have, like, custom bespoke enterprise applications that are deployed, then you have consumer products, and then eventually you potentially go into this final phase of mass ubiquity. So as you're looking at the evolution through these different phases and the diffusion of the technology, I see that you're doing a really comprehensive look at all the different academic studies that have happened. You're tracking a lot of what's happening in maybe these corners of enterprise applications that we may have not heard of because they're not necessarily, like, on everybody's radar, but through this lens of neurotech, you're able to really trace the lineage of each of these different phases. I think we're coming to the brink of the consumer phase of neurotech, which has been out for a while, but without very many sensors and without very high resolution of what we're getting from our brain. But as time goes on, like with the EMG sensors that are able to detect down to the firing of individual motor neurons, the tech is going to continue to evolve and grow, and with that, we'll see more consumer applications. But how have you kept tabs on all of this? What's your process of mapping out this entire landscape as comprehensively as you have?

[00:25:08.440] Nita Farahany: I wish I was more methodical. It sounds like a good methodology, but I read, read, read and interview, interview, interview. Like, I try to keep up with the academic literature by just regularly reading everything I can get my hands on that comes out on it. And then I oftentimes follow up directly with the study authors and ask, and many of them have been incredibly generous to talk with me about it. I talk with a lot of the neurotech companies. I follow the press releases. Every time there is some interesting, tantalizing piece of news or information, I reach out to the people themselves and ask them if they're willing to talk with me about it. There are a lot of original interviews in the book as well, of technology that doesn't have much yet written about it, because I'm reaching out to people. Happily, at this point, because word of the book and what I've been working on has been out there broadly enough, I now have just a huge number of people reaching out to me to say, like, hey, can I tell you about my technology? Can I give you a show and tell? Can I demonstrate it to you? And so, with that, I've just started a newsletter to try to share the deluge of information that I'm getting and do it in a way that keeps people updated more regularly, which people can sign up for on my website at nitafarahany.com, so that there's a way to have a systematic record of everything that's happening in this space. It's helpful to understand that a lot of times the academic literature is ahead of where the commercial applications are, but you can connect them up. If you're thinking about it through the lens that I'm thinking about it, which is how this is going to go mainstream, and you're looking for the signals for that, you're looking at the breakthroughs.
There was a paper that just came out, I think this past week, that was talking about better control of devices and technologies from an academic perspective and what some of the breakthroughs were that they'd overcome and some of the limitations and hurdles. You can quickly connect that up to where the gaps are in some of the existing tech companies and reach out to them and say, have you talked to X, Y, and Z? You can see where it's going and what they're researching and how the other side, the industry, is pushing to try to have exactly that technology. I think a lot of people oftentimes think of academia as being slower and behind the curve of where industry is, but a lot of the innovations and the breakthroughs that become foundational and fundamental for commercial application are happening in those early basic science research papers. I'm reading both ends of it to try to keep up.

[00:27:39.860] Kent Bye: Yeah, I think that's clear as I was reading through all the citations. The book, as you read it, has a very lucid narrative that you're telling, but in the back end, with all the citations, you can tell that it's just an incredible amount of scholarship and surveying and keeping track of all these things. I've been focusing on the XR pocket, but there's so much more happening in this broader space, so it was really informative for me to get a broader context for where this technology is going. Especially things like tracking brainwaves with AirPods, and just to see the breakthroughs. I've been covering this issue since 2016, and I've done like 60 to 70 different interviews with different people. And, you know, Brittan Heller is an international human rights lawyer who has coined this term of biometric psychography, which is trying to identify the fact that most of biometrics law is connected to identity. But she's saying that there's some of this data that is able to make these biometric inferences indicating our likes, our dislikes, our preferences, essentially, from this neurorights perspective, mapping and modeling our identity. And then if you have a sophisticated enough model of someone's identity, then you can potentially nudge their behavior and kind of undermine their right to agency. And so from your perspective, though, I guess the thing that I've been struggling with is where you actually start in order to have the most highly leveraged position, and what the flow is between them. Is it the UN Declaration of Human Rights that gets updated? And then does that flow to the EU Charter of Human Rights, which then flows into a GDPR update for how they're defining things like biometric privacy? And then eventually maybe that gets adopted by California.
And then maybe you have a federal comprehensive privacy law within the United States. But how do you see that flowing forth? Like, okay, great, we have the concept of cognitive liberty. Now where do you go from here to start to have that percolate into all of these different regulatory bodies?

[00:29:26.398] Nita Farahany: Yeah, that's a great question. Because as you and I both know, it's hardly enough, right? I mean, you get a principle, but if it doesn't actually get implemented, it doesn't matter. So first, the reason that, for me, it matters that you start there is because, to me, all of the mechanisms at each of the different levels where they need to be updated actually do flow from there. Here's a simple answer. You just mentioned biometrics. The American Law Institute and the European Law Institute had their first joint project, which concluded recently, around the principles of the data economy, which outlined the principles that, jointly across the U.S. and the European Union, could direct all of the different entities on how to think about and integrate into laws the principles of the data economy. Just recently, a colleague from the ELI and I from the ALI put forth a scoping project to the joint ALI-ELI council on principles of biometrics, outlining this concept as well, the right to cognitive liberty, as a joint ALI and ELI project. And we're moving forward on a joint ALI and ELI project on principles of biometrics, which will take a while, but it sorts out the specifics and the mechanisms: both where brain data, the inferences that could be made from brain data, and cognitive liberty live within the broader umbrella of biometrics, and how it fits within the European approach as well as the American approach to thinking about these issues and gets implemented. So I think that's a helpful next step, trying to come up with a joint agreement across ALI and ELI. And of course, as you know, there's a ton of efforts happening at the UN, at UNESCO, at the OECD on implementation of rights and regulations around neurotechnologies. I think that's great and really valuable.
I think they need to be better aligned with the bigger set of digital rights, the bigger set of digital and AI conversations that are being had, which is why, again, I approach it as a cognitive liberty concept rather than a neuro-only concept, because I think it actually is an umbrella term that works across and within each of these. Then where do you go from there? It requires updating and alignment not just across European and American approaches, but across the world. And that's a longer and more involved process with a lot of people at the table, but there are already a lot of people at the table. And I think having this symbolic and legal effect to begin with, as an umbrella driving force, helps us in those next steps and in those next projects.

[00:32:04.966] Kent Bye: That's good to hear that there's a plan and that it's being executed, because as I've been covering this, it's been frustrating to not know where to actually address some of these issues. So at least as this book is coming out, it's bringing broader awareness to this as an issue. As I was reading through the book, though, one of the things that I was concerned about, coming from the perspective of XR, virtual and augmented reality: I'll give you an example. I had a chance to go to a Meta press conference where they were talking about some of their latest technologies with the CTRL-Labs EMG watch. This was last year, and I've since done an interview with Thomas Reardon, but before that it was basically a chance to get a preview of where things are going. They're really seeing these non-invasive neural interfaces, this CTRL-Labs EMG watch, as the future of human-computer interaction. So it's essentially your motor cortex, it's your movement, it's your body movement, and it's not necessarily your quote-unquote thought. And so when I say, hey, what about neurorights, how do you protect this data, they're like, well, it's just movement data, right? So does that include embodied movements, emotion? What does cognitive liberty all encompass?

[00:33:20.032] Nita Farahany: Yeah, I mean, I think it's unhelpful to frame it around neurorights, because then you have this question of, well, what about EMG? And what about the social contagion experiments on Facebook? And what about Cambridge Analytica? And what about manipulative generative AI practices? Or what about changing the environment in VR in ways that are iterative in response to how a person responds, in order to manipulate and change their behavior? I don't think we should have rights creep, where there's metaverse rights, and there's neurotech rights, and there's generative AI rights. We need an umbrella. We need to actually recognize that these things are not acting in silos. They're acting iteratively in combination with one another, and they all are affecting our brains and mental experiences. And not all of them are going to be ones that we think are OK, and not all of them are ones that we're going to think are wrong. And drawing the lines is not always going to be easy. But I think artificially drawing the lines around a technology isn't the way to address it. And so I've always come at this from cognitive liberty rather than from neurotech-specific rights. And you'll notice that many of the examples in the book aren't neurotech examples, right? The whole chapter on mental manipulation is really hardly a chapter about neurotechnology, even though it uses examples of neurotechnology. It's really talking about the ways in which our brains and our mental experiences can be shaped and changed. But I consider EMG to be fair game within all of this. EMG is picking up your intention to move, and it's picking up motor neuron activity, but it is decoding information. And there's a lot of sensitive information that you can pick up from the intention to communicate by typing, or the intention to move, or even the movements themselves. Fantasizing, right? Sexual fantasizing that people do can have motor neuron activity movements.
Health implications can be picked up through motor neuron activity. There's a lot that can be inferred from that. So as to the idea that EMG is somehow okay: it is different. I think there is a different set of things that can be inferred from EMG, and some of them are going to be less sensitive and some of them are going to be just as sensitive. But it's about the inferences. It's about the information. It's about the ways in which you can pick up information from the brain and mental experiences, and the ways in which you can shape and change them. That's what cognitive liberty is about.

[00:35:43.011] Kent Bye: And is emotion also like a subset of cognitive liberty?

[00:35:46.825] Nita Farahany: I wouldn't say it's a subset. I would say it's a part of cognitive liberty, right? I talk about cognitive and affective functions. And I struggled with that word cognitive, right? Because I didn't mean for it to exclude the affective, like the emotional. So I always wondered if there was a better word than cognitive, so that I would never have to say "cognitive and affective and all other functions of the brain." But cognitive liberty, I think of as covering brain and mental experiences, everything within there, including emotion.

[00:36:16.677] Kent Bye: Yeah, when I was reading through the book and also listening to your talk, Dr. Anita Allen is an example of someone taking some very paternalistic approaches to privacy, in terms of saying that sometimes there's aspects of our privacy that maybe should be considered like an organ, that we shouldn't have the ability to give away control over different aspects of our privacy. And in the last chapter, there was a paragraph where you are talking about Shoshana Zuboff. You say that when Shoshana Zuboff coined the concept of surveillance capitalism, our personal data had already been widely commodified and our ability to draw back was largely gone; with neurotechnology, it's not too late to protect against the same fate for our brains. So essentially, we stand at a fork in the road. And what I was struck by is that, in some ways, surveillance capitalism as a concept was being coined after the genie was out of the bottle. This new, unprecedented paradigm had already diffused out into the culture, and there's no pulling it back. And so how do you balance this tension when you have things like virtual and augmented reality, where companies like Meta want to enable certain contextual dimensions of this neural technology to have an experiential component, but maybe they want to have this other aspect where they take that data and do all these inferences for psychographic profiling for advertising purposes? How do you differentiate this boundary, drawing a firm line as to what is acceptable and not acceptable, so that we don't sleepwalk into this new future and suddenly find ourselves on the wrong side of the Collingridge dilemma, where we've missed the opportunity to regulate, and now we're just in the place where it's diffused out into the culture and there's no way to rein it back?
So how do we navigate this in a way that's different than what has already happened with other types of data?

[00:37:56.512] Nita Farahany: So I think we have to change the default rule. And I believe that cognitive liberty is the first step to doing so. So let me unpack that. I describe where we are as being a moment before, and I really mean a moment, right? You know well enough that this technology has already arrived. Commodification of brain data has already begun. It's just a question of scale, right? It isn't yet widespread across society, and it is about to be. And so before that happens, we have a moment to change the terms of service. The terms of service right now is: it pops up on your screen, you scroll down without reading it, and you say accept. And what you've just accepted is that all of your personal data is being commodified and sold, and profiles of you are being created and sold to advertisers and marketers and for any other purpose that anybody wants to use them for. By contrast, while a few companies have started to take brain data from devices, commodify that data, and sell it to other companies, most people do not yet have real-time tracking sensors on their brains in their headphones and their earbuds and other devices. That data, for now, remains their own. And so imagine if we had a right to cognitive liberty, which applied if you are taking brain data. And here I'm being a little more limited, because I think everything is brain data in a way, right? So I'm limiting it to the category of information that has not yet been taken, which is information from brain activity picked up through sensors on your body, whether that's through fNIRS or EEG or EMG or any other device that is picking up that data.
So cognitive liberty would say you have a right to that data, and the terms of service would have to seek an exception from you, like affirmative permission. Not "accept the terms of service if you want to use this," but, because of your right to mental privacy and freedom of thought and self-determination, the answer is: you don't get that data unless you actively ask me to opt in, and you can't condition access to services on access to brain data, either. It literally resets it and says, this is your data; you get to choose how it's used. And there has to be an exception and permission, and maybe even offers. Maybe it has to be bought from you. It literally does a hard reset when it comes to this new category of data. And that's one of the things I want people to focus on and realize: this is a new category of data. There are a lot of inferences about how you think and feel being made from other streams of data, but this is a new category of data, and there is no reason why we have to treat it the same way as we've treated every other category of data. We can actually change the terms of service from the get-go, in which case we won't be looking back in five years and saying, oh, you know, the age of surveillance capitalism of the brain, part two, where we write the sad story about how we gave away all of our brain data without even thinking about it.

[00:41:04.527] Kent Bye: Yeah, because it is a new type of data, I feel like the legal definitions of that data have these broad implications. Just talking to different experts in the field, Illinois' biometrics law, as an example, tightly couples biometric data to identity and personally identifiable information. And I just had some discussions with experts who are looking at the AI Act as it's going through the trilogue deliberative process, and how there's currently different definitions, but it's trying to potentially decouple biometric data from this identifiable aspect, because identity can change over time, but also because of the harms that can come from this physiological or biometric data that's read from our bodies. It's inference-based harms versus identity-based harms, I guess, as another way of putting it. And so is there a specific take that you have in terms of how we start to define this new type of data that's related to cognitive liberty?

[00:41:55.817] Nita Farahany: So when I started writing about this stuff about a decade ago, I laid out what I called a spectrum of neuroscience data. And I put that spectrum largely on the inference model, which is the kinds of inferences that could be made, but identity is on that spectrum. So the four categories that I laid out were identifying information, automatic information, memorialized information, and utterances. And what I meant by that was: identifying information covers basic identity features, and that could be anything from there's a tumor in your brain, that is, identifying things in your brain, to identifying you as an individual. Automatic functioning could be everything from how your brain is processing information to automatic reflexes in the brain that are picked up. Memorialized information could be things like literally your memories, right? The kind of information that is stored through recognition memory or other memories in your brain. And utterances are the kind of silent thought, the inner monologue, the inner reflections, the imagery, all of that within your brain. And I offered that spectrum as a way for us to recognize that not all brain data is equally sensitive, and treating it all exactly the same doesn't make a lot of sense, and it doesn't align with the way that we've treated the rest of data. If you look at, like, Fourth Amendment law, it looks at the differences between what you have an expectation of privacy in and what you don't, and you don't have an expectation of privacy, for example, in identity. And that's because identity is usually possible to validate visually, but also because we think of it as a necessary feature of being able to function in society. We have to provide our identity all the time. Whereas our silent utterances are not something that anybody has a right to access.
And so the way that I think about brain data is that there is a lot of information that can be gleaned from the brain, and how we ultimately regulate it should actually tie to the kinds of purposes for which it is being used. The challenging thing, for everyone who writes about and has been thinking about this, is that you capture the full spectrum at once, right? Which is different from a lot of other kinds of data. So if you're capturing raw brain data, at the exact same time I may have identifying, automatic, memorialized, and uttered information all in one snapshot. But that doesn't have to be the way it's captured. It doesn't have to be the way it's stored. It doesn't have to be the pieces that are extracted and used. And so there's the challenge of raw brainwave data, or however you want to think about it. It doesn't have to be brainwave data, right? There are other modalities of data that can be collected from the brain. But I think that's a different challenge than the question of the inferences that can be drawn from the data and whether or not we ought to be thinking about it along a spectrum. I believe we need to be thinking about the risk of misuse and the nature of the sensitivity of the data, and not lump it all together, and not mislead people into thinking that every single thing I can extract from your brain is equally sensitive.

[00:44:55.566] Kent Bye: Yeah, and just talking to Daniel Leufer of Access Now, we were looking at the AI Act and how it's taking a tiered approach of saying: here are some unacceptable risks that are going to be banned applications of AI; here are some high-risk applications of AI that need to be monitored, with certain reporting obligations in terms of transparency; and then the normal or medium-risk applications just have transparency obligations, where you disclose that AI is being used. When we start to think about some of these types of neural applications, do you see that we'll need a similar approach of saying here are some unacceptable applications of this technology that need to be outright banned, and then take a similar approach to what the AI Act is doing in creating this spectrum of different risks, trying to categorize those risks, and having different ways to enforce against the most unethical applications of neurotech?

[00:45:47.727] Nita Farahany: Yeah, well said. And so I've strongly, and for a long time, advocated that we focus on misuse rather than just access restrictions. Because, first of all, I think it is almost impossible to win on the side of just access restrictions. And second, I think data can be used for good, and neural data has the potential to be transformative for humanity if we can actually use it for solving neurological disease and disorder. And then third, there are different risks that can arise from it. There are some that we need to be very vigilant about safeguarding against, and there are some that we shouldn't be as worried about. And so focusing on misuse would help people a lot. And let me underscore that for a moment: the more we focus on access, the more people are just afraid of access, and they don't focus on developing the rights and protections against the harms that can arise. And it keeps them from even having to articulate what the harms are. Like, what is it that you are afraid of happening? What are the real risks? Can you articulate them? And then can you create protections, and can you create remedies against it? A lot of my work has been trying to do that. Let's actually figure out: what is the nature of the data? What are the actual harms? And the book is set up very much along those axes, to say, here are context-specific issues that are going to arise and context-specific solutions for how we would apply cognitive liberty in context to those particular problems and those harms. And it directs, I hope, our attention to trying to create safeguards against misuse, recognizing there are some huge upsides to the use of neurotechnology. There are huge upsides to neural data. And really, the question is just: how do we get it right? And how do we put the right safeguards into place for people?

[00:47:38.568] Kent Bye: I had a discussion with a neuroscientist who was doing brain-computer interface research, and at the time there was a study that was placing ECoG electrodes on the brain and doing speech synthesis, but it was with invasive technology. And his comment to me was that over the next five to ten years, they'll be able to do something very similar to speech synthesis with external sensors. And I know I've heard you say a couple of times that you're very skeptical that within 10 years we'll be able to decode complex thought, but I don't know, it feels like with advances in AI and large language models and everything else... but you can clarify what you mean by that.

[00:48:12.979] Nita Farahany: Right, because I'm not worried about that. Intentionally communicated complex thought, I am bullish that that will happen much, much sooner. But there's an inner monologue, which you do not intend to communicate, right? And it's been really interesting to look at some of the fascinating research that focuses on inner monologue. That's going to be a lot harder to decode. And that's, I think, what many people are the most afraid of: inner monologue decoding, in real time, of something you do not intend to communicate. You do not intend to share it, you're having an inner musing, and it's picking up that self-reflection. That's going to be harder. That's deeper in the brain. It is much more difficult to pick up with surface electrodes or even fNIRS. I don't think it's impossible to bridge that gap, but we're still at the early stages of imaging and fully conceptualizing and understanding inner monologue. But the intentional communication of complex thoughts, that's coming soon. So I say complex thoughts largely to help people who are worried about the idea that the inner monologue of unintentionally communicated speech can be decoded. That's not what we're talking about today. But if you want to share and type and transmit complex thoughts to another person through BCI, yeah, I think that's coming. There is a difference between intentionally communicated speech and unintentionally communicated speech.

[00:49:45.236] Kent Bye: Okay, thanks for that clarification, because that's more in alignment with what I'm seeing. I think things are going to probably move faster than people expect, especially when we have things like a recent study I saw where large language models were generating images, and then they were taking people's brainwaves and matching them.

[00:50:00.124] Nita Farahany: Yeah, generative AI is both exciting and terrifying. And I reached out to Eddie Chang to say, hey, how are you guys thinking about generative AI and what the implications are going to be? And he gave me some serious food for thought on that. One of the things I've been thinking about is that generative AI is very good at predicting the next thing you're thinking. But for a person who has truly locked-in syndrome, whose only mode of communication is by BCI, if generative AI is predicting what they're saying, and predicting it in large concepts, how do we validate or verify that? It's going to be complex and scary to think about what could happen in that way. So I won't complete that thought now; I think you could use your own generative AI tools in your brain to complete that thought about the kind of scary dystopian possibilities. But that's all I mean by it. If you talk to the scientists who are doing speech prostheses, they'll lay out this distinction between intentionally communicated versus unintentionally communicated complex thought. It is instantiated differently in the brain, and it is easier to decode your representation of intending to communicate speech than it is your inner monologue, which you don't intend to share and are not conceptualizing to do so.

[00:51:15.277] Kent Bye: Okay, yeah. Either way, it's scary on either end. Just a few more questions to wrap up. I know that you are differentiating yourself from neurorights, but one of the neurorights that they do have is the right to agency, which previously they've called the right to free will, and there's, I guess, some parallel to the right to self-determination. You have some sections where you've done some work on this, and free will as a concept is philosophically up for much debate, lots of deliberations. So how do you translate from the philosophical debates over free will down into something that's practical and pragmatic? And when we look at the line between, say, persuasion versus manipulation and this nudging of behaviors, what's the line for when you're violating somebody's right to self-determination, the right to agency, or the right to free will?

[00:52:04.400] Nita Farahany: It's the hardest question, Kent. I mean, honestly, the chapter that I struggled the very most on in this book was the chapter on mental manipulation. And I spent a couple of years on free will, in the sense that I spent a couple of years where I basically didn't write anything; I was just reading books about free will and the different philosophical approaches to the concept. And then I ended up writing a very short paper at the end of all of that on a neurological foundation for freedom, where I developed my perspective on the idea of freedom of action, as opposed to free will in the robust sense. We don't have free will over all of our preferences and desires, many of which are baked in and which we don't have control over. But I found philosophers and ideas that I aligned with, and that helped me to build my conception of what I think works with the neuroscience and how I understand it. And that really helps to inform that chapter, where what I focus on is that manipulation is trying to overcome our freedom of action. It isn't trying to persuade or change our behavior. It isn't about targeting our so-called conscious or unconscious minds. It isn't about hidden or apparent influences. It's about when you try to make it so that a person cannot act otherwise. A good example of that is trying to addict a person. Addictive technologies which cause a person harm are designed to overcome your ability to act freely. And that, I believe, is where we should be focusing our efforts on manipulation, because all of the rest of it is oftentimes based on a false theory of mind, or it's based on this idea about hidden or unapparent subliminal influences. Most of it's right out in the open. It's just that it's designed to hack into different mechanisms of the brain.

[00:53:54.044] Kent Bye: Yeah. And one clarifying question, because you did talk about self-determination, but you also talk about informational self-determination. In the neurorights, there's the right to identity, which is about to what degree you are sharing aspects of yourself with other people. I saw some parallels there in terms of informational self-determination, on one side receiving information about yourself, what's happening in your body, but also, on the other, what you share with other people. So is there a connection there to identity? How do you broadly see identity, or the right to identity? You mentioned it earlier in the context of existing law, like the Fourth Amendment and disclosure, because it's a way that we're relating to each other, a part of our identity. But how do you conceive of how identity fits into this aspect of cognitive liberty and the schema that you have?

[00:54:41.142] Nita Farahany: Honestly, it's not something that I have built into how I think about it. I have tracked it much more to concepts of free will and concepts of self-determination. Self-determination, I think, is foundational within international human rights law. There's a collective right to self-determination. Self-determination underlies many of the preambles and our understanding of what human rights law is and does. And informational self-access is already recognized as part of international human rights law. I'm not an identity scholar, and I wouldn't put it within identity. I guess I think identity, if it is a concept that plays in here, is an umbrella term that plays across many, many different areas of law. And there's so much conversation about continuity and discontinuity of identity, and what that even means within the philosophical literature, that to me it's not core to the concept of cognitive liberty. The core of cognitive liberty is self-determination. And self-determination, whether that's about identity or about your experiences, even if you have a discontinuous self, I think is fine. So that's a messier answer for you, but it's not core to how I thought about the problem or how I think about the problem. How I think about the problem is: your right to know yourself, the right to have access to information about yourself, and the right to be able to choose how you experience the world and your brain and what is around you. And if that means speeding it up or slowing it down, changing your identity or keeping it the same, whatever it is that you want to do with respect to your brain and mental experiences, that's what self-determination is about. So I don't tie it explicitly in any way to the concept of identity.

[00:56:28.865] Kent Bye: Okay. Yeah. Just as I see surveillance capitalism as a threat out in the world, I see it as a staged approach, right? I appreciate the neuro rights framing of mental privacy, where companies are encroaching on our thoughts, on what's happening inside of our minds. Then they're doing this psychographic mapping of our identity. And then the harm we see is that maybe they're nudging our behaviors and undermining our right to intentional action or self-determination. So the end result is that our self-determination may be vulnerable to those nudges, but it's through that psychographic profiling by companies that they're doing that. I feel like there's another aspect of what happens to inferences and biometrically inferred data. Are there other aspects from a human rights perspective to counteract these mechanisms of surveillance capitalism that people are consenting to? So that's how I think of it.

[00:57:15.960] Nita Farahany: Yeah, I mean, so I think you and I might align around that concept, and I think I would refer to it just more as self rather than identity, because identity and identity politics have such a rich and robust debate around them that it sort of invites, to me, a set of associations and debates that aren't necessary for this concept. I could easily see somebody taking the concept of cognitive liberty and self-determination and applying it to identity politics, right? And saying, like, I have a right to shape my own identity, to determine what my identity is, to declare my identity, whether that's over gender identity or ableism or whatever it is, right? You could see how these things could fit into, or be echoes of, or part of a similar debate. And that's, I think, part of what's exciting about an offering like cognitive liberty: it will be really interesting to see where people take it and go with it, because I can see it applying to a lot of different debates and being related to them. I mean, just in the way that you have sort of mapped the concept to the neuro rights principles, I can see that people are going to do this in a lot of different contexts and relate it to their own work, just like I've related their work to my work to build and develop the conceptual foundation for it.

[00:58:31.236] Kent Bye: Yeah, well, it's really crucial and key. And I'm so glad that you've done the work to produce this book. And I guess as we wrap up, I always like to ask my interviewees, given that we are going to be able to define cognitive liberty and put on some guardrails for this tech, what do you see as the ultimate potentials of this type of neurotechnology?

[00:58:50.673] Nita Farahany: I think they're vast, right? I didn't write a dystopian book. I wrote a book that is, as most people have described it, really even-handed. And I lay it out in that way because I think the potential to treat brain health and wellness, to be able to quantify it, to address neurological disease and disorder, and to change and enhance and alter our brains or connect our brains to one another is transformative for humanity. I think the potential for addressing so much human suffering is exciting. And I think if we get it right, that's a future that we can go into with excitement, optimism, enthusiasm. I think the downside potential is also terrifying, and it's very real. And we're already seeing the seeds of it and the applications of it. And so my hope is we take this moment of hope and optimism and fear and terror, and we channel it for good to actually get it right this time, because I think this is the most important area we possibly could get it right in. And we have a chance to do that now.

[00:59:57.456] Kent Bye: Awesome. Was there anything left unsaid that you'd like to say to the broader immersive community?

[01:00:02.120] Nita Farahany: Just that this has been a tremendous conversation. I learn so much from every conversation, but especially from somebody who is as deeply engaged in the debates and as widely read and as thoughtful about it. It's a rare privilege and opportunity. So thank you for giving me the chance to have this conversation with you.

[01:00:20.828] Kent Bye: Awesome. Yeah. Thank you so much. And it's The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, coming out March 14th, so by the time you listen to this you can go check it out. I highly recommend it. It's such a tour de force combining both the scientific frontiers and the legal, ethical, moral, and philosophical implications of all this. It's the right book at the right moment to help us try to navigate this before we sleepwalk into a dystopia. So thanks again for writing it and taking the time today to unpack it all. Thank you.

[01:00:52.510] Nita Farahany: Thank you.

[01:00:53.791] Kent Bye: That was Nita Farahany. She's a professor of law and philosophy at Duke University and the author of the book called The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, which is releasing on March 14th, 2023. I highly, highly, highly recommend checking out this book. It's so crucial. There's so much interesting information, first of all, just to see the bleeding edge of what's happening with neurotechnologies. I mean, I focus pretty specifically on XR, and there's some overlap that's happened with neurotech, although it's kind of died down a little bit. OpenBCI had a lot of different integrations, and the HP Reverb had some neurotech integrated, but Microsoft is starting to wind down some of their Windows Mixed Reality efforts, and the cost of OpenBCI's Project Galea headset was over $20,000. So it's not anything that normal consumers are going to get a hold of. But as we look into the future of human-computer interfaces, Meta has a lot of hope that these types of wrist-based electromyography, non-invasive neural interfaces are going to be the future of human-computer interaction when it comes to spatial computing, especially when it comes to augmented reality. With virtual reality, we already have these hand-tracked controllers that you're willing to have and hold when you're in a virtual experience. But when you're out and about in the world in augmented reality, you kind of want to have your hands free and use something a lot more like what they're developing with CTRL-Labs with their EMG wrist interface. So this book is just an amazing catalog of what's happening in so many different contexts, and you really get a sense of the bleeding edge of what's possible right now. You can really see what you're able to discern from the different signals that are coming from our brain and other parts of our body as well.
Again, it's not just neurotech; there's a whole range of ways of looking at the issue of cognitive liberty at its broadest scale. The first part of the book is really about tracking the brain, and then the second part is about hacking the brain, changing different aspects of what's happening with your cognition and augmentation and all sorts of other stuff that we didn't get a chance to focus on here. The main thing that I really wanted to dig into in this conversation was to elaborate these human rights principles of cognitive liberty, which has underneath it mental privacy, a more fluid and flexible concept. As an example, if you're a commercial driver and you're fatiguing, then you're actually potentially putting lives in danger, and so you might have to wear an EEG cap that's tracking whether or not you're fatigued. So with mental privacy, she's saying it's in a similar vein to how privacy is treated in a legal context, meaning that it's not an absolute right, that it's contextual. And with Helen Nissenbaum, there's a whole contextual integrity approach to privacy, which really emphasizes that there are specific contexts where it's important to share information. I think it's in that vein that mental privacy has a little bit more fluidity. But there is an absolute right to freedom of thought that shouldn't be encroached on by other people, just like there's a freedom of religion. There's something much more absolute in that sense, and it's a part of mental privacy; in that sense, it's a little bit of a subset of the broader scope of mental privacy. Also, self-determination is a lot about free will and intentional actions, and she starts to dig into some of the neuroscience basis to find her own take on that, along with other aspects of informational self-determination as well.
So, I was just really fascinated to hear her take on how this actually diffuses out now, what the next steps are, because as I've been covering it, I haven't really gotten a clear answer for how all these things start to percolate. She is doing a collaboration with the American Law Institute and the European Law Institute; she mentioned that she's part of a scoping project trying to figure out some of the principles around biometrics. There are the ALI-ELI Principles for a Data Economy, which you can look up to see some of the initial surveys trying to find the connections between what's happening in the European regulatory regime and what can be fed over into an American version of similar legislation, so as to have some parity there. So, yeah, they're trying to figure out some of these aspects of biometrics as they continue to move forward, and she said that's going to take some time, and, yeah, require coming up with some of these definitions. I had some previous discussions with Daniel Leufer about some of the ways that they're trying to define these different aspects of biometrics. There needs to be a tiered system, with different contexts in which some of the different applications should be outright banned, down through unacceptable risk and high risk, in a similar approach to what's happening with the AI Act. There's certainly still a lot of work to be done.
In order to prevent us from just moving forward into what we already have with surveillance capitalism companies, we're going to need a completely different way of figuring out how to maintain more ownership over our neural data. We've been so used to handing over aspects of our data, but there's a different quality and context to the type of information that may be contained within this neural data, which is super sensitive. She mentioned in the book that when it comes to sensitive information, the most sensitive piece of information people perceive they have is something like a Social Security number, which of course, if it gets hacked or revealed, carries the potential for financial fraud or identity theft. There are other risks that the culture has been made aware of, but there isn't a similar level of urgency when it comes to the threats around this type of brain data. And that's a little bit unfortunate, because in some ways it may actually be the most intimate thing of all, especially as you move into different aspects of mental privacy and the differentiations between intentional thought and the unintentional, private thoughts that are not meant to be shared with other people. So at least there is still a level of difficulty in accessing some of the most intimate aspects of our thoughts. She did lay out a spectrum of different types of data that she described as identity or identification types of data. There's the automatic information that you can't stop from happening; it's just going to happen automatically. She goes into a lot of detail over the course of the book. The P300 is a signal where you can subliminally present information to people and get a recognition reaction from them that could be used to, say, steal the PIN for your credit card, or stuff like that.
So there's the automatic information that's happening in the processing of our brain, then there's memorialized data of our memories, and then there are the different utterances that we have. She's come up with a spectrum of different types of data within mental privacy, which I guess is helping to map out, from a neuroscience perspective, the different categories of data that we have there. But when it comes to data ownership and coming up with these new regimes, she's suggesting that there needs to be an even bigger overhaul for this type of brain data. So that's yet to be seen. I haven't seen any proposals that are trying to figure out how that actually gets implemented, so certainly there's no lack of work yet to be done. But I will say that it is a bit of a relief to find somebody who has done this deep of a dive, digging into all these different issues from the philosophical, the legal, the technical, and the neuroscience perspectives, and who really just ties it all together. And like I said, it's an amazing read that helps you get a sense of where things are at right now and where they might be going in the future, as she's doing a comprehensive survey of the academic literature, tracking what's happening in the enterprise market and trying to see what the signals are for making the jump from custom, bespoke implementations of this technology into the consumer space. That may actually come through the human-computer interaction of non-invasive neural interfaces, something like the CTRL-Labs EMG wrist-based interfaces that are meant for augmented reality and potentially for VR as well at some point, which are super sci-fi interfaces, and it's exciting to see what the potentials are.
I know I've talked to Palmer Luckey, and he's mentioned that these are the types of interfaces where, just like you've spent hundreds of hours training yourself to type on a keyboard, we'll likely do the same type of training as we start to move into these new modes of 3D UI and the next paradigm of human-computer interaction when it comes to spatial computing. But the other side of it is that it is interfacing with our neural data, and there's a certain sensitivity to that neural data that she's trying to call out here in her book, titled The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. So this is the last podcast that I have in this 10-podcast series, and I'm going to be heading out to South by Southwest. I'm going to be giving a featured session on March 12th at 4 p.m. in the main Austin Convention Center, so look it up on the schedule and come check it out. I'm giving a big talk on the ultimate potential of virtual reality, promises and perils, trying to map out both the exalted potentials but also the potential downfalls. We've certainly covered a lot of the different downfalls throughout the course of this series, and I'll be sure to be mentioning Nita Farahany's work on The Battle for Your Brain and the concept of cognitive liberty. It's going to be something I'm going to be shouting from the mountaintops for a long, long time, until it's actually implemented at all the different levels and has percolated into all the different laws. It seems like the most hopeful approach that I've seen so far to actually addressing some of the biggest gaps that we have in the context of this neurotechnology and neural data. Before, I've broadly been referring to the NeuroRights, and I think the human right to cognitive liberty is another thing to put in there, in terms of self-determination, freedom of thought, and mental privacy.
So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast, and if you enjoyed this podcast and this series, then please do consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue bringing this coverage, so you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
