Brittan Heller is a human rights lawyer who recently published a paper pointing out significant gaps in privacy law: existing statutes do not cover the types of physiological and biometric data that will be available from virtual and augmented reality. Existing laws around biometrics are tightly connected to identity, but she argues that XR makes available entirely new classes of data that she calls “biometric psychography,” which she describes as a “new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests.”
Her paper, published in the Vanderbilt Journal of Entertainment and Technology Law, is titled “Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law.” She points out that “biometric data” is actually defined pretty narrowly in most state laws, tightly connected to identity and personally identifiable information. She says,
Under Illinois state law, a “biometric identifier” is a bodily imprint or attribute that can be used to uniquely distinguish an individual, defined in the statute as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” 224 Exclusions from the definition of biometric identifier are “writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color” and biological material or information collected in a health care setting. 225
The types of biometric data that will be coming from immersive technologies are more like the types of data that used to be collected only within a health care setting. One of her citations is a 2017 Voices of VR podcast interview I did with behavioral neuroscientist John Burkhardt on the “Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior,” which lists some of the types of biometric psychographic data that will be made available to XR technologists. Heller says in her paper,
What type of information would be included as part of biometric psychographics? One part is biological info that may be classified as biometric information or biometric identifiers. 176 Looking to immersive technology, the following are biometric tracking techniques: (1) eye tracking and pupil response; 177 (2) facial scans; 178 (3) galvanic skin response; 179 (4) electroencephalography (EEG); 180 (5) electromyography (EMG); 181 and (6) electrocardiography (ECG). 182 These measurements tell much more than they may indicate on the surface. For example, facial tracking can be used to predict how and when a user experiences emotional feelings. 183 It can trace indications of the seven emotions that are highly correlated with certain muscle movements in the face: anger, surprise, fear, joy, sadness, contempt, and disgust. 184 EEG shows brain waves, which can reveal states of mind. 185 EEG can also indicate one’s cognitive load. 186 How aversive or repetitive is a particular task? How challenging is a particular cognitive task? 187 Galvanic skin response shows how intensely a user may feel an emotion, like anxiety or stress, and is used in lie detector tests. 188 EMG senses how tense the user’s muscles are and can detect involuntary micro-expressions, which is useful in detecting whether or not people are telling the truth since telling a lie would require faking involuntary reactions. 189 ECG can similarly indicate truthfulness, by seeing if one’s pulse or blood pressure increases in response to a stimulus. 190
While it’s still unclear whether these data streams will end up having personally identifiable signatures that are only detectable by machine learning, the larger issue here is that when these physiological data streams are fused together, it becomes possible to extrapolate a lot of psychographic information about our “likes, dislikes, preferences, and interests.”
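To make the sensor-fusion concern more concrete, here’s a minimal, hypothetical sketch in Python. The signal names, baseline values, and naive averaging scheme are my own illustration, not anything specified in Heller’s paper; the point is only that combining individually innocuous physiological streams with the stimulus being viewed produces a preference inference:

```python
# Hypothetical sketch: fusing raw physiological streams with the stimulus
# being viewed yields "biometric psychography" -- an inferred interest
# score attached to content, even when no single stream identifies you.

from dataclasses import dataclass

@dataclass
class BiometricSample:
    pupil_dilation: float      # relative change, 1.0 = baseline
    heart_rate: float          # beats per minute
    skin_conductance: float    # microsiemens, rises with arousal

def interest_score(sample: BiometricSample, baseline: BiometricSample) -> float:
    """Naive sensor fusion: average each stream's normalized deviation
    from the user's baseline. A real system would use trained models,
    but the privacy issue is the same: the output is a preference
    inference, not an identity."""
    signals = [
        sample.pupil_dilation / baseline.pupil_dilation - 1.0,
        sample.heart_rate / baseline.heart_rate - 1.0,
        sample.skin_conductance / baseline.skin_conductance - 1.0,
    ]
    return sum(signals) / len(signals)

baseline = BiometricSample(pupil_dilation=1.0, heart_rate=65.0, skin_conductance=2.0)
while_viewing_car = BiometricSample(pupil_dilation=1.3, heart_rate=72.0, skin_conductance=2.6)

score = interest_score(while_viewing_car, baseline)
# The psychographic record pairs the stimulus with the inferred response:
record = {"stimulus": "red sports car", "interest": round(score, 3)}
```

Notice that the output record contains no identity at all: it’s the pairing of a stimulus with an inferred response, which is exactly the class of data that falls outside identity-centered biometric statutes.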
Currently, there are no legal protections that set any limits on what private companies or third-party developers can do with this data. There are a lot of open questions about the limits of what we consent to sharing, but also about the degree to which access to all of this data might put users in a position where their Neuro-Rights of agency, identity, or mental privacy are undermined by whomever has access to it.
Yuste, R., Genser, J., & Herrmann, S. "It's Time for Neuro-Rights." Horizons: Journal of International Relations and Sustainable Development, no. 18, 2021, pp. 154–164. JSTOR, https://t.co/1OMsuqQaWW
— Kent Bye (Voices of VR) (@kentbye) March 31, 2021
Heller is a human rights lawyer whom I previously interviewed in July 2019 on how she’s been applying human rights frameworks to curtail harassment and hate speech in virtual spaces. Now she’s looking at how human rights frameworks and agreements may help set a baseline of human rights that are more consensus-based, in the sense that there’s no legal enforcement mechanism. She cited the “UN Guiding Principles on Business and Human Rights” as an example of a human rights framework that is used to combine a human rights lens with company business practices around the world. Here’s a European Parliament policy study of the UN Guiding Principles on Business and Human Rights that gives a graphical overview:
One of the biggest open issues that needs to be resolved is how this concept of “biometric psychography” gets enshrined into some sort of Federal or State privacy law so that it becomes legally binding on these companies. Heller talked about a hierarchy between the laws, and this is one way to look at the different layers: international law sits at a higher, more abstract level that isn’t always legally binding in national, regional, or state jurisdictions. She said that citing international law in a US court is often not going to be a winning strategy.
Ultimately, the United States may need to implement a Federal Privacy Law that sets up guardrails for what companies can and cannot do with the types of biometric psychographic data that come from XR. I previously discussed the history and larger context of US privacy law with privacy lawyer Joe Jerome, who explains that even though there’s a lot of bipartisan consensus on the need for some sort of Federal Privacy Law, there are still a lot of partisan disagreements on a number of issues. A lot of United States privacy legislation is being passed at the State level, which the International Association of Privacy Professionals is tracking here.
Heller’s paper is a great first step in explaining some of the types of biometric psychographic data made available by XR technologies, but it’s still an open question whether laws should be implemented at the Federal or State level to set up guardrails for how these data are used and in what context. I’m a fan of Helen Nissenbaum’s contextual integrity approach to privacy as a framework to help differentiate the different contexts and information flows, but I have not seen a generalized approach that maps out the range of different contexts and how this could flow back into a generalized privacy framework or privacy law. Heller suggested to me that creating a consensus-driven, ethical framework that businesses consent to could be a first step, even if there is no real accountability or enforcement.
Another community starting to have these conversations is neuroscientists interested in Neuro Ethics and Neuro-Rights. There is an upcoming, free Symposium on the Ethics of Noninvasive Neural Interfaces on May 26th, hosted by the Columbia Neuro-Rights Initiative and co-organized by Facebook Reality Labs.
Columbia’s Rafael Yuste is one of the co-authors of the paper “It’s Time for Neuro-Rights,” published in Horizons: Journal of International Relations and Sustainable Development. They are also taking a human rights approach, defining fundamental rights to agency, identity, mental privacy, fair access to mental augmentation, and protection from algorithmic bias. But again, the real challenge is how these higher-level rights at the international law or human rights level get implemented at a level that has a direct impact on the companies delivering these neural technologies. How are these rights going to be negotiated from context to context (especially within consumer technologies that can themselves span a wide range of contexts)? What should the limits be on who has access to this biometric psychographic data from non-invasive neurotechnologies like XR? And should there be limits on what they’re able to do with this data?
I have a lot more questions than answers, but Heller’s definition of “biometric psychography” will hopefully start to move these discussions around privacy beyond personally identifiable information and our identity, and toward looking at how this data provides benefits and risks to our agency, identity, and mental privacy. Figuring out how to conceptualize, comprehend, and weigh all of these tradeoffs is one of the more challenging aspects of XR Ethics, and something that we still need to collectively figure out as a community. It’s going to require a lot of interdisciplinary collaboration between immersive technology creators, neuroscientists, human rights and privacy lawyers, ethicists and philosophers, and many other producers and consumers of XR technologies.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Update April 20th: On April 11th, I posted this visualization of the relational dynamics that we covered in this discussion:
Here is a simplified version of this graphic that helps to visualize the relational dynamics for how human rights and ethical design principles fit into technology policy and the ethics of technology design.
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So virtual and augmented reality are, on one hand, revealing all sorts of amazing new capabilities for us. But at the same time, there are also a lot of threats where that same technology could be used to undermine our agency, or undermine our identity, or undermine our sense of mental privacy. And so, how do we determine where the line is in terms of what is okay and what's not okay, depending on the context and what kind of agreement you're entering into with these different technology companies? So, Brittan Heller is a human rights lawyer, and she's been digging into some of these different issues, taking the approach of looking at them through the lens of human rights and human rights law. She spent time at the Carr Center for Human Rights at Harvard University, and out of that published a number of different papers. One is called Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law. So in this paper, she's defining this new concept of biometric psychography. And one of the things that she explains is that most of the biometric laws, and privacy laws in general, are really focused on personally identifiable information, information that is identifying you in some capacity. But yet, within the context of virtual reality, there's lots of different biometric information that we're radiating off of our bodies that is not necessarily personally identifiable, although it may be PII if you have enough machine learning algorithms. But the real threat is around how that information is revealing what you're paying attention to, what you like, what you're engaged with, what you are disgusted by. It's like real-time emotional sentiment analysis, tied to all these other biometric indicators, that is going to be able to create these psychographic profiles.
This is a concept that Heller is coming up with called biometric psychography. In my last conversation with Thomas Reardon from Facebook Reality Labs, where he's the director of Neuromotor Interfaces, we talked about how he was one of the founders of CTRL-Labs, and they have this device you put on your wrist that has access to individual motor neurons. On their own, a lot of these signals are not that big of a deal; it's not that big of a deal to know how your hands are moving. But when you have that information tied to all this other information, fusing together all these different sensors, then at the level of Facebook as a business, they may have access to incredible amounts of information about us that goes way beyond anything we're able to reveal by interacting with mobile or web-based technologies. You know, this is getting into the core of our body, our emotions, the biometrics that we're radiating. And how do we deal with this information? Well, Brittan Heller, in this paper, is trying to make a legal definition for what at this point is a gap. There's a hole. There's not really any existing law that covers this. And she's saying, hey, we need to at least start to define this. And then from there, it's a bit of trying to determine, okay, if this is like a human rights law, where does this fit? But I think this is a really important conversation that needs to happen within the wider industry, because, you know, I just went to IEEE VR, and a lot of the research in these different privacy workshops, again, is really focused on this personally identifiable information. The laws end up dictating the boundaries in terms of what these companies can do, but also the research agenda for what kind of privacy implications some of these technologies have. And so if we're not really thinking about these aspects of biometric psychography, then we may be missing how to deal with some of the biggest threats of these technologies.
So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Bruton happened on Tuesday, April 6th, 2021. So with that, let's go ahead and dive right in.
[00:03:34.826] Brittan Heller: My name is Brittan Heller. I work as counsel at Foley Hoag LLP, and I am a former fellow at Harvard Kennedy School's Carr Center for Human Rights. I look at the intersection of human rights, the law, and immersive technology.
[00:03:52.123] Kent Bye: Okay. Yeah. And I know that you recently published a number of papers. The one that really caught my eye was you define this new term of biometric psychography. Maybe you could give a little bit more context as to how that paper came about and what you were trying to define here with this new class of biometric data.
[00:04:10.094] Brittan Heller: Sure. Maybe I'll start by giving you an example. So we all love VR. Pretend you and I are in a car racing game. I am picking my ride, and I see this bright red McLaren, and I really like this car. And in response to seeing this car, my body starts showing the physiological signs of pleasure. My heart rate accelerates a little bit. My skin becomes a little more moist. My pupils dilate. These sorts of reactions are not voluntary. They're parasympathetic. They're the way that your body physiologically responds to your mental state of pleasure. Pretend later on, I then start receiving targeted ads in other social VR apps, in other tie-ins on my browser, for car insurance or reminders to renew my license or ads for other red sports cars. I realized when I was looking at the law surrounding biometrics that this is inherently possible. The way that biometric laws are formulated, they're formulated around the concept of identity. And in an immersive environment, where you need to log in with a verifiable name and billing address for the most part, it's not that the companies don't know who you are. They know who you are. What they are more interested in is how you think and what you like. So the term biometric psychography, I invented that to encapsulate the potential for combining these physiological characteristics of you reacting to a stimulus with the stimulus itself. And that is really what differentiates an immersive environment from an internet-based online environment, where you're not recording both inherently through the functioning of the device. So just saying it in other words, biometric psychography is a record of your likes, your dislikes, and your consumer preferences, combined with your biological responses to those stimuli. And that's the closest to reading your mind that I can think of.
[00:06:32.820] Kent Bye: So yeah, as you were just recounting there, a lot of the biometrics laws, the state laws that are out there, whether it's Illinois, Texas, or Washington, are all tying that biometric information to your identity. And what you're saying here, in some sense, is that all these companies already know who you are, because you have a Facebook ID or they have an IP address. There are ways that they can already know who you are. But the real issue here is this whole new class of biometrically inferred information, and there isn't necessarily a good legal concept for how to define it. You're suggesting to define this as biometric psychography. But as I look at the landscape, I'm kind of wondering: okay, now that this has been defined in a theoretical law journal article, where do things need to go in the future? What's the next step for actually making this into a law that would have any teeth, that would be applied to something like virtual reality or augmented reality technologies?
[00:07:29.433] Brittan Heller: The paper was pretty well received. It won an award from the Future of Privacy Forum, and as part of that, they let the top three papers present to the Federal Trade Commission. So I was able to actually take these ideas and put them in front of a bunch of decision makers, who I'm hopeful, in the new administration, will look at immersive technologies and not let them replicate some of the mistakes that we saw with social media and how that ecosystem evolved. I think it's a really delicate time right now, because there isn't a standardization of hardware. And you may argue with me on that, but I still feel like that space is rapidly closing, but more fluid, because we have haptics. There is this new phase where people may be able to use their hands instead of a controller. So I really feel like it's not solidified. And the pushback that I keep getting from industry associations is that it's not solidified yet: we don't want to crush innovation, so how do we regulate at this point? The way that I think about this, and the way that I'm pushing in my practice to do this, is to create industry best practices, or soft law, not hard enforceable regulations, but agreements on best practices between these main companies about what consent means, the limits of facial recognition, the application of clustering, how they are going to monetize these services, what areas will be off limits, and whether they are going to have to do hashing of personally identifying information. I'm talking like a lawyer, but if you want me to talk like a storyteller instead: I'm concerned that there isn't always a clear path towards monetization for many content creators. There's a lot of content out there, and people are still figuring out how we're going to make money off of this other than direct sales. I don't want the default to be advertising again.
[00:11:10.087] Brittan Heller: That's why I turned to human rights law as a framework for this, instead of looking at other agreements. Human rights law is inherently based on consensus. The sources of international human rights law include jus cogens norms and the best practices of states. I think one of the challenges is that, because the United States has a very different conception of what privacy means than the rest of the world, privacy law is much more underdeveloped in the United States than it is elsewhere. So I looked at the way that these technologies aren't just gaining traction in the United States, but how the world is getting smaller. The pandemic really emphasized that for me. We need to start thinking larger and bringing more stakeholders on board if we're going to make sure that this technology remains a source of joy and education and enlightenment. One of the things that worries me is the concept of consent in these technologies, because the type of information that can be gleaned, and that I worry many content creators aren't aware they're sitting on top of, includes things like medical conditions. You can tell if somebody has medical conditions that they may not be aware yet that they have, or, if they are aware, that they may not consent to make available to a company. And these are things like ADHD or schizophrenia or Parkinson's disease. You can tell that through the pupillometry in many HMDs. And like you said, it used to be a medical application, and now your reaction time could be used to infer if you have a physical or a mental illness. People are very focused on facial recognition and emotional recognition, but I'm more concerned about the inner states that you can determine from these technologies. Like, you can tell who somebody is sexually attracted to. And I don't think somebody consents to give away their sexual preferences when they're playing a VR game. It's just something that you really wouldn't think about.
It can also reveal whether or not you're telling the truth. When I did the paper, I had to do a series of firsthand interviews, because there's not a lot of academic work around this. One of the earliest developers of this technology told me: why would you want to put a polygraph of six cameras on your face? Really? And he was one of the people who invented it. I worry about that, because I saw that last week HoloLens announced a contract with the military for military applications of its AR interface. And I thought, that's the beginning. That's how this can reasonably start, unless we are vigilant about how privacy and human rights translate into the immersive world, and not just the kinetic or the tangible universe.
[00:14:13.927] Kent Bye: One of the things that I don't have a clear vision on is, you know, there are discussions right now on, say, a U.S. federal privacy law to maybe do a kind of reboot. Those discussions are ongoing, and we'll see if they're able to pass some sort of federal privacy law. But a lot of the stuff that you're talking about is at the level of international law or human rights frameworks. How do you foresee these frameworks interfacing with, say, U.S. federal privacy law or state laws? If there were some sort of human rights framework that was able to gain consensus on an international scale, how does that actually flow back into impacting these technology creators?
[00:14:54.679] Brittan Heller: I'm less optimistic than you about an omnibus federal privacy bill. I think at this point, it's something that a lot of people want and not a lot of legislators can agree on. I don't see it coming. I see it manifesting more on a state-based level, starting with the CCPA, California's native version of Europe's GDPR. On the state level, I think that might end up kind of forcing the issue, because there's not uniformity amongst states in how they treat these technologies or how they define key issues. And companies want to be able to sell across the states without having to hire someone like me to tell them what they can sell everywhere in the country but have to be careful about in Illinois. That's not great for any kind of content creator or hardware developer. The way that I see human rights law influencing this is that there is a convention that governs the ethical behavior of businesses, and it is the UN Guiding Principles on Business and Human Rights. My practice at Foley Hoag actually helped UN Special Rapporteur John Ruggie craft these ten years ago, and the process was so great he joined our practice afterwards, until he retired. This talks about the obligations of states, the obligations of companies, and the rights of the consumer, and ties it all together very nicely. So I see that kind of a framework, which other tech companies in the internet space have agreed to be constrained by. Most of the major tech companies and telecoms, as of two years ago, are part of the Global Network Initiative, which conducts external audits of the freedom of expression and human rights practices of these tech companies every two years. So there is a precedent for it. The way that I see it trickling down to individuals is if the companies who control most of the market start exercising best practices, start talking about this amongst themselves, and, again, maybe head off the need for legislation.
I don't know if you saw the social media hearings, but I don't really trust A lot of my elected representatives understand how the internet works, not discounting virtual reality or augmented reality or mixed reality. So I kind of want them as far away from this as possible at this moment. So I would rather trust the companies who know how this works and know what sort of things they're developing to say, yeah, we're going to have data localization on our devices. Yeah, we're going to retain this type of information for this period, and then we're going to dispose of it. Yeah, we're going to hash personal identifying information and make sure that these sort of things are thought about before they're a problem.
[00:17:50.281] Kent Bye: Yeah, as I've talked to folks at Facebook, there's certainly a growing awareness that there are a lot of these really intractable privacy issues, and a real question around to what degree the public is going to accept these different technologies, as well as what guardrails need to be in place. Because in some sense, Facebook can't regulate itself. It can't always draw the line, because the technology wants to go in a certain direction. There are ways in which it's increasing our expression of identity and our expression of agency, but at the same time, the risk is that it could create systems that undermine our agency, our identity, and our mental privacy, with certain potential human rights violations. And I think there are a lot of ways in which Facebook leans on trying to make it accessible, in the sense that they subsidize it, and potentially make compromises in these other areas of agency, identity, and mental privacy in order to subsidize it through whatever business model they end up settling on. There are these human rights principles, like the Neuro-Rights principles that were put forth by the group of individuals who just recently published that paper. But even within those, there are these different trade-offs, and I'm having a hard time seeing how those guardrails get into place and who's going to be there to make sure that there aren't transgressions.
And in the absence of competent tech policy here in the United States, I don't know if this kind of self-policing model is going to lead down a path where I'm necessarily going to feel safe about, say, putting on a brain-computer interface, or having a CTRL-Labs device that's reading my neural input and the firing of individual motor neurons, sensor-fused together with all this other biometric data to essentially build a profile of me that's able to do this really intimate psychographic profiling. And I know that this is the roadmap for where the technology is going. I'm just having a really hard time seeing what those guardrails are going to be, and how to ensure that we don't create a situation where the lowest-common-denominator, worst-case scenario happens.
[00:19:55.395] Brittan Heller: Yeah, I think that is a very reasonable concern. I share that concern. And I kind of look to what happened with Cambridge Analytica and social media as being, not the worst-case scenario with immersive technologies, because I think the worst-case scenario is even worse, but a bit of a template: if we're not cautious, that's where we end up. For me, the issue that may save us in the end, and that we don't have a lot of agreement on at this point, is what consent looks like in an immersive context. If you look at the policies of some of the major social VR apps, they've all agreed, well, not all, but there is general agreement about the personal safety bubble. And they're looking at that more from a physical angle, you know, the way that your avatar interacts with other people's avatars and your digital identity. But the kinds of questions that I've been thinking about are more in the context of AR, because that's going to be, I think, more subtle than the HMDs of VR at this point. So for AR specifically: how are you going to indicate to people that you are using overlays, or that you're recording them, or that you're using all of the great features that are being developed now? Do you need someone else's permission to do that? And if so, how do they indicate that they're okay with it? And how do you indicate that you want that permission? That is something that is very, very different in a browser-based context than in an interpersonal interaction like you're going to have with AR. And I know that HoloLens had the green, purple, orange, or something like that, different colored lights that indicate what it's doing. But the fact that I do this for a living and I can't remember the colors is a problem, right?
Where there's not a common vocabulary yet around what gestures mean, what hardware is standardized as, and how you indicate to somebody you don't want to be filmed or how you indicate to somebody that you are filming. So that is one thing that might come with time, but it is also something that I think companies, not just for self-regulation, but also for commercial success are going to have to think about. Many people won't think twice about it because you don't really think about giving away your privacy until it's gone and you can't get it back for most people. But the way that consent becomes demonstrative is going to be the linchpin for, I think, widespread adoption of ARBR. specifically AR.
[00:22:39.069] Kent Bye: Yeah. One of the things that came up in reading your paper was looking at whether or not this data is being recorded and stored. And I guess I have a couple of thoughts on this. One, for a long time I've had a big worry about the types of information that are recorded and stored: if that gets leaked out, then people who get ahold of it could determine different aspects of your identity, or, again, it ties back to being able to identify whoever that was and extrapolate additional information. But there's also this trend that I imagine is going to happen, where they potentially move things into edge compute frameworks that are able to do real-time processing of, say, what's happening on your wrist, and fuse that together with all these other signals. Even if none of the data are recorded, they still may be able to take in all of this data and make real-time judgments and inferences of what you are referring to as this biometric psychography. At the end of the day, like your example at the beginning, you're looking at a car and your pupil dilates, and a piece of metadata gets generated out of that. Even if the data aren't recorded, they may be able to have this information around what you're personally interested in. And I think that's probably a direction where, if everything gets focused on identity, then no one's going to be really paying attention to this real-time processing that's generating real-time inferences based upon all these machine learning sensor fusion processes, generating all this metadata about us. That metadata may or may not be correct, but it is, at the end of the day, generating these higher-level inferences that are that psychographic information.
And like you said, there's no legal conceptualization of this concept anywhere in the law. So it feels like a bit of a gap. How do we plug it?
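The real-time, on-device inference Kent describes, where transient signals are fused into an "interest" judgment and then discarded, can be sketched in a few lines. This is a purely hypothetical illustration: the sensor fields, thresholds, and function names are invented for this sketch and do not reference any real XR SDK.

```python
# Hypothetical sketch: deriving "biometric psychography" metadata in real time
# without storing the raw sensor streams. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    gaze_target: str       # object the user is looking at, e.g. "car"
    pupil_dilation: float  # relative dilation vs. baseline (1.0 = baseline)
    heart_rate: int        # beats per minute

def infer_interest(frame: SensorFrame, baseline_hr: int = 70) -> dict:
    """Fuse transient signals into a single inference, then discard the frame.

    Only the derived metadata ("user is interested in X") survives; the raw
    physiological measurements are never recorded anywhere.
    """
    aroused = frame.pupil_dilation > 1.15 or frame.heart_rate > baseline_hr + 10
    return {"target": frame.gaze_target, "interested": aroused}

# A pupil dilation 20% above baseline while looking at a car yields interest
# metadata, even though no raw biometric data is kept.
event = infer_interest(SensorFrame("car", 1.20, 72))
print(event)  # {'target': 'car', 'interested': True}
```

The point of the sketch is the one Kent makes: a regulatory focus on stored, identity-linked biometric records would miss this pipeline entirely, because the sensitive artifact is the inference, not the measurement.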
[00:24:39.739] Brittan Heller: I'm still playing with these ideas, but one way I've been thinking about them is through the application of opt-in and opt-out, and maybe the default for immersive technologies should be opt-out unless you turn it on. That's not the way you see most of the internet working, but the risks, I think, are so much greater. I think about how you're going to log in to programs in the future. If you just put on your hardware, people are going to want to be automatically logged in, you know? And if it scans your retina to log you in, then it's just for you. However, I don't know about you, but I lose my glasses all the time. I may not if they're several hundred dollars' worth of AR hardware, but if I lose them... you know, if I lose my phone, I can reset my password, but I can't reset my retina. So I'm just thinking through the way that human error and the propensity to underestimate risk will bump up against the practical application of these technologies. I've also started to think about concepts of privacy as being cumulative, like you were talking about. I haven't developed this fully yet, but I feel like immersive technologies make it very clear to me that privacy is not just something you have or you don't. Take the new hardware that goes on your wrist and allows you to gesticulate without using a headset, by having the electrical pulses monitored to control your interface. One instance of that is probably not personally identifying. My question is: if you use that to train a machine learning algorithm, does it cross a threshold? Will your patterns and your algorithm look different than someone else's? And could that be uniquely identifying in the future? So looking at privacy more as an accumulation of small identifying bits, rather than something you either have or you don't, seems to me to be a more practical level of engagement with the concept of privacy, when you need bodily measurements to make your interface work like you do in VR and AR.
[00:28:07.440] Brittan Heller: Yeah, I think that's a really good point. When I think about digital identity, I like to think about identity as being able to be parceled. I guess that's a similar concept to Helen Nissenbaum's, where in any interaction, say you go to a restaurant after COVID is over, you pay with a credit card and the person may not know your name. If they ID you, they'll know your name and your age and your address. If you pay with cash, they won't have your financial information. If you pay with your credit card, they'll have your name and your financial information, but they may not have your age. When we interact with people, we don't put forward every aspect of our identity. So for me, the cumulative nature of creating a psychographic profile of somebody is highly troubling, because it doesn't mirror how I practically implement consent in my day-to-day interactions with other people.
[00:29:02.907] Kent Bye: One of the other things I found striking, reading through some of the citations in your paper, was seeing how the artificial intelligence community seems to have started taking this approach of integrating different human rights frameworks and ethical frameworks for artificial intelligence. Maybe you could talk about that, in terms of how you may be taking some inspiration from that work, and how human rights and the ethics around AI are starting to inform your thinking on some of these issues.
[00:29:28.133] Brittan Heller: Yeah. I run the global AI practice group at Foley Hoag, so I think about this a lot. I also work with clients to develop ethical AI codes for their individual businesses and get them really thinking about how they're using AI now, and how their product development timeline and roadmap mean they have to interface with AI differently. There are lots of codes right now about what ethical AI is and the role of states. I think as of last year, there were 50 countries around the world that each had a national ethical AI plan. So it's one of the things that I think is gaining more traction on the international front than on the domestic front, at least when you're considering the United States. The leading driver of ethical AI governance is the European Union. They've put out some really strong guidance, but the challenge with all of this guidance is that it's all very, very high level, very, very conceptual, and not immediately applicable to individual businesses; it's more designed for governments. So I see the next step as evolving these codes into something that manifests as something useful. One of the other things I think about is that for some of these technologies, like facial recognition, it doesn't really matter whether or not they actually work as advertised. If the public and decision makers believe that a technology works as advertised, the fact that you're going to get false positives or false negatives is even more insidious. And you see that with AI at this point, when there were people who claimed that they could identify sexual orientation based on people's facial structure, or the infamous case where a company developed an algorithm that it said could tell you whether or not somebody was likely to commit crimes again after they'd been released from prison. And so certain judges started using this as a tool in sentencing recommendations.
And it turned out that it was actually worse than randomly picking by chance who would be a recidivist and who would not. And it was biased against African-American prisoners; I think it said that over half of them were flagged as likely recidivists, compared to 26 percent of their Caucasian counterparts. So it doesn't matter to me whether these actually work as advertised if judges and decision makers and the public at large think they hold dispositive weight. I'm still trying to figure out how that exactly translates into the immersive context. But if people think a technology does one thing when it actually does another, or think that it is capable of something it is or is not, I think that is going to help determine the mass adoption and the role that we see these technologies having in education systems, in medicine, in industry, sports, and entertainment.
[00:32:38.328] Kent Bye: Yeah, I just watched the Coded Bias premiere on Netflix. I had seen it at Sundance 2020, but I watched it again yesterday when it premiered. One of the things that was brought up was these cases where judgments are made about people's lives based on these algorithms, and that could violate due process constitutional protections if there's no transparency or explainability for some of these AI algorithms that get deployed. It's like a black box that's making these choices without much transparency into what's happening, or it could be reinforcing the larger systemic biases that are ingrained within the algorithm. So there's a level there, I think, especially within this neuro-rights framework, where one of the rights was to be free from algorithmic bias, or at least to have some transparency around it. And the Algorithmic Justice League is a whole other initiative that's trying to figure out how we get some sort of oversight over these different algorithms in our lives. So that definitely seems like an issue that's coming up not only in real-life contexts, but also in these virtual worlds, if we start to have these different algorithms making judgments and decisions about what we can and cannot do.
[00:33:44.332] Brittan Heller: Yeah, what I would challenge the immersive universe to think about is that we can't eliminate bias. There will be bias in every data set. What we can do is become aware of the bias and mitigate its impacts. I think if we try to talk about eliminating bias from our immersive platforms, we're going to set ourselves up to fail. But if we acknowledge that bias has this deleterious effect on human psyches and human relationships, then we'll be more willing to be proactive about counteracting its effects and being on guard for it.
[00:34:25.377] Kent Bye: I was wondering if you could comment a bit on this other aspect, harassment, that you've written about in this paper from a human rights perspective. We're talking a lot here about biometric psychography and the big privacy dimension, but harassment and content moderation are a whole other area that's going to need to be looked at as we move forward. What are the legal frameworks that we need at either the federal or state level? Or is this another area where a human rights approach might provide some insight and overall guidance to help us navigate the issues that have already started to come up with harassment, trolling, and abuse within these virtual environments?
[00:35:07.182] Brittan Heller: Yeah, if you look at some of the international frameworks around this, I do think they are actually less useful than the terms of service of individual platforms. That's part of the challenge of international human rights law. The best question I ever got when I was teaching a course on it at the University of Maryland Law School was when someone just raised their hand, in the middle of the summer when it was hot and muggy, and asked: why does this matter? Why does this impact my real life? Because it does seem so high level and abstract. And for me, it's the glue that holds together the relationship between people and their government, and between people and their society. That's why it actually matters. If you're in a U.S. court and you are citing international human rights law, you're probably not going to win your case, to be honest, because there is a hierarchy of laws, and national, state, and local laws definitely take precedence over international law, which is why I said it's based on consensus. There's not really an enforcement body for it. But it provides people with a lodestar, a type of compass, when they're trying to think about how they want to be governed and how they want society to treat them and other people, both like them and not like them. So this is how it impacts these virtual platforms. Most of the people who are harassed online, and, based on Jessica Outlaw's great research, many of the people in immersive environments, are targeted for what I like to call immutable characteristics, the same sorts of things that would be a civil rights violation if they happened on the street. So people being targeted or discriminated against for their gender, their race, their ethnicity, their sexual identity, their sexual orientation, their age, their pregnancy status, things like that. I went a little outside of US law, but not really.
There's a core conception in international human rights law that if you are targeted for the things about yourself that you cannot change, that is inappropriate. If somebody treats you differently than somebody who is equally situated, based on those characteristics, that is a violation. And there are many, many different treaties and regional codes and all that. But the way this comes into play is in the rules that you see developed in your social VR environment. When I look at these, I don't check whether they actually comport with the Universal Declaration of Human Rights; that would be a little silly. But I think about those principles, about equality and equity and justice and freedom of expression and freedom of assembly and freedom of association, and I look to see if those concepts are embodied in these tangible rules.
[00:38:09.949] Kent Bye: Well, you've put out this paper defining biometric psychography, and now you're also working at this law firm. What's next, in terms of either what you're working on or how this conversation continues to unfold in the right places with the right people?
[00:38:25.230] Brittan Heller: The next thing I'm working on is a paper that's been accepted by the Privacy Law Scholars Conference. It's the crystallization of, and finally having to write down, all the field work I did in Uganda before the start of the pandemic. I went to a local region right next to the DRC, called Kasese, and helped them develop a plan to engage with hate speech and disinformation in preparation for their next elections. They had a lot of civil unrest based on social media in 2016, and it actually resulted in fatalities. So I went there to work with the local people, and basically figured out that post-colonial hate speech looks different and operates differently than the canonical theories of hate speech tell us it should. I think this is because other theories of hate speech presume that there is one context, and in a post-colonial environment there are, at minimum, two, and sometimes more, competing contexts going on at the same time. So the hate speech is much more layered. It is a much bigger challenge for tech companies to identify and deal with, especially because they, for the most part, are not paying attention to Uganda, or to this local region in Uganda. So the paper details the research that I did, how I think it implicates the theory and should change it, and then makes recommendations for tech companies about how to deal with the Global South and other post-colonial environments to accommodate this type of challenge.
[00:40:15.553] Kent Bye: Great. And finally, what do you think the ultimate potential of all these immersive technologies, of virtual and augmented reality, might be, and what might they be able to enable?
[00:40:30.079] Brittan Heller: I think they're magic. They're the closest thing we have that comes to magic. And so my hope is that they can take people to new planes in art and expression and connection and education, and that we use them to amplify the better parts of our nature. That's truly what I hope for all of this technology.
[00:41:02.528] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:41:08.001] Brittan Heller: Thank you for having me here today. And thank you for letting me be the weird human rights lawyer who gets to work with you all.
[00:41:16.064] Kent Bye: Awesome. Well, Brittan, I just wanted to thank you for joining me here today. The paper that you wrote, I think, is going to help at least start the conversation, and I hope to see more discussions out in the legal community, and more people from the industry looking at these different issues and trying to figure out what tech policy is going to help rein this in, whatever guardrails we need to put into place. The other thing I'd say is that there are other conversations starting to happen that I'm happy to see, from the neuro-rights folks who are looking at neurotechnologies and brain-computer interfaces, things like CTRL-labs that Facebook's working with. As we look through all these different lenses, whether it's neuro-rights, neuroethics, artificial intelligence, or immersive technologies and all the biometric data, maybe we'll get some new insights into how to set up the privacy frameworks, and maybe I'll hold out hope that we'll have some sort of revolution in terms of a federal privacy law. We'll see. Like you said, the outlook's not so great, but I think it's papers like this that are going to lay down the foundation and help facilitate that discussion when it does come. So thanks for doing all that pioneering work, and thanks for joining me here today on the podcast to unpack it a little bit more.
[00:42:28.029] Brittan Heller: Thank you very much.