Brittan Heller is a human rights lawyer who recently published a paper pointing out significant gaps in privacy law: existing statutes do not cover the types of physiological and biometric data that will be available from virtual and augmented reality. Existing laws around biometrics are tightly connected to identity, but she argues that XR makes available entirely new classes of data that she calls "biometric psychography," which she describes as a "new concept for a novel type of bodily-centered information that can reveal intimate details about users' likes, dislikes, preferences, and interests."
Her paper, published in the Vanderbilt Journal of Entertainment and Technology Law, is titled "Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law." She points out that "biometric data" is actually defined quite narrowly in most state laws, tightly connected to identity and personally-identifiable information. She says,
Under Illinois state law, a "biometric identifier" is a bodily imprint or attribute that can be used to uniquely distinguish an individual, defined in the statute as "a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry." Exclusions from the definition of biometric identifier are "writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color" and biological material or information collected in a health care setting.
The types of biometric data that will be coming from immersive technologies are more like the kinds of data that used to be collected only within a health care setting. One of her citations is a 2017 Voices of VR podcast interview I did with behavioral neuroscientist John Burkhardt on the "Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior," which lists some of the types of biometric psychographic data that will be made available to XR technologists. Heller says in her paper,
What type of information would be included as part of biometric psychographics? One part is biological info that may be classified as biometric information or biometric identifiers. Looking to immersive technology, the following are biometric tracking techniques: (1) eye tracking and pupil response; (2) facial scans; (3) galvanic skin response; (4) electroencephalography (EEG); (5) electromyography (EMG); and (6) electrocardiography (ECG). These measurements tell much more than they may indicate on the surface. For example, facial tracking can be used to predict how and when a user experiences emotional feelings. It can trace indications of the seven emotions that are highly correlated with certain muscle movements in the face: anger, surprise, fear, joy, sadness, contempt, and disgust. EEG shows brain waves, which can reveal states of mind. EEG can also indicate one's cognitive load. How aversive or repetitive is a particular task? How challenging is a particular cognitive task? Galvanic skin response shows how intensely a user may feel an emotion, like anxiety or stress, and is used in lie detector tests. EMG senses how tense the user's muscles are and can detect involuntary micro-expressions, which is useful in detecting whether or not people are telling the truth since telling a lie would require faking involuntary reactions. ECG can similarly indicate truthfulness, by seeing if one's pulse or blood pressure increases in response to a stimulus.
While it’s still unclear if these data streams will end up having personally-identifiable information signatures that are only detectable by machine learning, the larger issue here is that when this physiological data streams are fused together then it’s going to be able to extrapolate a lot of psychographic information about our “likes, dislikes, preferences, and interests.”
Currently, there are no legal protections that set any limits on what private companies or third-party developers can do with this data. There are a lot of open questions around the limits of what we consent to sharing, but also about the degree to which access to all of this data might put users in a position where their neuro-rights of agency, identity, or mental privacy are undermined by whoever has access to it.
Awesome #NeuroEthics Humans Rights Framework by @yusterafa @JaredGenser & @SRHerrm
Yuste, R. Genser, J. & Herrmann, S. "It's Time for Neuro-Rights." Horizons: Journal of International Relations and Sustainable Development, no. 18, 2021. pp 154-164. JSTOR, https://t.co/1OMsuqQaWW pic.twitter.com/gTVk7folFl
— Kent Bye (Voices of VR) (@kentbye) March 31, 2021
Heller is a human rights lawyer whom I previously interviewed in July 2019 about how she's been applying human rights frameworks to curtail harassment and hate speech in virtual spaces. Now she's looking at how human rights frameworks and agreements may help set a baseline of rights that is consensus-based, in the sense that there's not a legal enforcement mechanism. She cited the "UN Guiding Principles on Business and Human Rights" as an example of a human rights framework that is used to combine a human rights lens with company business practices around the world. Here's a European Parliament policy study of the UN Guiding Principles on Business and Human Rights that gives a graphical overview:
One of the biggest open issues is how this concept of "biometric psychography" gets enshrined into some sort of federal or state privacy law so that it becomes legally binding on these companies. Heller talked about a hierarchy between the laws: international law sits at a higher and more abstract level that isn't always legally binding in national, regional, or state jurisdictions. She said that citing international law in a US court is often not going to be a winning strategy.
Another way to look at this issue is that there's a nested set of contexts: cultural norms; international, national, regional, and city laws; and also the economic and business layers. Article 12 of the UN's Universal Declaration of Human Rights says, "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks." But there are contextual dimensions of privacy where individuals enter into Terms of Service and Privacy Policy contractual agreements with these businesses, consenting for companies to have privileged information that could be used to undermine their sense of mental privacy and agency.
Ultimately, the United States may need to implement a federal privacy law that sets up some guardrails for what companies can and cannot do with the types of biometric psychographic data that come from XR. I previously discussed the history and larger context of US privacy law with privacy lawyer Joe Jerome, who explained that even though there's a lot of bi-partisan consensus on the need for some sort of federal privacy law, there are still a lot of partisan disagreements on a number of issues. There is also a lot of US privacy legislation being passed at the state level, which the International Association of Privacy Professionals is tracking here.
Heller’s paper is a great first step in starting to explain some of the types of biometric psychographic data that are made available by XR technologies, but it’s still an open question as to whether or not there should be laws implemented at the Federal or State level in order to set up some guardrails for how this data are being used and in what context. I’m a fan of Helen Nissenbaum’s contextual integrity approach to privacy as a framework to help differentiate the different contexts and information flows, but I have not seen a generalized approach that maps out the range of different contexts and how this could flow back into a generalized privacy framework or privacy law. Heller suggested to me that creating a consensus-driven, ethical framework that businesses consent to could be a first step, even if there is no real accountability or enforcement.
Another community that is starting to have these conversations is neuroscientists interested in neuroethics and neuro-rights. There is an upcoming, free Symposium on the Ethics of Noninvasive Neural Interfaces on May 26th, hosted by the Columbia Neuro-Rights Initiative and co-organized by Facebook Reality Labs.
Columbia’s Rafael Yuste is one of the co-authors of the paper “It’s Time for Neuro-Rights” published in Horizons: Journal of International Relations and Sustainable Development. They are also taking a human rights approach of defining some fundamental rights to agency, identity, mental privacy, fair access to mental augmentation, and protection from algorithmic bias. But again, the real challenge is how these higher level rights at the international law or human rights level get implemented at a level that has a direct impact on these companies who are delivering these neural technologies. How are these rights going to be negotiated from context to context (especially within the context of consumer technologies that within themselves can span a wide range of contexts)? What should the limits be of who has access to this biometric psychographic data from non-invasive neuro-technologies like XR? And should there be limits of what they’re able to do with this data?
I have a lot more questions than answers, but Heller's definition of "biometric psychography" will hopefully start to move these discussions around privacy beyond personally-identifiable information and identity, and toward how this data provides benefits and risks to our agency, identity, and mental privacy. Figuring out how to conceptualize, comprehend, and weigh all of these tradeoffs is one of the more challenging aspects of XR ethics, and something that we still need to collectively figure out as a community. It's going to require a lot of interdisciplinary collaboration between immersive technology creators, neuroscientists, human rights and privacy lawyers, ethicists and philosophers, and many other producers and consumers of XR technologies.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Update April 20th: On April 11th, I posted this visualization of the relational dynamics that we covered in this discussion:
Here is a simplified version of this graphic that helps to visualize the relational dynamics for how human rights and ethical design principles fit into technology policy and the ethics of technology design.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So virtual and augmented reality are, on one hand, revealing all sorts of amazing new capabilities for us. But at the same time, there's also a lot of threats where that same technology could be used to undermine our agency, or undermine our identity, or undermine our sense of mental privacy. And so, how do we determine what the line is in terms of what is okay and what's not okay, depending on the context and what kind of agreement that you're entering into with these different technology companies? So, Brittan Heller is a human rights lawyer, and she's been digging into some of these different issues, taking the approach of looking at it through the lens of human rights and human rights law. And she actually spent time at the Carr Center for Human Rights at Harvard University. And out of that, published a number of different papers. One is called Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law. So in this paper, she's defining this new concept of biometric psychography. And one of the things that she explains is that most of the biometric laws and just privacy laws in general are really, really focused on personally identifiable information, information that is identifying you in some capacity. But yet, within the context of virtual reality, there's lots of different biometric information that we're radiating off of our body that is not necessarily personally identifiable, although it may be PII if you have enough machine learning algorithms. But the real threat is around how that information is revealing what you're paying attention to, what you like, what you're engaged with, what you are disgusted by. It's like your real-time emotional sentiment analysis tied to all these other biometric indicators is going to be able to create these psychographic profiles. This is a concept that Heller is coming up with called biometric psychography. In my last conversation with Thomas Reardon from Facebook Reality Labs, he's the director of Neuromotor Interfaces. He was one of the founders of CTRL-Labs, and they have this device you put on your wrist that has access to individual motor neurons. A lot of these technologies, on their own, it's not that big of a deal to know how your hands are moving, but when you have that information tied to all this other information that is fusing together all these different sensors, then at the level of Facebook as a business, they may have access to incredible amounts of information about us that goes way beyond anything else that we are able to reveal by interacting with these mobile or web-based technologies. You know, this is getting into the core of our body, our emotions, our biometrics that we're radiating. And how do we deal with this information? Well, Brittan Heller in this paper is trying to make a definition, a legal definition, that at this point is a gap. There's a hole. There's not really any existing law that covers this. And she's saying, hey, we need to at least start to define this. And then from there is a bit of trying to determine, OK, you know, if this is like a human rights law, where does this fit?
But I think this is a really important conversation that needs to happen within the wider industry, because, you know, I just went to IEEE VR, and a lot of the research at these different privacy workshops, again, is really, really focused on this personally identifiable information. The laws end up dictating the boundaries in terms of what these companies can do, but also the research agenda for what kind of privacy implications some of these technologies have. And so if we're not really thinking about these aspects of biometric psychography, then we may be missing how to deal with some of the biggest threats with these technologies. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Brittan happened on Tuesday, April 6th, 2021. So with that, let's go ahead and dive right in.
[00:03:34.826] Brittan Heller: My name is Brittan Heller. I work as counsel at Foley Hoag LLP, and I am a former fellow at Harvard Kennedy School's Carr Center for Human Rights. I look at the intersection of human rights, the law, and immersive technology.
[00:03:52.123] Kent Bye: Okay. Yeah. And I know that you recently published a number of papers. The one that really caught my eye was you define this new term of biometric psychography. Maybe you could give a little bit more context as to how that paper came about and what you were trying to define here with this new class of biometric data.
[00:04:10.094] Brittan Heller: Sure. Maybe I'll start by giving you an example. So we all love VR and pretend you and I are in a car racing game. And I am picking my ride and I see this bright red McLaren. And I really like this car. And in response to seeing this car, my body starts showing the physiological signs of pleasure. Like my heart rate accelerates a little bit. My skin becomes a little more moist. The pupils dilate. These sorts of reactions are not voluntary. They're parasympathetic. So they're the way that your body physiologically responds to your mental state of pleasure. Pretend later on, I then start receiving targeted ads in other social VR apps, in other tie-ins on my browser for car insurance or reminders to renew my license or ads for other red sports cars. I realized when I was looking at the law surrounding biometrics that this is inherently possible. The way that biometric laws are formulated, they're formulated around the concept of identity. And in an immersive environment where you need to log in with a verifiable name and billing address for the most part, it's not that the companies don't know who you are. They know who you are. What they are more interested in is how you think and what you like. So the term biometric psychography, I invented that to encapsulate the potential for combining these physiological characteristics of you reacting to a stimuli with the stimuli itself. And that is really what differentiates an immersive environment from an internet-based online environment. You're now recording both inherently through the functioning of the device. So just saying it in other words, biometric psychography is a record of your likes, your dislikes, and your consumer preferences combined with your biological responses to that stimuli. And that's the closest to reading your mind that I can think of.
[00:06:32.820] Kent Bye: So yeah, as you were just recounting there, a lot of the biometrics laws, even if there are a number of state laws that are out there, whether it's Illinois, Texas, or Washington, they are all tying that biometric information to your identity. And what you're saying here in some sense is that all these companies already know what your identity is because you have a Facebook ID or they have an IP address. You know, there's ways that they can already know who you are. But the real issue here is this whole new class of biometrically inferred information, and there isn't necessarily a good legal concept for how to define that. And you're suggesting to define this biometric psychography, but as I look at the landscape, I'm kind of wondering like, okay, now that this has been defined in this theoretical law journal, you're talking about where things need to go in the future. What's the next step for actually making this into a law that would have any teeth that would be applied to something like virtual reality technologies or augmented reality technologies?
[00:07:29.433] Brittan Heller: The paper was pretty well received. So it won an award from the Future of Privacy Forum. And as part of that, they let the top three papers present to the Federal Trade Commission. So I was able to actually take these ideas and put them in front of a bunch of decision makers who I'm hopeful in the new administration will look at immersive technologies and not let it replicate some of the mistakes that we saw with social media and how that ecosystem was evolving. I think it's a really delicate time right now because there isn't a standardization of hardware. And you may argue with me on that, but I still feel like that space is rapidly closing, but more fluid because we have haptics. There is this new phase where people may be able to use their hands instead of a controller. So I really feel like it's not solidified. And the pushback that I keep getting from industry associations is that it's not solidified yet. We don't want to crush innovation. So how do we regulate at this point and do that? The way that I think and the way that I'm pushing in my practice to do this is to create industry best practices or soft laws, not hard enforceable regulations, but agreements of best practices between these main companies about what consent means, the limits of facial recognition, the application of clustering, how they are going to monetize these services, what areas will be off limits, and if they are going to have to do hashing of personal identifying information. I'm talking like a lawyer, but if you want me to talk like a storyteller instead, I'm concerned that there isn't always a clear path towards monetization for many content creators. There's a lot of content out there and people are still figuring out how we're going to make money off of this other than direct sales. I don't want the default to be advertising again.
[00:09:29.495] Kent Bye: Yeah, so I guess that's the thing that I look at, that there's the Federal Trade Commission, and they enforce the privacy policies where these companies say, hey, we're going to record this information and be able to use it for these reasons. And so there's a bit of a consumer consent of the exchanging of information, like we're opting out of a lot of it. The UN Declaration of Human Rights, Article 12, is talking about the rights to privacy, but yet at the same time, those are rights that we somehow are negotiating through this privacy policy when we're using these technologies. And so I see that you're invoking these different human rights frameworks in this paper, as well as other trends that I see, say, like the neuroethics or neuro rights, where they bring up the right to mental privacy as one of the five major rights that were put forth by this neuro rights initiative that is starting to come together. But I still see that there's international law, there's federal law and state law. But yet at the end of the day, there's these economic relationships that we find ourselves in, and these are consumer technologies. And sometimes some of those legal protections, even if they're there, it seems like as consumers, we can get sent through this privacy policy. And I feel like there's a bit of like, we're on this path where this technology, where biometric information used to be for medical applications, is now all of a sudden in these consumer technologies. There's going to be this wealth of data and, you know, it's needed to be able to run some of the immersive technologies. But on the other hand, what happens to that data and how it's used is in this other economic realm, and some of these different business models, as you said, are still developing. But yet I don't know at this point how to put on the guardrails to be able to make sure that this doesn't go down a really dark path.
[00:11:10.087] Brittan Heller: That's why I turned to human rights law as a framework for this, instead of looking at other agreements. Human rights law is inherently based on consensus. The sources of international human rights law include jus cogens norms, so the best practices of states. I think one of the challenges is that the United States has a very different conception of what privacy means than the rest of the world; it's much more underdeveloped in the United States than it is elsewhere. So I looked at the way that these technologies aren't just gaining traction in the United States, but how the world is getting smaller. The pandemic really emphasized that for me. We need to start thinking larger and bringing more stakeholders on board if we're going to make sure that this technology remains a source of joy and education and enlightenment. One of the things that worries me is the concept of consent in these technologies because the type of information that can be gleaned, and that I worry many content creators aren't aware they're sitting on top of, are things like medical conditions. You can tell if somebody has medical conditions that they may not be aware yet that they have, or if they are aware, they may not consent to make that available to a company. And these are things like ADHD or schizophrenia, Parkinson's disease. You can tell that through the pupillometry in many HMDs. And like you said, it used to be a medical application and now your reaction time could be used to infer if you have a physical or a mental illness. It also can be used to infer other things. People are very focused on facial recognition and emotional recognition. I'm more concerned about the inner states that you can determine from these technologies. Like you can tell who somebody is sexually attracted to. And I don't think somebody consents to give away their sexual preferences when they're playing a VR game. It's just something that you really wouldn't think about. It can also reveal whether or not you're telling the truth. When I did the paper, I had to do a series of firsthand interviews because there's not a lot of academic work around this. So one of the earliest developers of this technology told me, why would you want to put a polygraph of six cameras on your face? Really? He was one of the people who invented it. I worry about that because I saw that last week HoloLens announced a contract with the military for military applications of its AR interface. And I thought that's the beginning. That's how this can reasonably start unless we are vigilant about how privacy and human rights translate into the immersive world and not just the kinetic or the tangible universe.
[00:14:13.927] Kent Bye: One of the things that I don't have a clear vision on is, you know, there's discussions right now on like, say, a U.S. federal privacy law to maybe kind of do a reboot. And it's ongoing discussions, and we'll see if they're able to have some sort of federal privacy law. But yet a lot of the stuff that you're talking about is at an international law or human rights framework level. How do you foresee the flow of how these different frameworks would be interfacing with, say, U.S. federal privacy law or state laws? If there was some sort of human rights framework that was able to gain consensus on some sort of international scale, then how does that actually flow back into how that impacts these technology creators?
[00:14:54.679] Brittan Heller: I'm less optimistic than you about an omnibus federal privacy bill. I think at this point, it's something that a lot of people want and not a lot of legislators can agree on. I don't see it coming. I see it manifesting more on a state-based level, starting with the CCPA, so California's native version of Europe's GDPR. On the state level, I think that might end up kind of forcing the issue because there's not uniformity amongst states in how they treat these technologies, how they define key issues. And companies want to be able to sell across the states without worrying, without having to hire someone like me to tell them what they can sell everywhere in the country, but they got to be careful about Illinois. That's not great for any kind of content creator or hardware developer. The way that I see human rights law influencing is there is a convention that governs the ethical behavior of businesses and it is the UN Guiding Principles on Business and Human Rights. My practice at Foley Hoag actually helped UN Special Rapporteur John Ruggie craft these 10 years ago, and the process was so great he joined our practice until he retired afterwards. This talks about the obligations of states, the obligations of companies, and the rights of the consumer, and ties it all together very nicely. So I see that kind of a framework, which other tech companies in the internet space have agreed to, as setting some of the constraints. Most of the major tech companies and telecoms as of two years ago are part of the Global Network Initiative, which does external audits of the freedom of expression and human rights principles of these tech companies every two years. So there is a precedent for it. The way that I see it trickling down to individuals is if the companies who control most of the market start exercising best practices, start talking about this amongst themselves and start, again, maybe heading off the need for legislation. I don't know if you saw the social media hearings, but I don't really trust that a lot of my elected representatives understand how the internet works, not to mention virtual reality or augmented reality or mixed reality. So I kind of want them as far away from this as possible at this moment. So I would rather trust the companies who know how this works and know what sort of things they're developing to say, yeah, we're going to have data localization on our devices. Yeah, we're going to retain this type of information for this period, and then we're going to dispose of it. Yeah, we're going to hash personal identifying information and make sure that these sort of things are thought about before they're a problem.
[00:17:50.281] Kent Bye: Yeah, I think there is certainly, as I talked to folks at Facebook, a growing awareness that there's a lot of these really intractable privacy issues and real questions around to what degree are the public going to accept these different technologies, as well as what are the guardrails that need to be in place. Because in some sense, Facebook can't regulate itself. It can't always draw the line because the technology wants to go in a certain direction. But yet at the same time, there's certain potential human rights violations around mental privacy or agency. There's ways in which it's increasing our expression of identity and expression of our agency. But at the same time, the risks are that it could potentially create these systems that are undermining our agency, undermining our identity, undermining our mental privacy. And I think that there's a lot of ways in which Facebook leans upon trying to make it accessible in the sense that they make it subsidized and potentially have compromises in these other areas of agency, identity, and mental privacy in order to make it so that they are subsidizing it through whatever business model they end up settling on. But there's like these human rights principles, the neuro rights principles that were put forth by this group of individuals that just recently published a publication. But even within those, there's like these different trade-offs that I'm having a hard time seeing how those guardrails get into place and who's going to be there to make sure that there's not transgressions in that. And in the absence of having competent tech policy here in the United States, like I don't know if this kind of self-policing model is going to necessarily lead down a path that I'm going to necessarily feel safe of, say, putting on a brain-controlled interface or having a CTRL-Labs device that's reading my neural input and the firing of individual motor neurons that's being sensor fused together with all these other biometric data to essentially get this profile of me that's able to do this really intimate psychographic profiling. And I know that this is the roadmap for where the technology is going. I just, I'm having a really hard time seeing what those guardrails are going to be and how to ensure that we don't create a situation that's going to just have the lowest common denominator, worst case scenario happening.
[00:19:55.395] Brittan Heller: Yeah. I think that is a very reasonable concern. I share that concern. And I kind of look to what happened with Cambridge Analytica and social media as being, not the worst case scenario with immersive technologies, because I think the worst case scenario is even worse, but providing a bit of a template where if we're not cautious, that's where we end up. For me, the issue that may save us in the end and that we don't have a lot of agreement on at this point is what consent looks like in an immersive context. If you look at the policies of some of the major social VR apps, they've all agreed that, not all, but there is general agreement about the personal safety bubble. And they're looking at that more from a physical angle, you know, the way that your avatar interacts with other people's avatars and your digital identity. But the kind of questions that I've been thinking about are, how are you going to indicate to other people, more actually in the context of AR? Because that's going to be, I think, more subtle than the HMDs of VR at this point. So AR specifically, how are you going to indicate to people that you are using overlays or you're recording them or you're using all of the great features that are being developed now, but do you need someone else's permission to do that? And if so, how do they indicate that they're okay with it? And how do you indicate that you want that permission? That is something that is very, very different in a browser-based context than in an interpersonal interaction like you're going to have with AR. And I know that HoloLens had the green, purple, orange, or something like that, different colored lights that indicate what it's operationalizing. But the fact that I do this for a living and I can't remember the colors is a problem, right? Where there's not a common vocabulary yet around what gestures mean, what hardware is standardized as, and how you indicate to somebody you don't want to be filmed or how you indicate to somebody that you are filming. So that is one thing that might come with time, but it is also something that I think companies, not just for self-regulation, but also for commercial success are going to have to think about. Many people won't think twice about it because you don't really think about giving away your privacy until it's gone and you can't get it back, for most people. But the way that consent becomes demonstrative is going to be the linchpin for, I think, widespread adoption of AR/VR, specifically AR.
[00:22:39.069] Kent Bye: Yeah. One of the things that came up in reading your paper was looking at whether or not this data is being recorded and stored. And I guess I have a couple of thoughts on this. One, I think that there's a big worry that I had for a long time, at least for the types of information that is recorded and stored, and, if that gets leaked out, then people getting ahold of that and being able to determine different aspects of either your identity, or again, it ties back to being able to identify whoever that was and be able to extrapolate additional information. But there's also this trend that I imagine is going to happen, that they're going to potentially move things into, like, edge compute frameworks that are able to do real time processing of, say, what's happening on your wrist. And it's able to fuse that together with all these other signals. And even if nothing is recorded, none of the data is recorded, then they still may be able to have a way of taking in all of this data, making these real-time judgments and inferences of what you are referring to as this biometric psychography, where at the end of the day, like you said in the beginning example, you're looking at a car and your pupil dilates, then there's a piece of metadata that gets generated out of that. Even if the data aren't recorded, they may be able to have this information around what you're personally interested in. And I think that's probably a direction that if everything gets focused on identity, then no one's going to be really paying attention to this real-time processing that's able to generate these real-time inferences based upon all these machine learning sensor fusion processes that are generating all this metadata about us. That may or may not be correct, but it is at the end of the day generating all these higher level inferences that are that psychographic information. And like you said, there's no legal conceptualization of this concept anywhere in the law. And so it feels like a little bit of a gap in terms of, okay, how do we plug this gap?
[00:24:39.739] Brittan Heller: I'm still playing with these ideas, but I've been thinking about the application of opt-in, opt-out, and maybe the default for immersive technologies should be opt-out unless you turn it on. And that's not the way you see most of the internet working. But because the risks, I think, are so much greater, I think about how you're going to log in in the future to programs. If you just put on a set of your hardware, people are going to want it to be automatically logged in, you know? And if it scans your retina to log you in, then it's just for you. However, I don't know about you, but I lose my glasses all the time. All the time. And I may not, if they are several hundred dollars' worth of AR software, but if I lose them, you know, I can reset my password if I lose my phone. I can't reset my retina. So just thinking through the way that human error and the propensity to underestimate risk will bump up against the practical application of these technologies. I've also started to think about concepts of privacy as being cumulative, like you were talking about. And I haven't developed this fully yet, but I feel like immersive technologies make it very clear to me that privacy is not just a you have it or you don't. If you're using the new hardware that goes on your wrist and allows you to gesticulate without using a headset and having the electrical pulses be monitored to control your interface, one instance of that is probably not personally identifying. My question is, if you use that to train a machine learning algorithm, does it cross a threshold? Will your patterns and your algorithm look different than someone else's? And could that be identifying uniquely in the future? So looking at privacy more as an accumulation of small identifying bits, rather than you know them or you don't, seems to me to be a more practical level of engagement with the concept of privacy when you need bodily measurements to make your interface work like you do in VR and AR.
[00:26:57.928] Kent Bye: Yeah, I'm not sure if you're familiar with Helen Nissenbaum's contextual integrity theories of privacy, but a big point that she makes is how context-dependent privacy is. Like, say, if you go to a medical doctor and you want to get medical advice, then you often will consent to giving over medical information that you may not be wanting to, say, hand over to Facebook or just a third party developer, and who knows what they're going to be doing with that medical information. I guess the concept of this idea of context, I think, is the thing that gets back to being able to consent to the exchanging of information based upon whatever that context is. But part of this for me is that a lot of the context ends up being this economic context, the privacy policy. And then basically like a blanket, you sign up for whatever they want to do independent of that context and how to really isolate the contextually relevant information. And for you to have a little bit more controls without having this other side of permission fatigue, meaning that you have to click different check boxes in order to use anything at all. So I think there's this consent, but also trying to figure out how the contextual dimensions kind of play into all these different dimensions as well.
[00:28:07.440] Brittan Heller: Yeah, I think that's a really good point. When I think about digital identity, I like to think about identity as being able to be parceled. And so I guess that's kind of a similar concept to Helen Nissenbaum's, where in any interaction, you go to a restaurant after COVID's over. You pay with a credit card. The person may not know your name. If they ID you, they'll know your name and your age and your address. If you pay with cash, they won't have your financial information. If you pay with your credit card, they'll have your name and your financial information, but they may not have your age. So when we interact with people, we don't put forward every aspect of our identity. So for me, the cumulative nature of creating a psychographic profile of somebody is highly troubling because it doesn't mirror how I practically implement consent in my day-to-day interactions with other people.
[00:29:02.907] Kent Bye: One of the other things I think was striking reading through some of the citations in your paper was seeing how in the artificial intelligence community, it seems like that they've started to take this approach of integrating different human rights frameworks and ethical frameworks for artificial intelligence. And so maybe you talk about that in terms of how you may be taking some inspiration for some of that work of how human rights and the ethics around AI is starting to inform your thinking on some of these issues.
[00:29:28.133] Brittan Heller: Yeah. I run the global AI practice group at Foley Hoag. So I think about this a lot. And I also work with clients to develop ethical AI codes for their individual businesses and get them really thinking about how they're using AI now and how their product development timeline and roadmap mean they have to interface with AI differently. There are lots of codes right now about what ethical AI is and the role of states. I think as of last year, there were 50 countries around the world that each had an ethical AI plan, a national plan. So it's one of the things that I think is gaining more traction on the international front than on the domestic front, at least when you're considering the United States. The leading driver of ethical AI governance is the European Union. So they've put out some really strong guidance, but the challenge with all of this guidance is it's all very, very high level, very, very conceptual and not immediately applicable to individual businesses. It's more designed for governments. So I see that as being the next step in evolving these codes into something that manifests as something useful. One of the things that I also think about is, for some of these technologies, like facial recognition, it doesn't really matter whether or not they actually work as advertised. If the public and decision makers believe that it works as advertised, the fact that you're gonna get false positives or false negatives is even more insidious. And you see that with AI at this point, when there were people who claimed that they could identify sexual orientation based on people's facial structure, or the infamous case where there was a company that developed an algorithm that said it could tell you whether or not somebody was likely to commit crimes again after they'd been released from prison. And so certain judges started using this as a tool in sentencing recommendations. And it turned out that it was actually worse than randomly picking by chance who would be recidivists and who not. And it was biased against the African-American prisoners. And I think it said that they were over half as likely to be a recidivist as compared to 26% of their Caucasian counterparts. So it doesn't matter to me if these actually work as advertised, if judges and decision makers and the public at large think that it holds dispositive weight. I'm still trying to figure out how that exactly translates into the immersive context. But for that, if people think that it does one thing, but it actually does another, or think that it is capable of one thing and it actually is or is not, I think that is going to help determine the mass adoption and the role that we see these technologies having in education systems, in medicine, in industry, sports, and entertainment.
[00:32:38.328] Kent Bye: Yeah, I just watched the Coded Bias premiere on Netflix. I had seen it at Sundance 2020, but I watched it again yesterday when it premiered on Netflix. And one of the things that was brought up was some of these cases where judgments are made upon people's lives based upon these algorithms. And that could violate the due process constitutional protections if there's no transparency or explainability of some of these AI algorithms that get deployed. And it's like a black box that is making these choices that we don't have a lot of transparency of what's happening, or it could be reinforcing these larger systemic biases that are ingrained within the algorithm. So I think there's a level there, I think, especially within this neuro rights framework, where one of the rights was to be free from algorithmic bias, or at least some transparency around it. And I think the Algorithmic Justice League is a whole other initiative that is trying to get at, like, how do we have some sort of oversight with some of these different algorithms in our lives. So that definitely seems like an issue that is not only coming up in real life contexts, but also in these virtual worlds, if we start to have these different algorithms that are making these judgments and decisions about what we can and cannot do.
[00:33:44.332] Brittan Heller: Yeah, I think what I would challenge the immersive universe to think about is that we can't eliminate bias. There will be bias in every data set. The thing that we can do is become aware of the bias and mitigate the impacts of it. I think if we try to talk about eliminating bias from our immersive platforms, we're going to set ourselves up to fail. But if we acknowledge that bias has this deleterious effect on human psyches and human relationships, then we'll be more willing to be proactive about counteracting the effects and being on guard for it.
[00:34:25.377] Kent Bye: I was wondering if you could comment a bit on this other aspect of harassment that you have written about a little bit here in this paper from a human rights perspective, because I know we're talking a lot here about biometric psychography. There's a big privacy dimension, but then there's harassment and content moderation as a whole other area that I think is going to need to be looked at as we move forward. What are the legal frameworks that we need at the federal or state level, or is this another area where having a human rights approach is going to maybe provide some insight that gives some overall guidance to help us navigate some of these different issues that have already started to come up with harassment and trolling and abuse within these virtual environments?
[00:35:07.182] Brittan Heller: Yeah, if you look to some of the international frameworks around this, I do think they are actually less useful than the terms of service of individual platforms. That's part of the challenge of international human rights law. The best question I ever got when I was teaching a course on it at the University of Maryland Law School, someone just raised their hand, and it was the middle of the summer and it was hot, and they're just like, Why does this matter? Why does this impact my real life? Because it does seem so high level and abstract. And for me, it's the glue that holds together the relationship between people and their government and people in their society. So that's why it actually matters. If you're in a U.S. court and you are citing international human rights law, you're probably not going to win your case, to be honest, because there is a hierarchy of laws and the national laws, state and local laws definitely take precedence over the international laws, which is why I said they're based on consensus. There's not really an enforcement body for them, but they provide people with this lodestar or a type of compass when they're trying to think about how they want to be governed and how they want society to treat them and other people like them and who are not like them. So this is how it impacts these virtual platforms. Most of the people who are harassed online, and based on Jessica Outlaw's great research, many of the people in immersive environments, are targeted for what I like to call immutable characteristics. The same sort of things that would be a civil rights violation if it happened on the street. So people being targeted or discriminated against for their gender, their race, their ethnicity, their sexual identity, their sexual orientation, their age, their pregnancy status, things like that. I went a little outside of US law, but not really. There's kind of this core conception in international human rights law that if you are targeted for the things about yourself that you cannot change, that is inappropriate. If somebody treats you differently than somebody who is equally situated based on those characteristics, that is a violation. And there's many, many different treaties and regional codes and all that. But the way that this comes into play is the rules that you see developed in your social VR environment. When I look at these, I don't see whether or not they actually comport with the Universal Declaration of Human Rights. That would be a little silly. But I think about those principles, about equality and equity and justice and freedom of expression and freedom of assembly and freedom of association. I look to see if those concepts are embodied by these tangible goals.
[00:38:09.949] Kent Bye: Well, you've put out this paper that's defining the biometric psychography, and now you're also working at this law firm. What's next when it comes to either what you're working on or how this conversation continues to unfold in the right places with the right people?
[00:38:25.230] Brittan Heller: The next thing I'm working on is a paper that's been accepted by the Privacy Law Scholars Conference. And it is actually the crystallization and finally having to write down all the field work that I did in Uganda before the start of the pandemic. I went to this local region, this region right next to the DRC called Kasese, and helped them develop a plan to engage with hate speech and disinformation in preparation for their next elections. They had a lot of civil unrest based off of social media in 2016 and it actually resulted in fatalities. So I went there to work with the local people and basically figured out that post-colonial hate speech looks different and operates differently than the canonical theories of hate speech tell us it should work. I think this is because the other theories of hate speech presume that there is one context. And in a post-colonial environment, there are two at minimum, sometimes more, competing contexts going on at the same time. So the hate speech is much more layered. It is a much bigger challenge for tech companies to identify and deal with, especially because they, for the most part, are not paying attention to Uganda or to this local region in Uganda. So the paper details out the research that I did, how I think it implicates the theory and should change it, and then recommendations for tech companies about how to deal with the global South and other post-colonial environments to accommodate for this type of challenge.
[00:40:15.553] Kent Bye: Great. And finally, what do you think the ultimate potential of all these immersive technologies and virtual augmented reality, what the ultimate potential of them might be and what they might be able to enable?
[00:40:30.079] Brittan Heller: I think they're magic. They're the closest thing that comes to magic. And so my hope is that it can take people to new planes in art and expression and connection and education, that we use it to amplify the better parts of our nature. That's truly what I hope for all of this technology.
[00:41:02.528] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:41:08.001] Brittan Heller: Thank you for having me here today. And thank you for letting me be the weird human rights lawyer who gets to work with you all.
[00:41:16.064] Kent Bye: Awesome. Well, Brittan, I just wanted to thank you for joining me here today. The paper that you wrote up here, I think it's going to help at least start the conversation. And I hope to see more discussions out in the legal community, and as we have more people from the industry look at these different issues and try to figure out what is going to be the tech policy that's going to help rein this in, whatever guardrails we need to have in place. The other thing that I'd just say is that there's other conversations that are starting to happen that I'm happy to see from the neurorights folks who are looking at neurotechnologies and the brain-computer interfaces and these things like CTRL-Labs, and that Facebook's working with these folks. Together, as we have all these different lenses, whether it's neurorights, neuroethics, or artificial intelligence, or immersive technologies, and all the biometric data, then looking through all these different lenses, we'll maybe get some new insights into how to set up the privacy frameworks, and maybe I'll hold out hope that we'll have some sort of revolution in terms of having a federal privacy law. We'll see. Like you said, the outlook's not so great, but I think it's papers like this that are going to help lay down the foundation to be able to help facilitate that discussion when it does come. So thanks for doing all that pioneering work. And thanks for joining me here today on the podcast to be able to unpack it a little bit more.
[00:42:28.029] Brittan Heller: Thank you very much.
[00:42:30.157] Kent Bye: So, that was Brittan Heller. She's a human rights lawyer, and her most recent paper that we're talking about here is called Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law. So, I have a number of different takeaways from this interview. First of all, I wanted to dig into the actual definitions of biometrics as defined in the Illinois biometric law, because I think it's important to realize how most of the way the law talks about biometrics is actually very specifically tied to identity. In her paper she quotes from the Illinois state law saying that under the statute they're defining these biometric identifiers as either a retina or iris scan, a fingerprint, voice print, or a scan of hand or face geometry. There are some exclusions to this biometric information, including writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color. So, by focusing on the retina, iris scan, fingerprint, or voice print, or your hand and face geometry, that, again, is very much tied to your identity. And, like I was saying at the top, what the FTC is trying to protect in terms of user privacy and personally identifiable information all goes back to these things that are very uniquely PII. Now, I suspect that with all of this biometric data, given enough machine learning algorithms, you'll actually find that there's likely going to be very uniquely identifiable signatures on a lot of this biometric information. So even though it's not as clear as, say, face geometry to do facial recognition, there could be very unique signatures when you tie all these different biometric markers together. That, as Thomas Reardon was talking about in my previous episode, looking at just even how your muscles are firing and being able to detect these different signals through the surface level electromyography, you can actually identify the unique characteristics of how your muscles are wired up to fire in these motor units that are all connected together. But aside from that, I think it's important to recognize that a lot of the research agenda and, say, privacy researchers at this point, they're really focused on the PII and how you could perhaps extrapolate some of this uniquely identifiable information from any combination of, say, just looking at someone in a six-DoF headset, looking at a 360 video and interacting and touching with the button, just to see how those different types of interactions could be boiled down into trying to figure out somebody's identity based upon their unique bone lengths and their gait analysis and all these other different aspects. So, there's this whole area of the biometric psychography, which Brittan Heller is defining as the combination of your physiological characteristics of you responding to a stimuli with that stimuli itself. And so, you're looking at both the virtual environment, you know what you're looking at, and then you're looking at all of your biological reactions. And so, that's everything from your likes, your dislikes, and then your biological responses, your physiological responses to that stimuli. So from that, that's where we're starting to get into this. Okay.
Even if they're not recording all the raw data, if you're able to do like real time processing and extrapolate all this metadata about these inferences that are made about what I like and don't like, this may be some of the area where we start to get into like, okay, how far do we want this to go? On one hand, how accurate are these real time inferences? And if that gets put onto this public record that in some sense is tied to my identity, then with the third-party doctrine, that means the government could start to get access to this. Whatever they start to label with my behaviors, that's something that presumably the government could start to also get a hold of, just because the Fourth Amendment has this third-party doctrine. With the third-party doctrine, that means that whatever these companies are tracking about me, that can also get into the hands of the government without requesting a warrant. I think the issue that came up again and again in the conversation is, OK, what do you do about this? Because there's a number of different layers here. At the highest level, you'd say it's just the cultural norms about what the public is willing to consent to. And then you have the laws. You have international laws, you have the federal laws, you have state laws, so many different layers of the laws. The highest level would be the international law or the human rights law, where it's not really binding on individual companies. They're more like ethical guidelines and principles. That's where you get things like the European Union ethical guidelines, in terms of, here are some guidelines for artificial intelligence that you should generally follow in order to preserve human rights. There's a set of human rights ethical guidelines called the U.N. Guiding Principles on Business and Human Rights. That's something that Brittan Heller cited as a set of guidelines that all these different companies, in order to do international business, have to generally follow. The thing that I think is confusing to me is, you look at something like the U.N. Declaration of Human Rights and you have an Article 12 that says, "No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to protection of the law against such interference or attacks." So generally saying, okay, there are these rights to privacy that you have, but yet at the same time, we enter into these economic contracts with these companies and we basically waive our rights. We say, OK, we may have these rights to privacy, but we're consenting to you having access to this information because we're agreeing to, under this context, entering this exchange where it's OK for you to have access to all this information and for the users to essentially mortgage their privacy in exchange for access to these free services. So I think that's the existential dilemma that I see here, is that even if you were to come up with this international human rights framework, you still have this hierarchy that Brittan talks about, where at the highest, most abstracted level, you have the international law and these human rights ethical agreements, the Human Rights Declaration from the United Nations, and these different frameworks that end up being high-level and somewhat vague and not really prescriptive in terms of how things actually play out.
That's where you start to look at national and state laws, here in the United States, as well as regulations like GDPR in Europe, where there are now privacy laws that are binding in terms of what these companies have to do, with certain obligations they have to follow. But then there's the relationship between you as an individual and these companies, and that's where you sign a terms of service and consent to the privacy policy. There's notice and consent saying, hey, we have access to all this information. And as long as they tell you what they're going to be recording, the FTC is only looking at whether or not they're being transparent about what they're recording. It has nothing to do with saying, OK, this is way too much intimate information for anyone to really be consenting to give over. And I think that's where you start to get into these different aspects of ethics. For example, it's not legal for you to sell yourself into slavery. Even if you wanted to, the people on the other side of that contract would be prevented from actually doing it. So there are limits to how far that consent can go, because there are layers of human rights where the government says, we want to protect people and make certain things illegal. I think that's where it starts to get into: where does the government step in and say, this is OK for you to do and this is not OK for you to do? And that's where, again, I go back to Helen Nissenbaum's contextual integrity theory of privacy. It's not that you should have a blanket statement saying no biometric information should ever be shared in any immersive technology, in any context. There are certain situations, say if you're working with a doctor and you want to track your own biometrics for self-improvement or consciousness hacking or whatever that ends up being, where there may be contexts in which you agree to that. But there's a lack of sophistication when it comes to really laying out what those different contexts are and where the nuances lie. Even the human rights approaches have this kind of generalized phrasing. Another human rights framework came up very recently: a neurorights framework put forth by one of Thomas Reardon's advisors at Columbia University, Rafael Yuste, along with Jared Genser and Stephanie Herrmann. They published a piece called It's Time for Neuro-Rights in Horizons: Journal of International Relations and Sustainable Development. It's again a human rights framework, but focused on neurorights and these different non-invasive neurotechnologies, and it lays out five principles: the right to identity, the right to agency, the right to mental privacy, the right to fair access to mental augmentation, and the right to protection from algorithmic bias. Again, there's this right to mental privacy, which is the ability to keep thoughts protected against disclosure. The concept is that you should be able to choose whether to disclose what you think and what you believe. We shouldn't have technology that essentially extrapolates all this information out of us against our will, regardless of where we're sharing it and in what context.
I think that's why Brittan keeps going back to this concept of consent: how are we consenting, and when is it okay for us to share this really intimate information? How do we navigate that in terms of, okay, now we've entered a context where it's okay for you to have access to this information, or where you're expressing some sort of implicit trust? Because some of these aspects, like the right to mental privacy, are, again, high-level rights: of course we should not have technologies that are invasive into all of our lives, gathering up all this information and then sharing it with government authorities. But then we start to ask: well, what if you're going to a doctor? Or what if you want to enter into a program where you want them to have access to that information because it's going to give you some real benefit? I think that's generally the approach a company like Facebook would take: they have this exchange where they're giving you something, and in return you're giving up some of what may be fundamental rights. How do you navigate that, and where's the line between what's okay and what's not okay? What I see, at least at this point, is everybody throwing up their hands a bit and saying, we don't know where those boundaries are. We know this could go down a really dark path, and we know there are so many amazing benefits, so let's focus on the benefits. But I think part of what needs to happen as a community is for us to come together and say, okay, here's where this could go really wrong, and here's how, at this economic layer, maybe we should be preventing, from a federal privacy law standpoint, surveillance capitalism built on all this intimate biometric psychographic data. Maybe we should not do that, because the risks of harm could be so great. Especially because, even if there's one company you trust, you have to ask whether you'd trust anybody to do that. If that data gets leaked out, how could it be used to exploit or bring harm to people? There are security vulnerabilities all the time. There's a fundamental incompleteness to the degree to which anything can be completely locked down and completely secure. There's no 100% guarantee that anything is ever fully secure, so there's always some risk that this information gets out into the world. I think that's the real challenge. Even if you trust a major company to have your best interests at heart, what happens if the data gets into the wrong hands? What kind of harm could be done? You have to look at it through that lens as well. So: are you only doing real-time processing? Are you not recording anything? Is it safer to do edge compute? What does consent around this look like? Brittan talks about clustering and limitations on how this data gets monetized, and about which areas should be off limits. And should some of these things just be opted out of by default, so that if you want the extra benefits that come from giving extra information, you have to explicitly opt in? Then maybe you see a bit more of the benefit of that kind of information exchange, and you're willing to take on some of the risks that come with turning on different features.
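As a thought experiment about what "only doing real-time processing on the edge" could look like, here's a small hedged sketch. The class name, the two input signals, the weighting, and the thresholds are all hypothetical choices of mine, not a description of any actual headset pipeline. The point is simply that raw samples can stay in a short-lived buffer on the device while only a coarse, low-resolution label is ever exposed.

```python
# Hedged sketch of an "edge-first" pipeline: raw physiological samples live
# only in a short rolling window on-device, and only a coarse, aggregated
# inference ever leaves the estimator. Weights, thresholds, and labels are
# hypothetical illustrations, not any vendor's actual behavior.
from collections import deque
from statistics import mean

class EdgeArousalEstimator:
    """Keeps a short rolling window of raw samples in memory only."""

    def __init__(self, window_size: int = 30):
        self.window = deque(maxlen=window_size)  # raw data is never persisted

    def add_sample(self, heart_rate: float, skin_conductance: float) -> None:
        # Fuse two signals into one scalar; the weights are illustrative only.
        self.window.append(0.7 * heart_rate / 100.0 + 0.3 * skin_conductance)

    def coarse_inference(self) -> str:
        """Return only a low-resolution label, never the underlying signal."""
        if not self.window:
            return "unknown"
        level = mean(self.window)
        if level < 0.8:
            return "calm"
        if level < 1.2:
            return "engaged"
        return "elevated"

# Usage: the application sees only the coarse label; raw samples age out.
estimator = EdgeArousalEstimator()
for hr, gsr in [(72, 0.4), (95, 0.9), (110, 1.3)]:
    estimator.add_sample(hr, gsr)
print(estimator.coarse_inference())
```

Even then, the inference itself ("calm", "engaged", "elevated") is still biometric psychography when it's tied to what you were looking at, so discarding the raw signal doesn't by itself resolve the consent question.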
I guess I'm still left with: now that we know all these different risks, how do we really put the frameworks in place to make sure that consumers are protected? Brittan doesn't put a lot of faith in a federal privacy law, mostly because, watching some of these politicians interrogate these CEOs, they reduce all this complexity down and they really don't have a good sense of the Internet as it exists today, let alone the future of these immersive technologies. So when it comes to trusting them to do the right thing, I think Brittan is right to be really skeptical, as I am as well. But at the same time, I know that human rights and international law don't feed down into making a tangible difference in the behaviors that Facebook is exhibiting. It will, I think, have to come down to some sort of U.S. federal privacy law, or legislation that starts to dictate a kind of neuroethics or neurorights and the boundaries and limits on a lot of these non-invasive neurotechnologies. Maybe that starts with the upcoming symposium being hosted at Columbia University on May 26th, which Reardon had mentioned; I came across the actual posting for it and included it in the write-up for the previous episode. There's going to be a larger discussion with these different neuroscientists, starting to have ethical discussions around neurorights and neuroethics and how to gather together all these different perspectives. And I think part of the challenge is that Brittan is a human rights lawyer, not a privacy expert per se, and so she's needing to expand out and look into some of these different aspects of privacy law. But there are so many nuances and dimensions of privacy law, on top of the human rights perspective, on top of the neurorights and neuroscience perspective, on top of all the nuances of the technology and what it can do. It's this massive interdisciplinary fusion that has to bring all these different things together and say, OK, here's what's possible, and here's how we should put on some guardrails to make sure that we're doing this in an ethical way, in a way that protects people, and in a way that makes consumers feel like they're safe, that they actually are safe, and that they have some level of protection here. At this point, none of these protections exist. Is it going to take someone showing what can go horribly wrong, and it actually happening, before any action is taken? That's typically what has happened with these different technologies. So we'll see. For me, I'm very interested in tracking these other strands of the neurorights, and this approach of looking at human rights as a lens to form the underlying ethical principles and guidelines. From there, maybe we can start to bridge the gap between those higher-level human rights frameworks and ethical principles down into what some of the law should be.
There's also the IEEE Global Initiative on the Ethics of Extended Reality, where I'm in conversation with other researchers and folks from across the industry, looking at some of these intractable issues and trying to figure out how we start to define some of these ethical guidelines. Like Brittan said, maybe there's a set of design practices that is consensus-based, where people are consenting to it. There's no enforcement and no real checks and balances, other than having people say they're committed to it, but maybe that's one layer you can operate on. That at least means having a set of ethical guidelines and principles for the wider industry: for the people who are designing these experiences, and hopefully also for the people who are designing the underlying technological architectures and the code, because they're at the base level of potentially creating privacy-preserving architectures, whether that's things like real-time processing, edge compute, or not recording a lot of this information. But then there's the whole threat of sensor fusion, how you tie all this data together and what happens to that data, and that's the other layer of what Brittan is trying to define here as biometric psychography. It's a new concept, and I think it's starting to have implications: okay, now that we've defined it in this law journal, now what? It has to actually get translated into law, or into other ways in which this concept, as a meme, gets spread out and implemented into different legal frameworks, ethical guidelines, and design principles. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.