The Polys Awards brought together all four winners of the Ombudsperson of the Year Award for a panel discussion in Engage XR on Human Rights Day, December 10, 2023, to talk about the tech policy and ethical implications of XR technologies. The panel included myself (2020 winner) along with Avi Bar-Zeev (2021 winner), Brittan Heller (2022 winner), and Micaela Mantegna (2023 winner).
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash Voices of VR. So on today's episode, I'm gonna be diving into the Polys WebXR Awards panel discussion that was hosted back on December 10th, 2023. It was called Mission Responsible, and it was featuring all of the winners of the Ombudsperson of the Year from the years of 2020, 2021, 2022, and 2023. So the inaugural WebXR Awards happened on February 20th, 2021, and I was actually awarded the very first Ombudsperson of the Year. And then I selected Avi Bar-Zeev for all the work that he had been doing to raise awareness of what was happening with eye tracking and all the threats, and he has gone on to start the XR Guild. And then he chose Brittan Heller, who's been doing a lot of really amazing work with researching the different laws of biometric privacy and hosting different gatherings, like one at the Stanford University Cyber Policy Center on existing law and extended reality. And then Brittan chose Micaela Mantegna, who is based in Argentina. Her username is AboGamer, which translates to video game lawyer. And so she's also looking at a lot of the intellectual property law, artificial intelligence, and other aspects of the metaverse, where she's done different research and given talks on the different ethical implications of these immersive technologies. And so Ben Irwin was the founder of the WebXR Awards, and he wanted to bring together all of the winners of the Ombudsperson of the Year from the previous Polys Awards. Micaela is the latest winner of this award. And so we had this discussion that happened in the context of Engage XR. There were actually some technical glitches that were happening. Brittan Heller was there without being able to see anything in the world, and Micaela was dropping in and out of the conversation, and then at the end the primary recording of this conversation crashed, and so they had to go to some backup recordings, but the last like 90 seconds of her statement was cut off. And so what I end up doing is adding in her awards acceptance speech that just happened this Sunday, March 3rd, 2024, where she was able to accept her award and give her acceptance speech, so I'm going to include that at the end since her original statement got cut off and it's difficult to understand and transcribe what was being said there. So that's what we're covering on today's episode of the Voices of VR podcast. So this panel discussion with myself, Avi, Brittan, and Micaela happened on Sunday, December 10th, 2023. So with that, let's go ahead and dive right in.
[00:02:41.615] Julie Smithson: Now here to give us some context is our ambassador for today's event, Kiira Benzing.
[00:02:47.898] Kiira Benzing: Thank you so much. So important to be here and so glad that we are having this conversation. Our ombudspersons are our conscience, our guardians and our wise people. Now that we have four ombudspersons, we're bringing them together at this pivotal time in the evolution of our ecosystems. The dangers new technologies present are well established, but we're not here to agonize over a dystopian future that we don't want. We're here to talk about the positive outcome that we do want. This tech obviously is not going anywhere. And it unlocks so many wonderful opportunities for humanity. The mission that we choose to accept is to design a future where human beings have agency and sovereignty over our own data. Now more than ever, we must band together as a community for the sake of our personal blueprint. It is together that we will be heard. But what is it that we all want to say? This is why we're convening our ombudspersons here to talk about what they think the big tech decision makers need to hear. And with that, I hand it back to Julie to invite our ombudspersons to the stage.
[00:04:06.573] Julie Smithson: We are so proud to invite some of the leading minds in tech ethics for this timely conversation as new policy initiatives in the U.S. and in the EU are going to impact our personal data and how it will be handled. This event is an unmoderated organic conversation with no predefined script. So all we ask is that our ombudspersons give each other equal time and use their time to talk about what they see as potential solutions to these well-established problems. So let's get started. Welcome Kent Bye, 2020 Ombudsperson of the Year. Through his years of exploring the ultimate potential of VR through the Voices of VR podcast, Kent has always embedded a social conscience, and in doing so, was the inspiration for the creation of this award. Then Kent chooses Avi Bar-Zeev for his work, which includes the founding of the XR Guild to keep ethics in the forefront of the public discourse. And Avi chooses Brittan Heller for her accomplishments as a lawyer and thought leader in bringing these issues to the halls of power in government, academia, and international law enforcement. And now we have our fourth ombudsperson. As Brittan just announced, Micaela Mantegna is our 2023 Ombudsperson of the Year for her leadership and advocacy. Please welcome the AboGamer, Micaela. And Kent, please kick off this very important conversation today. Thanks, everyone.
[00:05:43.501] Kent Bye: Sure. Thanks. Thanks so much, Julie. Well, thanks, everyone, for gathering here today and looking forward to having this conversation. I feel like these issues are something that I've been covering on the Voices of VR podcast since the very beginning of 2014. Just talking to people in the industry, you start to hear about both the exalted potentials, but also the potential perils of how this technology could lead us into more dystopic aspects rather than the more utopic aspects. So I think with any new technology, it's like two sides of a coin where there's great promises and great perils. And so I'm glad that we're here today to explore some of those perils. So just a brief overview and then I'll kick it off to each person; I'd love to hear each person's opening statement. So for me, I started with the Voices of VR, this practice of oral history, listening to people. It's through that listening that issues of privacy started to come up, but also other aspects of trolling and harassment. And from 2016 to 2019, I had started to map out lots of different types of ethical and moral dilemmas, from escapism and addiction to what's going to happen with our privacy, how it's going to be used for harassment or abuse, and basically all the different types of harms and trying to categorize those harms. So I gave a speech on the XR ethics manifesto and started to map out the different contextual domains of where things could initially go wrong with the ethical and moral dilemmas. That led to the IEEE Global Initiative on the Ethics of Extended Reality, which was a two-year process of taking that initial mapping and then diving deep into each of these different contextual domains, and to write eight different white papers, and I did a whole 14-hour series digging into the broad range of the ethical and moral dilemmas. But Brittan actually was a part of organizing a whole gathering called Existing Law and Extended Reality. And it was really trying to look at where do policymakers really need to step in and take action? Because of all the different potentials in XR, a lot of this stuff could be either handled by the technological design for how you're implementing the systems or the cultural practices, like what are the codes of conduct that you need to have to create safe online spaces? And then there's going to be other market dynamics that are going to be driving what's going to happen, but there's going to be some aspects where you're going to absolutely need new laws to be able to help define and protect aspects of our human rights. And so at that day-long symposium, the thing that it kept coming back to was privacy. That's one area where we don't have a lot of the same protections. And it's a lot of the work that Brittan has done in terms of looking at how the existing laws that are out there are not actually covering a lot of the new types of physiological and biometric data that's going to be radiated from our bodies. It's like this new class of physiological data that doesn't always tie back to identity. But even if it doesn't tie back to identity, it can still give lots of information about what we like, what we don't like, our preferences, what Brittan has termed this biometric psychography. And so within the last couple of weeks, I finished up my paper from that symposium where I wrote like a whole 27-page article
diving into both the contextually aware AI, which is a whole other aspect of privacy, where Meta wants to come up with what's essentially like an omnipresent AI system that's recording everything that you ever say or do from your first-person perspective. It's trying to essentially give contextually relevant UI, but it's also trying to predict your next actions. And, you know, basically it's an omnipresent AI surveillance system that they're proposing. And so I was like, well, this sounds like a really bad idea: contextually aware AI, just AI listening to everything that we say or do from our egocentric perspective, and it's also recording everything that everyone else is doing. So how could you push back from that? So the paper I wrote was trying to deconstruct why contextually aware AI violates different principles of what Helen Nissenbaum calls contextual integrity, these different contextual domains where there's appropriate flows of information, where you would tell your doctor some information, but you wouldn't tell your banker other information. So how do you ensure that there's not information leakage, but also that you're not dissolving our concepts of what privacy even is if you just have these third parties that are recording everything that you ever say or do? The other aspect of that, though, is all the different information that's coming from our bodies, the physiological data, the biometric data, the emotional data, the data about our internal thoughts and our processes, and also our actions or behaviors. All that is classified into these inferences that could happen from XR, and that's leading to two main human rights approaches. One is neurorights, that's trying to protect your mental privacy, but then also your right to identity, but also your right to intentional action. And then there's a whole other separate human rights framework from Nita Farahany, who's trying to establish this new human right of cognitive liberty, meaning that there's your mental privacy, but also your freedom of thought, but also your intentional actions. So both these neurorights and cognitive liberty are saying these technologies are going to be getting into what's happening inside of our minds and in our bodies, trying to map out our thoughts, and then eventually potentially subtly nudge our behaviors. And it's that nudging of the behaviors, this violation of free will, that is what Nita Farahany is defining in her book, The Battle for Your Brain, but also broadly, how do you establish a human right for these technologies that is able to preserve our cognitive liberty and our neurorights? So that, I think, for me, is the biggest frontier for where policymakers will need to step up and start to more firmly define some of these things and figure out how these human rights are defined at different levels of international law, but also how that feeds into our policymaking and what kind of protections we have. Because if we don't do that, then we're walking into both neurotech and XR that has access to all this information that is leading to real dystopic potential futures. So that's my opening thoughts. And I'll sort of pass it along to both Avi and Brittan and then Micaela to dive into all the different initial thoughts and also the specific work that you're doing.
[00:11:46.426] Avi Bar-Zeev: OK, thank you. That was great. And you really inspired me to get serious about this. You early on highlighted a lot of these issues. And I think I'm probably like a lot of people who come to this through computer science, engineering, even design. If you studied as a doctor, you learned about ethics. If you studied as a lawyer, you learned about ethics. In school, right, there's classes on this. And I didn't have any classes on ethics as a computer science major. There was nothing. And you see the importance of it now. You see all these people who are, especially in the AI side of spatial computing and other AI beyond that, are like, what problem? You know, what's the harm with what we're doing? We're just scraping the whole internet. What's wrong with that? There's a lot of harms that can come. And over the course of 30 years, I started to see them. I didn't see them initially. I didn't have any training in this. Initially I had a lot of the same ideas people are talking about today with the metaverse and everybody getting together and how great is that going to be for having a 3D internet. I've seen 30 years of the problems of griefing, harassment early on. That's been going on forever and still hasn't been fixed. It's kind of inexcusable that we can go 20, 30 years with the technology and still not have viable solutions for how to engage in a pro-social way. And then when it comes to the tech itself, I've seen a lot of the harms that can come from just wearing the wrong device, having a device that doesn't fit your head properly because of your age or sometimes even your sex. It might not fit. Certain devices may not work. And I'm one of those people who's very susceptible to nausea. And I can see why anybody would have potentially problems using this technology if it's not done well, if it's not done ergonomically. And I think what we're seeing now is that it can be done well. There's no excuse for doing it poorly because we're starting to see headsets that are really usable for more than 30 or 60 minutes by most people, and that's super important. So there are positives coming out of that too. And I've been learning a lot more about the philosophy and about the regulation needed there. For me, the simplest regulation, if you could write something like this, would be: only build technology that helps people. Don't release features that are exploitive, that benefit some unseen third party who's trying to make money off of us. If you even have the most invasive technology, say, some AI that watches everything we do, like Kent was talking about, watches everything we do, learns everything about us, but only uses it for our benefit and with our consent, I don't have a huge problem with that. If people consent to use it, it could be a great boost to how we deal with each other and how we deal with the world. But when it starts working against us, when it starts working for somebody else's bottom line, and we don't even know what they're doing behind the scenes, then there's a big problem. So naively, you know, if I were writing the legislation, I'd focus on that and make sure it's actually helping people, not hurting, in the same way that the Hippocratic Oath used to start by saying, first, do no harm; like, they prove that it's actually helping someone and not hurting them. And some companies seem to take this seriously, and some don't. So I think we have a lot of education that we need to raise.
And I firmly believe that almost everybody in the field, if they knew the right thing to do, would do it. There's very few people who are like, ah, screw you. I'm going to do what I want anyway. They want to please people. They want to build products that everybody loves. I mean, that's pretty universal. And they just don't necessarily understand what the harms, what the bad decisions are along the way. So I think we do need that education, going all the way back to starting at school for a lot of people, but now for people who are already in the field, we need to go back and educate them based on what have we learned? What are the best practices? Where do things go wrong? And, you know, my goal is that some PM sitting in a large company who has to make a decision, let's say about, oh, do we need special accounts for children, because maybe we don't want the children seeing adult content, would say, yeah, hell yeah, we do, we need special accounts, or unless there's a better solution, but we need to make sure that kids are protected, and let's figure out how to do this. And when the boss pushes back and says, oh, we don't have time for that, we're on a schedule, we have to ship, that PM has enough ammunition to come in and say, here, we all agree, this is important, and it's actually going to cost less to do it right, because of all the things that could go wrong when you mess it up, including at the tail end, lawsuits and huge fines from the governments. So it's generally better to do it right up front. It's generally better to take what we've already known and not wait till we find the problems that we already know exist, and then just pretend we're discovering it all over again. There's no excuse for that. We know what the problems are. There's new problems we'll discover, sure; nobody knows what those are. Like I said, griefing's been going on for 30 years. Like, come on. That should be solvable, but maybe not in the way that we're conceiving of things. Maybe we have to re-approach the way that we're putting people together into these arenas in order to solve that. Because I firmly believe that you can't just add technology to solve bad behavior. Sometimes you actually have to choose the people carefully and set the rules carefully and have consequences for those things that go beyond in-world bubbles or things like that. You can add those kinds of hacks; they don't necessarily solve things. So, to wrap it up. You know, we're here to try to figure out what the right things are, but make sure everybody knows what they are. Nobody has an excuse anymore for an, I didn't know what would happen, or it was the first time it ever happened. It's been 30 years of this, a lot of repetition, and I see the same mistakes happening over and over again. Now's the time to fix them, to work with the policymakers. And so I'm so glad that we have Brittan and Micaela here because they know how to talk to lawyers and government people and say the right things so that we can get the right policies written. And I do believe if we write carefully, those things are not going to slow us down. Those things are going to accelerate us because they're going to filter out the crap that we don't have time to contend with, all the bad things out there that are messing things up for people in the field. And let's focus on all the good things that we know, and then build from there.
And there will always be new problems to discover, new areas to uncover where we don't know how it goes wrong, but let's fix the ones we know, for sure. I'll stop there. I think, Brittan, I guess you would be next.
[00:17:13.968] Brittan Heller: Hi everyone. I'm going to be as brief as I can because I really want to hear Micaela talk. My crusade, if I had to define it in one sentence, would be I am against vibes-based policymaking. And by that, I mean we have so much research. We have so many things that we need to be researching. There is really no excuse at this point to make assumptions to undergird the type of laws and regulations that we create, especially when those assumptions tend to be wrong. A couple areas that I think would be interesting to focus on, whether or not you're working in a company or academia or a think tank, or you're just someone who's concerned about this. Number one, I think that it is an assumption that a lot of people make that XR is just like social media, when we know that this has a different impact on our bodies and on our minds. Jeremy Bailenson talks about how if you haven't tried the medium, it's like dancing about architecture, meaning you have to really experience it to understand. And I can't tell you how many government officials or even other academics at Stanford that I end up taking through the lab so they can have the visceral experience of being in a headset, and then they get, oh, this isn't like a text-based feed. You feel this in your body. You have an emotional impact to what you see and all of your body reacts to this. That's one. I think two is an assumption that I made, as research is coming out now from places like UC Berkeley that shows that things that we thought were not personally identifying information streams that come out of these devices may actually be personally identifying. And that means that there may be more legal avenues that we can look at. So it's kind of exciting. Vivek Nair from Berkeley just published his PhD thesis. And he took a study that was done at Stanford with, I think it was about 2,000 people. They put them in XR with 100 seconds of recorded data. They were able to uniquely identify an individual with 94% accuracy. That's pretty impressive. But Berkeley and Stanford have a rivalry. Berkeley decided to do one better, and they had a data set of 55,000 people. And Vivek showed that you were able to identify one person uniquely, not from your college class, but basically from a rock concert or a baseball game sized arena. That is a change in the way that we should be conceiving what the data flows and what the digital exhaust from these headsets look like. The researchers in the lab have also said that they could identify up to 80 different unique identifying characteristics coming from data flows. So some of the early assumptions that I made that this is not personally identifying may not hold as we're looking at the way the technology evolves. And I think finally, looking at the ways that governments make assumptions about the way this technology works and how that shapes how they want to apply it. I've been doing work in Latin America, and they held their first trials in Horizon Worlds this year. They used generative AI to research and write part of the decision. And the venue for the trial was actually in XR. And they had really good reasons for doing that. And there were also things that they assumed that they hadn't considered. When I was able to speak with the judges, they said, it's just like Zoom, right? And because of the impact on your bodies and your mind, it's not just like Zoom. It can actually introduce new avenues for introducing bias into a judicial proceeding
based on the rendering of our foreheads and our mouths, the way that our body language doesn't actually translate cleanly. So it's kind of an awakening for them to say, how can we use emerging technology in a way to increase access to justice, but in a way that is also responsible? And how do we mitigate the unintentional harms that may come in through that? That's what I have to say about vibes-based policymaking. And now let's turn it over to Mica.
[00:21:21.576] Micaela Mantegna: I was thinking a lot of things, and I wish I could take my handwritten notes, which is different. First of all, why AboGamer? Because my background, I'm from Latin America, I'm from Argentina, and in Spanish, abogado or abogada is the word for lawyer, so that's why AboGamer. That refers to the background I'm coming from, which is video games and ethics and AI, which we all know is converging into this space. And while a lot of you were talking, I was like taking different bits from here to bring into the conversation. So I, more than like an opening statement, what I want to do is like bring these bits and move them to the table so we can have like a more organic conversation. And another thing about me is I'm a geek, so a lot of my thinking in philosophy and analysis comes from that space. And I was thinking, coming from an internet governance background, that we have seen this before, and there is this idea that bad decisions are motivated by good intentions, and then we are seeing these again in this space. We don't know the lasting effects in our cognition, and not just in our agency, but also in the way that we form our memories, in the form that we interact with each other. There is so much to do about neurodivergency. How do you feel if you are neurodivergent and you come into a spatial space and you feel overwhelmed? For me, it's something that I thought that I had to sometimes close my eyes and try to focus on the sound. It's really different, something that Brittan was saying about the embodiment of this. But also, and this is coming from my gaming studies, I'm thinking about this concept that was before about the magic circle. How do you separate online and offline? And we know that that's not true anymore. And a lot of our conversations are around how immersive the experiences are. How do we enter into spaces? And one of my concerns for the future, and that's connecting with what is going to happen in 5, 10, 20 years, is how do we emerge from the spaces? What are the things that we're taking from digital spaces into our lives? How do we go back from immersive experience? How will we emerge into the physical world? And for me, this also connects with how we think about democracy, and connecting also with gaming spaces. When I started doing research on gaming studies, one of the things as an activist that we talk a lot about in gaming spaces is how games have this potential to unite people, to create discourse. But at the same time, games became this medium that could be really manipulative and create this silo of opinion. And for me, what happens when we add this layer of embodiment, that the things that you are experiencing are also being interjected in your mind? You are feeling it in a different layer, in a different way. And how are we going to enhance further ideas about segregation, about discrimination, about how we think in terms of rivalry with other people? For me, it's a very special day because we are talking about International Human Rights Day, but at the same time, here in Argentina, we have the inaugural day for the elected government. It's a far-right government for the first time, maybe, in Argentina. And for me it opens a lot of questions, because when we were talking about social media and the impact of social media in how we create the democratic discourse, for me it's like, what is going to happen when we add this layer of embodiment, this layer of interjecting the feelings, interjecting in your memory the way that you interact.
So I come to this table maybe with more questions than clarity. I'm thinking about the philosophies of regulation. I'm thinking about these invisible constraints, because when we talk a lot about digital spaces, we talk mostly about data protection and privacy. But for me, the missing piece of the puzzle is intellectual property. And when we are thinking about creating these spaces as open spaces and creative spaces, all of these things are already being limited by the regulations that we have in this space, by intellectual property, by terms of service, by contracts. So we were saying this before on the red carpet, how do we find other legal frameworks besides human rights, besides data privacy, that can counterbalance how strong intellectual property is? And that's one of maybe my concerns because of my own professional bias. But this is something that I wanted to bring to the table, and also to end with something that Avi was saying about technology. I always like to bring this metaphor from Star Trek, that if you go back to The Next Generation and you see people talking in the ship, in the Enterprise, even the bad guys, nobody was assuming that the technology was listening to them. The computer that was like the central computer of the ship was not spying even on these people that were maybe the bad guys on board of the ship. So how do we come to this point that we have kind of like become so natural, and this goes back to the point about comfort and how we kind of like allowed these barriers to come down, and moving into what Kent was saying, like this kind of like possibility of omniscient AI, like recording and listening to everything we do. And I was saying like we come from different paths into ethics, and yeah, I started with law, and immediately we need to talk about philosophy, because before we go into like the nitty-gritty regulation and the nooks and crannies of policy, I think we need to talk in bigger terms and abstract terms about the metaverse we want, about the possibilities, and how capitalism is influencing the business models and powering the business models for the metaverse. And also, a lot of the talk is about consumer metaverses. And for me, it's really important, how do we bring back governments into the equation? How do we reclaim that power that was lost to corporations in these transitions from the initial internet to the commercial internet that we have today? And I will stop here and also get a battery for my headset.
[00:27:52.663] Kent Bye: Awesome. Thanks, Micaela. And keep us posted with how much battery life you have as well, because we'll make sure to hear you before you might have to blip out. Everybody had threads there that I wanted to respond to. What Micaela was saying there about intellectual property, and how that is going to influence the culture of the metaverse, is going to be huge, especially when you think about AI. It's at the forefront of a lot of these discussions. These discussions with AI may start to ask, what are the bounds of fair use when you are scraping the entire internet and are using it for models, if you're talking about labor that's being stolen and basically displacing jobs after that? But just to go off of the philosophy aspect, I went to the American Philosophical Association Eastern Meeting in 2019, and one of the founders of the philosophy of privacy, Dr. Anita Allen, gave a talk where she was basically saying, we don't have a comprehensive philosophy of privacy. It was in that moment that I realized, from the philosophical community, that privacy is still this amorphous thing that hasn't really been pinned down. There's been some approaches like Helen Nissenbaum's, and Dr. Anita Allen lists a number of other philosophers that have given their take. But for her, it includes both the account of the value of privacy, but also the ethics and also the political power, but also having context-specific applications for ways that you can define what privacy even is. And I think it's with that ambiguity of what privacy is that these companies have been left to define it for themselves with whatever laws are there with GDPR. And with the AI Act, I had a talk with Daniel Leufer, who also has a background in philosophy, and he was saying there's some aspects of how the AI Act, and I haven't seen the final language and I would need to follow up, but some of the discussions where some of the language of the AI Act may start to redefine what biometric data is. And as Brittan alluded to, there's this connection to identity. So the reason why that's important is because all of the existing privacy laws are tied back to what they call PII, or personally identifiable information. So if there's information that's personally identifiable, it has specific protections that have to be managed according to GDPR and other privacy regulations. And that means if there's anything that isn't personally identifiable, then it's basically the Wild West, and you can do whatever you want with it. And I think that's the kind of state we're in, is that we have all this new data. It may be personally identifiable, but I'd say the caveat there, even with facial recognition, is if you have data sets where you're not including more diversity within those training sets, then you're going to have more misidentifications. If you're using facial recognition as a biometric to uniquely identify people, and the end result is that that person ends up going to jail, then you end up having a disproportionate amount of harm to the portions of the population that are not included in those initial data sets. So I think it's really important to recognize that even though things like our face are very identifiable, it's not perfectly identifiable, and not even all of this information is identifiable.
So identity may already be fraught to pin all of our privacy hopes on, especially when it comes to what Brittan has tried to define in her work with biometric psychography, and if this stuff is being inferred about you, if you're extrapolating that data, then does it have the same type of protections that PII would if you're doing this algorithmic extrapolation of what that is? You said there's like 80 different types of characteristics that could be pulled in. And I know, Avi, you've done a lot when it comes to looking at eye tracking and starting to see how eye tracking is trying to fit in all of this. So yeah, I feel like the reason why I'm focused on privacy is because I do think that that's the one area where we do need some sort of privacy protections. There are going to be other things like children in VR where I think eventually we're going to have other legislation, but I feel like in terms of the most pressing need, that's part of the reason why I've been focusing on that. Yeah, I guess I'll hand it over to Avi and Brittan for other things you want to sort of jump in on.
[00:31:41.013] Avi Bar-Zeev: On the eye tracking part, it's good to note that there have been a lot of really good experiments so far using eye tracking data to diagnose different health conditions. And one area where we already do have existing law is in health care, right? In terms of our health data, there's a duty of care that companies have in collecting our health data and making sure that it's not accessible to third parties. Because you can think about people who want that data, right? Insurance companies would love to have data that they may not have about you to know how much to charge you, so they can charge you more if you're prone to get sick. And so the breaches at 23andMe are really troubling because that data could be extremely sensitive. But also in XR, eye tracking data could be used in many ways to undermine us if the data gets to the wrong place. But I don't understand why we're not even applying the existing laws to those things and saying, look, clearly it's health data. You know, the raw data may not tell you anything about the health. That may be just personally identifying stuff. But once it's been processed, you know, if we know that you have certain degenerative diseases like Parkinson's, or if we can tell from your combination of front camera data and eye tracking data what activities you conduct in your life, we can find out things that are going to make you a higher or lower risk for different diseases. And so, you know, that data has to be protected from day one. It has to be considered our data, and we should have control over where it goes. This idea, the sort of third-party idea that anybody on the internet who happens to observe us can record it, I think we just need to nix that pretty fast, because that doesn't even help the third parties. I mean, they should be asking us for that data. They shouldn't assume the liability for everything that goes wrong with that data. That should be something that is handled much more carefully. I don't know, Brittan, do you want to?
[00:33:16.863] Brittan Heller: Daniel Leufer and I have gotten into disagreements before because he believes that the GDPR covers all this. And I don't think it does. He thinks in Article One, it's covered by sensitive use cases. And I think in an ideal world, he would be right. But I don't think we live in an ideal world, and that kind of goes back to what I was saying about vibes-based policymaking, where the EU is doing a really good job trying to really engage with this. They're coming out with a report on XR, I think, before the end of the year. There's supposed to be another one in 2024. They have other stuff coming out in 2025. So I think they're on the right path, but I don't think they're there yet. The approach that all this European legislation takes is more of a prohibitory rather than a prescriptive approach. And by that, I mean they'll define sensitive use cases or highly sensitive use cases and say you can't use this type of data for that. So it's kind of like what Avi was saying, where you look at things and this is what you cannot do. My fear, after having been involved in emerging legal regimes that are international, is that that doesn't really plug in with many of the places where these technologies are being developed. And by that, I mean, in America, we don't have a federal omnibus privacy bill. It's all different state-based laws. Some of them are based on the GDPR, like the CCPA in California. Most of them aren't. This is where the international law professor in me comes out to play a bit, because I think looking at this different legislation as a potential conflict of laws problem is something that we're going to have to figure out, and it's going to be really, really important to figure out soon before these technologies get cemented, if that makes any sense.
[00:35:08.340] Kent Bye: Yeah, just to elaborate on the AI Act, when I talked to Daniel Leufer, what he was saying is that they do have tiers of harm. And so there's the extreme harm that has banned use cases. So an example would be like using facial recognition for law enforcement because of these potential harms that could come from misidentifying people and sending them to jail based upon data sets that may be biased. And then there's tiers of different reporting obligations. And so even if you are deploying large-scale AI systems, then you have to provide certain information to say, like, well, here's how we're trying to implement different safety precautions, or here's the research into harm. So there's different tiers of harm that they've defined. And that's one approach, I think, as we think about legislation in general, but they're driven by a trilogue process. And they also have human rights law that's driving it. But there's a way in which the EU legislation could be seen as 5, 10, to 15 years ahead of where we're at in the United States, especially if you look at GDPR. And so it can end up changing the technological architectures for these companies: if they want to do business in the EU, they have to change the architectures. But we still are faced with these gaps in the United States where a lot of these companies are operating. There's not the laws that are dictating, because it is such a fragmented space. And there's not a comprehensive philosophy of privacy. And there's not a comprehensive privacy omnibus regulation. So it ends up creating a situation where it's very fragmented, but also for big swaths of the population, they're not going to have any of these protections, let's say, or even the enforcement aspect of it is a whole other aspect. So, yeah, as we move forward, I'm really encouraged by the work that Nita Farahany is doing, and I don't know how that's going to be filtering through international law down to like local laws, but I feel like a human rights approach is really good to set these are the human rights principles, and then from there dictate what the laws are going to look like. So that has been very productive as to how the EU has been doing things. And so I think Nita Farahany talking about, okay, we need to establish this new human right of cognitive liberty that takes into account and defines different concepts like our mental privacy, our freedom of thought, and then also our free will, which is also philosophically something that has had a lot of debate around it. And so how do you enshrine that into a law that's enforceable when there's all this debate about what free will even is? But I think the underlying issue is that nudging our behaviors against our intentional actions is subtly influencing us in a way that these technologies are going to be able to do. And I think if we don't think about that, then we're going to be walking into a situation where, just like social media has been able to be hijacked, these technologies could be hijacked by people to move large swaths of the population to subtly change their beliefs. So for me, that's what keeps me up at night. And until I see how that's going to be addressed, I think it's going to be something that we're still going to be talking about: like, how do we ensure that we don't go down these really dark paths?
[00:38:06.585] Avi Bar-Zeev: You know, we can think back historically and go back to the founding of the United States. There's a reason why the first 10 amendments are there. These are things that were problems at the time, right? One of the amendments specifically says the government can't quarter soldiers in your house. Well, how's that relevant today? Nobody's forcing soldiers to live in our homes, but what was happening at the time was that British soldiers were living in colonists' homes and they could see what you were reading, they could see who visited, they could see who you talked to. Essentially, it was a violation of privacy for the colonists, that the British could figure out who was for them and who was against them. And so one of the amendments in the Bill of Rights is there specifically around that. It just needs to be updated, right? Because we don't have the government saying soldiers are living in our house. Now it's technology living in our house. And part of the harm can come from governments abusing that. But we've seen now what can happen from companies abusing that as well for profit. And we need to revisit that and say, okay, those are the harms of the day. Those are the laws we have. Let's rewrite them for today's harms. But be very flexible about how we think about these for the future so that we're not just codifying it and saying the only thing you're not allowed to do is take pictures of something. You know, that's too narrow. We have to be looking at it from the lens of what do we need to be our full and complete selves? How can we actually be safe in our homes from all these exploits that come from, you know, various powers that have power over us? How can we be safe? Well, we need to be able to have our thoughts, like, you know, whatever our head is doing. We need to be able to protect those thoughts from abusive actions. But we, at the same time, don't want to throw out some really cool technology that can augment us and make us, in some ways, smarter, in some ways, more efficient, in some ways, better. Those are things we would choose to add. We just don't want the negative implications that come when they're out of our control. So that's the thing we need to be the most careful of. And I think we can make these choices. We have before. It's just there's a whole bunch of confusion today around what choice equals what outcome, and there's a lot of education, I think, that's needed.
[00:40:03.253] Micaela Mantegna: We can debate about human rights, and for me, I agree that it's like the initial framework from which we have to go down to the standards, but also one of my concerns with the human rights frameworks is that they don't have more concrete tools like data protection regulations or other frameworks that could enact more concrete protections. And for me, it's like going back to this capitalistic approach of digital spaces: why are we being surveilled? Because our data is interesting to sell us things. And also, it goes back to another point that I think is present in the neurorights framework, which is the fair access to augmentation. Which for me, coming from Latin America and having also been a woman that is affected disproportionately, when Avi was saying about not having the right headset and therefore you get more nauseous when you get into a meeting, and being neurodivergent, you have to be cognizant of like all the things that are impacting you to think clearly in a digital spatial space. So there is a lot of things to pile up. And on top of that, if you are in a region of the world where your connection doesn't have the latency or doesn't have stability, you have all these added issues. And on top of that, we have the policies being designed and debated and put into action in places where we don't have a say. So for me, one thing I want to come back to all the time with this is, let's talk about public metaverses. Let's talk about spaces where there is no real interest in, of course, we have different governments that are going to spy on you, and that's another layer of the debate. But for me, it's like, how do we envision a metaverse that is not based for profit? And that maybe gives us some answers about how we can like change paths.
[00:42:02.043] Brittan Heller: I really like that. I'm coming out with a paper next year that revisits Lawrence Lessig's code as law and updates it for spatial computing. And yeah, it's a page turner. But one of the things that it points to is that, you know, if you look at the origins of the internet, it was an academic research project. And it really privileged certain things. And certain things came naturally out of the evolution of the philosophy of the people who created it. There were positives and negatives. And, you know, some of the positives was that they were weirdos. So they made spaces for art and education and open source. And part of the negative is that if you look at the sociological definition of weirdos, they were all white, educated, et cetera, from the same demographic. So there wasn't a lot of diversity. I think if you look at the evolution and how we're seeing this manifest in XR spaces, you do have more diversity, and that's a great thing, but you don't have the natural evolution of civic spaces and places for art and education that are publicly supported. And we're going to have to be really intentional about that, because it won't just evolve like the internet did.
[00:43:16.751] Kent Bye: Yeah, I just wanted to jump in on the public aspect, because all of these websites are these privately owned spaces. They're not government funded a lot of times. And so because of that, there's the Fourth Amendment, which is protecting your home from unreasonable search and seizure. And then there's like these different aspects of how privacy is defined by what's your home and what's not your home. So it's kind of public-private distinctions that are being made. And there's also a legal doctrine called the third-party doctrine, and while we're talking about third parties, this has actually been upheld by the Supreme Court: the third-party doctrine says that any data that you give to any third party is no longer reasonably expected to be private. So with spatial computing, then, do we need to kind of reimagine some of these different laws that have been passed and see how this kind of reevaluates this world that we're living in now? When some of these decisions were being made, we didn't live in this kind of omnipresent, everything-always-online world with all this data that's available everywhere. So yeah, I feel like the stuff that you're saying, Micaela, around the economics, you know, it's worth noting that when I talked about the neurorights, I listed three of the five that were connected to privacy and identity, which was like your mental privacy, your identity, and your right to free will or intentional action. But the other two are being free from algorithmic bias and also having equitable access to all these technologies, so that everyone has the ability to have this access to these augmentations. And so there is this underlying economic aspect that's embedded in there as well. And that's just worth kind of noting that it's kind of embedded into those frameworks.
[00:44:56.697] Avi Bar-Zeev: The economics can work for us or against us. There's cases like if you look at the economics of Second Life versus ad-driven, even virtual worlds to keep it in the same category. Second Life makes, according to Philip Rosedale, Second Life makes more money per person than Facebook does. And they have no ads. There's no ads whatsoever. The money comes from either renting the servers or from transactions that people make. People make things in the world and sell them to other people. And so there is a business model that didn't need to exploit anybody. Well, they still needed to solve hard problems. They still need to solve griefing, which they haven't fully solved. There's still problems that have to be dealt with. But once you have a business model that is based upon helping people, based upon empowering people, you lose a lot of the negatives very, very quickly. The negatives come from the fact that these companies are doing things that we probably wouldn't want them to do if we knew what they were doing. Why do we allow that? I know we talk about amendments a lot. There's the First Amendment issue with regulating advertising. It is free speech. There have been other places where we've been able to create legislation and separate things, like the Glass-Steagall Act separated Wall Street from banks. We've kind of undone that and seen the harm that comes when we've undone that separation. But we could imagine a separation of companies that have a duty of care around our personal data. They curate and hold our personal data for us. Those same companies shouldn't be in the business of selling it or using it to affect us in ways that we didn't approve of. We can split those differences. I think we can design that so that the better business models will thrive and the worst ones will not do well or be explicitly made illegal. And that's better for everybody. There's no reason the internet has to be based on an ad model. It's not the best way to do things. In fact, it's just one of the laziest ways to do things.
[00:46:39.540] Kent Bye: One of the things that I've seen is that the people that are the underrepresented minorities or women, they're often the ones who are facing the most harm in these different systems. And so you really have to, especially when you think about creating safe online spaces and thinking about harassment and trolling. It is the minority demographics, but also women, who are not necessarily a minority, but they are receiving a disproportionate amount of the harm that's coming from these online spaces. And so when I was talking to Daniel Leufer, he says that it's very easy to fall into what philosophically is called utilitarian thinking, where you think that it's, quote unquote, working for the majority of your users. But that's why he says you really have to take a human rights approach, because even if it's 5% of the users who are being misidentified in whatever system that you're doing. Like, Brittan, you had mentioned 94% accuracy of identification. Well, if you're part of that 5% being misidentified, then that's the same type of 5% that's often facing disproportionate amounts of harm, which means that you have to really move away from the utilitarian approach and more to a deontological or rules-based approach, which is the human rights approach, which is why human rights is going to be so important as we move forward. But, yeah, the other just final thought that I would add out there is that we've talked a lot about appropriate and not appropriate uses of data. And Helen Nissenbaum, in her theory of contextual integrity, is saying that depending on the context, there are appropriate flows of information. Like, you talk to your doctor, you give medical information. But we're in this situation where these companies are, by default, aggregating and collecting all this information and using it in contexts that would not be necessarily contextually relevant to the immediate task at hand. It's relevant to their business model, but not always relevant to what you need to actually have the technology function. So we're in this weird space where XR needs all this really intimate data in order to really properly function. If you take that same data and use it for other use cases, like, say, surveilling you or identifying you, tracking your thoughts and psychographically profiling you, that's where you get into this inappropriate flow of information. Trying to define those contexts and trying to define what's appropriate or not is one of the biggest problems for why this hasn't yet been solved in a way that seems intuitive that it should be solved; trying to define context and trying to define what's appropriate or not based upon the context ends up being a really difficult and intractable problem. But I don't think it's impossible, especially if you start from a human rights approach, where these basics of cognitive liberty could be a way of helping to map all that stuff out. So yeah, that's my final thoughts, and I look forward to any questions as well.
[00:49:21.647] Avi Bar-Zeev: Yeah, I mean, I'll comment on diversity. I think diversity is part of the cure, and some companies still think of it as part of the problem. They may not be explicitly racist or sexist, maybe they are too, but they're not thinking about everybody for sure. And in many of the groups that I've been on, when we actually make an effort to have the designers and the prototypers come from much more diverse backgrounds, we get way better results, because there's plenty of things I wouldn't think about from my background. I do have a few things that are more on the minority side, visible these days, especially with what's going on. But I'm also, you know, an older white male, and so there's lots of things I don't have personal experience around, and I won't necessarily know what to do. So I have to be looking for other perspectives, other thoughts, other ways, other experiences of life to be able to collaborate on the best answer. And just to give you an example of where that matters: there's a bunch of people out there who think that once we have these great outdoor AR glasses, they're really useful for looking up people's Facebook profiles or looking up the name of a building or whatever. It's like, that's not the use case I think most people want. Most people care about safety. Most people care about health. Most people care about social interactions. Those are the kinds of scenarios that you come to when you start talking to everybody about what they'll use it for, versus the core set of people who come from an academic background and they're more limited in many ways in their life experience. It just makes for a better result overall. And so absolutely, we want to open as many doors as possible and be as inviting, as respectful, and also as accountable for when we say and do things that might rub someone the wrong way. We need to know about it. We need to deal with it and get to a world in which we can just all contribute. That's critical.
[00:51:00.125] Brittan Heller: I'm going to keep my closing remarks brief and optimistic. I think things are going to get better. And one of the reasons I think it's going to get better is the hardware itself is evolving in ways that I think actually increase our humanity. One example that I can give is mixed reality headsets. I have a baby, and so I had to get a babysitter for today to decide whether or not I was going to be with all of you here in a virtual space, because I wouldn't be able to watch her running around my real living room. Mixed reality headsets will help a lot with that, and I think meet people more where they are, so that you can have more people in virtual worlds. I had a woman once say to me that the people who feel comfortable in VR headsets are people who would feel comfortable being blindfolded in a room full of strangers. And mixed reality may help with that. I think it also may go forward and pick up some of the benefits that we know from studies of video gaming, where if you see somebody's eyes, it actually is one of the best ways to decrease harassment in a video game context. A small snippet showing a bar of someone's eyes can do that. And the technology is evolving in ways that remind us that there's a person on the other side of the screen. It's not there yet, and we can definitely do better, but that's one of the things that makes me optimistic.
[00:52:23.587] Micaela Mantegna: If I have to chime in with some last remarks, it's so funny that I lost my internet connection, just to demonstrate the point about the instability of internet connections in Latin America. I'm also an activist. I founded an organization of women in video games in Argentina, and I see a lot of the things we are talking about in those spaces being replicated in this space. But I also talk to women working in crypto, women working in Web3, and we are sharing a lot of the same concerns. Whenever we talk about technological things, women are educated, and this is part of a larger problem, to believe that technology is not our thing, that we are somehow designed to use other parts of our brains, that we are the creative ones, the beauty, and so on, just like in the conversation about STEM, about getting us into code, into math, into other areas. So that's on the one hand. On the other hand, I would love to share your optimism, Brittan.
[00:53:28.971] Kent Bye: Just jumping in here to say that the recording had crashed at this point. Some of the things that I could make out from what Micaela was saying were around how there are still a lot of big tech companies in control of everything, and how you have to pay for privacy. And so I'm actually going to play now the acceptance speech that Micaela gave at the Polys WebXR Awards that happened this past Sunday, March 3rd, 2024, just to play out some of the other points that she was getting at here in her final statements.
[00:53:57.743] Micaela Mantegna: So first of all, I want to thank Brittan Heller, not just for the nomination, but for her continuous and critical work in this space. I also want to extend this gratitude to past honorees Kent Bye and Avi Bar-Zeev. My imposter syndrome self can't believe being in the company of all of you in receiving this honor. And of course, thanks to Ben, Sophie, and all the team for producing this award, and to my partner, Marcelo Rinesi, my family, and my beloved cats, who are the workers that power everything I do. As we were saying, the metaverse is still a work in progress. We have a monumental task ahead: trying to create an ethical digital future where our autonomy is protected, preserving the best of what makes us human, with creativity, ingenuity, and empathy. I often say that we are not born equal into the metaverse, and that on its current path, you will only be as free as you can afford to pay. Somehow this collective dream of an unbound digital space, where we would be able to be anyone or create anything free of the constraints of the physical world, is instead creating a digital future of artificial scarcity. Why are we carrying on the inequality when we have this beautiful digital canvas to unleash the unlimited possibilities of a post-scarcity economy? I'm a geek and a Star Trek fan, so I dream about this kind of future. If you remember The Next Generation, nobody assumed that the Enterprise computer was spying on them. How would that look now? And why do we have to conform to a metaverse world creeping into our most intimate thoughts and actions? Technology should not be only for those who can afford it. Technology should not be designed by a few, dictating how the rest of us can use it. Technology should not be surveillance by design, with profit as an end goal. How do we take it back? How do we reclaim this web and the next iterations of it? My late mother had a saying: the path to hell is paved with good intentions. Every small act has consequences. Let's cultivate in our work and practices the ethics of individual choice for collective resistance. Let's think about the bigger picture, the impact and ramifications of our decisions, and who can be heard in that. If I only have one wish, let it be more Star Trek and less cyberpunk. Thank you so much.
[00:56:20.410] Kent Bye: So that was a panel discussion that happened on Human Rights Day back on December 10th, 2023. It was called Mission Responsible, and it featured myself, Avi Bar-Zeev, Brittan Heller, as well as Micaela Mantegna. So I have a number of different takeaways about this conversation. First of all, it's always just great to hear all the different discussions and all the different points as we move forward into thinking about the ethical and moral implications of these XR technologies. Again and again, I always come back to the different privacy implications, because I feel like that is the one area where we need the most intervention when it comes to new legislation, since there's not really a lot of existing coverage. Although Brittan did point out that some of the research by Vivek Nair from Berkeley is showing how what is presumably anonymized data can actually be de-anonymized and point back to individual people; with even just 10 seconds or so of motion data, you can start to uniquely identify people. Yeah, I also just appreciated the vibes-based policy critique that Brittan has. She's doing a lot of collaboration with different researchers now, trying to look at some of the different implementations to make people aware of some of the things they may be giving up when it comes to, say, eye tracking technologies. Avi has written a lot about that and has also gone on to start the XR Guild. And then Micaela is bringing up a lot of points around intellectual property rights and these other aspects of the promise of these immersive and metaverse technologies, but we're still at the point where if you want privacy, you have to pay for it. And there are all these post-scarcity potentials of our economy that are being constrained and limited by our existing intellectual property regimes. So a lot of her work is looking at some of those aspects of these laws, and she wrote a whole book, originally in Spanish, that looks at the intersection of intellectual property and artificial intelligence, and she has also given a talk at the TED conference about the metaverse and ethics. Her internet connection was dropping in and out, and so she was popping in and speaking when she could, but sometimes when it would have been her turn to speak, she wasn't there. And she mentioned that what was happening reflected how internet access around the world isn't necessarily an equitable or balanced playing field. So there were other dimensions being embodied within the course of this conversation as well. There are also lots of other aspects of harassment, diversity, and harm-based legislation that are happening in the European Union, like the AI Act, and all of those aspects were being talked about here too. I wanted to share this out because I feel like these are the kinds of discussions, in this moment in time, that we're still trying to sort out as we move forward into trying to live into the most exalted potentials of XR technologies, and how we do need legislators to start to step in in a number of these different areas.
Like Brittan said, the EU has been starting to look at some of these different things, but there are still frameworks, like neurorights and cognitive liberty as a human right, that need to continue to propagate throughout our ideas for how we think about privacy, and through different regimes, especially within the United States but also around the world, to come up with protections for how much more data is going to be available to all these different companies as we move into immersive technologies. One final note about the Polys WebXR Awards: they did actually happen this past Sunday, March 3rd, 2024. There was one experience called The Escape Artist by Paradowski Creative that took away the entertainment experience of the year, the game of the year, and the experience of the year. And I'll be having a conversation with them to help unpack their experience, because not only can you play it in the context of the Quest browser, but they also have just released an Apple Vision Pro version that you can play as well. I'm looking forward to diving much deeper into The Escape Artist, which did a clean sweep of all the different nominations it was given this year at the WebXR Awards, and I'll be unpacking that more as well. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.

