There was an Existing Law and Extended Reality: A Research Symposium held at Stanford Law School on Friday, January 6th, 2023, that brought together legal scholars, industry experts, civil society researchers, and academics to look at how immersive technologies and virtual worlds may bring about new interpretations of existing laws or the need to write new laws that better cover harms from the “Metaverse.” I had a chance to talk with organizer Brittan Heller, who has previously been featured in Voices of VR podcast episodes #789 & #988. We cover the major highlights from the Symposium in this episode, and you can find all of the talks in this YouTube playlist here.
I was invited to give a lightning talk on the “Philosophical Reflections on Privacy from an XR Journalist,” which lays out an argument for how there aren’t any existing privacy protections that cover the types of physiological or biometric data from XR or the Neuro-Rights types of inferences that could be made from that data. This episode is the first in a 10-episode series digging into the landscape of XR privacy, what’s happening on the US front, what’s happening in the European Union, and what’s happening in terms of moral philosophy and ethics to start to address some of these potential gaps. Stay tuned as we do a deep dive into what’s happening at the frontiers of XR privacy.
UPDATE March 9, 2023: Here are all 10 of the episodes in this series
- #1175: Highlights of Existing Law & Extended Reality Symposium at Stanford Law’s Cyberpolicy Center with Organizer Brittan Heller
- #1176: XR Privacy Landscape & Data Flows with Future of Privacy Forum’s Jameson Spivack
- #1177: How the EU’s AI Act Could Impact Biometric Data Definitions & XR Privacy
- #1178: How the EU’s Metaverse Initiative May Bring XR Privacy Amendments for the AI Act, GDPR, or Digital Markets Act
- #1179: Discussion of XRSI’s Privacy Framework 1.0 from 2020
- #1180: Anti-Trust & Privacy Watchdog Jason Kint Reflects on Ad Ecosystems & XR
- #1181: VR Renaissance in Moral Psychology, Perspectival Thought Experiments in Philosophy, & Bounds of Empathy
- #1182: Recreating Philosophical Moral Dilemmas in VR, the Gamer’s Dilemma, & Virtual Ethics
- #1183: From Kant to an Organic View of Reality: Scaffolding a Process-Relational Paradigm Shift with Whitehead Scholar Matt Segall
- #1184: “The Battle for Your Brain” Author Nita Farahany on Establishing Cognitive Liberty as a Human Right for Limits on Neurotechnologies & XR
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. So I'm going to be doing a deep dive into XR and privacy. This is one of the issues that keeps me up at night. There's an unresolved nature to the threats that we're faced with from these immersive technologies, and there doesn't seem to be an existing legal framework that's out there to address it all. So I was invited to go to Stanford University to present at the Stanford Cyber Policy Center. There was a symposium called Existing Law and Extended Reality, a research symposium at the Stanford Law School. It was organized by Brittan Heller, who I'm going to be featuring here on today's episode. Brittan is a lawyer who's written previous law review articles defining this concept of biometric psychography. All the existing biometric laws are very focused on identity, but the inferences made from this biometric and physiological data are kind of the essence of the biggest privacy threats that we're seeing when it comes to XR technologies. And so she's been a leader in trying to define some of these gaps. She's currently an affiliate at the Stanford Cyber Policy Center. She's also a senior fellow at the Atlantic Council's Digital Forensic Research Lab. In this conversation, we do more of a broad overview of some of the different topics that were being covered at this research symposium. In follow-up episodes, I track down different people who were focusing specifically on the issue of XR privacy to do a bit of a deep dive. I gave a talk there on the philosophical reflections on privacy from an XR journalist. I'll include a link in the description so that you can go check that out, and I'll link to all the other presentations that were made over the course of that day, so you can go watch all the different talks that we're talking about here. So, that's what we're covering on today's episode of the Voices of VR podcast.
So, this interview with Brittan happened on Tuesday, January 17th, 2023. So, with that, let's go ahead and dive right in.
[00:02:09.030] Brittan Heller: Hi there, my name is Brittan Heller. I am an affiliate at the Stanford Cyber Policy Center. I'm also a senior fellow at the Atlantic Council's Digital Forensic Research Lab. And I work on the intersection of law and technology.
[00:02:24.801] Kent Bye: Maybe you can give a bit more context as to your background and your journey into working at the intersection of XR technologies and law.
[00:02:31.730] Brittan Heller: Sure. I started off my career as a human rights lawyer. So I practiced international criminal law and then different applications of human rights law overseas for about 10 years. And in that capacity, as a prosecutor at the International Criminal Court and doing refugee law throughout Asia and opening up a law school in Afghanistan, I started to examine the connection of technology to social events. I came out to Silicon Valley in 2015 and I opened up a civil rights center that really focused on how do we bring justice and fair treatment towards all in digital environments. And I've been working in XR contexts since about 2016.
[00:03:17.570] Kent Bye: Okay. And yeah, so I know we've had some previous discussions where you're talking about existing privacy law, biometrics law, and defining phrases like biometric psychography. So you've been looking at this intersection of existing law and what covers the things that need to be covered within XR. And then you actually held a whole symposium at Stanford that you invited me to come and speak at, along with other lawyers and academics and folks from industry, that was called Existing Law and Extended Reality, a research symposium there at the Stanford Cyber Policy Center. So maybe you could give a bit more context as to what you were trying to achieve with this gathering of all of these legal minds to do a bit of a research survey of existing law and how it may or may not apply to what's happening in XR.
[00:04:03.608] Brittan Heller: I got tired of going to talks where everybody was asking what the metaverse was, and wanted to move the conversation forward to how the metaverse fits into our existing government structures. There are questions that I was asking myself that I didn't have the answer to. And so I wanted to gather together groups of people who might know: How does jurisdiction translate from the tangible world into a digital world? Is advertising the only viable business model for the metaverse? What does the future of content moderation look like in virtual worlds? Is there a philosophical framework for the metaverse that is better than others, and why? And what is smell-o-vision like when you experience it? So the conference brought together lots of legal scholars, but in unexpected pairings, and I tried to do a mix of different formats. We had traditional panels. We had a tour of the Stanford Virtual Human Interaction Lab, where we got to try smell-o-vision and get rocked by the haptic floor. We had a very intimate keynote at the dinner where we got to ask Philip Rosedale, who's the creator of Second Life and High Fidelity, about his experiences building virtual worlds and governance systems therein. The keynote with Jeremy Bailenson was actually a question and answer session, because the keynote itself was a talk that he'd given earlier and was available online for people to view before they came with their questions. And we had a bunch of lightning talks where people could go very quickly, but deeply, into a very specific question, a line of research, or a technical product, explaining how it worked.
[00:05:55.810] Kent Bye: Yeah, one of the things that was striking to me was there was a lawyer named Eugene Volokh who said that, very similar to other media technologies, there's a lot of existing law in the United States that will apply just as well to VR, like copyright law and intellectual property law, with the same type of enforcement. But I think one of the things we were trying to get at was: what's new, what's different? Are there things happening within XR that are going to require new law or new ways of thinking about law? So I'd love to hear your initial take on some of those things that came out of the discussions that you think are the frontiers of what we need to be thinking about, as we look at the existing law in the United States and maybe compare it to what's happening in the EU with GDPR and other privacy laws, or other movements that are happening in other regulatory bodies. I guess this was a US-based conference, so it's very US-centric, but in some ways the metaverse is global and international, and so countries from around the world will have to be wrestling with these similar questions and looking at how their law does or does not apply to XR. So what were some of your takeaways from some of the discussions of what's new or what's different about XR that may require new laws or new ways of interpreting law?
[00:07:10.456] Brittan Heller: I think the first thing that stands out is that over half of the speakers at the conference identified with minority communities. And that was very, very intentional, because the metaverse is supposed to be a place for us all to go to learn, to engage in commerce, to engage in art and in new forms of statecraft. So if we want to create a metaverse that is reflective of the diversity of our society, we need to bring in a diverse set of commentators to talk about what's at risk. And so I was really, really proud of that. One of my favorite talks in that regard was from Michael Running Wolf, who came in from McGill University, and he is a scholar who studies using XR to preserve Native American languages. And we put him on a property law panel to talk about cultural appropriation and basically how minority communities address the transferring of their cultural property into the metaverse, in both consensual and non-consensual ways. That was really interesting to me. People joke about Coachella being full of cultural appropriation, but I think this can be a unique question, because everybody talks about the metaverse as a tool of cultural preservation, and to me, he was talking about how that line is a bit fungible. Another interesting thing to me was in the constitutional and criminal law panel, where Amie Stepanovich of the Future of Privacy Forum, who's their VP of policy, was talking about research that she had done about sex toys. And I think, as Eugene Volokh had pointed out, he was really interested in the prurient interest, and he said sex is going to be the main app of the metaverse. I don't think he's necessarily wrong in that joke. Amie was talking about how she was looking at laws around sexual assault and tort, and how hacking into somebody's vibrator or their intimate toy might not be considered an assault because of the way that laws are currently structured.
I think that criminal law is going to be a very interesting realm to look at. As a former prosecutor, I know there are obligations that a prosecutor has to a defendant and their counsel. You have to share exculpatory evidence upon request, called Brady material. And I wish we had talked more about how the laws of evidence are going to change and be stretched when you're talking about digital evidence, and the way that our constitutional protections are also going to have to engage in some calisthenics if you want to incorporate digital worlds into their jurisdiction. Another way that I'm thinking about that is the word home. Under Fourth Amendment jurisprudence, it carries very specific protections. You have protections for privacy and safety in your home that you don't have in public spaces. And a lot of platforms are creating digital homes. My question is, will you have the same rights and responsibilities in a digital home as you will in your physical home? And if not, what's the difference?
[00:10:29.452] Kent Bye: Yeah, that makes me think of some of the discussions around the third-party doctrine: whenever you give data over to a third party, it has no reasonable expectation of remaining private. And at the time that doctrine was made, we didn't live in a digitally infused world where we give so much of our data over to third parties. So when you talk about a probable cause warrant being put forth to gather some of this data, the third-party doctrine essentially bypasses that. You've kind of waived your rights to privacy whenever you give over that data. And so for a long time, I was really concerned about the third-party doctrine in terms of how there could be specific information connected to your identity, getting into the hands of these companies that are recording it, which the government could then get ahold of. After your paper on biometric psychography, my focus turned away from that a little bit to focus more on inference-based harms rather than identity-based harms. But I think that's still a concern in terms of what data are being collected by third parties, and how the Carpenter case would start to weaken the doctrine a little bit. But at this point, the third-party doctrine is still kind of the justification for mass surveillance campaigns by the government, because when you give data over to a third party, you have no reasonable expectation of that data remaining private. And in these cloud-based systems, we have so much of our lives in these third parties. So I feel like that was a big point that I'm taking away: what happens with the future of the third-party doctrine, specifically around the Fourth Amendment interpretation? I'd love to hear any other reflections you have around that.
[00:12:04.434] Brittan Heller: I do a lot of consulting for XR companies, and that's actually something that I always bring up, where I say, if you're going to be moving to cloud-based storage, then you really have to do an examination of privacy laws, not just in the United States with the third-party doctrine, but also with the GDPR and everywhere else around the world. You may not sell in other countries, but you can ship to any country, right? So there are questions about jurisdiction, and I think Eugene termed it the Bangladesh problem at the conference, saying if someone accosts you and you go to the police and you want redress, and it turns out they're in Bangladesh, how do you get adequate remedy? It's an interesting question for me how criminal offenses or civil offenses that are committed in the metaverse, or implicating the metaverse, are going to be dealt with. I used to be a CHIP for the Department of Justice, and that's the Computer Hacking and IP Specialist. It's the one geek prosecutor that they have in every section at Main Justice and in every local U.S. Attorney's office. And you become an expert in electronic evidence and getting warrants or 2703(d) orders, which get you subscriber information and not user-generated content. You use a subpoena for that, so it's a step below the level of proof that you need for a warrant. My question with the metaverse is that they haven't really defined user-generated content, not from the aspects that I see as being integral to the metaverse: architectural data, environmental data. Because on the internet, the subscriber data is your name, your address, your phone number, your billing information, and when you go online and offline. It's not the type of environments that you build or the discussions that you participate in. And I'm wondering how that is going to have to adapt for spatial computing-based environments.
And I don't have an answer to that yet. Even though we had a member of the Department of Justice at the conference, I left feeling like that was something I wanted more information on.
[00:14:22.668] Kent Bye: Yeah, I would like to register a dissenting view on the terming of the Bangladesh problem, just because when the World Health Organization names different diseases around the world, they've moved away from naming them after regions, just because of all the stigma that happens. And so I feel like the Bangladesh problem could just as well be called the problem of people living across the world.
[00:14:45.001] Brittan Heller: The problem of global jurisdiction.
[00:14:48.358] Kent Bye: Yeah, it's much more generalized: when there are people in these shared spaces who are across the world, specifying a particular country, I think, is not necessary. You could just call it global jurisdiction; I like that as an alternative. But I think there's a larger question of what are the street crimes within the metaverse, and are there street crimes, and can they be prosecuted in a way similar to how they would be in the physical realm? What I notice, at least, when I see things like the First Amendment, is there's the fighting words interpretation, which is that if you're going to start a fight with somebody and you say something that could lead to physical violence, that's not necessarily protected as free speech. But that's way different than what's happening online with the type of discourse that happens there, because there's not the same threat of physical violence. Yet there still may be emotional or other psychological harms happening in the 2D realm. And I think in the real-time nature of immersive environments, there may be an increased emphasis on that type of cyberbullying or harassment happening in a spatial context, which feels like it may need additional protections as a form of street crime. At this point, because these are websites owned by private companies, it becomes more a matter of the jurisdiction of that company and their code of conduct and their policies as to whether or not they're going to ban individual accounts. But could there be things that happen in the context of private property that could still constitute a crime you could get prosecuted for? Like, if you commit a murder in someone's home, it's still a murder.
So could similar types of VR street crimes happening in these virtual spaces start to get recognized and prosecuted in a similar way? I don't know if that's a line of inquiry this type of conference was trying to generate, around those different types of interactions that may require additional interpretations or additional laws.
[00:16:41.208] Brittan Heller: I think it definitely came up, especially in the speaker's dinner with Philip Rosedale, because he talked about building a virtual world, and the best way he found to govern it was to give authority to localized communities within Second Life to create their own social norms and enforce them accordingly. It was sort of taking it out of the realm of having a centralized authority like you would with law enforcement offline, and he said that he thought it was actually more effective. What I would like to have seen more discussion about, and really what I've been thinking about since the conference, is that street crime in the metaverse, if it's governed by terms of service, is more a matter of contract law than it is tort or criminal law. And I'm not as well versed in contract law, but as a legal scholar, it makes me wonder if the remedies contemplated in a contract law regime are at all appropriate if something creates the kind of somatic impacts that you were talking about. You feel it, you think it, it creates emotional valence, and contracts use monetary means to address these harms. So in that way, it's kind of similar to tort law, where you can get damages, but I'm not sure it's a clean fit. And that's what I've been wondering about, which is a good thing coming from this conference, where we stopped saying, what is the metaverse? And we started saying, how are we going to create rules, enforce those rules, and provide remedies when those rules are broken within a legal context? The second thing, to kind of go back to one of your earlier questions, is we had a lot of international participants in the conference. I wish there had been more, but with all of the travel woes that were going on around the new year, we didn't get as many as I would have hoped. However, I got to see different threads running through the different regional participants.
The Latin American participants were all very interested in neuro-rights and other types of expansions of human rights regimes, and we saw that with Daniel Castaño's talk on regulation and Michaela Montenegro's talk about intellectual property. We had speakers come from France, like Florence G'sell from Sciences Po, and she let us know that in France there's a lot of engagement with metaverse governance. I think France is probably the leader in the European realm; outside of GDPR and everything, they have a national action plan. I've done workshops for French content creators, and we were even engaging with French diplomatic representatives as we were forming the conference. The questions that she brought to the forefront were really interesting to me. The question I asked them was: is looking at European AI regulations folly? Because that's what I look to when I try to forecast the future of XR. Some of the regulations are things that will be very salient, like you have to identify when you are interacting with an AI. I haven't seen that reflected in any product decision yet, and it's going to have to be. And the third thing that I think came out of it is there were a lot of people looking at empirical experimentation on legal principles, and that's really exciting to me, because I'm a big dork. Some of the people that I'm collaborating with academically were at the conference as well, and one of the teams that I'm part of is doing an experiment around eye tracking and notifying people that their gaze is being tracked. We're trying three different techniques, as were proposed in a law review paper by Professor Ryan Calo from the University of Washington, and we're going to see how people respond to the different notifications and see if we can then make product-based recommendations. Not to spoil it too much in case listeners end up being participants, but there's research that shows when you're being watched, you are more conscious that your gaze is being tracked.
So one of the scenarios will have a little gnome, a little green gnome guy in the corner with big buggy eyes, watching everywhere you go. Another of the scenarios will have rainbow trails that track your vision as you're engaging with the environment. I'm envisioning it being in a gallery-like environment, so you look at the paintings and you can actually see where you're looking, so that you are aware of the type of information you're transmitting to the company. These graphical representations of legal theory and information theory are something I'm really excited about.
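To make the mechanics of that rainbow-trail notification concrete, here is a minimal sketch of how a gaze trail with fading opacity could be rendered. This is purely illustrative; the `GazeTrail` class, its parameters, and the simulated gaze sweep are assumptions for the example, not the study's actual implementation.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class GazeSample:
    x: float  # normalized screen coordinate, 0..1
    y: float


class GazeTrail:
    """Keep the last `max_len` gaze samples and assign each a fading
    opacity (oldest faintest), so users can see what gaze data is captured."""

    def __init__(self, max_len: int = 30):
        # deque with maxlen automatically discards the oldest sample
        self.samples = deque(maxlen=max_len)

    def add(self, x: float, y: float) -> None:
        self.samples.append(GazeSample(x, y))

    def render_points(self):
        """Return (x, y, alpha) tuples; the newest sample is fully opaque."""
        n = len(self.samples)
        return [(s.x, s.y, (i + 1) / n) for i, s in enumerate(self.samples)]


# Simulated gaze sweep across a virtual gallery wall
trail = GazeTrail(max_len=5)
for t in range(8):
    trail.add(t / 10.0, 0.5)

points = trail.render_points()
print(points)
```

The design choice worth noting is the bounded deque: the trail only ever shows the most recent window of samples, which visualizes what is being collected in the moment without the sketch itself accumulating a full gaze log.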
[00:21:24.458] Kent Bye: Can you elaborate on what do you mean by the information theory and legal theory? What's the intersection?
[00:21:28.999] Brittan Heller: How people respond to the knowledge that their eyes are being tracked and whether or not they consent to that after they see it.
[00:21:39.462] Kent Bye: I see. So kind of visible acts of the technology and making it clear so that from the legal perspective, they're kind of, I mean, most of these contracts that people are signing are adhesion contracts, so they really don't have any leverage, but you're just trying to sort of point out that
[00:21:54.244] Brittan Heller: There might be a different way to do meaningful informed consent. And maybe the way that we make it meaningful and informed is by showing people the data, not just telling them that their data is being tracked.
[00:22:06.031] Kent Bye: Gotcha. Okay. So there's a couple of things that you mentioned there that brought up some moments that I thought were really striking. There was one with Susan Aaronson from George Washington University, who was really advocating for data governance as an approach rather than regulation. And that was met by Florence G'sell, the representative from Europe and the EU there, who was saying that actually the EU has figured out how to do really good regulation around some of these technologies, especially with GDPR. And so there was a little bit of a battle there between data governance, which I interpret, at least from the context, as the users or the platforms or the technology companies deciding some of these things, versus what needs external regulation, something like GDPR, to come in, because we can't necessarily expect these companies to self-regulate on issues around privacy. And with the AI Act that's emerging, there are a lot of these ethical frameworks. I was talking to Daniel Leufer from Access Now, who was saying that there's a lot of utilitarian AI ethics basically saying, as long as the majority of people benefit, then we don't necessarily need to worry about the 5% of people who may be harmed. But from a human rights perspective, that's the exact opposite of really listening to the people who are being harmed the most and trying to design the systems around them. So I'd love to hear how that type of dialectic between the data governance that Susan Aaronson was arguing for contrasts with some of the more regulatory approaches that you see from the EU, and any comments or elaboration on some of those conflicts that were happening in that panel discussion.
[00:23:40.100] Brittan Heller: For me, that brought up a very fundamental question that I've never gotten a satisfactory answer to. When you're creating regulations, do you regulate based on the harm you're trying to prevent, or do you regulate based on the technology that specifically implements that harm? And there are pros and cons to either. And so I felt like, at the heart, that was the argument that they were having, where I think that Florence was saying, we know how to craft the regulations, so really focusing on the harm, and Susan was advocating more for a hardware-based approach to it. The United States, I think, used to take a very technology-centric approach to its data governance when you look at canonical internet law. And by that I mean, if you ever read an affidavit for a warrant, it is like a graveyard of antiquated tech. My favorite thing in there is a Bernoulli drive. The Bernoulli drive was from the early 80s; it had a huge floppy disk. And just on the off chance that somebody, somewhere, sometime may have relevant data stored on one of these graveyard pieces of tech, they list it all out there. That is a very different approach than saying we are creating something to protect privacy or to prevent X type of harm. And to me, there are virtues with each approach, but also downsides.
[00:25:14.237] Kent Bye: One of the other panels that I enjoyed was the one where Joe Jerome from Meta had an opportunity to speak. He was on a panel that was called Old Torts, New Harms. You mentioned torts, and I'd love it if you could maybe elaborate on tort law and harms, ways that there may be new harms emerging from these immersive technologies, and how tort law may be connected to XR right now.
[00:25:40.179] Brittan Heller: So tort law is almost like personal injury law. If you shove somebody or assault somebody or damage their property, that's the legal regime that you use. It's when you sue somebody for personal or monetary damages. The panel had Joe, and Joe is always excellent. The panel also had Mary Anne Franks from the University of Miami School of Law, who talked about her article, which relates to the risks of sexual-based harms in the metaverse. There are a few law review articles that are kind of canonical in talking about the metaverse. Eugene Volokh and Mark Lemley wrote one, and that was kind of an overview: What happens if you get punched in the metaverse? Can you sue? What happens if somebody steals your digital dollars in a game? Can you sue? And Mary Anne takes it one step further. She's talking about very foreseeable, very specific, and very harmful incidents related to sexual assault, deepfake impersonation in pornographic images, and non-consensual intimate images being deployed throughout the metaverse. So I look at the first article I mentioned, Mary Anne's article, and, to toot my own horn, my own privacy-based work; for a while, those were the only things you could find in legal academia about the metaverse. I think that what this conference did is it let legal academics know that whatever they are working on, there is an application to virtual worlds. And my hope is that there won't be just four or five articles on the metaverse going forward, and that this will really motivate people to take their area of expertise and interest and apply it to this new form.
[00:27:31.287] Kent Bye: Yeah, I really appreciated Mary Anne Franks's perspective in calling out some of the different ways in which women specifically have experienced these different types of sexual violence, something that Jessica Outlaw has talked about (I know we've talked about that previously), as well as her article. And I really appreciated Avi Bar-Zeev's presentation on eye tracking as well. I gave a talk around XR privacy, neuro-rights, and other concerns, and I feel like the privacy concerns are around the type of data that are going to be made available; Avi really stepped through how our eyes even work and the type of data that we're getting from our eyes. But there are these larger issues of XR privacy that, for me, are at the forefront of where we need some type of legislation, because we can't necessarily expect the technology companies to self-regulate and rein in how they're going to use this type of data. And, you know, there's your paper on biometric psychography, and from my conversation with Daniel Leufer of Access Now, there's GDPR and the ways that it's potentially being enforced right now, and then there's the AI Act. But in both of those, the way that they define biometric data is still somewhat tied to identity. It's still an open question, according to Daniel, whether the final version of the AI Act, which is expected to be passed sometime in 2023, will come up with a more generalized approach that removes this connection to identity. That's still yet to be determined, but at this point, privacy is very connected to personally identifiable information.
The thing that I guess I'm concerned about, from both Avi's talk and the talk that I gave on neuro-rights and other aspects of privacy, as well as Mark Miller's talk on the research they were doing at Stanford into the ways that this information could be identifiable, is that there's no federal privacy law, though there are discussions that we need one. Do you expect that we could see something here in the United States that's going to start to rein in different aspects of the type of inferences that could be made from this technology? Or do we need to rely on something GDPR-connected or the AI Act from the European perspective, and potentially this upcoming virtual worlds initiative from the EU that should be launching in May of 2023, aka the Metaverse initiative, which may have listed some concerns around the types of data that are coming from XR? So I'd love to hear a bit of an update from you and your perspective, since this is something you've been writing about and talking about. What's the outlook for what's next to actually address some of these issues of the inferences that are coming from the type of physiological and biometric data from XR?
[00:30:04.504] Brittan Heller: I think the biggest update is that, as I talked about in my paper, since there is no comprehensive federal privacy bill, this is all incumbent on states to legislate under the American federalism-based system. There are a lot more states in the next year that are going to be coming out with their own state-based versions of privacy laws; I think there are six of them. But I'm hopeful that they take the best aspects of California's mini-GDPR, Illinois' biometric definitions, and Virginia and Washington State's sort of melange of all of that. I hope they actually study what works and what doesn't and don't just copy and paste the law writ large. Having advised a lot of legislators, it's really hard for them not to do that, because they want to create consistency from state to state. So it's a very tempting place to start, and it's an even more tempting place to end up. But look at other types of cybercrime bills that have created really perverse outcomes. A lot of the laws around cyber harassment, and specifically cyberstalking, just added "cyber" to a physical stalking statute, which kind of invalidates the purpose of those bills, because to require physical contact between a cyberstalking victim and a perpetrator really defeats the point and object of that law. And I'm hopeful that the legislators are being advised by smart people who realize both the virtues and the pitfalls of cutting and pasting.
[00:31:50.214] Kent Bye: Yeah, another really striking moment was the intellectual property panel, where there was this question of: to what degree are these AI systems violating intellectual property by scraping the internet or taking copyrighted photos? And I loved the contrast between Mark Lemley, who was just saying, well, these are going to benefit people, so we should just use them, versus Michael Running Wolf, who was saying, hey, let's take a look at the types of harms that are happening to Indigenous culture from different levels of appropriation, and the ways that these AI systems have been stealing different aspects of Indigenous culture for a long time, not just in XR and AI technologies, but in other formats. And you mentioned the Seattle Seahawks as an example of a logo that's been trademarked by the NFL, but that comes from many generations of Indigenous culture. There's just that contrast with the ways that AI systems are, in some ways, doing a colonization: seizing the data, taking ownership of it, and making these higher-dimensional latent space inferences to capture that information. And I think even this week, Getty Images has started to bring lawsuits against tools like Midjourney and Stable Diffusion, because they were scraping a lot of the data without permission from copyright holders. So those discussions around intellectual property were in the context of AI, but also generalized out into the broader metaverse. I'd love to hear any other reflections you have on where we're at now with some of those issues, from Michael Running Wolf's perspective that you mentioned earlier, but also in the context of this intellectual property and these pending lawsuits that are just being brought to the courts, starting to push back against the capturing of data without permission.
[00:33:38.447] Brittan Heller: For me, my talk ended up being about content moderation and how generative AI is going to form the backbone of XR content moderation systems and really change the format and what's at issue. But I think that actually applies back to the IP panel as well, because generative AI at this point is being used by people experimenting with art and with poetry and advertising copy, and, God help them, academic essays. I mentioned this earlier in our talk, but I'm still wondering how you get a remedy when your artwork is used to feed a black box AI algorithm. It's not impossible to disaggregate it, but it's economically inefficient and really, really hard. So part of me wonders: everybody talks about these as almost a democratizing development in art creation and information, because anybody who wants can have access to Stable Diffusion and Midjourney and OpenAI's tools like ChatGPT. But if there are remedies that can only be accessed by people with very deep pocketbooks, doesn't that create this contradiction of access? And we've seen that contradiction before on the internet, in the harassment lawsuit that I was a part of. I brought it to see if an average person could get redress if they were being harassed online. And the answer was no, because I had backers who had donated seven figures' worth of their time. It would not have been possible otherwise. So are the only people who will be able to enforce their intellectual property rights corporations, or people who are already well off as it is? I hope that's not the case. But I think it will be an especially thorny issue in XR, because these companies don't have monetization schemes in place yet. Not really. And if you look at all of their developments over the last 18 months, there's a lot of focus on user-generated content and digital assets and NFTs. And the reason is that nobody wants to go to a virtual world if there's nothing in it.
And this is separate from debates about interoperability. I think it's more, you know, I'm an architecture nerd. Suppose somebody were to use some of Julia Morgan's designs in their virtual home, and then there was a lawsuit by her estate saying, you can't do that, you have to tear your virtual walls down. What will a company do if they are hosting that type of content and it basically dissolves the whole investment? So I think we'll have different categories of competing rights and unclear remedies at this point. It's a different story if you're talking about, like, a digital fashion t-shirt with Mickey Mouse on it than if somebody is using somebody's painting of a forest to create a digital environment. So I wonder if the law is going to have to evolve to accommodate the functional difference of digital objects.
[00:36:46.154] Kent Bye: Yeah, there were a number of different lightning talks that I appreciated. Brennan Spiegel did a whole talk on the medical aspects of VR, and there was another speaker talking about the FDA and how the FDA is considering different experiences or technologies to be medical devices. So I'd love to hear any reflections you had on that FDA talk, which covered some of the different programs they have for classifying and authorizing different applications to be used in a medical context.
[00:37:19.568] Brittan Heller: What I thought was interesting, and a bit of a future prediction, is the fact that the FDA approved an experience as a medical device two or three weeks ago. It's a very new development, because previously you could go on the FDA's website and see a list of the medical devices related to XR, and it's all hardware, which makes sense: it's a device. But now, talking about individualized experiences as being akin to devices, I think, shifts the paradigm and really leads into more of a diffusion of responsibility: if these things are found to hurt people, who is responsible, or if these things help people, who gets the credit, right? It also, to me, leads into more issues around the role of children's rights. A lot of people have focused on what is appropriate and inappropriate for children in the metaverse, and they've really focused on content-based dialogues at this point. Do you want a 13-year-old running around Horizon raising havoc? Probably not. Do you want a six-year-old playing a first-person shooter in XR? Probably not. But the experience approved to be the first medical-device experience in XR treats lazy eye in children, which to me creates an interesting conundrum: the hardware is rated 13-plus, but you need to treat lazy eye in a kid when they're four or six so that it doesn't require surgery. Again, it's taking existing paradigms and, because of the way the hardware mechanisms function, making us rethink our assumptions and our baselines from their legal paradigms.
[00:39:00.105] Kent Bye: Yeah, and I'd love to hear about what you were talking about in your talk, this shift from content moderation into conduct moderation as we move from 2D to 3D. In the realm of content moderation, it's been about digital artifacts that are somewhat static, in the sense that there's a photo or a video or a piece of text that you could look at and, almost in an objective way, do an analysis on, or at least offshore that work. But when we talk about conduct moderation, we're talking about real-time interpretation of someone's behaviors in an immersive environment and being able to enforce a code of conduct. So I'd love to hear any other reflections you have as we move from this paradigm of content moderation into conduct moderation.
[00:39:43.885] Brittan Heller: On the panel with me was Kim Malfacini, who works in policy at OpenAI, and Henry Ajder, who's an independent expert on synthetic media. And I wish that I'd gotten to ask Henry about the difference between open and closed models of AI content creation. He and I have been talking about this since the conference, and we'll probably write something. But what I mean by that is, Henry was saying that with open-source and closed-source technology there are different affordances, and I would add that there are different assumptions that come along with them. In internet freedom-based communities, open source is seen as an unfettered good, because it's based on the premise of access to information, and closed source is seen as walled gardens that keep people out from accessing that information. But when you're looking at the future of content moderation, or conduct moderation, or environmental moderation in XR, you can do a lot more with a closed environment than an open environment, because the closed environment will have a limited set of variables, and you have to think about the way that you're forming the AI. You can't form an AI that will account for every possible behavior or violation; it's going to have to learn from itself. So a closed universe, based on the way that AI teaches itself, would actually, I posit, be more productive and more fair and more able to meet community norms than an open-source model that derives from scraping preexisting data sets. And that brings in all of the different harms you talked about before. So I think this should cause people to rethink some of their assumptions about "open source good, closed source bad." And it's something unique to XR.
[00:41:39.115] Kent Bye: Yeah. And as we start to wrap up, I'd love to maybe end this discussion around the symposium with the last session they had, on funders that are coming forth. One of the issues that people have had is finding funding to do the type of research we need, to look at the cutting edge of the metaverse and what the existing laws are. So maybe you could talk about some of the different funders that were there and what you see as the next steps.
[00:42:02.945] Brittan Heller: We had two funders there who gave different points of view. First, we had Eli Sugarman, who has been affiliated with Schmidt Futures and the Hewlett Foundation. Eli gave a talk about how to pitch your interest to various funders, along with more strategic advice. All the talks were recorded, so you'll be able to see them on YouTube and access them if you weren't able to attend in person. The second talk came from Burcu Kilic of the Minderoo Foundation, and she was able to announce the formation and launch of the XR30 Fund. This is my favorite thing about the conference, and this is why I think most people stayed the whole time. I think three people left early. It was remarkable. She announced that this is the first non-company-based funding available for people who want to create new knowledge. So if you have an experiment you want to run, or a paper that you want to write, or a new type of experience that expands paradigms as we know them, you should apply for funding. I'm one of the people who will be reviewing applications, and they have a couple of other people, like Louis Rosenberg and others who have been deeply involved in the XR community for a long time, who are going to help review these. So I would encourage everyone to apply. It's a million-dollar fund, so if you don't apply, the answer is already no. But I'm hopeful that this will help creators and academics and people who have probing questions get the support they need to push the industry forward.
[00:43:41.215] Kent Bye: Great. And finally, what do you think the ultimate potential of these immersive technologies of virtual and augmented reality, integrated with different AI systems, might be, and what might they be able to enable?
[00:43:55.543] Brittan Heller: I think if we are looking at real-time content moderation, we may be able to use AI as a tool. And it won't be an infallible tool. It will be a tool. But we may be able to use it to help us identify the people who are engaging in harmful behaviors, and to not just block them, but engage them in the types of interactions that actually help people self-correct. That would be really exciting to me, and that would be a way that this is completely different from social media. I'm also really excited about the way that content moderation in XR is going to make the advocacy community change its tactics. By that I mean, I used to run the Center for Technology and Society, and they were very interested in slur lists: going to companies and saying, what types of words do you scan for to do content moderation? Let us know. Can we add to it? Can we see it? And that is going to be irrelevant when we move to a content moderation model that is almost like a mind map: conceptually based, where the algorithm will learn from itself and expand and contract based on actual usage and not supposition. That's going to be really exciting to me, and I want advocates to understand that so that they ask better questions. And I think that's where I'm going to end the conversation, because that's what I'm hopeful the conference did: it helped people ask better questions.
[00:45:32.790] Kent Bye: Awesome. Anything else that's left unsaid that you'd like to say to the immersive community?
[00:45:37.033] Brittan Heller: You should watch Kent's talk. It was amazing and it made every professor there feel very lazy.
[00:45:42.398] Kent Bye: Nice. Okay. Yeah, I had a lot to say, just trying to bring together some of my insights, and hopefully it'll be helpful for people as they start to figure out how to navigate different aspects of neuro-rights and how presence fits in. And yeah, it was kind of an overly ambitious talk, as always.
[00:46:00.829] Brittan Heller: And it was fabulous. I know that there were several law professors who were very, very excited about it. So thank you for presenting that foundational work.
[00:46:10.299] Kent Bye: Yeah, I might release it as an episode once it's online. I have the slides with all the citations, and I wanted to give folks an opportunity to look at some of the existing things that are happening, because as we start to look at neuro-rights and try to define the types of biometric data, I'm using my approach towards presence to create a bit of a taxonomy for what types of data are coming out, both from the first-person perspective, but also for bystander privacy, as you look at all these things. And hopefully things will come out of the EU; that's at least where I'm putting my short-term hopes, with what's happening with the GDPR, with the AI Act, and with the virtual worlds metaverse initiative that's coming forth, where there may be larger discussions. And hopefully the US will eventually catch up with whatever they start to do with a federal privacy law. So yeah, hopefully that'll be helpful for folks. I also just appreciate all the work that you've done on this issue; your writings on biometric psychography have been a deep inspiration informing my own approach to talking about privacy. And it was great to get all these different legal scholars together to hear their perspectives and to get a bit of a wrap-up from you from this gathering on existing law and extended reality. So thanks again for joining me to help unpack this gathering and see where we go with the future of looking at the intersection of law, policy, and technology.
[00:47:22.599] Brittan Heller: Thank you again for having me. It was really fun and great to see you in person.
[00:47:26.901] Kent Bye: So that was Brittan Heller. She's an affiliate at the Stanford Cyber Policy Center, and she was the organizer of the symposium called Existing Law and Extended Reality that happened at Stanford Law School on Friday, January 6, 2023. You can look in the show notes to see a link to the different speakers, as well as links to all the YouTube videos, where you can go check out all the different panel discussions. So for me, I guess one of the big takeaways was that Eugene Volokh was basically saying that, at least from his perspective, there's nothing radically different happening within XR technologies that requires new laws; for most of this, we can start to apply existing laws. Although, as we go on, that may change, and there may be more of a need to, say, define what some of these VR street crimes are. There's also what Eugene Volokh calls the Bangladesh problem, and what Brittan and I call the global jurisdiction problem: if you are trying to track down individuals, how do you figure out where they're at, what the jurisdiction is, and where the criminal law applies, and how that may change the laws of evidence. There are implications for the third-party doctrine, and a variety of different harms that are happening within these different virtual environments, including the different sexual-based harms. There are also the implications of what's going to happen in the future with AI doing moderation, moving from content moderation to conduct moderation, which was a thing that Brittan emphasized pretty heavily within her talk, along with some of what's happening with the FDA and these medical devices. For me, I guess the biggest open question is that we need some legislation on privacy.
There's no comprehensive federal privacy law in the United States, and the latest discussions that I've seen, at least, aren't necessarily thinking about what's happening with XR technologies or neurotechnologies. Where I do see a lot of movement is in what's happening in the European Union, with the GDPR and whether there may need to be some amendments to change some of the definitions for how biometric data is defined. I have some follow-up discussions talking about the AI Act as well as the Digital Services Act and Digital Markets Act, and this upcoming Metaverse initiative, or virtual worlds initiative, that's happening in the context of the EU, which may start to unpack that as well. In follow-up conversations in this series, I'll be talking to other folks based in the United States and also in the European Union. Then at the end, I'll be digging into moral philosophy and ethics, taking a broader look at what's happening with XR ethics and the research that's happening in the context of XR. And I'll end with a conversation that I had about process philosophy with Matthew Segall, whom I've interviewed previously. If you do want to check out my talk, I'll put it in the show notes here, or you can look it up on YouTube. It's listed as "Lightning Talk: Philosophical Frameworks for the Metaverse with Kent Bye," but my talk is actually titled "Philosophical Reflections on Privacy from an XR Journalist," looking specifically at these issues of privacy and different reflections there. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue bringing you this coverage.
So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.