#1179: Discussion of XRSI’s Privacy Framework 1.0 from 2020

On September 8, 2020, the XR Safety Initiative launched its XRSI Privacy Framework 1.0, a set of self-directed privacy guidelines that aims to empower “individuals and organizations with a common language and a practical tool that is flexible enough to address diverse privacy needs and is understood by technical and non-technical audiences.” It is “inspired by the NIST privacy framework’s approach,” which has the five functions of “Identify, Govern, Control, Communicate, and Protect,” though the XRSI Privacy Framework condenses these into four: “Assess, Inform, Manage, and Prevent.” XRSI is in the process of developing a 2.0 version of its Privacy and Safety Framework, which is aiming for a release by the end of 2023.

Their September 2020 paper surveys the fragmented nature of US privacy law, with insights from the EU, and adds some XR-specific privacy considerations, including what they call “Biometrically-Inferred Data,” which they define as a “collection of datasets that are the result of information inferred from behavioral, physical, and psychological biometric identification techniques and other nonverbal communication methods.” Most of the examples of Biometrically-Inferred Data listed are the same types of physical or physiological biometric identification techniques that are tied back to identity, but they also included an adapted graphic from Kröger et al.’s 2020 paper “What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking” to further elaborate on some of the behavioral or psychographic information that could be inferred from eye tracking. How biometric inferences from XR data will be treated by the law remains a big open question, and it’s something that I first started covering back in March 2017 in an interview titled “Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior.”

Brittan Heller’s February 2021 Vanderbilt Journal of Entertainment and Technology Law article coined the phrase “biometric psychography,” which she defines as “a new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests.” Heller emphasizes that it is these psychographic inferences that differentiate XR data from existing legal definitions of biometric data, which are often explicitly tied to identity. For example, Heller states, “Under Illinois state law, a ‘biometric identifier’ is a bodily imprint or attribute that can be used to uniquely distinguish an individual.” The lack of explicit personally identifiable information in the types of biometric and physiological data that come from XR means that this data lives within an undefined legal grey zone that is largely unprotected by most existing privacy laws.

One of the limitations of self-regulatory guidelines like XRSI’s Privacy Framework is getting major industry players like Meta, Google, Valve, or Apple to adopt them in the first place. And even if they did, there’s still the open question of enforcement. Ultimately, in order to secure consumer privacy protections from the biggest players, we’ll need either a comprehensive federal privacy law in the United States or stronger privacy protections at the state level. But not everyone lives in California, which has some of the strongest consumer privacy protections.

But this paper is explicit in targeting individuals and organizations to provide a “baseline,” offering “solution-based controls that have principles like ‘privacy by design’ and ‘privacy by default’ baked in, driven by trust, transparency, accountability, and human-centric design.” So its utility lies in helping organizations understand the existing legal landscape and in pointing out some specific considerations of XR data and XR privacy. Since there has yet to be a comprehensive federal privacy law, some of the XR-specific concerns first covered in this framework may still be relevant as a lens for informing potential federal privacy legislation. Companies often follow only the bare minimum of what’s legally required, and since we’re in an interim space with XR privacy, this framework lays out some foundational principles for companies to voluntarily adopt. XRSI is also collaborating with Friends of Europe to “explore innovative policy solutions for possible transatlantic regulation of metaverse.”

This interview with five contributors to the XR Privacy Framework 1.0 was recorded as a livestream during the XR Safety Awareness Week in 2020, and features:

  • Suchi Pahi – Data Privacy and Cybersecurity Lawyer
  • Kavya Pearlman – Founder & CEO, XR Safety Initiative
  • Noble Ackerson – leader of Data Governance initiatives & Product Manager for Ventera Corporation
  • Jeremy Nelson – Director, Extended Reality Initiative (XRI) – Center for Academic Innovation, University of Michigan
  • David Clarke – Cybersecurity and data protection work and EU-GDPR Strategy Advisor for XRSI

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that's looking at the future of spatial computing and the ethical and moral dilemmas of mixed reality. So continuing on my series of looking at XR privacy, today's episode is going to be looking at the XRSI Privacy Framework 1.0 that was first released back on September 8th of 2020. This is actually a conversation that I had a couple of years ago at the XR Safety Awareness Week back from December 10th of 2020. So, Kavya Pearlman is the founder and CEO of XR Safety Initiative, and they put out this XRSI Privacy Framework 1.0 back on September 8th, 2020. So I had this discussion, breaking it down with five of the different contributors of this, and just to give it a bit of a context. So this is an early framework that's trying to pull in all the different laws and perspectives. There's a very fragmented legal landscape in the United States, and so it's trying to pull in all the different relevant laws, but also to take a very XR-specific look at that. Kavya was also taking inspiration from the NIST privacy framework, which has five functions of identify, govern, control, communicate, and protect. And so this XRSI privacy framework has assess, inform, manage, and prevent. And so just to give a bit more context to the framework, they self-describe it as a component that empowers individuals and organizations with a common language and practical tool that is flexible enough to address diverse privacy needs and is understood by technical and non-technical audiences. This framework draws a baseline, offering solutions-based controls that have principles like privacy by design and privacy by default baked in, driven by trust, transparency, accountability, and human-centric design. So, this is really like a guidance that is for companies to self-adopt and potentially helping to develop different laws and regulations.
I personally think that at some point in order to get big entities like Meta to really comply to different aspects of privacy, that you're actually going to need some privacy legislation in the United States that's going to have some real teeth of not only trying to lay out what the boundaries are, but also have a whole level of enforcement. So there's still a lot of open questions as to whether or not we're going to have a federal privacy law. And so something like this XRSI privacy framework could help to take into account some of the different XR specific considerations. They were very early in trying to identify the inference based aspects of XR privacy by what they call the biometrically inferred data. So we'll dig into that in the wrap up. Yeah. And just to note that they are continuing on into the 2.0 version of this privacy and safety framework, and they're going to be expanding it out to have additional elements of XR ethics and safety that are implemented into it. They're in the process of designing it and they're aiming for December of 2023 to be able to come up with their latest iteration of this XR privacy and safety framework. This is a conversation to kind of break down both the framework and some of the major ideas of what they were trying to do, as well as some people in different parts of the XR industry that were participating in helping to write it and also potentially deploy it within their own context of wherever they're at in their companies. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Suchi, Kavya, Noble, Jeremy, and David happened as a live stream on Thursday, December 10th, 2020. So with that, let's go ahead and dive right in. All right. Hello everybody. Thanks for joining us today on this live recording of the Voices of VR podcast. I'm here in collaboration with XRSI. We're going to be doing a deep dive into the XRSI privacy framework with a number of the contributors and authors.
And so first I'd like to just have everybody that's here on this discussion today, introduce yourself and tell me a bit about what you're doing in this XR industry and your connection to this XRSI privacy framework.

[00:03:49.460] Suchi Pahi: Okay, great. I'm happy to go first. My name is Suchi Pahi. I'm a data privacy and cybersecurity lawyer. And I got pulled in because I'm obsessed with privacy and saw that Kavya and her team were working on a privacy framework and had a lot of fun digging in from sort of a US environment and what product engineering and privacy side looks like.

[00:04:10.440] Kavya Pearlman: I'll go ahead next. I'm Kavya Pearlman. I'm the founder and CEO of XR Safety Initiative, the organization behind this XRSI privacy framework. We have been dedicated and committed to helping build safe and inclusive XR ecosystems, with privacy being just one of the major pillars of it. It makes sense, and I'm really excited to have this conversation given all of the news around various different Facebook-related issues and lawsuits that are coming up. So very timely. Thank you, Kent, for hosting this.

[00:04:44.143] Noble Ackerson: I guess I'll go next. Hi, I'm Noble. Noble Ackerson, formerly worked in the computer space for AR applications for fitness, most recently led data governance initiatives back in 2018 to help global international development nonprofits become more aware of how they handle data, especially for the types of work that they were doing with helping out organizations like the UN and their digital ID 2020 project, telling them how awful that idea was. XRSI, regarding this specific organization, I was a privacy framework contributor. Thanks, Kavya, for including me. And currently, a lead product and innovation focused on data privacy and basic general innovation, bleeding edge technologies for government agencies in D.C. with Ventera Corporation. So that's basically me in a nutshell.

[00:05:39.829] Jeremy Nelson: Great. I'll go next. My name is Jeremy Nelson. I'm the director of an extended reality initiative at the University of Michigan. So we're focused on bringing XR technologies for teaching and learning broadly to all of our 19 schools and colleges. And I connected with Kavya last year and we brought her out to campus to share the important work around safety and privacy and security to our campus community. And she asked me and a colleague at Georgia Tech to help contribute to the privacy framework from a higher ed perspective in the terms of the data privacy requirements that we're under as a higher education institution. And so I'll be happy to talk more about that today.

[00:06:22.918] Kent Bye: Awesome.

[00:06:24.808] David Clarke: Yeah, thank you very much. Thanks for inviting me on the panel. Yeah, I've known Kavya a while now. We've had dialogue about various different things over the last few years. Yeah, and it's great to be part of the initiative. I've got a background in dealing with cybersecurity and data protection, some kind of pretty large companies around the world. And I think it was last year, I can't remember, I was involved with the UK government's verification of children online project. And I still notice them using some bits and pieces I put together for them on their later projects. So yeah, kind of a very interesting kind of idea of protecting, you know, not just privacy, kind of protecting vulnerable people online as well.

[00:07:08.693] Kent Bye: Yeah, so I know that this Privacy Framework 1.0 came out around the Facebook Connect one, which there was a lot of things that were happening. And so I didn't have a chance to be able to dive into it until just really taking a look at it and in preparation for this talk that we're having here. So I've been looking a lot at different aspects of ethics and privacy. And the thing that I've noticed is that there's a lot of different contexts that depending on whatever you're doing, if you're doing higher education, if you're doing medical applications, if you're doing entertainment, each of those are different contexts, if you're working with children, if you're not. In some ways, these technologies are always kind of blurring those lines of those contexts. And so maybe you're kind of like trying to fuse together all these different systems. So I think in some sense, I see this privacy framework as for number one, trying to fuse together all these different conceptual frameworks and to see, okay, what's new with XR to be able to handle to different people. But there's also what I think of like Lawrence Lessig's model, where there's these four major different aspects of any socioeconomic issue or any technology. There's the actual technological architecture and the code. So how is that code going to be implemented to be able to best handle privacy? There's the law, and so what's the relationship between the legal relationships between that? There's the market dynamics, which I think this maybe gets it a little bit when it comes to the digital divide and other larger issues like that, but this privacy aspect isn't necessarily trying to address the market dynamics. And then there's the culture, which is the relationship to the people and the human factors. And so I think that in some ways I see this privacy framework as kind of addressing that.
So just as an overview, I think we're going to be diving into the different expertise here with people who have a law background, people who are looking at a very specific context of higher education or looking at some of the human factors. But maybe Kavya, you could help to also set the deeper context here in terms of the intention of this framework, how it came about, what was the impetus for why XRSI wanted to put together this framework and start to bring together all these different collaborators on this.

[00:09:05.826] Kavya Pearlman: Yeah, well, thank you, Kent. And so you already know that we have been working very closely with Facebook Reality Lab. And while this may alarm some, but this is also comforting to many that we do that by remaining unbiased and sort of ethical. And that was the goal is when we knew that Project Aria is coming up, Facebook Connect, they are going to make so many announcements. You can almost say, and all the panelists and contributors would concur, we kind of rushed to define this outline to get it out before those announcements come through. And at the same time, we also gave a heads up to the Facebook policy team that this is in the works. And during the time when Facebook Connect was going on, they rolled out some kind of responsible innovation principles that I will talk about later; we advised on those responsible innovation principles. But the difference is the Facebook responsible innovation principles. They just, because, you know, even though we advise, we don't want to be their PR people. So I didn't give them the exact, like, do this and do that. But I was like, okay, this is okay. But one of the things that I noticed is like people first versus human-centric. So our privacy framework was based off of these immersive standards that we pushed out in May of 2020, which basically included these goals like protect identity, build new rules, people first, all of these. There are about five principles. So based on the ethical principles, we felt this dire need to get in front of this, what I call the fast-moving train, and put some kind of a roadmap or a guide so that there is this independent, interdisciplinary group of people. And at times, we have even been called activists, to try to provide some sort of a blueprint and navigate. So it started with the idea of privacy, but I'm sure Suchi will speak to it after a lot of advice from the legal side. It really is more of a privacy and safety.
So it is a more of a blueprint or an outline for building trust and safety in the XR space.

[00:11:30.307] Kent Bye: Yeah, maybe we could just have each person talk about those aspects of your intention for being involved with this and like how I broke it down in terms of the legal, the human factors and the other specific contexts. And so maybe we'll start with you, Noble, because I know that the first section here, it goes through like these four areas of the assess, inform, manage and prevent. It sounds like you were involved with that a little bit, but maybe you just sort of give a little bit more context as to your intention with trying to help navigate this and create some sense-making frameworks to be able to understand what's happening here.

[00:12:01.714] Noble Ackerson: Yeah, thank you. So I came in, Kavya was kind enough to pull me in based on some previous work that I'd done around awareness. And so I basically fell in the, specifically in the informed bucket of work. And I guess as a product lead doing things, both emerging technology and current, I kind of fall in the, less in the legal nowadays and more in the human factors taxonomy that you laid out. And the contribution that, from an awareness standpoint, that I was sort of thinking about was, you know, how to structure the process by which both consumers of XR technologies and creators of XR technologies can work together to deliver something that addresses a lot of what we see today. Shedding data that we own, obviously is my data, see these organizations in a sort of a zero-sum sort of mentality, I provide you service, you provide me all the data, right? So there were three core areas that constituted that. There was the context, give me enough information that is transparent and not boiled down in legal speak. GDPR, for example, use the explanation of explaining it to a child or, you know, something that anyone can understand. There's some sort of design patterns that can be adapted to that to move away from some of the dark patterns that we see. So just context, give me the information in a way that helps me understand the evolving privacy landscape. So that was the first piece. Once I understand, I have the choice as an individual to act on whether I'm adhering on what I want you to collect as a business or as a service and what I don't want you to connect. And then the third piece was the controls. So how intuitive are the controls for me to act on my choices, right? 
So as we know too well, in today's world with normal applications that we've been using over the last couple of decades, if I wanted to unsubscribe, a simple example, if I wanted to unsubscribe for or a service that I no longer want, I consider spam, the dark pattern would be just sort of a no contrast link at two point font hidden at the bottom of the email that you'd still have to click through and call someone's mom in order to get it. So just starting from that baseline, we expanded and applied it generally to this framework. And I think it turned out pretty well.

[00:14:36.722] Kent Bye: Yeah, and before we dive into the other legal aspects, Kavya, I want to just sort of loop back to you because as I look at this privacy framework, you know, Noble just talked about the inform aspect, but you start with assess, so assessment and mapping, risk assessment; inform, so the context, choice, control, child safety; manage, so awareness training, monitor and review data processing, special data type considerations; and then prevent, so protecting the data, identity access and control, data security and harm prevention. So maybe, like, as I'm looking at that, I'm sort of seeing how there's different stakeholders with each of those. You know, there's the technology's relationship to the people, there's the compliance officer, there's maybe the designers and the technologists and the architects, and that you're trying to set forth these step-by-step processes so that each of these stakeholders can start to take into consideration for how some of these privacy considerations maybe fit into their design process. And so I'm just curious if maybe you could expand on that a little bit in terms of like, if that's from an existing framework, if there's new things in here that you haven't seen in other places, or, you know, how you're kind of conceptualizing of that as like a framework and how that is going to be put out for people who are maybe looking at this framework and wanting to implement it.

[00:15:49.399] Kavya Pearlman: Totally. And the idea or the initiation of this thought was from my previous experience working at Linden Lab. So I was the head of security for Linden Lab. And during my time, this virtual reality platform was launched called Sansar. And as we all know, Linden Lab is also a maker of the oldest existing virtual world. So in 2017, when I joined the lab, I needed to build some sort of a federal level or some framework for cybersecurity. And so I adopted or helped them adopt NIST cybersecurity framework. And it kind of really worked out because they had to comply with all the state requirements. So the state compliance, like 52 states, as well as the international So during my time, GDPR had come about in 2018. So all of these pieces that I helped the lab adopt in sort of a cross section of virtual reality, as well as this legacy infrastructure that was being moved to the cloud, to the microservices, which is another way of, you know, creating these IT infrastructure that are just much more optimized. That's when I stumbled upon and realized NIST cybersecurity framework was really good for this thing. Why? Because you could speak to all the way from the top management to anybody at the reception or anyone at any level, and they would really plug in and understand. So you know how there is this four pillars in the XRSI framework. It is sort of like following a similar thought pattern that NIST did, allowing a common language, but also going granular all the way to allowing designers and engineers to, let's say, take a control and run with it and be able to implement that. What do the lawyers need to do in the organization? It's very individual focused from the sense of it caters to the human rights, such as, you know, do no harm type of a notion. 
But at the same time, it goes all the way to like, hey, vulnerability disclosure, or threat intelligence exchange, and very granular stuff that you can just hand it to an engineer as a control, and they could just implement some of these things. There are three aspects that are never seen before in any other framework, whether it's a cybersecurity or privacy framework. One is the special data type consideration, where we introduced another term called biometrically inferred data, which is so critical to take note of as well as understand and protect in case of XR domain. Then another one is the child safety. This is something that is often just COPPA, which is the Children's Online Privacy Protection Act. That's the one that we rely on. But being back at the lab, I understood that checking the box for COPPA does not protect children. And David can obviously talk more about it. David himself has been advising UK's ICO about these issues. I just had the luxury of these amazing, brilliant minds advising me, as well as this background of what needs to happen in order for an organization to adopt these things. And then my previous experience working with Facebook as well, back before the lab, I did that. So that was, again, understanding these emerging technology risks. So how do we combat them proactively? So the third mindset shift or difference in this framework is it's not protect, but intentionally prevent. And that needs to be there for XR because seeing is no longer believing, as well as once you see something, you can't unsee something. And that's why we have to prevent harm rather than just protect. Of course, we need to manage the risk, but we definitely need to prevent.

[00:19:46.797] Kent Bye: Cool. Yeah. And as I think about this, I want to bring Jeremy in here and then sort of get into the more legal aspects because the legal sets the larger obligations in terms of what the government is requiring. And a lot of these terms of biometric data or whatever get defined in those laws and that we're kind of in the midst of potentially having a federal privacy law. And so we'll sort of maybe get into some of those dimensions, but the legal aspect, both in the U S law, but GDPR and European law are both playing into this in different ways. But Jeremy, I want to bring you in here just to get your perspective as someone from coming from higher ed, you look at something like the privacy framework. There's for me, when I look at this, there's lots of different contexts that are out there. And education is a very specific context. Right. And so from your perspective, like, how are you oriented to this framework and what are the, either the more generalized aspects that is universal to everything that you're doing and what is maybe more specific to what you're doing with higher ed.

[00:20:39.519] Jeremy Nelson: Yeah, great question. So, you know, as a public research institution and, you know, as a higher education institution in the United States, we're bound by federal privacy regulations as well. There's the Family Educational Rights and Privacy Act, or FERPA, which is kind of a corollary to HIPAA, around educational data. So any student data related to, you know, their learning and their education, we're bound to protect that, to keep that information secure and protected, similar to HIPAA. And so as we're beginning to develop or procure educational content that students would use as part of their learning experience, whether it's learning how to operate a nuclear reactor or doing a physics course or creating a narrative in an English course using augmented reality, we need to be mindful of how that data is being used, who has access to it, what can be inferred from it, right, especially around as we get into the medical space. And we're knowing some of the largest vendors in this space are Microsoft and Facebook and HTC and things like that. So we're very interested and want to be mindful, we don't want to play catch up. So I was really interested in like, I don't want to try to fix this later, or oh, we should have addressed this later. So it was really important for me to make this part of our initiative early on is thinking about privacy and safety. So whether we're building an application with our team here, I want to be thinking about how do we protect the data? How do we authenticate properly? Or as we go out to purchase or use some of these platforms, not even talking about the headsets, but just other content delivery platforms, you know, we need to have agreements with those vendors to protect our information, student information.

[00:22:19.446] Kent Bye: Hmm. Yeah. And what, and so what was the big motivating factor for you to sort of get involved with helping to be directed into this XRSI framework then?

[00:22:29.005] Jeremy Nelson: Well, I mean, part of it is to put my efforts where my mouth is, right? If I'm going to say it's important, we need to contribute. Kavya had asked us to help. And so I was like, all right, yeah, we can bring together some of our resources and looking at it. It's not just FERPA. We're also bound by others: Title VI of the Civil Rights Act, Title IX of the Higher Education Act, the Americans with Disabilities Act. So we have a lot of work protecting that information in our other areas of creating online education. I mean, it was just important for me to help contribute that expertise, you know, we have here at the university and help lead in this way and support efforts.

[00:23:05.984] Kent Bye: Awesome. Thanks. And yeah, let's move to some of the legal aspects with both Suchi and David. So I recently did an interview with Joe Jerome, where we kind of do a whole primer of the history of privacy and in the United States, at least the privacy law is so fragmented. It's all over the place. There's a lot of discussions about a federal privacy law, but it's not like one place to go to, to like figure out what the universal framework for what the concepts and legal approaches to privacy are. There's lots of different things all over the place. So I think in this privacy framework, I'm seeing like you're starting to pull those things together, but Suchi, maybe you could just give a little bit more context from your perspective, that legal angle, like what type of things that you were trying to do with participating in this and, you know, just kind of give maybe a bit of a, an overview of that landscape of the legal aspects that are also included in here.

[00:23:52.813] Suchi Pahi: Yeah. So the interesting part of that is Joe is fantastic and he was involved in doing the development of this framework with Kavya and everyone else who was involved in developing. So I sort of came up on the later end of this privacy framework and did a review from the legal perspective and a product engineering guidance perspective. I mean, what's fascinating you about it is fascinating me about it, is that Kavya pulled together a team that sort of took the EU environment, which is the GDPR, it's all about data privacy as a human right, and built that into this framework, and also mimicked a little bit of the CCPA and gave us what looks like a really robust privacy framework and got ahead of where XR could potentially end up if it was just left to its own devices. something that I find really exciting about this privacy framework. And to talk about sort of where the U.S. is now, we do have a lot of sectoral approaches. I mean, anyone you talk to in the privacy space typically says, hey, the U.S. is just very fragmented and very messy when it comes to data privacy. And this is true, but at the same time, it's also true that most companies have customers in California. And because California's requirements tend to be so demanding and robust, especially over the last two years, most companies are adhering to California standards. So.

[00:25:16.447] Kent Bye: Hmm. Yeah. And what was the other intention for why you wanted to be involved with this?

[00:25:22.630] Suchi Pahi: I think it's a good idea to at least set what you consider the best floor and then start the conversation and push people to the table because otherwise you get left with this just kind of messy chaos and people are doing whatever they want. And the US consumer group or user groups are moving more towards being privacy aware and wanting to have control over where their information goes and what their identity is and who sees all of it. That's been a major shift in just the last five years.

[00:25:53.455] Kent Bye: Yeah. Yeah. And so David, I know that what's happening over in the European Union and the General Data Protection Regulation, the GDPR, it's like Suchi just said, it's like treating privacy as a human right and really having that much more comprehensive approach to privacy. And maybe you could just talk a little bit generally around if you see that GDPR is already robust enough to be able to handle everything, or if there's unique considerations with XR that are actually maybe saying that it needs to be expanded in some ways, or what kind of specific insights that can give.

[00:26:30.929] David Clarke: That's a really good point. And I think, probably like many privacy regulations, they probably would have worked great 10, 15 years ago in their current format. You know, technology has really moved fast, much faster than maybe the people who wrote the legislation envisioned. So, yeah, the concept's right. The principles are absolutely right. But actually, you know, making it work is a totally different kettle of fish. And someone said this earlier: the earlier that these frameworks can be in place, the better it is, because then there's kind of early agreement. I mean, one of the things in the GDPR is Article 8, which is verifiable parental consent, which sounds pretty straightforward, but actually it's phenomenally difficult to do electronically. How do you verify that a child has a parent, that this really is their parent? And from there, you've got to derive their age. And quite often, it may not be a parent, it may be a guardian. So you've got this complexity of how do you make that work online. And the current status is that actually it's down to the guardian to sort this out. And the reality is that can't be done. You know, I've got three children. If they've all got 30, 40 applications each, I really cannot be managing that for my kids. That's 90 applications I've got to manage all the time and make sure it's age appropriate for them. You can't do it. And I'm reasonably technically aware. So what it means is there's a whole raft of potential parents who will not have a clue. Most parents I talk to kind of live in the world of make-believe. They normally go, you know what, my kids kind of know best and they can look after themselves. Yeah. Would you do that with your kid and let him go into a bar or a pub or a club? Do you remember those days when we could do that? But in the days that you could do that, yeah,
you wouldn't do it, you wouldn't do it in the physical world, so why are you doing it in the electronic world? And I think it's become even more critical now, not just for children. With the lockdown that we've had in the UK, I think young people, and more and more people generally, are very susceptible to, you know, online and digital harms, much more vulnerable than maybe they've been before. So, you know, there is very little control. I kind of agree with training and awareness, but I'm not convinced it's that usable most of the time. There needs to be something where someone can say, you know, this isn't right. How do we manage it? How do we control it? You know, I'd have trouble controlling my children's stuff, and I've tried everything to the nth degree. You know, in the end, with my son, I had to put my router in a metal box that I locked up, because he figured out that with all the controls I put on the router, he only had to put a pin in the back of the router and reset it. And, you know, for about three weeks, he had me fooled, because he was pretending to kick himself off at nine o'clock every night, or whatever the time was. And then one day, he was on it, like, 20 past nine, and I'm going, hey, how did you do it? And he wouldn't tell me for two days, he wouldn't tell me. So I said, look, you're not in trouble, but you've got to tell me, it's driving me around the bend. And it was because he'd figured out you can put a pin in the back. And that was when he was about 11. You know, so it's really tough. And, you know, most parents, when I talk to them, they go, you know, my kid's a good kid. He won't do that kind of stuff. And it doesn't make sense, because people have paid me loads of money to manage internal security for a company, where people are paid to do a job and I get paid to make sure they don't do stupid things. It doesn't make sense.
I wouldn't be needed if you could train adults to do that. Why do you think that's fair on kids? And of course, you've got that concept that, you know, I think is valid in many cases, that the trigger is age. But, you know, you're going to have different capabilities and different maturity levels at different ages, because actually age is not the full story. It's just the trigger. It's the trigger to say, OK, from now we've got to do age-appropriate design. And then that kicks into place. And then that's a whole, you know, how do you manage that? How do you make these decisions on what's appropriate and not appropriate? And many normal games, let alone virtual reality games, you know, they're so loaded with things to make them addictive. And someone mentioned the dark marketing capability. These things are just kind of built into them the whole time. And I think people realize they're manipulating, you know, people from a young age outwards. So I think there's a lot to be done there. And the thing is, it is a man-made world. We can do something about it. It's not like we're moving Mount Everest or anything. It's only tech. And we can do anything with tech if we wanted to. But it's getting in early enough. Because actually, when I've been at meetings with the big tech companies, they all talk the well-meaning talk. But you can see that they're not going to be the first mover, because the first mover will lose out. And that's some of the big difficulty.

[00:31:21.738] Kent Bye: And I think one of the other things in talking to Joe Jerome is how, from the legal perspective, once a regulation is passed, then sometimes it will actually set forth the definitions for, say, what the definition of biometric data is. So for example, Illinois has a very specific definition that could cover aspects of what we would consider biometric data, because it's coming from our body, but it may have these other requirements around it, like that it has to be personally identifiable. And so I guess I wanted to just get the perspective from both the lawyers, because it's one thing to define, in a privacy framework, a definition for a class of data, but then it's a whole other thing what is defined in the law and what that means. And so when it comes to, say, some of this biometrically inferred data, whether it's facial recognition, iris scanning, retinal analysis, voice recognition, recognizing your ear shapes or keystrokes, gait analysis, gaze analysis, eye tracking, there's galvanic skin response and EEG, ECG, EMG. So there are so many different levels of this data that are out there. And so from a legal perspective, do you feel like that is part of the role of this privacy framework? Or also just talking to legislators? Like, XR to me, that seems to be one of the biggest areas, which is these whole new classes of data. We don't actually know all the risks. We certainly know it's going to be revealing some intimate information. We may not always know if it's personally identifiable or not. Probably over time, it's going to be personally identifiable. How do we deal with it? How do we let people record it? Just trying to get a sense from either Suchi or David in terms of biometric data and the legal perspective on that.

[00:32:58.415] David Clarke: I was going to say, one thing to mention on the GDPR is that there's also the concept of high-risk processing. So not only do we have these data types that we have to be careful of, it's how those are used. And under the GDPR, how you use them will actually increase the risk and can make it what's called high-risk processing. So you may have multiple things together that actually make it much more vulnerable and much higher risk. So it's not just because it's a child's data. It's not just because it's eye movement. It's putting all those together and how it's being used in any application.

[00:33:35.899] Suchi Pahi: To sort of zoom out a little bit, Kent, you're touching on what I foresee would be the next couple of steps for XRSI or any other groups in this, which is to try to get this privacy framework in front of legislators and into actual legislation. And maybe that's a state-by-state approach, maybe that's a federal approach, I don't know. Over the next couple of weeks, you're going to see a lot of 2021 predictions from many law firms and also privacy lawyers about what they expect to see in terms of privacy laws next year. But one of the things is that without enforcement, there's really no obligation on a company to do anything. And so people can sign things and say, we agreed to use this privacy framework, agree being a strong term, but it's not legally binding. So really, the whole self-led, self-regulated industry thing is something we've tried and are currently reaping the benefits of. So the goal now should be to educate people and legislators, and I feel like that's the ball XRSI has really started rolling, about what it is that we're going to see coming down the road. Like, what are the technical specialists seeing that the rest of us should be aware of when we're trying to protect people's privacy and also enable them to enjoy the world of XR? And something Kavya has said a couple of times when we've talked is, you know, Suchi, this is the new, next internet. And I personally, even though I'm reading all of the information and writing about it and working in it, cannot fathom what an XR new internet really means. What does it mean for kids? What does it mean for medicine? What does it mean for the average person who is just trying to go through their life and maybe virtual meetings? What does it actually mean? And what are the harms that come out of it?
And I think that's the exciting stuff right now: if you're a technical specialist in the XR space, you have the opportunity to talk to your state legislators and say, look, biometric data means X according to Illinois. And that's great in the facial recognition world, but biometrically inferred data could cover so much more and could have XYZ consequences over the lifetime of a person. So maybe one of the approaches could be, hey, look at a kid who's starting in an XR world with their playtime, education, blah, blah, blah. How does that look by the time they get to college applications? How does that look when it comes to the criminal justice system? And what do we do about disparities, you know, vulnerable populations, Black Lives Matter? These problems aren't magically gone, and do the types of definitions we use affect all of those social movements as well? Like, whose life is being affected and how? And so these are the case studies slash use cases that now need to be made and advocated for or against, so that we can have regulations that make sense in this way.

[00:36:29.721] Kent Bye: Yeah, it seems like there are a number of different stakeholders here. There are the laws that are being set, which are going to be dictating what the technology platform providers, the big companies like Facebook or Apple or Google or Microsoft, you know, what their obligations are in terms of how they're designing their platforms. Because the technology is going to be built by these big major tech corporations, a big part of it is whatever they decide to do. So it's nice to hear that you're in conversation with them, but you point out that unless there's some enforcement, some actual taking of these ideas and putting them into law with some sort of check and balance and enforcement, then that's going to be a big part. But there are other aspects here that I think you're trying to address, which is at the designer and implementer level. So Noble, I know that you're maybe more at the ground level, as someone who's designing technology experiences with people, and you have to sort of navigate both what the obligations are from a regulatory perspective, but also try to understand the different design processes, to figure out how you conceive of a conceptual framework of privacy to help put that into the design process. So maybe you could just unpack that a little bit, as you are maybe on the front lines of helping to implement some of this, you know, some of the challenges you face or how you navigate that.

[00:37:44.272] Noble Ackerson: Yeah, so good question. In my world, in sort of the product, the implementation, like you said, ground level, we definitely straddle the market, the legal, and sort of how users interact with the product. And so from the legal standpoint, I personally run into a lot of complexity, where the guidance coming in from some of the leading regulations, such as the GDPR, tends to focus more on individuals. You know, privacy is a human right, and I totally agree, versus, as a great example, individuals adjacent to me. So if I were to share my information, that data could influence someone else, like my family member. So I'll give you an example. If I share my 23andMe data today, from a biometrically inferred data standpoint, is it just my data, if I've shared my DNA with my family? What happens in the case of a breach? Who bears the brunt of breach disclosure if I'm the chief data officer or whatever at 23andMe? So I find that that equates to things that we're talking about today with XR, and with the news that came in a couple of days ago, like, you know, how this equates to, say, Oculus and Facebook, right? So I'm tying in a device that can be shared amongst the family. We're all quarantined, we're all using it. A typical home is probably going to have one device per Facebook profile. And if I'm forced to tie that in, what's being inferred from the biometrics that are coming out of that? And how do you design for that if you want to not go in the direction of Oculus? So as a business, as Facebook or Oculus, you're almost muddying the data points for a specific individual. And how useful is the data coming out? So these are the kinds of things that current regulatory environments, with their regulations, do not really cover, and an opportunity, as someone said earlier, for organizations like XRSI, the regulators, and other opposing forces to help bring into balance.
That way, we're not just principled in what we deliver, but we approach it with treating data as a human right sort of at the forefront of how we attack this.

[00:39:59.203] Kent Bye: I'm curious to hear your perspective as well, Jeremy, because you're also in the position of potentially implementing this. We talked a little bit about the regulations and stuff, but there are the four phases here that this privacy framework is laying out: assess, inform, manage, and prevent. And so I'm just curious to hear, as you start to put this framework into practice, what your special considerations are, or what kind of feedback you have in terms of how this has maybe helped provide you with some conceptual frames to be able to solve some of the specific problems that you're trying to solve in your context.

[00:40:32.501] Jeremy Nelson: Yeah. I mean, it definitely helps as we're having conversations and we're building some of the applications. So as we're having conversations with the faculty, you know, obviously they want to build quickly, and, you know, let's not spend all this time on security. So I think it's helping some of that discussion, especially when we begin to connect into, you know, we use an enterprise learning management system, right? And so now we're connecting into another system that has another aggregation of data, and what are the security mechanisms for that? How do I know the student that's doing this experience is that student, right? Back to David's point about how we know who the person is. Like, how do I know Jeremy is actually doing the assignment, and he didn't give it to a roommate or family member or someone else to do? So that's one thing to potentially solve. And I think definitely as we begin to look at working with other vendors, the types of terms and agreements with what they're doing with the data, whether it's, you know, living on a cloud platform, whether we can have our own instance, understanding what the chain of trust is in those negotiations. You know, we do that with Zoom, we've done that with Microsoft Office 365, right? The university has data protection agreements with those companies. I think some of these startups and other firms, you know, they haven't maybe thought through that yet, so it's helping inform our discussion with them. And I think Kavya and I have talked about this. There is a Reality Caucus as part of the United States House of Representatives. And I think, you know, while having the framework is great, getting it in front of people that can actually make it do something matters. So I think we could use our platform here at the university to bring that up and communicate that with our legislators and have an avenue to share the concerns.

[00:42:13.279] Kent Bye: Yeah. And Kavya, when I was talking to Joe Jerome, one of the things that he was talking about is the concept of harms and privacy harms. And you have sections in here in terms of there being a safety aspect, not just the privacy, but also privacy and security or privacy and safety. And I know Joe had mentioned that, I think it was Intel, had created a paper that laid out some taxonomy of different harms. But I'm just curious if you could expand on that a little bit, because I know there's a whole section here on risk assessment that tries to ask a series of questions, and how that starts to play into this overall discussion. For me, when I look at it, I see it as these trade-offs: there are some benefits, but also some risks, and those risks are those harms. And anybody who is navigating this has to have some sense of trying to trade these off. It's never a clear picture. There's always going to be some ambiguity. And so maybe just talk about how XRSI is trying to reduce some of that ambiguity, or that lack of a sense-making framework, by having either questions or making a taxonomy, or how you even start to address this issue of harms and safety.

[00:43:20.917] Kavya Pearlman: And Kent, that's a very, very complicated question. And I heard the podcast with Joe, and there are use cases out there that do attempt to mitigate these harms. But it's getting even more complicated. Why? Because we are now sitting at the cross-section of these artificial intelligence data sets that are absorbing data from XR ecosystems. We are going to see a prime example of it with Project ARIA, where, willy-nilly, you're just capturing the entire reality, feeding it to a server, keeping it there for 72 hours, and then processing it. And then over time, the artificial intelligence system learns from this data, whatever that reality capture is. Now, what happens when all types of data sets go into it? I'll use a very, very simple example, and I think you remember, when we attended Stanford University's Identity Summit back in October, I mentioned this as my own story of sort of a privacy dilemma. So I was born Hindu, and I converted to Judaism. And then after that, Islam. Don't ask me. Long story. But anyhow, while I did that, I tried to stay completely off of social media, and I did not inform my Make India Great Again parents or my distant Make India Great Again relatives who hated the idea of some people becoming Muslim. But what happened is when I joined Facebook as an employee, as a consultant, I was sort of forced to use a Facebook account, to create a Facebook account. And me being sort of conscious of privacy, I wasn't that conscious, but that was like the privacy awakening: I created the account, I put up a few pictures, and I did not realize that I had put up my hijabi pictures. Pretty soon, the Facebook AI connects me to all these people, where all my Make India Great Again relatives are like, oh, she's Muslim now? So that's just one story of one human being, that is me. Think about transgender folks. Think about all the LGBTQ folks that really care about identity.
And with this biometrically inferred data, we don't even need to put up a hijabi picture. With the voice, with the kinds of thoughts that we have, or exchanges we might have with each other, all of these internal, identity-identifiable metrics will be out there for an AI algorithm to ingest and then make decisions on our behalf. Those decisions, I mean, for me, it was as simple as my uncle threatening me that he's going to kill me, and he's dead now, I'm okay, and everything's fine. He's the only extreme person that went that far. But for other people, there is the genocide we noticed, because a lot of the people didn't realize what a default privacy setting means. And that's the concern here. And we have had conversations with some of the insurance providers. They themselves are trying to jump in, not to grab data, but to really try to understand what would be their legal obligations when they do deny coverage based on the knowledge that they gathered from the data sets, because it will happen. And likewise, Suchi earlier mentioned Black Lives Matter types of protests. These data sets can be combined with those predatory drone data sets that hover over you when you are demonstrating, when you're taking an activist stand. So that's the bigger concern here. And there is a lot at stake with this inferred data and the artificial intelligence systems making decisions on behalf of humans. And that's why the next step that we will take is to analyze those use cases, such as the ones that you mentioned, the Intel use cases and other use cases, to try to orient everything to be human-centric. Because right now it's like, oh, this person will make a decision, but we're now, like I said earlier, at Project ARIA.
Within Project ARIA, one of the trust and safety measures is literally an AI mechanism collecting this data and keeping it for two minutes, and then you can potentially summon somebody to observe whether there is in-world harassment going on. Now, that is, again, constant in-world reality capture. How will that data be used? Let's say you did harass, or maybe something did happen. What would that mean for a person? And in fact, now you tie in the Facebook identity. So earlier, you were like, all of my posts and tweets are kind of my journal. And let's say one day Facebook decides to ban you. If you put all of your data and your life on Facebook, even if it's private, then for just one mistake, you just lose everything. You could lose the games that you've purchased. So there are so many of these combined, aggregated concerns that stem from this data processing. And that's why I'm just so honored that all of these people, these interdisciplinary experts, are coming together to wrap our heads around what happens when we depend on the machine and not the other way around.

[00:48:53.148] Kent Bye: Yeah. And as we start to wrap up here, I wanted to get each person's thoughts on the next steps moving forward, and maybe contextualize that a little bit by saying that on The Voices of VR, I've been going around and doing lots of different interviews with different folks. So as I start to look forward into the future, from my perspective at least, when I look at something like the privacy framework, some of my feedback would be that in some ways it's creating this generalized framework to try to fit into all these different contexts. But yet at the same time, I think it's also going to be potentially helpful to say, okay, here's a framework for education, here's a framework for this context or a medical context. There could be value in really diving narrow and then also going back up to see what's universal to all the contexts. Because I do see that there's this blending together of contexts, and there could be lessons from each of these, but it can also be kind of overwhelming to try to take care of everything. It's already overwhelming. And I think this is a good sense-making framework, but I also see it as an iterative process and a really good first take. Part of that is to maybe either focus it narrowly on specific contexts and use cases, and then also come back out and see what is universal to everybody. But the other thing that I would say is that I'm involved with the IEEE Standards Association. They had this ethically aligned design initiative to be able to look at AI. They produced all these white papers, and in that they came up with some XR thoughts in their final book, but then they realized, okay, XR ethics is so huge that it needs to have its own initiative.
And so it was actually announced last week that they're going to be having a whole XR ethics initiative that's essentially bringing together lots of different people from academia to look at XR ethics more generally and broadly, not just specific privacy issues, but all the dimensions of all aspects of XR. I think, for me, one of the reasons why I'm involved with that is to start to bring together all these different perspectives of neuroscientists and architects and designers from all these different disciplines, and people who are philosophers and ethicists and, you know, in the philosophy of technology, because XR is at the bleeding edge of some of these different challenges. And so for me, what I think is interesting is bringing together all these different perspectives and people coming together to start to collaborate. And I see that's a lot of what XRSI is starting to do with something like this privacy framework: bringing together this interdisciplinary collaboration and trying to produce something that's useful to everybody. But I'd be curious to hear from each of you what you see as the next steps. We have the Privacy Framework 1.0. What's the next iteration? What do you see as the interesting intersections of bringing different perspectives in? And where do we go from here?

[00:51:31.670] Kavya Pearlman: Yeah. And Kent, I want to speak to that. There are two points that are really, really important that are going to happen. Again, Jeremy earlier mentioned contacting the Reality Caucus and other state legislators to try to inform them that this kind of a blueprint, this definition or outline, exists. And for further development, we are accepting submissions on these specific control items, so anybody can submit those. That's the one thing: reaching out to the legislature. But the second piece is really to take it to the medical XR use case. And that's already in the planning, where we have a medical XR advisory council, and those folks are looking at this from the alignment of HIPAA. And it would be a separate framework, where we align this from the healthcare medical XR perspective. And I would hope, I would love to ask Jeremy if there are any plans that we could sort of branch out to higher ed or something like that.

[00:52:28.022] Jeremy Nelson: Yeah, no, that was going to be my response. So collaborating with Didier from Georgia Tech and Maya from the New School and a couple of others to basically create a considerations document. So we had a joint meeting with the CISO from Georgia Tech and the CISO from Michigan to kind of raise the issue into their sphere, obviously around the Facebook accounts for the Quest. And they were like, you deploy those, I will come take them all away. Like, you can't do that. But, you know, what they asked for was an executive summary or kind of a two-page white paper. I don't think we can fit it into two pages, but kind of guidelines or considerations for other higher ed institutions as they begin to implement these technologies. Because I hear all the time, people are like, oh, I bought 10 Quests and I'm going to roll them out to students. And it's like, OK, well, hold on, these are considerations you should be thinking about. So I think it's somewhat building off the framework, but then having it in actionable language for those institutions, for policies they may have about what they should be doing or not.

[00:53:28.172] Kent Bye: Yeah, just to expand on that point. And just to clarify, my understanding, at least from what Joe Jerome told me, is that, as it stands, that policy requiring the Facebook accounts is not FERPA compliant, meaning that if you wanted to use the Oculus Quest within the context of the education environment, then you couldn't because of FERPA. Is that correct?

[00:53:46.437] Jeremy Nelson: Correct. I mean, for an educational course, right? If they want to go play a game, that's one thing, right? But if it's a game in the context of learning for credit, yes, that can't be. That's why our CISO said, I'll come take all those away if you deploy them. I wasn't planning to deploy them.

[00:54:05.286] Kavya Pearlman: Suchi had made a recommendation for us, and that's going to be adopted. Suchi, you talked about, you know, making it broader, that it's not just privacy. And, you know, I was really thankful for that. So this is not just going to remain a privacy framework, but will become more like a trust and safety framework, or definitely have an aspect of safety, in version 1.1. Yeah.

[00:54:26.646] Kent Bye: What about the other folks?

[00:54:28.152] David Clarke: Yeah, I mean, one of the things as well is, I'm not sure if you've got figures in the US, but in the UK, the cost of online harms is 11 billion a year. You know, it's massive, absolutely massive. Probably every school or headteacher I've spoken to, they literally reckon they must be spending a day a week dealing with technology issues in the school. Absolutely phenomenal.

[00:54:54.158] Kent Bye: So you said there's a dollar amount, $11 billion. What's that mean?

[00:54:57.583] David Clarke: Eleven billion pounds. It's what it's costing the government to handle online harms, you know: the court cases, the investigations, the social workers, the therapy, the doctors, everyone who's involved.

[00:55:11.200] Kent Bye: And so how do you see that as sort of the next step? As you're thinking about these issues, how does that connect to what comes next?

[00:55:19.223] David Clarke: So part of this, if there's nothing to go on, how do you get this under control? And that's why frameworks are really, really important. And as Kavya said, it's kind of beyond privacy. It's kind of safety almost first and then privacy. Cause I don't think we've got the safety bit right by any means.

[00:55:36.108] Kent Bye: Nice. What about you Suchi or Noble?

[00:55:39.349] Noble Ackerson: I think I align with what you're saying, Kent, that at some point you may want to look vertically into different industries and be a little bit more prescriptive. And the reason for that is, as we all know, there are opposing forces. So broadly speaking, before fully diving into each institution or vertical industry, we probably need continued help from academia and thought leaders, philosophers, on how the current sets of frameworks, be it XRSI's, IEEE's, how they're being used. And then, to try this a different way, research shows that every user using digital software cares about their data. And that's because of a growth in data breaches, a growth in their data generally. We know that companies find a competitive advantage in holding that data in the face of rising data governance risks, right? So that's a push and a pull. We know that there's an increase in privacy standards, coming in from the legal side, coming in from, you know, self-regulation, which, as you said, doesn't really work, and of course from organizations like us. Right. So all this comes together before you start diving deep. Understand, you know, what is healthcare doing to understand what data they have? How do they go about their risk assessments? And do they pre-plan remediation tactics? That's on the discovery side. And then on the delivery side, industry-specific guidance. Like, you know, how do you catalog data? Most data cataloging services, AWS Glue, whatever is happening in the back end, have the ability to automatically flag, using machine learning models, what could result in potential harm, that kind of stuff.
So between the discovery and the delivery, I do believe that moving forward, a deeper look into different verticals, different industries, and understanding how their standards in those different areas are being used today, will inspire how deep you go, how well you work with governments on their regulations, and how you work with individuals on their awareness and education: awareness that things don't have to be the same. And of course, speaking to the companies themselves that deliver and store this data, how they share it across borders, how they broker it. And I'm very inspired by a lot of the work. Even within companies themselves, it's not all bad. You've got Tim Berners-Lee's Inrupt efforts, attempting an interoperable ecosystem, linked data, that kind of stuff. And differential privacy, as we know, and how to manage all of those together with discovery and delivery.
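Differential privacy, which Noble mentions in passing, is the kind of technique a data pipeline could apply before releasing aggregate statistics: calibrated noise is added so that any one individual's record has a provably bounded effect on the output. Here is a minimal sketch of the classic Laplace mechanism in Python; the function name and parameter defaults are illustrative, not drawn from XRSI's framework or any specific product:

```python
import random


def laplace_mechanism(true_value, sensitivity=1.0, epsilon=0.5):
    """Return a differentially private version of a numeric statistic.

    sensitivity: how much one person's data can change the true value
                 (1.0 for a simple counting query).
    epsilon:     the privacy budget; smaller epsilon means more noise
                 and stronger privacy.
    """
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exponential(1/scale) draws is a
    # Laplace(0, scale) sample, which avoids log(0) edge cases.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise


# Example: privatize a count of 100 users before publishing it.
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5)
```

Averaged over many releases the noise cancels out, so aggregate utility is preserved, while any single published number stays within a few units of the truth only probabilistically, which is exactly the trade-off epsilon tunes.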

[00:58:27.987] Kent Bye: Yeah. Cool. And I know we're sort of running out of time here, and I don't know, Suchi, if you have any sort of quick final thoughts before we have to wrap here?

[00:58:34.850] Suchi Pahi: Yeah, sure. I really think everyone's hit on actually great next steps and Kavya's plan is awesome and I'm very aligned there. And I actually wanted to say like this framework isn't solely privacy, which is something that's really unique. And I can't wait to see what else happens with it because having ethics built into the privacy framework at the risk of briefly getting a little bit controversial will avoid sort of the problems we saw in AI and Google ethics this week. So instead of having a set group that's handling the ethics for everything, you have trust and ethics built into your privacy framework already. That's fantastic. Build it from the ground up, give it to the product engineers and let's just roll. I love it.

[00:59:18.938] Kent Bye: Yeah, well, I think this is a great start. And like I said, I see this as an unfolding process where you're never going to get to the complete ethics. We haven't solved ethics; there are always going to be new ethical transgressions and moral dilemmas. It's an unending process. And I think that this is a good start to bring together the larger communities to think about this in terms of these generalized frameworks. Yeah, and I look forward to seeing how this continues to evolve and unfold. There will be this relationship to the legislators and the laws that are set, and that'll also be helping to set this larger context, but also to help potentially come up with some sense-making approaches for people who are in these different disciplines, to see how you can start to take into consideration some of these privacy, safety, and security issues. Again, I just wanted to thank everybody for joining me here today, Kavya, Jeremy, David, Noble, and Suchi, for this deep dive into the XRSI privacy framework. So thanks.

[01:00:13.493] Kavya Pearlman: Thank you. Thank you, Kent. And especially thank you for doing this during XR Safety Awareness Week. And this has really come out to be a really amazing week. So thank you all for participating, contributing. And together, we can potentially get in front of these awful things that are coming up for us.

[01:00:32.458] Kent Bye: So that was a discussion that was a live stream as a part of the XR Safety Awareness Week that happened on Thursday, December 10, 2020. And participating was Suchi Pahi, a data privacy and cybersecurity lawyer; Kavya Pearlman, the founder and CEO of XR Safety Initiative; Noble Ackerson, a leader of data governance initiatives, privacy framework contributor, and lead of product innovation for government agencies at Ventura Corporation; Jeremy Nelson, the director of the Extended Reality Initiative (XRI) at the Center for Academic Innovation at the University of Michigan; as well as David Clarke, who does a lot of cybersecurity and data protection work and is the EU GDPR strategy advisor for XRSI. So I have a number of takeaways about this interview. First of all, this framework is taking inspiration from the NIST privacy framework, which has five functions: identify, govern, control, communicate, and protect. The XRSI framework modifies that to assess (privacy risk assessment), inform (informing users about privacy risks), manage (privacy risk management), and prevent (preventing privacy incidents). So again, this is general guidance of privacy guidelines, and it's for companies to get their footing as they're trying to navigate how to be good stewards of XR privacy. This is a good starting point for them. And also, as we continue to have the need and desire for robust privacy laws within different states as well as within the federal government, this could help to elucidate some of the special considerations for XR privacy. In the previous two conversations, I was talking about the European Union perspective, which really takes a human rights approach: starting from the underlying human rights and then trying to come up with the laws around them. That's how the GDPR was formed.
And so there are certain influences of GDPR in this framework. It's one of those self-governed and self-adopted things. The thing that I have been really trying to figure out: after I had this conversation, I went on to do two years of the IEEE Global Initiative on the Ethics of Extended Reality, trying to bring together the larger community to think about these broader ethical issues within XR and trying to break them up into different contextual domains. And this is a generalized privacy framework that can be applied to many different contexts. But part of my initial feedback was: OK, how does this actually break down when you apply it to specific domains like medical, given the very fragmented nature of privacy law within the U.S. context? Or maybe you take a step back and start to generalize things out into something more like a human rights approach, like the GDPR. So this is aimed towards individuals and organizations as one of these self-directed and self-guided frameworks. And in terms of whether or not this is going to be adopted by Meta and kind of hold Meta's feet to the fire, I think we're going to need something from the government and regulation to really take it to the level of these big, major players. But I think there's still value in looking at this privacy framework for small organizations that are trying to orient. It's a good survey of all the different laws and their fragmented nature, and it also identifies different concepts and ideas like biometrically-inferred data, which they define as a "collection of datasets that are the result of information inferred from behavioral, physical, and psychological biometric identification techniques and other nonverbal communication methods."
They list a number of different biometrically-inferred data types, and this is close to what Brittan Heller has termed biometric psychography: most of the biometric laws are fixated on identity, but what about the inference data, the likes, dislikes, and preferences, which as of this point doesn't have any legal status? That was explicated in Brittan Heller's paper that was first released back in February 2021, called "Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law." And this XRSI privacy framework was first launched back on September 8th of 2020, during the Facebook Connect that was happening at the time. So they were early in trying to identify some of these risks, like what they call biometrically-inferred data, and in aggregating different research and putting everything into their one framework. So you can check out the original live stream of this or the privacy framework, and they're actually in the process of developing their 2.0 version. I'll have a link in the description. You can go check out the XRSI privacy and safety framework 2.0, which is still in the process of being developed. And like I said, they're aiming for a deadline around December of 2023 to come up with both the safety framework as well as the next iteration of their privacy framework that they've been working on. Also, Kavya pointed me to a collaboration between Friends of Europe and XRSI, where they're creating a bridge to what's happening in the EU and some of the law-making that's happening in that context. There's a press release that I'll also link in the description, so you can get more information about what XRSI is doing in this collaboration with Friends of Europe. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast.
And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
