The XR Safety Initiative (XRSI) was co-founded by Kavya Pearlman and formally announced at the Open AR Cloud Symposium on Tuesday, May 28, 2019. XRSI will be looking at safety and security issues within XR in collaboration with security researcher Ibrahim Baggili, who in March 2019 published a landmark paper in IEEE Transactions on Dependable and Secure Computing titled “Immersive Virtual Reality Attacks and the Human Joystick.”
Casey, Baggili, & Yarramreddy discovered a number of novel VR attacks, showing how they could disable the chaperone system and modify the virtual world in order to guide the user to move to specific physical locations. Pearlman reached out to Baggili, and they decided to co-found the XR Safety Initiative with security researcher Regine Bonneau. XRSI has started to categorize the different security threats and attack vectors, and they hope to collaborate with independent developers and the major companies to help develop best practices around safety and security.
XRSI will also be collaborating with governments and policymakers, promoting awareness of XR safety issues through its Ready Hacker One initiative, and working with academic research institutions to cultivate more research into the security of immersive virtual environments. XRSI will also be looking at privacy issues within XR and exploring the ethical implications of the technology. (You can reach out on their contact form to get involved.)
I’m personally really excited that Pearlman is starting XRSI in order to continue to work on these issues of safety, security, privacy, and ethics day to day. There is a lot of coordination that needs to happen between the major companies as well as independent developers in order to figure out who is responsible for which of the five different threat vectors: protecting the privacy of input data, properly storing and protecting user data, protecting the output that is presented to the user, securing user interactions, and securing the physical devices themselves.
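To make that coordination question more concrete, here is a minimal sketch of how those five threat vectors might be represented as a per-application responsibility checklist. The category names and the example app are paraphrased from this article and the Casey, Baggili, & Yarramreddy research for illustration only; this is not an official XRSI taxonomy or tool.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ThreatCategory(Enum):
    """The five threat vectors described above (paraphrased; not an official XRSI taxonomy)."""
    INPUT = auto()        # privacy of tracking, mic, camera, and other input data
    STORAGE = auto()      # how user data is stored and protected at rest
    OUTPUT = auto()       # protecting what is rendered or displayed to the user
    INTERACTION = auto()  # protecting user interactions inside the experience
    DEVICE = auto()       # physical security of the HMD or AR headset itself


@dataclass
class ResponsibilityChecklist:
    """Toy per-application checklist: who owns mitigation for each category."""
    app_name: str
    owners: dict = field(default_factory=dict)  # ThreatCategory -> "platform" | "developer" | "shared"

    def unassigned(self):
        """Categories that nobody has claimed responsibility for yet."""
        return [c for c in ThreatCategory if c not in self.owners]


if __name__ == "__main__":
    checklist = ResponsibilityChecklist(app_name="ExampleSocialVRApp")  # hypothetical app
    checklist.owners[ThreatCategory.DEVICE] = "platform"
    checklist.owners[ThreatCategory.INPUT] = "shared"
    print("Still unassigned:", [c.name for c in checklist.unassigned()])
```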
I had a chance to sit down with Pearlman to talk about XRSI about an hour before my own talk on The Ethical and Moral Dilemmas of XR, and we talked about the history and evolution of her work in the XR space, her privacy awakening moment while working as a contractor at Facebook, her takeaways from the VR Privacy Summit, and an overview of the open problems in the XR space around safety, security, and privacy that she hopes to address.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s Pearlman’s talk on Wednesday, May 29 at the Augmented World Expo 2019:
Here’s a panel discussion that Pearlman and I were on at the Open AR Cloud Symposium on Tuesday, May 28, 2019 just ahead of the start of AWE:
Here’s the abstract for Immersive Virtual Reality Attacks and the Human Joystick by Peter Casey, Ibrahim Baggili, & Ananya Yarramreddy
“This is one of the first accounts for the security analysis of consumer immersive Virtual Reality (VR) systems. This work breaks new ground, coins new terms, and constructs proof of concept implementations of attacks related to immersive VR. Our work used the two most widely adopted immersive VR systems, the HTC Vive, and the Oculus Rift. More specifically, we were able to create attacks that can potentially disorient users, turn their Head Mounted Display (HMD) camera on without their knowledge, overlay images in their field of vision, and modify VR environmental factors that force them into hitting physical objects and walls. Finally, we illustrate through a human participant deception study the success of being able to exploit VR systems to control immersed users and move them to a location in physical space without their knowledge. We term this the Human Joystick Attack. We conclude our work with future research directions and ways to enhance the security of these systems.”
The full pre-print is available for download on ResearchGate.
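The chaperone and Human Joystick attacks described in the abstract work in part by tampering with locally stored boundary data. As a purely illustrative defensive sketch (not something from the paper or from XRSI), the snippet below records a hash of that configuration file and warns when it changes between sessions; the SteamVR chaperone_info.vrchap path is an assumption about where that data typically lives and may differ on your system.

```python
import hashlib
import json
from pathlib import Path

# Assumed location of SteamVR's chaperone (boundary) data; adjust for your install.
CHAPERONE_FILE = Path(r"C:\Program Files (x86)\Steam\config\chaperone_info.vrchap")
BASELINE_FILE = Path("chaperone_baseline.json")


def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of the file contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_boundary_integrity() -> None:
    """Warn if the boundary config changed since the recorded baseline."""
    if not CHAPERONE_FILE.exists():
        print(f"No chaperone file found at {CHAPERONE_FILE}; adjust the path above.")
        return
    current = file_digest(CHAPERONE_FILE)
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps({"sha256": current}))
        print("Baseline recorded; re-run after your next VR session.")
        return
    baseline = json.loads(BASELINE_FILE.read_text())["sha256"]
    if current == baseline:
        print("Boundary configuration unchanged since baseline.")
    else:
        print("WARNING: boundary configuration changed since baseline "
              "(could be a legitimate room-setup change, or tampering).")


if __name__ == "__main__":
    check_boundary_integrity()
```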
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So anybody who's been listening to this podcast for long enough will know that privacy within the realm of virtual reality is an issue that I've given quite a number of different interviews about, probably around 30 interviews by this point. And at the Augmented World Expo, there's a number of different conversations that I had there about privacy in XR, as well as a talk that I gave about the ethical and moral dilemmas of mixed reality. I was on a panel discussion at the OpenAR Cloud, talking about some of the different privacy issues within XR. And on that panel with me was Kavya Pearlman, and on that day, that Tuesday of May 28th, she'd actually announced this new initiative that she has called the XR Safety Initiative. So Kavya has worked in the security realm. She was actually a contractor at Facebook working on various different security issues leading up to the election. She spent some time at Linden Lab working on Project Sansar. And now she's created this whole XR Safety Initiative, where she's trying to have this institution that is collaborating with different academic researchers to look at different security threats in XR. She's going to be looking at a variety of different ethical issues within the XR space, talking to different policymakers, and collaborating with these different companies and independent developers, trying to have more communication in terms of figuring out who's responsible for what when it comes to safety and security issues. So I'm really happy to see that this is a brand new initiative that's focusing specifically on these different safety, security, privacy, and ethics issues within the XR space. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Kavya happened on Friday, May 31st, 2019 at the Augmented World Expo in Santa Clara, California. So with that, let's go ahead and dive right in.
[00:02:06.438] Kavya Pearlman: I'm Kavya Pearlman. I am the founder of the XR Safety Initiative. This is a relatively new initiative that we started. What happened is I was working for Linden Lab. As most people in the XR domain know, Linden Lab is the maker of Second Life. And they recently, in 2017, launched their virtual reality platform called Sansar. So, I was the Information Security Director at Linden Lab, and as I was advising and sort of tackling some of the virtual world, Second Life, and Sansar types of issues, I started to observe that this is not just our corporation's or organization's issue. There are certain things that are just universal and completely unaddressed. In fact, one of the key moments that happened for me was when you and I actually met at the Privacy Summit, and I was able to tell this whole story. And this really brought it back to the surface for me when I talked about my privacy awakening moment back when I was over at Facebook. And it really just got me thinking after all the conversation at the Privacy Summit, where there were about 40 of us sitting around talking about these privacy issues, some of the security issues. And I just felt this deep sense of despair. And I just couldn't live with it. And at some point I was like, you know what, we've been talking, and I had already been sort of pushing the voice of, okay, this is bad. We need to think about privacy in VR, think about privacy in XR. At some point, we have to do something about it. And that's what compelled me to found this initiative and bring people together. We did think about, you know, sort of monetizing these things, but that was not our first thought. The first thought was, hey, can we do something about it? And then finally figure out how we would get money and funding and that kind of stuff from it. Another key moment that happened while I was thinking about the XR Safety Initiative was meeting with Ibrahim Baggili. He is the founder of UNHcFREG, I think that's the forensics group, and the assistant dean at the University of New Haven, where he leads a cybersecurity research lab. He's excellent. He's currently working with the NSA and a bunch of other federal organizations, raising money for research. And what he did was remarkable, something that nobody has actually ever done before. His team and himself, they found some novel attacks. They conducted research to literally attack the virtual reality environment and document how you can move the subjects from point A to point B without their knowledge. They documented something they call the gradient attack, the chaperone attack actually, where you could disable the boundary, the gradient boundary. So if you combine that, you can disable the boundary and you can move the subject. I mean, you could literally just kill somebody if that is a person of interest. And then if you think about the privacy aspect of it, mix that in: if you can trace where the person is based on whether they're in VR or in XR, oh my gosh. I mean, this is kind of a ticking bomb, because all these other privacy, security, and then cybersecurity aspects combined together could really diminish the trust in the XR domain if we don't pay attention. So right now, right this minute, we are literally sitting at the crossroads of mass adoption. We are not looking at mass adoption far off in the distance anymore. We have to do something. We have to raise awareness. We have to discover these attacks, because if we don't, the bad guys inevitably will, and we're never going to know.
So that's something, you know, that keeps me up at night or really bothers me. I'm not hearing more voices besides yourself, or besides Jeremy Bailenson, for example, you know, and Jessica Outlaw. Jessica is actually one of our ethics leads. And so people like herself and Regine Bonneau, we have decided to come together to combine these voices to find these answers. Because these are unanswered questions. We are going into this uncharted territory and we are just not prepared. Everybody is just looking for the next killer app, which is great. Hey, you want to make money? Go for it. But at the same time, can we please at least be a little bit pragmatic and not just run on deadlines? We need to instill some guidelines, some baseline.
[00:06:50.798] Kent Bye: So yeah, hearing your privacy awakening story was extremely moving, I think, for the entire room, for the people that were at the Privacy Summit. And I was just wondering if you might be willing to share some parts of that moment where you had that awakening with regards to privacy.
[00:07:10.765] Kavya Pearlman: So for those who've seen me, they know that I am a covered Muslim hijabi woman, and this is not who I was in my previous life or in my early days. I did not wear a headscarf, and it never really occurred to me that that could ever be a concern. The only thing is, when I converted to Islam and started wearing hijab, I didn't think about what would happen if I went public with it. I kept it under wraps, and it was okay until I joined Facebook. On day one, I'm walking into Facebook headquarters and I see "be social," "be authentic," all these symbols sort of compelling me to be myself. And then I realized that, at least at that time, I had to create a Facebook account, which I had never really had before. So I had to create a Facebook account, and someone like myself, who's privacy-aware and everything, still didn't pay attention to any of the default settings. And what happened is, within a matter of four or five days, I started to get threats from my far, far distant relatives, three people that I would probably not even talk to, or have not met with or seen in years and years, because it just so happens my family background is very traditional Hindu. In fact, you know, some of them are, as my friends point out, "make India great again" folks. And it was just interesting to see their reaction, and the threat was: you dare not come back to India or we will kill you. And simply by choosing to create my Facebook account and having forgotten to modify the default privacy settings, I had now literally put myself at risk of death threats. And then it occurred to me that it's not just me. There are people in Myanmar who are being profiled; it was around the same time. There are people in other countries who have been profiled, whose data has been so minutely profiled. And because I was over at Facebook doing third-party security and advising during the U.S. presidential election time, we were observing a bunch of things. At the same time, I noticed, wow, with this treasure trove of data, you can pretty much not just leak private information and tell people, oh, this person is Muslim, this person is that. Based on those profiles, you can pretty much influence an entire generation, move them in such an impactful way that they could potentially vote one way or the other. And this was such a profound awakening for me. And I learned a lot. Hey, no offense to you; my work there taught me so much. In fact, I was able to come up with this term, cyber political engineering, based on everything that I observed in terms of election hacking, as they call it, and many different attacks combined. So I learned a lot. I documented a few things that I observed, even though I don't think anybody acknowledges or agrees with it yet, but over time people will learn to recognize that if you can't define a problem, you can't solve it. Over time these things kept happening, and I kept building my passion and my drive to do something about it, and now I'm like, okay, it is time. And I talked to the right people: Abe and Regine and Jessica Outlaw. And then your voice, Kent. When you are putting content out there, I see it. I feel the frustration. I feel it in myself. I feel like, oh, wow, clearly you're trying to make an impact. But what was missing until the XR Safety Initiative was this platform, something to take action, because you're coming at it from a journalistic perspective, from an activist perspective, which is amazing.
We absolutely need that. But we are at a crossroads where, if we don't do something, this is going to doom the trust in the entire XR domain. We need guidelines for developers. For regular developers, we have the OWASP Top 10; why don't we have something similar in the XR domain? Because this is different. This is uncharted territory. So we're going there. We're trying to discover what else could go wrong, document it, do research around it, and present it to the community and say, hey, pay attention to this. Whether you're a CEO or a developer or just a user stepping into this domain, go in with awareness of what could potentially go wrong. And hopefully someday we'll solve other issues too, like consent, you know, which is broken. Privacy is just a legal tool that people use to hoard more and more data. And those are things that we hopefully want to address, and hopefully we will be able to.
[00:12:18.795] Kent Bye: Yeah, so hearing your story again, it just reiterates to me the importance of not only default settings and how things are set up by default, but also the risks that are involved for people who may be in vulnerable positions around the world, where it's pretty much illegal to be a homosexual. So they have these laws that are really regressive in terms of prosecuting people because of their sexual preferences, and yet with these new technologies, people over and over again say that you can determine someone's sexual identity or sexual preferences by just tracking their eyes for a half hour in different environments. And that's just sort of the tip of the iceberg in terms of what type of information is going to be made available with all this biometric data. So I have a lot of concerns about what, to me, feels like it's going to be a gold rush for all this biometric data: to gather as much information as they can, to harvest our emotions, to gather all this biometric data on us and store it forever, and to have AI algorithms take those psychographic profiles, which are already pretty robust, and take them to the next level with all of this unconscious data that is essentially reverse engineering our psyche and our preferences and our emotional reactions to a whole variety of different things. That could be used to either control or manipulate us, or to subtly sell us things. And what John Burkhardt, a behavioral neuroscientist, told me is that as long as you're able to predict behavior, you're able to control behavior, and the line between those two, between prediction and control or manipulation, is an unknown ethical threshold. And I think we saw that with Cambridge Analytica and what could happen at a societal level. But I'm just concerned that using the business model of surveillance capitalism on top of what is going to be made available with all of these virtual and augmented reality technologies, all this XR technology... I don't know what the exact solution is, so I'm wondering if you have a specific perspective on how to curtail the capturing and hoarding of biometric data in XR.
[00:14:24.814] Kavya Pearlman: I don't know if there is a silver bullet, right? I think there are multiple layers. At least today I think that's the case, and it could definitely change tomorrow. My opinion could change tomorrow based on what I learn, or what we learn together as we do more research. But I think you talk about it: the whole decentralization aspect. So we have to look at alternatives besides these walled gardens that you've referenced several times in your talks and in your interviews. And for that reason, we actually, two days ago, just announced a sort of collaboration with OpenAR Cloud. And they are literally, you know, dedicating resources, sort of advocating for decentralization and that kind of stuff. While they do that, our specific focus as the XR Safety Initiative is all about not challenging, but just holding these walled gardens accountable, and at the same time spreading awareness about these uncharted territories that we are about to go into, spreading awareness from the stakeholder level, all the way from the CEO to the user level. So I think it is twofold. We need to present these alternative decentralized solutions, and then we also can't ignore the warning signs. I gave a talk at AWE two days ago and I referenced Star Wars. In Star Wars, there is this amazing lesson: you can reference what happened to the Galactic Empire when they chose to ignore all the alerts and the signals. They chose to ignore the vulnerabilities. I mean, even to the point that, you know, with R2-D2, you plug in the whole cable and there is literally no encryption on the most critical data, which is the Death Star blueprints. I mean, how could you have that happen? And then when the council sits down: oh, we have the best technology and nothing bad can happen to us. So I see parallels. Underestimating the risks while you are building a new technology domain is a recipe for disaster. We have to pause and we have to reflect, kind of like what we started with the Privacy Summit, and we have to now act on what we are concerned about. And that's what I think is important. So decentralization, yes, it's definitely going to be part of it. And it also depends on how good of a job we do building these alternative tools. But if we don't do that, the walled gardens, the emerging empires, are already doing things that are a bit objectionable. And we don't even know what they are doing. The other challenge is that some of these things are being done on the government side of things, so they are confidential; a normal person can never know. Or they are secret projects, you know; things like Building 8 at Facebook used to be one of those secret things that they would do, and nobody would know what's going on. So this is concerning, right? So people like myself, all these researchers, yourself, all we want to know is: hey, can we just have some sense of ethics or morality or a reflection, and some answers? Because we really do want to trust these emerging empires. Maybe there is a place for them. There is a reason why they are so massive and people love to install all kinds of things. There is a reason why they have been able to connect two billion people. But we need to have some sort of accountability there, because we can't just leave people to their own demises. I mean, if you even go back to Freud, Freud said that, you know, if you leave humans to their own devices, they would just destroy themselves.
And that is exactly what could happen. We can't just leave these emerging empires to make their own decisions, because they are mostly driven by profit or money or things like that. And that's why I talk about the unheard voices of XR, and you're one of those, even though you have a platform, because, you know, the right people are still not catching up to our concerns. What we want to do is get out there and find these answers from these emerging empires, and at the same time do research so that we can back our concerns up with data and provide that: you know, hey, these are actual legitimate concerns that we need to talk about.
[00:19:01.075] Kent Bye: Well, there are a lot of things that came up in what you were just talking about. One thing is the level of secrecy, which is almost like this wall where we can't really see what's actually happening. So I've had an opportunity to talk to some of the architects of the Facebook privacy policy, and I was able to ask them all sorts of questions, like: are you recording what people are saying and storing that? And they said that they weren't, but within the next couple of months, with Facebook Venues, they did start to record, or at least start to record some running cache. So the privacy policy affords that they could start recording at any time, but there are no limits within the privacy policy that say how much they're recording or when they're recording. So it's sort of a blanket statement, so that even after the architects of the privacy policy and the engineers tell me, no, we're not recording anything, it essentially means we're not recording anything yet. But we can at any moment. And we have no obligation to tell you. And we don't have any obligation to tell anybody at any time that we've actually started recording. So there seems to be a privacy policy that is really open-ended and allows them a lot of latitude to be able to do things in the future, to give themselves as many permissions as they possibly can. Now, when I talk to Mozilla about what they're doing with Hubs, their social platform, all their code is open source. You can look at it. You can audit it. You can see everything that they're doing and see how they're architecting it for privacy. But we don't have that similar ethos within any of these companies. The closest thing I can think of is probably what has been happening with OpenAI and with the AI community, which is: hey, these AI algorithms are going to be a little bit super scary and dangerous; maybe we should think about having some transparency here with what is happening at the algorithmic level, so that we could have the entire community take a look at this and start to look at the different threats that are presented by these different AI approaches. And the data that is training the AI is still the thing that's the most valuable. So you may have access to whatever code is coming out; there are a lot of amazing open source projects in the AI community. But still, how it's implemented into these different systems still has a certain amount of opaqueness. We can't see what's actually happening. And so we don't have that full transparency. So there seems to be this dilemma: even if I were to ask them, hey, are you doing this yet, they would have to either have some way of disclosing what they're doing, describing everything that they're doing, and then indicating when that changes and informing the users as well. Or there could be code that's available so that people could audit it and see what was actually happening. I don't necessarily think that either one of those is going to happen, and so we're kind of stuck with this opaque wall of not really knowing what's happening and not knowing how much we can trust what's actually happening.
[00:21:45.198] Kavya Pearlman: You know, Kent, having been there (actually I was a consultant, not a Facebook employee, quote unquote), having been around the Facebook employees, I can guarantee you that 99% of the people, none of them are waking up in the morning and saying, okay, let me go to this place and do some evil. In fact, most of the people there are really nice people. They have good intentions, and they wake up every morning and they want to change the world. They want to make an impact. They want to influence billions of lives in a positive way. But what happens is, when you have this massive, large scale, there are tons of priorities that you have to juggle. Am I going to talk about encryption on WhatsApp? Am I going to tackle India's election problems and how forwards are causing people to commit suicide, or all kinds of other issues? So there is a huge priority list that these companies have to work on, and the only thing that hasn't happened thus far is that nobody has come up and really just knocked at the door and said: hey, you know what, these small and medium-sized businesses, these developers, they really want to know what's up, what's happening, because something tremendously bad could happen. And my hope is that I'm right about that when I do start to have the concrete conversations. We're already actually in conversation with Facebook, and we're thinking about talking to Tim Cook at Apple, and Tim Sweeney, Viveport, all these other, you know, bigger giants that do have the potential to hoard data, to transfer data. All we really want to know is: tell us what happens with the data throughout the entire data lifecycle, because you have these devices involved, you're introducing your own APIs, your own things. And my hope is that we will be successful. We don't want full transparency. We don't want your Death Star blueprints. All we want to know is what you are doing with this data, because that will give us some sort of comfort level to make informed decisions. This will let these smaller applications decide what to do on their end. Like right now, there is this VR application, Bigscreen VR. A while back, I sent a message to Darshan on Discord. I was like, hey, what are you doing for privacy? His response was, oh, we're trying our best. That was his quote-unquote response. But what does that look like? What does "trying our best" look like? And now they have all this funding. But how can Darshan make a really good, informed decision when Darshan doesn't know what his side of the responsibility is, and what the Facebook side of the responsibility is, in terms of the Quest collecting data, in terms of any other headsets collecting this data and taking this data into their ecosystem? What's happening? That's what is needed. Do good business, make money. Who's stopping you? But be pragmatic. We need to do a better job with it, is what we are saying. I'm not claiming that these are evil people. I'm just saying we need to draw attention and put these issues higher on the priority list, because we are literally standing at a crossroads where, if we don't, then there will be tremendous consequences.
[00:25:07.141] Kent Bye: Well, I would love to know how many ethicists Facebook has embedded into these cross-disciplinary teams thinking about some of these different issues, especially lawyers who are looking at some of the long-term privacy implications. The thing that gives me the most fear is the third-party doctrine and its current status, which essentially says that any data that you give to a third party has no reasonable expectation of remaining private, which has two side effects. If the government were to go to Facebook five to ten years from now and say, show me all the emotional profile data from Kent Bye about this content that he's been looking at, and if they have a subpoena and there's enough of a court case to ask for that data, then Facebook would have to give that data over. But over time, it actually weakens the Fourth Amendment, because any data that you give to a third party collectively weakens our Fourth Amendment protection rights for this data to remain private. Which means that the more we give our biometric data, our eye-tracking data, our galvanic skin response, our EEG data, our brainwaves, our emotional reactions through our face, all of that is now fair game for the government to start to collect, because they've established a new norm, saying that the collective society has decided that this is no longer reasonably expected to be private. Which means now the government can get all that information and start surveilling us at that same level. So to me, I feel like there hasn't been as much thought about the implications of the third-party doctrine and how that needs to change, but also the implications of what that means for privacy in the long term.
[00:26:39.822] Kavya Pearlman: No, you're hinting at something very important; you're absolutely right. And that's the other pillar that needs to be addressed. I think this is the time that we start having this conversation with somebody with a technology mindset, somebody who is in the government arena, somebody who's willing to have these conversations and sort of get ahead of these challenges that we would potentially face. Absolutely. The other thing I'm thinking about is not just the government surveillance things. Think about what happened in the 2016 election. Think about that. Where did all the data come from? I mean, it's not just the Facebook thing, right? The DNC hack happened, right? So when you have these nation-state adversaries, when they know that, oh, certain folks or certain demographics have built this cool new technology, and now all this data sits over here or over there, what do you think an adversary is going to want to do? I mean, they're obviously going to go after it. And then the threat profile, which we already knew about, goes to a much higher level for any company that has the data. So by creating some sense of transparency around what's happening to the data, or minimizing that sort of data hoarding, you're not just preventing this surveillance capitalism or the government-related issue, you are preventing national security issues. Because the adversaries of certain countries, they will come after it. They are looking for this type of data. And if you store it, it will be hacked. Given enough resources, people will steal your secrets. I mean, the entire set of NSA offensive tools is out. Every bloody government is hacked, for all we know. So I have more concerns. The government surveillance thing is, yes, of course, massively concerning, but I think this could be an even bigger problem. Where we're already sort of losing a grip on the sense of reality, what's right, what's wrong, what's true, what's false, we could really lose it in terms of, like, who is who. We're talking about these whole digital twin copies; who do we trust? So an entire loss of trust in the XR domain would be so detrimental.
[00:29:05.405] Kent Bye: Yeah, the information warfare threats that are out there are also obviously a huge concern. And watching The Facebook Dilemma, which is a Frontline documentary, they had someone who had been a national defense advisor, and he said that Facebook has become this national security threat, for being hacked and having all this data be used by adversaries to basically conduct these different levels of information warfare on us. And he said that because it's like 2.7 billion people, it's the size of the network that makes it a target. So I see that there's this pendulum that swings back and forth between centralization and decentralization, and there's self-sovereign identity, finding ways to have the data be stored with the individual, whether it's through differential privacy or having some way of doing homomorphic encryption, or other approaches for being able to look at data while keeping it behind this encrypted wall so that you can at least keep it protected. I feel like there are a lot of emerging architectures for what you could do to mitigate and reduce that national security threat. And I guess the other thing that scares me is the potential there as well: at F8, Facebook announced that they want to eventually put iris scans and fingerprinting on this virtual reality headset. They're sort of framing it as a security feature, saying if somebody's going to pick up your headset and start to spoof your identity, they want to know that it's actually you and not someone else. But the side effect, of course, is that they always know, forever, who you are and who's using your headset. They're already taking your IP address, where you're at. I mean, if you look at the privacy policy and all the things that are in there, they can always identify where you're at and who you are. But once they have this biometric identifying information in there, then they're going to start to be able to tie everything that happens to your identity. Even if you have someone else use the headset, then they're going to be collecting biometric data on those people. So all this tying of the data to these specific individuals, while having it warehoused on a centralized server, just feels like a really bad idea and a potential huge national security threat. And I don't know if there are existing decentralized architectures being proposed that are scalable and a viable alternative, but it just seems like the current system, while there are economies of scale that make it cheaper to do it that way, has too many trade-offs in terms of the risks that are there, both from a privacy perspective but also for national security.
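One of the mitigations Kent mentions here is keeping sensitive data with the individual and only releasing protected values, for example via differential privacy. Below is a minimal, illustrative sketch of that idea: the device adds calibrated Laplace noise to a sensitive measurement before it ever leaves the headset. The gaze-dwell example, the epsilon value, and the sensitivity are all hypothetical choices for illustration, not any vendor's actual pipeline.

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def privatize(value: float, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Add Laplace(sensitivity / epsilon) noise so this single release is
    epsilon-differentially private (illustrative parameters only)."""
    return value + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    # Hypothetical example: seconds of gaze dwell on a piece of content this session.
    true_dwell_seconds = 12.0
    reported = privatize(true_dwell_seconds)
    print(f"True dwell: {true_dwell_seconds}s, reported (noised) dwell: {reported:.2f}s")
```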
[00:31:31.377] Kavya Pearlman: No, I agree. Yeah, exactly. And that's the part where I differ a little bit from you, because yes, you can have these alternate architectures and solutions, but that is still not going to cut it. I think we need to accept the fact that we would not be able to. I mean, if we haven't been able to get a decent answer about what's happening with the biometric data, I don't know how far we're going to get in terms of changing the entire architecture per se. So I am running on the assumption that bad things are inevitably going to happen, and I am hoping that we can do some kind of defense in depth, where, when one security control fails, which it inevitably will, what is our plan B? Plan B reminds me of Sheryl Sandberg. I mean, she's written this whole book about plan B, and if she has that kind of mindset, I'm sure that these folks would think about these challenges. I do want to say, though, it's not just Facebook. There are other companies, like Vive, for example, or Apple and Amazon. And there are so many other massive empires that are building and collecting data. You just hear the one name over and over. But I think it's a universal problem. And that's how the XR Safety Initiative, at least, is going to try to tackle it. It's not just a Facebook problem. It's not just an Amazon problem. It's all of our problem. All of us have to hold ourselves accountable and be honest for once, before we just throw our hands in the air and give out all of our data and let people do whatever the hell they want.
[00:33:17.430] Kent Bye: Yeah, I just wanted to take a step back and say that there are a lot of amazing things you can do with this data, and I'm excited to see what could happen from a medical perspective, from being able to get deep insights into yourself. Looking at these different ethical frameworks, I was struck by how there is this contrast between a more Kantian view centered on autonomy versus more of a utilitarian argument. The autonomy view is that you have your data, you own your data, and no one should be using your data against you in any way, against your consent. And the utilitarian view is that you are doing this exchange, almost like in a contract, where you agree to a terms of service, you have an adhesion contract, and then you get access to all of these services. And potentially, through collaborations with different academic institutions, there could be some benefit for the larger public good, in terms of things where they could use this technology for research or for helping people in the long run. But at the core of it is this business model where they are profiting off of having access to that information by selling you advertising. And so you have this autonomy question of, do you own your data? Do you control your data? Versus the utilitarian aspects of all the different services that are made available, and potentially made better, by contributing your data to all the AI algorithms so that over time they can have better AI and better services. So you have these different trade-offs, but at the same time, there are these different threats to autonomy, which, you know, when I think of safety and all these other things, are the unintended consequences of giving over your sovereignty and handing it over to Facebook in this sort of exchange, where you may be forgoing some of your rights to privacy when you are giving access to data. But in exchange, you get a service that people seem to like. They seem to like to have access to free things and to be able to communicate. So you pay for access by mortgaging your privacy, essentially, giving them access to this data. And sometimes people are perfectly happy with that, and they don't seem to have any problem with that trade-off. They like to get free stuff, and they don't see any unintended consequences that are negative for them, at least in their lives.
[00:35:24.050] Kavya Pearlman: So that's a very, very good point. And something that the XR Safety Initiative is really trying to tackle is: are you aware and well-informed enough to make those decisions, like when you're mortgaging your privacy? Are you aware that this could happen? Look at what happened with me when I was signing up for a Facebook account. Did I know that I was going to get death threats from my uncles? No, because I was not aware of these default settings. So that's the other piece we are trying to tackle: spreading awareness. And I'm glad to say that we literally decided to dedicate an entire platform to it. We're calling it Ready Hacker One. It's kind of like, you know, what you do in your journalistic domain: create awareness, talk about things that are looking bad. That's precisely the goal of Ready Hacker One: to go out there, find the facts, and let people be more informed and more aware that these are the potential consequences, these are the potential bad things that could happen when you step into these XR virtual environments, or you partner with XYZ company, or you put on this Quest headset. But there isn't a voice, a universal platform, that is literally dedicated to doing that. We have Forbes, for example, but it covers an entire universe. There is nothing dedicated just to talking about the implications of privacy, security, ethics, and those kinds of things in the XR domain. And that's why I'm really, really excited and hopeful that we will do it this way. We will also bring the users along with us so they are more aware as they're stepping into this. Because, you know, look at every single industry: when the bankers tried to do digital transformation, they threw so much money at technology, they threw so much money at hiring the intelligent brains, and what they forgot is the people. And then afterwards comes, oh, don't click on this phishing link, don't do this. That doesn't work. In AR and VR, we have to bring people along from the time they step into it; they have to know that, oh my gosh, this is something that could go wrong. Maybe we would build some kind of an MPAA-style rating, so we know that this experience is safe for a child, this experience is good for an elderly person. Those are the kinds of things that we need to build as we move forward. And for awareness, we already have Ready Hacker One. This awareness is going to bring more and more like-minded people together to do these things, because obviously we are only a few folks and we need more and more people to collaborate with us and do this work. And I've met with some really amazing folks at AWE this week. They are thinking about these things, and they were really excited to hear what we have to say, and they want to collaborate. And we just need more and more people. So hopefully we'll make some strides. I'm hopeful.
[00:38:23.369] Kent Bye: Yeah. Just curious, what types of threats are you seeing in terms of XR safety?
[00:38:29.247] Kavya Pearlman: Yeah, so good question. In fact, I'm glad that I'm able to answer at least part of it. Based on recent research that Ibrahim Baggili's team has done, we established about five threat categories. Starting with the first one, the user input threat: whatever you're able to feed into the system, how do you protect that data? When you talk about user interaction, that is another threat category. When you have these sorts of devices involved, whether it's an HMD or an augmented reality headset, the device is another threat category. So we've established about five so far, based on whatever we have learned. And I think the answers will change over time. But that's what we need to continue to do. We need to find these universally identifiable threat categories and make the developers, the CEOs, the CISOs aware of these things so they can make more pragmatic decisions. They can implement some controls around these threats that could potentially be exploited.
[00:39:30.615] Kent Bye: Well, the other thing that's been on my radar is that I went to the American Philosophical Association in January and listened to Dr. Anita Allen. She's a founder of the philosophy of privacy. And she gave this speech to the entire philosophical community saying that, essentially, we don't have a comprehensive framework for privacy. From a philosopher's perspective, they go off, read things, and come back with a comprehensive approach to what that idea is. And they tend to do this waterfall approach, I find, whereas technology moves at a more iterative pace. And so I've found that, being a technology journalist, I'm able to be at the cutting edge of what's happening in technology. And when I went to the philosophy conference and the philosophy of technology sessions, the most up-to-date citation that I heard was from about 10 years ago, from Nick Bostrom's Superintelligence book, and it was like, there's so much that's been happening. There was no mention of virtual reality or any of these contemporary issues of biometric privacy, or anything that's really at the heart of what most urgently needs some of these philosophical insights. But because it is lagging so far behind, you don't have these iterative frameworks to serve as a good-enough minimum viable philosophy of privacy out there. And so in about an hour or so, I'm going to be giving my presentation here at Augmented World Expo, talking about what I see as this landscape of moral dilemmas, which hopefully is a first iteration of a minimum viable ethical framework, as well as a framework for the philosophy of privacy, to start to look at these different contexts and start to differentiate what should be public and what should be private. But at the same time, because that hasn't been established or agreed upon by the community, it's sort of an open philosophical question, which means that it's left to each individual company to draw that line in terms of what they think should be private and what should be public. So it's left up to Facebook, Google, Amazon, Netflix, Snapchat, and Apple. They're the ones that are deciding, and they all have different ways of thinking about it. There's not any larger institutional feedback to press against. And that's what I found frustrating: when I would talk to people at Facebook and say, well, let's talk about biometric privacy, they'd take a very pragmatic approach and say, well, we don't have any devices right now that can collect any of this data, so we're not going to talk about it, even from a philosophical perspective. So they're not willing to think about the deeper philosophical implications of this, even though I know it's on the roadmap. Now that the Quest is launched, they're going to start architecting for the next generation of headsets. They're going to have all the technology, what's going to be in it. And from that, they're going to be designing the operating system that's plugging into it. So all the APIs, all the operating systems, all the next generation, the future of the headsets, are being designed, but it's under a veil of secrecy, such that even to get access to all that, you have to have an NDA to be able to see what they're doing. It's all invisible, and we can't see or have any sort of checks and balances. As we're starting to talk about these larger issues, there's a lot at stake, and it's completely opaque, non-transparent, and without accountability.
[00:42:34.230] Kavya Pearlman: Exactly. Yeah, I agree. However, I'm okay with NDAs, and I'm okay with keeping your quote-unquote organizational blueprints encrypted and stored away. What I'm not okay with is not being able to make pragmatic decisions if I am a small or medium business and I'm using the Facebook API or the Quest API, or I'm putting some of my applications on Viveport or any other distribution network for that matter. How do I make pragmatic decisions when I don't know what is your responsibility and what's mine? And that's the challenge. I don't want to know all your trade secrets. No. Nobody wants to know that. You can keep it. But over time it has been proven that some of these emerging empires ignore things; you know, they're not pragmatic. And by not providing this information to smaller or other organizations, they're really putting the entire domain at risk, because of their size, because of their scale, because we rely on them so much. I mean, every time I step into VR, I worry about who's listening to me and what they are building on me, and how they could potentially even use that against me if I ever tried to speak about things like that. And that worries me. And they could even make it look like it's not them, it's somebody else. It could go into sort of a criminal territory, for that matter, not knowing what's happening to my data. I would really, really like to know, so I could decide that, oh, before I step into this particular application, I should not allow the mic to be enabled, or I should disable some of these settings, or there should be some kind of a provision to disable, to delete, to get rid of some of the data that I don't want to share. Those kinds of things, you know, really frustrate me. But we'll see what happens. We have to start somewhere.
[00:44:31.702] Kent Bye: Yeah, there's a bit of permission fatigue, where with the installation of an application you give a blanket permission at the very beginning, and once it's turned on, then they have access to all those things. And there are no progressive permissions that are put forth. That's one of the things that Mozilla's Diane Hosfelt was talking about: how they want to make sure that you are consenting to each level of these different permissions. I know that I was looking through different reports about some patents that Facebook has for turning on your camera while you're looking at content on Instagram, to do passive camera capture of your emotional affect and your emotional saliency as you're looking at different content, so they're able to potentially correlate what you're looking at. You might not even know that they're recording your face as you're looking at stuff. And so there's that level of consent where I feel like there are different degrees: maybe if people were being asked, hey, we're going to be recording you, people might actually want to know that, up to and including being secretly recorded when you don't know it.
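To illustrate the progressive-permissions idea being discussed here, this is a minimal sketch of point-of-use consent: each capability is gated behind its own just-in-time prompt instead of one blanket grant at install time. The capability names and prompt wording are hypothetical, and this is not Mozilla's or any platform's actual API.

```python
from enum import Enum


class Capability(Enum):
    MICROPHONE = "microphone"
    CAMERA = "camera"
    EYE_TRACKING = "eye tracking"


class ConsentManager:
    """Toy progressive-consent model: ask for each capability at point of use and remember the answer."""

    def __init__(self):
        self._granted = {}  # Capability -> bool

    def request(self, capability: Capability, reason: str) -> bool:
        """Prompt the user the first time a capability is actually needed."""
        if capability not in self._granted:
            answer = input(f"Allow {capability.value}? Needed to: {reason} [y/N] ")
            self._granted[capability] = answer.strip().lower() == "y"
        return self._granted[capability]


if __name__ == "__main__":
    consent = ConsentManager()
    if consent.request(Capability.MICROPHONE, "enable voice chat in this room"):
        print("Voice chat enabled.")
    else:
        print("Voice chat declined; falling back to text chat.")
    # The camera or eye-tracking prompt only ever appears if a feature actually needs it.
```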
[00:45:30.213] Kavya Pearlman: Yeah, and go deeper, right? So not just, "if you don't consent to this, then we may not be able to provide you better services." Say it like it is: that if you consent to this, that means you are actually allowing us to record your deep psychographic profile. That's the transparency. It's not just, oh yeah, here's a consent checkbox, because that doesn't work. How is a person who has never really thought about technology, or about computers being these bits-and-bytes kinds of things, going to understand? In fact, I used to be a hairstylist. How is a hairstylist going to know what that really means? A pop-up box pops up in front of me as I'm installing the app, and I just check the box because I need those things. But if it literally says it, the whole UI/UX issue, if you design it so that it's almost like a warning before you step into this, then you fulfill your moral obligation to make this human aware of what he or she is about to step into. That's the kind of accountability that is needed, especially in the XR domain.
[00:46:37.010] Kent Bye: Well, as you were talking before, I was thinking about the unintended consequences of some of these systems, and how they come from not involving other people more closely and allowing them to participate. You have, in some ways, these threats where, for example, in the election leading up to 2016, there was a bit of an existential threat to democracy in the United States. In some ways, that's an externality. It's something that can't necessarily be turned into a number. It's a threat that has to be thought of and defended against, but being able to identify that there is a problem there, and to know how to fight against it, is a lot harder than once the vulnerability has been discovered and found. Then it becomes obvious, but it takes a lot of work to actually discover all those things and to come up with different risk mitigations against them. But if it's not something that can be quantified or put into your algorithm or put into your bottom line, then that responsibility to the larger ecosystem gets put onto... well, whose responsibility is that then?
[00:47:42.127] Kavya Pearlman: Exactly. I mean, if you're a developer, if you can't define a problem, you can't solve it. And you don't just have to define the problem. You then have to strategize how you are going to build your algorithm to solve this particular issue, and solve it in such a way that you don't just keep going into a loop or something. You have to have a concrete outcome from solving that problem. So that's really the approach we are taking with this initiative: defining the problem and then going after what parameters, what people, what processes should be involved, and what the net outcome is that we are trying to achieve. Yes, it is, quote-unquote, enabling trust in the XR domain, but what does that literally mean in terms of guidelines, in terms of bringing in the policymakers to now start having conversations about this era of constant reality capture, as we talk about? That's precisely what we're going to do. Yeah.
[00:48:37.827] Kent Bye: Well, we were both at the VR Privacy Summit. And there were about 50 people from all across the industry. And we had a great chat over the course of the day, with lots of discussion and brainstorming. But at the same time, there was a lot of talk about all this different stuff, and it was so vast and so large, without any sort of comprehensive framework to make sense of it, that it became hard to really come away from that with any specific action items, because it was just so vast and hard to wrap your mind around. And so when I saw Dr. Anita Allen say that, oh, by the way, there's not a comprehensive framework for privacy, it made so much sense to me. It just really hit me at a visceral level. It made me understand that that was why, in part, it was so difficult for the VR Privacy Summit to have any sort of clear actions or outcomes, because it was like, well, we're not going to, in the course of eight hours, come up with a comprehensive framework for privacy by just gathering 50 people together for a day. And at the same time, it also made me understand why it would have been so difficult for me to have these different conversations with Facebook or Google: because, well, they don't actually have any external guidance, so they have to kind of make it up as they go along. So I feel like, in order to engage policymakers within any sort of legal framework, you have to have, in some sense, some understanding of a comprehensive framework for privacy to tie together all the existing privacy laws, but also to help understand and contextualize all this new biometric data that has normally been in the context of medical information, with very strong regulations when it came to HIPAA. But now all of a sudden it's switching context from medical applications, and now all this information is going to be made available to all of these companies that are engaged in surveillance capitalism. So I feel like that's a huge context switch, but we don't necessarily have a framework to be able to say, okay, this should be public and this should be private.
[00:50:27.207] Kavya Pearlman: And there is a federal-level framework that is in process. In fact, I was sitting together with Tony Hudson, I believe is his name, from AREA, the Augmented Reality for Enterprise Alliance, and they actually took the time to contribute some comments about bringing augmented reality aspects into this currently developing privacy framework. It's a federal-level framework. And these are the kinds of efforts we're hoping to, you know, combine forces with; these are the kinds of efforts that will make a difference, or at least get us started. Obviously, we're not going to be able to boil the entire ocean, but we have to do something. And likewise in the EU, we have an EU strategic advisor. We're looking to partner up with people who are interested in tackling GDPR-ish types of issues. Now bring augmented reality and virtual reality into the context of GDPR. And when you start to talk about those kinds of issues, you can bring those policymakers to the table and have these conversations, because now we're talking about concrete outcomes, concrete impact of the data that is constantly being captured, the data that is going to be transferred transatlantically. And hopefully these conversations will materialize into something and we'll get some stuff done, build some sort of a framework, or at least incorporate some aspects of augmented reality, something that's just not even on the minds of these policymakers. But we have to interject, we have to do this inception, and push and push until they're really able to, you know, grasp the gravity of the consequences that could eventually be seen.
[00:52:00.958] Kent Bye: Oh, wow. That's great to hear. That's the first time that I'm hearing about this federal privacy framework. Do you happen to know any sort of context or background? Like, where is it coming from?
[00:52:09.729] Kavya Pearlman: Yeah. So the National Institute of Standards and Technology, NIST, is putting together this privacy framework effort. It's a federal-level privacy framework. Do you remember the government shutdown? That was around the end date for the comments. They were trying to receive comments from the public about what should go into this framework and how they should approach it. And they also had a couple of calls and webinars around it. So if you Google "NIST privacy framework," you will be able to find it. The challenge is that they are basing it around cybersecurity. They're basing it around the industries that they're familiar with. And that's where it is our responsibility to, you know, tap them on the shoulder and input those comments. And I'm so proud that AREA took the time to actually go in and put some of those augmented reality aspects into it, and to continue to contribute, be part of it, and advise. Because they're not going to dream of our industry facing these challenges. We have to educate the policymakers. We have to kind of step in and just be like, you know what, I know there's an empty chair there. I'm just going to show up, sit there, and I'm going to talk about it. Maybe there are 10 people who would listen. Maybe there would be one. But we will talk about it until somebody is able to listen to us, yeah.
[00:53:24.689] Kent Bye: Do you know if they have anything about biometric data privacy within this NIST effort?
[00:53:29.028] Kavya Pearlman: I really hope that somebody has made those comments. I haven't thoroughly reviewed the comments, but all of these comments are public. So I think that is a good question. We should look into it and absolutely educate them about these unintended consequences that could develop. Yeah, we should incorporate that.
[00:53:51.108] Kent Bye: Well, speaking of getting people involved, what can people listening to this do to get in touch with you, get involved with the larger initiatives at the Open AR Cloud, or get involved with the specific things that you're doing at your XR Safety Initiative?
[00:54:06.187] Kavya Pearlman: So like I said earlier, we're taking a four-pillar approach. The first is academia. If you are a student who is curious about conducting cybersecurity, privacy, or ethics-related research in the XR domain, then we have Ibrahim Baggili, and we are trying to educate people around the world and establish these virtual reality labs at different universities. The goal is to raise money and funding so that we can redistribute that money to build those labs, conduct that research, document these things, and then be able to, as an industry, make those pragmatic decisions. That's academia. So if you're a student, if you're a university, you definitely want to contact the XR Safety Initiative, sign up for these things, and express your interest, so that when the funding becomes available, which it will, because I will go after it with passion to enable these things to happen, we can actually distribute it to different universities. The second piece is organizations. You know, it's really nice and easy to say the future is private. Well, I'm going to go talk to these people who are saying the future is private, and I'm going to say, hey, let's make the present private. Let's talk about some of the data. Let's document what that data looks like and potentially hold these people accountable. And if that's really the intention, which is what I'm hearing in the public relations aspect, then let's put a ring on it, let's materialize it, let's put some funding into these aspects, let's put some money into these privacy frameworks for augmented reality and XR. So hopefully, anybody who goes into PR mode and says privacy, privacy, well, give us some money, we want to do some good work and we want to help the XR domain keep the trust. Hopefully some of these larger organizations will listen to our voice and be willing to contribute. And on the other side, with nonprofits that are already working in this domain, we don't want to spin up a whole new apparatus. For example, Open AR Cloud has these working groups, so we're literally just going to tap into that and start working together with them, because it's not about the ego. It's really just about being able to solve as many problems as we can and come together. The third piece is this whole thing that you are already doing with your tremendous amount of work: journalism, activism, and awareness. And for that, there is a whole dedicated platform, Ready Hacker One. We are going to invite contributors to talk about these issues and make people aware. Even if you're a small little shop with a tiny app or a game, if you discover something that is so disturbing that it impacts the entire domain, let's put that out there for everybody to be aware of so that they can make more informed decisions. So an awareness campaign, so we don't leave the users behind. And then the final one, the fourth, that I definitely want to touch on is the government aspect. So we have this $4.8 million HoloLens project that the government tapped into, so we need to talk to the government side of things as well, so that when they do things in secret, at least there is some sort of accountability there and we're not just completely doomed, with them inside these closed walls doing whatever the hell they want. We need to educate them about this as well, or bring them to the table to talk about some of these challenges.
[00:57:29.932] Kent Bye: Yeah, well, you talked at the top of this interview about your awakening moment for privacy, but also alluded to the VR Privacy Summit, where we had this big gathering talking about all these various issues. Can you talk about the moment when you personally decided to start this XR safety and security initiative?
[00:57:48.845] Kavya Pearlman: That came out of my conversation with Ibrahim Baggili. I was literally looking to do something, and I gave him a call and was like, hey, I came across your research, you found all these novel attacks, what do you intend to do with it? Abe's whole thing is about academia, and as we started to talk within that call, we had already formed the XR Safety Initiative, because we were convinced that with his passion for research and documenting these things, and my passion for security, privacy, and these sorts of ethical aspects, if we combined those forces, and there would be others as well, we could really make an impact. So that was the critical call when this whole thing was formed.
[00:58:35.392] Kent Bye: And then what happened after that?
[00:58:37.886] Kavya Pearlman: What happened after that is we started thinking about how to do simple things like websites, and about who else is going to be our ally. In fact, I thought about you, Kent. I was like, I have to talk to Kent. You have been one of the contributors to making this into a reality, and this goes back to before I joined Linden Lab and started tapping into the VR domain. In fact, do you know that we actually met in VR? Remember that time you were at Drax's?
[00:59:08.447] Kent Bye: He did a recording or an interview with me, yeah.
[00:59:10.028] Kavya Pearlman: Something like that. You were there. And then I asked you about a couple of these privacy and security-related aspects. And you pointed me to your episode.
[00:59:17.753] Kent Bye: I did an interview with Ebbe Altberg. It was actually one of my first interviews about privacy. What happened, I think, was around SVVR 2016, and I think I might have been talking to Azad Balabanian, who was just mentioning his own personal concerns about privacy and security. And I just started hearing kind of a buzz at that conference, people being like, oh, wait, what's happening? I think Will Mason of Upload VR had written an article saying that there were some sort of talkback features that had been integrated into Oculus, and then it got the attention of Al Franken. There was a big uproar within the VR community about the potential privacy risks, and the senators actually sent Facebook a message asking for all this information, so there had been this back-and-forth that had been catalyzed by that article by Will Mason. When I was at SVVR 2016, people were starting to really talk about it. And because I'm on the front lines talking to people all the time, it was like this organic story that had been starting to emerge from the community. And then I had this whole interview with Ebbe Altberg, the CEO of Linden Lab, and I think I really grilled him on privacy. It was maybe my first interview about privacy: I had been hearing from other people, and then he was the first kind of official person that I was talking to about it. But yeah, that sort of started me on that journey for the last three-plus years now, talking to the community, listening to what concerns have been emerging. And this was all before the big explosion with Cambridge Analytica and the whole thing with the election. So it was like two or three years before all that happened, but it was because I was embedded in the conversation in the community and listening to what was emerging.
[01:01:00.770] Kavya Pearlman: Yeah, and that's exactly right. So when you ask me when I actually formed this, you know, the brewing of all these ideas kind of started with you a little bit, where you directed me to some of your previous conversations. Those became sort of input points to start thinking, oh my gosh, this is really true. Because prior to that, really, I didn't feel so compelled to go into VR and think about it in this way. I wasn't personally so impacted, and I wasn't able to see this domain in this slightly paranoid light. Listening to your talks, as well as learning all about Abe's research and exchanging some of these ideas with a couple of other researchers, changed my mind. I'm like, you know what, I am done talking, now it's time for action, because it's really just time. In fact, I even looked around. I looked for people that were doing similar things, and there isn't a single entity that I can think of, at least at this minute. And if there are some, then please connect them to us, because together we have a lot of work to do and we need to do it. If people are thinking about privacy ratings, for example, if they're building apps so you could easily identify what you're getting into, any kind of privacy-related or ethics-related work that is being done. Yes, of course, there are a few people doing that. In fact, I tweeted out a few days ago asking who else is doing the kind of work that Kent Bye is doing. I'm essentially trying to look for people that we could combine forces with and try to tackle some of these problems together, because I alone, or my team alone, isn't going to be able to do all of this, but we need to do a lot more than what currently seems possible.
[01:02:49.090] Kent Bye: That warms my heart to know that those conversations led to that, because part of my philosophy around this is to continue the conversation, to continue talking to people. And right before we came to Augmented World Expo, there was a whole panel that we both participated in as part of the Open AR Cloud, and I felt like this paranoid, dystopic guy saying, we've got to act, this is the danger time. I'm usually a pretty optimistic guy who really likes to think about the ultimate potential of VR, but there's something about this issue, about talking to enough people and understanding the depth of the concern and the threats. It's like a part of my own moral intuition in my body is feeling compelled to at least speak out about it. And I'm so relieved that I don't have to be the paranoid, doomsday-scenario, dystopic-potential guy. Now I can point everybody to the XRSI initiative and all this work of the Open AR Cloud, so people can maybe use a little bit of that fear to get involved and actually make a difference. Because at the end of the day, it's going to come down to continuing the conversations and building and actually architecting and creating something different. So I'm just extremely happy that you've created a platform that I can point people to so they can actually get involved and make a difference.
[01:04:07.720] Kavya Pearlman: Yeah. Thank you so much, by the way, for, you know, giving this platform a voice as well, because that's exactly what we need. We need to make people aware that we exist, and we need to bring people together to work with us and contribute to some of these issues. Specifically on the developer side of things, because we need feedback from the community, from the development community. Specifically, when you talk about data, we need to understand the data sets that are being used. What does that look like at a database level? In fact, one of the very next steps we are going to take, one of the very first initiatives, a concrete objective that I've put on my plate, is to really go and define XR data and sensitive XR data. You have PCI data, HIPAA data, all of these types of things exist. But in our industry, we haven't thus far been able to identify: what is XR data? What is biometric data? And those answers can't come from me. I'm a security person. Those answers have to come from the developers, people who are literally sitting and coding and utilizing these data types, these data sets. And I'm even going as far as talking to some of the people on the AI technology side of things, because they are dealing with a ton of data sets. So data scientists, developers. That's our very first preliminary initiative: to really document what XR data is, base it on what happens when this data gets lost or exploited, and based on that risk, be able to quantify and then inform people as to what is XR data and what is sensitive XR data. Hopefully that will give us enough of a foundation to then start going further into, okay, how do we store sensitive data? How do we store non-sensitive data? So things are building up. I mean, these are things that we need a lot of people to work on. A lot of diverse voices are needed.
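To give a concrete sense of what that first classification pass might look like, here is a minimal sketch of an XR data inventory tagged by sensitivity and breach impact. The tiers, data type names, and storage guidance below are hypothetical placeholders for illustration only, not anything XRSI has published or standardized.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sensitivity tiers, for illustration only (not an XRSI standard).
class Sensitivity(Enum):
    PUBLIC = 1      # e.g. anonymized, aggregate usage statistics
    PERSONAL = 2    # e.g. account details, IP address
    BIOMETRIC = 3   # e.g. gaze, gait, EEG: hard to rotate or revoke once leaked

@dataclass
class XRDataType:
    name: str
    sensitivity: Sensitivity
    breach_impact: str  # what happens if this data is lost or exploited

# A first-pass inventory a developer might contribute to this kind of effort.
XR_DATA_INVENTORY = [
    XRDataType("session_duration", Sensitivity.PUBLIC,
               "low: reveals usage patterns only in aggregate"),
    XRDataType("room_scale_geometry", Sensitivity.PERSONAL,
               "medium: maps the layout of the user's physical home"),
    XRDataType("eye_tracking_gaze", Sensitivity.BIOMETRIC,
               "high: can infer attention, interests, possibly health conditions"),
]

def storage_guidance(data_type: XRDataType) -> str:
    """Toy rule: derive storage and handling requirements from the sensitivity tier."""
    if data_type.sensitivity is Sensitivity.BIOMETRIC:
        return "encrypt at rest, minimize retention, keep on device where possible"
    if data_type.sensitivity is Sensitivity.PERSONAL:
        return "encrypt at rest, restrict access, document the retention period"
    return "standard handling"

for item in XR_DATA_INVENTORY:
    print(f"{item.name}: {item.sensitivity.name} -> {storage_guidance(item)}")
```

Even a toy inventory like this shows the sequencing Pearlman describes: classify the data first, and the answers to how sensitive and non-sensitive data should be stored fall out of that classification.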
[01:05:57.813] Kent Bye: Yeah, I'd say to also definitely check in with the neuroscientists and the people who are doing frontline biometric data capture, things like EEG, EMG, and galvanic skin response. I feel like there are a lot of sensors used in neuroscience, psychology, and social psychology that haven't made it into the actual hardware for virtual reality. But it's coming, and it's just a matter of time before all these things are fused together. So it's worth getting involved with the latest technologies and sensors coming from a neuroscience perspective, and all the research coming out of neuroscience, because I feel like all of that is going to be XR technology. All these immersive technologies are going to be fused together and integrated to be able to figure out all sorts of stuff.
[01:06:39.008] Kavya Pearlman: And that's good advice. One thing is that as we do this, we're going to have to keep it open, you know, like version one, version two. So as we talk about this overall XR umbrella, we're absolutely going to leave room for improvements and new versioning as new things get introduced, and scope it out so it is clearly understandable who it applies to. Does it apply to this particular aspect of AR or VR, or does it apply to the entire XR domain? So those are the things that we are obviously going to work on as we figure out what's next.
[01:07:10.959] Kent Bye: So for you, what are some of the biggest open questions you're trying to answer or open problems you're trying to solve?
[01:07:19.002] Kavya Pearlman: Open problems? I think we just talked about all of these. I zoom out quite a bit when I sit around and think about problems. I zoom out and I think about, you know, hey, the first thing I want to do is keep the trust, and then I trickle it down further into: what does that trust mean? Oh, safety. OK, what does safety mean? If I can't keep my IP address private, somebody could come in and track me down. And if I'm in VR and completely unaware of my surroundings, somebody could literally kill me. So it starts with trust, then comes safety, then come privacy, security, and ethics. You know, I'm from India, and I have dealt with my fair share of the inevitable harassment that you get from being an Indian female in India, or at least I used to. And all of my experiences then start to come together and combine into this: okay, what does that look like at the code level? What is the data type? And then you trickle it down further into, okay, does that mean we need to build a policy? Does that mean we need to now talk to a regulator? But until we have these foundational pieces in place, we won't get there. So the first key question is to classify the data. If you talk about bottom-up or top-down, that's kind of where my mind is at. And the entire team actually agrees that this is a good approach to go about doing things.
[01:08:47.538] Kent Bye: Great. And finally, what do you think the ultimate potential of immersive technologies is, and what might they be able to enable?
[01:08:57.822] Kavya Pearlman: Exactly what you believe. The immersive technologies are literally going to transform the universe, the way we see everything. And I say universe because it's not just planet Earth. We are going to be able to visit space because of these immersive technologies without having to physically go there. So these immersive technologies are going to change the way we behave, we think, we train, we learn, and I am so excited and looking forward to it.
[01:09:27.582] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the immersive community?
[01:09:34.595] Kavya Pearlman: I would like to thank everybody who has thus far enabled me to come and think through these issues, especially you and many other people, like Jessica Outlaw, Ibrahim Baggili, my friend Regine Bonneau, and Marco Magnano, a journalist who's going to be owning Ready Hacker One. So all of my team, plus so many of these unheard voices that I have been noticing. They have all been contributing to the enablement, to the conception of the XR Safety Initiative. And I'm really thankful that people put themselves at some risk, still feel morally obligated, and still do it. Because it is a hard thing to speak the truth, but we must.
[01:10:22.462] Kent Bye: Awesome. Well, Kavya, I just want to thank you for joining me today, and thank you for starting this XR Safety Initiative. I think it's going to be a key part of creating an institution where people can focus their energies, help do the research and the funding, and think about these deeper issues at an institutional level. So thank you for joining me and for doing all the work that you're doing. Thank you.
[01:10:41.987] Kavya Pearlman: Thank you so much. Appreciate it.
[01:10:44.705] Kent Bye: So that was Kavya Pearlman. She's the co-founder of the XR Safety Initiative. So I have a number of different takeaways from this interview. First of all, I'm just genuinely really excited that there are other institutions coming up now that are going to take this on, really give it its due, and focus on it full time: going out there talking to policymakers, and acting as a liaison between these big major companies and independent developers to figure out, okay, who's responsible for what here? You're an independent developer, and you're interfacing with different aspects of the overall ecosystem of these headsets. You have the operating system, you have the actual hardware, you have the software that is being taken care of by these big companies, and then you have the software developers. And there are different security vectors and safety considerations at each of those different levels: the app developer, the platform, the hardware, the operating system. So there are all these different security vulnerabilities that need to be addressed and looked at for these different threat vectors. So she's collaborating with the security researcher Ibrahim Baggili, who as an academic is already looking at these different security threat vectors. And part of what Kavya wants to do is to expand this out, to get other universities involved, to raise more money, to have more students looking at these different threat vectors, and to build up a whole XR security community in academia. So that's one of the big initiatives that she's doing. She also wants to have these different conversations with policymakers and lawmakers, which I think is a huge need, to be engaged with some of these initiatives that are happening. I mean, this is the first time that I heard of this NIST privacy framework, which sounds like it's still in the process of being developed, but it would be good to have more people from the XR community actually download it, take a look at it, and see what kinds of things could be added to what is trying to be a risk-mitigating framework for these enterprises, and to see how both augmented reality and virtual reality can start to be added into the mix. And the more there are different protocols for these enterprises to follow, the easier it is going to be to have the adoption of these XR technologies. I actually proposed to Greenlight Insights to do a whole talk specifically looking at different privacy and ethical issues for the enterprise, and I'm going to be giving a talk at the XR Strategy Conference, the XR Week that's happening from October 16th to 18th. I'm going to be there. That's actually a really good conference that's really focused on different people from the enterprise, and I always have a good time going there and seeing what's happening within different levels of enterprise training and other business applications for XR. I did give a talk laying out my broad ethical framework at the Augmented World Expo. It's about an hour long or so, and I was able to cover dozens and dozens of different moral dilemmas within mixed reality. I hope to be airing that on the podcast here soon, and you can take a look at the YouTube video. I'm also going to be hosting a panel at SIGGRAPH on Monday, July 29th, from 10:45 AM to 12:15 PM, and I'm going to have Mozilla, Magic Leap, Venn Agency, and 6D.ai on the panel.
And we're going to be talking about some of the ethical and privacy implications of mixed reality, really digging into some of the technical architecture that is needed in order to address some of these privacy issues. So if you're going to be at SIGGRAPH, definitely come by on Monday, July 29th. So just going back to the XR Safety Initiative. You know, she's going to be working with academia and working with the government. She's working with these organizations directly, trying to see what the data is, how to define what XR data is, and what the data structures are. And she's also going to be working with nonprofits like Open AR Cloud and other collaborators. So if you're working on any of these issues of ethics or privacy or safety and security, then reach out to the XR Safety Initiative at xrsi.org, use the contact form, and get in touch. It's still very early, and the thing that is so amazing about the XR community is that this is just an initiative where Kavya saw there was a real need for it to happen. It sounds like they're still in the process of raising money and actually making it viable, but this is a venture where she's really going out there and trying to make a difference. And if more people reach out and just say, I love what you're doing, I really want to support it and be involved in whatever way I can, I think that's a way that you can start to take action, and if you're interested in these topics, to actually get involved and take part. And then there's also the Open AR Cloud, which I know has a number of different working groups that are looking at a variety of these other issues. Decentralization is something that came up a number of times in this conversation, and I'm actually going to be going to the Decentralized Web Camp again this year. Last year it was just a summit put on by the Internet Archive, and this year they're actually having a whole weekend camping environment with all these people that are trying to build the future of these decentralized architectures. I mentioned things like differential privacy and homomorphic encryption, and, you know, I actually don't know what those decentralized architectures would even potentially look like. I know I've talked to Kaliya Young about self-sovereign identity and some of these W3C standards to try to have at least some sort of decentralized architecture so you could start to take more ownership over different aspects of your identity. But in terms of what the actual devices are and where the data is actually being stored, the unfortunate thing is that everything seems to be moving into the cloud. And unless we change the third-party doctrine, then this is just going to be an absolute nightmare for privacy. It already is. The way that the laws are written right now, under the third-party doctrine, any data that you give to a third party has no reasonable expectation of remaining private, which, again, has those two side effects. One is that if the government goes to those entities, then they have to hand over that data. And two, it kind of collectively weakens our rights to privacy, because we're saying, as a culture, we're giving over this data and we don't care if other people get a hold of it or not.
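Since differential privacy gets name-checked here without much unpacking, here is a minimal sketch of the core mechanism: adding calibrated Laplace noise to an aggregate query so that no single user's record can be inferred from the answer. The epsilon value and the gaze-event query below are made-up placeholders for illustration, not any particular platform's implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list, epsilon: float = 0.5) -> float:
    """Answer a counting query ("how many users did X?") with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one user changes the
    true count by at most 1, so Laplace noise with scale 1/epsilon is enough to
    mask any single user's contribution.
    """
    return sum(records) + laplace_noise(1.0 / epsilon)

# Toy example with entirely fabricated data: did each user trigger a given gaze event?
gaze_events = [random.random() < 0.3 for _ in range(1000)]
print(f"true count: {sum(gaze_events)}, private count: {private_count(gaze_events):.1f}")
```

The smaller the epsilon, the more noise is added and the stronger the guarantee; real deployments would combine a mechanism like this with the encryption, access control, and data-minimization practices discussed elsewhere in this conversation.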
And we may not think too much about it when it comes to our photos or emails or whatnot, but once it gets into our biometric data, then you're starting to get into some really intimate insights into what your preferences are. I mean, you start to get into some really Big Brother stuff. Once all this data is made available through biometrics, if you're okay with the government having access to that and doing whatever they want with it, well, I think those are some of the policy-level threats, and this is just something that needs to be changed. So I hope that there are other people who are also taking on this third-party doctrine to get it changed. And if that's changed, then you could start to think about the other aspects of data sovereignty and data ownership. There is this trade-off between your own rights to autonomy, your rights to your own data and what data is yours, and then this utilitarian exchange that you're doing with these companies, and we have to try to find out what those balances are. We've kind of seen a lot of the unintended consequences of what happens when all of the psychographic profiles and the data get into the wrong hands. I think that's one of the points that Kavya was making: she was at Facebook during this time, and you see what happened with the DNC hack. You combine those very specific targeted email lists, with these names and locations and emails, and then you match that up with these Facebook profiles, and then you're able to conduct information warfare based upon the information that you're putting together from two disparate sources. And so for anybody that is storing data, that could start to potentially become a national security threat. If there's a state actor that catches wind that this data exists, they're going to have a lot of ways of trying to get access to it. And just the way that security works online, there's always going to be some security hole that goes unfound for months and months, bugs in the code, different new attack vectors. So you can't guarantee 100% safety and security with anything that you're storing, and we kind of need to assume that any data that you're warehousing could one day end up on the dark web, where somebody could take that information and start to match it up with other things as well. So I think it comes down to bringing awareness to some of these issues and having some level of accountability. I really love what Mozilla is doing in terms of just publishing all their code so you can audit it and see what they're doing. That's a level of transparency that I don't expect anybody taking a walled-garden type of approach to match. But we have to start to think about whether there are other levels of accountability that we want to ask for and what that might even look like. I'm not even sure what that looks like. Maybe a good step in the right direction would be to just open up the dialogue, to be able to talk honestly about it, to talk about whether or not they have ethicists and philosophers and lawyers embedded within these different teams who are looking at these different issues. Because I get the sense that they don't have a lot of interdisciplinary ethicists working hand in hand to be able to think through some of these potential scenarios.
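Since several of these points turn on how data from disparate sources gets matched up, here is a minimal, entirely fabricated sketch of a linkage attack: joining a leaked contact list with scraped profile data on a shared email key. The records and fields are invented; the point is only to show why any stored identifier becomes far more revealing in combination.

```python
# Minimal illustration of a linkage attack: two datasets that each look like
# harmless metadata become far more revealing once joined on a shared key.
# Every record and field here is fabricated for illustration only.

leaked_email_list = [
    {"email": "alex@example.com", "city": "Portland"},
    {"email": "sam@example.com", "city": "Austin"},
]

scraped_profiles = [
    {"email": "alex@example.com", "interests": ["vr", "local politics"]},
    {"email": "sam@example.com", "interests": ["fitness", "firearms"]},
]

# Join on the shared email key: location plus inferred interests is already
# enough to start building a targeted profile from two disparate sources.
profiles_by_email = {profile["email"]: profile for profile in scraped_profiles}
for record in leaked_email_list:
    match = profiles_by_email.get(record["email"])
    if match:
        print(f"{record['email']} in {record['city']} -> {match['interests']}")
```

Swap the interests field for gaze patterns or gait signatures and the same join becomes a biometric re-identification problem, which is why the XR data classification question discussed earlier matters so much.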
Part of the role of playing out these potential Black Mirror scenarios, these worst-case scenarios, is to start to figure out some of the ways that we can build up some of that resilience. And I think in a lot of ways, that's what the XR Safety Initiative is trying to do: a little bit of that red teaming, trying to figure out what some of the different vulnerabilities are. Like, if you get a hold of my chaperone system and change it, and you're able to make these subtle changes that do something like redirected walking, you can start to guide and direct me in these very subtle ways, and you have these attack vectors where you could, as Kavya said, literally start to put lives in danger at that point. So like Kavya said, there's not a silver bullet here. There's not a single thing that can be done by either XRSI or anyone else, but it feels like it's a part of the conversation that they're continuing. So overall, I'm just super happy and relieved that this is even happening and that Kavya is going out and doing this. I think it's amazing that she's taking the leap to do this type of work. And I just hope that she's able to raise the money that she needs, continue to build this community, and have more people take on some of these deeper issues of ethics and morality and privacy and safety within the larger XR ecosystem. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends. You know, this is an independent journalism project. This is my own community service that I'm doing for the benefit of the larger XR community, and I rely upon donations from listeners in order to continue to bring this coverage. I hope that it's clear by listening to this podcast what type of impact this work has had over the last five years in helping to track this issue and get it to the point where it is today, where there are other organizations building up around it to kind of take the torch and carry it into the next steps. I hope to continue to do what I can from a journalistic perspective, but I think the work that I'm doing here at the Voices of VR is getting out there and having a real impact. So I'd love to have a lot more support to not only continue what I'm doing and make it ultimately sustainable, but for me to continue to thrive and grow and expand out and provide other features as well. So you can become a member and donate today at patreon.com slash voices of VR. Thanks for listening.