I caught up with XR Safety Initiative CEO & Founder Kavya Pearlman as well as Kristina Podnar, an independent Digital Policy Consultant, XRSI’s Global Digital Policy Advisor, & the chair of XRSI’s Metaverse Reality Check. We talked about the landscape of tech policy issues, including child safety in VR and what Antony Vitillo has called “the kids issue” within VR: the influx of children under 13 years old after Christmas. We also talked about privacy and the overall tech landscape, and recapped their recent XR Safety Week.
XRSI’s core mission is to “inspire and catalyze the safe use of extended reality (XR).” XRSI is spread across many different contextual domains, including consumer awareness campaigns like XR Safety Week, the Cyber XR Coalition advocating for Diversity & Inclusion in the XR industry, XR Data Classification roundtables, publishing an XR Privacy Framework, the Child Safety Initiative, their new Metaverse Reality Check as an oversight board for the Metaverse, the Medical XR Advisory Council, and an immersive journalism platform with Ready Hacker One. On top of all of that, they say that they’re also advising governments and legislators on XR policy.
XRSI also has a wide range of positions and strategies that can be hard to classify within the broader tech policy ecosystem. Most of the time XRSI sounds like a consumer advocacy organization focused on issues of child safety, privacy, or cybersecurity. But at other times Pearlman will advocate for more experimental blockchain strategies, such as using Decentralized Autonomous Organizations to take collective action on privacy issues. And at still other times they will emphasize prioritizing innovation or uniform approaches to legislation, which could amount to a more business-friendly position depending on the specifics of the proposal.
Podnar mentioned that she and XRSI were working with the Uniform Commercial Code and Emerging Technologies Committee and the American Law Institute on policies around blockchain ownership of virtual goods, and Pearlman has said on Twitter that XRSI is working with the Uniform Law Commission and the International Institute for the Unification of Private Law to help close the gap in regulation for virtual worlds.
(Update: January 7, 2022) Pearlman clarified the scope of their work by saying, “Uniform Commercial Code and Emerging Technologies: The American Law Institute and the Uniform Law Commission joint committee is advised via recommending amendments and revisions to the Uniform Commercial Code with a view to accommodate emerging technological developments such as digital currency and virtual goods. XRSI contributions are specifically focused on commerce in the Metaverse and the legal impact of emerging technologies such as Non-Fungible Tokens(NFT) and cryptocurrencies. New laws are expected to be made public by Fall 2022.”
Broadly speaking, each of these organizations aspires to create a uniform approach by drafting model laws that could potentially be adopted by state legislators in order to simplify the regulatory landscape.
After this interview, I looked more deeply into one of the Uniform Law Commission’s previous privacy proposals, the Uniform Personal Data Protection Act (UPDPA), and found that it was widely panned by privacy advocates. I’m not convinced that uniformity of laws is always the best design goal, especially if it prioritizes businesses’ needs over consumer protection, as appears to be the case with the Uniform Law Commission’s UPDPA.
(Update: January 7, 2022) The scope of XRSI’s collaboration with the Uniform Law Commission and the American Law Institute appears to be limited to advising the Uniform Commercial Code and Emerging Technologies joint committee on amendments and revisions around virtual goods, digital currencies, cryptocurrencies, and NFTs. Drafts of these proposals will be made available in the Fall of 2022, which is when the full scope of this effort can be evaluated through the lens of consumer advocacy versus the interests of businesses. Hopefully XRSI’s feedback will help tip the scales toward legislation that adds additional protections for consumers.
All of these discussions are also happening within the context of a technology pacing gap & the Collingridge Dilemma, where technology is advancing far faster than tech policy can keep up with understanding or regulating it. No single consumer advocacy group, trade organization, company, or legislative body has fully figured out a comprehensive strategy for how to close this technology pacing gap, or how to find the equilibrium point within the Collingridge Dilemma that balances the unpredictability of technological innovation with the legislative desire to prevent harms without inadvertently stifling that innovation.
So it is within this broader context that XRSI’s consumer awareness campaigns like XR Safety Week are very much a needed part of the larger process of informing the public, XR developers, lawmakers, and businesses about some of the biggest harms from immersive technologies.
XRSI passed along some additional show notes and reference links to some of the other frameworks and specific policy issues:
- Securing the Metaverse Research Paper: https://www.sisostds.org/DesktopModules/Bring2mind/DMX/API/Entries/Download?Command=Core_Download&EntryId=52969&PortalId=0&TabId=105
- Securing the Metaverse Slide Deck: https://www.sisostds.org/DesktopModules/Bring2mind/DMX/API/Entries/Download?Command=Core_Download&EntryId=52979&PortalId=0&TabId=105
- XRSI’s Child Safety Initiative: https://xrsi.org/programs/child-safety
- California Consumer Privacy Act (CCPA): https://oag.ca.gov/privacy/ccpa
- Protection of Personal Information Act (POPIA): https://popia.co.za
- Children’s Online Privacy Protection Act (COPPA): https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule
- Personal Information Protection Law (PIPL): http://www.npc.gov.cn/npc/c30834/202108/a8c4e3672c74491a80b53a172bb753fe.shtml and translation at https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/
- Metaverse Reality Check (MRC): A global oversight board for and by citizens: http://metaverserealitycheck.org
- Uniform Commercial Code and Emerging Technologies: The American Law Institute and the Uniform Law Commission joint committee
- Basic Online Safety Expectations (BOSE) Determination 2021, Safety by Design (SbD) tools, and Industry Codes established by the Australian Parliament, and the positioning statement for immersive technologies by the eSafety Commissioner of Australia
- CAMRA Act and Child Safety in Emerging Technologies: the Children and Media Research Advancement (CAMRA) Act, H.R. 1367, proposed by Senator Ed Markey
- Medical XR Positioning Statement by the NHS (UK) and several governing bodies: The Growing Value of XR in Healthcare report
- Kids PRIVCY Act and COPPA Reform: US Rep. Kathy Castor, along with several US legislators, was advised on the Kids PRIVCY Act to strengthen the Children’s Online Privacy Protection Act of 1998 (COPPA) to keep children safe online. XRSI issued a support statement along with several recommendations to bring XR and emerging technologies into scope.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. Well, first of all, Happy New Year, everybody. It's 2022, and I wanted to start off the year with talking about some of the hot topics in ethics. Two big topics, first around privacy, I think, which is a perennial topic that I've been covering for a number of years, and I wanted to unpack some of the latest discussions about that. But also, there's been, in the light of Christmas, there's been an influx of lots of people within VR, including lots of kids, kids under 13. So, technically, you have to be at least 13 to be able to use VR because of COPPA compliance. In order to track users within the websites and whatnot, you have to be at least 13. But yet, there's still lots of kids who end up in these different virtual online spaces, maybe in situations they shouldn't be, which can put them in danger. But there's also aspects of kids being in different games that degrades the overall experience if people aren't really mature enough to be able to handle some of the different types of social interactions. So there's a number of different angles there, but I wanted to talk with the XR Safety Initiative, which has been doing the whole child safety awareness initiative. Just wanted to hear from them some of the deeper context of some of the discussions around child safety and this debate around how to deal with this influx of kids. There's both making the parents aware of these different issues, but also what are the responsibilities in terms of Meta to start to either do age verification, age appropriate design, or actually enforce having like a minimum age. And just before we start to dive in, just a couple other comments.
First of all, we live in a situation where the technology is moving so quickly that in order to come up with these tech policy suggestions, it's difficult to come up with the conceptual frameworks around these or even to then at that point, come up with what the policy should be. We know that this is going to be an exponentially growing technology, but how to best handle some of those different issues. For me, the biggest issue is privacy. And what are we going to do with that? I just wanted to get XRSI's take on that battle around our privacy and biometric psychography and agency. Also, a comment, as far as I can tell, XRSI is a volunteer organization, mostly. There are some people who are full-time paid staff, but they also rely upon a lot of different advisors that are coming in and speaking on behalf of XRSI, but also they may have a separate thing on their own. Today's podcast we have Kavya Pearlman, the founder and CEO of XRSI, but also Kristina Podnar, who happens to be a digital policy consultant. She's also XRSI's digital policy advisor, and also the chair of the new entity called the Metaverse Reality Check. And so just to make a comment on that as I have this conversation, is that as a digital policy consultant, Kristina is working with companies to be able to help navigate these different policies. There's, in some ways, an incentive to try to create policy structure that is uniform so that it's easier for companies to be able to navigate that, but also to be proactive in trying to set up laws so that there's more clarity as they continue to innovate in these spaces. But for me, as I hear some of these different discussions, it's sometimes difficult to differentiate between what a consumer privacy advocate perspective would be and what would be a position that would be more beneficial to a business. So there's a little bit of weird mix sometimes with some of the different discussions as we have this conversation about a variety of different issues.
And because XRSI is mostly working in the realm of awareness and promoting these different conversations, there's not specific policy positions that they're putting forth, as an example. They may be helping develop them, but they're in collaboration with others, and they're not always commenting on other policy positions that are out there, so it's sometimes difficult to know where they stand on specific nuances of some of the different policy discussions. So that's just sort of a caveat as we start to dive into this discussion. I'll be unpacking more details at the end in terms of my takeaways of the different stuff that we talk about. But just in general, I think there's a lot of ways in which the technology is moving so quickly and it's hard to fully understand all different things. And there's this gap between the ways in which the technology is moving forward and what we do in order to protect some of these underlying aspects of not only our privacy and security and safety, but to create these larger frameworks to help understand and guide and inform. And as we move forward, what the best way to reduce harm, but also doesn't stifle innovation too much. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Kavya and Kristina happened on Monday, January 3rd, 2022. So with that, let's go ahead and dive right in.
[00:04:27.244] Kavya Pearlman: First of all, thank you for having us, Kent. This is the best way to start the year, to resume this conversation. A lot has happened in the past few years since we last talked. I'm Kavya Pearlman, founder and CEO of XR Safety Initiative, XRSI. We have been making quite a bit of stride with the mission of helping build safe and inclusive immersive environments. And as we know, now we have a massive label to put around it, aka Metaverse, which has kind of taken over the internet in sort of a hype cycle, pretty much. So now we are kind of busy, you know, initially we were very focused on XR and still we are, but now all of these other intersections of XR because of the evolution and emergence and anticipation of metaverse have really taken front and center stage. So really excited to resume our conversation and happy to bring in my colleague, who has also been really instrumental in shaping the mission of XRSI, Kristina.
[00:05:35.751] Kristina Podnar: Thanks first, Kent and Kavya, for allowing me to come and geek out with you. Certainly excited to be here as we start the new year. I am a digital policy consultant by training and by trade. I've been through the battlefields and can show my scars if you need them, but really excited to be hanging out with Kavya and all of the really great folks over at XRSI. I function as XRSI's global digital policy advisor, and I'm also very honored and humbled to be chairing the Metaverse Reality Check, which is focused on advising regulators, helping governing entities, supporting individuals, and also holding big technology corporations accountable for creating a metaverse that's safe for citizens and for society. So I think it's a really important mission, certainly one that's only starting and one that is going to need a lot of voices around the table. So looking forward to having those voices and making sure that we're really enabling proactive measures for ownership going forward, but also interoperability and inclusivity and making sure that we're aware of the harms and the benefits and weighing those together, so that we don't make really crazy decisions as we have historically in technologies that preclude folks from using them or allow a few to really benefit at the peril of many. Really looking forward to that and happy to be discussing that today.
[00:06:58.906] Kent Bye: Yeah, well, I know Kavya, you and I have talked before about your journey into XRSI, but I think it's probably worth recapping a little bit of your own personal story, your background and your journey into this space, as well as Christina and your background and your journey into this point and both of you as you're working together on XRSI. So just to give a little bit more context as your background and your journey into this space.
[00:07:20.351] Kavya Pearlman: Totally, Kent. And yeah, and you know, that is a bit significant. In fact, over the past few years I constantly revisit these experiences that I heavily lean on to establish a lot of positions, share my views, as well as just, you know, contribute to the space. So in 2011, I was barely a hairstylist cutting hair for $10 an hour. That's when I read Cyber War by Richard Clarke, and then got busy sort of preparing for that cyber war that I anticipated. And I got a master's in network security from Chicago, and I was like, okay, if I'm going to do tech, then I got to move to California. Fast forward to 2016, I really just found myself doing third party security for Facebook. And as we know, you know, the biggest scandal in the US, actually global history, the Cambridge Analytica data scandal, kind of took place during those circumstances. And right after that, I happened to be the head of security for Linden Lab, which some people know is the maker of Second Life, the oldest existing virtual world. And I recall clearly going into that space and feeling like, oh, wow, because I had evaluated a few virtual reality security stuff. And then being the head of security for that virtual world, bringing in another VR platform really just gave me a good grasp on what is going to come for us. And now we fast forward, we are looking back at it and I look back at it and I'm like, wow, the real technological risks that I had to evaluate during 2016 and then combining that with what are the risks associated with emerging and evolving virtual worlds, including virtual reality. I mean, that was just like, I just happened to be in the right time, right place. But there was a lot of learning. And I just basically took that forward along with a lot of brilliant minds that decided to come together, formed XR Safety Initiative. And we started in 2019, early 2019.
And here we are, starting from basic taxonomy all the way to now, we're currently advising five global governments, including one of the prominent ones around the globe. So it's been quite an exciting journey from where I started. And I'm really humbled and grateful that I happen to have this role. And people have taken to the fact that sometimes they're like, yo, CyberGuardian, this and that, and they tag me for these issues. So I feel pretty grateful assuming that role for the industry.
[00:09:56.245] Kristina Podnar: And I don't have nearly as eloquent of a background as Kavya does. I actually am one of the people who back in the early days of the web, I think it was 98, when I came out of grad school, got my first job as a manager, which sounded so eloquent for a startup. But I cried on my first day at work because I didn't know how to write ColdFusion. If you remember the days of ColdFusion, nor was I good at CGI scripts, nor hardening servers. And so I grew up in the space doing a lot of crazy things. I single-handedly brought down the St. Jude website for a whole day. I managed to pass a lot of credit card numbers on behalf of UNICEF to banks unencrypted and did a lot of crazy things because it was the early days of the web when we could. And so I grew up in a space where it was the wild, wild west, but as time evolved, I realized that we needed to start growing up and we shouldn't be doing crazy things. And certainly large brands shouldn't be doing crazy things because this thing that was called the web and subsequently all of the other digital channels is a thing and it has good sides and bad sides. And if we aren't mindful, then the risks that we create, whether to large brands or to individuals, tends to outweigh the benefits. And so I slowly but surely started to focus on governance. and got to play a number of roles throughout the digital space. I'm a horrible developer, but I did develop Java for a while. So I've been through the whole life cycle, including deploying really, really large scale mobile devices, et cetera, before really starting to step into the XR space slowly but surely because large brands were heading there. And so I really started to see patterns, the same patterns that we've been making in web one, web two, now we're actually starting to make them as we head into whatever the new horizon is going to be.
And so my background really as a digital policy Sherpa, where I help people weigh the risks and the opportunities and make sound decisions, whether it's for an entity or for individuals or for society, seemed to align very closely with XRSI and what Kavya and others have been trying to do for years now, and beat the drum that says there is no reason to make mistakes because we can be smarter. We've made historical decisions that we can learn from. And so I'm looking at this from a very perhaps different perspective because I've been inside of brands for many years now. I work with Global 2000s and with NGOs. And I've seen a lot of organizations make really good decisions. So when I see headlines of things that are not really the best decisions, I feel like we need to call those out and help people find their way, whether it's an entity or an individual. So really excited to be pointing now back to XRSI. Delighted that I joined last year and really starting to advise more and more folks in the space.
[00:12:40.675] Kent Bye: Yeah. And as I've personally been looking at a lot of these similar issues of ethics, I feel like that there's a number of layers that you can start to address some of these different issues. And so, you know, Lessig's approach, he talks about how there's the cultural norms, and then at some point there's the laws that get set within the context of government. And then there's the economics that happens within the context of those laws. And then at some point there's the underlying technological architecture and the code. I feel like that Lessig's approach is saying any ethical issue is going to be some combination of each of these, that there's going to be technological architectures. There's going to be things that are going to be market dynamic issues. Maybe things are solved by a competitor who does a different approach, or maybe there's laws that need to get passed in terms of tech policy to be able to address some of these issues. And then the core for me, at least, is that there's these cultural values that are driving all these other things, whether that's the values of the company that is doing certain business models or the value of the culture that is able to educate themselves to be able to navigate all these different things and to figure out the best way to address these issues. And I know that XRSI has been trying to, I guess, address these either through addressing the tech policy angles or trying to talk about these larger issues that maybe there needs to be tech architectures. 
So I'd love to hear for the last couple of years, since the last time we chatted, what you see as your own approach for how to address the biggest harms that are potentially out there and what's the best way to start to do that, whether it's through putting white papers out or consulting with governments or trying to gather folks together to talk about these issues to start to then drive a larger conversation around ethics within this larger XR community.
[00:14:18.837] Kavya Pearlman: Yeah, Kent. And I just quickly want to go back to Kristina. That's an awesome background. I must officially say that. And I've never admitted this, but in one of my jobs, when I moved to California, on my day one, I actually literally, and I'm not ashamed to admit that, I literally Googled the ISO 27001 framework on day one, making sure nobody was looking, because that's what I was tasked to do. And a year later, that company was ISO certified and stuff. Totally like you, you know, going into these uncharted territories and just taking them head on. And likewise, you know, I can totally relate. Going back to your question, Kent, I'm glad that I'm able to actually answer that. And so it goes something like this: in 2019, when we started the mission, we were like, okay, I was leaning on my Facebook and of course the other experience. Every single time I had to personally evaluate risks related to any technologies. Sometimes we had to address like Israelis trying to bring a tracking system within the Facebook campus and whatnot. So the very first thing that was done was to understand what kind of data we're dealing with. So this was the premise of like, okay, what is the first thing we should do? We should really understand the kind of data these XR ecosystems will consume, share, whatever. Because once you establish the level of sensitivity of the data, then we may be able to understand some levels of risks, and then all kinds of stuff. So that was the first stuff that we started in 2019, but very quickly realized that we're not even speaking the same language. And I remember those times where two things were very prominent. In fact, I remember that slide I even made for Augmented World Expo in 2019, we were the unheard voices of XR. I had some of your tweets and Avi Bar-Zeev's and a couple of other experts' tweets and myself included.
So that was one thing: people didn't quite understand the domain, and understand that this absolutely must be addressed. So we've tried to create that sense of awareness that you were already leading, you know, with Voices of VR and telling people what virtual reality is, what privacy stuff. And then the second thing that we noticed was we are not speaking the same language. So taxonomy: how do we even address the risk if we're not even understanding what is XR? So that was another thing, standardizing some of those terms, taking from all these experts that have already spoken, Timoni West and a bunch of other people, Microsoft had put together the virtual reality continuum. So this was a remarkable lesson: that we must establish a common understanding. And then what happened was this layer of onion that kept peeling, and we realized quickly, oh wait, there is this massive issue of diversity and inclusion, something that I had personally faced when I received my badge of honor getting fired from Linden Lab. It was along the same lines when minority voices are often just not heard. And so we came up with this diversity and inclusion coalition, CyberXR. From CyberXR, very quickly we realized, as we were building these immersive technology standards that are now adopted by the government of Australia to create their positioning statement, that children are also at risk and there needs to be very specific focus on kids. So then came out this child safety initiative from CyberXR itself. And then along the same lines, once I was in Vancouver giving a talk and somebody approached me that we really need your help in the medical domain. And then it was very clear that if we are going to be able to get any handle on XR data, it's along the lines of medical, because that is the kind of sophisticated data. It's not because we shouldn't just share data, but in the medical context, we must share the data. So that seemed a very exciting challenge.
And so we formed the Medical XR Council that currently advises the British health services and many other governments and medical entities. And we had been talking about it for a while. I had this interview on 60 Minutes with Laurie Segall, where she was asking me, is the metaverse real? And in April 2021 I was like, whether it is real or not, this is, you know, something, believe it or not, it is coming for us. And so are the risks, for children as well. And so we kind of got busy preparing for what we knew was coming. It was the emergence and evolution of this confluence of various technologies, and fast forward, that's what we had, you know, with a bunch of other programs and awareness campaigns, like XR Safety Week and whatnot. On Human Rights Day, the 73rd Human Rights Day, we brought into effect the Metaverse Reality Check because we noticed there is a vacuum where, you know, some of the regulators are still like, hey, can you ban Finsta? I mean, that sounds absurd, but, you know, sometimes that's the level of understanding we have to live with, that our politicians have. And so we saw this vacuum, there needed to be something, and of course Kristina had been advising the government of Australia, and some of the CAMRA Act was another one. Kristina, I'm sure you can talk more about those things because that's kind of where the focus shifted a little bit: we are trying to create awareness, not just individual awareness, like to the parents and children, but just like developers, organizations, regulators. And I think that's going to be the role of the MRC. And I'm really excited that we arrived there. And I'm not sure how many more programs stem from it, but really exciting, you know, looking forward to what is to come.
[00:20:22.358] Kristina Podnar: Yeah, absolutely. And just to tie on to what Kavya was talking about, you know, XRSI has been so busy for years now, not a lack of work. So if anybody else wants to join and sign up to roll up their sleeves, certainly welcome them. But Kent, you've been talking in this space for a long time as well. And as you know, there's a slow adoption curve. As Kavya mentioned, we're really trying to raise awareness across the board. We've been very fortunate to have folks like Senator Ed Markey help bring us on board and do advisory around the CAMRA Act and the Child Safety and Emerging Technologies arena, or working with the eSafety Commissioner in the Australian Parliament, just really helping to pull together increased awareness and increased regulations that aren't there to hinder creativity or innovation, and I think that's the one thing that we really need to emphasize. At XRSI, really, yes, we are about identifying the harms and we do want to come up with a common dictionary that we can all use to talk about what are the risks, what are the harms, what are the safety aspects that we should be considering. But we also are very much about having and really fostering innovation and creativity, but you can really only do that when you have this framework within which people can run and create, knowing that they're not going to break things, right? We all have guardrails that we have on highways these days, we have speed limits. And there's a reason for that. It's not there to hinder us from rolling back the top of our car and enjoying the day in the sun, but it's really there to ensure that we're not doing things that are going to be an oopsies down the road. And quite frankly, I think I'm dating myself here, but been around the block a few times. And over the last few decades, we've made a lot of mistakes. And there's really no reason to perpetuate those going forward.
So it's been very exciting, not just to see this regulatory aspect coming out of XRSI in terms of support, but also working with folks like the American Law Institute and working on the uniform commercial code and emerging technologies arena so that we can talk about, you know, what does technological development such as digital currency and virtual goods look like? And so lots of opportunities, been very busy with a number of things. And a lot of that we're now basically bringing under the single umbrella of the MRC because we're looking at the metaverse. And like Kavya said, if we wait for it to quote unquote materialize itself, it's going to be too late, right? Because this is the moment in time when we're actually creating the metaverse. This is where it's actually starting to unfold. This is where decisions will be made that will impact us in seven to 10, 20 years down the road. And so this is the time to be adopting a common language, a common terminology so we can understand what we mean when we say things, and also so we can understand what it is actually that we're building. Not only can we go ahead and build things that are safe, that respect the individual and society, that allow that freedom and creativity I talked about, but also ensuring that we can actually get economies of scale. Because Kent, as you know, if you come up with a great idea, why should I have to reinvent the wheel? I can partner with you and build on top of your great idea. And that's really where a lot of great innovation historically has come, is from this cumulative thought process. But we need to ensure that it's done in a context that makes sense and that it's the right context for everybody, not just a few.
[00:23:34.032] Kent Bye: Yeah, I think that as I look at this issue of the relationship between technology and the larger society, there's the process of understanding what the technology does and how it changes and shapes our culture. But then at some point, if it goes too far, then that's where the tech policy comes in. And then what I've seen over the years is that there's a desire to not have legislation too early that's going to stifle innovation. And so you kind of let things play out with a kind of hands-off approach and let the technologists do what they're going to do. And then I guess there are certain points when it goes up to scale and starts to shift the larger culture in certain ways, and with the lack of antitrust enforcement into the technology companies, you essentially have these companies that end up having larger networks and more power and influence than some of the governments themselves, which then creates these other dynamics of the relationship between technology and governance, not only within the United States, but around the world. And so I see this dilemma of tech policy, what Thomas Metzinger calls the technology pacing gap: the degree to which the technology is evolving and changing so quickly that the conceptual frameworks around it are lagging behind anywhere from five, 10, 15, 20 years sometimes, in terms of the types of concepts and ideas of what's happening in technology, and then even to understand what's happening. Then on top of that, try to come up with tech policy to start to rein different stuff in. I'm curious how XRSI is thinking about what the biggest issues are right now, and how do you start to navigate this dilemma of this exponentially increasing complexity of these immersive technologies relative to the existing tech policy frameworks that we have within the United States.
There's the EU, which seems to have a little bit of a different approach that generated something like GDPR, which feels like, again, five to 10 years ahead of where we're at here in the United States. But yeah, I'm just curious to hear how you start to think about this relationship between the technology and the need to put in some of these different technology policies, taking into account all these other dynamics I just sort of laid out here.
[00:25:34.580] Kristina Podnar: Yeah. Kavya, do you want me to go on this one or you want to go first?
[00:25:37.421] Kavya Pearlman: Totally. Yeah. And I just want to add to Kent's comment: I go back to what we heard from Julie Inman Grant, the eSafety Commissioner. She said that the industry is having its seatbelt moment and we need to kind of gear up. But Kristina, totally, you know, you're the policy sherpa. These are all the things that you're already working on. So please go for it.
[00:26:00.868] Kristina Podnar: Well, I think, first of all, you know, Kent, you're 100 percent spot on, which is, you know, we look at the EU and we say, oh, they're leading the way with GDPR. I would argue that other countries have already been thinking about different types of security frameworks for a while as well. I mean, POPIA, which is in South Africa, was started around the same time as the EU's GDPR, but really didn't come into effect until this last year, because they couldn't enforce anything. In China, we've actually seen the predecessor to PIPL also, you know, discussed very early on, as far back as 13 years ago. And so I think that folks are thinking about this. It's just a matter of, like, when do you pull the trigger and start to regulate? And I think you're absolutely right. If you regulate too early, right, as the traditional thought goes, you're going to stifle innovation. But I think the problem here is that that's how we've thought about things in a legacy context. Now that we're moving into a new realm of emerging technologies that appear sooner than we can all envision or see them coming, we need a more proactive way of addressing them rather than reactive. Reactive was fine in an era where Christina Soul Handley brought down the St. Jude website, because nobody died. The website was down for eight hours, but nobody died, nobody was harmed, maybe a few visitors were inconvenienced. But the lives of little children weren't at stake, whereas now they actually are. And we're very fortunate, I think, that we have folks like US Rep Kathy Castor, for example, who allowed us to contribute to the Kids Privacy Act and the COPPA reform that's in front of Congress right now on the Hill. And so, like Kavya said, we actually have the commissioner in Australia who also perked her head up very early on and said, look, is this something that we should be reacting to, or should we be proactive in this space? And so, in my mind, we need to be very proactive.
We need to be looking not just at GDPR, at LGPD, at POPIA, at PIPEDA, at CCPA, you know, Virginia's upcoming privacy law, and saying, hey, where are we on all of these, which are very reactive laws? Or do we need to actually look proactively ahead and say, can we introduce a privacy concept for the United States wholeheartedly so we don't have a tapestry of individual laws? Can we look at the duty of care law that's on the books and update it, or maybe even look at Section 230 and update it? And so I think we need a more proactive approach. And certainly what I've seen in my experience advising corporations is that they appreciate it, right? They appreciate understanding what's coming down the pike. They're more than happy to adopt reasonable risk-and-balance mechanisms, but regulations always, always have lagged. And we're in an era where we can't afford that anymore, because if we allow them to lag, we're actually going to do more harm than good. And I think organizations, and we've seen this also from the industry, including Meta, have said, no, we actually do welcome more regulation. We do look for guidance, and it doesn't have to be hard-and-fast laws. It could be guiding principles at first that create that safe framework that allows us to create and innovate without breaking things, if you will. And so in my mind, we need to go down this path and do it very quickly, because otherwise we are going to be in a position where we're going to do things that we can't come back from. And we're already starting to see that, especially if we look at Asia and some of the governments that are jumping on board with the metaverse concept. So once you build it, you build it. And whether it's going to break something or not, you don't know. And we don't have the luxury of looking back in a year or two and saying, whoopsies.
[00:29:19.172] Kavya Pearlman: Yeah, and just to add to that, I think what's really different now is this confluence of a multitude of technologies. It was very similar before, but traditionally, and I go back to my cybersecurity background, the threats prior to what we're facing right now had remained a little bit dissociated. We played games kind of like virtual worlds right now, but we were not living in them. But now what's happened is that this threat surface, which used to be traditionally the nodes, the computers, the servers, and the network, is now our living room, with augmented reality being so compelling and amazing that you can pretty much put a hologram in your living room or immerse yourself with all these virtual objects and virtual worlds. Likewise, our brain, because of the convergence with brain-computer interfaces. That's another very prominent intersection that we have to acknowledge and then get ready for: what happens to human agency when we are confronting these technologies that can potentially even read your mind, in a way that you don't quite understand the consequences of. Again, that goes back to data, the kind of data that we are dealing with. And then let's think about this huge rise of NFTs being talked about, even though the idea has been around in Second Life for a while now. It may not have been termed NFTs per se, but it's virtual goods. So we're wrapping our heads around what the consequences of this confluence, this convergence, are. Individually, these technologies are sort of uncharted territory, but there is a lot to think about regarding what happens when these things converge. Specifically, another intersection, which is really massive, is artificial intelligence. So when we talk about this convergence, then things become exponentially dangerous because we don't quite understand what that may result in. A few things that we do know: it would result in a digital divide, which we have traditionally seen with the internet.
It would result in issues like, you know, what we saw last year, when Facebook's AI had labeled Black men as primates. So, like, you know, these kinds of unsupervised learning models would continue to bake those biases in, and then that would perpetually, inevitably go into these virtual worlds where we are collecting data in real time and decisions are being made. And I am for contextualized AI, but somebody has to mitigate the risk of being labeled or banned or any kind of harm that may be done, because in immersive environments, especially when you talk about children, you can't unsee things. Or when we talk about women, like harassment of women and bullying and all of this, you can't undo those things. So that's why there was a very specific effort towards prevention of harm and proactive mitigation of risk. All of this is going to have to be front and center, along with carefully crafting and positioning regulation where it needs to be. So yeah, going back to Australia's eSafety Commissioner's point: this is our seatbelt moment, where we need to absolutely gear up.
[00:32:47.305] Kristina Podnar: And I guess, you know, to me, I kind of think about this too, Kent. I mean, like what happens if, you know, there's blockchain theft or fraud? What if I'm actually calling the police? I'm calling 911 in the United States and saying, Hey, you know, I've actually had this theft occur. How are they going to trace and recover a digital asset? Are they going to be able to do that? And that's something that is very likely to come up for many instances and users and use cases in the coming 10, 12 months.
[00:33:14.865] Kent Bye: Yeah, well, just from what we talked about, we covered a lot of different areas. There was everything from AI and the existing debates that are happening in the EU right now in terms of some AI legislation that may start to form some foundational aspects or concepts that then start to feed into XR. But then there's other aspects of privacy, which I want to just bookmark, in terms of that's something that is a big topic I've been looking into, and I think that's probably one of the biggest hot topics. There's harassment, which I feel like there's a lot of things happening at the technological architecture layer for harassment that are mitigating aspects like that, and moderation, things like that, and being able to ban people and create personal safety bubbles. Maybe there's things at the policy level to be able to address some of that, but I suspect that there's going to be a lot of burden put onto the technology companies in terms of moderation and creating safe online spaces. But in terms of children, I wanted to maybe just dig into that a little bit more, because that's a topic that's been coming up a lot within VR, especially in the aftermath of Christmas, with parents maybe giving their kids who are less than 13 years old an Oculus Quest 2, or I should say, Meta Quest 2. So I had a conversation earlier this year with some parents on Clubhouse, and I feel like there's dimensions of the pandemic, the context of the pandemic, which has maybe shifted some of the thinking around something like COPPA compliance, which is trying to have anybody that's using the technology be at least 13 years old. And if not, then there's all these precautions that are made. As an example, in Rec Room, if you're a junior account and you're less than 13 years old, you can't actually talk to anybody when you're in these environments.
And so you're kind of muted, and there's all the restrictions that are there in terms of trying to prevent harm from happening for folks who are less than 13 years old. But I remember just talking to different parents on Clubhouse saying, look, we're in the middle of a pandemic. And this is a matter of my child's mental health, that they're able to communicate with their friends. And the only way we can do that right now is through these technologies. Most often it was through Discord. So you have this whole influx of kids that are less than 13 who are interfacing with their other friends in Discord, but we're already in this realm of parents trying to navigate oversight and making sure that they're safe. And now we have VR come along, and it's much harder to understand what's happening. So I'm just curious how you start to navigate this whole concept of child safety within the context of the pandemic, and especially when it comes to VR and the unknown aspects of whether or not VR, with the vergence-accommodation conflicts, is going to potentially even shift someone's development of perception. As a larger idea, there's probably not enough research to really make some firm statements as to where that line is for how to navigate this. There's been both the COPPA compliance of 13 years old, but also just these unanswered questions in terms of the physiological impact of the technology on kids, especially extended use on kids. So yeah, just curious to hear some of your thoughts on this issue of child safety and kids less than 13 years old using XR, and where you start to either address it from a tech policy level, or is this a larger cultural issue that needs to have conversations and education around it?
[00:36:19.677] Kavya Pearlman: Yeah, and I think somewhere in there, Kent, you kind of nailed it: we don't have enough research. And before I go further, I do want to make it very, very clear that at the moment, for the lack of research, we at XRSI still recommend following manufacturer guidance, and currently we only rely on COPPA. So my response to those very important and urgent questions is that we need more research. And the CAMRA Act, which literally, you know, is the Children and Media Research Advancement Act, would allow the National Institutes of Health to put together a program to understand media's effects on infants, children, and adolescents, on their cognitive, physical, social, and emotional development. That is precisely the direction necessary. And again, of course, we do have to rely on policy, because that $14 million that is supposed to be freed up, I think $15 million in 2022, can't come from any other angle, at least not for XRSI and many research organizations. We do have to lean on some of these bills and governance, all the way to having that understanding. One other thing that organizations should do is something that we started: a child safety framework. And this is going to be a heavy focus for XRSI, to further develop this framework. Now, I already mentioned that we still stick with 13-plus because we don't have any further fact-based guidance as of now. But something could come out of some of this research, as well as this framework, which kind of leans on some of the guiding principles coming from Australia's eSafety Commissioner. And then the other thing that came into effect in September 2021: the UK's ICO rolled out something called the Age Appropriate Design Code.
And so we saw a heavy re-architecting of Instagram, for example. Whether that was because of the Facebook Files disclosures, it was mainly due to this particular guidance, the UK ICO's Age Appropriate Design Code, which literally lays out, you know, four or five categories of different age guidance. And we can utilize, again, contextualized AI to truly understand what the age of the person is, but not to deny them the experience, rather to guide the experience. So if the child is under 13, given that not just dependency, but also a harmonious relationship must be established with technology, because it's not like we can just ban children from using these virtual worlds. Like, kids are in Roblox and they're even putting in money and putting in their effort and building these economies alongside. So instead of really just outright banning things, because 13-plus is apparently an arbitrary number somebody decided on, we should investigate it and establish what the age-appropriate guidance is. We should investigate dark patterns. It takes about 12 steps to cancel something, and one single click to sign up for it. Those kinds of things should be more investigated and researched, and then the impact on various different stages of a child's development understood. And then there needs to be this sort of shared responsibility. Traditionally, what we've seen even now, like, some folks tag me on Twitter, and then they're like, oh, isn't it supposed to be the parents' responsibility? Well, I say no to that, because parents are already busy, and let's be honest, some kids don't have parents. If you go to India and some of these other countries, the kids exist there too, and some of the parents don't have the luxury of even understanding these technologies.
So the responsibility, when it comes to children, needs to be shared. Among the regulators, proactively, albeit almost quite late in this context, we need to understand the impact the technology would have, make these logical decisions, and put them in the law once we understand the data, leaning on researchers and scientists. Especially, you know, this is one of the things that is really awesome for us at XRSI: we have a Medical XR Advisory Council where we have pediatricians who are leading this child safety framework, tackling it from a medical perspective, because it has got to be that way. We have to understand the cognitive and psychological impact, which goes all the way to the brain development of a child. Then shift the responsibility onto organizations that have traditionally been like, oh, 13-plus on a box that nobody ever looks at. Once you open whatever Quest or any other device, Vive or whatever, you just throw it away. And then there is literally no warning for children, or there is no literal walkthrough to shape their experience. I remember in 2016, I evaluated an API for Oculus Stories, which was all about, this was literally just the MPAA guidelines from traditional media, from the movies. And that API literally goes through and assigns a rating for Oculus Stories experiences. Well, Oculus Stories no longer exists, but we need something similar. And likewise, oftentimes I've seen parents are guided like, oh yeah, have something on your phone, mirror whatever your child is seeing, or mirror it on your computer and keep an eye on it. Well, I say no to that too. We need something like a Netflix, you know, kids mode, or an age-appropriate mode that should potentially be automated. We already know so much of a profile of the person who is ingesting this content, so why aren't we utilizing these technologies?
And I'm not advocating for profiling everybody, but I'm advocating for utilizing these technologies to create trust, to create safety, to proactively mitigate some of these pedophilic activities that could go on if we ignore age play or age-appropriate design measures, or allow everybody to participate in kids' games. Why should that be allowed? Or if there is some kind of grooming activity going on, a contextualized AI could actually really mitigate some of these risks that come along. Those are some of the measures that we're super keen to investigate and then put together in some kind of a framework that can not only give companies some guidance, but also educate regulators as to what should happen with respect to children's safety. And certainly this arbitrary 13-plus number is going to go as the COPPA reform takes place and as we learn more, but it's probably going to take some time before we arrive at some other, more segmented age limits for things.
[00:43:08.979] Kristina Podnar: I would echo that. I'm excited to have CAMRA come into place, as well as the modifications to COPPA. But I think that we need to also be thinking about this, and Kent, you said this earlier: it can't be a single-ingredient recipe, right? It's going to have to be a combination of things. And so in addition to everything that Kavya said, I would also throw in the role of educational institutions and educators. But at the end of the day, I don't think you can just say 13-plus on a label and expect that kids are going to conform to it. I always think back: I have a teenager now, but a few years ago he wasn't yet quite into the teens, he was certainly under 13. And in my household, we had banned all social media access until our child turned 13. And it turns out, you know, my son went behind my back and created an Instagram account. I found out about it through a friend of his. And at first I was going to be very, very angry. I was thinking about really, really evil ways to punish him. And when he came clean to me, he said, well, I did create an Instagram account, but I used my throwaway email address. I gave a fake birth date. I didn't upload anything, because I knew that you'd be upset about the metadata that was ingrained. I didn't give my actual location, and I haven't started using it yet. And I looked at him, and any ideas of grounding him or punishing him for long periods of time immediately flew out the window, because I thought, wow, he actually thought about this. He's been listening to what I've been saying. And I felt very privileged to have had these conversations since my son was two years old, when I started explaining to him that the doctor's office doesn't need a social security number on a piece of paper when we go to visit the pediatrician. But back to Kavya's point, not everybody has a mom who geeks out on digital policy in their household and can teach them this.
I look at our education system and I wonder, why aren't the schools teaching this? I mean, for heaven's sake, my son goes to a public school where his email address, which by the way is hosted on the Google ecosystem, is the same number that's used as a student ID, which is the same number on his transcript, which is the same number used in the cafeteria to buy lunch, which has basically become a social security number. And what that's really done is groom children to not think about their privacy, right? To not think about how they're linking their everyday privacy into the digital space. And then I extrapolate that and think about the future of consent and what that's going to look like, from the iPhones of today to when we actually leap into the metaverse. And if I think about it, the metaverse is all about one-time consent, right? It's intended to be a continuous immersive experience, and seeking consent each time you engage in an activity seems impractical. So what is that going to look like? And what is this going to mean for children? I think we have to really start not just thinking about studies, because we do definitely need the studies and the research, but we also have to mobilize. We have to mobilize as individuals, not just as parents, but as people in an extended position in a child's life, because a child might not have a parent; they might have somebody else who's in a position of trust who might be able to teach them or at least be part of a solution. That could be a pediatrician, it could be somebody in the healthcare system, it could be somebody in a school environment, maybe a caretaker, but even those folks need to be educated. I look at my colleagues who are 40-something, really smart people, who still look at me and go, oh, there's no way that my kids are being tracked as much as you say that they're being tracked online. Really? Wow. So I think that you do need the additional research.
We do need strengthened acts in place that protect children. We need better definitions, especially when we look at the emerging tech. But we also have to go back and look at our regulatory system. We have to look at the interpretation of duty of care, for example. What does that mean? I think what we have to do is commend the companies that are already out there looking to do the right thing, and separate them out from the folks that are just steamrolling everybody else. And so in my mind, I think back about six or seven years ago, having conversations with Sony and with Microsoft around how do you authenticate a child online? Like, how do we know that a child is signing up to their PlayStation and it's a child? What tells me that? Is it because they have a credit card? Well, you know, nothing stops my kid from taking my credit card out of my wallet. Many people out there might have the same experience. And so what is it that uniquely identifies a child versus an adult? These are paradigms that we need to really start thinking about, because we do have ways of telling, when you put on your Oculus headset, whether it's an adult or it's a child. And so, as a company, I think we need to start saying, hey, if you're manufacturing this device and you have a way of telling a child versus an adult and you have done the research, then slapping a 13-plus label on your device isn't the only answer here. And so why don't you step up and be part of the solution, rather than just creating additional barriers or creating additional issues, which is where I think the regulatory arm comes in. But I think society as a whole also needs to step up and start demanding, right, that companies start to deliver on safety and on usability in ways that we expect them to.
[00:48:07.337] Kent Bye: Yeah, this is a great example, I think, of just one issue of how many different contextual domains this is going into, whether it's the economic domain. So there's the issue with Facebook maybe wanting to be liberal about having kids using XR technologies, because they have a whole lost generation that they're trying to recoup, from the teenagers up to the young adults. And so they wanted to, at some point, have like an Instagram for kids. In a way, for them economically, maybe they are lax in enforcing this because they in some ways want these young kids using the technology. And then to what degree are they being tracked? And you know, there's all these other issues. If there is no age verification, there's no enforcement of that that's viable. You know, then the other side is the contextual domain in terms of that we're in the middle of a pandemic. And maybe, just as the parents were talking to me about having their kids on Discord, and it's monitored, there's different ways that there's a safe context with short-term use. It's not extended use, it's maybe short-term. There's parents that are monitoring what's happening, if there's no multiplayer experiences. So I see it from the debate coming from the people who are adults in these experiences, who then are dealing with kids who are not mature enough to be playing some of these different multiplayer games that involve going around, you know, killing other people. Or there's just different dynamics that are shifting from the experiential dimension of having kids run rampant within these social VR experiences in different ways. And there's ways in social VR of creating your own private instance or whatnot. And so I guess it's when you go into these public spaces, where maybe some games don't allow you to create a private context that is absent of all these kids that are in there that shouldn't be. So I feel like it's a larger issue, but you know, how to address it?
It feels like, I don't know what the solution is. Even if there was a law that was passed, it then has to be enforced. And then how do you start to do all that? So I guess that's part of what you're trying to do with coming up with those frameworks, I guess, to try to give some guidance to address some of these issues that are there and how to start to approach it.
[00:50:02.840] Kristina Podnar: We've certainly started to do that, and Kavya has been beating this drum for years. In fact, I was speaking with her last night and I said, oh, it feels like finally somebody is hearing the drumbeat and starting to recognize that music. So it feels like people are not just expecting somebody like XRSI to kind of come about and help facilitate these conversations, but they're actually clamoring for it. But I think that there's good news. I think amongst all the crazy, serious harm, the threat, the risk, the opportunities that we talk about every day, I do feel like everybody should be optimistic and rise to the challenge. This is the moment. And we're definitely starting to see, back to your point, Kent, that it's not just, you know, as a parent or as an individual, it's not one ingredient that's going to make the recipe. But we are starting to see other pieces come together to form the puzzle. If I look back to last year, 2021, we saw the Texas Supreme Court rule that Facebook is not shielded by Section 230 for things like sex trafficking recruitment that occurs on its platform. That was a huge shift, right? That was a huge shift, I think, in the arena. And we started to see even Zuckerberg come out and say, yes, you know, we should be subject to duty of care. But does duty of care need to be actually updated? Well, yes, it does. And so I think that, again, back to your point, it's not a simple solution. I think it's a multitude of things, but I think being passive is not okay. And I think that that's why XRSI was formed originally, by Kavya and other folks coming to the table to say, look, we need to actually do something. We need to be proactive, and standing still and doing nothing isn't okay, because it's going to be too late by the time that we all start to act.
And so, from my perspective, I've joined this really great group of people, and we're having these great conversations and doing it in a very tactical way on some days, right? Some days, as Kavya said, it's all about creating a taxonomy, making sure we're all having the same dialogue with the same terminology, making sure that we can roll up our sleeves and come to the table with really smart people who understand regulatory and legal ways of codifying the rules that we need to have in place, like Senator Markey, for example, right? Making sure that we get everybody lined up so that collectively we can make a difference, I think, is where we're at. We just need to get going in a little bit of a meaningful and perhaps time-sensitive way, because it feels like waiting too long isn't really going to yield the type of initiatives and the type of environment that we need to be successful.
[00:52:24.723] Kent Bye: Awesome. Any other thoughts, Kavya?
[00:52:26.705] Kavya Pearlman: Yeah, I was thinking just one last thing that I think is worth mentioning here, which is really directly speaking to the children. Understanding that these are the citizens, the future citizens, and in fact they are citizens that are living in the virtual worlds. We need to start addressing them as that, because when a four-year-old to, like, a seven-year-old can go ahead and bring down some of those servers, or can create an NFT that sells for over hundreds of thousands of dollars, they really are entrepreneurs in the making. And so potentially even giving guidance, you know, one of the infographics that we put out in 2020 was directly speaking to children. You know, hey, our mascot for the Child Safety Initiative is Captain Cuddle, and the cuttlefish is known to change colors and hide away and evade identity. So we try to find creative ways to speak directly to children, so that they can potentially ingest these guidelines to become better citizens. And that's another aspect, because sometimes when people talk about child safety, they often think, oh my God, heavy hammer, and we need to put the kids out of the picture, or mitigate them and stop them from experiencing these things. But I think there is a beautiful aspect of enabling digital experiences and shaping their future alongside them, but then it is our collective responsibility as adults to facilitate that. And I think that's another aspect that sometimes, when we talk about regulation and stuff, this opportunity gets lost. Let's not do what the big tech corporations sometimes do, thinking of them as quote-unquote users and being like, get them early. You want to get them early, but from a very positive framing of the narrative. Like, hey, let's shape their future collectively, because we can't deny this completely. We can't, like, put our heads in the sand or just look away. This is the reality of our current situation. Let's address it and let's enable these experiences.
Let's have them learn what it's like to sell an NFT or do digital commerce or create their own designs. Let's just be a bit more conscientious about it.
[00:54:47.895] Kent Bye: Yeah, well, I wanted to dig into an issue that really got me headfirst into a lot of these ethical issues, which is privacy within XR. That started for me way back in the spring of 2016, when some reports from UploadVR got on the radar of Al Franken, and there was discussion that went back and forth. That eventually led to the VR Privacy Summit, after I had done 20-plus interviews about privacy by that point. It was a gathering of 50 different people from across the industry, including yourself, Kavya. You were there at that VR Privacy Summit at Stanford, which was co-organized by Jeremy Bailenson of Stanford, Philip Rosedale of High Fidelity, Jessica Outlaw, and myself. There was a lot of ambition that we'd come together with 50 people from around the industry, write up a declaration of privacy, and somehow be able to leverage that to have this issue addressed in some fashion. What happened was that we realized this is such a huge issue that it's even difficult to wrap our minds around. And for me, the lack of a clear answer from that was a provocation that sent me on this journey of talking to different people, philosophers of privacy, looking at what is actually going to translate these privacy threats we face from XR technologies into law or legislation. There's a couple of touch points that I've come across along the way. Brittan Heller has started to define the concept of biometric psychography, to move away from thinking about privacy just in terms of personally identifiable information, which treats us as a static entity whose identity is revealed by information that leaks out.
That's a lot of how privacy is thought of, but I think we're moving into a world that's much more about biometric psychography, where they already know who we are. They're just adding lots of contextually relevant information about what's happening in the context and then profiling our essential character: our behaviors, our motivations, our intentions, our likes and dislikes, this whole set of emotional salience and physical actions, trying to create a mental model of us, a digital twin. And there's this movement of neuroscientists behind the NeuroRights initiative who are trying to lay down some fundamental human rights around our right to identity, our right to mental privacy, our right to agency, our right to be free from algorithmic bias, and the right to have access to this technology. The three for me, at least, are mental privacy, so they can't violate what we may be thinking; identity, so this multimodal synthesis of information can't create a digital twin that models who we are; and agency, so that model can't nudge our behaviors in ways that undermine us. So where is the line between what's okay for, say, surveillance capitalism to do with marketing as it exists today, with all the existing biometric psychographic profiling that we have, versus the point at which it becomes creepy and crosses an ethical threshold that legitimately undermines our rights to our agency, our mental privacy, and our identity? So I feel like the human rights angle seems to be a pretty strong aspect. But still, at the end of the day, those human rights have to get translated into, say, a federal privacy law in the United States with actual enforcement that can dictate what's going to happen.
And then on top of that, with everything I've looked at, there's always this loophole of consent, meaning that if users sign an adhesion contract, companies can do whatever they want. So even if you do pass all these laws, they can just say, well, let's get the user's consent, and then we can still do whatever we want. So I guess that's where I feel like I'm a bit at an impasse. I see some hope and optimism with the AI Act that's happening in the European Union, with discussions that they may be able to get in some of the language of the neurorights, or start to specify some of the different threats around biometric psychography and privacy. But that's still yet to be seen. And I don't see any movement after having a conversation with Joe Jerome over a year ago about the political landscape: everyone agrees we need a federal privacy law, but there are these hang-ups that get caught up in the private right of action, and whether or not the existing state laws are going to be preempted by a unified federal law, and making sure all the state protections are put into that federal law. And that seems to be a low political priority, not anywhere near being up for discussion. Even if it were, it would still probably get caught up in partisan polarization and not be decided upon. So I feel like the most hope I can see is maybe from the European Union, which may pass something like GDPR that changes the underlying technological architectures for the entire industry. But between the enforcement and all these other questions, the more I look at this issue of privacy, the more it feels like the number one issue for me, at least in terms of the risks and harms that could happen on the other side of it.
But at the same time, I'm not sure if I could point to a singular conceptual framework that would be able to encompass all these various contextual domains, on top of trying to convince and inform everybody about what's happening with the technology, and to have them pass a law that would actually implement something above and beyond the consent loopholes that we have. So I don't know where XRSI is, but that's sort of my recap of where this issue of privacy is. And I feel a little bit lost as to where things go from here.
[01:00:04.170] Kavya Pearlman: And you did a fantastic job of recapping those issues. And one can drill down into this list of podcasts that you've stacked up, that particular thread I referred to. In fact, my last interview is part of that thread. And yeah, exactly as you said, there is just this plethora of challenges around privacy. There are a few things, and I'm going to take it to a zoomed-out, high level first. One keyword is agency. That's a keyword that we should really internalize. Traditionally, people have even synonymized privacy with anonymity. Can I just not be identified? Can I maintain my privacy? And that's been the traditional demand with digital media and the internet. But as we move into these immersive domains, we're going to have to think about not just anonymity or identity, but agency. Am I really the human making the decision that I should be, or am I just being manipulated into clicking something? And then the concept of dark patterns emerges, where you think mentally that you made this decision, but you really had no choice. So then, and this is part of what started to happen, back in 2019, 2020, we rolled out this XRSI privacy framework, which is now, per the advice of our legal experts, a privacy and safety framework, and we're working on version 2.0. One of the specific points there, thanks to CyberXR's new president, Noble Ackerson, is the 3C plus R framework he gave us, which is part of the XRSI framework. 3C is context, control, and choice. If we are going to figure out what privacy is, and if under this umbrella of privacy we want to ensure all the rights that you talked about, the neurorights and so on, we really need to tackle what the context is. And again, it also goes back to the data. You were talking about psychography and all these other types of data; at XRSI, we refer to it in a way that inevitably is going to have to be turned into law at some point.
So what is the kind of data? We call it biometrically inferred data, because traditionally we've talked about personally identifiable information and that kind of thing. But now we need to think about what inferences can be made from your body movement, from your gaze, from your pose, all this type of data. So biometrically inferred data, and all the rest of the personally identifiable data, needs to be understood in terms of: what is the context in which you're taking this data or processing it? Do I have a choice? And what are the controls available? And you can't leave it at that. Noble Ackerson puts it so beautifully: you've got to have respect. So not just facilitating a control, but respect my agency, my autonomy, and respect me as a human being. And that really starts to come into the picture as we interface more with artificial intelligence agents. Traditionally we've seen interaction with NPCs, the non-player characters in video games, but more and more we're going to interface with agents that are not human. And as we do that, this context, control, choice, and then the whole aspect of respect comes into play. And I'm going to defer to Kristina to talk further about some of these policy aspects. That's the beauty of XRSI: we've got this multidisciplinary group, all the way from medical experts to even philosophers. And as you know, I'm myself very religious. So we take a very multidisciplinary approach to think about these things, all the way to respecting a human being. And I think that's what we're going to see as we develop this 2.0. One last thing I also want to mention is that we need to lean on some of the technologies that are now really taking front and center, like the decentralized autonomous organization, a DAO.
So in version 2.0 of the privacy and safety framework, we're really trying to lean on some of these aspects, like creating a DAO, allowing people to contribute to the safety and security controls and really make demands based on their own preferences, and then having some multidisciplinary experts weigh in on what those controls should be, so it can be disseminated with some expert guidance. So I'll leave it at that, but I know Kristina has got tons of perspective on this, and I'm really keen to hear that as well.
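[Editor's note: to make the 3C plus R idea discussed above concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical and not taken from any published XRSI framework code: each data-collection request is checked against context (no silent repurposing), choice (opt-in consent), and control (inspectable, revocable settings), with biometrically inferred data held to the strictest bar and "respect" expressed as a deny-by-default posture.]

```python
from dataclasses import dataclass

# Hypothetical sketch of the 3C + R idea (context, control, choice, respect)
# as a data-access gate. All names and rules are illustrative only.

@dataclass
class DataRequest:
    data_type: str         # e.g. "gaze", "pose", "heart_rate"
    context: str           # purpose the data is being collected for
    user_consented: bool   # choice: did the user opt in for this context?
    controls_exposed: bool # control: can the user inspect and revoke later?

# Biometrically inferred data deserves stricter handling than ordinary PII.
BIOMETRICALLY_INFERRED = {"gaze", "pose", "gait", "heart_rate"}

def allow_collection(req: DataRequest, declared_context: str) -> bool:
    """Apply the 3C checks; 'respect' here means deny by default."""
    checks = [
        req.context == declared_context,  # context: no silent repurposing
        req.user_consented,               # choice: explicit opt-in
    ]
    if req.data_type in BIOMETRICALLY_INFERRED:
        checks.append(req.controls_exposed)  # control: stricter bar
    return all(checks)

req = DataRequest("gaze", "foveated_rendering", True, True)
print(allow_collection(req, "foveated_rendering"))  # True: all checks pass
print(allow_collection(req, "ad_targeting"))        # False: context mismatch
```

The design choice to single out biometrically inferred data mirrors the distinction drawn in the conversation: inferences from gaze or pose reveal more than traditional PII, so implied consent is never enough for them in this sketch.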
[01:04:46.361] Kristina Podnar: Well, I think you've both done a really, really great job of summarizing the challenges and some of the work that we're actually undertaking and are going to continue to advance in order to come up with a solution that's right-sized to the industry and to society as a whole. From my perspective, I couldn't help but think to myself as you were speaking, Kavya, how technology is regularly changing and bringing a new set of privacy and security concerns, and that is going to continue, which is why we can't stop at 1.0; we have to go to 2.0, and then it'll have to be 3.0. The emergence of XR is no different from historical technologies. It requires a very forward-leaning, forward-looking approach to help inform not just society and companies, but also policymakers. And so one of the things that I do want to highlight, and it goes back to a very important part of today's conversation and one that we continually have, is the need for a greater diversity of people and perspectives when we're designing, deploying, and also making policy for XR tools and technologies. And back to the point that Kavya just made, which is that we need to have respect: Noble is awesome about saying that, and it's not just lip service; he fundamentally believes it, and that is how he's been involved in the industry. A lot of folks that work with him believe it, which gives me a lot of hope for the future. I'm going back continually to the fact that there are a lot of organizations that are already doing great work in XR. They are actually trying to do the right thing. They are trying to put trust and respect at the balance of risk and opportunity. I think what's interesting is we say that consensus is really hard to find on issues such as XR privacy and security because they're still in their infancy.
But the reality is that we're already starting to see emerging challenges, various solutions, and possible ways to go forward in terms of feasibility implications. So I think what we need to do is really help accelerate the process and encourage an evidence-based approach to identifying and addressing XR privacy and security. But what we can't do, I think, is just continue to ponder the solutions. We have to understand that it's a rich discussion, and policy is one aspect. We've been very fortunate to have several folks, as we've called out today, who see the need for new legislation, for new regulations. But so many other areas and so many other folks need to also step up to the table. It's great that we have Kavya, great that we have the Commissioner in Australia and Parliament looking at the solution. But where's everybody else? And back to your point, Kent, we're not going to have a single federal privacy framework in the US. I hate to say it, but here we are in 2022. And I'll bet you, I don't know, a bottle of something really good, or chocolate, whatever your weakness is. I'll bet you something today that if we get together a year from now, we're not going to be any closer to a federal solution from a privacy and security perspective. I've seen this with other laws and regulations. It's true on the accessibility front for sure. We've seen it on data breaches, right? We have 50 different flavors of what I like to call Baskin-Robbins ice cream. If there's a data breach in the United States, you have to go state by state and figure out the rules and the legislation. I'm starting to see the same thing emerge in terms of privacy, and it's fascinating. We talked about GDPR today. I don't know if we've mentioned CCPA a lot, but in every conversation I have on privacy, nobody ever mentions Nevada and SB 220, right? Which was in effect well before CCPA.
We have this tapestry of laws, and it's going to continue to be that way in the United States. What we really need is a solution from an industry perspective that's going to put pressure on regulators to come up with tough laws if we can't have a single law, because privacy and security are far too important for us to risk. I know it sounds really great from an EU perspective, and don't get me wrong, I think the Commission has been doing a great job. There's really wonderful, thought-provoking action, certainly one step ahead of the curve. But even the EU is behind when we start to look at, for example, China. I know everybody cringes, and I do too, don't worry. There are lots of human rights issues and violations on that front. But what I do think is fascinating is that the government has turned around and put in place some really great ideas that we can adopt elsewhere, things like data localization. What actually constitutes personal data? When is it okay for big tech to have that data, and when is it not? I'm not saying adopt Chinese tactics, or even government rules, because I think those are way off from a human rights perspective. But what I do think is that it's absolutely possible to get at a solution, to extract things that make sense and that are viable from a technology solution perspective. And I think that for companies operating in the Western Hemisphere, like Meta, like Google, like Apple, you are going to see them adopt some of those kinds of operational processes that we're seeing happen in Asia, in China. And there's no reason not to adopt them elsewhere in the world, which really comes back to trust and respect for the user.
And so I think that's what it really comes down to: regardless of what law or regulation we can put in place, it just makes good business sense to have transparency and to respect the user. And back to a point that Kavya made: right now they are just children, but those children are growing up fast, and they are going to have economic power. If we look at long-standing brands and how they've cultivated loyalty over generations, I think the same thing is going to apply in the XR space and certainly in the metaverse going forward. And I think smart brands are going to understand that sooner rather than later and look to make the right choices and take the right path.
[01:10:20.976] Kavya Pearlman: Yeah, I couldn't agree more. People do bash that whole China regulatory framework, but there are some really awesome pieces. When I first looked at this law in detail, I was like, wow, if you look at the technological parallels, they're actually doing a better job in some aspects, exactly what you pointed out, like data localization and such. I used to personally laugh at and really despise the Great Firewall of China from a cybersecurity perspective, but they were just way ahead of their time. And then we saw Israel say, hey, we will not connect our federal systems to the internet. And slowly, everybody woke up to the fact that cyber attacks are a reality, and it became a national defense strategy. So even though we may not have this federal privacy law or something like it, what I would love to see, and I think that's what we at XRSI are trying to advocate for, is to take the good lessons from all the states, the nations, and the various legislations globally, to root for and really amplify a global framework. I had recently spoken with the prominent Japanese newspaper Asahi Shimbun, and I was literally telling them that we just need to be global about the regulations in the metaverse, or in the immersive domain, specifically with respect to privacy. Because just like cybersecurity back in 2012, 2013, I was thinking, wow, how do you maintain geopolitical boundaries in cyberspace? When you go into social VR, you're talking with somebody from Nigeria, another person in Iran, another person in the UK, and so on. We need to recognize the reality of our situation: we can't really uphold these geopolitical boundaries in some contexts. And the context, again, you've done an amazing job capturing that with the IEEE artifact that you're building. Those kinds of contexts are really going to play into these global regulations.
So I think what our job at XRSI really becomes, and I'm really glad that now we have the MRC to do this, is to bring most of these people together. We made our first effort when we had the data classification roundtable during XR Safety Week, but this is exactly what needs to happen. Now we need to really bring people together to understand the reality of it: just having a localized law, or even a federal one, is not going to be sufficient, because it doesn't address the overall threat, which is more of a global threat. So I don't know how long it will take, but the one thing that I'm really rooting for is to understand these things from a global perspective, and not just the United States or Mexico or something like that. Back in the day we could deal with it that way, because it was rooted in property or ownership that depended on geolocation and such. We're way past that point. So I'm hoping for some global regulation in the future.
[01:13:35.895] Kent Bye: You know, as we start to wrap up here in the last 10 minutes or so, I'd love to do a quick recap of some of the highlights from XR Safety Week. We've already started to talk about some of the issues around privacy and child safety, but are there other big hot topics where you brought different folks together to have these group discussions? What were you able to achieve through XR Safety Week in terms of having the community come together to talk about these issues? And what do you see as the next steps for the big hot topics that need immediate addressing?
[01:14:07.610] Kavya Pearlman: Let's see. If I were to do a summarization, I'll divide it into something taken from cybersecurity; we call it PPT: people, process, and technology. People-wise, during XR Safety Week there was such a carefully crafted agenda, with some amazing speakers highlighted, and not just speakers, but people that are really passionate about this domain. So one thing I recommend is for anybody to go back to the XRSI YouTube channel and revisit some of these moments. I want to highlight a few folks that really contributed thought leadership, one of them being Doug Thompson. He's kind of a silent advisor for XRSI. Oftentimes we borrow his knowledge even to frame what the metaverse is. He has been investigating the metaverse for a long, long time. There were artists like Sutu, whose website is sutueatsflies.com. He's an amazing artist who uses NFTs to create awareness, and he donated one of his NFTs, called Online Identity Crisis. So that's another amazing creative person who can potentially guide us in the metaverse on how to communicate critical information using artwork, using NFTs or virtual goods. I can't speak highly enough of Julie Inman Grant, who is the eSafety Commissioner. We brought her perspective, and I hope you get to talk to her about this, because Australia is one of the first countries that actually took the CyberXR immersive technology standards and created a positioning statement for the entire country. It's really remarkable that they were trying to stay ahead of the game and truly understand these things. We saw so many other folks, including people from the medical domain. We heard from the head of Health Education England, HEE, in the British health service. His name is Neil.
I forget his last name, but I do know our liaison who works together with us, Richard Price. There are some other amazing researchers, Abison being one of them. We have our advisor, Sarah Ticho, who has recently been building this bridge between the UK and the United States and leading the UK's medical XR strategic initiative. So all those folks. And let me also shout out our executive leads on all these fronts, like Marco Magnano. One of the very amazing topics that he brought up is how we should cover the metaverse. Just because people want to throw the word metaverse around and amplify it using SEO, they throw this metaverse everywhere. No, you should have some moral responsibility to be thoughtful about it, because you're misleading folks. So that was a very beautiful discussion between Marco and Tony, SkarredGhost being his Twitter handle, as I know it. There were just amazing people who focused on journalism during the Child Safety focus day, and some senators were also highlighted. We heard from Senator Mark Warner, who is prominently known as the tech guy in the space, and he and his team have actually been very, very supportive of our mission, and we're going to continue to build that bridge and try to share knowledge back and forth with them as well. So a lot of these amazing folks, including yourself. As you know, you helped facilitate this very first virtual citizenship forum, treating virtual citizens like actual human beings and not just a bunch of avatars roaming around. We are humans behind those avatars. We have rights. We should be given rights, and we should be treated as such. And finally, even though it was not very highlighted, I do want to give some credit to Meta for at least sparing a chunk of change, which, especially for a billion-dollar company, is really nothing.
What I would have loved to see, and I would really highly encourage more organizations to focus on this, is to be really diligent about it. I talked about the people; some of these processes literally need to encompass awareness, things that you've been doing, constantly sharing knowledge with people and raising the right questions. Just a discussion can sometimes be so enlightening, like it has been right now for me. And then finally, if we talk about technology, I pointed out the use of NFTs to create awareness. There is a very disturbing notion I've sometimes heard from some of the XR folks: oh, do not talk to me about NFTs, I'm not going to talk to you. And it's like, hey, what happened to you? These are technologies from which you can actually extract the good use cases and really build upon the foundation. So don't bash the entire community because you don't understand the technology or you think it's not sustainable. There are sustainable ways to pursue these technologies. So those things were highlighted. Some other technologies were highlighted within the closed roundtable that we conducted. We had everyone from the Department of Veterans Affairs to 18 or 20-plus universities, researchers from all around, and UL.com. A lot of people came to this closed roundtable, including the EFF, talking about human rights. And so finally, we'll have a report that comes out of that data classification roundtable, and we've committed, and in fact Meta has committed, and I really hope they keep their word this time, to do a round two of the data classification roundtable. And I'm going to hold them to it for sure, because we absolutely need to dig deeper into the kinds of data that have an impact on human rights. So those are a few highlights, but Kristina and I were the co-chairs for the last day, the human rights and privacy day, which was the 10th of December. And that's when we announced the MRC.
And Kristina, anything that jumped out, or any recaps? I'm sure there's tons there.
[01:20:10.588] Kristina Podnar: Oh, well, you've done a really great job of summarizing the themes, Kavya. I think it's really hard to put into a few sentences what happened over a whole week's time. Lots of really great insights. The one thing that I would like to add to everything you just captured so eloquently is one of the themes that I heard in our discussions, in a panel that the EFF folks curated, which is that there's this real possibility that XR will widen the gap between the haves and the have-nots. Just looking at it practically, the cost of purchasing XR hardware will obviously exclude many people around the globe. And so I think we need to take off our Western-colored glasses sometimes and realize that XR has the potential to revolutionize learning and provide enriching educational experiences for everybody, including children, right? We've been talking about safety and protection today, but we can also use XR for good. If these opportunities aren't available to everyone, though, we risk creating even more elitist education systems. And so we need to continue to pay attention to that as well. But I'm really excited from an MRC perspective to have a great bunch of folks from XRSI and beyond who did come together to pull together the Metaverse Reality Check. And Kavya, you've been instrumental, obviously, in helping get the MRC up and running, but we have really great founding members, including Steve Peters, and like you said, Laurie Sagal, and Ryan Cameron, and Marco Magnano, and Oz Soltan, Noble Ackerson, Scott Butler, and also my personal favorite, not that I'm supposed to play favorites, Fox, or Dennis Bonilla. I'm really excited that these founding members have come together, helped pull the MRC together, and really gained some velocity as we head into 2022 to do some great things.
In fact, we're going to be releasing an advisory shortly on global oversight for the metaverse and the aspects that U.S. regulators can start taking into consideration right now. So I see the work that was started leading up to XR Safety Week, what was discussed that week, and the work that's about to come out of it as a continuum, and certainly a continuation of the great work that XRSI has been doing. Really looking forward, Kent, to having you at the table, and other voices as well, and to being able to demonstrate some really good work in 2022, not just talk about it.
[01:22:33.607] Kent Bye: So finally, what do you each think is the ultimate potential of virtual reality, augmented reality, and the future of the metaverse and all the safety precautions that need to happen and what that might be able to enable?
[01:22:46.214] Kavya Pearlman: I know this is one of those amazing questions that brings it to a very positive point, and I think it really is positive. There is a net positive here. The caution that I throw in there, and we have this podcast, Singularity Watch, about this, is that we should keep zooming in and zooming out to understand that we are progressing towards this point of singularity, and we should make decisions today that will inform us tomorrow. I think there is huge potential, specifically in the medical domain: being able to receive care without having to go to the hospital, that kind of thing. A lot of people think that virtual reality or extended reality is limited to gaming, but that's the thing: it's also tourism and medical contexts and education. The whole society could be transformed into having a harmonious relationship with digital media and technology. And that is the potential. We could literally lean on these very same technologies to create trust and safety and awareness, to have a trustworthy, immersive journalism platform, where you don't have to just read something passively, but can actually visit an island where whoever you meet are these very well-informed, trustworthy human beings just like yourself. And you could really meet them. You could maybe meet their digital twin. And that digital twin of, let's say, Kent Bye could inform you of all that you need to know, maybe even contextually, based on knowing your personality and what you're looking for. Kind of like an oracle, for example. So there are all these fascinating sci-fi metaphors that have previously stayed in science fiction movies or comics. I feel like those are inevitably going to come true. And it's a fantastic future that I would love to see happen. I would love to be able to order my pizza in a virtual world and have it delivered piping hot to my window via a drone. We can't do that today because we don't trust these technologies.
So I want to be able to trust these things so we can hopefully have some bots someday. I live by myself, I'm a single person, and a bot could wake me up and be like, good morning, Kavya, here's your coffee. I want to be able to trust that and not have it become a surveillance giant, a monster that just wants to harvest my biometrically inferred data or weaponize some of the things that it learns from my behavior. It should keep me private. Yesterday, I submitted a talk on virtual sex, cybersecurity, privacy, and intimacy in the metaverse. I look to these technologies to extend realities into other dimensions and bring all of our humanness into them: the need for intimacy, the need for connecting with human beings. So it goes back to some of the pillars that we laid out: connect, create, commerce, put some of the ethos and standards together, and address some of these social challenges. The future is really fantastic from a challenges perspective, because we love to solve challenges. But just in general, I think it's going to be a great decade as we see this evolution and the confluence of technologies come into play.
[01:26:00.672] Kristina Podnar: I'm personally very, very excited because I'm thinking about things like increased engagement, of course, but also overcoming distance. The majority of my family lives on a different continent, and I don't get to see them very often because of COVID. You know, I actually would love to be able to experience new places that I haven't been, and I also want that for others as well. And so when I think about how large the global population is, how diverse the global places are, I think what's most exciting to me is that we're all going to hopefully have the opportunity to explore both the people and places and the things and experiences. But my biggest wish, and I'm not sure if I'm daring enough to say that it's my prediction, but my wish is that everybody should be allotted that experience, whether you're part of a nomadic tribe in Africa or you're sitting comfy in your chair in suburban Chicago. It would be great if folks like that could meet in new places, discover each other's cultures and personalities, and just really start to have exchanges that are meaningful at that human level, which will then, of course, drive new innovation and new thinking for the benefit of all, but certainly for commerce and businesses globally. So I think there's a really great and bright future. I think right now we're experiencing the bumps in the road, and hopefully we can even out those bumps so that they're not as stark as they could otherwise be, and we can all end up in a much happier place as a result of cumulative thought.
[01:27:30.978] Kavya Pearlman: And one last thing I want to add: I really want to go from this heads-down culture to a heads-up culture. I remember recently I was in Europe, in Italy, and I was mugged. And right at that time, these Facebook Ray-Ban Stories glasses had come out, and I was thinking to myself, where are those damn smart glasses when you need them? Because, you know, I would have been more aware of my surroundings if I was looking up. And I probably would have been able to record some of those moments, too, when some of these cops tried to tell me that they couldn't register my case, and then outright refused, saying they weren't the ones. I was like, wow. In the era of constant reality capture, which may seem very alarming, there is a silver lining and a good thing that's happening. We saw it with George Floyd or Rodney King: videos captured. Sometimes these technologies are going to come to the rescue and save us in terms of human rights and justice. There are a lot of good things that these technologies enable. They enable safety and create this shift from a heads-down to a heads-up culture. Those are some of the things that I really look forward to.
[01:28:42.534] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?
[01:28:47.859] Kavya Pearlman: Get engaged. I think it's an exciting era, and we happen to be fortunate to be living in this timeline. So what I say to the global community is: there is this massive need for engagement and for multidisciplinary perspectives to come together. Go to XRSI.org or go to MetaverseRealityCheck.org and check out any of our programs, because there's just a multitude of these programs. Identify and find your niche. If you have even a little bit of time, or if you're even a student or a researcher, especially this year, we're going to be heavily invested in research. That's going to be our main focus. Research organizations should definitely reach out to us, because we're interested in pursuing that collection of knowledge, that collection of data, to understand how to mitigate the risks. So what I would really say is: please come talk to us. We want to learn from you, and we certainly want to educate you as well, based on everything that we learn.
[01:29:50.908] Kristina Podnar: I would echo that. Just from a Metaverse Reality Check perspective, the MRC has a number of ways to get involved, whether you wanna actually sit and roll up your sleeves around the table, you just wanna lend your voice, or you wanna become a patron of the board and participate and enable the board to do really good work. There's always a way to get involved. And so I would like to invite as many folks as are interested to sit around that table, whether or not your viewpoints align, and hopefully they don't, because I think that diversity is a good thing, and this is where we need diverse thought. So I hope that a lot of folks will turn around and, like Kavya said, really join the conversation, join the effort, and make a difference moving forward.
[01:30:29.758] Kent Bye: Yeah, well, Kristina and Kavya, thank you so much for joining me today as we reflect on not only what's been happening the last year, but the last couple of years since we last talked, and your whole journey up to this point. For me, I'm really happy that XRSI exists. I remember the launch of it at Augmented World Expo 2019, getting up on stage and talking about the ethical and moral dilemmas, and then being able to point people towards this new effort that had just started with XRSI, and seeing all the different progress you've been able to make over the last couple of years, and all these different discussions really around the world. Because I think if you only focus on the US and US policy as the only way of addressing these issues, you're not going to get as far as you would with an international discussion of the same issues that are happening around the world and the different approaches to them. And my hope, at least, is that these other countries and their policies are going to create enough momentum, and potentially even fragmentation, that catalyzes some more universal approaches towards some of these issues, especially the enforcement aspect, which you don't always get as US citizens. These policies may change tech architectures, but the actual enforcement and protections that citizens of those other countries get, you don't always get here in the United States. So the goal of where we want to get to in the long term, I think, is to have not only the architectures, but the enforcement and the protections as well. And yeah, you're on the frontiers of a lot of those discussions, and I just really appreciate the continued dialogue that you're facilitating within that community and the wider tech policy arena. So thanks again for coming on and helping to unpack it all here on the podcast.
[01:32:02.457] Kavya Pearlman: Thank you, Kent. The best way to start the new year is to start with Kent Bye. And I mean, I think the future is bright. This is a good sign, that we started with this dialogue. In fact, this is literally the first thing that we're doing as we begin the new year. So thank you so much. Thanks for holding that torch of creating awareness as well. And thank you for having us both here.
[01:32:25.615] Kristina Podnar: I'll second that, Kent, and also thanks for being one of the key voices in the industry. I think we need more of those, and I certainly appreciate you leading the conversation.
[01:32:33.365] Kent Bye: All right. Yeah. Thank you. And you're quite welcome, and I look forward to seeing where it all goes in 2022. So, that was Kavya Pearlman. She's the founder and CEO of the XR Safety Initiative. As well as Kristina Podnar. She's a digital policy consultant, XRSI's global digital policy advisor, and the chair of XRSI's new Metaverse Reality Check initiative that just launched on December 10th. So, I have a number of different takeaways about this interview. First of all, XRSI has certainly been busy across lots of different aspects of these discussions and open debates, creating different frameworks to start to define a taxonomy. They put together a Privacy 1.0 framework, which I had a chance to talk to them about at the time; that conversation is on a YouTube video that you can check out. My take at the time was that there are a lot of contextual domains around privacy, and that they would probably need to start digging into more specific contexts in order to really address some of these issues. Their next iteration of the privacy framework, from Noble Ackerson, is really emphasizing context, control, choice, and respect. They've also gone off and started a whole Medical XR Advisory Council, and a lot of the information that's coming from the medical XR context is going to be the future of the biometric and physiological data that will eventually make its way into consumer contexts, so there's the question of how to really start to address some of those issues. But they also have some other initiatives. They have the Child Safety Initiative. They have the Cyber XR Coalition, which is all about diversity and inclusion. They have their Metaverse Reality Check, which was just announced, which is going to be kind of an oversight board for looking at different aspects of the metaverse.
They had a whole XR Safety Week, which is a lot of consumer awareness to promote these discussions across all these different domains. They're consulting with different governments and different companies, and generally trying to be a part of the conversation on a lot of these different issues around safety. So I think the challenge sometimes with understanding XRSI is that they do have a lot of people from lots of different perspectives, like I said at the top. Kristina Podnar is a digital policy consultant, and so her day job, as far as I can tell, is working with companies to help them navigate policy landscapes, which is different than, say, if she were a full-time writer of policy focused specifically on privacy advocacy. The reason why I bring that up is that some of the things she mentioned were the American Law Institute, as well as the Uniform Commercial Code and Emerging Technologies Committee. When I look at that committee, it has a lot of people from the American Law Institute and from other groups like the Uniform Law Commission, and both of those entities are trying to create uniform laws that standardize things into one holistic approach. The challenge with that, as I was digging into it in the process of editing this podcast, is that there was a two-year effort from the Uniform Law Commission to put forth a very specific perspective on privacy that was very watered down, and in fact critiqued by lots of different privacy advocates as being very business-friendly and not enforcing the things that you would want to see in there.
So, you know, as I started to look at the different consumer privacy advocates that I trust in the industry and to see their critiques, you end up having a uniform law, but one that ends up watering down the consumer protections that we need, and that especially doesn't address a lot of these issues around XR privacy. The specific context in which Kristina was talking about the Uniform Commercial Code and Emerging Technologies Committee was handling different aspects of fraud and these digital objects. If you have cryptocurrency assets that are stolen, then how do you start to have consumer protections around those things? They're trying to come up with a legal framework that's going to be adopted across all these different states. With privacy specifically, you have the situation where there's a very strong consumer privacy law and approach within California, and then these other more watered-down approaches, where there's pressure to create a uniform federal privacy law. The question of whether or not that would preempt the state laws starts to matter, because for somebody whose job is consulting these different companies, they want to have a uniform approach so that it's clear: you know what the boundaries are, and you know how to move forward without having to navigate a tapestry of 50 different state laws. Globally, you're starting to have this kind of fragmentation, and it just makes it difficult to operate as a business.
There are real needs for that, but at the same time, a lot of these processes, as they develop these laws, are being informed by companies who very much want to maintain the status quo, with specific exemptions for what they do with behavioral tracking and selling data to third parties, all this stuff that was not covered within the Uniform Law Commission's specific approach around privacy. I bring all that up just as somebody who has done a deep dive into privacy; it helps orient me to a lot of these other stakeholders, the different players, and the incentives and motivations behind them. As Kristina is specifically working with the American Law Institute and different efforts in collaboration with the Uniform Law Commission, it gets a little confusing as to where XRSI itself stands on some of these consumer privacy issues, because the language and rhetoric I'm hearing is very much from a consumer privacy advocate perspective, but how that gets translated into specific policy positions is not always clear. Sometimes, when you have individuals who are advising but not completely under the auspices of the organization, you have messages that are, for me at least, difficult to pin down: is the priority innovation, or is there a real emphasis on trying to strengthen the consumer privacy protections? Now, all of this is said within the larger political context of the United States. This is being recorded on January 6th, which is the one-year anniversary of the insurrection. So there's an extreme amount of political polarization, which means that any type of deliberative process, and the nuances of these policy decisions, don't, in the short term at least, seem like they're going to actually be fleshed out. And so that's also a part of the context of some of these conversations.
It would be nice to have a uniform approach that both gives clarity to the companies, allowing them to innovate, and also protects consumers. And I think, for me, I am in this position of not knowing what the next best step is for how to live into a future where we're not sleepwalking into having these companies use this biometric and physiological data to undermine different aspects of our identity, our rights to mental privacy, and our rights to agency. So, all that said, I'll be talking to other consumer privacy advocacy groups and seeing what's happening in the European Union. I think there's likely going to be more movement on this specific issue of XR privacy coming from outside of the United States rather than from within it. But I just wanted to help establish a baseline. As for the other aspect of all these different efforts, I think there's a lot of worth in just discussing the landscape, the stakeholders, and the different frameworks and approaches, especially around child safety and following the manufacturers' guidance. And then we do need more research in order to have more informed policy and to start to build an evidence base for some of these guidelines, so we can say: here's the real harm that can be done.
And just in general, you know, we need to have lots of these conversations happening across the industry, because there's an opportunity here to start to say that we need these different types of guardrails. For me, I put a lot of weight into what's happening with neurorights, and I want to follow up with a lot of those folks who are putting forth governance frameworks for brain data, and start to talk about the potential risks as we move forward with neuroscience, as this confluence of technologies that are usually within the medical context gets into the consumer context, and how we may need to address that at some sort of policy level as well. Just another comment: as I listened to the different themes that were coming up around NFTs, cryptocurrencies, and the blockchain, there seemed to be an almost unconditional acceptance of them. I think Kavya was saying that if people are criticizing them, they don't understand the technology. I do think there are actually a lot of legitimate criticisms around proof-of-work and its ecological impacts. Proof-of-stake is more ecologically sustainable than something like proof-of-work, but the distributed decisions can still be influenced by people who have a lot of access to the coins or capital, whether with proof-of-work or proof-of-stake. These systems cannot fully decentralize the power that they aim to decentralize; it can still get centralized and be susceptible to Sybil attacks, with one entity taking over, controlling, and dominating it. I think something like Holochain may be a little bit more distributed. Also, there's the underlying culture within some of these cryptocurrency environments: when there's money to be made, you can get all sorts of layers of scams and multi-level marketing dynamics.
For me, at least, there's a caveat: I do actually have a lot of hope and optimism for decentralized tech, if it's in the right relationship with the world around us. All that is to say, there are specific nuances and legitimate criticisms that warrant skepticism about some aspects of cryptocurrencies, while at the same time looking at the potential for how this could decentralize power, ownership, and control, and get away from these monopoly powers. I do think it's going to be a big part of it, but we have to consider how to properly apply it within the right context. Kavya was suggesting using decentralized autonomous organizations to collectively declare what the rights around privacy and security should be, kind of like a way for people to come together and have some pushback. But at the same time, I don't know specifically how that DAO would be implemented, what the nuances and specifics are, or what the context of it is. And so for me, it's a little confusing to know how that specifically applies to some of these privacy conversations. There was also a whole long list of different policy movements happening around the world that they were mentioning, and I'll have the list that they passed along in the show notes, so you can dig into some of those discussions. Just in terms of US policy, what I've noticed is that a lot of things have been submitted over the past couple of years, but not as much is even coming up for debate, deliberation, or a vote. There's a bit of stagnation in trying to get things through Congress in this larger context of vast polarization, which means that there's stuff that may get suggested, like the CAMRA Act or whatnot, but not even brought up for a vote.
So it's kind of like this stagnation that can happen within these larger discussions. And I think that's also part of the context for how to navigate this, as we start to look at what's happening around the world, while also realizing that we're in this situation where the technology pacing gap is getting worse and worse. So what can we do about it? I see a lot of hope in what's happening on the international scene and in having these different discussions. And so maybe, if there's continued fragmentation around some of these topics from an international perspective, then it can start to push back. There are issues with those things not being enforced here, but at least they're starting to potentially change the underlying information architectures as they move forward. Because for me, I don't want to see us sleepwalk into this dystopic future where all this biometric and physiological data is being captured, profiling us, and leading us into a future that we don't actually want to live into. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.