Ethics in XR is a vast topic, and I had a chance to moderate a panel discussion for the Games for Change / XR for Change Talk and Play salon with four panelists: Tom Ffiske (Editor of VirtualPerceptions.com), Galit Ariel (TechnoFuturist), Kavya Pearlman (founder of the XR Safety Initiative), and Em Lazer-Walker (cloud advocate at Microsoft).
We talk about Ffiske’s six principles for data capture, XRSI’s Data Classification Framework Public Working Group, the power dynamics of the biggest players, business models beyond surveillance capitalism, safety risks, engineering harassment mitigation vs cultivating inclusive cultures, experiences of public vs private spaces, regulation, Microsoft’s approach towards ethics, foreign state actors spying on domestic citizens, ethics of XR for military contracts, research into ethics, addictive gamification, and cultivating moderation.
One of the reasons why XR ethics is so fascinating to me is that there are tradeoffs between different contexts, which provides an opportunity to spatialize & map conversations into a memory palace.
I used my XR Ethics Manifesto framework to visualize what we covered in VoVR #930 https://t.co/bTwinvKHip pic.twitter.com/90cPMl2zng— Kent Bye (Voices of VR) (@kentbye) July 29, 2020
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s the video of the original panel discussion from June 25, 2020.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So ethics within XR, I think, is one of those topics that you could always be talking about. There are always some issues being brought up that haven't been discussed before. And so because I've been covering this issue for the last four years or so, I was invited by XR for Change to moderate a panel with four other panelists, including Tom Ffiske, Galit Ariel, Kavya Pearlman, and Em Lazer-Walker. And doing panel discussions on ethics is kind of challenging because it's so vast. I mean, you could really go in so many different directions. And so I wanted to get a sense of the one topic that each of them really wanted to cover. And so what we did is that we had each of them cover their topic, had a little bit of a round table discussion, and then opened it up for questions and answers and more discussion about this topic. Ethics is something that I don't think will ever be fully resolved, trying to figure out what the issues are and what exactly to do about them. So I just think it's important to have lots of different perspectives and to continue to have this conversation. So that's what we're covering on today's episode of the Voices of VR podcast. So this panel discussion with Tom, Galit, Kavya, and Em happened on Thursday, June 25th, 2020. So with that, let's go ahead and dive right in.
[00:01:23.207] Archit Kaushik: Thank you so much for joining us. Welcome to the XR for Change Talk and Play on ethics in XR. We have an excellent panel lined up for you tonight. My name is Archit and I'm the XR for Change fellow at Games for Change. And with that, I'm going to pitch it over to the president of Games for Change, Susanna. Take it away.
[00:01:42.648] Susanna Pollack: Thanks, Archit. Thanks, everyone, for joining us this afternoon. We're really happy to be having our latest Talk and Play event. This is an event we would typically do in person, but as circumstances have dictated, we have been holding these online. And they've been amazing conversations, and we're reaching people from all over the world. So I'm really, really happy that you all chose to join us today as well. I just want to talk for a few minutes before I hand the session over to our moderator, Kent Bye. If any of you are not yet aware and haven't yet registered, we have our upcoming Games for Change Festival, which is happening online and for free for the first time ever on July 14th to the 16th. We have an incredible lineup for three days. In addition to our XR for change content, you know, we obviously focus on games and impact as well. And we are covering topics from health and wellness to XR and games that are used in learning and how this medium can be used to grow awareness on civic and social issues. So with that, I'm going to pass the mic over to Kent, who is a good friend of Games for Change. A little intro on Kent, if you're not aware. He's a producer of Voices of VR podcast and also a writer of XR Ethics Manifesto. He's been doing podcasts for, I guess, six years now, and Kent's conducted over 1,500 podcast interviews. Some of them were at Games for Change a couple years ago, which we were psyched about. He's a philosopher, oral historian, and experimental journalist. And I'm sure he's going to entertain us all and lead us through a really exciting conversation with an amazing group of people. So Kent, thank you for joining us. And thank everybody for participating today. I'll see you all at the end.
[00:03:29.223] Kent Bye: Awesome. Hi. Thank you so much, Susanna. So yeah, like Susanna said, my name is Kent Bye. I do the Voices of VR podcast. And as I've traveled around to nearly a hundred different gatherings over the last six years, I've been in conversation with the XR community. And it's from those conversations actually that issues around privacy and ethics started to organically emerge. And as I've done a bunch of different interviews about this topic, I helped to co-organize the VR Privacy Summit back in 2018. There's a couple of ways that I think about it. One is that with all new technology, it starts to blur the lines of our existing contexts. And it creates new ethical situations where we don't necessarily have the normative standards to be able to understand it or to navigate it yet. And so it's a process of trying to put language around it and to see the various trade-offs as you try to merge these different contexts together. Also, the issues around privacy and all the data that's radiated from our bodies, that's a whole other area as well. After talking to a lot of people about this, the whole field is vast and it's really impossible to do a comprehensive take in the conversation that we're going to have here. And so the best we can do is to have each person cover their own slice of what they're looking at. We're going to have each of our panelists introduce themselves. And then everybody who's in the audience will also likely have your own perspective on what you're looking at. And so I look forward to opening it up to discussion and questions after about an hour of us talking and trying to cover as much ground as we can. And so with that, I'm just going to hand it over to the different panelists to go ahead and introduce themselves and give a little bit more context as to how you're connected to this topic of XR ethics.
[00:05:15.063] Tom Ffiske: My name's Tom Ffiske. I am the editor of Virtual Perceptions, which is a VR and AR analyst website, which covers what's happening in the industry. Earlier this year, I published a book called The Immersive Reality Revolution, where I cover everything when it comes to immersive, and where I dedicated a chapter to the ethics of VR and AR, which in hindsight is way too short considering how big this topic is. I've been in this industry for about four years, since 2016, and I've just been following all these amazing people doing these amazing and innovative things. What particularly interests me when it comes to ethics is the use of our data. So, how people collect data and how the data is sold and exploited in lots of different ways, which touches on upcoming VR and AR items coming in the future. But that's my specialism. Thank you.
[00:06:11.914] Galit Ariel: Hi, so my name is Galit Ariel. My background is actually a very non-linear background that led me to technology. I started off as a designer of physical spaces and objects, moved into experiential design and strategy, and kind of landed again in human-computer interaction, focusing mainly on augmented reality, which for me is the best immersive technology. What fascinates me with augmented reality in particular, but immersive tech at large, is the fact that we are really about to step into a new realm where the pixel, the neuron, and the atom will become a new space. And, you know, with great power comes great responsibility. And as I was interviewing people for my master's research about AR, I was kind of alarmed at how many developers are admitting that this is one of the greatest technologies, but refusing to create processes and ethics and regulations that will also prevent misuse of it. So I published a book, Augmenting Alice: The Future of Reality, Identity, and Experience. I've been giving a lot of workshops and talks about it since, and I consult for AR gaming entities. And basically, I just want for us all to have a shared reality and the technology we all deserve.
[00:07:43.282] Kavya Pearlman: I'm Kavya Pearlman. I am the founder of XR Safety Initiative. XR Safety Initiative started in 2019 and the goal is very, very simple. It's to help build safe XR environments. Now, why did I start XR Safety Initiative? That is a whole long story, but essentially because I feel that I'm uniquely positioned to bring about this sort of very much needed change in the industry and potentially produce solutions by collaborating or coordinating with different entities. And what put me in this unique position is a couple of things that happened in my life. One being, I found myself as the chief security officer, or the cybersecurity officer, for Second Life, which, as we all know, is the oldest existing virtual world. And prior to dealing with, you know, security issues for Second Life, I also found myself doing third-party security for Facebook during the 2016 election time. And that sort of, you know, gave me a unique set of skills. And the biggest thing that it taught me is how technology, if we are not proactive about third parties or the use of data, like we saw in 2016, how data can be weaponized. And so now when I combine that experience with what I see right now, what we are doing with XR, there is a huge gap here that we need to fill in. And we talk about ethics when this gap sort of comes up. It's like there is a gap between, you know, how do we perceive ethics? How do we address these issues? So I think those few experiences make me, and then I, you know, got connected to so many incredible people in the industry because of my background being cybersecurity. I got connected to the co-founder of XR Safety Initiative, Ibrahim Baggili. He was like the first person who actually discovered novel cyber attacks in virtual reality. And he is actually the only person at the moment, along with his team at the University of New Haven, who can actually prove what happens forensically inside a virtual environment once things have been manipulated. So establishing the truth is something of a capability that we need to understand and have as things progress in XR. But yeah, there's so much more that I can go on and talk about, but I encourage people to learn about XR Safety Initiative because I feel like after one year of this work, we've sort of become this essential piece of the puzzle to navigate these uncharted territories.
[00:10:23.667] Em Lazer Walker: Everybody, my name is Em Lazer-Walker. I use she/her pronouns and I work as a cloud advocate at Microsoft. So my background is mostly in experimental games, largely using non-traditional physical interfaces and emerging technologies and everything that's not using a controller or keyboard and mouse. So that has often meant making my own hardware, but it also means using XR and VR and AR tech, including a bunch of time I spent doing research at the MIT Media Lab focused on using fiction to connect people with real-world spaces and how we can safely and ethically use spatial audio in public spaces to give people immersive experiences. And since COVID broke out, a lot of what I have been focusing on is online virtual worlds and social spaces. So one lens of ethics that I'm particularly interested in is looking at how we can create spaces that prevent abuse and harassment and things like that. I'm also here to sort of represent the Microsoft side of things. I think it's interesting to have a large tech company with some representation on this panel. Although as I'm sure we'll get into, Microsoft is a bit weird compared to talking about a Facebook or a Google, because we don't really have a monolithic VR, AR strategy. We have HoloLens, we have Windows Mixed Reality, we have a bunch of developer tools and cloud services. There is less like one way that Microsoft views ethics in VR.
[00:11:40.918] Kent Bye: Great. So I know we had a chance to do a bit of a pre-discussion where we mapped out a little bit of the topics that we're interested in. I think what might be a good approach is to kind of go through each person's topic areas. And I'll give a brief summary before we dive in, just so that everybody knows that as we go from each person, I'll hand it over to them to make an opening statement. But before we do that, one other thing I just wanted to mention to help set a broader context is that right now, there are open questions around what laws need to be set at a global scale. There's aspects of what each company has to do to set their privacy policies and where they set those boundaries. But for most of the people that are probably here, it's more of an education, so learning about what the risks are. As designers, how do you create the most ethically aligned embodiment of XR design, but also as consumers, how do we push back, whether it's with our voting dollars and supporting certain things or, you know, new economic models or whatnot. So I think that's also important just to say that there's going to be a lot of different vectors in which action could be taken as we think about this as a panel for XR for Change. So I wanted to just give a brief overview of what we talked about and then we'll dive in. So I know Tom was talking a lot about the use of data and research and lots of issues there about what that is. So we'll start there. Galit was talking about what the business effects are, who has power, who's in control, and so maybe we'll dive into more of those larger structural issues there, and I'll let her make that statement. And then I know that Kavya, you've been doing a lot of stuff with cybersecurity as well as with harassment and then trust and safety and security. And then we'll probably cover a lot of the cyber harassment stuff there. But, you know, I'd love to just hear a little bit more about this whole Microsoft angle of how this company fits in, especially with the relationship to government contracts. So that's sort of like an overview. So with that, I'm going to hand it over to Tom to kick off with the data aspects.
[00:13:34.126] Tom Ffiske: Thank you. So when it comes to the use of data, we're currently in a society where our data is farmed and then used in multiple different ways. The clear example of this is Facebook, with the retargeting of ads and with pixels which track all our online activity. My fear is the extension of that as we use virtual reality or augmented reality, because the complexity of the data becomes more intimate. Last year, they had preliminary findings on the use of brain-reading technology, which reads very simplistic instructions from the mind. The technology is very juvenile. It's not good enough quite yet to do much, but give it a few years and Facebook will start being able to read what you are doing. And the reason why I'm cautious is, I'm talking from a UK perspective, but I also know globally, we haven't really touched on the issues when it comes to collecting data with immersive technology, because there's so much going on with that. So with that in mind, I came up with six principles, which I feel encapsulate what we should do ethically in order to make sure we are safe. I'm not sure they'll ever be implemented, but I feel that in a utopian world, this would happen. First of all, there should be limited access. Regulators should control which organizations can access and use user data. For example, political campaigns would just have restricted access. I know ad companies are already displaying when a political ad is happening, but if I'm honest, I don't trust that at all. I'd rather just completely limit how much politics uses our data. I also think there should be transparent design. So the neuroethical design of brain interfaces must be open and understandable by regulators and agencies to fully understand what kind of data is collected. Because one of the key issues is a lot of people don't actually quite understand how the data is then used and then interpreted. It's a very oblique system. And in order for it to work better, it needs to be more transparent for regulators to take a look into. The third principle is understandable algorithms, touching back on my previous point where it just needs to be open and understandable for regulators and agencies to understand what's happening. And then the next three principles are based on the users. I believe the user should completely own their data, completely. It is not the companies who actually own it. It's the user's right to be able to own the data and sell it. I know Kent has also mentioned with his XR Ethics Manifesto that he has the same views on this topic. It also should be open to users as well. Any user should be able to access their own data to the broadest extent possible. And then finally, I believe there should be active opt-in. Every once in a while, a company may send you an email saying, we've updated your terms and conditions, you may opt out whenever you want. I believe there should be active opt-in saying, your account will be blocked unless you read and understand these principles, you must tick this. And the reason why I believe that should be the case is because a lot of users don't actually understand what they're accepting and how their data is being used. Those are my six utopian principles. I'm happy to hear what you all think about them.
[00:16:59.883] Kent Bye: Yeah, I'll share a brief thought and then open it up for other folks to jump in. You know, this issue of what data is recorded and captured, whether it's eye tracking data or galvanic skin response, there's going to be all sorts of data that is going to be revealing all sorts of information. Now, in certain contexts, like a medical context, that's great because you want to be able to rehabilitate yourself. And so sometimes it's contextual where it's OK, but other times, if it's Facebook having access to that, then it's obviously not as OK. And so Helen Nissenbaum has this privacy framework called contextual integrity that starts to map out how contextual privacy is. There's not a universal definition; sometimes it's okay and other times it's not. So that's my initial take. My other thought, as you say all those different principles, is that sometimes in ethics it's impossible to implement a perfect design because you're always trading off one thing over the other. So what I'm really interested in is the different trade-offs or the dialectics of these things, where you can have a little bit of this, but you're never going to have the perfect ideal situation because there's always going to be some compromise that you have to make. So those are my initial thoughts, but I'd love to hear what other folks have to say as well.
[00:18:11.496] Em Lazer Walker: I think there's a really interesting question about user freedom and choosing technologies. If you look at the web, not all of this came from the technology itself. We needed stuff like GDPR to make this happen, but there are a lot of things where you get freedom by there being choice in web browsers. If your web browser doesn't support the Do Not Track header, you can switch to one that does. If your ad blocker doesn't work in a web browser, you can switch to one that does. If you don't trust Google or Firefox, you can compile it from scratch. And that is less viable in XR, where you're not going to make your own VR headset from scratch. And I don't quite know how to solve all these problems in a context where we are stuck living in Facebook's or Valve's world.
[00:18:52.762] Galit Ariel: And for me, I really like what you said about it being transactional, because, of course, this is a system that exists because policymakers and companies and developers and users are all contributing to it. And this is where I love the fact that we talk about ethics, because ethics talks more about norms than regulation. And this is about how do we create a mindset where we prioritize and decide what these trade-offs are, and where do they happen, and whom do they happen with? Because I think we're only now waking up, or at least not us, but I think the wider audience that might have been a bit oblivious to what's happening is waking up. And from talking and doing workshops with users, where I do utopia/dystopia exercises, what scares me the most is that most of them, especially when I talk about social media, are like, well, what are you going to do? It's like Black Mirror. So we accepted the fact that this is the way it is, and it can't be changed. And this trade-off is the only way to achieve useful applications and prosperous societies. And this narrative is the first thing we have to peel out. You know, you have to trade your privacy for this. It is one mechanism that works on many levels, but it creates a lot of problems. So we have to start a conversation from the root of: what if we didn't have all that and we could build it all again? What would we do then? Because we can. This is the truth. We can.
[00:20:21.300] Kavya Pearlman: Right. And I really admire that question of, how do we solve it? And I want to add my piece here, because we have been looking at this very challenge. I've been looking at it for almost two years now. But as XRSI, as a collective, we've been looking at this sort of ethical issue, or this overall data, privacy, cybersecurity, all of these collective issues that touch, I mean, we can say the XR domain, but it actually touches a lot of domains. It touches healthcare, it touches education, it touches the travel industry, it touches almost every domain, because we know that this is going to be our new web. And now we have an opportunity to possibly get this right. Now, you know, this question was asked: how do we do this? And I have so much respect for all of the people on the panel and just outside the panel trying their level best to lend some sort of a narrative. Kent was probably the first one, and through Kent, I heard all these risks that have come about, that have surfaced through the XR Ethics Manifesto. Some people call it the ethical dilemma and others are saying, you know, hey, we need to have some more accountability, yada, yada. So after about a year, just about, I think it was February, we brought together about 12 different organizations, including some really key organizations that do cybersecurity work, people that are focused on diversity and inclusion, many people that are involved deeply in artificial intelligence research. And I can name all these organizations, but I encourage you to go to the website; we formed this sort of coalition called the Cyber XR Coalition. And so what we did is we brought these experts together, we looked at this very problem, the ethical dilemma, and all these other risks that, you know, we've surfaced. We understand some of these challenges. In fact, Galit, I think earlier today you had retweeted something about being a problem solver versus a problem seeker. And honestly, thus far, we have been problem seekers. And I think that is also what was needed for the industry. But now let's be problem solvers. So after seeking the problem, finding these ethical issues, we established that we need a framework. We need a framework between public and private entities, like how should they collaborate. We need a framework to teach our educators what kind of research they should be doing, academically. We need a framework for users. How should they be aware of what they are stepping into? Checking the box is not enough. So then, fast forward, what we arrived at is there are some things we can take as ethical principles, but trust me, Google started as "don't be evil." Nice ethics. That's the premise that everybody starts at. But then let's talk about how do we solve this. We need to now mandate these things. How do we do this? We zoomed out a bit. We took a look at, hey, what about journalism? You know, journalism has ethics. It has been around forever. But guess what? When it comes to taking an ethical stand about something political, whether to flag some tweet or not, you see a shrug. When it comes to whether your platform was used to undermine democracy, the very first thing we see is a shrug. So how do we avoid this shrug? We make it a trust and safety issue. And that's what we did. If you look at the recent standards that we rolled out, the Cyber XR standards, we tried to zoom out from just the ethics.
Yes, ethics and ethical principles remain a key component, but we need to talk about how we build trust proactively in these platforms. And when we start to talk about that, we encompass privacy into it. We incorporate ethical principles into it. We think about inclusion of all minorities and races and genders and whatnot. So that's kind of what we did. And then we took the critical pieces, the critical risks that have to be addressed, and put them under a trust and safety umbrella. So if you are a CEO, let's say, developing a BCI platform, or if you are an indie developer, you can use this comprehensive set of risks, like it's a list of risks, and look at it like, okay, these are the 10 things that I must care about. And now the next piece is to turn them into regulations and mandates and whatnot. So what we're trying to do now is build a bridge between us and Facebook, let's say, or between us and the ICO, which we're working with directly on some child safety issues, for example. So really just narrowing down different paths that we can take now, and advise other people to take, to solve these problems, because we've all sort of been in this problem-seeking mode. And I think XRSI is releasing standards, saying, you know what, this is one way, and this is our way to solve the problem. And now we're bringing all these entities to the table to share more knowledge, to exchange these things and tell us, are we saying the right thing? And if we are, then you must adopt it. So now we create accountability. And I think that's kind of, you know, I hope that that would be our major contribution: to have people become more accountable and implement these things and then just talk about it.
[00:26:10.588] Kent Bye: Well, I wanted to do one more quick round on the data issue before we move into the larger systemic issues of surveillance capitalism and power that Galit, I'm sure, will introduce to us. But in terms of the data, there's two quick points I want to make. One, Tom, that you said, the active opt-in, there's a challenge. The trade-off there is that every single time you go to a new experience, let's say in WebXR, you have to give consent for everything all over again. And you sort of get this opt-in fatigue, as we have from GDPR. And it's not just, can we show you the website, it's like, can we have the tracking of your eyes, of your head position? And so there's this question of how to deal with consent and informed consent and make it a good user experience. But I also just want to let anybody toss out other information in terms of the threats, in terms of the information specifically, like, hey, if you have access to eye tracking data, we can know your sexual preference, or things like gait detection. Once you have how somebody is walking, you can determine someone's bone length and be able to identify them. So there's this whole question of personally identifiable information versus non-personally identifiable information, and in the future, stuff that is currently seen as non-personally identifiable is eventually, through the assistance of AI, going to become personally identifiable. So I just wanted to open it up to see if anybody had any other quick things you wanted to say about data and the risks of data.
[00:27:37.928] Galit Ariel: So I think there's a big problem in what data is being captured, conscious and subconscious. Who's capturing it? Who might have access to it maliciously? And are we aware of the big scope of what it means for us individually and as a society? I'm not going to go into the data per se, but I'm going to talk about spatial computing and how that's linked to data, because Vladimir Putin said, he who controls AI will control the world. And I say, no, he who controls AR will control the world, because they will be able to control the narrative of what we see in the world and our perception of reality. And also we're moving to a point where data is not just what you tap in, but as you mentioned, it's me walking in a public space, me being in my home, just being; all of a sudden data will be captured all the time. There will be no opting out. And this is where we really have to make a real cut now and put some regulations and some actions in place. Because before we know it, 1984 has nothing on what we're talking about. We are already at 1984 with today's smart devices. When we talk about smart, data-capturing spaces, this is where we potentially lose all agency over how we perceive reality and how our actions are being tracked and used against us.
[00:29:10.195] Tom Ffiske: So I actually want to respond to that, because I agree with you that he who controls AR controls the world, as you say, because it controls what people see. My counterpoint to you is that in the UK, there's been a launch of a new company called Darabase, which is targeting that issue. So what they do is they look at geo AR and they're forming a permission-based layer on the virtual world. Basically, they want to map it out so that if someone wants to use the virtual layer of a particular location, they have to seek permission from the owner. So they have to go through this particular system, which is amazing. This is exactly what we need. A lot of people suspect Snapchat already has this permission-based layer, but they've not actually publicly announced it yet. But Darabase has been formed to help assist other companies to do the same thing, because of all the issues you've targeted, such as, what if someone bought the virtual layer over a McDonald's to run Burger King ads, that kind of stuff, for example. But it's being worked on, and I'm so happy it is.
[00:30:17.253] Em Lazer Walker: Do you know, are they, are they a for-profit company or are they some sort of government entity or who is controlling this layer?
[00:30:23.555] Tom Ffiske: So it's a for-profit company. It's owned by Dominic, and he's a lovely guy. I'm going to have to do an introduction after this meeting, but they are for-profit and they're definitely a company to look into. It's called Darabase.
[00:30:37.820] Kent Bye: Cool. Any other last call for any concerns around data?
[00:30:42.500] Em Lazer Walker: I think one interesting thing to point out is that in a lot of these cases, the solutions we're talking about are largely social solutions, which is correct. That is the way you need to approach these. But occasionally there are technical solutions as well. I'm thinking a lot about how Microsoft had some HoloLens research a year or two ago around how the HoloLens is capturing all of these AR point clouds and often sending them up to the cloud to do all sorts of analysis. It is not very difficult to go from a very detailed 3D mesh of a space to detailed information about the person in it. So they did a bunch of research into, well, how can we essentially anonymize this data? And they came up with some really clever ways of restructuring that data so that you can still do the same sort of analysis you need to, but in a way that you can't reconstruct someone's living room. And again, I think there are still questions about, this is research coming out of a for-profit company; what is compelling other companies to do the same sort of thing? But at least in some cases, it is not a given that if you want all of these rich machine learning technologies that we need to make XR wonderful, you have to give up privacy; they can be built with privacy at the core.
[00:31:46.808] Kavya Pearlman: Right. And I just want to follow that up with, again, looking at it from a solutions perspective. So first of all, this technology, like XR collectively, I'm thinking of it as this perception manipulation technology. And I would have to agree with Galit: yes, people who can control your perception, or will be able to manipulate perception, will control the world. This is a very obvious thing to happen. And now one thing that we have to admit before we look at the solutions, and this is something that I recall from the last time I was at a panel with Kent Bye, is we talked a lot about the era of constant reality capture. This is a reality. We are not going back, and we are not going to be able to segment where the privacy begins and ends at times. What we can do, however, is define and shift this responsibility onto the industries that are the stewards of our data, that are holding our data, because even though we can say we are the data owners, essentially it is their responsibility to secure it. Now, I heard a lot of talk, and not just me, but, you know, overall at XRSI, we heard a lot of talk about gaze data, pose data, all of this. So our very first order of business really was we formed a data classification framework working group. What we are going after is what does it look like at the data structure level. So in that working group we're essentially trying to create an immersive API so that you can potentially take a subsection of your platform data, apply this immersive API, and then be able to visualize your entire data lifecycle. If you can see how data is being created, transferred, stored, and then basically archived and hopefully someday destroyed, or has some kind of retention policy, and you pop that into that sort of immersive visualization for a CEO, then you can already see, oh, here is this transfer of data, but it's going to Facebook, and we don't even know what they do with biometric data. We just don't know it, right? They take it, but we don't know what happens. So here is this black hole. So if you can demonstrate that to, let's say, 100 CEOs that are actually creating these things, then we start to ask questions. Then we start to shift that accountability. And then, you know how in cybersecurity we have this principle that reasonable security controls must be implemented? We have to, at the very least, ask for those reasonable security controls around this black hole where this data is going. But until we can see the data, all this gaze, pose, gait, and all of this, like now we're talking about entire body tracking, we would not be able to have this conversation. We'll bitch and moan. We'll talk about gaze, pose. Oh, my data, my data. But once we have this sort of visualized thing, then, again using the same bridges that we are building with big tech firms, we talk to them and say, hey, tell us now. Otherwise, you know, there can be consequences; we talk to the regulators about what regulations are needed. So we're looking at it from a very solution-oriented perspective, and I encourage people who are concerned to join this working group. Because in phase one, we ran into a problem of, oh, what is XR? What is VR? So we just kind of standardized those terms. And now we're kicking off phase two under the leadership of Diane Hosfelt from Mozilla. So when we kick this off, this is the objective. And hopefully before the end of the year, we should have some solution where we can visualize all this data, visualize what is happening to our data, and then color code it and tell people, look, this is what happens when you use this platform.
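To make the data-lifecycle idea Kavya describes a bit more concrete, here is a minimal sketch in Python. All of the names are hypothetical illustrations, not XRSI's actual Data Classification Framework or immersive API: the idea is simply to tag each XR data stream with a sensitivity class, log its lifecycle stages, and flag streams that leave the platform with no retention policy.

```python
# Hypothetical sketch (illustrative names only, not XRSI's actual framework):
# tag XR data streams with a sensitivity class, log lifecycle stages, and
# flag the "black hole" case of data that leaves the platform with no
# stated retention policy.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple


class Sensitivity(Enum):
    PUBLIC = "public"
    PERSONAL = "personal"      # identifiable on its own (name, email)
    BIOMETRIC = "biometric"    # gaze, pose, gait: identifiable when combined


class Stage(Enum):
    CREATED = "created"
    TRANSFERRED = "transferred"
    STORED = "stored"
    ARCHIVED = "archived"
    DESTROYED = "destroyed"


@dataclass
class DataStream:
    name: str
    sensitivity: Sensitivity
    retention_days: Optional[int] = None   # None = no stated retention policy
    lifecycle: List[Tuple[Stage, str]] = field(default_factory=list)

    def record(self, stage: Stage, party: str) -> None:
        """Log which party handled the stream at a given lifecycle stage."""
        self.lifecycle.append((stage, party))

    def is_black_hole(self) -> bool:
        """True if the stream left the platform with no retention policy."""
        left_platform = any(stage is Stage.TRANSFERRED for stage, _ in self.lifecycle)
        return left_platform and self.retention_days is None


# Example: eye-tracking data created on-device, then sent to a third party.
gaze = DataStream("eye_tracking", Sensitivity.BIOMETRIC)
gaze.record(Stage.CREATED, "headset")
gaze.record(Stage.TRANSFERRED, "third-party analytics")
print(gaze.is_black_hole())   # True: the kind of gap a CEO should be shown
```

A visualization layer like the one described could then walk each stream's lifecycle log and color code the stages where a sensitive stream crosses an organizational boundary without an accounted-for retention or destruction step.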
[00:35:36.605] Kent Bye: Yeah. Sorry. I was just going to, uh, well, go ahead, Galit. What were you going to say?
[00:35:41.506] Galit Ariel: Make it quick though, because I'm going to move on to the next thing. So I love what you're doing, Kavya, but I just want to plant a seed, like a question. So, you know, last year I was giving a talk in front of the data economy community in the EU; GDPR had launched, and they were all saluting the fact that it's affecting the whole world and companies are changing their standards. And then COVID happened. And for those who are not aware, it was found out that GDPR is a regulation, but in times of crisis it is only a recommendation. And basically most countries applied contact tracing, and all of a sudden all this work disappeared in a second. So I think, to layer onto the work you're doing, there are also "what if" situations that we have to think about versus the day-to-day risks.
[00:36:34.420] Kavya Pearlman: Totally. Very valid point.
[00:36:37.122] Kent Bye: Yeah, it's a good point. The last thing that I would say on this topic is that for researchers in academia, a big line of research is looking at what data are available and seeing what you can tell from this type of data, extrapolating information about people. So that's an active area of research that will hopefully eventually feed into policy being made to either limit the amount of data that are being recorded or what you can do with that data. Galit, you wanted to bring up some of the larger power issues, who's in control, so maybe you could make your argument and we can discuss it for a bit.
[00:37:10.572] Galit Ariel: Yeah, I think the problem of XR, which is not a problem of the technology itself, is that it matured in an era where we have the big five or six or seven. And what is happening, in my mind, to regulation and ethics, or the lack of them, or the lack of applied ones, is also part of the fact that it's being actively developed and applied by for-profit companies that are also relying on social media business models and surveillance capitalism. So I'm personally quite concerned about it. And even when you talk to people that are trying to create alternatives, more ethical, more inclusive, even art pieces, you know, speculative art and experimental art, it is impossible to be sustainable without eventually being sold to these companies, either out of necessity or as a complete strategy. And I think this is a big problem because, you know, when we talk about tech today, tech is not just an industry. It's like the butter that is smeared on any piece of bread in any industry. It's part of politics and policy. You know, we have the heads of tech companies directly consulting for policymakers. It's part of our communication media, every system in our lives. So we are a technological civilization now. So it's not just an industry; it's something that is beyond that. And the power and the drives then drive not just the tech, but every other layer. Like Kavya said, this is not just a tech problem; this goes into tourism, into services, it goes into every layer. And this is the big problem. Now, this is not to say that we can move into, let's say, you know, I'm a digital hippie, I know where I want to move to. But of course, we still live in a capitalist society and a for-profit society. But I think the biggest problem is that we don't have enough reward systems for those who are doing it better. That's a big hurdle. Even here in Canada, which has fantastic incentives for digital innovation, you first have to be a company that has proven itself, that has made money. So you have to be big enough, aka commercial enough, to be able to use these funds. And also there are not enough penalties for those who are not adhering to, not just ethical norms, but legal ones, like violating laws. And I think these two edges need to be highly elevated, and I think also personal accountability. So this is between companies and regulators, but I also expect a lot more from the users. I think, again, we became so compliant, like, yeah, you know, I'll just go along. You know, there are companies that I, as a professional, refuse to work with, you know, because, sorry, they're evil. They are evil. And if they came to me with projects and said, help us change it, that's one thing; but "help us amplify this and let's hope for the best" is, for me, a big problem. And I really believe that change can come from the community of developers as well. We've seen it happening in companies like Facebook, in companies like Google, where people walked out, and people are thinking twice about whether or not they advertise on them, participate in them, and work for them. And I think we have to take individual accountability as users and as developers to not be part of the problem, to be, honestly, on the right side of history. I think zero tolerance is where we're at at the moment, or should be.
[00:40:49.630] Kent Bye: Yeah, at the VR Privacy Summit back in 2018, a big takeaway for me was that the issues of privacy and the surveillance capitalism business models for companies like Google or Facebook are in direct competition with each other. And so until there's a completely new business model for how advertising and this whole surveillance aspect is done, you're always going to have this tension between wanting to continue to grab a bunch of data about somebody and extrapolate information about them so you can sell more ads, versus, you know, the data sovereignty of your privacy, and being able to be in a place where you don't feel like everything that you do or say, or how you move, or what you're looking at, what you're paying attention to, is being put into this big surveillance machine. There's obviously Facebook, one of the biggest players. Microsoft, I'm really happy to see that they're starting with the enterprise and they don't necessarily have a whole surveillance capitalism business model. Apple also has privacy, but there's a trade-off: the HoloLens is thousands of dollars, and with Apple, you may end up having to pay for that privacy. So yeah, Em, since you're inside of Microsoft, I don't know what your take is on this sort of tension that we see within the larger XR industry.
[00:42:02.746] Em Lazer Walker: Yeah, I mean, I think your point about us selling to the enterprise is a good one. I think sort of an elephant in the room is there isn't really a profitable consumer VR business. Everyone is sort of shoving money onto the fire in the hopes that at some point it will be profitable. And that series of incentives makes it even more likely that even if you wouldn't already be leaning towards surveillance capitalism, you're going to lean into it as a way of trying to get some return on your investment. Even people who aren't platform holders, like if you're producing consumer VR content, maybe you have some arts funding, maybe you've raised some VC funding, but you're probably getting money from platform holders, not even traditional games publishers. And I think one way to view this, and the way that Microsoft has done it, and now Magic Leap is shifting to enterprise, is if you can sell something to companies that are paying a lot of money for that solution, this is something that is out of the hands of normal people and they can't really afford it, but it effectively sidesteps this issue entirely. And like Microsoft, if we're selling you something to help your business, we don't want that data. If we are building developer tools for people making XR experiences, we do not want the liability of having private data.
[00:43:12.122] Kent Bye: Yeah, any other thoughts, Tom or Kavya?
[00:43:15.204] Kavya Pearlman: Sure. I just want to say that, you know, we mostly focus on these big tech companies doing, you know, whatever ethical or unethical things, or their best effort that may not be quite ethical. But I think we need to take into account what Galit said. It's not just the bigger companies. There are smaller enterprises that are struggling. What are the incentives? What are the ways that they wouldn't just sell out? There is a particular organization that just received about $7 million in funding. Their whole business model is all about incentivizing data in XR. So how does a company like that, you know, decide that, now that I have this responsibility and all this money, I'm going to operate ethically? We basically have to rely on the CEO. Likewise, there was another company that was sold fairly recently for millions of dollars to Niantic. When I asked the CEO, how did you make the decision of handing over data to these guys, he said, well, I looked at everybody and they seemed very principled. Nice answer. But what does principled mean? What does their third-party security program look like when they do data sharing? Did we look at that? How do they actually intend to, or do they have it in writing, legally, how they will use the data that is coming in? We saw another interesting acquisition happen, which was Beat Saber. This happened in November, and all of a sudden, magically, Facebook's privacy policy gets updated in December. So I wonder what happened. I mean, there's just enough connection to make the speculation that maybe there is this data that arrived that somebody wanted to use, and now they've sort of updated it. However, during this update, what we still didn't see is what is happening to biometrics data. So, again, looking at it from a solutions perspective, again, this is, you know, the standards that I keep talking about. It was our first attempt to give people that baseline. We said, if you are building, if you're doing stuff in XR, make it based on human-centric design, which is based on accessibility, inclusion, and trust. So you include ethics there. You give people some kind of a baseline to follow: hey, you got your funding, you're a small business, you're a bigger business. If we all adhere to this, including Facebook, including everybody, if we just draw this line in the sand and say, hey, we need to think about human-centric design, and we need to think about trust in general, are we building trust by exchanging this data or not, then I think we can solve these issues better. We need some sort of a baseline, and then we improve upon it.
[00:46:00.302] Galit Ariel: I have a quick solution that maybe, Kavya, you'll agree. How about we take all the tax money that all the big tech companies are not paying, make them pay it, and then make that into grants for ethical and small businesses? Ha. Okay. No, it's not possible, but a girl can dream.
[00:46:17.496] Kavya Pearlman: I think you're right though. I mean, this is what I thought when we started XRSI is like, it's not my responsibility to raise money for XRSI. We have now become this essential component. So that's my next step. I'm just going to start putting these research components out there that must be done. And hey, Facebook, Microsoft, Google, people who have billions of dollars, people who actually pay $59 million in a lawsuit just so somebody could shut up, they need to pay. They need to build these things. It's in their better interest. It's in the better interest of humanity. So I am with you. In fact, I'm going to call you and ask you for advice on this as I build this. We will. We'll make those dreams come true.
[00:47:02.971] Tom Ffiske: I'm so happy you mentioned tax as well. There was a historian who visited the World Economic Forum, where all these billionaires come in their private jets, and he mentioned that the real solution for helping the world is to make sure people pay tax properly. And he said that not mentioning it is a bit like going to a firefighters' conference and not mentioning water, for example. But yeah, going back to Kavya's other points when it comes to thinking about other companies, I totally agree. You acquire companies, you shift the policies to make sure you get to do these other things. And it's very scary. And it happens very regularly. And I just want to cap off by mentioning one company, which I don't think anyone's mentioned yet, but which I suspect is going to become a hot topic. And that's TikTok. At the moment, the companies which are getting the most interest and scrutiny are Facebook-owned companies, as well as Apple, as well as other big tech companies. But TikTok is very interesting because fewer people are looking into it that much, yet they have such a grip on a particular age demographic all around the world. And the way they collect data is actually quite scary, because it's one of those platforms where a lot of fake information spreads very quickly with almost no regulation, which is why it's impacting a lot of people, particularly in the UK. And the reason why I mention it in the context of ethics and XR is because they're also looking into augmented reality tech. I have to wonder what TikTok will be doing next when it comes to augmented reality, which is why I'm keeping a very close eye on that company right now.
[00:48:41.056] Kent Bye: Yeah, that's basically like a massive spy machine across the globe. And I'd say in terms of the United States-based companies, a lot of the big companies, I'm skeptical that they're just going to do the right thing without either pressure from consumers, pressure from the larger culture, or pressure from regulators, frankly, in terms of bringing the whole power of the government into actually forcing certain actions. So I think as we think about this, it's like, how do you come up with the policies and the legal frameworks to actually push back and start to mandate more action on this? Something similar to the GDPR, but something like privacy as a human right, and how do you conceive of that? And how do you enforce that at a governmental level? You know, there's a number of different philosophers that have looked at different approaches: Adam Moore, looking at privacy in terms of something that you own, that you could license out like copyright; contextual integrity, which is more about the context; or Dr. Anita Allen, who talks about how you need to have a paternalistic approach, where people are not responsible enough to take care of their own privacy, so we need to have the government take care of it for us. So all of these are not perfect, and so how do we force action at the collective level? But I think that's probably a good place; we could, again, go on for hours on any one of these topics, but I wanted to kind of wrap up the rest of this round and then open it up for questions. So Kavya, why don't you go ahead and talk a bit about trust and security and harassment, or whichever ones you want to focus on at this point.
[00:50:06.822] Kavya Pearlman: Yeah, sure. And I think I've shared multiple stories around it. But before I go there, I want to follow up on some of the points that were made earlier, like, how are we going to get people to be more ethical? And I think there is this threefold approach that at XRSI we are planning to take. We're going to start with awareness for all stakeholders, like users. To do that, we're going to do a campaign of awareness of the risks of XR, going out to 29-plus countries and 4,000-plus organizations. And for that, we partner with somebody who already knows how to do that. They already have connections. So it's like, great. So awareness, awareness using Games for Change type platforms, awareness by, you know, partnering up with girls who dream about, you know, this is a better model to adopt, and those kinds of things. And then incentivize. Now we draw a line in the sand: okay, whoever will adopt and do this, we would personally hail them or feature them on our awareness platform, which we are going to be rolling out next month. It's called Ready Hacker One. So then once we incentivize these, once we draw this line, then we can hope that regulators, who, you know, again, need awareness as well, can potentially understand, and then have them build something more mandatory. Then we go into enforcement, like a slap on the wrist, and hopefully we can then hope for, you know, 4%-of-revenue, GDPR-like laws to come out. The only sad part is that we are not able to get much traction in the United States, whereas other governments actually respond better. So that's the one piece that I'm still thinking about connecting. Besides that, there is the aspect of harassment, bullying; all of this is coming to XR inevitably, and it already is there. We see that in our gaming industry, that if you are a girl avatar, you feel sort of not comfortable in many cases, and still there is not enough awareness around, how should you be treating a girl avatar? Or why should you be speaking to a girl avatar in this way or that way? How do we solve that, really, is yet another collaboration, where we are partnering up with something called GamerSafer, another organization that's thinking about these things very uniquely, using computer vision and artificial intelligence to create some kind of a digital ID where kids and, you know, players or the XR folks are more incentivized to behave better, and then, you know, define those principles, what better means, and then go about solving it. So we're just taking this piece by piece by rolling out multiple programs without even spinning up too much apparatus. There is so much work going on; it's about connecting these dots, and then thinking about it strategically, like how these things happen. From my unique experience when I was at Second Life, or the virtual platform Sansar, I personally experienced harassment. It impacted me tremendously. So I know that these things have a greater impact because it's a very compelling reality that we experience when we experience XR. So yeah, those are the things. I'm very solution driven this year. I'm moving past problem seeking, and now it's problem solving.
[00:53:37.017] Kent Bye: Yeah, well, I welcome comments from everybody on all the stuff that Kavya said, but one thing that I'd say in terms of harassment is that harassment is a challenging thing. For one, there's technology and then there's human behavior. And if people are determined to be horrible human beings, I think there's only so much that the technology can do to prevent that once you get people communicating with each other. So I think we've seen that across Twitter. We've seen that in VR. And so some of it is a human issue and a training and a cultivating of a culture to cultivate the behaviors that you want. But there are also things that you can do on the technological side with personal space bubbles, allowing yourself to block people or ban people, to mute people. All of these are the basics in terms of what you need. But I'd say there's also a very interesting trade-off, because if it were up to me, I wouldn't do any sort of recording or tracking, but the reality is that in Oculus Venues, for example, if you report harassment, then it's been recording whatever you're doing, so you have to consent to being recorded. Or a lot of these social VR platforms, in order to scale up to the level that they do, actually have to assign individuals a social score. It's an invisible social score that's never revealed. You don't know what your score is, but there's an invisible trust factor that you have that's essentially like a trust score. So there are certain things that are actually happening behind the scenes in order to create a safe place. But I think that's what makes it such a challenging ethical issue: you have these weird trade-offs of social scores and being recorded. So I think that's the challenge with this; in order to create safe environments, there's no perfect solution that satisfies everybody's desires. Anyway, I just want to start there and open it up to see what other folks have to say.
[00:55:21.679] Em Lazer Walker: I don't know, I'm not sure I agree that we need these sorts of recording technologies, in that, at the very least, they are not currently working. I have not spent time in a completely public VR social space without being harassed. The times I have felt safe have been at smaller events, where the people organizing the event can treat it like an in-person event and use the same sort of social techniques you would use at a physical meetup to make sure that people are safe. And the model of using video recording and things like that comes from places like Twitter and Facebook, where it's much easier to scale up human moderation because everything is text-based and you can much more easily apply things like machine learning, even if you're just looking at plain text. That's faster for human moderators. And even they are completely struggling under the load of the abuse and harassment on their platforms. And I don't know, even if we were okay with saying everyone is always being recorded in every space, we're going to capture every piece of data we can and forget all of the other ethical concerns we talked about 20 minutes ago, I don't know if that solves the problem.
[00:56:22.409] Galit Ariel: Yeah. And to add to that, I think, yes, we need some safeguards and some spaces where we can block the trolls. But especially if we think about AR, we keep talking about AR as adding data and adding features. AR is also a tool that can take things out of the public space. So when it just started, you know, my first thought was like, okay, great, so fundamentalists who don't want to see women, now it won't be on the women, it will be on them. And I'm like, yeah, but what happens if, for example, a Trump supporter doesn't want to see people of color, and they disappear from his reality? So we're in this duality. And that's another ethical thing: blocking someone solves the symptom but doesn't solve the problem. I think technology is great at creating these roadblocks and this protection gear to block the symptoms, but we have to deal with the hard problem of this behavior and this mindset to begin with. And some of this behavior and mindset is allowed, or sometimes encouraged, by the platform, and some of it is something we have to deal with outside the technology world and really do a deep root canal to take it out.
[00:57:38.880] Kavya Pearlman: You're right, you're right. And since we're talking about ethics, I want to add one more ethical concern that is very rarely talked about, but it must be talked about. We're talking about in-game harassment, but there are a bunch of videos on YouTube where people are in VR and the people around them are groping or touching or doing all sorts of things while those people are experiencing extended reality. So the point I also want to make today is that we need to build a spectator culture, including what Galit just said: this is a cultural upbringing that we need to do for ourselves. And of course, today is a great way to do so, and then, you know, 50 other people listen to us and then they talk to 50 other people. That's great. But we need to do more and better cultural awareness. We need to tell the researchers, or the universities: sometimes they're doing research and have no idea whether a professor could be a harasser, or what kind of guidelines they should have. So first we draw the line in the sand: hey, when somebody is in VR or XR, please do not do this or that, do not harass, do not record. All these things have to be instilled culturally. We need to grow up, and this is an opportunity. This profound new technology is very scary, but if we use it properly, it's amazing. And we need to know what it is before, you know, we start recording people or touching people or grabbing people.
[00:59:21.487] Em Lazer Walker: Yeah, that's a really good point. And I think that is deeply intertwined with the relationship between games and VR, because gaming culture is so fundamentally toxic. You look at the harassment and abuse that happens, not just in games, but outside games. I don't know if there is a way to save quote-unquote gamer culture. And so there's a real question: VR is a technology that has the potential to reach a much wider audience, but right now it is very much focused, a lot of the time, on gamers who are willing to spend money on expensive PCs as an initial market. To what extent are we letting that toxic culture define what the overall culture of VR is? And that's a real problem.
[01:00:01.942] Kent Bye: Yeah, and I just wanted to jump in and expand on your point in terms of the public-private aspect, because you look at something like VRChat: they have public spaces where it's kind of a free-for-all, or you can create an instance that's just your friends and is more invite-only, or you can start to have friends of friends come in. So I see this dialectic between those private spaces, where you have control over who's in them, and the public square. Part of my concern is, what does public space look like in the future? If everything has to be totally private in order to be safe, then how do you actually get away from the filter bubble aspect of just radicalizing everybody, where you never encounter anybody online who has a different perspective from you? There are these larger dynamics too: we already have filter bubbles that have been generated by social media, and as we have gatherings within virtual reality, how do we cultivate and create those spaces? Maybe there are sacrifices we do have to make when we are in these public spaces in order to make them safe, but we have the option to be in the private spaces as well. So that's some of the stuff that comes up when I hear some of those things; that's already happening in VR as well. Cool. Well, Tom, did you have anything else you wanted to add before we move on to the final topic?
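As a rough illustration of the instance models Kent mentions (public, invite-only, friends, and friends-of-friends), here is a small Python sketch of what such an access check could look like. The names (InstanceType, can_join) and the friendship-graph representation are assumptions made for illustration, not VRChat's actual logic.

```python
from enum import Enum

class InstanceType(Enum):
    PUBLIC = "public"
    FRIENDS = "friends"           # only the instance owner's friends
    FRIENDS_PLUS = "friends+"     # one degree of separation: friends of anyone present
    INVITE_ONLY = "invite"

def can_join(instance_type: InstanceType, owner: str, present: set, invited: set,
             friends: dict, user: str) -> bool:
    """Decide whether `user` may enter, given who owns the room, who is already
    present, who was invited, and a friendship graph (name -> set of friends)."""
    if instance_type is InstanceType.PUBLIC:
        return True
    if instance_type is InstanceType.INVITE_ONLY:
        return user in invited
    if instance_type is InstanceType.FRIENDS:
        return user in friends.get(owner, set())
    # FRIENDS_PLUS: the user must be a friend of the owner or of someone already in the room
    return any(user in friends.get(member, set()) for member in present | {owner})
```

The "one degree of separation" option is just a graph check: you can join if you are a friend of anyone already in the room, which is what makes it a middle ground between a fully private and a fully public space.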
[01:01:18.679] Tom Ffiske: One thing I wanted to say, as I saw in the chat: Juliana asks, what would you like allies to do to support better behavior in these public spaces, which Kavya has been talking about? It's very tricky. I was talking to lots of people from the Educators in VR Summit, because there's a very tight group of people who work in AltspaceVR, and they explore how to treat others in these immersive spaces. And a lot of it comes down to two things, I found. One is to call people out in public spaces, which is very similar to what happens in real life; you just need to call out bad behavior in a constructive way. The second thing I've seen, and I'm just going to touch on the exile manifesto again, is that the avatars you use really define how people react to you, for better or worse. It's going to be a big, big topic to explore how you portray yourself in these spaces, because some women have found that more people listen to them when they are male-bodied, which is awful and should never happen, but it has been documented. And I guess one thing we need to explore is creating a framework within these immersive spaces, because a lot of these real-life biases do spill over into these very intimate spaces. How do we, as a community, build together to solve these issues? And I feel the solution comes down to how active a community is in making sure it improves.
[01:02:50.390] Kent Bye: Yeah, the code of conduct. I think every VR application has a code of conduct, and it's like a design challenge: how do you ramp up all of the members using the app on what the rules are, and how are those rules moderated and enforced? I know there are different approaches that VRChat uses, and Rec Room, and AltspaceVR as well. So let's move on to the final topic, and then we'll open it up to Q&A. So Em, you're at Microsoft, a big company that has lots of government contracts, and so there's working with the military and the ethics around using XR technology for military training. The military has been involved since the very beginning of XR, with flight simulators and the Sword of Damocles being funded by DARPA, and Tom Furness and the Air Force. So the whole history of XR and VR and AR is tightly bound to the military. But I'd like to just hear you say whatever you want about your perspective of being inside of Microsoft and some of the ethical issues that you see with a company that's as big as Microsoft.
[01:03:47.878] Em Lazer Walker: Yeah, I think the military thing is tricky. What I can say about that is there are a limited number of teams at HoloLens working with the military, and I am totally divorced from them. But I know that we have a larger ethics group that works with them and brings in outside subject matter experts to try to figure out how and whether we should be engaging with them. On a personal level, I would note it is not just that the history of XR is intertwined with the military; in many ways, the history of computing is. We would not be communicating here over the internet if not for DARPA. I grapple a lot with the fact that, even though I personally would not want the things that I build to be used for military purposes, military funding has directly led to a lot of the things that I use in my day-to-day life, and that is something that is really tricky to grapple with, in a way that is more abstract than, say, should a tech product have an ICE contract or not. So yeah, that is one thing. I think to the larger ethical point, though, and I touched on this a little bit, for the most part what Microsoft is doing in XR is selling to companies rather than selling to consumers, which completely shapes the model. Altspace is one exception; Altspace shares a lot of the same concerns as VRChat, all these harassment and privacy issues we were just talking about. But for the most part, when my XR thing is a tool that developers are going to use and customers are never going to see, that really changes the calculus. The data we're storing, we don't want to own it; it is a liability. We don't really have to think about a lot of the same privacy or ethics issues that other people do, which is a very privileged position to be in.
[01:05:23.739] Kent Bye: Well, I'll ask one more question and also invite other people, if they want, to either ask you a question about what they want to know about Microsoft or add other comments about what we've talked about. But since Microsoft is such a big company, and with XR being so new, how do you navigate these ethical dilemmas? How has Microsoft internally started to have discussions around ethics, and how are ethics embedded into the designs themselves? What is the relationship between the ethical frameworks and the actual implementation of the design? How does that conversation take place?
[01:05:58.609] Em Lazer Walker: Yeah, I wish there were a more unified answer. One answer is that Microsoft is a very, very large company, and so it is often difficult for different parts of the company to talk to each other. There is this sort of formal ethics team that I mentioned, but for the most part, I would love it if the people on the HoloLens team and the Windows Mixed Reality team were directly having conversations about the same things they are facing on a day-to-day basis. It is possible that I am just in a completely other arm of the company and those conversations are happening, but the perception I have is that these discussions of ethics are happening within individual product teams. And the sense I have gotten is that they definitely do exist. I don't know if I'm allowed to talk about specifics, but I can think of a lot of specific products around machine learning where I've been in the room having discussions like: we built a prototype of this thing, it probably shouldn't exist, maybe we shouldn't actually sell this as a customer-facing feature. As someone who is relatively new to working at large megacorps, it has given me great hope to see that these conversations are happening, even if it is at a micro level rather than through some unified framework across the entire company.
[01:07:10.911] Kent Bye: Anyone else have any comments or questions?
[01:07:13.213] Galit Ariel: I have a question for Em that you probably can't answer. Do you have an example? Do you know about a concrete case where a product that would have made money, that was a good product but ethically ambiguous, was not released by Microsoft?
[01:07:35.109] Em Lazer Walker: I do not have a good answer. Again, part of that is being so relatively new to the company. I'm thinking of a very specific example where it is not a product, but a product feature that did ship. And I don't think we have pulled that feature yet, but the fight is ongoing. And I think we are very close to having that no longer be a thing that you can actually use or pay for, even though it shouldn't have shipped to begin with.
[01:07:57.473] Galit Ariel: Because I'm a Mac person and I really want to love Microsoft and change my mind, because I really think that Microsoft, at least publicly or from a consumer's point of view, has really voiced and applied a lot of things that I really respect and didn't expect to. So make me love you.
[01:08:14.142] Kent Bye: Well, I'm going to say something in favor of Microsoft and against Apple, which is that in the past, Microsoft used to take on open standards and try to own them and kill them, and Internet Explorer is probably one of the greatest examples of that. But eventually that shifted. Because Microsoft missed the boat on the mobile revolution, with Apple's iPhone and Google's Android, Microsoft has been forced to take a really pluralistic, open source approach; they own GitHub, and they're probably the most open source company out there now. And the one trying to own and kill open standards now is Apple, which does not implement web standards. They try to own everything. They're actually quite bad when it comes to promoting open standards, and they force everybody to go through their App Store. So there's been a huge shift that I've watched in my tech career, where Microsoft was the bad guy and now they're the good guy, and Apple, arguably, is a good guy on privacy, but when it comes to closed walled gardens versus promoting open ecosystems, Apple is one of the worst.
[01:09:23.333] Em Lazer Walker: Yeah, so Microsoft is now the single largest contributor to open source on GitHub, more than any other large tech company. And having followed the public conversations with, say, the TypeScript team, the old mantra of embrace, extend, extinguish, it seems like the extinguish part is no more. Even just on my spatial computing team specifically, we have multiple members in the W3C WebVR group. And sort of everything right now with the new Microsoft seems to be about: how can we embrace the community? How can we push standards? Our on-the-wall company motto is about empowering everyone to do more, and that is silly, that is corporate speak, but it also comes back to the fact that our goal is not to own what you are doing. Our goal is to help you in the ways that we can, and hopefully that will benefit us.
[01:10:11.063] Kent Bye: Cool. Any other last thoughts, comments, or questions about Microsoft and big tech companies?
[01:10:17.599] Tom Ffiske: I only want to touch on the ethics of helping military organizations, which in turn speeds up technological development. I share the opinions of everyone else on the panel; I also feel very uncomfortable with that, even though our biggest and greatest innovations do tend to come from that investment in technology. And the same is happening with VR and AR as well. I've been following Microsoft closely with what they're doing with the US Army and their new goggles, because they actually renamed the goggles to something beyond HoloLens; it's so different now from what the HoloLens actually is, because of this tight connection they have with the Army. I lean towards no: I think we've reached a point of development where US tech companies can develop without needing to take on military contracts. But that's a very personal opinion, and I'm sure others on this panel might share my view as well.
[01:11:16.463] Kavya Pearlman: I don't know if it comes down to $40 million, just saying. I don't know if somebody would just give away $40 million because they want to be ethical and don't want to be called out. It all comes down to money. People are taking risks of very grave magnitude, and they know that even with the worst of the regulations, GDPR, even with its 4% of revenue fines, even with all of these things, if all they have to pay is some X billion dollars, okay, lunch money, here. This is something that we have seen in the cybersecurity and privacy industry all the time, and it's going to continue to happen. It's money that's driving all these decisions.
[01:11:59.221] Tom Ffiske: Oh no, I know, but that's the nature of ethics, isn't it? We know that there's like capitalist gains from it. We just wish it wasn't the case. That's ethics in a nutshell.
[01:12:07.314] Kent Bye: Yeah. And there's also, when I was at the International Joint Conference on Artificial Intelligence, Max Tegmark had tried to get a bunch of academics to sign off saying, we're not going to support any sort of AI that's used in autonomous weapons that are going to be killing people. So then when you go down the stack, it's like, well, this algorithm could be used for this use case, so therefore we should eliminate these, what they call dual-use algorithms. Some people say it's a good thing to eliminate those dual-use things, and some are like, well, this has other uses besides that use. So how do you draw the line when you get lower down the stack of the research? I think that's the tricky thing: knowing where that line is, and knowing when you're going to put your foot down and say, okay, this has crossed an ethical threshold that I no longer feel comfortable with, having technology go out there and take someone's life automatically. I think that comes up more in AI, but there are similar issues that come up in XR as well. Well, tell you what, let's open it up for questions; we'll have 25 or 30 minutes for questions, for however long people want to hang around. As I open it up here, I see Lauren Slonick's question has four thumbs up, so I'm just going to go down the list. Her question is: do you know of anyone working on or researching how to articulate the qualitative aspects of the type of data you are sloughing off as you use XR, and the quantitative data sets that companies are collecting? And: I worry about sentiment tracking. I'll open it up; I've got an answer, but I'll see if anyone else has anything to say first. Well, I'll add what I know and then hand it to other folks. I know there are actually a lot of researchers; I know Jeremy Bailenson is one. First, to maybe recast "qualitative aspects," what I take that to mean is that you take a lot of numbers and abstract data and you extrapolate meaning out of it. So you look at facial expressions and you say they're feeling happy or feeling sad, or you do what Facebook and Cambridge Analytica did with psychographic profiles, where you take a bunch of data and basically come up with personality profiles. Most of the research that's out there is looking at things like how, given your eye tracking data, you're able to determine what people are interested in, or what their sexual preferences are. At the VR Privacy Summit, Jeremy Bailenson did a recap of a lot of that work, and I know that Jeremy has also been working on this as a research topic. But I think generally there are a lot of different researchers trying to look at what you can take from immersive data and what kind of conclusions you can extrapolate from that data set. And I'll invite the panelists to share any more pointers, or people can point to things in the comments.
[01:14:44.398] Galit Ariel: All the big tech companies have their own research labs, squads that are doing just that. So here's an answer.
[01:14:55.183] Kent Bye: Yeah, a lot of those research labs probably choose to publish sometimes, at SIGGRAPH and other venues, when it's palatable, but there's certainly a lot of stuff that's not as privacy-friendly; let's just say that's probably been happening a lot behind closed doors. That's for sure. Anybody else have anything to add on that?
[01:15:12.747] Kavya Pearlman: I know it's not really research per se, but I would mention, again, the XR data classification framework working group. We're trying to bring in researchers who have these sorts of answers. We're trying to have conversations with companies like Tobii or Cognitive3D or BadVR and use their resources to put together some sort of framework to understand how this data could potentially be profiled.
[01:15:39.589] Kent Bye: Cool. Next question. Was that somebody jumping in?
[01:15:43.232] Em Lazer Walker: Quickly, yeah. I just want to say, I think the fact that a lot of this research is happening completely behind closed doors at large companies means that a lot of the things we've been talking about are difficult; they are not necessarily solutions to those problems. For a very public ethical framework to actually stop private Facebook or Google research teams from doing unethical research, you need either much more stringent regulation than I think we've been talking about, or you need to make a meaningful impact on the opinions of the employees actually working there, so that they are able to say, no, what we are doing is not right.
[01:16:18.917] Kent Bye: All right. So, Siddhant Patil has a question. How can privacy and security be made profitable? What would make that an important consideration for those businesses which make profit driven decisions or don't care about ethics? Is it possible?
[01:16:32.533] Tom Ffiske: You might have to hop in for this one. Yeah. Excellent. I believe, well, the natural answer is Apple. They've absolutely made a business model out of privacy. That has been their marketing drive for the last few years. They've seen what's been happening with Facebook and they're like, let's capitalize on that. The whole deal has been around privacy, and it's worked wonders for them, because a lot of the way they make money is not actually by using user data. But touching on Kent's point, it has caused them other issues, which is their walled garden. They're very difficult to work with, which is why there are a lot of issues when it comes to building products for Apple. But absolutely, that's the company I'd think of. I saw Em, you nodded your head vigorously when I said that; I'm sure you have an opinion.
[01:17:18.963] Em Lazer Walker: I mean, I agree with most of what you said. But a thing that specifically worries me about Apple is that they have currently found privacy and security to be a strong business point because they are the only company doing it, and right now it is a thing that people care about. If either of those two variables changed, when everything is still guided by the hands of the market, who knows; Apple might not be the privacy company a year or two from now.
[01:17:44.450] Kent Bye: Hmm.
[01:17:45.557] Galit Ariel: I have to agree and disagree. I think Apple started with privacy before it became so popular; they integrated it earlier. And they have definitely been consistent and persistent and gone to great lengths to protect their users, even in court, even against the US government. So on that, unless there's a black swan, I can't foresee it not being part of their core values in developing products, personally.
[01:18:14.359] Kent Bye: Yeah, there's actually a lot of conflict between privacy and the open web and WebXR, and so they'll make arguments for privacy in order to avoid implementing the open WebXR technologies, which is an interesting thing that is happening there. For me, I actually don't think it can or should be made profitable. I think it's a bad model that you have to pay for privacy. Privacy should be a human right that is built in at a more foundational level of our institutions, which should be demanding privacy, because it shouldn't be something that we have to give away by default in order to get access. Mortgaging our privacy is bankrolling a lot of technology, which is great for technological evolution, but it's horrible for the future of privacy. So I don't actually think it should be profitable; I think it should just be a human right, and we should figure out how to have everybody do it. Now, that's certainly not the case, so.
[01:19:06.339] Galit Ariel: It should be unprofitable to do it any other way in my mind. Right.
[01:19:11.781] Kent Bye: And I think part of it is the culture and the people who value it that drive the other market dynamics. But because the market dynamics aren't doing that, we're in a situation where by default you don't have it. All right, next question, from Jonathan Ogilvie: Tom started by saying limit access, and of course he was talking about protecting data, but my mind went straight to the Ready Player One concept of limiting everyone's access to cyberspace by closing down the whole metaverse on Tuesdays and Thursdays, like a museum that's closed on Mondays, forcing everyone to spend time in the natural world. This extends an idea in ethical game design: feedback loops optimized for addictive engagement can be good for business but bad for society. Jaron Lanier's ten arguments call for a population-wide hard reboot now. Maybe we can get that, not now, but when COVID has a vaccine and the unmediated world is a whole new fun experience for people. Do you think the fictional concept of a weekly or twice-weekly hard reboot of cyberspace could be realized in actual reality in the next decade? My answer to that is no. Not enforced, anyway.
[01:20:19.099] Galit Ariel: Well, we already have regulations about limiting screen time for minors and kids, because we know there is an effect, a visceral effect, of consuming technology at large. And I imagine that for certain populations it will be enforced, especially for underage kids. But I really trust humans. Yeah, we're probably going to binge it in the beginning, but we're weird; we're never going to stay in that. We're not very good at stabilizing behaviors. And even if you look at what's happening now, I was on a VR and AR panel the other week, or the other month, I don't know what era I'm in, and everybody was kind of talking about, yeah, you know, with COVID there are all these harsh things, but look at what it does, the beauty it does for AR and VR. And I had prepared a loaf of sourdough bread, and I lifted it up and said, well, this is the killer app of 2020. This is it. So we want to believe, and a lot of people in the tech industry want to believe, that once we build it, they'll all come and they will stay. But I do trust that people, inherently and biologically, will find a balance eventually. There will be a blip, but they will find a balance and they will want to reconnect physically. And I'm already seeing it. You don't agree with me, Kavya?
[01:21:42.411] Kavya Pearlman: No.
[01:21:42.971] Galit Ariel: Yeah. I think. No. I think. I don't know. Like I look at the younger generation.
[01:21:49.356] Kavya Pearlman: I'm being Freud. That's all.
[01:21:51.758] Tom Ffiske: I want to just hop in and say the reason why I agree with you is that I look at World of Warcraft as a great example of an immersive world which people hopped into, went hardcore on, but then petered out of over time. I like looking at World of Warcraft as a case study for these worlds people explore. And it happened exactly as you described, Galit: people jumped in, got a bit intense initially, but petered out as they balanced it with real life appropriately. But I see Kavya looking to jump in.
[01:22:20.514] Kavya Pearlman: Yeah, I do want to explain myself, though. The reason why I feel that may not happen, at least not immediately, is the current situation. One, we have this elevated need and want to connect with people, and we are all cooped up in our rooms. Let's say we build these nice, amazing, compelling, realistic avatars, all of this AR, XR, VR. What happens when these companies use the same exact thing that they already use, these dopamine-inducing models? Whenever you consume this content, you feel better. Even today, when XR is not that great, I have friends, people who never knew VR existed, who now spend about six to eight hours in Echo Arena and who have been struggling with feelings of addiction. So think about it: we literally have technologies now, AR technologies, that track your mental state and make you feel positive. Well, in the wrong hands, that could have adverse effects. So that's why I'm not counting on it, because, as Freud said, you know, humans, if you leave them to their own devices, they'll destroy themselves. And that's why I'm like, you know what, I don't trust these companies. We have to prepare for the worst, because they are going to weaponize our information; it's going to happen. And we are going to deal with addiction, we're going to deal with all these issues in XR, and it will be worse than in the current digital ecosystem.
[01:23:51.277] Galit Ariel: I agree with that completely. I never said I trusted the companies, but I do trust the people eventually. And really, when I look at the younger generation, the really young, not us young, but the really young, I am seeing that they're using technology more and more as a tool, and they're more skeptical, and they're smarter about how they use it. They're not so quick to adopt it and swallow it the same way that our generation is. But I completely agree: when you have addictive triggers and mechanisms in it, then even if you don't want to be part of it, you're triggered to be part of it. And that is something we have to solve, for sure.
[01:24:32.623] Em Lazer Walker: Yeah, I wanted to quickly jump in and point out that I think the World of Warcraft example is a really interesting one, because to my knowledge there have been a small handful of people who literally died because they were so addicted to World of Warcraft that they didn't bother eating or sleeping. And again, I think the solution is not limiting screen time per se. In China, Honor of Kings, I think it's called Arena of Valor in the US, is the biggest game in the country, and Tencent, the creators, limited the amount of time that anyone under 18 in China could play it. It didn't really do much, and they limited the time again, and that didn't really do much of anything either. Even before you get into AR technologies, so many of these games and experiences are fundamentally dopamine slot machines. When you have people who are just peddling Skinner boxes, that is a much larger societal problem to solve.
[01:25:20.560] Galit Ariel: And it goes back to the business model of, I make money from keeping you inside my platform. Because the truth is that most social media companies, you know, they're not tech companies, they're advertisement companies. Let's give Facebook the name it deserves: it's an advertisement company that uses a social construct to sell stuff, and to get your data and sell that as a commodity. Same with Google. There's a reason why they're giving it away for free, trust me, and they're still among the most profitable companies. So I think this is where the business model, and the regulation around it, and the taxation around it, could help a lot in diverting their incentives for how they want you to interact with the technology, and perhaps protect us a little bit better from destructive and subversive applications of tech mechanisms and game mechanisms.
[01:26:14.315] Em Lazer Walker: And I at least have some small amount of hope from the way that many different countries across the world have been implementing rules against loot boxes specifically, one particularly abusive form of these psychologically compulsive mechanics. Who knows what will happen, but that gives me some optimism that this is a space where, at least in some instances, governments are willing to come in and regulate.
[01:26:36.410] Tom Ffiske: No, absolutely. I think loot boxes are the perfect example of how, when a country gets serious, they will introduce rules which really help. It's also a good example of how some companies try to fight back. EA, for example, when fighting back against loot box regulation, called them "surprise mechanics." So I have to wonder what other companies will do when exploring ethical issues in immersive tech.
[01:26:59.009] Kent Bye: And the point that I would make here is that there's an implicit assumption that anytime you're doing anything in VR, you're escaping and you're not in relationship to other people or to the wider world around you. I don't think that's a good assumption; I think you could actually be in deeper relationship with other people. But the ethical challenge there, like Galit was saying, is, from a design perspective, are you really just trying to hijack someone's attention and get them onto these Skinner box hamster wheels to be able to profit off of that? They're not really benefiting from that aside from just being completely addicted. So that's more of a design component. But as individuals, we all have to figure out how we're in right relationship with the world around us, in relationship to the earth and to the other humans around us. I think that's a big question that is up to individuals and that you can't necessarily enforce by shutting off the internet two days a week. All right, let's go through some more questions here. Regarding harassment and cyberbullying: how could or would you enforce a penalization system, or a governing body to make judgments on harassment? How secure and private could you make the system so that users would understand such a system or body is in place without divulging the processes? Like Lee says, we've only been trying to moderate the symptoms, not the deeply rooted problems. A few quick thoughts that I have on this. One is that this is kind of like a truth and reconciliation commission type of thing, or it's a justice question: how do you prosecute and have a defense? I'm skeptical of just having a singular governing body. That's what's happening around the world with the movement to defund the police: trying to shift funding into more grassroots organizations. So it's not necessarily a top-down authoritative body that's going to make a judgment about whether or not you were acting in a proper way, but something more grassroots and bottom-up. As much as you can, be in relationship with the people who are directly involved and maybe deal with it directly; and if there's gaslighting or behavior that's going to be abusive, it's an open question how you have some sort of process to mediate these different types of conflicts, whether or not people get banned, or how you have people apologize and make it right and have more of a truth and reconciliation model. We're still so early; all of this is very theoretical, but those are just some initial thoughts. I don't know if others have any ideas or thoughts.
[01:29:15.474] Kavya Pearlman: I agree with you, Kent. And I think we cannot create another sort of XR police model, but we do have to bring in entities, because of XRSI's three principles: ethical, unbiased, and trying to build safe virtual and augmented environments. These are the types of entities that we have to bring together, and have their expertise, their integrity, their ethics leveraged to build an independent review board or something. This is something we thought about when we were rolling out standards. We wanted to do something like monitoring and reporting, but we held off, because we first need to just draw that line in the sand. But the next step is that we are going to maintain a mechanism where people can report to us what bad thing happened and when it happened, and then we'll investigate and try to help out. Just yesterday, a friend of mine reached out to me on LinkedIn saying that she's been harassed on many platforms and her research has been deleted. This person has nowhere to go, so she reached out to me. Apparently she was like, hey, you're a cyber guardian, or that's what people call me. So we need to create some sense of a trustworthy entity by bringing together a lot of collaborative people, like the people on this panel, people like yourself, Kent, to create this independent body that will hold people accountable for that individual experience that took place in XR. We will have it.
[01:30:49.026] Kent Bye: It's a hard issue. I mean, it's one of the biggest issues. But I don't know if you had more thoughts.
[01:30:54.948] Galit Ariel: I think it's a much easier issue than we want to admit. For me, again, it's zero tolerance. Zero tolerance for it as a user of a platform, as a developer, as a spectator. I'm looking at what's happening now in the world, and I think everyone on this panel that I've seen on social media has been voicing out: this is the line in the sand. I'm very disappointed in others who have a lot of voice and are refusing to do it because they're worried about their future career or their position. And again, I don't think we can afford it, with bullying, with discrimination, with racism of any kind. If there is ever a time to draw a line, it's now. And I really urge everyone here to have zero tolerance for it and to call it out. It's going to be much worse if we don't. Whatever consequence you think you will have in your career, trust me, it's going to be much worse for you personally and professionally if you don't.
[01:31:57.514] Em Lazer Walker: And I totally agree. And I think in a lot of cases that might mean needing to move away from platforms. I'm thinking a lot about Riot, who released their new non-VR game, Valorant, about a month ago. The executive producer of the game, who is a woman, has publicly said that she does not play the game by herself with strangers because she always gets abused. And if the executive producer of a game with millions of players can't summon the political will to solve this problem, I don't know how you, as part of a larger community that encompasses that game, salvage that, other than everyone saying: there are other experiences out there, we don't need this one.
[01:32:35.268] Galit Ariel: Yeah. Yeah. And you're seeing in social media now how brands are boycotting Facebook, and it's working. There are many ways to put pressure, especially if it's a for-profit platform; there are so many ways to put pressure as individuals today on the big players. These big companies, whether it's the game companies or whoever, they'd like us to think that they are this cloud that can't be touched: it's Google, it's Facebook, it's Riot. No, they're made of individuals who are worried about their reputation and their profit, and people who work there who are worried about their ethics and their future as well. They are touchable. Oh, wait, I shouldn't say that. Don't touch them, though, without consent.
[01:33:22.198] Kavya Pearlman: You're right. And I think shaming works. We do that in cybersecurity a lot, where people, you know, don't close the front door, or leave all these back doors open intentionally or unintentionally, or have stupid password practices, or lose a bunch of data. So we do that a lot in the cybersecurity industry: we do some kind of shaming and have the community kind of yell at them.
[01:33:46.113] Galit Ariel: I like to call it calling them out, because you can't shame someone who has no shame, you know.
[01:33:52.629] Kavya Pearlman: That's right. So, yeah, calling them out and just exposing them. And we're going to continue to use that power of community, bringing other communities along with it, because it's not just the XR community. Just like I said in the beginning, it touches all the domains. So once we create that sense of awareness, it's like, yes, XR technology is being used, but look, your community is being put at risk because they're not following ethical principles. Then you can bring them in as well and add to that voice of calling people out. Yeah, you're right, this is one way we'd be able to make some headway.
[01:34:29.592] Kent Bye: Two quick complications of that, as a devil's advocate. For one, when you have a closed walled garden, you do have the ability to ban people, but if it's an open, decentralized system like the open web, then harassment is still going to be an issue, and the antidote of banning people is maybe a short-term solution that makes things safe in the short term. But if you look at it in the long term, you can see it as the equivalent of sending someone to jail and exiling them. What does it mean to permanently exile someone from immersive technologies for their entire life? What is the model for rehabilitation, or owning harm done, or other approaches? So how do you balance punitive justice with restorative justice, and how do you integrate that into the fabric of the technology? Today, when you can track IPs and you can ban people, that maybe addresses the experience in the short term, but I'm skeptical that it's actually going to change the root of the problem, which could be more of a human issue than something that has a technological solution, I guess is what I'm saying. Thinking about it holistically: what else needs to happen? Are we going to put people into permanent exile because of something they did when they were a teenager? That's the long-term question that makes it more complicated. So in the short term, certainly, I agree that people need the ability, just like any private business, to say, we're not going to have you here. But you also have to think about the long term, and about public spaces as well. I don't know, Tom, if you had anything else to add.
[01:36:12.159] Tom Ffiske: No, I just share the same view that calling out is the best and healthiest way of doing it. My only addition is that I just want to quote Hank Green. Hank Green would say, we should be judged not by how we acted when we were ignorant, but by how we responded when we were informed. I agree with calling people out, but if they have a history and they are good now, don't dog them based on their history. That's all I'll say.
[01:36:40.358] Em Lazer Walker: That's something that games has been grappling with; there have been a lot more accusations of abuse this week specifically. And in games, and in most industries having any sort of quote-unquote Me Too moment, usually what happens is that people maybe issue an apology, they silently go into hiding, and then six months or a year later they just sort of come back and pretend that things never happened. Issuing people permanent bans maybe solves that problem in some ways, but it's a clunky, bad solution for the reasons we just talked about. I think figuring out actual restorative or transformative justice in online communities is a totally unsolved problem, regardless of whether you're talking about XR or not.
[01:37:20.479] Kent Bye: Yeah. Wow. Well, I think we're at time. This feels like a good place to stop. No?
[01:37:28.142] Galit Ariel: Can we just have the question about Mars and the space exploration, please?
[01:37:32.163] Kent Bye: Because I'll be waiting for it. We'll do one quick round and then we'll wrap up. What do you guys think of space exploration and the hype or hope around moving to Mars in relation to our collective future, living with surveillance capitalism as a global community? Galit, would you like to have an answer for this?
[01:37:50.156] Galit Ariel: Oh, I think you can guess. I love this question, because I think the answer is within the question. If there's anything I wish for all the Silicon Valley moguls who want to go to Mars, it's that I will do anything in my power to help them get to Mars. I think they should all go to Mars, and they can stay there, and then we can stay here and solve all the other problems. That's my answer.
[01:38:15.400] Tom Ffiske: Great answer. I think the other answer I'll take from this question is an answer I think everyone on this panel will agree with. The impression I'm just getting is we all just don't like surveillance capitalism and we should reform it where we can.
[01:38:32.394] Kent Bye: I'm skeptical of colonialism and settler colonial mindset. And I fear that going to Mars is just going to replicate a lot of the settler colonial mindset that we've had on the earth and that we should learn how to live on the earth first before we think about colonizing Mars. So that's my answer.
[01:38:47.776] Galit Ariel: And the Tesla they sent to space crashed into an asteroid. It's okay. Send them. It's fine. We're good.
[01:38:56.380] Em Lazer Walker: I don't think we can separate the colonialist aspects of space travel or anything from all of the complaints about capitalism we have all been railing on. The two are intrinsically linked.
[01:39:10.170] Kent Bye: Cool. Well, I feel like that's probably a good place to stop. We could talk forever about these topics. I just wanted to thank each of you for joining in this discussion: Tom, Galit, Kavya, and Em. Again, this is a never-ending topic that I think we're going to continue to talk about, and hopefully we'll come up with more ways of making sense of these different trade-offs and try, as best we can, to implement the most ethically aligned design that we can with immersive technology. So yeah, thank you all for joining us today on this XR for Change panel.
[01:39:41.756] Archit Kaushik: If you'd like to attend more such conversations, please register for the Games4Change Festival at festival.games4change.org. And once again, thank you all for joining, and a special thanks to all the panelists for participating. Thank you and good night.
[01:39:58.965] Kent Bye: So that was the XR for Change Talk and Play panel featuring Tom Ffiske, the editor of Virtual Perceptions and author of the book The Immersive Reality Revolution; Galit Ariel, a designer of physical spaces, an experiential and human-computer interaction designer focusing specifically on augmented reality; Kavya Pearlman, the founder of the XR Safety Initiative, which is helping to build safe XR environments; and Em Lazer Walker, a Cloud Advocate at Microsoft focused on experimental games, who makes her own hardware, is a part of the MIT Media Lab researching spatial audio, has been looking at online virtual worlds and social spaces, and generally talked about the Microsoft side of things. So, I have a number of different takeaways from this panel discussion. First of all, it was a wide-ranging discussion, so I'm going to try to break it up into different themes. We started off with Tom talking about data and the concerns around data capture. He has six principles to try to follow. Limited access, in terms of which entities can have access to our data, specifically around politicians and what governmental entities have access to. Transparency in design, so just having more information about what data are being collected; that's something GDPR has embedded within it, a little bit more of that transparency. Understandable algorithms; there's the Algorithmic Justice League, which was featured in a documentary called Coded Bias, an amazing look at some of the different AI biases that are happening, and the question of whether we need algorithmic oversight and some regulatory body to look at that specifically. That's a big open question: how that would actually play out and how that would work. Then there are the two aspects of data sovereignty: users should be able to own their data, but also be able to always get access to it. And then finally, active opt-in, and the problem this presents in terms of notice and consent, because you would be overwhelmed with so many different choices and trade-offs. This is actually a huge open problem for Facebook, and they have a white paper that they released on February 14th where they're trying to do user-centered design and have these different regulatory sandboxes, to have discussions with regulators as well as people from the industry to figure out what good design is and how you give that type of notice and consent. So Facebook's TTC Labs, the Trust, Transparency, and Control Labs, is going to be actively looking at that. The other thing around data that I think is worth mentioning is Kavya Pearlman's XR Safety Initiative. They have the XRSI definition of extended reality, which started off just by naming definitions of XR and AR and VR, trying to come up with what we're talking about when we talk about all these immersive technologies. And in Facebook's white paper on communicating about privacy, towards people-centered and accountable design, they actually cite Kavya's definitions. So there's starting to be at least some dialogue with Facebook, which is good to see. The next thing that she's really working towards is this data classification framework and public working group. It sounds like it's going to be spinning up again here soon, led by Diane Hosfelt from Mozilla.
I know she's done a great job of looking at a lot of the XR security issues, and of trying to come up with some taxonomy systems to talk about the data and what you can do with it. All right, so the regulation around all this technology: I think that was mentioned a lot within the course of this conversation, but we didn't really get into any of the details. After this conversation and the previous panel discussion I was on, I did a bit of a deep dive, trying to get a sense of whether there is a clear path for what that regulation would look like and how it would interface with the government. I think this is such a huge open question. We can talk about regulation all we want, but until we actually see some actual policies, and some entities pushing forth what those policies would be, we still have to wait and see what the regulatory framework will be, especially here in the United States, which tends to be a little more hands-off and to not regulate too soon. In my previous episode, at the end of that podcast, I do a deep dive into some of the various issues, looking at the law and some of the perspectives that come up there. So I think that's a huge open question; I'd really have to talk to some lawyers to get a better landscape of what the laws are and what kind of regulation could be there. Privacy is not necessarily universally defined anywhere; India has something about privacy enshrined in its constitution, but there's nothing within our law that starts to define it. There are all sorts of challenges with the fragmentation of all these different interpretations of privacy across all sorts of different laws, and with the vagueness doctrine and how to make sure it's clear when you're trying to apply these different laws. So anyway, the bottom line is that it's very complicated, and I think it's worth digging in with more legal experts to really talk about some of the different policy implications of what can be done as we move forward. So yeah, just other things in terms of XR and some of the different threats: there's obviously controlling perception. What does it mean to be able to hack into someone's world? Does it mean that if you are controlling someone's world with AR, you're able to control them? And, you know, one of the things that Tom mentioned is this company that is potentially going to try to get rights from landowners over who has access and rights to do augmented reality on their property. I don't think that's feasible, and I actually disagree that it should be up to landowners. I think they should be a part of the conversation in terms of trespassing and access to private property, but at the same time, there also needs to be a free speech right for anybody to augment whatever they want and not have any limitation on what people can do. I think that's more of a free speech issue than anything else. Now, when it comes to people trespassing on private property, that's a whole other issue that needs to be addressed as well, but I don't think that's necessarily something that a private company can do a great job of. I know that Mark Pesce has proposed an open standard, a mixed reality service, to be able to handle these types of things, but it's a big open question, and I'm skeptical that a private company would be able to really solve it.
One of the things that Em said in terms of privacy is that you can record the world around you and create a mesh, and once you create a mesh of your world, you're able to actually learn a lot about that person. And so she was talking about how there are ways to break up that data so that you can still run the AI algorithms on it, but it would be difficult to reconstruct everything. Whether that's homomorphic encryption or trying to break up the data in different ways and separate it, there are certain ways you can split up the data so that it's difficult for that data to leak and be reconstructed. So there are all sorts of threat vectors there: as you scan your environments, what happens to that data, and what kind of security processes can you run on the back end? It's good to hear that there are options like that. She didn't go into much detail about what that is, but I know there are concepts like homomorphic encryption that could help implement some of that. Then there's this whole thing about harassment and this dialectic around what you can do with the technology: blocking people, personal safety bubbles, muting people, kind of the standard first line of defense that you want to have. Galit said that to block someone solves a symptom but doesn't necessarily solve the problem, and I totally agree with that. I do think there are larger cultural issues, and there's only so far you can go treating it as an engineering problem. In terms of the long-term problem of harassment, I think this is perhaps more of a cultural issue, something beyond what engineering and technology can solve alone. There has to be a wider code of conduct and a way of cultivating culture; this is really a process of dispersing cultural practices and normative standards. And how do you do that when you have so little accountability, with people coming in as anonymous entities? Is it attaching some sort of social credit score or trust and safety score? What is it that you are able to do? Is it recording everything that happens? I think this is a really difficult issue with lots of different trade-offs, which we talked about. And there's also this whole dynamic of public and private spaces. One model, in VRChat, is that you're able to invite only your friends, or maybe you want to open it up just a little bit so you have one degree of separation, so that anybody within the room is connected to somebody else by one degree of separation. You can do invite-only, or, if you're a friend of anybody that's there, you can come in. So there are different levels of privacy and containment that you can have to create some of these safe online spaces. And it seems like a lot of the really interesting stuff that's happening on some of these social VR platforms is happening in those private spaces. But I think it is important to still try to maintain some level of public spaces, because if the public space disappears altogether, then what does that mean for creating these self-reinforcing filter bubbles and for having opportunities to have collisions with people you may disagree with?
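As a toy illustration of the data-splitting idea Em raised above (splitting a room scan so that no single back-end service holding one piece can reconstruct the space), here is a minimal Python sketch using additive secret sharing over a point cloud. The function names and the use of NumPy are assumptions for illustration only; this is not Microsoft's actual approach, and real systems would combine ideas like this with encryption and access controls.

```python
import numpy as np

def split_scan(mesh_points: np.ndarray, n_shares: int = 3, scale: float = 1000.0):
    """Split an (N, 3) point cloud into n_shares arrays that individually look
    like noise but sum back to the original scan."""
    shares = [np.random.uniform(-scale, scale, mesh_points.shape) for _ in range(n_shares - 1)]
    shares.append(mesh_points - sum(shares))  # final share makes the pieces sum to the original
    return shares

def reconstruct(shares) -> np.ndarray:
    """Only a party holding every share can recover the original scan."""
    return sum(shares)

# Example: three services each store one share; any one or two of them alone
# learn essentially nothing about the room geometry.
points = np.random.rand(100, 3)
parts = split_scan(points)
assert np.allclose(reconstruct(parts), points)
```

The property being illustrated is simply that each share on its own looks like noise, while all shares together sum back to the original scan, which is the flavor of "break up the data so it is hard to reconstruct" described above.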
And as we go back out into public spaces in reality, what does it actually look like to have quote-unquote public spaces within VR? Technically, a lot of these VR spaces are owned by private companies, so even then you're bound by those companies' terms of service. So we have to think about what it means to have public spaces that are owned by the government, or owned by individuals who flag them as public spaces. That gets into the decentralized web and, moving forward, this differentiation between public and private spaces. Em said that she hasn't spent any time in a public VR space without getting harassed, so we're still trying to figure out how to address this issue. She mentioned that the executive producer of Valorant at Riot Games, Anna Donlon, has said she can't play solo in her own game because she receives so much harassment playing as a woman, and she's the executive producer of the game. Even though she's part of the executive leadership, this is clearly a larger cultural issue that they're trying to figure out: how do you address problems that are part of the wider gaming culture? That's another thing Em said, that this is part of a toxic gaming culture that has been carried into VR. So how do you decouple that, or address it in some meaningful fashion? Moving on, surveillance capitalism came up quite a bit as a business model based on taking all of our data. I don't think anyone was really a fan, and I'm not sure anyone had an alternative, other than to point to other economic models, like Microsoft, which is choosing the enterprise route and doing business with other businesses where it doesn't want to hold your data; in fact, the data becomes a liability. But at the same time, a HoloLens costs over $3,000, so you're certainly paying for that. With Apple, you can potentially keep more of your privacy, but you also have to pay for it. So there are other economic models out there, but you end up paying a lot more, and it becomes an equity issue where you have to have enough wealth and resources to maintain your privacy, which doesn't seem like a great solution for everybody once you apply Rawls' veil of ignorance and think about his theory of justice. Then, in general, on military contracts: it's a bit of a paradox that a lot of the technology we're using is a direct result of military investment. I think each person in the industry has to decide where their ethical threshold is, whether or not they're going to be directly involved with any technology that will be used in warfare or be directly responsible for taking lives. That's a decision each person has to make. There are larger discussions around dual-use technology, and this tends to be a bit more active in the AI community when it comes to algorithms that can be responsible for taking human life, as with autonomous vehicles, and around the ethics of having a human in the loop versus leaving choices of life and death in the hands of automated machines and algorithms.
And most of the AR technologies that I've seen, at least up to this point, have been used for training, though they could potentially be deployed out into the field as well. So it's one of those things where technology gets funded by the military; that's where our culture is right now. The final point I want to make is that there was a big discussion around escapism, and whether this is something people are just going to sort out on their own or whether they need a bit more help. Em said that some of these games are fundamentally dopamine slot machines, with people peddling Skinner boxes, and that it's a much larger societal issue. There's some optimism when it comes to things like loot boxes, with regulations trying to reduce the gambling-like aspects embedded within games. And whether it's deliberate mechanics like loot boxes or just design more generally, what does it mean to create virtual worlds that try to hook people through an addictive Skinner loop? Is that what people want, or are there ethical obligations to build in a certain amount of moderation? So the final takeaway is that there are lots of issues that remain unresolved here. I think regulation is probably one of the biggest, along with biometric data and getting a better handle on the risks of what's recorded and what's not recorded. I'll continue to have a couple of other conversations: one with one of the executive directors of the IEEE's Ethically Aligned Design initiative; they've put out a whole book recently, and it has a chapter on mixed reality. As well as with Howard Rose, who actually listened to this conversation. He's more in the medical realm and had some specific thoughts about the actual experience and the ethical considerations there, so we had an extended conversation, not only around the medical aspects but also the larger structural issues. I think there are a lot of people thinking about this, so look for those conversations here soon. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.