#836 XR Ethics: AWE Talk on Ethical & Moral Dilemmas of Mixed Reality

The Virtual World Society provided me with an opportunity to give a main stage talk at Augmented World Expo on the Ethical and Moral Dilemmas of Mixed Reality. I tried to lay out as many of the ethical implications of XR as I could in this talk, after talking to hundreds of people about XR over the past five years. I presented it on Friday, May 31, 2019, and I later used the basic structure described here for my XR Ethics Manifesto talk, given on October 18, 2019.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So continuing on in my series looking at XR ethics and privacy, this is a keynote talk that I gave at the Augmented World Expo on Friday, May 31st, 2019. It was called The Ethical and Moral Dilemmas of Mixed Reality. This came about because the Virtual World Society, which was originally founded by Tom Furness, and Linda Jacobson had reached out to me, and they had some slots at Augmented World Expo to present different topics, and they wanted to give me a slot on the keynote stage, the main stage, to give a big talk on ethics. So I gave this talk at the end of Augmented World Expo, on Friday, the last day. On that Thursday and Friday, I'd spent pretty much the whole morning trying to distill down a spatial framework to list out all the different moral dilemmas that I could think of, and I ended up talking about 60 or 70 different moral dilemmas, and then even more within the spatial mapping. So I was trying to map out all these different contexts and look at them relative to each other, to see where there might be some conflicts. So that's what we're covering on today's episode of the Voices of VR podcast. So this is a talk that I gave at Augmented World Expo, titled The Ethical and Moral Dilemmas of Mixed Reality, and I gave it on Friday, May 31, 2019, at AWE in Santa Clara, California. So with that, let's go ahead and dive right in. Hello, everybody. Thank you for sticking around. My name is Kent Bye. I do the Voices of VR podcast. And today, I'm going to be talking about the moral dilemmas of mixed reality. Hopefully, I'm going to cover every single last moral dilemma in my presentation. Well, we'll see. At least we'll have a framework that we can use to start to look at the risks that are involved with mixed reality. For me, this is something that has come up in talking to people over the last five years now. And there's a lot of various different issues that have come up. One in particular: I was just at this Future of Neuroscience in VR gathering. It was a workshop with about 30 different people. It was put on by the Canadian Institute for Advanced Research. And in that, they had somebody who was a neuroscientist, and he was talking about synthetic speech generated from brain recordings. And what they showed was this demo. Essentially, for people who have seizures, you can open up the brain and put all these electrodes in the brain. That's called ECoG. And with the ECoG, you're able to decode what they're actually saying. So this is not at the point yet where you can do it just externally. But within the next three, four, five years, it was said, it's going to be possible to be wearing EEGs, and our thoughts are going to be available to communicate with things. So on the one hand, that's amazing, that we could just communicate with our minds and not have to use any language or anything. It would just be telepathic communication. Yet on the other hand, who's going to be owning that data? Do we actually want Facebook to be owning our thoughts, or Google owning our thoughts? Where does that data go, and where does it live? So Facebook actually announced at F8 that they're working on this. They're working on brain-computer interfaces in their Facebook Reality Labs, and they presented it a couple years ago. So what if you could type directly from your brain?
This is going to be an amazing future potential, especially for people who have accessibility issues and they can't use their hands. And yet, at the same time, now that the Oculus Quest has launched, we're now looking forward to what is the next biometric data that is going to be starting to be integrated within these technologies, whether it's eye tracking data, EEG, or galvanic skin response, or EMG sensors on the headset to be able to detect your emotions. So essentially, we're having access to all of this very intimate information. And it could be used to transform your consciousness. And I'm very interested in this concept of consciousness hacking and what can you do to be able to be empowered with all this data, and what can it allow you to do to become way more connected to yourself and really live up to these superhuman potentials. So I want to hack my own consciousness, but I don't want others to be hacking my consciousness. And so we're at this stage right now where, as we move forward, we're going to have more and more of this intimate biometric data that's going to be used with these technologies, opening up all these new possibilities, but also these huge threats. So that's what will be coming today, is an XR ethical framework to be able to help navigate the moral dilemmas of mixed reality. So on the Voices of VR podcast for the last five years, I've done about 1,100 interviews now. And in that, I started to hear back in 2016, this was after Will Mason of Upload VR had written an article. There was some sort of talkback that was happening from Facebook and reporting back up what was happening. And so he wrote an article, and then the Senate got involved. Al Franken started to send a letter to Facebook saying, hey, what are you doing with the privacy here? And it had created this whole buzz within the VR community back at Silicon Valley Virtual Reality Conference in May 2016. And I started just hearing these conversations about privacy and virtual reality. And so for the last three plus years now, I've been talking to well over 30 or 40 different people about specific issues about biometric data privacy in VR. And so I've asked over 1,100 people now what they think the ultimate potential of VR is. And it's interesting to see the responses. And you get a variety of different responses. And I could start to map a cartography of those responses. They'll say, oh, it's going to change entertainment. revolutionize medicine or change the way we interact with each other, our partners, the way we deal with death and grieving, being able to do higher education and training and travel and different spiritual explorations for your religion, career enterprise applications, connecting to friends and family. People who have accessibility issues and they're isolated and not mobile, giving them new access to all these different experiences, expressions of your identity, new levels of embodiment, new economies and exchanges of virtual resources, teaching kids as they're growing up, all the different primary school, all that's going to be changing, as well as communicating with your friends, and finally, connecting with your home and family. And so in some ways, this provides a bit of a cartography of the human experience that, as I've asked this question, the ultimate potential, the ultimate potential is the human experience. And that the technologies are just modulating us into these variety of different contexts. So in some ways, you could say that this is kind of a rough mapping of context. 
And as we start to have contextually aware computing, these computers are going to start to determine what context we're in and be able to have very specific information depending on what context we're in at that moment. And the difference between AR and VR is that with VR, I could be at home, and then all of a sudden, I'm in a completely different context. I could be at work. I could be playing with my friends. So it's a complete context shift. Whereas with AR, you're starting in the center of gravity of whatever context you're in right now, and you're starting to overlay new dimensions of context on top of it. So you're starting to mash up the contexts. But overall, you're still primarily in that center of gravity of that context you're in. You're in the real world, whatever context you're in. And in VR, you're able to completely shift context. So that's at least how I start to think about some of the differences between AR and VR. And so I got invited to participate in a think tank at Laval Virtual in France, where we had this group of people brainstorming a number of different issues. And one of those issues was around ethics. And so we did the World Cafe, where we would take little notes and write different scenarios and moral dilemmas up on this page. And you could see that this was quite an unstructured brainstorm. And we were faced with needing to digest it and present it to the community. And it was like, well, how do we structure this? We brainstormed all these ideas. What's the framework to be able to understand it? And so we struggled for a bit. And I was like, well, what if I take this model of context and start to match it up with some of these things as a first iteration? So this is what we came up with, these different ethical issues within those different domains. And so you can see there's self-identity and embodiment, around privacy and biometric data privacy. There's virtual goods and resources; there's going to be completely new economic models in terms of how we exchange value with each other. For early education and communication, there's going to be different dimensions of negative transference, and so you could train bad things into children. So that's going to be an ethical issue. For home and family, there's an issue of who owns the mirror world. It's like private property, but just because you own that private property, does that give you the right to augment that private property? And then there's opening up your home to private data. There's entertainment, with different issues of consent to violence and addiction. For medicine, making sure it's evidence-based and you're not doing any harm. For connecting with other people, what are the safety implementations for trolling that you have? What are your personal rights for your identity after you die? As well as ethical frameworks and standards for making sense of all this: what are the threats from the government being able to have access to this information, to do different types of surveillance or control? If we look to China, we can see what's happening. Social reputation scores for friends and community, putting a number on your reputation, but then tracking that number and connecting it to what type of services you have access to in the real world. And then accessibility issues, in terms of taking into consideration the ethics of making sure that whatever you're designing is accessible to people who may not be able to see or hear, or who don't have able bodies.
So this was what we were able to digest down over the course of a day. But yet at the same time, there's a whole lot of other stuff that didn't make that first cut. We felt like that was sort of a nice way in. But what is a comprehensive framework? And so it still left me to kind of go back to the drawing board to see, OK, what are the structures to have a little bit more of a flexible or robust framework, to be able to essentially create Black Mirror scenarios if we want to imagine these potential risks and design around them, but also just to take into consideration what it means to start to blur all these contexts that used to be separate. We're erasing and dissolving those boundaries, and so there's new ethical issues that come up, and we have to have a way to navigate that. So I went to the American Philosophical Association. And what I noticed is that philosophy takes a very waterfall approach. It takes a really long time for philosophers to come up with these frameworks. They kind of go off in their ivory towers and then think about things. They don't show up to these conferences. They're not talking to people. They're kind of disconnected from what's actually happening. And I kind of take the opposite approach. I like to have lots of conversations. I like to test my ideas constantly. And so I like to fail fast, meaning this presentation I just finished this morning. It's not comprehensive. It's the first cut, and I want to get it out there. But the point isn't that I'm done. It's incomplete. And I like that. I like the fact that it's the start of a conversation. And I hope that the best possible outcome is for me to show this to everybody and then to start all these different conversations to be able to see all the blind spots that I didn't think of. And so instead of that waterfall approach, I took a more iterative approach. I was like, well, in order to actually explore these moral dilemmas, my theory is that you have to have all these conversations with these people in the community, and then they're going to tell you stuff that you never thought of. And so the more conversations I have, the more I can flesh out this landscape of moral dilemmas. So there's a sensory design Slack group that's meeting up in Mozilla Hubs, having these different discussions. And I gave a presentation, and I actually facilitated a conversation first, and then, based upon what the conversation was, presented the ethical framework. There's also Diane Hosfelt, who recently did this paper called Making Ethical Decisions for the Immersive Web from Mozilla. It's on arXiv. Definitely Google this and look it up. She's got the whole tie-in to the academic citations. She does a great job from an academic paper perspective, fleshing out what's at stake here. And Diane is actually going to be on a panel discussion with me, as well as with 6D.ai. Magic Leap is actually going to be talking about their privacy policies for the first time at SIGGRAPH, as well as Venn Agency. We're going to talk about decentralized self-sovereign identity. So at SIGGRAPH, we're going to be talking about a lot of the nuts and bolts of what it takes to do architecting for privacy. But in this talk, I'm trying to take all these different sources, like a lot of things that I read in this paper from Diane, and start to pull out different things and fit them into an overall framework here. So this is my approach. So I'm taking the domains of human experience with these different contexts.
And so we have the self and biometric data and identity. We have the resources, money, and values, early education and communication, home, family, and private property, entertainment, hobbies, and sex, health and medical information. So anything that's other, you have self and other, but also partnerships and law, death and collective resources, philosophy, higher education, career, government institutions, friends, community, and the collective culture, and then finally, whatever is hidden, exiled, or issues of accessibility. So you can see there's like these little emojis there. And I like that just because you sometimes can look at the picture and get inspired with whatever things would be included in there. Like I'm super inspired by like Gödel's incompleteness theorem, which essentially says for any logical system, it's either going to be consistent or complete. And you can choose one or the other. And so I'm choosing consistency, but it's going to be incomplete. I think that's part of the problem with philosophy. They haven't really necessarily come up with a comprehensive framework for all of the entirety of the human experience. It's because, like, how would you even start? So I'm like, well, I'm just going to slice it in this way, and then we'll do an iteration, and then see what comes out of it. And then we'll, you know, you could slice it 36 different ways, or 48, or a million different ways. But I'm just starting with this for now. So what I also heard from Dr. Anita Allen, she's the founder of the philosophy of privacy, and she said at the American Philosophical Association, she said, we don't have a comprehensive framework for privacy. That's a thing that doesn't exist. And I was shocked to hear that. And I was like, oh, that actually makes a lot of sense because it's left up to Google and Amazon and Facebook and all these major companies. They have to make that decision. They have to decide what that threshold is between what is private and what is public. And I think they're doing a bad job. They're taking it from the lens of what they're interested in. They're not thinking about your data sovereignty and what you want to actually own for your data. And so when I look at this, it actually fleshes out the things that are at the bottom here. Yourself, your biometric data, your name, your social security number, all your money, your finances, your resources, what you're buying, what you're paying, your mortgage, your early education influences, your private communication with your friends. your home and family, where you're from, your place of origin, your ancestry, your private property, where you live, what you like, what your values are in terms of your entertainment, hobbies, who you're having sex with, as well as your medical and health information. All of this is different dimensions of your PII, your personal identifiable information. So this is my first cut of, OK, well, if there isn't any existing comprehensive framework for privacy, well, let's just use this for now and see where we go. So we're going to break this out a little bit. So the interesting thing that I was looking at here is that we have all these different domains of human experience here. And let's take an example. So let's talk about my biometric data. I want to use that to understand what's happening in my body to do meditation, so do some sort of consciousness hacking. So I'll put that in the medical realm there. 
Now, I might be using a technology from Facebook or any sort of other virtual reality technology. So that technology, they're going to be wanting to use my data for their profit. So they're using my data, so that's the other, and then the profit. So in some ways, the essence of any moral dilemma could be two different individuals with different contexts and different motivations. On the one hand, I want to have my biometric data for consciousness hacking. On the other hand, Facebook wants to use my data for profit. And you have this inherent tension between my autonomy and my data sovereignty, and then the utilitarian trade-off of the services that are provided in exchange for mortgaging my privacy to have access to all of this technology. But looking at this, you can start to do this kind of fractal expansion. So you can go to each of these different contexts and then start to blow it out to look at all the other dimensions from that lens and go around. So that's what we're going to do now. So looking at all these different domains, you can look at the icons. And then we're going to flip into, now we're going to look at, the self and biometric data and identity. So first we have the biometric fingerprints. So your fingerprints and your eyes, your iris scans. All these companies are going to want to get your biometric fingerprint so they know who you are, and they can always identify you. You'll never be able to be anonymous in that way. They're going to be able to modulate your perception. There should be a diversity of embodiment options and avatars for people. Self-sovereign identity is a W3C standard, which is trying to have all of your information under your control, and then you decide what you give out. So that's sort of a decentralized architecture, and I think self-sovereign identity is going to be a big part of that. But another big thing is, what are the different biometric data that are going to be personally identifiable? So as you move around, what is going to be PII and what's not? In terms of resources, who owns the data? Do you own the data? Can you sell access to your data? What would it be like to have an architecture where you control and own your data, but you can sell people access to it? And I think the thing we're facing right now is that in order to have privacy, you actually have to pay a lot of money. You have to shell out in order to not have your data sold. You have to pay for it. And then we talked a little bit about, what does it mean to have technology that's going to be able to read your mind before you actually say anything? What are the implications? What are the ethical implications of that? What if that gets recorded? It's automatically transcribed. It has a mistranslation, because nothing's perfect. And all of a sudden, you get flagged for a thought crime. And that information is shared with the government. And all of a sudden, you're on a list and you're being targeted. And you can no longer fly, because the TSA flagged you for something you thought that was mistranslated. That's going through the different possibilities here. What happens if people do 23andMe? There's all these sort of disclaimers, like, you could be getting all sorts of information about you and your family, and there's all sorts of implications. But what happens when we eventually tie your genetic information down to your behavior?
Having all this information, what does it mean to mash that together with your genetics, and what are the ethical implications there? So I think there's going to be a huge potential for biofeedback in gaming. So being able to put your biometric data into an experience, and it's going to be able to change as you experience it. But at what point is your agency taken away? Because now, all of a sudden, you're going to essentially be subconsciously reacting with your body. And so what if it's not what you want? So how do you decide what is conscious and unconscious there? And then there's the ethics of detecting and reporting. You're going to be able to detect all sorts of medical conditions. What are the ethics of reporting that? Informed consent. This is a point where, for example, you install Instagram. You say, you can use my camera. Well, are you consenting to having that camera look at your face without you knowing it, to record your emotions and harvest your emotional saliency based upon the content you're looking at? Well, according to the privacy policy, they could be doing that. They have patents to do that, but are they telling you? Do they have an obligation to say, oh, by the way, we're secretly turning on your camera to look at you and record and harvest your emotions without you knowing it? That's certainly an issue. So, deep fakes of your identity. What happens when other people start to take your identity and start to spoof you, both your identity and your likeness and everything you look like? How do we handle that? There's this whole dimension of mortgaging your privacy, others profiting on your data and harvesting your emotions. The autonomy of data sovereignty versus the utilitarian public good. So that's basically whether or not you control your data, and whether or not your data could help do all sorts of research and help people discover things that we wouldn't be able to discover without it. What happens when you start to want to biohack yourself, and the ethics around that? So, biometrics used in interviews. Do you want your employer to be looking at your heart rate while they're interviewing you, and to use that to decide whether or not they're going to hire you? And the Chinese government could already be starting to do these loyalty tests, where they're giving you a stress test and asking you a very pointed question like, do you support the Communist Party? And whatever your reaction is, your body could be telling a story. There's a lot of implications there of how this could be misused by the government. But we could also be sharing our biometrics with our friends. Wouldn't it be great to be able to share your heartbeat with other people and to really cultivate this deep sense of intimacy? But are we going to have an architecture where we feel safe to do that? And then again, there's biometrics as a polygraph. What are the levels at which we have anonymity? What if all the biometric data we're radiating means we're always going to be identified in some way? And what if there's false positives, where we're looking at a piece of content and our emotions are being tracked, we get distracted by somebody who makes us smile, but yet we're looking at something that's flagged as terrorist content? And all of a sudden, again, we're connected into some sort of association that flags us in some way with machine learning. And then again, if we have all these biometrics, can they be de-identified?
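That question of whether biometric data can really be de-identified is worth making concrete. This is not from the talk; it's a minimal, hypothetical sketch in Python, with made-up users and numbers, of how even crude summary statistics of a motion stream can be matched back to previously enrolled users with a simple nearest-neighbor lookup.

```python
# Hypothetical sketch: matching "de-identified" motion data back to known users.
# All names and numbers are made up; real re-identification work on gait, gaze,
# or head motion uses far richer features and models than this toy example.
import numpy as np

rng = np.random.default_rng(0)

def motion_features(samples):
    """Summarize a stream of head-velocity samples as a small feature vector."""
    return np.array([samples.mean(), samples.std(), np.abs(np.diff(samples)).mean()])

# Enrollment: feature vectors previously captured for known users.
known_users = {
    "alice": motion_features(rng.normal(1.00, 0.20, 500)),
    "bob":   motion_features(rng.normal(0.60, 0.35, 500)),
    "carol": motion_features(rng.normal(1.40, 0.10, 500)),
}

# A new "anonymous" session: same movement style as bob, just with the name stripped.
anonymous_session = motion_features(rng.normal(0.60, 0.35, 500))

# Nearest neighbor: whichever enrolled user's features are closest is the best guess.
best_guess = min(known_users,
                 key=lambda name: np.linalg.norm(known_users[name] - anonymous_session))
print("Best guess for the anonymous session:", best_guess)
```

Real gaze, gait, and full-body pose streams carry far more identifying signal than this toy, which is the core of the concern: stripping names off biometric data does not by itself make it anonymous.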
So this is a pretty comprehensive look at just one of them. And I don't know how much time I'm going to have to go through all of these. I'll have the slides available, and I want to leave some time for questions at the end. But that's the idea, is that you would take one of them and look at all the other contexts as a strategy. So I have 12 others, and I'll just riff quickly through them now. All right, so resources, money, and values. This is a lot about the economic business models. When I was at the VR Privacy Summit, the thing I took away was that as long as the business models of these surveillance capitalist companies are diametrically opposed to privacy, then there's always going to be a tension there. Until they change their business model, there's just no getting around it. They're just going to want to capture and record our data. And we have to find a way to either create completely new business models or find a way to navigate that. So there's this general open versus closed, like the walled gardens, so the open ecosystem versus the closed ecosystem. And Neil Trevett said, for every successful open source project, there's always a proprietary competitor. So there is going to be this tension between open and closed. But are there ways for us to do subscription models? Or, like when I talked to Oculus, they're not going to allow any application to do cryptocurrencies. There's no way for you to have your own economy unless it goes through their payment processing. So these walled gardens are going to limit the new potential decentralized value exchange that could happen with cryptocurrency. And so we're not going to be able to have that unless we create something that's completely open, outside of what their walled garden is, or we get them to change their policy, which I doubt they're going to do, because they want to own the platform. We want to have some autonomy over our data. We want to own our data. But then there's the issue right now in terms of avatars, like who owns the copyright of those avatars? It's sort of an open free-for-all now. How does that get mitigated for being able to get consent for use? And then I think that for every physical object, there's a supply and demand dimension. So it makes sense to have constrained resources tied to a market-based economy. But what about virtual experiences that aren't tied to the physicality of reality? They don't follow the same supply and demand dynamics. And so we can start to have gift economy dynamics, but I feel like there's going to be a question of how we balance the more yang, traditional market economies and the more yin value exchanges that are coming from gift economies or complementary and alternative currencies. We need new business models. I think there's a lot of conflicts of interest that I'm seeing between academia and industry right now. There's a lot of collaborations, but then who owns the IP? I've been in situations where I've seen an academic present their research, and then I want to talk to them, and they're like, oh, we haven't announced it yet. It's like, whoa. It's shifting. There's a shift that's happening right now. I don't know if people realize, but with all these industry and academic collaborations, it's harder for me as a journalist to talk to these academics now, because they're a business. So there's investors involved, in ways where they don't want to disclose what their research is. It should be public. That's supposed to be part of the whole academic thing.
There's these new conflicts of interest that are arising that I just want to flag as an ethical issue. And also gambling. What are the ethics around trying to manipulate and control people to engage in gambling and using their money? So that's going to be, I think, an issue as well. So, communication and early education. There's so much about eye tracking and our emotions. That's going to add so much fidelity to communication. But there's also risk in terms of who has access to that and what they can tell. There may be people looking at that from the outside who are able to tell all sorts of information about our sexual preferences. And it's just basically a risk. Once we start to broadcast out our biometrics, people can get access to it and use that against us. We don't know the full impact of VR on the physiological development of children. What age is safe for them to start using it? For how long? Those are big open questions there. I think there's going to be a lot of really great free educational resources that are available. But at the same time, how do you balance how much you expose children to as their systems are still developing? I think we need to have different dimensions of end-to-end encryption. I think we're going to actually be shifting into a lot of people being able to work from home with these technologies. And so that's going to be like a culture shift, where you don't necessarily need to live within a city. And my experience of that is that there's different dimensions where, if everybody's virtualized, that makes sense. But then if there's one person who's not at the office, then how do you navigate this new world where people could be literally working from anywhere? And how do you have that cohesion of how to tie these together? If you're in an experience, what if people start to share sexually inappropriate content and it's a public area? What are the moderation tools that are available? I think that if you make things available, then you can kind of imagine what the worst case scenario is and how to deal with that, if you need to do that type of moderation. When doctors start to consult with their patients, then you need to have a whole level of privacy that meets the standards of HIPAA regulations. So that's going to be a consideration. First Amendment, free speech, and hate speech. If we have everything being surveilled and we know it's being surveilled, that's going to have an impact on the First Amendment. Because if we feel like it's going to be recorded, then we're not going to say things that we know other people are going to look at later and potentially hold against us. So there's a trade-off between the ephemerality of speech versus being in a virtual environment and not knowing whether it's being captured. So there's a free speech implication there, but also hate speech, and how do you deal with hate speech and how do you mitigate against it. The code of conduct, I think, is important, just in terms of there being ways in which you can enforce a certain amount of conduct within a culture, but there's also things that are going to come from the culture itself. And when you ban people and you have anonymous communication, there's a certain amount of, yeah, you can ban people, but what happens 5, 10, 15, 20 years from now, when all these things are much more connected? Then what does it mean to ban people? You're essentially saying there's a whole part of reality that you don't have access to. What are the
ethics around that, and how do you have either rehabilitation or find ways to do restorative justice. So home, family, and private property, you can have whatever's happening in your home be scanned and the objects in it. And so you have a shirt and shampoo that's there. What if Google and Facebook are getting access to whatever that is and selling that data to the competitors? What are the ethics around that? You could have things that you're revealing about your home and your space that could be made available to make you vulnerable for people to either come in and burglarize you in some ways. Protecting your home address is something that we don't want to have spatial doxing. There was a whole thing with the wristwatch of people who were running around these military bases, and the data was automatically being updated. And it actually was revealing where these locations of these military bases were. And so not knowing what the defaults are, and if you're radiating this information, you could be revealing information about where you live. And then also, what if there's something like Mark Pesce's Mixed Reality Service, where you actually want people to come on your property and use it? How do you give them permission to augment your property and to use it? At this point, it's a bit of a fair game. But when you're entering into somebody's private property, there could be ways for people to make it into the commons and allow the property that people own to be able to use it. People are going to be doing rehabilitation exercises at home to be able to cure themselves. So this is an important one, the reasonable expectation of privacy. So the third party doctrine says that any data you give to a third party is no longer reasonably expected to be private, which means that two things. One, any data you give over, you're saying, as a culture, we don't want this to be private. So if we let all these companies start to record our biometrics, our eye-tracking data, our emotional profiles, we're saying that we're okay with the government having access to that information as well, because if they want to go to the companies and get it, they can, and there's nothing for the companies to really stop them from doing that. And then the other thing is that it actually weakens the definition of the Fourth Amendment, because as people collectively do this, that means that we're going to collectively weaken what those privacy protections are, which means the government could start to capture our tracking data, our emotional data. As long as the collective culture has decided to do that, then the government's like, well, this is what people seem to want to do. We should be able to have access to this as well, because they seem to be OK with giving this away. So there's implications there that are unintended in terms of the third party doctrine and the reasonable expectation of privacy that I think people should be aware of what's happening. There was a court case in the Supreme Court called the Carpenter case. That's a good indication that the third party doctrine may not be this blanket, whatever you give to third parties is going to be not private. But there's still a lot of work to be done to sort of really prove that out to say like, if you're going to actually be recording data, just realize that you're erasing and eroding the privacy for the collective. And if you don't know what you're doing with the data, then don't collect it. 
That's at least my position, although there is going to be utility in doing it. But I think if you have a clear, specific idea about what you want, then you can be a little bit more strategic about what you're actually gathering, without having these unintended consequences of erosion of privacy. There's ecological sustainability, so how are these products related to the earth, and how are they going to be sustainable in some way? I don't know if they're sustainable now, and as we move forward, will they continue to be sustainable? So, is there a collective right to augment whatever you want? It seems to be a free speech issue, but there's also public privacy. It seems to be leaning towards the collective right to augment whatever you want. Then there's the whole situation of, like, Pokemon Go at the Holocaust Museum. That is something that happened, and so you may have the free speech right to do that, but maybe there's a cultural agreement: hey, let's maybe not catch Pokemon at the Holocaust Museum, because that's a little disrespectful. But moving forward, it's like, how do we navigate that? Is that something where the Holocaust Museum has to say, please don't play Pokemon here? Do they have to disclose that? Or how do we navigate that? All right, so, medical information. There's cybersickness, and a lot of the issues with cybersickness still have to be resolved, making experiences comfortable for people. Being able to eventually do individualized medicine, so being able to very clearly detect what is happening for you as an individual, that's something that is going to be made possible. And I went to the Awakened Futures Summit to look at the intersection of psychedelics, immersive technologies, and meditation. The theme there was that these technologies can allow you to become your own healer. So I think that, looking to what Adam Gazzaley is doing with experiential medicine, VR and AR are going to be essentially FDA approved to do specific things, modulating your consciousness for healing. So you're going to be able to become your own healer. Making sure we have secure communications around medical information. Do you own your own medical data? Where does that data live? What are the ethics of detecting and reporting medical conditions? If you're recording all this information, you're going to be able to detect all sorts of conditions that people have. So what is your ethical and moral responsibility to tell people that they have these conditions or that they may be at risk? Should HIPAA be governing all the biometric data? So, allowing us to share all this data, I think it's going to actually shift the philosophy of science. Science likes to control all the conditions of a context in an environment. But now that you can control the conditions of a context within VR or AR, well, especially VR, now you're going to be able to look at what's specific to an individual and start to do these individualized, personalized assessments for personalized medicine, to put someone in a VR experience and do a battery of tests and figure out how they respond. It's going to be allowing us to dial into the individual rather than the collective, which is kind of the way it's done now. There's going to be lots of public research benefit from sharing our biometric data. So what are the ways in which we can do that safely and securely, whether that's with homomorphic encryption or differential privacy?
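Since homomorphic encryption and differential privacy get name-checked here as ways to share biometric data for research more safely, here is a rough sketch of just the differential-privacy side: the Laplace mechanism, where a researcher sees a noisy aggregate rather than anyone's raw reading. This isn't from the talk; the heart-rate values, clipping bounds, and epsilon are all hypothetical, and a real deployment would also need careful sensitivity analysis and privacy budget accounting.

```python
# Minimal sketch of the Laplace mechanism for a differentially private aggregate.
# Values, bounds, and epsilon are hypothetical illustrations only.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are per-person resting heart rates collected in a VR study.
heart_rates = np.array([62, 71, 58, 77, 66, 80, 69, 73], dtype=float)

def dp_mean(values, lower, upper, epsilon):
    """Return a differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity of the mean when each value is bounded in [lower, upper]:
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

print("True mean:   ", heart_rates.mean())
print("Private mean:", dp_mean(heart_rates, lower=40, upper=120, epsilon=0.5))
```

Homomorphic encryption tackles a different piece of the problem, computing on data while it stays encrypted, and both are only components of the decentralized architectures the next question points toward.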
What are the decentralized architectures that are going to really make it possible to not only make your data available to the people that you want, but to also make sure that it's secure? Yeah, so let's move on to the next one here. So this is any sort of law issues, your partner, or anything that's the other. So from the biometric perspective, there's the violation of personal space, and personal safety bubbles have been a big issue, as well as having a variety of different avatars to choose from. But what are the sort of implicit discrimination issues that may come up, especially if you don't have a diverse set of avatars to select from? We're going to want to be able to share virtual resources with our friends. Just like we may lend somebody an album, what does it mean to be able to share things in virtual spaces? Can you do that, or is it impossible to do? Being able to mute people, or if there's hate speech, being able to basically eliminate that in some ways. We want to balance the free speech rights of individuals, but also provide tools for people to provide safety for themselves. And I think that's part of the dilemma, how to balance that. And I also think it's important to have private spaces to gather in. And if you only have the public space, then you don't have the ability to have that safety and security of having your own space. And what does it mean to be able to start to augment other people? If I start to put some sort of anime avatar on you, is that a creative expression that I have a right to do? Or, if we're in a relationship, then what are the dynamics between me augmenting your identity and your visual representation? And I think that there's going to be a lot of new boundaries for intimacy, that we're going to be crossing boundaries. And so each couple has to navigate, what are our agreements now? Are our agreements changing in terms of what we are or are not OK with? What types of interactions? I think there's a risk to anthropomorphizing AI agents. Anytime there's an AI agent, you start to believe that it's a human. Now, can that AI agent start to manipulate and use your emotions to be able to achieve whatever its ends are? And so there's a lot of issues there. Empathy is a big, hot topic within the VR community at large. And so, noticing the different dimensions of the boundaries around empathy, the negative and bad things: you want to be able to step into someone else's shoes, but you also don't want to do trauma tourism, thinking that you get a sense of what someone's actual embodied experience is by putting a 360 camera in Puerto Rico after the hurricane and thinking that you have all this empathy for what they're going through. But also, there's not a lot of accountability for how we're going to be able to hold these big companies to task to make sure that they're doing the right thing on the side of privacy. How do you handle illegal content or sexual assaults within XR? There are big open questions in coming up with these comprehensive frameworks for privacy and for ethics. And how do you deal with having a sense of restorative justice within these communities? And there's different emotional labor that's involved there. But do we really want to get into an architecture that permanently bans people forever? I think part of having rules and codes of conduct is that you actually have to enforce them. So what are the resources that need to be made available if you're having these social gatherings together?
And there's going to be implicit trust and reputation scores, but what happens when those get into the wrong hands? And then being able to actually block people that are being harassment and trolls. And actually, is it going to be even possible to be an anonymous user? Are you radiating so much personal identifiable information that you can actually unlock that and figure out who you are? Is it really ever going to be truly possible to become anonymous? So death and collective resources. So do you have the right to delete your data, to say, whatever you got on me, just delete it and get rid of it? And should the data be treated as ephemeral, like it's sort of a stream that's flowing by, and it's radiating in real time, but it changes context once you record your emotional profile. And there's going to be lots of noise in the data as well. Who has the right to your likeness after you die? I think, you know, there's a lot of talk at Facebook F8 about they're not going to be recording, they're going to be doing end-to-end encryption and all the communication, but are they recording, like, the metadata of who you were talking to when? I think there's a lot of information that you can still glean from the metadata layer. So we shouldn't forget that there's all these other layers of information that can still get in. What's it mean to kill people within virtual worlds? Is there some sort of psychological impact that it's going to have on children as you as an individual? Is there a path for users that are blocked? What are the processes for them to either be rehabilitated or to enter back into a community? And there's going to be different dimensions of having different funeral rituals within these virtual environments, and starting to see that with different people that are passing as well. So just the different sort of rituals that are starting to form as well, and the importance of having those different types of honoring people when people die, and actually gathering people, and maybe reinventing what funerals even are, as there's lots of taboos that we have around death. So right now in academia, there's lots of siloed academic disciplines. I mentioned briefly some of the emerging conflicts of interest for these academic industry collaborations. Should there be an obligation for disclosing what's real and not real? As we start to blur that line, should be there a way for there to be some disclosure? That's something that came out of Laval Virtual. We already have a lot of filter bubbles of reality. And so what's it mean to continue this down to not only in the social media sphere, but into this shared reality? And what does it mean to have different realities that people are living in in the shared space, but the physicality being co-located, but being in completely different worlds together? A lot of people, they say, is this real or not real? But if you experience harassment in virtual reality, then you have a direct experience of that being real. And so maybe your direct experience is sort of the phenomenological basis for reality. That's the basis of reality is your experience. There's a lot of technical debt for machine learning. It's not perfect. And so what are the ways in which we give our agency over to these algorithms, but yet they're not perfect? And so how do you update that and account for that over time and manage that? There's a psychological impact of visualizing all these dystopian futures. 
And so what about world-building the protopias, or giving us a direct embodied experience of something that gives us hope and possibility? And I think there's actually a possibility to help define what the nature of reality is, because as we refine our perceptual system to have more attention to observe reality, Jaron Lanier says, we'll always be able to tell the difference between what's real and not real, because we're just going to continually train our minds, and then we're going to experience reality. But as we go into VR and AR over this long period of time, it's going to change the way that we perceive reality. It's also going to perhaps help us understand the underlying nature of reality. It's an open philosophical question. I'm not positive that it's going to get us to a clear answer. But I do think that there's a potential for us to help understand the nature of consciousness a lot more. I think there are going to be a lot more interdisciplinary collaborations between industry and academia. There's a lot of potential there, but also cautionary tales to look out for. For any training of AI, you need to make sure that there's a diversity in that training, especially for creating these headsets. You know, talking to women who have big curly hair, obviously there weren't very many women on the design team involved with how to create these different headsets that actually account for different head shapes and sizes and hairstyles. So, making sure that you have a diversity of people that you're trying stuff out with, so you're actually being inclusive of a large range of people. Now, in terms of career and government institutions, this is where you get into some of the more scary dystopian stuff, especially when you look at what's happening in China. You start using biometric data as, like, a polygraph test and stress test. And these companies, they're legally mandated to maximize their profit for shareholder value. So in essence, that's what they're trying to do. And yet, they're not public benefit corporations, and so there's things that are of the public benefit that get externalized. Those aren't in their algorithms or their equations that they're taking into consideration. So what happens when all this data is collected and the government goes to these companies to ask for it, and they can have access to what you're thinking, what you're feeling, what your reactions to specific stimuli are, especially when you start to have these totalitarian governments be in cahoots with some of these companies? And if we look at the Snowden documentation, there's these fire hoses that are going from these companies into the government. But also, like in China, there's a whole memory-holing of different things that have happened. So what are the implications of having more centralized control of what comes through in our feed, of the possibility for those people with power to be able to control what we can and cannot see? There may already be a lot of stuff that's being memory-holed. But on the other side, we all have the ability to take these historical monuments and start to augment them in different ways. So, we talked about the third party doctrine. NDAs, I just want to say, a lot of these companies, they'll have NDAs to protect their intellectual property. But that prevents people on the outside from asking, what are these APIs that you're developing? We've got this next generation, this next hardware.
What are you doing with biometric data? Is that part of the API? Are you collecting it? Are you allowing the developers to collect it? There's not a lot of transparency because there's a lot of secrecy with the NDAs. And I think we're going to see a lot of not only information warfare, but what is experiential warfare? Once we start to have these governments that have access to creating these VR technologies, then could they start to shift public opinion by creating really compelling experiences? And what's that mean to take what's already happening with information warfare out there into the whole domain of experiential warfare? And do we need a GDPR for the United States? And if you go to the extreme, you can imagine all sorts of ways that you could torture people within VR. And I've heard that this is something that's already been being explored from different people, that it's kind of scary to think about where you could take that in terms of really putting people through the paces with VR in a torture scenario. I think we need a lot more transparency in terms of these companies and what's happening and more of a dialogue and for them to be participating more in conversations like this. But there is a lot of government surveillance. So I think we already have it here in the United States. Again, look to China to see the extent of starting to put social credit scores on people where the stuff that that happens, you just watch like Black Mirror episode one of season three where you can see with these different interactions that people have these ratings of each other, that that gives them different access to whether or not they can take the bus or have access to education or you know, just whole dimensions of society that get cut off because of these virtualized putting numbers onto people and cutting off access to them. But what happens when all this data that has these psychographic profiles on us get leaked into the wrong hands and how that's going to get exploited in different ways? So the government could be getting access to our behavioral and biometric data streams, especially if we start to weaken the third party doctrine. And if there's a collective agreement that this is useful and we want to do this, then there's trade-offs in terms of the abuses of government against us. And so the last couple ones here that I'll go through, again, this is sort of a framework that you can start to brainstorm. And this is sort of my first brainstorm. And after this, I'd be very curious to either talk to more people and get more feedback of scenarios that I may have missed or things that you thought of when looking at this. So as people, we can identify how people move. So if we can identify how our friends move, then AI can be trained to do that as well. So looking at the implications of us being able to unlock who people are based upon how they move, and the implications of that, what we're radiating of ourselves, and how much of that can we really hide with noise, because all the noise can be averaged out anyway. But having communications that's peer-to-peer, and having secure communications, but also maybe we want to broadcast our biometrics to our friends, and it's going to be like a fun experience. and having those private Hangouts. Again, the social network analysis that's made available from the metadata is something that, if that's recorded, that also is getting a lot of information. And then finally, the hidden exiled and accessibility. 
So, taking into consideration designing for non-abled bodies, it's going to make design better, because you're going to have to work around things that you didn't think about. And it's going to help everybody. But a lot of these technologies that get super dystopian are also going to have huge accessibility applications, especially being able to read people's thoughts if someone's paralyzed, if they're blind, or if they're deaf. So for example, auto transcripts for people who are deaf. If you're in an environment and someone's deaf, then it would be very handy for the transcripts to be automated, so that whatever's being spoken could have a transcript. But is that something that you need consent for from everybody in that room, now that all of a sudden you're going to be doing these transcripts? What if something gets transcribed that they're not supposed to hear? And so there's lots of issues that could come up from that as well. Diane Hosfelt talked about that in her paper. What we value is what we're looking at. So you'll be able to tell a lot of information from our eye tracking data. So even when you're in a virtual reality experience, you're going to be able to see what other people are looking at if you have this one-to-one eye tracking. Do we want to be broadcasting all of our eye tracking one-to-one that way? Because that actually could be revealing a lot of really sensitive information about ourselves, especially if we have other people recording that and documenting it and extrapolating from it in specific ways. What does it mean to start to live permanently in these virtual worlds? People get so addicted that they don't want to leave, but they get these dopamine hits that are so high that they don't want to interact with the real world. I haven't mentioned this, but illegal content in XR. So what about child pornography? It's something that, if it's all virtual, then how do you navigate that? There's laws against it, but what if it's just imaginal? Is that something that could be used to rehabilitate people, or is that still weird? Obviously you don't want that, so you have to deal with it in different ways. But with all this, we could be harvesting or creating a map of our unconscious psyche, and who has access to that data, and what can they know about us if they have access to all of it? There's also a lot of unknown long-term effects of using all this data. And what is the dark spatial web going to look like if some of this information gets leaked? We have to assume whatever is recorded is vulnerable to being published out on the dark web. So it's possible to have information that's out there, maybe de-identified, but what if you can unlock it by recording enough information, and now all of a sudden you have these huge repositories of embodied information that could be exploited in different ways? Thank you for your patience. That was sort of a quick tour of an ethical framework for navigating the moral dilemmas of mixed reality. Hopefully you get a sense that there's a lot that's out there, and this is just, I'd say, the tip of the iceberg. I mean, I feel like if I talked to anybody in this room, they'd probably list maybe five or six other things that I didn't even think about here. The point isn't to name everything. It's more to give at least some sort of strategy to start to brainstorm and think about all the different ways.
Because as you're designers, as you're making stuff, you have to find ways to make these different trade-offs and design trade-offs. So again, these are the different domains of the human experience and what I think should be public and private. And it's going to be a bit of an open question. I just talked to Kavya Pearlman. She has an amazing new initiative called the XR Safety Initiative, an institution that's working with the Open AR Cloud, and together, these two organizations are really going to be trying to figure out ways to gather the community together. And so I feel a deep sense of relief that I don't have to just be the Cassandra up here saying, hey, there's a lot of really scary stuff that's coming, we should all be scared. Well, you can channel that fear into action by getting involved with the XR Safety Initiative from Kavya Pearlman and the Open AR Cloud. They're the leaders in helping figure that out, because there's an emerging counter-movement of people that want to see something different than these closed walled gardens. If we don't have alternatives, then essentially we're left powerless for governments and institutions and corporations to manipulate and control us and to use their surveillance capitalism models, and we don't have much recourse. Look at what Mozilla is doing with Hubs. They have all of their source code open sourced and transparent. You can audit it. Wouldn't it be amazing if, for everything that Facebook and Google did, we could see the source code, and we could look to see what they're doing with the data, and we could have some sort of accountability? Or if they had an accounting of what they were recording and not recording at any moment? Because they say, this is what our privacy policy says. But at any moment, they could switch and change that. I can ask the architects of the privacy policy and the engineers, are you recording our conversations? And they'll say no. And then in their privacy policy it says they can. And the next day they can start recording. They don't have to tell anybody. And now they're recording all the conversations, because it was architected in their privacy policy for them to slowly turn the knobs up and do that. And they have no obligation to tell us that they're doing it. They don't have to come back to me and say, oh, by the way, when you said that, we're now doing something different. So I think it's time for people to kind of say, yes, I'm super excited about the Quest, but there's so many open questions about privacy and ethics. And I don't feel comfortable saying that I've got a satisfactory answer from these companies that they're doing the right thing. Because there's not a collective outrage yet. And I feel like there still has to be this collective outrage of other journalists, other people in the community asking these hard questions. So that's all I have. And I just wanted to thank you for being here, because I think everybody that's here is concerned about this and is, I think, going to be a part of making the shift and change that needs to happen. And like I said, get involved with what's happening with Mozilla and XRSI and the Open AR Cloud. These are great institutions that you can look to, to see what they're doing to actually bring about some change. So thank you. Virtual World Society would like to present you with this medallion for your work on behalf of the good in society. Aw. You can put that on your wall. Thank you. Thank you. Awesome.
And I think we have a few more minutes if there are any questions. I don't know if there's any Slido, or if you sort of yell out a question, I might repeat it. So any questions? I'm going to upload the slides on SlideShare, slash Kent Bye. I'll upload them when I get back. And also, you can check twitter.com slash kentbye. And all my work that I do is supported by Patreon. So if you want to support the work that I'm doing, you can go to patreon.com slash Voices of VR. And yeah, I rely upon the listeners to be able to do all this work that I'm doing in journalism, as well as all this advocacy work. So I definitely appreciate that. Yes, Kevin. Have you talked to the screenwriters of Black Mirror? No, I have not talked to the screenwriters of Black Mirror. But I feel like this sort of outline could be a good framework to create a full season. Yeah, at the VR Privacy Summit, we had a whole brainstorming session. We had, like, 50 different people from the XR community, and we came up with some pretty good Black Mirror scenarios. So I feel like Black Mirror is a good way to ideate what's possible, and then go get involved with the XRSI and Open AR Cloud to help design around it. Yeah, Nicole? Yeah, so the question is about how much creators should be creating protopias versus how much we should be creating dystopias. It kind of depends on your temperament, in terms of whether or not you really like horror and you like to really scare people. I think there's a need for both, and for balance. I'd like to see a balance, because I think there's value in scaring people. But my problem with Black Mirror is that it kind of leaves you with, OK, now what? There aren't a lot of positive ways to put your energy. I think it's actually harder to create the protopias. And I feel like with VR, you're able to create a whole immersive experience that allows people to be fully embodied in a culture that's completely different. So I actually think there are huge opportunities for people to create embodied experiences that give a vision of the future that we want to live in. What does all of this look like when we get all of that figured out? What kind of culture emerges from that? So as an example of protopia: I went to the Immersive Design Summit, and there are a number of different organizations. One's called 13EXP, and they're creating 13 different experiential programs that are going to be focused on positive social impact. So I feel like there are people that are starting to think about what the experiential design around the protopia or utopia is going to look like. We've got a question here. What are some positive actions, if any, in the privacy space? So I'd say keep the pressure on the big companies in terms of what their privacy policies are. There needs to be a lot more action on the third-party doctrine, in terms of getting that changed completely. There's the initiative that Kavya just started with the XR Safety Initiative, XRSI, which you can look up and see. It's a whole new institution that's looking at the different security and privacy implications out there. So there are some good movements out there. And there was the VR Privacy Summit, MIT, and they're going to be doing some other outcomes as well. I've got a lot of podcasts that I've been doing here, as well as in the past, on privacy. So if you just search Voices of VR, VR privacy, those should come up. Should there be government oversight?
So I think that one of the things that Lawrence Lessig says is that there are four ways to modulate culture. One is laws and oversight, but there are also economies, there's culture, and there's technical architecture. And I think you need all of them. You need to have competitors that are creating alternative architectures, like what Mozilla is doing with Hubs. You need viable economic entities that are providing market pressure, and I think there does need to be some equivalent of GDPR in the United States. But also, Kavya said that there's a whole NIST privacy framework that's being developed right now as well. So I think all of those things. I think the government is really far behind, and there needs to be the philosophical policy level to say, this is what is happening and these are the risks, to help tell the story, and then get them to watch a lot of Black Mirror as well. Because I think that will help give them some sense. So how can we empower the government to enforce privacy laws while minimizing their capacity to abuse the data that they police? Well, the FTC is kind of in charge of a lot of that. And actually, there aren't really a lot of comprehensive privacy laws out there already. And I think that's a good point, in the sense that a lot of the conflict of interest is that the government that is supposed to be enforcing this is also getting a firehose of a gold mine of intelligence data from around the world. And so there's not a lot of incentive to be super tough against Facebook when Facebook is also giving them all this information. EULAs are required for everyday tech, so many of our rights have already been signed away. The horse has left the barn. Are we doomed? Yeah, there's the whole idea of adhesion contracts, where you basically forgo all of your rights to all of this information. People don't read them. I don't know how enforceable they are, to some extent. But also, I think progressive permissions are a good approach: saying, hey, if we're going to start to secretly record your face, are you okay with doing that? When you start progressively disclosing, the trade-off is that you have permission fatigue, where you always have to click, like with GDPR, yes on the cookie notice over and over again. But at the same time, do you want to have yourself be secretly recorded? So just a couple more minutes. I'll take maybe one or two more questions here. Can you talk about spatial doxing? So spatial doxing, I think, is, for one, revealing where you live by having some dimension of your environment captured. Even if you're just using a Snapchat filter or whatever, you may be revealing information about your environment. So the personally identifiable information side of the doxing is detecting what you're showing and being able to trace down where you're at. There was the example of Shia LaBeouf, who was in the middle of nowhere, and there was an airplane going by in the background. People on Reddit were able to determine where he was by tracking down the flight patterns, because they were able to see, in a real-time live stream, what flights were going by. So stuff like that, where you don't know what you're revealing around you that may be able to geolocate you. And if you're trying to protect your location, then that could be an issue. What are some solutions to data harvesting? I'll stop on this one here. So the thing is that there are good things that can come from getting this information, in terms of training AI.
And there's this trade-off between your own data sovereignty over what you own versus the utilitarian aspects of what you gain from having access to that information. The problem is that they're recording the data, and they don't know what they're going to do with it. I think we should be advocating for not recording it. Or if they're going to record it, have it recorded locally, and then have different levels of encryption or ways of doing differential privacy or homomorphic encryption. I'd love to hear more people talk about what those decentralized architectures are. What are the scalability issues of those? Are there ways to do processing on the data for public benefit that don't require you to give up your sovereignty over that information? Hosting it yourself, so that you're somehow not giving it to a third party, but they're still able to do processing. So as much as possible, putting stuff on the edge. So those are my thoughts on that. Well, I just wanted to thank everybody for joining me today on this keynote. And yeah, if you want to learn more information, go to voicesofvr.com, and I'll be posting a link to my slides on my Twitter at twitter.com slash kentbye. So thank you very much.

So that was a talk that I gave at the Augmented World Expo main stage. It was called the Ethical and Moral Dilemmas of Mixed Reality. So I'm going to keep it brief, just because this is the talk that I gave. But the main thing here was that it was a bit of just trying to spatially organize a lot of these different moral dilemmas. After I gave this talk, for the talk that I gave at the Greenlight XR Strategy Conference with my XR Ethics Manifesto, what I did was go through this open-ended brainstorm that I did in this talk and try to distill it down into each of the different categories and list out the major dilemmas. This is something that is never going to be fully complete, but for me, it was at least a strategy that I could take to start to try to map out all these different things, to see where they might fit, and to see how they could be clustered together, because there's an infinite amount of moral dilemmas that you can think about in the human experience; it's completely unbounded. And this whole concept from Gödel's incompleteness theorem was that, you know, if you try to be consistent, then it's going to be incomplete. You can come up with a consistent framework, but there's always going to be something that's not in your framework. And I think that's part of the challenge when you look at different ethical frameworks: you have virtue ethics, deontology, and consequentialism. So, you know, are you only looking at the consequences of your action, or at the deontological approaches, which is what I did a little bit later with the XR Ethics Manifesto, trying to come up with the rules or the guiding principles? And then virtue ethics is trying to go even higher level and say, like, we're trying to create openness and transparency and accountability and truth and justice and goodness, all these higher-level virtues, but it's hard to see how that applies down at the small level. But even if you look at ethics through the lens of context, which is what I'm trying to do here, it's going to be incomplete. There are still going to be more principles out there.
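As a rough sketch of that "record it locally and apply differential privacy before anything leaves the device" idea from the data harvesting question above — this is purely illustrative and not something any of the companies mentioned here actually do; the epsilon value and the gaze-duration measurement are made up for the example — local differential privacy could look something like this:

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) sampled as the difference of two exponential draws.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def privatize_locally(value: float, sensitivity: float, epsilon: float) -> float:
    # Noise is calibrated to sensitivity / epsilon and added on the device,
    # so only the noised value would ever be shared with a third party.
    return value + laplace_noise(sensitivity / epsilon)

# Hypothetical example: seconds of gaze dwell time on some object.
raw_gaze_seconds = 3.2  # the raw measurement stays on the device
shared = privatize_locally(raw_gaze_seconds, sensitivity=5.0, epsilon=1.0)
print(f"value shared off-device: {shared:.2f}")
```

The point of the sketch is just that the raw biometric measurement never leaves the device; only the noised aggregate does, which is the kind of edge-first architecture I'd love to hear more people exploring.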
So I think that's at least the challenge: trying to embrace the incomplete nature of this topic and at least come up with some framework, but then realizing that you actually need a pluralistic approach and many different people talking about this. That's why I think this issue is so difficult: because it is so based upon moral intuitions. And those moral intuitions change based upon who you are, based upon what your interests are, based upon your life experience. And so it really needs the entire community to come together and start to share what their moral intuitions are, where their ethical thresholds are, where their lines are. Because, you know, the thing that came up over and over again is that, thinking about things like privacy engineering, some of these things are going to be completely against each other. Like if I take a hard-line stance saying never record any biometric data, don't do any surveillance at all, then that may be in conflict with creating safe online spaces, where there could be an element of: if your identity is tied back to yourself, then you're going to be a little bit more accountable, and you're going to be able to actually prevent or block people if you need to. And if you don't have that, then you have this situation where, if you have public VR spaces and no way to deal with people who are really trying to disrupt things, that could be a real problem, because you have no way of identifying somebody who's trying to derail things. And so there are trade-offs like that, between creating safe online spaces and preserving the sovereignty of your identity. Same thing with surveillance capitalism as a principle. There are a lot of things that are ethically wrong with it. But at the same time, it's been able to bootstrap this ability to give free access to information to everybody in the world. And so you have a bit of this moral dilemma: we are releasing certain aspects of our own sovereignty over our own data, our privacy is being mortgaged in the collective, but there's a larger utilitarian purpose of giving universal access to all these services for free. And so for people who couldn't pay for it, who don't have the means to have access to this information, it's this whole thing that we don't actually want to take back. We don't want to roll that back, because we actually still want that. But how can we do it in a way that's still ethical and moral, and maybe gets away from some of the negative aspects of surveillance capitalism, as well as looking at the societal impact? You know, there's a public benefit, but there are also risks and things that could be used to undermine democracy. So one thing that I'll throw out in this specific case is that maybe we need to get away from tracking individuals and focus on the content, see what type of archetypal aspects and context are given there, and see how you can maybe match ads up to the content and the context rather than just the individual. You know, maybe that will be one way to take a little bit of the pressure off of having everybody subject to these invasive surveillance technologies, especially as we open up the floodgates to all this biometric data. So there's this recalibration that's happening, where the entire industry needs to take a step back and look at these different trade-offs.
And I think in this approach, I'm trying to apply what Mel Slater had told me, through Stephen Ellis, which is this concept of equivalence classes: as an engineer, when you have these different trade-offs, you're trying to look at how, if you take away some of this, maybe that equivalence class is going to have a little bit more or less of that as well. So you have these trade-offs. And so this, for me, is trying to map out the different trade-offs between human contexts and privacy engineering. When I talked to Diane Hosfelt later, both at SIGGRAPH as well as again in Amsterdam, she had gone to this conference called PEPR where they were talking specifically about privacy engineering, and about how privacy engineering is so difficult because you're not just looking at the technological aspects, you're looking at all these other sociological aspects as well. And so you're creating technologies that are creating these different cultures, these different behaviors, and that have these different implications that go above and beyond just the technology. And so ethics and privacy engineering are actually forcing us to try to model and make sense of the entirety of the human experience, and how you don't undermine democracies, and to be able to red team what you're creating. If you come up with a communications network that's on the scale of billions of people, now all of a sudden you have to take into account how you're going to prevent things like inciting violence that leads to genocide in Myanmar, or elections being hacked and democracy being undermined in the United States. Those types of questions didn't use to be in the domain and purview of engineers who were implementing different specifications and delivering on their communications technologies. We're having to take a step back and take a much broader perspective as to all the different implications of technology. And so that's a big part of what I've been trying to do in this series here on XR ethics and privacy: to give some context and some frameworks for the engineers that are trying to create these different systems, so they can look at some of these different trade-offs. And so I think the big point that I would take away from this talk is that I'm introducing a way to look at context from a more relational perspective. So process philosophy, you know, looking at how things are in relation to each other rather than as these fixed concrete objects; it's really like a matrix of relationships. And so it's about trying to isolate these different relationships, whether it's you as an individual and the sovereignty of your data versus the utilitarian aspect of these companies trying to provide services that give some sort of public benefit to society. And so you have the profit motives versus your autonomy. So those two contexts, as they come together, how can they be in conflict? Companies need to make money to survive. You want to have different aspects of your sovereignty and privacy. So how can you have these different trade-offs where you're able to find the happy middle ground, rather than have it be too far on one extreme? You know, maximizing profit may mean that it completely erases privacy, just as an example. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon.
I'm an independent journalist, oral historian, and independent scholar. This has been a bit of my personal pet project: having the opportunity to speak at these different places, to participate in a number of different panel discussions about this topic of XR ethics and privacy, and to do dozens of different interviews with different people over the last couple of years, trying to synthesize all this information. And if you find that to be of value to you and the community, then please do become a member of the Patreon. I really do need the help and support to be able to continue to do this podcast and to do projects like this, to bring a larger awareness to some of these issues. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
