#929: Ethics & Privacy in Mixed Reality Panel for IEEE VR Academics & Researchers

The IEEE VR 2020 Conference brought together the academic XR community to share their latest research, and to talk about topics that are of interest to the wider immersive technology industry. I participated in a panel discussion on Ethics and Privacy in Mixed Reality, where we talked about the landscape of moral dilemmas and how the work of the academic research community could help to address potential harms. Diane Hosfelt is the Security and Privacy Lead for the Mixed Reality team at Mozilla, and she moderated a discussion with Dr. Erica Southgate (University of Newcastle), Divine Maloney (Ph.D. student in Human Centered Computing at Clemson University), and myself.

References for Virtual and Augmented Reality Law and Policy


Here’s the video of the panel discussion

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So ethics and privacy in virtual and augmented reality is a topic that I've been covering over the last four years. And I think it's still probably one of the most important topics that needs lots of dialogue and conversation from the wider industry, the creators, the academics who are doing different research into immersive technologies and what you can and cannot do with the data that are available, as well as the regulatory bodies. We need to see how all of these are going to come together and find a way to create safe and equitable spaces for us to enjoy our immersive experiences. So I've done a number of different panel discussions over the last number of months, and also a number of follow-up interviews that I'll be diving into here, a little mini series on ethics and privacy in technology. So this specific conversation is from IEEE VR. There was an ethics and privacy in mixed reality panel discussion that actually was happening within virtual reality. We were in Mozilla Hubs. So if you hear some additional background noise and stuff, that was the only session that was primarily within Hubs, and so there were some different technical issues that came up throughout the course of it. So hopefully you'll have a little bit of patience, but I think it's an important enough conversation to dive into talking about a lot of these different issues. Within the last couple of weeks, on Tuesday, July 14th, 2020, Facebook actually published a white paper. It's called Charting a Way Forward: Communicating About Privacy Towards People-Centered and Accountable Design. In this white paper, they talk about some of the different trade-offs of notice and consent, and also some of the challenges around immersive technologies and the data that are available, and how, on the one hand, Facebook needs to have access to some of that data in order to create the immersive technologies and to make them even work, but on the other hand, there are aspects of what is recorded, what is stored, and what is correlated to what we're doing. I mean, there's lots of different issues there, but they have a white paper and they're going to be having broader discussions with different academics and industry and people from civil society and also the government. They're trying to, I think, be a little bit more proactive: if there are going to be new privacy regulations, they want to be able to have their say in what those look like. And I guess on the other side, from more of the privacy-centered perspective, there is a link in the Facebook paper to a law review article reviewing a number of different issues within virtual reality technologies. And in there, there's actually a link to a paper called Virtual Reality Surveillance by Gilad Yadin, who says, if legal institutions do not act to restore the balance, then virtual reality cyberspace may usher in an Orwellian future. He goes on from there. So yeah, with the third party doctrine, the fact that you give data over to a third party means that the government could request that data.
And I think the implications of immersive technologies having all this new intimate information, and what is available and what you can do with it, is where the research community comes in and says, okay, given that this is available, this is the worst case scenario of what you're going to be able to do with this data. And the big issue, I think, is what is going to be recorded and what is not. But there's a variety of other issues that we talk about here; that's just to give a high-level context of what's at stake when we're talking about some of these different issues, and trying to bring forth all the different data to be able to have these conversations with Facebook, but also with the regulators, and to figure out what role the government has, and whether we need stronger privacy protections when we have these different types of immersive technologies. So that's what we're covering on today's episode of the Voices of VR podcast. So this panel discussion on ethics and privacy in mixed reality happened at the IEEE VR conference, which was happening online in virtual reality, and it took place on Monday, March 23rd, 2020. So with that, let's go ahead and dive right in.

[00:04:04.470] Diane Hosfelt: Hello and welcome to the ethics and privacy panel at IEEE VR. Thank you so much for joining us. At the end, we'll take a few questions from Slido, but we'll also be taking live questions from this Hubs room. As a reminder, this Hubs room is being recorded, so keep that in mind if you have any privacy concerns. With that, I think we'll get started. So spatial computing capabilities introduce new ethical and privacy dilemmas, and as mixed reality technologies emerge into the mainstream, these issues are becoming more pressing. This panel will explore the many implications of biometric data, virtual embodiment, and the integration of physical-world data. It will also address the many intersections with other technologies such as AI, and applications ranging from education to hiring processes. It's important that we address these now instead of attempting to retrofit solutions later. This panel will address current privacy problems and anticipate future problems and solutions with experts in the field. Now, I'll allow them to introduce themselves, starting with you, Kent.

[00:05:19.847] Kent Bye: Cool. So my name is Kent Bye and I do the Voices of VR podcast. Over the last six years or so, I've done around 1400 to 1500 interviews, and specifically within the last three or four years, privacy has been a pretty big focus and topic of what I've been covering. So I've been looking at the different ways in which we have biometric data that's being shared, and talking to different experts about the different risks when it comes to mixed reality. One of the big things that I've done recently is the XR Ethics Manifesto, trying to lay out some sort of approach for making ethical decisions within XR. Because this is all so new, I feel like it takes this community to put forth the information that needs to be fed to these corporations and companies to shape what their privacy policies should be and what the best way is to preserve our privacy as we're in virtual reality.

[00:06:13.897] Erica Southgate: Great. Erica? Hi, I'm Erica Southgate. I'm from the University of Newcastle. I'm a researcher in education who's interested in bringing emerging technologies into classrooms and working with teachers and students with this technology for learning. But I'm also very interested in educational technology ethics, and in particular, looking at the way we can use technology safely in school classrooms with children and young people, and how we can empower teachers around that. Most recently, I've written a report for the Australian government on artificial intelligence and XR technologies in schools, which offers advice on ethical and safety issues, as well as an overview of learning in that space. And I'm really interested at the moment in the intersection between AI and XR technologies.

[00:07:10.711] Diane Hosfelt: Great, thank you so much. And let's meet Divine.

[00:07:13.992] Divine Maloney: Hi everyone, my name is Divine Maloney. I'm a PhD student, soon to be candidate, at Clemson University. My degree is in human centered computing. I'm all about VR, AR, all aspects of it. I've been doing research on it since my sophomore year, with Bobby Bodenheimer at Vanderbilt and Greg Welch at the University of Central Florida. I'm generally interested in ethics, social good, and privacy. Some work I did during the first half of my PhD was on implicit racial bias in embodied avatars and the ethical concerns there. And now I'm looking at social virtual reality and trying to create socially equitable spaces for kids and adults in these social VR worlds as we move to a more immersive XR world. So, yep.

[00:07:58.397] Diane Hosfelt: Thank you all so much. As you can see, it'll be an interesting mix of people and opinions that we have today. So the first thing that I wanted to ask you all is what, in your opinion, is the biggest privacy threat that mixed reality technologies pose today? We'll start with you, Kent.

[00:08:18.438] Kent Bye: Well, for me, I think it's the biometric data and what type of information biometric data is revealing about ourselves. So that could be our eye-tracking data, our galvanic skin response, our gait, how we move, eventually our brainwaves and EEG. All of this is eventually going to be tracked and fed into an experience that's going to be able to respond to us. If you think about medical applications, that's great, to be able to track all this information about your body. And if you want to do something like consciousness hacking, you could have that information and be able to actually improve yourself. But I think for me, the risk is this information getting into a company that has a business model of surveillance capitalism, and then being able to know what you're looking at in a virtual experience, look at all this additional unconscious data that we're radiating, and for them to record it forever and basically reverse engineer our psyche to be able to understand what we're paying attention to, what we value, and what excites us. You know, there's all sorts of information that they can tell once they have this full set of biometric data. And at this point, it's a little bit of the Wild West. Once that data has become available, then we're kind of blurring the context, where this information used to be like medical information, but now it's going to be in the hands of these companies who have a business model of surveillance capitalism. What does that mean for lowering our overall privacy for that information? If they record it and store it on a server, the third party doctrine means that the government could have access to any information like that as well. So for me, I feel like there needs to be a good approach for how to deal with biometric data, where it goes, and who records it, and then potentially even some government regulations in terms of what companies are even allowed to do with it.

[00:10:08.521] Diane Hosfelt: Great. Thank you. Now, Erica, you might have a slightly different perspective on this because a lot of your work centers around kids. So what do you think the biggest privacy threat is?

[00:10:20.387] Erica Southgate: Well, I have to agree with Kent around biometric data capture. We're introducing this into schools already. It's already there. I'm very concerned around issues of informed consent around that, and empowering teachers and parents and children and young people to understand what that means. I mean, I think one of the main issues for me is when we start to incorporate biometric data as well as algorithmic nudging into educational applications, and we don't know when students are being nudged in particular directions or why, particularly with smart, intelligent tutoring systems. And we get to the stage where we really can't adequately explain, because we don't have transparency, either because there are proprietary issues around algorithms or because the actual algorithm itself isn't transparent. It's a black box. And I do get worried that, you know, one of the basic things of education is being able to explain something. Teachers have to be able to explain something. Students have to explain something. But if we can't explain what this technology is doing and how it's working in classrooms, and how it affects children in particular and young people, and issues of potential bias, then we've got a problem. And the second issue, which I'll talk about a bit later, but I'll flag now, is around the idea of regulatory capture. So I think we're in a situation now where, for instance, schooling systems are very cosy with big tech, and little tech to a degree, but mainly big tech. And the people that procure particular technology systems, XR technology systems or others, really aren't getting independent advice on that. And that includes people who are bureaucrats in those systems, but also school principals, who are allowed to procure these systems and put them into schools. So without independent advice, and without high-level technical advice, we get to the situation where we just can't ask the right questions of technology companies in this space.

[00:12:23.199] Diane Hosfelt: Very insightful. I definitely am going to want to get back to that idea of regulatory capture, because I think that's a really important point that you've brought up. But first, Divine, what do you think is the biggest privacy threat that mixed reality technologies pose?

[00:12:40.226] Divine Maloney: Yeah, yeah, great question. Kent and Erica kind of summed up what I was going to say about data, but I'll take it from a different angle: not only your biometric data, but also the content that you're viewing, especially with kids and teenagers, who just consume so much content, and the content is then geared for them to become addicted to it. And so I think for me, it's informed consent, but also ethical decision making, whether you're creating a game or some sort of experience, to make sure that people know what they're getting into, and if their data is being taken, they know what data is being taken and they consent to that. So, yeah.

[00:13:17.587] Diane Hosfelt: So, it's interesting that you bring up informed consent, because the current paradigm of consent, for, well, really everything, is notice and choice, where you're presented with a notice and you either click yes or you click no. Do you think that that really is accurately capturing and informing people? We'll start with Divine.

[00:13:42.645] Divine Maloney: Yeah, definitely not. I mean, no one reads the informed consent. You just kind of download the app or whatever it is and you click agree, because you just want to get to whatever the device or the application affords. But it needs to be better. And so, yeah, there are some privacy researchers working on this now: how can we educate people on, like, hey, this is what you're getting into on the app, and we're not going to give you just paragraphs upon paragraphs to read, but here's a quick summary of what type of information is being shared. And I think that just needs to be communicated better and clearer to the consumer.

[00:14:17.495] Diane Hosfelt: Kent and Erica, do you think that there are any unique possibilities for mixed reality to maybe improve the consent landscape?

[00:14:27.937] Kent Bye: Well, I think an application like Rec Room, or even AltspaceVR, when you go into the VR space, has a spatialized way of trying to explain the code of conduct. And so there are going to have to be translations: whatever the rules are, how do you start to communicate that within VR? And so it's going to be very contextual. Like with Mozilla Hubs and other things that Mozilla works on, when you go in, it asks, are you sure you want to give up access to your camera? So having different contextually geared ways of asking, do you want to broadcast this information in this context? The other side is the potential permission fatigue, where there are just so many different layers of that, and do you really want to go through that every single time you put on your headset? So I feel like it has the capability to do contextual stuff, but also to make it worse, meaning you have to jump through all these hoops even to get into the VR headset.

[00:15:22.195] Erica Southgate: I mean, I think one of the key issues that I'm considering now is really around digital literacy. So these kinds of long consent forms where you just click yes or no, and it's mainly yes, really are not informed consent, let's face it. One of the things we could do, apart from including this in a much more sustained way in the curriculum in schools, particularly around data, what data means, data capture, and privacy, is to, I think, offer visualisations of data to users. And I think young people and children would be quite interested in this, actually, immersive visualisation. So you could actually wade through a visualisation of the data you're giving away, and understand where that might go, in a very embodied and physical and visceral sense. That's one kind of educational strategy or pedagogical means through which we can educate people so that they can better understand what's happening to their data when they click yes or no. I mean, the other thing I would bring up, and we can probably get back to it later as well, is the issue of scraping of data. So, you know, often a lot of tech companies just take the data anyway. So, you know, there really is a regulatory and legislative issue here, which is beyond consent.

[00:16:37.390] Kent Bye: One of the things I wanted to throw in there is that I watched these philosophers debating about privacy, and Dr. Anita Allen is one of the founders of positive privacy, and she frames privacy as, like, this fundamental human right. And she makes this comparison to slavery, saying it's ethically not right that we sell ourselves into slavery, as an example. So is privacy kind of like that, where giving away our privacy actually shouldn't be something that we're able to do? And so there's a range of different opinions. On one end, this should be a right that's protected, and maybe we need, from her perspective, a little bit more paternalistic governmental protection of that right and restrictions around what people can and cannot do. And then there's the other extreme, which is completely libertarian, like you can do anything that you want with your privacy rights. In fact, maybe it should be treated more like copyright, where when you give a license to someone to use something, it's not an indefinite license like it is right now. Maybe there's a little bit more of an audit trail, so that you could license out different aspects of your privacy but be able to revoke it if you wanted to. But right now, it doesn't work like that. Once you give it over, it's like a carte blanche; you give everything over. And there could be some range in between. But I feel like in order to really answer this question of consent, you have to get to the essence of the underlying framework around privacy. If, as a culture, everybody is giving up their privacy, that means that when your friends give up aspects of their privacy, they're also giving up parts of your privacy that you no longer have control over. So it's a little bit of a collective thing as well. It's not just an individual's privacy; there are also implications where you can leak privacy about other people. So given that, there are a lot of different dimensions and different perspectives as to even defining what privacy is, and knowing whether you're on the libertarian end, which is that you have control and you could mortgage away your privacy if you like, or whether it's something that's more of a fundamental right that we need the state to help protect for us.

[00:18:37.790] Divine Maloney: Yeah, I think too, with privacy and informed consent, it needs to be, like, equally distributed, so equitably. So if I want to download an app and I don't want the app to have permission to my camera, I still should be able to use the app, right? And then we run into that, like, oh, you're not going to give us permission to have your camera, so you can't use the app, and that's not fair. It's kind of like we're on Mozilla Hubs right now and some of us don't have our VR headsets, but we can still view this through desktop, right? And that's how things should be. It shouldn't be just like, if you don't give us your data, you can't use this. It should be designed equitably. So, yeah.

[00:19:13.208] Erica Southgate: Can I just say, on the issue of equity, and I think that's a really important issue that you've both raised, this is a social justice issue. I mean, people's digital literacy is variable and it really is related to level of education, so there's an equity issue there. And I think, you know, if we look historically at human rights frameworks, it's often been the right to bodily integrity, for instance, and that relates to somebody using your biometric data, harvesting your biometric data, so your virtual body as well as your real body. That is a human right that's been fought for primarily by minorities, by women, by queer people. And so there have been social movements around, for instance, protecting the right to bodily integrity. And in some ways, I would argue that this is a social justice issue and a human rights issue, and social movements have a role to play here.

[00:20:12.276] Diane Hosfelt: I love that you brought that up, that this is a human rights issue, that regulation has a part to play, and that this really is a social justice issue. Because my next question is, as mixed reality is being deployed in applications like hiring and education, places where equity is so important, what ethical and legal concerns do we need to be keeping in mind? We'll start with you, Erica, because I know you have a depth of experience here.

[00:20:42.136] Erica Southgate: Well, I mean, I think the issue is how we use these technologies to make decisions, the transparency of those decisions, and how people can have access to how the decision was made and on what basis. The history of AI, for instance, is littered with discrimination and human bias. And I think one of the big issues really is around transparency, explainability, and accountability. Like, where's the accountability? You know, that's what I really worry about. In the end, I often feel that there's fatalism in this field, where people just say, oh, well, you know, it's happening anyway, the technology is moving ahead at such speed, practices such as scraping data without the user knowing about it or its uses have already happened, so it's OK. It's actually not OK, because we do have a right to be able to protect our privacy and understand what we're giving away and for what purpose. We certainly need people held accountable for that, whether it goes right or wrong.

[00:21:44.743] Diane Hosfelt: What types of mechanisms for accountability do we envision?

[00:21:50.557] Divine Maloney: Yeah, I think if I could just jump in, the keynote today kind of talked about it, right? Like, hey, if we're going to inform the future of MR, AR, VR, we need diverse participants. We need diverse representation. And it can't just be designed for one specific demographic; it needs to be designed for everybody. And that's equity, right? So I think for starters, it's just us as researchers and industry folks making sure that when we are doing some sort of research, it's checking as many boxes as we possibly can, and we're not just doing something cool for the sake of doing something cool.

[00:22:27.367] Kent Bye: In California, they passed specific legislation to be able to protect privacy. And if you read through the Oculus privacy policy, as an example, at the bottom it says, OK, if you're in California, you actually have a little bit more rights than you do in other places. And so you can actually request a full accounting of all the information and data that they have. And this is something that is within the GDPR as well, like the right to be deleted, the right to audit information, and the right to be erased from all these databases. So I feel like regulations like the GDPR seem like a good start, but I'm sure they need to be expanded out and be a little bit more robust in terms of what's happening with the biometric side of things. Because a lot of the information right now on the web is you typing information into a keyboard, or maybe it's behavioral stuff. But once you get access to things like eye tracking data and your heart rate variability, then you start to get all sorts of additional information as to what's happening. And then even if you were to request transparency of that, imagine getting a big data dump of what's essentially a bunch of matrices of numbers that you have no idea how to interpret, right? You would need to have it within the full context of all of the software to be able to even make sense of it. And so there are all these second and third order layers of meaning that people can get out of this information. And so it's not only having access to that raw data, but also the interpretation of what it means. Like, what kind of psychographic profiling are you going to be putting onto me? But I think there's a challenge there in terms of the algorithmic complexity of what the data actually means; you're not going to have any ability to interpret that, even if you had access to it. So I find that to be a challenge for accountability, because you could get Facebook to say, okay, why don't you publicly declare what you're recording? But right now, the way that it's working is that personally identifiable information is not able to be recorded, but if it's de-identified, then they can record whatever they want. And the thing that I've been saying is that with the de-identified data, you could potentially unlock that de-identification. So there could be a biometric fingerprint on the data that they're recording that could then, you know, be used to identify people later. So that to me is a challenge with accountability, because you have some accountability with the personally identifiable information, but the de-identified information is basically a big black box. They're going to be recording as much as they can and not telling you what's there. And then maybe over time, people are going to be able to be identified. Again, there seems to be a lack of good interfaces and transparency when it comes to that.

[00:25:02.523] Diane Hosfelt: Okay, so two out of the three of you are researchers. How can we better integrate privacy and ethics as our research emerges into a mainstream technology?

[00:25:19.169] Divine Maloney: Yeah, that's a tough one. That's a question with many answers, right? I think first, obviously, going back to the keynote, diversifying participants, diversifying research. But yeah, yeah, let me think about this really quick.

[00:25:34.830] Diane Hosfelt: OK, great. Erica, do you have any thoughts on this one?

[00:25:38.913] Erica Southgate: I mean, I'd like to see this community, I think researchers, university folk, I'd like to see us work in a space where we do that translational work. So Kent was saying, you know, what if you get this big data dump, what does it mean? So it's two-pronged. Can we develop tools or applications which explain to people the implications of what data is, what happens when we consent to giving data away, how it might be used, and where it might go? For the types of data we're giving away, for instance, can we develop applications that explain biometric data in more embodied and visceral and immersive ways rather than just letters on a page for people to read? So can we develop educational applications that your average folk can download and try out, so that it can develop community literacy, digital literacy, on this issue? And I think that there's scope within this community to do that. I'm not sure people do that or want to do that to any great degree, although it would be a great kind of app, a killer app, for kids to use in schools, for instance. So I think this community could actually work to develop applications for educational purposes. And the other thing I think we could do is to lend our technical expertise to lay people or people in other industries, so that we act as independent experts in this space to equip them, to empower them, to ask the right questions. And that's going back to the issue of regulatory capture.

[00:27:12.037] Kent Bye: One other dynamic that I just want to put out there is that there's a whole aspect of the economic business model of surveillance capitalism that is in direct contradiction to the privacy concerns. And so until the underlying business model that runs everything is disconnected from surveillance, we're going to still have this tension of being forced into giving up and yielding aspects of our privacy in order to have access. And so I think that's one big aspect of this: until there are alternative business models, we may still be in this existential tension, or maybe that tension will always be there. But yeah, I think the deeper dynamic is the economic aspect of this. You know, going to the VR Privacy Summit, there were a lot of academics talking about it, but until there is a completely new business model, then I think we're going to still be in this tension. I'm hoping that we're going to be able to get to that point, where maybe it's decentralized and you have a little bit more autonomy over your data. The other aspect is going to be the regulatory aspect, because the third party doctrine, as it is set forth right now, says that any information you give to a third party has no reasonable expectation to remain private. Which means that if you have somebody that's recording your eye tracking data and saving it on a server, then at least in the United States, the US government could say, OK, we want to have access to all of this person's eye tracking data, what they've been looking at. If it's recorded and stored by a third party, there's no reasonable expectation for that data to be private. And that's basically a huge loophole for all of this stuff. And that needs to get solved at the level of the Supreme Court and its interpretation of the third party doctrine, which holds that any information you give to a third party has no reasonable expectation to remain private. That has been challenged in the Carpenter case, which was around cell phone location data. So the Supreme Court ruled in favor of keeping some aspects of that data private, but it's very piecemeal, and it's not like a comprehensive framework for all the data that we have within virtual reality, you know, or biometric data; it's very contextual. And so we don't have, from the top level, a way of defining what should be private and what shouldn't be. So there's a lot from the law side of privacy and the legal perspective that needs to be taken into consideration as well, in order to come up with solutions that are not just at the level of research, but at the higher societal level.

[00:29:33.175] Divine Maloney: Yeah, I think that's a really good point. And at Microsoft Research, Jaron Lanier is working on data dignity. The idea is that the future of the economy will be run off of data, and so we should know what type of data we're giving away, and then we get money for the data that we're sending out there. But there's a flip side of, like, oh, well, does that mean that the wealthy individuals are the ones that have privacy, right? And so we have to think more about these new societies that I think are going to evolve from the technology.

[00:30:03.570] Kent Bye: We're still getting some echo. Yeah, I think it's from VR stream. I don't know if he's at his thing.

[00:30:09.053] Diane Hosfelt: OK, let's see. So how can we as researchers best prevent the misuse of the technology that we create? For example, if we were to find that facial movements during a certain task were to predict political affiliation, or, you know, eye movements during a task predicted something similar. What responsibilities might we have? Start with you, Kent.

[00:30:36.155] Kent Bye: So the question, let me just repeat it so I understand it. What are the implications if there's second-order information that can be derived?

[00:30:44.644] Diane Hosfelt: What responsibilities do researchers have to prevent the misuse of the technology that we're creating?

[00:30:53.082] Kent Bye: Well, so I'd say the way that this works on the front lines is going to be what the companies are doing. I'm less concerned about what happens in the context of the research. I'm more concerned about what happens once this technology gets into the hands of companies like Oculus, Facebook, Google, not so much Microsoft, but, you know, the big companies that are based upon surveillance capitalism. As an example from research, with eye-tracking data, from what you're looking at, you might be able to extrapolate what your sexual preference is. That's something that has been shown. And so take into consideration this technology going into countries where, let's say, it's illegal to be homosexual, and the government could put you in jail. What does it mean to have a technology where, now that you have eye-tracking data and access to that information, you could be putting people's lives in danger of imprisonment or other prosecution? So there are things like that, where I think what the researchers can do is as much research as they can to say, given this primitive of this biometric data, here's what you can do with it. And then that is the evidence that's going to be provided to the politicians and the government, saying, oh, well, maybe we should have a larger legal framework around this. So I see it more in the context of how the research is going to be fed into the policy decision making, but also the wider awareness of consumers. If you are in an application and it says, do you give this application permission to track your eye tracking data and use it, there may be a really compelling use case for why you want to do that within a game environment, but where's that data going? Is it going to be recorded? Because as a consumer, you want to be able to know, oh, well, if they have this data, now they're going to be able to potentially extrapolate all this additional information about me. And so that's a big part of it as well: consumer awareness, but also the policy decision making.

[00:32:46.393] Divine Maloney: I think too, as researchers, we have to do a better job of not only informing other researchers, but, like, dumbing things down for the public. So putting things in layman's terms, like, hey, this is a virtual avatar, and it has the potential to change your perception, your behavior, your emotion, and really making sure that the public understands the dangers of that. And then as researchers, it's our responsibility to then design, or come up with design recommendations or implications, to mitigate these negative aspects, right? So with any part of technology, it can be negative, it can be used for the wrong purposes. But as researchers, I think we have to make sure that we inform the groups that could be most susceptible and then create design recommendations to mitigate that.

[00:33:31.888] Erica Southgate: So, I mean, I think it's interesting, because we're talking about ethical implications, and one of the implications of ethics is that we, as professionals, whether you're a technologist or someone in the humanities, for instance, or health sciences, understand what ethical decision-making is, and that we are equipped with ethical decision-making frameworks, so that whether we're in a university or we work in a company, we can work through the implications of what we're doing in real time, to be able to think about what could happen if we do particular things or create particular products. And that means actually educating ourselves on ethics. And there are many different ethical frameworks in philosophical traditions, and each of those will give you a slightly different answer to what you should maybe do. But at the very least, we need to have an understanding of practical or pragmatic ethics, a pragmatic ethics of action. So does what we're doing have integrity? Will it cause harm? Does it respect people? I mean, these are the types of basic ethical principles that are embedded in human rights frameworks and that have been around since the Nuremberg Code. And we need to understand how to use those ethics frameworks in what we do. And if we do that, then we don't perpetuate scientism, you know, kind of this scientism framework where we think everything with technology or science is good. We actually begin to question what we're doing in real time and why, and we can, as professionals, raise those issues with whoever we work with, and with our colleagues, and with the public as well. And I think that, to me, is really key, because for too long we've had that kind of mentality that science will save us and technology will save us. Well, guess what? We're actually in a place now where that paradigm is no longer necessarily true. And I think we can apply this to technology. And I mean, I'm really interested in technologists and humanists combining knowledge so that we can get this right, rather than, as Diane said at the beginning, trying to retrofit solutions.

[00:35:48.363] Diane Hosfelt: Okay, great. Thank you so much. I think that now we'll open it up to questions from the audience. If you have a question and you're in the Hubs room, please line up at the podium and we'll take questions in the Hubs room. Otherwise, you can also ask questions on Slido. I'll make sure... We have two Slack channels, apparently: Panel Ethics and Panel Privacy. I posted the link in both of them. So first we have a Slido question, and this is from Jenny. It says, I agree with Divine's point about specificity and consent rather than just yes, no. Is there work in flipping the model so the consent is with a person rather than an application or experience? Anyone want to take it?

[00:36:43.537] Kent Bye: I think that's difficult to do, because it's actually more contextual than it is tied to the person. There's actually a whole body of work on the contextual integrity of privacy, which is a framework from some of the philosophers that is a very robust way of looking at the context. So I think it's actually the context that really defines what you decide to be public or private. You know, what you share with your doctor, for example, in the doctor's office is going to be different than the type of information that you tell a bank, which is going to be different than information you tell to a random stranger that shows up at your door. So I think whether information is public or private doesn't have to do with you as an individual; it has to do with the surrounding context that you're in. And so I know they look at different contextual information flows in contextual integrity as a framework for privacy, and they've actually got a whole logical framework and mathematics trying to suss that out and break down the different information flows in a context. And for me, in my framework of trying to understand ethics, I think it is very contextual, and there are these different trade-offs, and it's impossible to look at every single context and do the perfect thing. That's what makes it a dilemma: you have to make these different trade-offs. You have to give up a little bit of your personal privacy in order to get certain access to, let's say, biometric data to be able to track how you're doing in rehabilitation, for example. That could be a trade-off between your biometric data and other aspects. And so I think that's what makes it so challenging: we don't actually always know how to make those trade-offs and what you're losing and what you're gaining. So I think that's where we need a broader look at a cost-benefit analysis that is more context-dependent rather than focused just on the individual.
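
To make the contextual integrity idea Kent references a little more concrete, here is a minimal sketch of how its information-flow norms are often formalized: a flow is described by a sender, subject, recipient, information attribute, and transmission principle, and it is judged against the norms of the context it occurs in. The contexts, attribute names, and norms below are illustrative placeholders, not a real policy.

```python
# Minimal sketch of contextual integrity as an information-flow check.
# The five flow parameters follow the contextual integrity framework;
# the example norms and attribute names are illustrative, not a real policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str        # who transmits the data
    subject: str       # who the data is about
    recipient: str     # who receives it
    attribute: str     # what kind of information
    principle: str     # transmission principle (e.g. "for diagnosis")

# Context-relative norms: flows a given context treats as appropriate.
NORMS = {
    "clinic": [
        Flow("patient", "patient", "doctor", "eye_tracking", "for diagnosis"),
    ],
    "social_vr": [
        Flow("user", "user", "platform", "avatar_position", "to render the scene"),
    ],
}

def appropriate(context: str, flow: Flow) -> bool:
    """A flow is appropriate only if the context's norms sanction it."""
    return flow in NORMS.get(context, [])

# The same attribute is fine in one context and a violation in another.
gaze_to_doctor = Flow("patient", "patient", "doctor", "eye_tracking", "for diagnosis")
gaze_to_advertiser = Flow("user", "user", "ad_network", "eye_tracking", "for profiling")

print(appropriate("clinic", gaze_to_doctor))        # True
print(appropriate("social_vr", gaze_to_advertiser)) # False
```

The point of the sketch is the one Kent makes: the same eye-tracking attribute can be an appropriate flow in a clinical context and an inappropriate one in an advertising context, so the judgment attaches to the context of the flow rather than to the person or the data type alone.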

[00:38:28.803] Diane Hosfelt: Great. Yeah. We have another question from Anonymous that says, say we provide knowledge to users about second or third order info that can be extracted from their data. What happens when we learn about new info that can be extrapolated later on? As researchers, we should pull data offline, but who decides? Would you like to take this one, Erica?

[00:38:57.936] Erica Southgate: Um, no. I'll be honest, this is why we call them dilemmas. These are really tough issues that need really deep thought. I mean, maybe Divine's got something; let me have a little think about it before I respond, because I really do need to use my ethical toolbox to think that through. Divine, have you got a response, or can I flick it to you?

[00:39:20.512] Divine Maloney: Yeah, yeah, I think that's a really good question. It is really complicated. In my research, with my advisor and the people that I've worked with, I think going back to the Declaration of Helsinki for human subjects research is a good starting point: thinking about, hey, what type of research are we doing? What type of data are we taking? Is it ethical research, to go back to what you said, Erica? Is the welfare of the participant of the utmost importance? Do we have respect for persons, making sure that participants have autonomy and the right to make their own decisions? Now, when it comes to data and second or third order information, yeah, it's tricky. But I do think it comes back to informed consent, right? So letting people know, explaining to them, what type of data is being extracted. And it's a continued informed consent. It's not just a one-time thing of, hey, we're going to take your data on your eye movements. It needs to continue later on, and people need to continue to consent to the data that you're taking.

[00:40:29.872] Erica Southgate: So in ethics we have this distinction, both in research and in real life, or surveillance capitalism, between procedural ethics and ethics in practice. Procedural ethics is where you fill out the informed consent form for research and you read the information sheet, or you're supposed to read the information on the application and tick yes or no for consent. But I think Divine's right, there's also the other side, ethics in practice. Ethics in practice is where the people who are making the applications, who are maintaining the applications, who are using the data from the applications, actually continue to think ethically about what they're doing with the data, whether they're sharing it with others, and the implications of that. So it's both about getting the procedural ethics right at the beginning, and also about what ethics in practice looks like along the journey, along the way.

[00:41:27.292] Kent Bye: Yeah, and the thing that I would add to that is that virtual reality generally is an interdisciplinary fusion of all these different experts coming together. And Diane, I know we've talked about computer security and how the accelerometer on a phone is able to pick up enough vibrations to be able to reverse engineer your password, as an example, just by looking at how you're typing on it, taking the accelerometer data, and then basically being able to get all this second order information. So it's going to be pretty much impossible for one individual to know all of those potential threat vectors, or what could possibly be done with information, especially when you start to throw in artificial intelligence that's specifically tuned and trained to extrapolate very specific information from huge sets of data. And so I think we need to step back and cooperate with all the different disciplines who might have some stake in this, and figure out as we move forward, as academia, how do you get those perspectives from computer security, as an example. You know, there are lots of people who are trying to do the white hat protection around what the threat vectors are for computer information technology, and I think there's going to be a similar movement within VR, there already has been, where given this set of information, what is the worst case scenario you could possibly do with this information if it gets into the wrong hands? So I think generally, as a practice, if you are recording data and storing it, then you have to ask, 10 years from now, if this gets into the wrong hands, then what damage could be done? And if it's de-identified now, is there a way to unlock that data later? So maybe the best practice is to not storehouse data that you're not using. If you're done with the data, maybe you should get rid of it, because who knows what could be done with it later, because there are all these second and third order things that may not have even been discovered yet for what could be done with it. So I think that's a challenge, especially when you have artificial intelligence that's very specifically tuned. As a metaphor, I like to think of how we're here in a virtual environment right now. There could be ways in which I'm moving my body where somebody could be watching me and be able to record the bone length of my arms, and that's very consistent. Then maybe I'm in another VR experience later, and that same type of information around my bone length doesn't change, so if someone has already recorded me in one environment, they may be able to re-identify me in other environments. And so there's stuff like that, where if you're in a VR environment and you start to record all this information, how can you then reverse engineer it and come up with a fingerprint, a lock that unlocks, so that if you get a hold of a bunch of data, you'd be able to re-identify it later? Coming up with all of those vectors in advance is really impossible; it's basically like P does not equal NP. If they were equal, you'd essentially be able to predict all of that stuff. But once you discover an attack, then you can validate it; it's the amount of work that it takes to discover it that makes it a very hard problem. So that's part of the reason why security has to continue to be this iterative process: once you discover something, you can validate it, but that process of discovery may take time.
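
As a concrete illustration of the bone-length fingerprinting risk Kent describes, here is a toy sketch of how stable body geometry could re-link nominally de-identified motion data to a previously recorded user. The joint names, the matching threshold, and the nearest-neighbor rule are all invented for the example; real re-identification attacks would be more sophisticated than this.

```python
# Toy illustration of the re-identification risk described above: stable body
# geometry (e.g. arm segment lengths) acts as a fingerprint across sessions.
# Joint names, data, and the matching threshold are all made up for the sketch.
import numpy as np

def bone_length_features(joint_positions: dict[str, np.ndarray]) -> np.ndarray:
    """Distances between adjacent tracked joints; roughly constant per person."""
    pairs = [("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
             ("shoulder_r", "elbow_r"), ("elbow_r", "wrist_r")]
    return np.array([np.linalg.norm(joint_positions[a] - joint_positions[b])
                     for a, b in pairs])

def match(known: dict[str, np.ndarray], anonymous: np.ndarray,
          tolerance: float = 0.01) -> str | None:
    """Return the enrolled identity whose fingerprint is closest, if within tolerance."""
    best_id, best_dist = None, float("inf")
    for identity, fingerprint in known.items():
        dist = float(np.linalg.norm(fingerprint - anonymous))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist < tolerance else None

# One prior recording "enrolls" a user; a later, nominally de-identified
# session with the same body proportions matches right back to them.
rng = np.random.default_rng(0)
session_a = {j: rng.random(3) for j in
             ["shoulder_l", "elbow_l", "wrist_l", "shoulder_r", "elbow_r", "wrist_r"]}
enrolled = {"user_123": bone_length_features(session_a)}
noisy_session = {j: p + rng.normal(0, 0.001, 3) for j, p in session_a.items()}
print(match(enrolled, bone_length_features(noisy_session)))  # likely "user_123"
```

Even this crude matcher works because the signal it keys on never changes between sessions, which is exactly why discarding data you no longer need is the safer default Kent suggests.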

[00:44:29.509] Diane Hosfelt: Yeah, thank you. Kyle, did you have a question?

[00:44:33.572] Questioner 1: I did have a question. It's just that, you know, muting your mic and then unmuting your mic is a little bit more painful in virtual reality. The space that we're in right now, we actually, Blair and I debated whether or not this was considered a public space. And ultimately, we decided that this was a private space. But what are your thoughts on that?

[00:44:51.749] Erica Southgate: Well, it's being streamed, isn't it? Is this being streamed? It's a public space. Will it be on YouTube? Or Vimeo? Will the video be posted later?

[00:45:03.399] Questioner 1: Well, what if it weren't? What if we were just in this space? Does that give somebody a right to record? Where does it become public?

[00:45:11.527] Divine Maloney: Yeah, can I try? So I'm running a study right now on social VR platforms, and I consulted with the IRB because I just wanted to observe people's interactions on the platforms. And what our IRB office said, going back and forth, is that since the platforms are not readily available, so it's not like a public website where you can just type it into the web browser, go into the application, and it's there, you actually have to sign up. There's some sort of authentication, and that authentication makes it a private space. So since anyone can just kind of log in and hop on Hubs and view the conference, that to me is not a private space.

[00:45:54.075] Questioner 1: They can't be in this room without it, but they can be watching, right? So I don't know if that's the same thing.

[00:46:00.318] Kent Bye: Well, here's the legal perspective. If you look at the definition of the third party doctrine, the third party doctrine says literally any data you give to a third party is no longer reasonably expected to be private. So that's a problem within that doctrine, because that shouldn't be the case, but that's the way the law is written now. So because this is going onto a server, then by definition, the government could get access to it if it's recorded and they came and asked for it. These are Fourth Amendment cases protecting U.S. citizens from unreasonable search and seizure by the government, and any time you give data to a third party, then it's, quote unquote, no longer reasonably expected to be private. So on the law side, that needs more Supreme Court cases to challenge it, and the Carpenter case is one of the cases that challenges that. But if you extrapolate that out, then any room that's listed publicly within this conference, I'd say, is a public room. Anything that's private, like you created the room and only gave that private link to someone else, sort of has a private context. But if you put that link onto Twitter, then all of a sudden that private context turns into a public context; now anybody can join. So this is why the framework of contextual integrity comes into play, because it's very fluid. You could be having a private conversation with somebody and say, hey, are you okay if I talk about this publicly? Now, all of a sudden, what used to be private is now public. And we sort of negotiate that when we're with each other one-on-one, and we have to navigate that. Journalists do that by saying, unless you say this is off the record, then you assume that it's on the record. And so I think there are different sociological protocols that we have with each other that try to navigate that, and we're going to have to figure out what those social dynamics are. But there's a legal framework that is actually working against us here, because the third party doctrine says any data you give to third parties is not private.

[00:47:50.597] Erica Southgate: And what do you do around consent if some people in the room give consent for streaming and others don't? When we've actually signed up for the conference, we've given consent for particular types of broadcasts and we're reasonably informed about that, but I suppose that's an issue, isn't it? We have to think about when we have large gatherings of people, what that actually means in spaces like this.

[00:48:16.477] Diane Hosfelt: What I'd like to do, we're just about out of time, so I'd like to end by asking each of you what consumers can do today to help protect their privacy in mixed reality. So we'll start with Kent.

[00:48:30.449] Kent Bye: Okay, well, one thing that I'd say is that I feel like we're at the beginning stages of a war. Like, there's a war where there are companies that are just doing things, and we don't really have a lot of recourse to be able to fight back. They're just going to do what they want, and until there's legislation from government, they're going to continue to keep pushing on the limits. And for me, I feel like biometric data is on the frontier of that war, and within the next couple of years, I don't know when, but eventually, we're going to start to have headsets with, say, eye-tracking data. Now, where does that data go? Who owns it? What are the things that we can do at a policy level to have some broader protections? So for me personally, I do think that privacy is a bit of a right. I don't know where we fall on the spectrum between, you know, the more libertarian approach, where this is something that we get to choose, and you're able to treat it like copyright and license it out and get paid for the data that you're giving out, which is one approach, and the other approach, which is getting some sort of paternalistic protections around our privacy from a philosophic and legal perspective. And so recently there was an experience called Persuasion Machines that was at Sundance, and part of what they were doing is that you go into the VR experience and they had you sort of sign a release form, and without really explaining it to you, you were being live streamed. And after that, I got so upset that I went down this deep dive. So if people want more information, you can check out Persuasion Machines or the XR Ethics Manifesto.

[00:49:58.997] Diane Hosfelt: Thank you so much, Kent. What about you, Erica? What can consumers do today to help protect their privacy?

[00:50:06.443] Erica Southgate: I mean, I think we need people to work in the space where we can translate what privacy means, or does not mean, with these types of technologies, with XR technologies and particularly around AI and the integration of biometrics into that. We need translational research. So there's a middle ground between the experts, the technologists who are the experts, and the community. The community obviously isn't one thing, and there are equity and diversity issues there, but that's where we can actually produce material, which can be any type of material, not just written text, but for instance using XR or other types of applications to educate and empower people. And we're really at the beginning of that. We really are at the beginning of that. In medicine we have a thing called translational research, where the implications of medical research are translated by middle people, usually academics, who produce a whole lot of curriculum material for practitioners, health professionals in the field, so they can understand what it means for their practice. And we need a similar set-up around translational research in this space, so that we can empower different communities, and we can be culturally appropriate in that empowerment, and we can certainly influence school curriculum in that direction. So, to me, it's about people who are interested in coming together to act in a translational role so we can empower communities, and that way they'll make really good decisions.

[00:51:39.304] Diane Hosfelt: Great, thank you. And Devine, what do you think? What can consumers do today to help protect their privacy in MR?

[00:51:47.090] Divine Maloney: Yeah, yeah, great question. I think it's not just on the consumers, though. It's like a cycle, right? So first, us researchers, we have to get our work out there and realize that the average person is not going to read a 20-page research paper, and so dumb it down into layman's terms as much as possible, and get the information out there about what information you're giving up. And then the creators of the content, the Apples, the Microsofts, the Magic Leaps, need to do better at informing people about what data is being taken. And then from the consumer side, it's talking to your politicians and making sure that there is some sort of government regulation around data and around your privacy in MR. And so I think it's those three things.

[00:52:27.158] Diane Hosfelt: Great. Thank you so much for coming to our panel, and look for us around the conference if you're interested in talking more about privacy and ethics in mixed reality.

[00:52:37.727] Erica Southgate: Thank you, everyone.

[00:52:39.116] Kent Bye: Thanks, everybody.

[00:52:39.836] Erica Southgate: It's been a pleasure.

[00:52:40.536] Questioner 1: I think you guys actually have a little bit of time. The session doesn't really end until 5.30.

[00:52:45.998] Diane Hosfelt: Oh, we have another half hour?

[00:52:48.678] Kent Bye: Yeah. You guys have any more questions in the audience here?

[00:52:53.399] Questioner 2: I have a question. So for people who aren't really as well versed in these matters with MR privacy and ethics, what would you say is like a doomsday scenario in case everything just goes wrong?

[00:53:07.836] Kent Bye: A doomsday scenario? Oh, wow. Well, that's like Big Brother, 1984, you know, where everything we say or do is recorded, and then thought crimes, you know, Minority Report style, where we're prosecuted before we actually do things. But also, privacy is about being able to have your thoughts before they're fully formed and have an ability to kind of work things out. Without that privacy, then we're putting everything into a public context by default, meaning there's no room to actually make mistakes in that way. But also, I'd say more along the lines of being tracked, and everything we say or do being recorded somewhere and having that be extrapolated against our interests. And I think that's the other thing: usually within technology, it's trying to help us, but this could be basically trying to undermine us. Imagine if you were to go to a doctor, and the doctor could sell the information that you tell the doctor to an insurance company, and that doctor was making a profit based upon a diagnosis he just gave you. That would be the type of thing that would start to happen: private information being used against us when usually it should be helping us.

[00:54:21.225] Erica Southgate: I mean, Clearview AI is the beginning of this, really, or a symptom of this. We should be very, very concerned about what's happening with data collection in that company and its relationship to law enforcement. It's not as though the doomsday is far away. Certainly, I wouldn't say it's a doomsday, but the undermining of our human rights is certainly here right now.

[00:54:46.978] Divine Maloney: Yeah, I think not being able to control our own perceptions of reality without our knowledge. So seeing content, participating in things that you have no control over nor want, I think that's the doomsday for me. That's the COVID-19 of MR.

[00:55:06.108] Diane Hosfelt: All right. I'm back and no longer flying around. Sorry about that, everyone. Sorry I can't read the schedule. Glad that we have more time to talk. We have another question on Slido. Say people would be able to set a personal filter based on the types of data they consent to giving away, preventing them from seeing services that don't adhere to their privacy standard, so they're less seduced. Would this be feasible? And I'm going to add, would this be ethical?

[00:55:34.557] Kent Bye: I think that it goes back to what I was saying earlier, which is, if you consider privacy to be a human right, then maybe, you know, just like you can't sell yourself into slavery, there are certain aspects of what you're giving up where maybe people don't realize the full implications. If there's been a collective weakening of our concepts of privacy and what it means, then maybe we need more of that framework to be able to help protect us. But I think this gets back to the contextual dimension and contextual integrity: with a doctor who wants to help cure something that I might have, yeah, sure, have all the eye tracking data you want, because it's going to help me. That's different from my eye tracking data being given over to Facebook and Oculus. It's more about the context rather than the data itself. And so for me, it's less about me as an individual or that data. And maybe it's like, okay, I don't consent to anybody ever having my eye tracking data. That could be an approach, but I don't know if that's workable, because what do you do when you're in that doctor's office and all of a sudden you want to give an exemption? Is it opt-in or opt-out? That's another part of the privacy discussion that hasn't come up. Mostly you're automatically being opted in, but is there an opt-out scenario where you get a degraded experience, and then you can make a choice to hand over more data to enrich your experience? But that's sort of the opposite of how most things are, which is that they give you the best high-fidelity experience, but also take as much as they can.

[00:56:56.172] Divine Maloney: I think it's totally feasible. It kind of goes back to what Kent was talking about earlier. We'd have to redesign our economy, right? And redesign how companies make money. And so it shouldn't be like: say I have no restrictions and I want to have all my data given away, then I can have all the content I want; versus, if I don't want to give anything away, does that mean that I now have fewer abilities to do things, less access to technology? It needs to be designed equitably, so that, you know, people who, say, don't want to give away this information aren't any less fortunate.
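As a concrete illustration of the "personal filter" idea from the Slido question, here is a minimal sketch, assuming services could publish a machine-readable list of the data categories they collect. The DataCategory names, the Service type, and the example catalog are all hypothetical illustrations, not any real platform's API.

```python
# A minimal sketch of a personal privacy filter: the user declares which data
# categories they consent to share, each service declares what it collects,
# and services that exceed the user's preferences are filtered out.
from dataclasses import dataclass
from enum import Enum, auto


class DataCategory(Enum):
    HAND_TRACKING = auto()
    EYE_TRACKING = auto()
    ROOM_GEOMETRY = auto()
    VOICE_AUDIO = auto()


@dataclass
class Service:
    name: str
    collects: set  # the DataCategory values this service declares it collects


def acceptable(service: Service, consented: set) -> bool:
    """A service passes the filter only if everything it collects falls
    within the categories the user has consented to share."""
    return service.collects.issubset(consented)


if __name__ == "__main__":
    user_consent = {DataCategory.HAND_TRACKING, DataCategory.ROOM_GEOMETRY}
    catalog = [
        Service("Social hangout", {DataCategory.HAND_TRACKING, DataCategory.VOICE_AUDIO}),
        Service("Solo puzzle game", {DataCategory.HAND_TRACKING, DataCategory.ROOM_GEOMETRY}),
        Service("Ad-funded arcade", {DataCategory.EYE_TRACKING, DataCategory.HAND_TRACKING}),
    ]
    visible = [s.name for s in catalog if acceptable(s, user_consent)]
    print(visible)  # only "Solo puzzle game" clears this user's filter
```

The equity concern Divine raises is the hard part: a filter like this only helps if declining a category doesn't quietly demote you to a second-class experience.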

[00:57:35.906] Diane Hosfelt: So earlier, Divine mentioned that he had been consulting with an IRB about some research that he was doing. So while we're developing technologies in mixed reality, there are research protocols, there are IRBs to protect subjects and ensure the safety of everyone involved. Do you think that there's a need for a real-world IRB, and how would we design such a system?

[00:58:04.317] Kent Bye: Actually, that was one of the outcomes of the Data Privacy Summit: the model for doing medical research, where there's an institutional review board, the IRB. And so do we need an IRB for privacy, someone who is overlooking how companies are using our data and able to take some action? The biggest challenge to that is, what are the deeper regulatory teeth that that type of IRB would have? If it's just a private entity, what incentive does a company like Facebook or Google have to cooperate with that IRB? And what if they don't cooperate and they violate those terms? And so I feel like the challenge is, what's the deeper regulatory framework around it and the legal obligations? You know, Lawrence Lessig talks about how there are four different dimensions of interacting with collective decisions: the economic decisions, the cultural decisions, the legal decisions, and then the technological architecture and the code. And so all of these are sort of playing off of each other. You can have the legal framework, but you also have to have the cultural implications, the people and how they're interacting with it. Then there's how you actually build the technology, so are there ways to implement it and force sort of a technological implementation? And then there's the economic issues, which is, let's have a social movement where everybody stops using this, and that will drive an economic impact, and that cultural and economic impact drives some sort of change. The challenge with something like an IRB is, how does it fit into that? If the larger money incentives of these companies are to be making the wrong choices on privacy, then you would need one of those other vectors, whether it's cultural, technological, the code, or the legal framework, to be able to actually change some of that. So that's at least how I think about it. It's a good idea, but then there are deeper questions around how it is implemented and how it is enforced. Oh, wow. Erica, your voice was maybe coming through really high pitched. I think, Erica, we're hearing like a chipmunk sound. You might need to reenter the room.

[01:00:13.342] Divine Maloney: Um, in the meantime, I think, yeah, can we implement this? So I know a few companies that have IRB-like protocols in their research divisions, but, you know, like Kent said, it's hard. You can have all the protocols you want, but if the company is driven by making money off of your data, and maybe they need to make an unethical decision and get some research out of it so that they can make more money, that's really difficult. That's really difficult to implement and uphold. That's why I think there needs to be some sort of government regulation on all of this, on privacy, on data, on what companies are able to do, because right now they're not regulated and they have free rein to do whatever they would like.

[01:00:55.273] Diane Hosfelt: So it sounds like a lot of what we're talking around is wanting some sort of regulation, and this seems to me like it's not necessarily mixed reality specific, although mixed reality might perhaps be a particular reason we need it now. Would you agree with that?

[01:01:16.198] Kent Bye: We definitely need some. For me, there seems to be a flow to how laws are made. There are Supreme Court decisions and stuff that goes to the courts, there's conflict, and then there's stuff that could potentially come from Congress, where they're making a decision and putting stuff into the law. But usually, like, you know, Diane and I have talked on podcasts about how things have to basically go wrong and then they fix it with legislation. And there's a propagation where it still feels very early for things to kind of evolve. I think as things continue, people are going to cross a lot of ethical boundaries, and then we'll see how those ethical boundaries are crossed, and then we'll try to come in and fix it. And I think it's really difficult to predict the whole range of what those problems are going to be and to proactively come up with those frameworks, unless you come up with a comprehensive framework around privacy and then have everything driven from there, which I think is, again, a bit of an open question, which complicates everything.

[01:02:13.928] Diane Hosfelt: I know that Erica has experience. She's right behind you. Oh, is she? Oh, hello, Erica.

[01:02:19.892] Erica Southgate: OK, do I still sound like a chipmunk?

[01:02:23.554] Diane Hosfelt: No, you sound like yourself. So we were talking about some regulatory measures and how we need regulatory measures to protect consumers, because the companies aren't necessarily going to put these protections in place themselves. It seems like we all agree that it's not going to be simply technical measures, and that there's going to be some part of privacy protections that are going to need to be regulatorily enforced. And Erica, I know that you have experience working with your government. Can you tell us a little bit about that?

[01:02:59.562] Erica Southgate: So, I mean, we do need regulation. I mean, different nations will have different approaches to this. So the US is quite a small-l liberal, highly individualistic nation, interested in individual rights, and many Western nations are certainly interested in that, but other nations might balance it with more of a social contract. You can see this social contract, or a more collective view of what is good for society. The type of regulation and legislation that will be enacted depends on the national context, although there is a lot of policy and legislation borrowing going on, so GDPR sets up a framework, a strong framework, and other nations look to that, particularly where the social contract is strong in that nation. In Australia, for instance, we do have regulation. We do have regulatory bodies that are supposed to oversee big institutions, for instance, although what we've had here, as in many nations, is really a history of regulatory capture, where the regulatory body that's been set up to regulate those industries or sectors hasn't done so very effectively or well, or has succumbed to basically being in the thrall of the sector and taking the advice of the sector rather than developing its own or seeking independent advice. So any regulatory system has to have a frank and fearless independence to it, there has to be independent advice, and there have to be, for instance from this community, researchers who are willing to stand up and give that advice to government and to be public intellectuals on that. And I think that's a really big role for this community, for instance around XR technologies and AI. And that means possibly not getting industry funding, which for any researcher is a really big decision to make.

[01:04:57.956] Diane Hosfelt: That's a really big point to me. So, to your mind, the best way to try to avoid regulatory capture, as you say, is to have researchers who are willing to stand up and be independent and possibly forego those industry funds. Are there any other ways, too, that we, as researchers, can help avoid regulatory capture?

[01:05:22.427] Erica Southgate: I think our role as researchers in this space is really that we have to make decisions, as researchers, on the basis of our careers and career trajectories, the types of projects we're willing to partner with industry on, and the questions we ask. I partner with industry at the moment, and the industry partners that I'm working with are small tech. They're interested in doing the right thing generally, and they see doing the right thing as a marketing strategy, actually, in the ed tech field. So they see being ethical as a good thing, as being particularly important for education, and as a good thing for their products. But, you know, these are big issues and big questions for universities. We know that there are universities everywhere that have been in the thrall of philanthropists and industry and have done unethical things. So it is very difficult for us. I think we're in a very difficult position, but it's certainly something that we need to discuss, whether that's in your lab if you've got a lab, in your research group if you've got a research group, or just as an individual researcher.

[01:06:30.536] Diane Hosfelt: Now, Divine, as an early career researcher, what are your thoughts on this?

[01:06:37.423] Divine Maloney: So honestly, my dream job would be, if the government created a separate branch for all technology-related matters, to be an ethical advisor to that. I think that would be so cool, to think about all the different implications and all the people who are affected by X, Y, Z policy, especially relating to tech. I mean, we all saw, you know, Mark Zuckerberg was on the stand and the senators were asking him questions, and it was like, who came up with these questions? These questions are ludicrous, right? And so the government is just so behind. It needs to get caught up, and it needs to also be proactive, kind of like Kent said. And if the government can be proactive, then we'll have data equity, right? Then we'll have a world where, hey, yeah, I want my kids to play in MR, social VR, et cetera, right? So yeah, that's my take on it. But how do we get there? I hope it doesn't have to be reactionary. I hope it doesn't have to be like, oh my gosh, all these people's data is leaked, your personal information and what you do and how you move and your biometric data. It shouldn't have to be that. It should be more proactive and, I think, just transparent.

[01:07:46.347] Diane Hosfelt: Great, great. Well, we have a few minutes left, for real this time, I swear. So I wanted to get all of your final thoughts on just privacy and ethics in mixed reality as it stands today. Where do you think we are, and where do you think we need to go? We'll start with you, Divine.

[01:08:16.653] Divine Maloney: Yeah, I think we're at like this exciting time where eventually we're going to get that killer app that introduces everyone to MR on the consumer side. Or hopefully it doesn't have to be some sort of pandemic, right, where now everyone's into virtual reality and VR headsets are sold out. But I think we're really close. We also should be thinking about the ethical frameworks to design with, like, how can we design this to help everyone and not necessarily target a particular group, whether that be racially, by gender, or ability-wise? These are the things that we need to be thinking about as researchers and industry folks. Yeah.

[01:08:54.936] Diane Hosfelt: Okay. Erica?

[01:08:55.437] Erica Southgate: So it's a very exciting time. If you go into social VR, it's full of young people, and often children who shouldn't be there, but really it's an exciting time to connect, to have amazing leisure experiences and educational experiences. I mean, we're really at a point where you can feel some excitement, and people are asking really deep questions about the technology and its uses. But I would say, you know, it's not just about the design of the technology, though that's very, very important. It's about how we implement it safely and ethically in place, in situ, in context. And we need much more research, and much more collaborative research with practitioners, around that, so that we hone our ethics, our procedural ethics and our ethics in practice, for particular contexts. And we also need very strong government structures, so we need to work together to inform those in power about what those structures should look like.

[01:09:56.174] Kent Bye: To me, I feel like the metaphor is that we're at a crossroads where we could go down a very good utopian path or down a very dark dystopian path. The world right now is in the midst of a global pandemic, and the World Health Organization has said, you know, this was preventable. We could have done a lot of things to stop this from happening. And what's been really striking for me to watch in the world right now is to see nation after nation continue to make the same mistake and not take it seriously, and they're paying the consequences of that. So I feel like it's an opportunity right now for the world to kind of do a global reset, because this global pandemic is not the only issue where this is happening. There's global climate change, which is also an issue that mostly people have been ignoring, not having either the economic, political, or cultural will to make a difference. I feel like privacy is in that same realm, where there's an opportunity for the researchers to make those arguments, to come up with the research, and to show, this is what you can do with the biometric data, this is the scary dystopian future we could be living in, and to come up with the actual data for that, and to show it to the companies, and to show it to the culture. And frankly, to collaborate with TV shows like Black Mirror, and actually have the data be played out as a narrative. Because if you see the dystopic played out as a narrative, then there's something about that that we can more intuitively understand, because it's hard for us to extrapolate so many of these complexities. You know, the researchers have a role to do the fundamental foundational research that then gets propagated out into other cultural artifacts, whether it's a movie or whether it's journalists getting the information out there. And it'll be up to the culture and the politicians to decide whether or not they act. I would rather have an overflow of evidence and arguments for why this is an important issue rather than a poverty of that information. And whether or not the politicians and the culture care and they act and they respond, that is yet to be determined. But the research community here can start to do that foundational work, get it out there, and hopefully be able to wrap a larger narrative and context around it for why it's important.

[01:12:08.916] Diane Hosfelt: Great. Well, I know that I've already said this before, but thank you all for coming. My ask for the community is that while we are creating these amazing technologies, we stop and we ask ourselves: how can this be misused? How can this be abused? Can these novel eye tracking techniques be used to identify people, and are there mitigations? Am I using this solely to identify people? What are the implications of this? What are the privacy implications of the work that I'm doing? Just stop and think for a moment, and think about the privacy, because once that cat is out of the bag, we're not putting it back in, and we're not going to be able to retrofit protections back in. We'll just end up with a jumble of protections like we have with everything else. So we should really try to build these in from the ground up as we create the technology. So thinking about mitigations from the start, while we create things, is critical. Thank you so much, everyone. Really appreciate it. And thank you to our audience.

[01:13:26.128] Erica Southgate: Thank you, wonderful. I've had a great time.

[01:13:30.457] Kent Bye: Nice, thanks so much.

[01:13:32.842] Diane Hosfelt: We really appreciated you all being here.

[01:13:34.746] Kent Bye: Yeah, thanks everybody. So that was a panel discussion on ethics and privacy in mixed reality that was happening at IEEE VR in Mozilla Hubs. The moderator was Diane Hosfelt. She's the Security and Privacy Lead for the Mixed Reality team at Mozilla. Erica Southgate, she's at the University of Newcastle. She's a researcher looking at technology ethics within the classroom. Divine Maloney, he's a PhD student in human-centered computing at Clemson University, looking at ethics, social good, and privacy, and implicit bias in embodied avatars, as well as myself here from the Voices of VR podcast. So, a number of different takeaways from this panel discussion. First of all, there are so many different ethical issues here, and I think what is required is for the research side to continue to push forward and do the research that's necessary, but also to come up with larger ethical frameworks. Erica is looking at a number of different ethics frameworks, both procedural ethics as well as ethics in action. Procedural ethics may be a little bit more transactional, where you have, like, a terms of service, or maybe you're doing a research study and you have different ethics around the context of that research. But then there's ethics in action, which is like an ongoing relationship of how data are continuing to be used over time. For her, she's specifically looking at a lot of educational contexts, and so she's concerned about being able to inform students around biometric data capture, as well as the algorithmic nudging that could be happening. So in the context of learning, if you're gathering lots of different information, let's say eye tracking data, and extrapolating different information from that, then what are the different algorithmic biases that are built into those algorithms, but also how might algorithms be nudging people in specific directions? But also, diversity and equity was a big theme that I think came up over and over again here, just that folks have their own life experiences and they're going to be paying attention to certain things. And so Divine was talking about how some environments may just be super toxic for underrepresented minorities. And so when they get into these different spaces, then what kind of tools are available to be able to help prevent harassment or have personal space bubbles? Or just in general, what's the code of conduct and what's the larger culture that's being cultivated within these different environments? And so if you don't have that diverse representation at all phases of the design of both the technology as well as these different experiences, then you're going to have certain blind spots that are going to be causing harm for people. You really need to have diversity of representation in all different phases of the design, but also to be thinking about how to create spaces that are more equitable. And one of the things that Erica Southgate was saying is that a lot of these issues of equity are actually social justice and human rights issues, especially when you start to look at different aspects of body integrity. And she says it's worth looking at human rights law and seeing how a lot of the folks that are on the front lines of a lot of these fights around body integrity are minorities, women, queer people.
And so there are going to be certain concerns that they're going to be looking at, and unless they're involved within the design process, those concerns are not going to be integrated. But biometric data seemed to be the thing that everybody was universally concerned about, in terms of what data are being recorded, what's being done with it, and all the different second and third order things that you can extrapolate from that data. It's not just the data itself. It's the meaning you can extrapolate from it, where that is being recorded, and who's using that data. And, you know, there's this larger question around the role of regulators to come in and interface and mediate some of these different rights. Erica is saying, you know, in the United States, it seems to be a little bit more focused on the individual and individual rights, the rights of the companies to be able to innovate. And so it has this more liberal approach of trying to focus on us as individuals, while other countries have taken a little bit more of a collectivist approach, looking at these different issues of privacy as a human right. The GDPR has embedded into it different rights that everybody has: the right to delete yourself, to not have information captured, or to have some sort of accountability and be able to audit what's being recorded. So that's something that is happening in the global international sphere, and that may be giving a little bit more context for why Facebook is coming out now with this white paper, charting a way forward, communicating about privacy towards a people-centered and accountable design, that was released on the evening of July 14th, 2020. This seems to be an effort where they're really pointing out some of the different design decisions and dilemmas around notice and consent, you know, giving people full information about the implications of what they're choosing. And I think that's one of the things that they've been getting pushback on with a lot of the regulation: they're not really getting informed consent, because these terms just aren't really being read by people. Companies are just seizing all this data and doing all sorts of stuff, and people have no idea all the different information that they're gathering. So Facebook has also created this TTC Labs, the Trust, Transparency, and Control Labs. And so they've been doing these design jams with different academics and people from industry to design different approaches to notice and consent, what's a better way to visually describe some of this, and to have a little bit more of an open dialogue between civil society, academia, and people who are concerned about these issues, so they're more in this design process. One of the things that Facebook says in this white paper is that it's difficult to just have regulation imposed onto the companies, and then they have to design stuff that actually creates a terrible user experience. And so are there ways to create a more iterative process, where you're able to have what they refer to as a regulatory sandbox, where they work in collaboration with civil society, academics, and the regulators, trying to take into account all these different concerns, and to create an experience that doesn't create this permission fatigue, where if you even want to get into VR, you kind of have to click a thousand buttons.
You already see that with cookies online and all the different consent that you have to give every time you go to a website. You know, that's a suboptimal experience. But essentially, they want more of an iterative process that takes in the design considerations. And so there are different trade-offs between comprehensiveness versus comprehensibility, prominence versus click fatigue, as well as design standardization versus design adaptability. You're giving up something in order to gain something; all design has these trade-offs. And so how do you visually communicate that? I think that's what Erica is saying: you need to have this translational work that happens, from the people that are actually doing the foundational work, to take that foundational work and translate it into ways that are understandable to consumers. Because digital literacy is a function of education, there needs to be a whole other initiative to translate the implications of this, in more human terms and more embodied metaphors that are visceral and understandable, like what this data means and what you can do with it. That's in part what the TTC Labs, the Trust, Transparency and Control Labs within Facebook, is doing, but they're doing it from the interest of being able to record all this information. And there also needs to be a counterpart laying out the risks that are there. Wouldn't it be interesting if, every time you gave consent, you were able to see both perspectives: not only the perspective from Facebook, but also the perspective from an independent third party that lays out the risks as well? I think from Facebook's perspective, there are certain things, especially when it comes to XR, where they can't do what they need to do unless they have access to, say, what's happening with your hands. They need to know what your environment space is and your relationship to that space. And so there are certain aspects of the environment around you that they have to be recording. In this specific paper, one of the things that they say is that meaningful transparency in the context of AR and VR must acknowledge that data collection and uses in these contexts may be different from traditional technologies. In order to function, AR and VR systems process things like a person's physical location in a space and information about people's physical characteristics, some of which people might consider particularly private. And that passage references a paper by Mark Lemley from Stanford, as well as Eugene Volokh from UCLA, called Law, Virtual Reality, and Augmented Reality, which I'll dive into here a little bit more. Facebook goes on to say, for example, advanced VR systems may use technology to measure the movement of people's eyes in order to provide a higher resolution, more immersive experience. So here they're talking about eye tracking data and how that could be used in something like foveated rendering: looking at where you're focusing on the screen and rendering that at high resolution, with everything else at lower resolution. It's basically a way to get the most out of a limited amount of processing power.
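Since foveated rendering comes up here and the retention question comes up in the next paragraph, here is a minimal sketch of how the same gaze stream can serve the rendering feature without being kept: samples live only in a short rolling window, the latest point is used to pick the foveation region, and nothing is written to a permanent profile. The EphemeralGazeBuffer class and the thirty-second window are hypothetical illustrations, not any real headset SDK.

```python
# A minimal sketch of the "rolling window" idea: gaze samples are used in the
# moment (e.g., to choose the foveation region) and aged out after a short
# window, rather than being stored indefinitely.
import time
from collections import deque


class EphemeralGazeBuffer:
    def __init__(self, window_seconds: float = 30.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, x, y) tuples, newest on the right

    def add_sample(self, x: float, y: float) -> None:
        now = time.monotonic()
        self.samples.append((now, x, y))
        # Drop anything older than the retention window; nothing is persisted.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def current_gaze(self):
        """Expose only the latest gaze point, e.g. to decide which part of
        the frame gets rendered at full resolution (foveated rendering)."""
        if not self.samples:
            return None
        _, x, y = self.samples[-1]
        return (x, y)


if __name__ == "__main__":
    buf = EphemeralGazeBuffer(window_seconds=30.0)
    buf.add_sample(0.52, 0.47)   # normalized screen coordinates
    print(buf.current_gaze())    # (0.52, 0.47)
```

The rendering feature works identically whether the samples are discarded after thirty seconds or appended to a database forever; the difference is purely a retention choice, which is the distinction drawn in the paragraph that follows.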
I guess the thing that I would push back on is that, yeah, you could use eye tracking data for foveated rendering, but you could also be looking at my pupil dilation, what I'm looking at, what I'm paying attention to, and be able to extrapolate, say, my sexual preferences from what I pay attention to. And you could be recording that data forever and attaching it to a psychographic profile, so that everything I look at in virtual reality is recorded forever in some database that, with the third-party doctrine, the government could then functionally have access to as well. For me, the distinction is this: it's one thing to process data in real time, or even in a rolling window of, let's say, the last 30 seconds of data, and it's a whole other thing to capture that ephemeral data and store it forever. So those are some of the things that I think are at stake and that need a lot of discussion around them. But when we get into this issue of regulatory measures, the fact is that in the United States, it's very reactionary. You have to cross a threshold and have harm that is done, then you take it to a court case, and from that court case, law is put into action. There are certain ways that Congress could proactively take action, but they don't tend to do that very often. And in fact, when it comes to technology, sometimes they've taken action in a way that is too early, and then the whole industry goes in another direction. One argument that Lemley and Volokh make is that, you know, in the past, if you act too early, then things could go in a completely different direction. And so we're kind of left waiting for an issue that causes so much harm that there's a lawsuit that goes up to the Supreme Court, to be able to have some of these arguments around basic Fourth Amendment protections for some of this data, especially if we project out into a world where a critical mass of people are using this technology to interact. One of the things that Erica Southgate was pointing to is this problem of regulatory capture, the fact that the regulators are in bed with a lot of these major tech companies, that they're not really giving much oversight, that they're just letting them kind of write the rules. And so that's a risk if there's no independent oversight that has the technical aptitude to step in and say, no, actually, here are the risks that we know about and the harms that are done. There's always going to be an infinite number of risks and harms; it's really about identifying which ones are the most probable and possible. So because the regulation question is such a big one, it sent me down a rabbit hole. Once I saw that Facebook had put out their white paper, I saw they had referenced this article from Lemley and Volokh, from January 1st, 2018, called Law, Virtual Reality, and Augmented Reality. It's a journal article from the University of Pennsylvania Law Review. They said, VR systems are also likely to capture information that people may not expect and would consider particularly private. For instance, VR companies will want a detailed map of our bodies to allow us to interact realistically using avatars. They also may want sensory data about physiological responses to apps, both in order to rate games and to detect and fix errors making people sick.
They may want to track where my eyes move in order to prevent dizziness and to optimize display and rendering. Oculus, for instance, tracks users' head, hand, and eye movements, as well as whether they are sitting or standing. It shares that information with developers and perhaps third parties. Other companies may track, gather, and perhaps resell information aimed at estimating users' emotional responses. There are good reasons for companies to collect that data, but it is likely to be data that people don't expect they are sharing with a private company. So I think overall, it seems that both Lemley and Volokh are pretty hands off. I mean, they're pretty matter of fact, in terms of, yeah, a lot of this stuff is going to be recorded and the companies probably have a good reason to do it. But I don't think that they're necessarily looking at a lot of the harms and risks that could be done. So there's a sort of inevitability that they have around a lot of this stuff. And later they say that when we're talking and it feels like we're in the real world, then we feel like we are alone with somebody, and we may be more likely to share intimate secrets than we would on a public street or even in an email. But in VR, those secrets are inevitably being recorded somewhere and likely being retained. They go on to say later that our movements and actions in the physical world are increasingly observed, recorded, and tracked, but there are still spaces where we are not followed and acts that are not recorded and searchable. In VR, that will likely not be true. Everything we do, we do before an audience, a private company that may well keep and catalog that data, and may have lots of reasons to do so: data mining, security, user convenience, and more. So again, there's a little bit of this inevitability, that, yeah, this is going to happen, and not much recourse that they are presenting within their paper. But they do actually link to a paper by Gilad Yadin, who wrote a piece called Virtual Reality Surveillance that was in the Cardozo Arts and Entertainment Law Journal back in 2017. So, Yadin has a bit of a different approach, where he's actually tying together a lot of these different things in terms of surveillance and what's happening with the Snowden documents, you know, this wider concern for what's happening with our privacy. And Yadin says, you know, if legal institutions do not act and restore the balance, then virtual reality cyberspace may usher in an Orwellian future. Imagine a future society where the law does not effectively limit the flow of this unprecedented wealth of personal, intimate information into government databases; the benefit for a liberal society of a strong, straightforward virtual reality Fourth Amendment privacy protection seems apparent. So, Yadin is arguing that because of the spatial nature of virtual reality, it should have stronger Fourth Amendment protections, because there's a certain collectively agreed upon reasonable expectation of privacy in certain contexts. That applies to a phone booth, for example, or if you're in public and you go to a public restroom. The Supreme Court has extended some of those protections into physical spaces where we feel like we should have privacy. And so there's this consent that we have, and what we agree about what should be private.
But everything that happens on the Internet has been flattened, so that as far as the government is concerned, it's all being shared with a third party, and all that information is available for the government to access without a warrant. And so what Yadin is arguing is, I think, a very optimistic argument. And ultimately, he's a legal scholar, so until there's an actual decision decided by the Supreme Court, this is all legal theory. But he's looking forward and saying we should actually take what is happening in cyberspace and virtual reality and give it stronger Fourth Amendment protections, because there are new reasonable expectations of privacy that we really want to have, and it's already gone way too far with what's happening online, and we need to really rein it back. And that's why the ACLU was arguing in favor of the Carpenter case. And, you know, that was one step forward, where there were some aspects of cell phone data where, you know, we don't consent to the government having access to all of our movements just because we have GPS in our phones tracking us everywhere we go. And there should be some limitations in terms of certain types of information, but yet there's not a comprehensive framework for how to come down and say, no, actually, all of these different classes of information should be private. And so it's really asking for this comprehensive framework for privacy. The challenge has been that it has to not be vague. There's this vagueness doctrine where the law has to be clear about how to make a decision, and because privacy is so vague, it's actually difficult to really pin all those things down. So there's been resistance to creating new doctrines beyond the third-party doctrine. This is something that legal scholars have been debating over and over again, and it still has quite a lot of open questions that need to be expanded on. One other point that I'd put out there is that there seems to be a lot of discussion about Section 230 of the Communications Decency Act, and how it's essentially platform immunity, where there are lots of ways in which that has given kind of free rein for liability waivers to happen. Some people critique this platform immunity as a political project that has really enabled an out-of-control Silicon Valley to run all this different stuff. Lemley and Volokh point to scholars like Anupam Chander, who wrote a piece called How Law Made Silicon Valley in the Emory Law Journal back in 2014, arguing that some accidental decisions that were made, I think including this Section 230, led to why Silicon Valley was so dominant relative to everywhere else around the world, where there were maybe a little bit more paternalistic protections in certain respects, and that this unregulated innovation coming out of Silicon Valley was thanks in part to Section 230. So there's a bit of a debate within the legal community in terms of the importance of this protection of platform immunity and its role as a catalyst for innovation. So all this stuff around the legal aspect, I think, merits lots of very specific discussions with legal scholars talking about all these different issues, because there's a variety of different perspectives and opinions.
And ultimately, either we take a step back and let things play out as they are, or it's going to take some sort of harm being demonstrated to bring larger awareness to the legislators. And so maybe that's where the role of the academic research comes back in: showing what harms can be done, and ultimately, what actual data can be shown, either to feed into the companies to change their policies, or to go up to the level of governments to compel different changes to the terms that are possible. Virtual reality is just like any other technology. There are a lot of amazing things you can do, but there are also a lot of potential harms that can be done, and the benefits have to be weighed against those risks, along with a little bit more accountability: if this stuff is being recorded, then what type of personally identifiable information is there, how is it being used, and do you really need to store it forever? So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue bringing you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
