#1091: IEEE XR Ethics: The Erosion of Privacy & Anonymity

The ethical questions around privacy in virtual and augmented reality are some of the most pervasive, unanswered questions in XR, with the top two questions being: What new types of intimate biometric & physiological data are available through XR? And how is this new XR data being used? The intractable problems of XR privacy have come up in every single one of the 8 white papers published as a part of the IEEE’s Global Initiative on the Ethics of Extended Reality. Human-computer interaction researcher Mark McGill is excited about the new accessibility features and perceptual superpowers that may come with “all-day, every day augmented reality,” but he’s afraid that the lack of adequate consumer privacy protections and the half-baked consent models for bystander privacy may lead to social rejection and backlash for wearing devices that may not only be undermining the privacy of the owner of the AR headset, but also potentially violating the privacy boundaries of everyone in their immediate surroundings.

McGill works as a Lecturer at the University of Glasgow in Scotland in the Glasgow Interactive Systems Group, and he was the lead author of the IEEE XR White Paper on “The Erosion of Anonymity & Privacy.” I was a contributor to the paper along with Michael Middleton, Monique J. Morrow, & Samira Khodaei, but it’s a huge topic that McGill did a great job of tackling through the lens of what’s new about XR sensing, and then extrapolating both the potential accessibility benefits and the perils to privacy.

The chapters are broken up into XR Sensing and Computed Data, Identity and Anonymity of Self, Augmented Intelligence and Mental Privacy, Identity and Privacy of Bystanders, Worldscraping, “Live Maps” and Distributed Surveillance, Augmented Perception and Personal Surveillance, & finally a look at the existing Rights and Protections.

There’s a pretty bold conclusion that “the current system of digital privacy protection is no longer tenable in an extended reality world.” Also see my interview with Human Rights Lawyer Brittan Heller who also argues that a new class of data called “biometric psychography” needs to be legally defined in order to explain the intimate types of information that can be extrapolated from XR devices.

Here’s a talk I gave last year after I attended the Non-Invasive Neural Interfaces: Ethical Considerations Conference, which gave a sneak peek as to what’s to come with neurotech like brain-computer interfaces, neural interfaces, and the sensors coming to XR technology.

Here’s a taxonomy of the types of biometric and physiological data that can be captured by XR technologies as categorized across different qualities of presence that I first showed at that “State of Privacy in XR & Neuro-Tech: Conceptual Frames” talk presented at the VRARA Global Summit on June 2, 2021.
[Image: Taxonomy of XR Data]

McGill passed along a graphic from an unpublished pre-print tentatively titled: “Privacy-Enhancing Technology and Everyday Augmented Reality: Understanding Bystanders’ Varying Needs for Awareness and Consent” that shows how some of the same types of intimate information could be extrapolated with depth-sensing AR headsets. McGill emphasized that it’s not just about taking pictures of people with hidden cameras. It’s about capturing fully spatialized information that can then be segmented and processed on many different layers, revealing a lot of biometric psychographic information, which happens to be a lot of the same information that’s laid out in my taxonomy up above as you go further and further down the path of composite processing of data from XR devices:
[Image: McGill (2022), Privacy-Enhancing Technology and Everyday Augmented Reality: Understanding Bystanders’ Varying Needs for Awareness and Consent]

McGill shares a number of different recommendations in the White Paper, but many of them will also require buy-in and self-regulated behavior from the same big tech companies who are pushing forward with innovation of XR technologies while emphasizing the experiential benefits but downplaying the existential privacy risks. I suspect that we’ll ultimately need more robust privacy legislation that either expands GDPR to more fully account for the types of biometric psychographic data that come from XR, or perhaps the US will pass a new comprehensive Federal Privacy law — although all indications so far are that XR data are not being accounted for at all in the early draft legislation.

McGill and I do a very comprehensive 2+ hour breakdown of his paper, digging into all of the exciting possibilities of extended perception while also covering the many terrifying open questions as to how to rein in the concerns and put some guardrails on XR data. I’ll include some more links and references down below if you’d like to dig more into the discussions on this topic I’ve been helping to facilitate for the past six years. And like I said, every single IEEE XR Ethics White Paper mentions the challenges around XR privacy, and so be sure to also take a listen to the other podcast discussions to see how this topic shows up across a variety of different contextual domains.


TALKS & KEY CONVERSATIONS ABOUT PRIVACY OVER THE YEARS

PRIVACY CONVERSATIONS WITH FACEBOOK EMPLOYEES:

OTHER CONVERSATIONS ABOUT PRIVACY OVER THE YEARS

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So continuing on my series on XR ethics in collaboration with the IEEE Global Initiative on the Ethics of Extended Reality, today's episode is about the erosion of anonymity and privacy. So XR privacy is a topic that is covered in every single one of the IEEE XR white papers. It's such a pervasive issue when it comes to all the new data that are going to be made available, what happens to that data, and how is that data going to be used? So in this conversation, I talked to Mark McGill, who was the lead author of this paper, and I was a participant in this paper as well, participating in the discussions and providing feedback. So Mark's orientation is working as a human-computer interaction researcher, and he's using XR technologies in terms of accessibility, being able to augment your perceptions and to take advantage of the new dimensions of spatial computing. Now, the challenge is that there may be a lot of social rejection that happens with these technologies, because these devices are going to be recording lots of different information. Not only just the camera information, but also lots of biometric and physiological data of other people, in terms of bystander privacy, which is one of the really huge issues when it comes to augmented reality. So this is a topic that I've been diving deep into for many, many years. And I think this conversation is from the perspective of looking at all the different technical aspects of augmented reality and then spanning across all the various different difficult issues around tech policy and legislation. The real issue is that a lot of the conceptual frameworks don't really account for a lot of the biometric and physiological data that is de-identified and essentially doesn't fit under a lot of existing legal structures, and even the latest drafts of U.S. federal privacy law are not covering all the different new types of data that are going to be happening within XR technologies. There's still a lot of bleeding-edge work that has to be done, both philosophically and in understanding the technologies, and then on top of that, trying to come up with the conceptual frameworks to be able to define what those guardrails are to be able to have a more safe use of the technology. Anyway, we'll be doing a deep dive into XR privacy on today's episode of the Voices of VR podcast. So this interview with Mark happened on Wednesday, May 18th, 2022. So with that, let's go ahead and dive right in.

[00:02:27.591] Mark McGill: My name is Mark McGill. I'm a lecturer or assistant professor at the University of Glasgow in Scotland, in a group called the Glasgow Interactive Systems Group. And my research comes to extended reality from the perspective of a field called human-computer interaction, which is a rather broad field within computing science, and it focuses on things like usability, interaction design, social dynamics. It's a kind of cross between computing science, human factors, and psychology. And within this, I have a number of research interests, predominantly around this idea of ubiquitous, all-day, everyday augmented reality that we anticipate is just around the corner. And my main interests are around productivity, passengers, and privacy. So for productivity, I'm focused on things like ergonomics and augmented peripherals, and generally just trying to create more comfortable workspaces. For passengers, our big focus is a part of a European Research Council project called ViAjeRo, and that's using XR to do things like resolve motion sickness and support safe, socially acceptable use of XR. And those two topics are what kind of led me toward privacy as an interest. So I came at it from the background of the usability of XR and how it will improve people's lives. But the more I focused on augmented reality in particular, and this idea of this ubiquitous adoption and all our glasses are effectively AR headsets, the more concerned I grew about the potential negative impact that this may have on society. And this idea that particularly within my field, we are really working hard on extolling the benefits of these technologies without fully addressing or pausing to consider how to safely deploy them or what they're going to do to how society functions in the future.

[00:04:13.078] Kent Bye: So you're the primary author of this "Extended Reality and the Erosion of Anonymity & Privacy" white paper, participating in this IEEE Global Initiative on the Ethics of Extended Reality. So maybe you could just give a bit more context as to your background and your journey into covering privacy and XR in general.

[00:04:31.101] Mark McGill: Okay, so I'll go back to the main focus, which is this idea of everyday augmented reality. So the major tech companies on this globe are all pursuing this idea of a pair of glasses that has the capability to augment your perception of reality. You'll be wearing them all day, every day. They will in time supplement and eventually supplant the likes of the smartphones that we're using right now. These kinds of glasses will become the gateway to how we experience computing, this idea of spatial computing. The sensing required to drive these devices is quite problematic. So we've referred a bit in past research to requisite sensing. So this idea that a camera array or a directional microphone array underpins the functionality that makes these devices so intriguing and powerful to use. So the cameras in an AR headset are there first and foremost to enable the positional tracking of the headset. We can refer to six degrees of freedom. So these cameras have simultaneous localization and mapping. They are figuring out their position relative to the world. They're using other sensing to then figure out the orientation as well. And that's what allows you to render this exocentric spatial content. So I put on the AR glasses, they know the structure of the room, the mesh, the topography of the room, they can place AR content on the wall, they can augment my table surface, they can do whatever I want there. So that one bit of sensing, those camera arrays that are in these headsets, are fundamentally required to enable this idea of spatial computing through AR headsets. But the problem is that they're not there just to enable positional tracking, or at least we're not going to use them for just positional tracking. Because obviously, if you have access to that camera data, you can do so much more with it. You can get an understanding of the context of the user, the actions and behaviors of the user, their environment and their surroundings. And that's what's really going to make augmented reality particularly compelling. If you consider things like augmented perception, or augmented intelligence, where my glasses understand my surroundings, they understand the context of the interactions I'm having with other people, and they seamlessly interleave virtual assistants into that. That's a really compelling example of using those cameras. We're going to want to do that, but the presence of those cameras innately provides the possibility of privacy invasion, both to the users, obviously, because I wear glasses all the time. If I'm wearing AR glasses all the time, they will witness everything I do in my daily life. But not just me. I mean, I'm at least consenting to wearing these glasses, but if you're a bystander to me, you don't have any means of consenting to what my camera senses of you and what I do with that data. You don't have any awareness of that. So it's this tricky balance between this incredible capacity for enhancing lives and more deeply integrating computing into our daily lives, weighed up against the incredible scope of privacy invasion that's possible here. So that's really what interests me about this topic.
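As a concrete aside on the positional-tracking piece McGill describes, here's a minimal Python sketch of what a 6DoF pose from SLAM-style tracking lets a headset do: express a world-anchored virtual object in the headset's own frame so it can be rendered in place. The pose values and the anchor point are hypothetical, and this isn't tied to any particular AR SDK.

```python
# Minimal sketch (not any specific AR SDK): how a 6DoF pose from SLAM-style
# tracking lets world-anchored content be expressed in the headset's frame.
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical headset pose in world coordinates, as a SLAM system might report it:
# the headset sits 1.6 m above the floor, rotated 90 degrees about the vertical axis.
theta = np.pi / 2
R_world_headset = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                            [0.0,           1.0, 0.0          ],
                            [-np.sin(theta), 0.0, np.cos(theta)]])
t_world_headset = np.array([0.0, 1.6, 0.0])
T_world_headset = pose_matrix(R_world_headset, t_world_headset)

# A virtual label "anchored" to a point on the living-room wall (world coordinates).
anchor_world = np.array([2.0, 1.5, -1.0, 1.0])  # homogeneous coordinates

# To render the anchor, transform it into the headset frame by inverting the pose.
anchor_in_headset = np.linalg.inv(T_world_headset) @ anchor_world
print("Anchor position relative to the headset:", anchor_in_headset[:3])
```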

[00:07:48.367] Kent Bye: Yeah, I think that's a good overview of what's at stake here in terms of as we move into this new spatial computing reality, there's a lot of ways in which these sensors have to sense different parts of the world around them and potentially even parts of your body movements and whatnot, even for the technology to work. And so there's a contextual integrity, meaning that there's information that's required for it to function. And then there's the question of like, at what point do you take that information that's available and use it for other purposes, say advertising or surveillance or mapping identity, or there's all sorts of ways that our sense of our identity and our privacy and our intentional actions can start to be undermined with the pervasiveness of all this immersive technology.

[00:08:30.192] Mark McGill: And not just us, you know, the bystanders as well. So it's one thing if you purchase these glasses and you consent to do this and you choose what applications have permission to what sensors are on there. But it's another thing for everyone else subject to the gaze of your glasses in the course of your everyday life. They don't have a say in that there. So the user problem, I don't think we have a great handle on how to really minimize the privacy risks there. And then for the bystander or the passerby, I think the risks are even more greatly amplified. And it's one thing to consensually give away aspects of your privacy. It's another thing to non-consensually erode others' privacy as well.

[00:09:14.092] Kent Bye: Yeah, I think as we talk about XR privacy, there definitely seems to be the virtual reality part, which is all the sensors to be able to have, at least at this point, maybe more in a private context where maybe you're at home, and then the AR is taking that platform of all those sensor technologies and then including the world and everybody around you. So you have this bystander privacy, which is an additional complication of the implications of privacy when it comes to augmented reality being used in a public context and all these sensor technologies that are picking up on folks. So I think you're starting to break down, I think in this paper and also other subsequent papers that you've been working on, both the XR portion, which is what's the same between both VR and AR in terms of the individual privacy risk, but also the collective and public privacy risks. I know there's been a number of different discussions around the relational nature of privacy and how a lot of privacy has been focused on individual rights. But I think there are more and more movements, at least in some circles of the privacy discussions, of talking about the relational dynamics of information that you may have access to, that if you release that information, it actually could be revealing information about your friends or your family or other people that are in your network, and the social network graph analysis. So I think there's other relational dynamics that have to be taken into account. But I guess before we start to dive into this paper that you wrote for the IEEE, I'd love to hear a little bit more of how you approach this as a topic, because I've been covering the privacy implications of this virtual reality and augmented reality technology for around six years. And there's a number of different philosophies around privacy in terms of how to even define it, if it's a human right, if there's a contextual nature of it, or if it's more of a libertarian approach, meaning that we own our data and we can consent to what we can give over and buy, sell, and trade. I know folks like Dr. Anita Allen have talked about privacy as a human right, that maybe it should be treated more like an organ, that rather than selling it and trading it and consenting to give it away, it should be something that's fundamentally protected and that we shouldn't be able to just give it away. I feel like this landscape is so huge as we start to address these issues that I'm wondering how you started to orient yourself into the larger landscape of the legal discussions about privacy and the whole other existing discussions that are happening, and then what's new with privacy and what isn't taken into account with all those other discussions that are happening out there.

[00:11:30.540] Mark McGill: So I think in a slightly weird way, I kind of put privacy to one side when I started with the paper, because I started with the idea of just trying to appreciate what is different when we're referring to XR technology compared to existing similar technology, like smartphones, like IoT, smart home devices. What characteristics make XR particularly problematic? And I think I came to the idea that it's the scale and the scope of the potential privacy invasion there. So an extended reality device effectively amplifies a number of privacy risks. So the fact that you're wearing this device all day, every day potentially, or you're using it in private home contexts, means that obviously the amount of data that it can capture is significantly increased. So you're using it in your home living room, and it is by default detecting everything that's in the living room. It's got a map of the living room. There's bystander awareness. It's tracking people that enter and leave the room. So there's immediately a capacity there to invade your personal private space effectively. And I think compared to smartphones, compared to IoT, XR has this increased scope in terms of what it can sense around you. But then also the scale of it. It's not like we walk around holding our smartphones in front of us all the time with the camera active, capturing everything we're doing. It would be immediately apparent to anyone that's nearby that we're doing that. That will not necessarily be the case when we transition toward augmented reality. So despite the fundamental hardware being basically the same (the sensing in my smartphone is pretty much the same sensing as an augmented reality or virtual reality device might use), the form factor, where it's used, when it's used, how often it's used, the fact that it's effectively constantly surveilling what you do in your daily life, amplifies all the existing risks there. So that was one aspect: how does AR compare to existing smart technology that we use? And is it meaningfully different? And I think I arrived at the point, yes, obviously the scope and scale is significantly increased. And then given that, what might we use that kind of sensing for in the future? You know, this idea that you've got cameras there, people normally assume, well, okay, the cameras will be used to record what's going on, and maybe it tracks the environment, but there's a lack of awareness about just what you can extract from such a straightforward sensor that we're so accustomed to seeing in devices there. You know, the idea that that camera can track everyone that's around us, that it can do biometric ID of who we're looking at, that it can make determinations based on that data around what we're doing, our actions, our behaviors, that even things like physiological data of bystanders can be captured. My camera can pick up your heart rate variability if I so choose. So I think in combination, it's what we can do with that data that is deeply problematic. And then the form factor and the context of AR usage just amplifies all the risks that are exposed there. And that was just looking at camera data. You can look the same at microphone data or any other physiological sensing that's attached to these headsets and pick out so many additional risks there. Even eye tracking, there's so much you can infer just from a user's gaze. So I came at it from a sensing perspective. What does this technology enable differently compared to our existing smart home technology?
And I think the kind of obvious conclusion there is that, yes, there's an incredible threat to privacy there. And then, considering that, okay, what stands in the way of applications, individuals, the companies that drive these platforms? What stands in the way of them performing such privacy-invasive actions? So firstly, we have a problem that actually a lot of the capabilities that we discussed that AR and XR make possible: people are probably going to want to make use of biometric ID or physiological sensing. You can come up with use cases there where you say, well, my AR headset could detect who I'm talking to at a conference and give me salient details about them to help as a conversational aid. Wow, that sounds like a really useful use case. Or if we're talking about physiological state as well. So, you know, if you have a headset that can perform some physiological sensing there, like heart rate variability, there will obviously be a use case there for exercise. So while I want that sensing in my device, I will give permission to applications to get that data. And in doing so, people may not be aware of just how far their privacy will be eroded by providing that data, because it will seem innocuous to them. Well, I'm giving an application access to eye-tracking data, or I'm giving an application access to my physiological state. People don't realize necessarily what can be inferred from that and how that data can be processed further to infer deeper insights into them, like the biometric psychography you discussed there. So we have a set of incredibly potentially invasive sensing. And we have a set of users that will, in all likelihood, freely grant permission to many applications to use this sensing. So okay, what other protections do we have available then that's going to prevent a kind of mass privacy invasion? Well, there's the legal side. In the EU and the UK, we have GDPR, which is the General Data Protection Regulation. And that is meant to balance this right to be seen versus the right to be recognized. And the intent here is that there needs to be some kind of lawful basis or legitimate interest for what happens with this data. But my worry with the existing legal protections is that they are largely untested when applied to this kind of technology. And there was a nice quote that I saw from Marian Cole, referring to body-worn video: "Most body-worn cameras do not provide any privacy-mediating procedure, and thus the de facto procedure is to opt out verbally by asking the device user to turn off the camera." So if we're considering GDPR protections, there's a real lack of clarity as to how they will work with augmented and extended reality generally. So the legal protections I am somewhat dubious of right now. We're really at that point then relying on what platforms introduce in terms of protections right now. And really, if we're talking about virtual reality, the primary protection is just that Meta and the likes predominantly don't allow access to the cameras. Okay, that solves the issue for right now, because developers and applications then find it obviously much harder to extract this wealth of data. But that's a band-aid, that's a temporary fix. If we're talking about what you can do with augmented reality, they need access to that data, that camera feed. That's what will drive some of the most compelling AR experiences.
And it's not clear then how we take that next step where we can have the best of augmented reality technology, where we can make use of that data to assist in augmented perception or augmented intelligence or all these really compelling features of AR, whilst also protecting against the misuse and abuse of that data.
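As a rough illustration of McGill's point that an ordinary camera feed can reveal physiological signals like heart rate, here's a toy Python sketch of the idea behind remote photoplethysmography (rPPG): the tiny frame-to-frame colour changes in facial skin carry a pulse signal that simple spectral analysis can recover. Synthetic numbers stand in for a real video feed; this is not a production heart-rate estimator.

```python
# Illustrative sketch of the rPPG idea: subtle colour changes in facial skin
# carry a pulse signal that spectral analysis can recover. Synthetic data
# stands in for a real camera feed.
import numpy as np

fps = 30.0                      # camera frame rate
t = np.arange(0, 20, 1 / fps)   # 20 seconds of "video"

true_bpm = 72.0
pulse = 0.02 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)       # tiny pulse-driven change
green_channel = 0.6 + pulse + 0.005 * np.random.randn(t.size)  # mean green value of a face region

# Detrend, then find the dominant frequency in a plausible heart-rate band (0.7-4 Hz).
signal = green_channel - green_channel.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
estimated_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {estimated_bpm:.1f} bpm (true: {true_bpm})")
```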

[00:18:53.113] Kent Bye: Yeah. Yeah. And I think as you were talking about how you're taking a very technology-driven approach, analyzing what's new about the technology, and then from there extrapolating not only what the privacy risks are, but also reflecting upon the existing legal structures to see how there's a lot of gaps between the newness of the technology and how there's a lack of protections amongst existing legal structures. And I think we'll, we'll get to the legal section here in the last part where you start to really unpack that, but maybe before we get there, it might be worth focusing on the technology and talking about these sensors and unpacking what they can and cannot do. I think you started to do that here in what you were just talking about. In the paper here, you have the XR sensing and computed data where you talk about the movements and physical actions that are including everything from the optical inertial tracking of the head, body and limb movements. You have the EMG, so electromyography, neuromotor input, the sensing of facial expressions, the auditory sensing of speech and non-speech activity. The neural activity, you have EEG for brain-computer interfaces. For the context, you have location tracking, the SLAM technology, as well as the machine learning-driven analysis of all the data, the contextual relevance of all that information and how to make sense of it. And then the physiological aspects of the eye gaze tracking, the heart rate variability sensing, and other biometrics. When I started to look at this as an issue, I started to map out all the different lists of biometric inputs that are out there that have already been integrated in the medical context. And as I looked at Project Galea, I started to see other devices that I wasn't aware of. And there's a whole huge long list of what is going to be able to be tracked within the context of XR technologies. And as I try to discern what is the essence of each of those different aspects, I started to map it over to the different qualities of presence. And so if we think about active presence, all the different behaviors, intentions and actions and movements, creations and engagements, mental and social presence, so you have the mental thoughts, cognitive processes, cognitive load, social presence, and your expectations that you have being extrapolated and predicted in some ways, the emotional presence. And so your affective state, emotional sentiment, facial expressions, and micro-expressions. And then the body presence is the different sensory input processing, stress, arousal, physiological reactions, your eye gaze, your attention, body language, and muscle fatigue. So when you add all these things together, all that biometric and physiological data is, in some sense, representing these different qualities of presence as you're in these different worlds. And so, given that, as we start to capture all this information, then that information, you know, with the EMG, Thomas Reardon of CTRL-Labs, who is now the Director of Neuromotor Interfaces for Meta's Reality Labs, was saying that they can start to detect with EMG the firing of an individual motor neuron, which means that they can detect the intention of one motor neuron for how it intends to move. It doesn't mean that it will actually move, but even with that intention of movement, you can then start to extrapolate how your body will move.
And then from there, as it's a fusion of all these things together, it starts to map out your phenomenological and physiological state, your mental state and your actions. That starts to also touch on neuro rights, the right to your mental privacy. So all that stuff that's happening inside your head; all the stuff that is your identity, so the stuff that is trying to map out who you are and how you identify, but also what you're interested in, what your values are, and what you're willing to pay money for. And then the last one is the right to agency or action, so the degree to which it's able to nudge your behaviors. I feel like as you start to map out all these different sensors, the context that all of this is happening in is with these technologies that are owned by big tech companies that are potentially using this to feed into their existing surveillance capitalism modes or other ways of using it that are contextually relevant, but also going above and beyond that to do things that you may not consent to. That's at least how I start to think about it. And as you start to map out some of those sensing data, I see some overlap in some of the ways that you start to think about that. But I'd love to hear some reflections on this XR sensing and computed data, just setting a baseline for what we're talking about here.
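For reference, the presence-based taxonomy described above can be written down as a simple data structure. The groupings below paraphrase the conversation and the earlier taxonomy graphic; the structure itself is just an illustrative sketch, not any formal standard.

```python
# A rough encoding of the taxonomy discussed here: XR sensor-derived data
# grouped by the quality of presence it speaks to. Entries paraphrase the
# conversation; the dictionary is purely illustrative.
XR_DATA_TAXONOMY = {
    "active presence": [
        "behaviors", "intentions", "actions and movements",
        "creations and engagements",
    ],
    "mental and social presence": [
        "thoughts", "cognitive processes", "cognitive load",
        "social presence", "predicted expectations",
    ],
    "emotional presence": [
        "affective state", "emotional sentiment",
        "facial expressions and micro-expressions",
    ],
    "body presence": [
        "sensory input processing", "stress and arousal",
        "physiological reactions", "eye gaze and attention",
        "body language", "muscle fatigue",
    ],
}

for quality, signals in XR_DATA_TAXONOMY.items():
    print(f"{quality}: {', '.join(signals)}")
```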

[00:22:52.971] Mark McGill: Yeah, I mean, you're right that this wealth of data will lead to a whole bunch of compelling XR-enabled experiences. So I imagine it'd be fairly trivial to persuade people to buy devices that have a subset of the kind of list of sensing that you showed there, because each of those sensors will be tied to some compelling new use case that drives it, bolstering augmented intelligence or accessibility reasons or for augmented perception. And because that data is there, it's right to assume that other parties will try to take advantage of it. This idea that you're building more of a descriptive model of your behaviors, your likes, your dislikes, your activities, that will be incredibly valuable to any of the tech companies that own these platforms. So I'm kind of torn here because we have this wealth of incredibly valuable data, but it could obviously be misused in ways that we did not anticipate in the future. So what do we do here? Do we design headsets that try to keep this data local to your device? Well, we could do that, but then we're not taking advantage of the wealth of cloud computing capability that we have here to generate the insights that are going to drive the compelling use cases. So we're going to really want to make use of cloud computing. And if we're making use of cloud computing, then that inevitably means that in some way, shape, or form, that data has to leave your device, go to the cloud, and be processed in some way, and then have the results returned to you. And that's where we get into a problem because, you know, laws like GDPR try to put some kind of control over how that data is processed, how it's retained, your rights to recover that data, your right for removal. But given that wealth of data and given the value of that data, I imagine that companies will work incredibly hard to keep as much of it as they possibly can and try to get users to consent to allowing them to do further processing and longitudinal processing of that data. So yeah, I think that's where I arrived at this idea of this consensual erosion of privacy. You know, we are going to be presented with incredibly compelling devices that will really enhance our everyday lives. And the cost to that may well be this consensual giving away of a lot of our data to allow these companies to generate that model, the biometric psychography, the insights into our lives that they can then exploit to nudge our behaviors. I think my concern there is that many people may be perfectly fine with that as a trade-off. I mean, you look at how Amazon, for example, handles its current Kindle tablets as a kind of benign example. Right now, you can buy one that's subsidized with advertising and people, I presume, I don't know the statistics, but I presume that that leads to significantly more sales potentially for the discounted version, because people are willing to trade a bit of their time and a bit of their attention to get a cheaper device. And for Amazon's side, well, they benefit by being able to pipe targeted advertising directly at you. So it's quite trivial to extrapolate that out into augmented reality and see that, well, you know, this company will have an AR headset that enables all these compelling experiences, but you have to allow your data to leave the device. And you're going to agree to some rather onerous terms and conditions there in order to do so. You're probably not going to pay a huge amount of attention to them.
You're going to hope that GDPR or whatever your local legislation is, is going to reasonably safeguard your data. And in return, you're going to get an incredible spatial augmented reality experience. The company gets some subset of your data. They then exploit it over time, develop this model about you, enact that kind of nudging, enact the targeted advertising, manipulate your behavior in ways that you may never even realize potentially. And I find that deeply problematic. And I worry that we are, as a society, going to sleepwalk into that scenario. Consensually, we're going to see these compelling devices and say, great, sign me up. Give me that AR headset. Take my data. I don't care at this point. So then what do we do to try and safeguard against that? How do we protect users from themselves potentially? Or how do we educate people as to how they manage their data? What protections do we mandate in these platforms or in how that data is processed? What kind of pressure do we put on these companies to prevent them from doing these activities? Or do we need new legislation around behavior nudging to try and prevent this kind of exploitation? I'm not really sure, but I think there is a broad scope for interventions and mitigations there. And the timing of this is at least that this everyday ubiquitous augmented reality doesn't exist yet. No one is walking around, well, unless you count Ray-Ban glasses with cameras in them, no one's really walking around with the kind of devices we're describing, but they're not far off. Every day you see new advances advertised in terms of the optics and the form factor. Facebook, or Meta now, they're testing their Project Aria form factor, which is basically a pair of glasses with all the sensing we've talked about, just without the augmented reality rendering capability. And that was them testing out the social perception of this technology and trying to understand, well, how do we make it acceptable and balance the data capture against the privacy legislation? So we are on a precipice of seeing this technology hit this adoption. It might be two years, it might be five years, it might be 10 years. I'm not sure where we are in terms of when Apple or Meta or Microsoft or Google will be the first one to hit the market with that compelling, glasses-like, everyday, all-day form factor, but we know it's not far off. So there's a real urgency there to try and figure out what the best protections are that we can instigate pragmatically that won't necessarily restrict people from making the best use of these devices, but will at least protect from the worst of the potential privacy invasions or the worst of the potential misuse and abuse. I think that's maybe all we can reasonably hope to aim for in the next few years.

[00:29:10.175] Kent Bye: Yeah, there's a lot there that you were mentioning that I think kind of reflects the nuances of this issue. I think the economic realities of how surveillance capitalism and the ways in which the advertising ecosystem is really subsidizing the existing functioning of the internet in so many ways and so many businesses as well, Facebook, Twitter, Google. You know, all these companies that are these utilities that we use day in and day out, either to search and find things or to communicate. But I guess the trade-off that I see is that there's access that's been made available because things are made more cheaply to be able to make it more equitable in some ways, but what are the costs in terms of mortgaging aspects of our identity and our privacy? And I think that's been a trade-off that we've been willing to take for a long time, just because it has had a little bit of distance, at least psychological distance, in terms of the level of intimacy of what type of information is being collected.

[00:30:04.899] Mark McGill: You can choose to put your phone away, right? You can choose to put your phone away. You can choose to turn the camera off. You can choose to deny that permission. You can disengage to an extent. But if there's that transition toward AR and spatial computing, disengagement will be so much harder to achieve. And even if you personally choose to disengage, what about the others around you?

[00:30:25.288] Kent Bye: Right. That whole relational dimension of privacy that I was mentioning earlier. But yeah, there is that direct use, but there's also all sorts of other sketchy behind-the-scenes data brokering of our financial information and stuff that we're not even fully aware of. And if people were, we'd probably be horrified by all the information that's in that ecosystem. I was talking to the EFF's Katitza Rodriguez, and she was saying how a lot of privacy harms are invisible. So when there are harms to our privacy, like if we are discriminated against or we don't get a job or we don't get access to insurance, we don't always know why we're denied, especially as we move forward, with more and more ways in which that information gets out and is causing harms that we can't fully even articulate because they're invisible. And I think that's the challenge with some of these things as we talk about this section, where you start to map out the implications for our identity and the harms that can happen from those transgressions. And as we have more and more intimate information, as we talk about all the information that is going to be made available, then aspects of identity and the autonomy of the self start to get undermined. The recommendation that you have here is that XR stakeholders should actively develop and/or support efforts to standardize differential privacy and other privacy protocols, to provide the protection of individual identities and data. And I wanted to reflect on that a little bit, just because there is differential privacy, there's homomorphic encryption, and there's this idea that if it's transferring from one server to the next, which you were talking about, you have this concept of, at least in the United States, the third-party doctrine, meaning that anytime a third party has access to that information about us, then it has no reasonable expectation to remain private, which means that the U.S. government can get access to that information without a warrant. So if you're talking about taking all this data and recording it and sending it up to a third-party server, in essence, the third-party doctrine means that, at that point, the government can more or less have access to that same amount of information. And so it's not only the context of the companies having that information; it's also that if it gets into the hands of the government, that has other implications that go above and beyond if it's processed locally. So I think there's the Fourth Amendment and privacy implications in terms of what the government has access to. But there's also, when I talked to Brittan Heller, she says that a lot of the focus of a lot of these laws is treating your identity as an immutable object, meaning that you have your name, your social security number, your address. These are all things that could connect you to your identity, who you are, versus something that may be dynamic or changing. You're watching something, you have an emotional reaction to it and it's logged, but all the context and relational dimensions of that information are logged, meaning that you were looking at this thing at this moment in this context, and that is starting to map out your psychographic information, maybe with real-time processing where no data is being stored, but an inference is being made based upon all this other information. So even though there may not be an implication of the data being stored somewhere,
it's still coming up with an algorithmic decision about your identity, and it doesn't matter if the data is stored or not, because the decision has already been made and there's no way to go back and reevaluate that. But at that point, it becomes more of a matter of getting around the existing regulations by doing more real-time processing and real-time inferences that may not be addressed by some of these regulations. So I feel like there's the element of the government, but also the element of the degree to which we're moving into this more real-time, contextual processing and inference-based approach that's trying to extrapolate this psychographic information rather than the more immutable, personally identifiable information.

[00:34:00.221] Mark McGill: I think as well, so you come at it from a US perspective. I come at it from a more EU-UK perspective. And these are two countries with some of the best privacy protections there. At least the EU GDPR is the gold standard that's often mimicked. But I mean, the protections do vary significantly country to country. And I know that in the US, OK, there's not quite that GDPR level standard. But you can work your way around the globe and find varying protections there where governments will be more or less empowered to use that data towards ends that these companies may not necessarily anticipate. I mean, you look at the social credit scheme in China, for example. And you imagine how you might use that wealth of data, if available, to enhance that capability. And they may compel companies to make that data available. Or more likely, they will determine a manufacturer within the country to make their own variant of that headset and diverge a bit. So even if we do have pressure on the likes of Meta and Apple to make as privacy-respecting a device as possible, other manufacturers and other countries may lead to the creation of devices and platforms which don't have those kinds of protections. So we're kind of unlocking a Pandora's box there where, okay, we might argue that we can instigate enough legal protections for the US or the EU, But then the tools and technologies that we've unleashed upon the world will still be used in other countries towards misuse and abuse events effectively. So I find that kind of deeply problematic as well.

[00:35:45.843] Kent Bye: Yeah, just to kind of follow on the first recommendation here about the differential privacy, because I do think that as the data gets stored, my interpretation of a lot of the differential privacy is that the company wants to make decisions and conclusions based upon aggregations of lots of data. And if they just store all that data, then that means that there's a risk that, even with anonymous de-identified data, you have enough things that you're able to tie together to re-identify who that person is. And so there's a leaking of your personal identity based upon all this aggregated information where they're trying to make these collective decisions. There's also the concept of homomorphic encryption, which is more of a cryptographic way of being able to do processing on the data without having access to the data. And so that would require architectures that would have more processing load in real time, but would provide more privacy protections. So that's more of the hardware architecture layer, while the differential privacy layer is more about what algorithms are doing at the collective level and how the data is being ingested and controlled so that it is preserving the privacy. Those are just some reflections on the first layer, because I do think there are going to be some technological solutions, but I think there's other legal as well as cultural aspects and economic aspects that I think are also playing into this, but there are some layers of technological infrastructure that are going to help to address some of these issues. And I think that when I see this first recommendation, I see you're starting to evaluate some of those different technological solutions that can start to address some of these potential pitfalls.
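For readers unfamiliar with the differential privacy idea mentioned here, a minimal sketch of the Laplace mechanism follows: an aggregate statistic is released with calibrated noise so that no single person's record can be confidently inferred from the output. The data and parameters are toy values, not a vetted privacy library.

```python
# Minimal sketch of differential privacy via the Laplace mechanism: release an
# aggregate with calibrated noise so no single record is confidently exposed.
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)   # max change from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# e.g. average heart rate across many headset users, without exposing any individual reading
heart_rates = np.random.normal(loc=72, scale=8, size=10_000)
print("True mean:   ", heart_rates.mean())
print("Private mean:", dp_mean(heart_rates, lower=40, upper=180, epsilon=0.5))
```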

[00:37:15.489] Mark McGill: Yeah, but how do we compel platforms and companies to take on that kind of best practice? Because in a way it's against the potential further use of the data that they're capturing there. We can design these architectures. Do we have to legally mandate them? Or are we going to hope that there's sufficient social pressure or other guidance there that's going to compel them to adopt this best practice? I think that's where I'm a bit skeptical, because if you had access to that kind of data, why would you give it away? Why would you not try to make use of it to enhance your platforms and enhance your understanding of the users that are on them? I'm not sure that there's enough public awareness there, that there would be a sufficient drive from the potential users to force these platforms to use best practice here. And it will be a race to the bottom at that point, because it's all well and good if one company comes out and adopts that best practice. But if another comes out and doesn't, and is wildly successful and can get that kind of data set and use it, then others will follow suit there.

[00:38:21.335] Kent Bye: Yeah. Yeah. I think this is probably an instance where there needs to be some legislation that would maybe demand or dictate some of this, because a lot of these are business practices that are up to the device manufacturers. There are also legitimate engineering trade-offs when you have these things where you have increased amounts of privacy, but it may have less capabilities. And so a lot of times as consumers, we want to see the most powerful technology that's unhindered by anything. And I think that those trade-offs haven't necessarily been laid out in that way. And so, because it's additional processing power to do something like homomorphic encryption in different architectures, then yeah, that's not really adopted. But yeah, there were some leaks recently in terms of the types of data that is being collected at, you know, the Facebook social networking aspects of Meta as a company, and this idea of a data lake of just 15,000 features for each person that were being tracked to different degrees and levels, and just data that's being thrown into this giant data lake where they have no idea where it came from and how to revoke access to that. And so yeah, just violations of the GDPR. But, you know, as we move forward, the question is: do we want to live in a future where all this biometric and physiological data is just added to this data lake that cannot be tightly controlled and then maybe has a data leak? And then all of a sudden, all that information is in the hands of a state actor or something, and is then at that point used with much more of a geopolitical implication. So I think there's a tendency to think about these companies as a big vault where the data is impenetrable. But if you just assume that it's a sieve that is going to leak out information, then I think that also changes the types of approaches that we want to take here. And what are the mandates from a legal perspective that are going to enforce some of these best practices?

[00:40:03.561] Mark McGill: Yeah, and it's not just about your data as a user, it's about the data of the environment around you as well. So when Meta were talking about Project ARIA initially, they were describing how they could use the wealth of data captured from disparate AR devices to generate these kind of live maps of environments where they could have this evolving real-time understanding of the context, what's happening, who's in the environment. And you can immediately come up with compelling use cases there for letting people know there's an event happening or identifying that there's friends or colleagues in the crowd that they could go and meet. So we'll end up giving that data away, but then we're also giving these companies a kind of longitudinal understanding of our everyday lives and the everyday lives of everyone around us. So yeah, I think it's deeply problematic.

[00:40:55.568] Kent Bye: And so we're moving on to the next section, section 2.3, Augmented Intelligence and Mental Privacy. Maybe you could set a larger context, and then we'll dive into your recommendations.

[00:41:05.613] Mark McGill: Yeah, so I think with the augmented intelligence, it's this idea that this device can effectively supplement your everyday interactions with other people and support those interactions. It has this contextual awareness of what you're doing, what your intent is, what you have to do later in the day, and it can support these kinds of activities. And I think that will be, firstly, incredibly useful for people with accessibility needs. So if you have, for example, acquired brain injury or some other form of impairment, then augmented intelligence is naturally going to be very beneficial to assisting you in your daily life. But I think it's also a capability to effectively enhance our intelligence in everyday life as well. So we're having a conversation right now where I am scrolling through the PDF of this paper, trying to remind myself about what I've written. If I had my contextually intelligent AR headset, it would be picking up cues from the conversation. It'd be picking up things about your body language and my body language. It might look at my heart rate variability and my stress and my cognitive load. And it might start to suggest ideas to me about how I might respond to your question. And that would be incredibly useful in a context like this. It might make me seem vastly more intelligent and knowledgeable than I actually am. So that kind of capability is going to be one of those really compelling use cases of these devices. But it's also quite a problematic one. So firstly, it introduces something of a, what's the best way to put it? It's an advantage, right? To anyone that has that device, that capability, and has that cloud computing architecture backing them, they effectively have an advantage over others that do not have that device or have those capabilities. So if you're going for a job interview or something and your augmented reality headset can effectively support you through that job interview, okay, you've gained an unfair advantage in a sense. But underlying all that is all that contextual data that that device is capturing during our conversations, which is then relayed to the cloud to be processed to give me those suggestions. Again, we go back to: is that data retained? How is it retained? How long is it retained for? What further insights does it generate into my personality or my likes and dislikes, as you were saying? So that, again, is adding to that data lake, but it's doing so in a consensual way, because I will want to use that feature on these glasses because it will make my life easier. It will make my job easier. We will become somewhat, I imagine, dependent upon these kinds of technologies. And that's tricky because once we become dependent on them, it will be very hard to extricate ourselves back out from this. And at that point, we are effectively at the mercy of the companies that provide those capabilities and the terms and conditions they provide to us about what they do with our data.

[00:44:03.878] Kent Bye: Yeah, it reminds me of the five neuro rights that Rafael Yuste and the Morningside Group had put out. And the first three I think of as more identity and privacy. That's the right to mental privacy, the right to identity, and the right to agency. So to have your mental thoughts and your actions, but also the mapping of your identity and then the nudging of your behavior. And then they have one that's to be free from algorithmic bias, which I think is a larger AI issue in terms of like how to make the training of the algorithms not be unjust or bring undue harm to specific demographic populations. And then the last one that they have is the right to fair and equitable access to these technologies. So if there is this movement towards augmenting our intelligence, then how do you create a fair and just society if it's only the rich people that have access to that augmentation? And so if this is going to be a future we're moving into, then how to make it more equitable in a way. And so, yeah, I think it gets into deeper economic issues of how to do that. But as you talk about the augmenting of intelligence, I think there's also the mental privacy here that is also brought into the discussion. And as a recommendation, you say that XR platforms should seek to adopt voluntary proposals such as neuro rights to help ensure that the mental privacy of users is not violated. I think this is the thing I mentioned earlier: the right to mental privacy, the right to identity, the right to agency. All of these are connected in different ways in an ecosystem of surveillance capitalism. But I don't know how easy it is to define these things as concepts, and then how to set the boundaries for what is and is not okay. Part of the challenge, I think, is that it reminds me of Helen Nissenbaum's contextual integrity, meaning that when you go to a doctor, you're sharing information with the doctor because that information is going to help with your health. But when you go into a bank, you don't necessarily show your medical information; you are talking about your financial information. And so it is not contextually important to know what your heart rate is in a bank, just as your doctor may not need to know how much money you have in your bank account; as long as you have insurance or can otherwise pay, they don't need to know how much extra you have. So in essence, there's contextual relevance to information. And I think this is the challenge as we move forward into this: what is contextually relevant and what isn't. And I think GDPR attempts to do that in terms of declaring what those uses are and why people are doing it, but it ends up being this consent fatigue. I was just in Belgium, and every single website I went to, I had to declare how many cookies I wanted tracking me, and there was a variety of different interfaces; some provided me with genuine consent, and some were basically like, we're going to track you, and if you don't want to be tracked, here's another minute of work you have to do to uncheck all these things in order to actually see this website. So I worry that when it comes to this as an issue, you are kind of in an already broken consent model as we move forward, just adding all sorts of additional complexities onto that, where people are just going for the path of least resistance to have the experience they want to have,
without the infrastructure to be able to set preferences in, say, a file that would say, Hey, you know what? When I go onto the internet, I don't want these things tracked. This is my declaration about what I want. And if I want to change that, then I can increase it. But as it is now, it just seems to be like a terrible model that is still essentially this adhesion contract that kind of forces you into a contractual relationship where you don't really have a choice in the matter. So when I see this, that's some of the different stuff that starts to come up.
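The kind of machine-readable preference file being wished for here might look something like the following hypothetical sketch: a user-authored declaration that devices and applications could consult instead of prompting per site or per app. No such XR standard exists today, and the field names below are invented purely for illustration.

```python
# Hypothetical sketch of a user-authored XR privacy preference file.
# The schema and field names are invented for illustration; nothing like this
# is currently standardized or enforced by real devices.
import json

MY_XR_PRIVACY_PREFERENCES = {
    "version": 1,
    "defaults": {"allow": False},          # deny any data stream not listed below
    "streams": {
        "positional_tracking": {"allow": True,  "retention": "session_only"},
        "eye_tracking":        {"allow": True,  "retention": "on_device_only"},
        "bystander_imagery":   {"allow": False},
        "heart_rate":          {"allow": False},
        "targeted_advertising_profiles": {"allow": False},
    },
}

def stream_allowed(prefs: dict, stream: str) -> bool:
    """Return whether a given data stream may be collected under these preferences."""
    return prefs["streams"].get(stream, prefs["defaults"])["allow"]

print(json.dumps(MY_XR_PRIVACY_PREFERENCES, indent=2))
print("May capture bystander imagery?", stream_allowed(MY_XR_PRIVACY_PREFERENCES, "bystander_imagery"))
print("May capture hand tracking?", stream_allowed(MY_XR_PRIVACY_PREFERENCES, "hand_tracking"))
```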

[00:47:24.485] Mark McGill: Yeah, and I mean, how we manage consent right now on smartphones and other devices won't translate effectively to this concept of everyday augmented reality as well. This idea that, yeah, there's consent fatigue is going to be a considerable problem, because given the kind of capabilities we've talked about, do you want to be prompted every time your device is going to try to access a particular sensor or derive a particular insight from available sensor data? Probably not. People will willfully agree to all the permissions that an app on that device requests and give that data away. So yeah, I think the idea of consent alone is not sufficient when we're talking about protecting against these kinds of violations of privacy in augmented reality. I think there need to be other aspects about having more transparency in what the device is doing, both to the user and to the bystanders around you. Because at least then, you'll have some awareness of what the device is doing, and then you can act if you think that there's a particular issue in a particular context. So for your point on adopting voluntary proposals, such as the neuro rights, there's a line here about what do we mandate as a society? What becomes legislation that we force them to do? And what is sufficient that we can ask them, well, can you please try to adopt these proposals in your platforms? I have struggled with this idea of community guidelines and voluntary proposals, because it's not in the best interests of the companies that are developing these platforms fundamentally. It might be if there's a sufficient backlash from society, that we say, right, this is the line in the sand, the device should not be able to do this. But that's really where politicians and legislation and such should really step in. And if we're relying on these companies to act in our best interests, we're already in a weakened position here, because inevitably, given access to this data, they will try to exploit it in ways that we didn't expect.

[00:49:30.625] Kent Bye: Yeah, I think these again are larger issues with the existing systems that we have in place. And I think, again, this is another instance where the XR technology starts to break the existing models that we have, which is forcing us to maybe reconsider what better architectures for these issues would be, because it's already a broken system. So how do we make it better?

[00:49:49.917] Mark McGill: Yeah, especially with GDPR, for example, it's easy to kind of use consent as a means to circumvent the protections that are there. And it'll be quite easy to somewhat trick users into granting consent, or, as you were saying, break down their willingness to engage with consent protections to the point that they just agree to blanket permissions for particular applications or particular uses of that data, at which point GDPR becomes effectively irrelevant. So we need better protections there, and I think we need protections that are targeted toward XR specifically. I don't trust that community-led principles or social norms or voluntary proposals alone are really going to be enough in the long term to prevent the kinds of abuses that we are anticipating may occur.

[00:50:44.308] Kent Bye: Yeah. And as you go through the different recommendations, recommendation three, which we've been sort of talking about, is that XR platforms should disclose in plain language and give users agency over what personal data is being captured, how this data is processed and to what ends, and for how long it and its processed outputs are retained. So that's a lot of the GDPR. And I think that's a good basis and best practice. I think, again, it starts to break down in terms of consent fatigue and all these other issues.

[00:51:11.395] Mark McGill: Oh yeah, so on the face of it, it is a good recommendation. We wrote it, we put it in there, and you read it and think, yes, that makes sense. In practice, how do you actually achieve that? What is the data that we're talking about there? Is it the pure access to the camera data? Is it the inferred, computed, processed data that's derived from that? How do you help users to understand what is possible? What can be done with that data? What insights can be later derived from that data? We don't even know the full extent of the insights that can be derived from that data. If you take one particular sensor stream that you identified previously, there will be research on that sensor stream over the next decade or so where people will infer new insights that we never thought possible when they apply machine learning and deep learning to that data set. So even if we're attempting to disclose to users, well, here's the data that we're going to capture, here's how we intend to use it, and here's how it's processed and how long it's retained for: how much do they trust that? How much do they understand that? How much can they meaningfully engage with that? And then will it lead to a level of fatigue where they don't care anymore, because they're being prompted continually for the plethora of data streams there, for the plethora of applications that are active? So as much as we have that recommendation in there, and I think in principle it is something that we should strive to achieve, in practice it is incredibly difficult to achieve. I mean, there's a field in HCI around usable security, which is really targeting that concept. It's the idea that you can present a system that is private and secure and gives users these controls, but it may not actually be usable. The users may not engage with it. It may be too complicated. It may be too much of a cognitive load. They may not understand what's going on. And thus the actual point of the system is rendered moot because they don't engage with it. So how we enact such protections in a way that is usable is a real research challenge there. We can look to how permissions are handled on existing smartphone devices, but I don't think they are really sufficient when you translate it to the everyday AR context, frankly.
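
To make recommendation three a bit more concrete, here's a minimal sketch of what a machine-readable disclosure record could look like, one that a platform could render into plain language per sensor stream. Everything here is a hypothetical illustration (the field names, the categories, the plain_language helper), not any vendor's actual schema, and it doesn't solve the deeper problem McGill raises about insights that haven't been invented yet:

```python
# A minimal sketch of a machine-readable "data practice" disclosure, assuming a
# hypothetical XR platform exposed one record per sensor-derived data stream.
# Field names and categories are illustrative only.
from dataclasses import dataclass

@dataclass
class DataPracticeDisclosure:
    stream: str                   # e.g. "eye tracking", "outward-facing camera"
    derived_insights: list[str]   # what is computed from the raw stream
    purpose: str                  # why it is processed
    retention_days: int           # how long raw + derived outputs are kept (0 = on-device only)
    shared_with: list[str]

    def plain_language(self) -> str:
        """Render the record as a single plain-language sentence."""
        kept = ("discarded immediately" if self.retention_days == 0
                else f"kept for {self.retention_days} days")
        return (f"We use your {self.stream} to infer {', '.join(self.derived_insights)} "
                f"for {self.purpose}; this data is {kept} and shared with "
                f"{', '.join(self.shared_with) or 'no one'}.")

disclosure = DataPracticeDisclosure(
    stream="outward-facing camera",
    derived_insights=["room geometry", "object labels"],
    purpose="placing virtual content in your space",
    retention_days=0,
    shared_with=[],
)
print(disclosure.plain_language())
```

Even a structure this simple shows the tension: the record can only describe the insights the platform knows about today, which is exactly the limit McGill points to.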

[00:53:25.202] Kent Bye: Yeah, I know that within the context of WebXR, this was an issue for a while, in terms of when you're in a web browser and you want to give permission to go full screen, you have to consent to that. And when you go into a WebXR experience, there are also different levels of access across platforms and ways of consenting at each of those different levels. So I think it's going to be a challenge to see how this continues to play out. And maybe we get back to the philosophy of privacy, which goes back to Dr. Anita Allen's take that this is a fundamental human right. Maybe we shouldn't be taking more of a libertarian approach where you're giving the user control to make these decisions, because with all the various trade-offs of what is lost in these decisions, the user's not really informed enough to be making them. So it's sort of like the equivalent of allowing us to sell our organs, because we don't know any different or at what point we're giving away too much information.

[00:54:15.301] Mark McGill: We've just scratched the surface of what these devices are capable of. The kinds of activities that we talk about, the augmented perception, augmented intelligence, biometric ID, your internal state, your physiological state. Yes, it can do all of that right now if you want to, but it will be able to do a lot more in the future that we haven't even anticipated yet.

[00:54:37.463] Kent Bye: Yeah, and the last recommendation in this section, which I think is actually kind of related to a discussion I just had about identity and digital twins, is that individuals should have the right to decide how their identity or representations or modifications thereof, such as digital twins or augmented appearance, are perceived or appropriated by others in XR. I think this kind of gets into identity issues: how we manage our identity or representations of ourselves, but also digital twins, and also the right to identity, which is a part of the neuro rights, which is about what happens to the representations of our identity, who controls it, who owns it. I think that's a larger issue that already exists: just by consenting, within these adhesion contracts, to have information collected to serve advertising to us, we're already allowing these companies to do all sorts of things with modeling our identity. Now, how that identity is represented in an embodied form, I think, is another issue, but how that information is available is also an issue. It feels like the Wild West, a free-for-all in terms of what's already possible, and it's already totally unregulated. And as we move forward, how do we really wrap our minds around it as an issue and kind of rein it in a little bit?

[00:55:46.763] Mark McGill: I mean, how are identities portrayed in social XR platforms, for example? It enables an incredible amount of self-expression, so there are lots of benefits there, for instance for people with disabilities. Your identity is malleable. Your outward appearance, based on your identity, is malleable on these platforms. But because of that property, it immediately puts your identity at risk of spoofing, of someone else taking your identity and misusing it or appropriating it in some other platform or in some other way. And that risk exists in social VR in particular right now, but that risk will be transposed to social AR type experiences and amplified in the future. So the idea that my outward appearance, which is kind of associated with my identity, could be captured and appropriated by others. They could capture it in reality and then appropriate it in virtuality. Okay, that's problematic. That someone can use their AR glasses to augment their perception of me in some way that I am completely oblivious to is potentially more problematic. Right now, people are quite accustomed to augmenting their outward appearance using the likes of Snapchat or Instagram filters. And these filters will at some point transition to the kind of heads-up display of the AR glasses. As a society, what are we going to do there when I can walk around and alter the appearance of all those that are around me? I'm not entirely sure, but I think the popularity of these applications suggests that it's a feature that we will readily adopt as a society and use. And it may have benign uses. Yeah, it might be funny to render a rain cloud over my head raining on me as I'm walking around or something, but it will also have far more potentially abusive uses in different contexts. I mean, this is kind of mixing up identity and appearance and how your identity is outwardly represented through avatars. But I think identity generally is a challenge across the extended reality continuum here. In social VR right now, we are facing challenges with how you protect your particular identity and how you can be identified in these environments as you. In augmented reality in the future, it's more about the fact that others will have so much more power over how you appear in the world. Someone could decide that everyone sees me with that rain cloud over my head. Maybe I wouldn't like that. Maybe that would be a fairly dystopian society we're wandering around in.

[00:58:18.900] Kent Bye: Yeah, all really good points. And as we move on to section 2.4, Identity and the Privacy of Bystanders. So bystander privacy, for me at least, got onto my radar when Project Aria started to come up: just putting all these sensors on and going around and recording everything around you. What are the implications of that? And as I look into this as an issue, what's interesting is that the contractual structure under which a lot of these devices work is that, in order to use them, you have to sign an adhesion contract. It's kind of like a one-way negotiation: you take it or leave it. And you are basically agreeing and consenting to all these different dimensions of what you, as an individual using that device, are deciding to provide to that device. But as you are wearing that device and walking around in the world, that's where you start to get into the more relational dimensions of privacy. Now, all of a sudden, you're implicating other people who have not consented to that type of surveillance or capture, or don't know how that information is being used, where it's going, if it's being reported up to a server, if the government's getting access to that information. I mean, there are all sorts of ways in which our public spaces are changing. We have a little bit of anonymity and privacy when we walk around in public spaces today, but we're moving into a potential future where, when you are in a public space, you actually have little to no privacy according to these big companies. And so I'd love to hear some of your reflections as you start to unpack this issue of bystander privacy, both in this section and in other work that you've done.

[00:59:46.032] Mark McGill: Yeah, so these risks have always existed over the last decade or so as we've adopted more smart technology, but this is where the risks are amplified. You could always have wandered around in that public space with your smartphone, with its depth camera and LiDAR array, volumetrically captured that space, and then done something with that data, but you choose not to. And this is quite an overt gesture, right? So people are kind of aware that if you're wandering around holding your smartphone up, okay, you're probably doing some kind of recording. Maybe people will intervene. Maybe social pressure and social norms will lead to you stopping that behavior. But when we move toward everyday AR, as you're saying, those kinds of behaviors become invisible to the bystanders. These platforms are aiming toward AR glasses in a form factor where the actual visibility of the cameras is minimized. Bystanders may just look at you and think that's a normal pair of glasses. So firstly, we have no real capability to convey to bystanders what that device is doing in relation to them. Now, there are token efforts. On some of the newer AR glasses, you might get an LED status notification, a kind of light in the corner of the glasses that tries to indicate when the cameras are potentially active, for example. But when we're considering what AR can do, that's wholly insufficient. What can an LED convey about whether I'm capturing your likeness to use in a VR experience later, or whether I'm augmenting your appearance, or whether I've removed you from my perception of reality entirely? It can't convey that breadth of how I might use that bystander data to the bystander. So there's a real disconnect there, where the bystander is unaware of how their data is being captured, how it's being processed, and then how they are being perceived by the AR user. And how do we resolve that? So we've started to do research there to look at how we might provide activity awareness to bystanders, so that they can have some kind of understanding about the mode of your device or what activity it's doing, that kind of peacocking behavior where, if you're volumetrically capturing someone, your glasses will try to advertise that to them. But there's a tension there, because would the AR user want glasses that do that? That might make them socially uncomfortable or socially unacceptable to the AR user. Then there's whether the bystander will even see that, or whether the AR user might try to obfuscate or cover up any kind of external feedback that's being provided there, or they might even modify their own glasses to try and circumvent any kind of protections that we implement entirely. So I think bystander privacy is problematic because how do we include bystanders in the loop of consent and awareness for the activities that pertain to them? And that's what we're starting to look at in our research. In recent surveys that we did, for example, we asked: how might we convey activity awareness to a bystander? How might we design mechanisms so that, once they are aware, they could revoke consent to an activity or be asked to consent to an activity? But it kind of goes back to your consent fatigue point as well, and usable security and usable privacy generally. We could design a mechanism where your headset tells me what you intend to do, and then I can engage with your headset and do a consent gesture or say yes, fine. Will people use that?
Probably not. It's quite a heavyweight mechanism to define there. And again, this goes back to this whole consensual and non-consensual erosion of privacy. We may just slip back into, well, it's too difficult to involve bystanders in the loop here. Therefore, we're going to put the onus on the AR user and the AR platform to try and make sure that whatever they're doing with bystander data is, like you were saying, contextually relevant. So we're trying to minimize the privacy risk while accepting there is some risk there. But how do we determine that when someone wants to apply that Snapchat filter of the cloud over my head, or appropriate my appearance in their zombie killer game, or do anything else with that sensing of bystander data? I'm not entirely sure yet. And I think that's where we need more research to really figure out: how do we include the bystander in the loop? Can we include them in the loop? And how can we do so in a way that is actually usable and that people can engage with? Or do we need more contextual intelligence in the AR headset, so the AR headset can make more automatic determinations about how to protect bystander privacy? Project Aria kind of does that in a very light way already. They mentioned that if it detects you're in a public restroom, it will shut the cameras off. Okay, fine. That's one way in which you can deal with perhaps a very privacy-invasive situation, but it's quite a heavyweight solution. It won't really apply to your use of AR in public spaces. So what do we do in those kinds of spaces? Can we create more nuanced contextual privacy mechanisms there, or do we have to try and include bystanders in the loop? And really, based on our research so far, I'm not sure. I think that's one that we have to really actively work on in the next few years, because the scope and scale of privacy invasion for bystanders is so problematic. As you say, they don't consent to this. They don't have a say in what happens here. And all it takes is one AR user in that public square to capture a wealth of data about hundreds of people over the course of the day. So it's not even like we need a high degree of AR adoption to lead to these kinds of privacy issues. Actually, a very modest degree of adoption in dense urban populations will be enough to scrape an incredible amount of data about these bystanders.
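
As a thought experiment on the "activity awareness" idea McGill describes, here's a rough sketch of the kind of structured signal a headset could broadcast to nearby devices or badges, going beyond a single recording LED. The activity categories, the JSON payload, and the idea of a beacon at all are assumptions for illustration; this is not how any shipping device works today:

```python
# A rough sketch of a structured "activity awareness" beacon a headset could
# broadcast to nearby devices. Categories and transport are assumptions only.
import json
import time
from enum import Enum

class SensingActivity(str, Enum):
    IDLE = "idle"                                       # sensors off, or on-device SLAM only
    RECORDING = "recording"                             # capturing photos/video for later replay
    VOLUMETRIC_CAPTURE = "volumetric_capture"           # capturing a spatial reconstruction
    BYSTANDER_AUGMENTATION = "bystander_augmentation"   # altering how nearby people appear

def activity_beacon(device_id: str, activities: list[SensingActivity]) -> str:
    """Serialize the headset's current sensing activities so nearby devices can parse them."""
    return json.dumps({
        "device": device_id,
        "timestamp": time.time(),
        "activities": [a.value for a in activities],
    })

# Example: the glasses advertise that they are volumetrically capturing the scene.
print(activity_beacon("headset-1234", [SensingActivity.VOLUMETRIC_CAPTURE]))
```

The hard part McGill identifies isn't the message format; it's whether AR users would tolerate wearing glasses that advertise this, and whether bystanders would ever notice or act on it.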

[01:05:52.355] Kent Bye: Yeah, this is such a complicated issue. You know, with phone cameras, initially when they were launched, you would have a shutter click sound. But eventually, I think people got used to people taking photos. And there were fears that if you had a camera on your phone, then if you went into a bathroom, people were going to violate your privacy. Well, that may have happened, but it's not a big scare now that going into a public restroom is a threat. But I think there's going to be an initial period where people are not used to AR headsets. You had an initial kind of glasshole backlash effect with Google Glass, people being like, hey, don't be recording me, I don't want that, along with the associations with different technology companies. Meta has been collaborating with Ray-Ban on the Ray-Ban Stories, and they have cameras on these glasses. There's a very discreet white light on the camera, and it's not very clear; if people are not familiar with it, they would probably not even notice, because they're so similar to how other Ray-Ban style glasses look. And so there are already people walking around with hidden cameras on their face. And as you go into this future you're talking about, of always-on, pervasive recording of everything, everywhere you go, that's a whole other future we're living into, and there are so many standards and ethics and guardrails that need to be in place. The solution that Meta came up with in terms of bystander privacy was to say, in the terms of service for anyone that owns Ray-Ban Stories glasses, that the wearer is the one who's responsible for gaining the consent of the other people they're recording, which, by the way, is not happening. There's no social standard to say, oh, by the way, I'm going to be recording, do I have everybody's consent? And what if one person says no? That question's not even being asked. And so if you are recording something and publishing it, then if someone does have a problem, it goes back onto that user who did that transgressive behavior, maybe recording something that wasn't fully consented to by everybody. So I think that's not necessarily a great way to solve it either, to put all the onus onto the users of these technologies and put the blame of any transgression onto that individual for violating the terms of service. I don't know. It just feels like the convenient way is to offload the responsibility and the liability onto the user without having the companies figure out how to deal with this as an issue: either build architectures that actually viably address it, or decide this is a normative standard that we're just going to get used to over time.

[01:08:16.340] Mark McGill: Yeah, I mean, the scope and scale of those transgressions will go far beyond recording. Recording is just what is immediately apparent with Ray-Ban Stories. That's the core function. They're not AR glasses in the classical sense yet. They don't have that output capability to do something with what the camera footage is recording. So you can kind of argue, well, an LED indicator is maybe appropriate, because it only really has one mode: it's recording or it's not recording, and there's not a huge amount of functionality or activity that it has to try and convey to bystanders. But when we move toward a device which doesn't just record, or doesn't record at all, but takes that camera feed and does some other kind of privacy-invasive behavior with it, so it appropriates your identity or the environment around you, it distorts the reality around you, then we're into a deeply problematic space, because the device doesn't have the capability to convey that to the bystander, and the bystander consequently doesn't have an awareness of what the device is doing. So we will come up with perhaps more privacy-invasive behaviors beyond simply recording something for our photo and video albums, and bystanders will be completely out of the loop as to whether or not that's happening.

[01:09:37.073] Kent Bye: Yeah. I wanted to share this graphic that you did. You were putting together an article on this, and you have this big set of both visual sensing and auditory sensing, with increasing dependencies and composite processing activities. I'm wondering if you could maybe talk through and step through some of the stuff that's in this graphic to help me understand what you're trying to say with some of these examples of different processing activities. Because you mentioned that it's going to go way beyond just the camera and recording, getting into all sorts of other types of physiological or other biometric recordings, but also the sound and visuals, and how you start to layer these and how that's used in different contexts.

[01:10:18.243] Mark McGill: Yeah. So for a given XR device, you can start off with: it has cameras. It will have a camera array of some kind, effectively an RGBD depth camera. It will maybe have LiDAR. It might have multi-spectral or non-visible imaging as well. And at a base level, you can use that camera array to do some very AR-typical things. You can track objects in the environment. You can track markers. You can do some basic computer vision image processing on that to pick out elements in your environment that you might want to segment and then remove or alter or do something with. The cameras are also there to drive the positional tracking, the SLAM tracking of the headset. But those foundations, that kind of basic computer vision that's applied to that camera feed, enable further processing and activities. So if you can segment the objects in your environment using the pure vision, okay, now you can pipe that into a deep learning or machine learning algorithm. You can start to do some classification and categorization there. If you've got segmentation of a person in the environment, and you've got the right sensitivity in terms of your camera, you can start to derive insights about non-contact physiological measures. So I could maybe look at you and determine your heart rate. And it's this idea that with each step where we further process that data, we are deriving new insights and enabling new processing steps. So I now have your heart rate data. I now have a segmented image of you. I can then start to do some social signal processing about your body language. I can combine that with the heart rate data to try and make some kind of inference about your cognitive state or your comfort. And then that might be piped into yet more processing capabilities, and then eventually go out to the app that is requesting that particular data stream. And so it's this idea that when you're starting off with something which is seemingly quite innocuous, the camera feed, the insights that you derive from that can grow exponentially as you layer them up. Research is continually coming up with new ways of processing that data to come up with new insights. At the extreme end of the visual sensing side, we're talking about doing feature tracking of facial expressions, or your body, or your gaze, or your hands. We're talking about topological mapping and scene understanding, the idea that you're using this headset to generate a complete understanding of the environment you're in. And then that can be fed into other algorithms, which might try to derive things about your internal state. Or they might use that data to actually generate augmented reality experiences. They might alter reality based on that. I might decide to selectively occlude you or selectively replace your face with something else. So yeah, each of these steps, it's kind of like building an onion from the inside out. And with each layer of this, we are increasing our capacity to understand the world and to alter or augment our perception of that world. And that's where I don't think people are really aware of that. When you confront them with the idea that these headsets have cameras, they intuitively think, well, a camera records reality. And it's either recording or it isn't. And if it's not recording, that's fine by me. If it's recording, I might want to know.
I don't think there's really a general appreciation of just how far you can go with that data in terms of the insights that you will derive there, and the fact that research is actively uncovering new insights there every day, powered by deep learning, powered by machine learning. They're not topics I'm researching personally, but I am continually astounded by the kinds of papers that come out there. I mean, I remember a few years ago, someone figured out how to use Wi-Fi propagation to determine the heart rate variability of a person in the room. If you'd asked me as a computing scientist, is that possible? I would have never thought that they could use a Wi-Fi router to track my position and determine my heart rate variability, but it's possible. And we will continually be surprised at what new outputs can be generated from that data. It's an incredible technology, because it means that that pair of AR glasses or that pair of VR glasses has the capacity to continually surprise us and implement new capabilities, but it also means that our understanding of how to protect our privacy there is constantly shifting, because for one abstract bit of data derived from that, which we thought was relatively safe at one point, someone may come up with a new way of processing it to derive some new insight about personal characteristics or behavior or personality or that kind of thing. So that's on the visual sensing side, but it applies equally to all the other sensor streams that we can envisage capturing from these devices. So even talking about the microphones in these devices, they will typically have directional microphone arrays. That's something that's been pushed a lot because it's useful for capturing your voice for voice interactions. It can be used for active noise cancellation. It can be used for sensing and detecting events in your environment. At a basic level, you can do the same kinds of processing on auditory data as you do on the visual data. You can do segmentation of speech based on speaker diarization. You can detect the direction and volume of different auditory impulses that are picked up. And then based on that, again, you might start to do some classification. You might do some speaker recognition. You might then link AR content to a particular speaker based on deriving their identity from that. You can apply more social signal processing, so you can determine the emotional state of the person based on their voice. You might then apply some AI to it to try and understand the context of what they're saying and the intent behind what they're saying. So again, we're kind of layering the onion up and deriving yet more and more insights from what is a very modest sensor, really. If you ask people, are you concerned about your AR headset having a directional microphone? They would probably shrug and say, well, my Alexa, my Google Home devices have microphones. That's okay. What can it do? It can just record my voice. I don't have any interesting conversations for it to hear. I'm okay with that. They don't realize the extent to which that data can be further processed to derive these new insights. And yeah, that applies to every sensor stream that you can think of that can be captured by an AR headset. And it's an incredibly powerful thing from a human-computer interaction perspective, because it means that we'll have all these advanced capabilities in a pair of glasses on your head. With such modest sensing, we can achieve so much.
But from a privacy perspective, it then poses significant challenges because people will not necessarily be aware that the mode of sensing is allowing people to derive insights about characteristics, health, behavior, and then use all this data to alter their perception of that person or perform other forms of abuse there.
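
The "onion layering" McGill describes, where each processing stage enables insights the previous one couldn't produce, can be illustrated with a toy pipeline. The stage functions below are stand-in stubs rather than real computer vision or remote photoplethysmography; the point is only the structure, that each layer consumes earlier outputs and adds new derived data that the raw camera feed alone never exposed:

```python
# A toy illustration of layered inference over XR sensor data. The stage
# functions are stand-ins, not real CV/rPPG implementations.
from typing import Callable

Insights = dict[str, object]

def segment_people(insights: Insights) -> Insights:
    # Stand-in for person segmentation on the raw camera frames.
    return {**insights, "person_masks": ["person_0"]}

def estimate_heart_rate(insights: Insights) -> Insights:
    # Stand-in for non-contact heart-rate estimation, which only becomes
    # possible once people have been segmented out of the frames.
    if "person_masks" in insights:
        return {**insights, "heart_rate_bpm": {"person_0": 72}}
    return insights

def infer_affect(insights: Insights) -> Insights:
    # Stand-in for social-signal processing fusing body language + physiology.
    if "heart_rate_bpm" in insights:
        return {**insights, "inferred_state": {"person_0": "calm"}}
    return insights

PIPELINE: list[Callable[[Insights], Insights]] = [
    segment_people,
    estimate_heart_rate,
    infer_affect,
]

insights: Insights = {"camera_frames": "<raw RGBD feed>"}
for stage in PIPELINE:
    insights = stage(insights)

print(insights)  # each layer added insights the raw feed alone did not expose
```

The same layering applies to the audio side McGill mentions: diarization enables speaker recognition, which enables per-speaker emotion and intent inference, and so on.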

[01:17:23.116] Kent Bye: Yeah, what's really coming home as you go through all this is that when we talk about neuro rights in the context of a virtual reality headset, it's a lot about how that individual VR context is able to understand what's happening in your environment, and the different ways it's able to track your behaviors and your actions and your intentions: your active presence and your cognitive load and your mental presence. And then you have the emotional presence, with all the different affect and micro-expressions and facial expressions. And then finally the embodied presence, in terms of what your body's doing, your attention, your actions, your movements. That complex of all those things maps onto the neuro rights of mental privacy and identity, but also your actions and your agency. And this mapping here is basically doing the same thing, but for AR. You can imagine an array of AR headsets within a context, talking together, being able to map the behaviors of other people, segmenting them and extrapolating all of these other things. And once you do that, then with that array of sensor fusion you can start to detect the same level of intents and emotions and actions, the behaviors, the gestures, the body language, the way that they're walking, and make all this psychographic extrapolation on that. And then, on top of that, potentially change the way that you perceive those people. So the mapping that you have here, I think, is tying a lot of those things together, taking the same aspects of those neuro rights but putting them into more of this AR context.

[01:18:49.784] Mark McGill: I don't think this is a bad thing either. The capabilities that this kind of layering of processing of AR data enables include things like augmented perception. So Meta have been actively working on examples where we use that directional microphone array to pick up the speech of another person in a bar that you're conversing with, and then amplify that speech and do some active noise cancellation of the other speech. Great. For accessibility reasons, it will be incredible to be able to pick up the speech, do live subtitling, or do live translation and layer that translation next to the person. Google, only last week I think, were showing off AR live translation services. So it's not to paint too negative a picture of this. This further processing of the data will enable incredible capabilities, but we have to be cognizant of the risks that they introduce as well and how to safeguard against them. We want to be able to make sure that, if appropriate, a user can make use of augmented perception, or that we can make the best use of these accessibility features, without, for example, enabling someone to surveil and eavesdrop on someone at the far end of the bar by using the exact same features. And that's where we get into the usable privacy side of this. We have this capability. How do we instigate protections or privacy-enhancing technologies so that the bystander is involved in there as well, or so that we're detecting things about the context in order to prevent the really significant misuses of it? And it won't be a perfect system in that regard, but I think we have to actively work toward preventing at least the worst of that potential misuse whilst allowing people to make the most of the benefits of AR.

[01:20:41.428] Kent Bye: Yeah, I guess with any technology there are great potentials for how it can be used and also harms where it could be abused or misused, and trying to weigh those, I think, is important for any new technology. There's the Collingridge dilemma, which says that at the very beginning of a technology you have the most capability to start to regulate it, but you don't want to overregulate it and prevent innovation. So you let the innovation happen. There's a small window where you understand what the technology is well enough that it can be regulated. But often what happens is it kind of slips into mass ubiquity, people adopt it, and then it becomes almost too late to put the genie back in the bottle. And so I feel like there's this challenge with tech regulation where you preference tech innovation without understanding the full social implications. You don't recognize the full social implications until it's already been fully deployed out into the culture and adopted by the culture. And by then it's kind of too late to reel it back in in any reasonable fashion. So I feel like this has been the challenge of tech policy: it's what's called the tech pacing gap, where the technology is just moving so much quicker than our ways of understanding it and trying to prevent any harms that are happening, while at the same time not acting too early and preventing the innovation. So there's this Collingridge dilemma that has been described, but I don't know how to get around it. I feel like-

[01:21:53.072] Mark McGill: There is another risk there as well, which is social rejection. You exemplified it when you talked about the glasshole debate. We've already been through one cycle where, at least to an extent, the social acceptability of the device led to a rejection and a withdrawal from the market. The release of Ray-Ban Stories has been followed by a lot of advertising recently. I heard of adverts in Ireland, radio adverts and such, where Meta are trying to describe the context of that technology so that people understand and don't reject the usage of those devices. So they're clearly concerned about the potential for social rejection as well. So there's a sword of Damocles dangling over AR here. We want to make sure that we can make the best use of this technology, and we want to avoid the glasshole-type social rejection of it, because it will benefit so many people's lives for the numerous reasons we've talked about. So that's where having these kinds of privacy-enhancing technologies is really fundamental to preventing that social rejection. Because otherwise we may find that a company comes out with a particular AR headset that sees a significant level of adoption and then a significant backlash against it because of the kinds of capabilities that it enables. So yeah, I am an advocate of this technology. I see immense potential in how it will transform people's lives. I don't want to see rejection. And I think the best way to ensure that is to deal with these kinds of privacy issues now and try to instigate the best protections we pragmatically can without overly restricting users.

[01:23:28.386] Kent Bye: Yeah, and from my perspective, I think there may eventually need to be some legislation that dictates the contextual uses, because the consent challenges are not robust enough to handle some of the different aspects here. So we may be facing the need for some sort of guardrails to rein in the limits of the types of data that can be collected for this type of surveillance capitalism, from both VR technologies and AR technologies. I think eventually it may come to that.

[01:23:56.708] Mark McGill: I think platforms should welcome that, because in the end it's in their own best interest: it will encourage the safe adoption of this kind of technology by society. It will reassure people that, okay, I may have some concerns about the fact that I've got this microphone and camera array strapped to my head, but I get the sense that there are enough protections in place that it's not going to be overly privacy-invasive of myself, my family, the people I live with, the people that I encounter on a day-to-day basis. Now, in practice, if you were to talk to people developing these platforms, I'm sure their reaction would be, oh, that would be a bit of a pain, but I think it's in the best interest of the XR community that we have good legislation covering this kind of technology. It will encourage further adoption, and it will make sure that society can safely adopt this technology.

[01:24:47.968] Kent Bye: Yeah, well, before we get to the last three sections, I wanted to just cover the two recommendations on bystander privacy. I think bystander privacy is probably one of the biggest open questions in terms of how to actually deal with it. Your recommendation number five is that where some aspect of bystander data is legally permissible to be captured and processed, bystanders should be made aware that this capture is occurring and should have the capacity to revoke implicit or assumed consent for capture. So we talked about this a little bit, in terms of how to actually do that technologically or, from a social norm perspective, how to get consent. I mean, you talked about the ads that were happening, but this is a wider issue: is there some sort of phone you carry around that somehow digitally communicates that you are opting out, or are there other, more passive ways for you to not consent, rather than gathering active consent? So I don't know if there's a good solution, either from a legal perspective or a technological one. But I like the idea. I just don't know how to pull it off.

[01:25:44.632] Mark McGill: Yeah, so I mean, we've speculated a bit about this in our research, because we started off with the very overt consent management mechanisms of, okay, you might use a gesture, you might use a vocalization, but you're effectively creating a negotiation between the headset and the bystander based on the intent of the AR user that is asking the headset to do something there. But obviously, that is an incredibly heavyweight mechanism. You know, if you're wanting to volumetrically capture a scene briefly for a holiday snap, are you feasibly going to go around everyone that's in the field of view of the headset and ask them to directly consent to your device? No, people will reject that technology outright. But we have used that consent mechanism as a simple, understandable way of trying to probe when people might want to manage their consent over a particular activity. If you say to them, you have the capability to revoke consent, or the device has the ability to request consent, okay, what are the problematic activities that shouldn't just be immediately enabled there? And then once we have an understanding of when consent might be deployed, our idea there is to try and derive a set of privacy preferences from that, in the same way that you might simply go onto Facebook and indicate your privacy preferences in a social media platform. Can we come up with some simple, usable set of preferences that you can indicate about what you are and aren't willing to allow in relation to your data as a bystander? And then, if we can do that, how do we share those preferences? How do we convey that to other disparate devices around you? And in particular, what if you don't have an AR headset? What if you're a bit more of a technophobe? How can you still have agency over how your data is sensed by these AR devices? I mean, we have some ideas there as to how you could potentially feasibly do it. Some existing research, like Marion Koelle's, for example, has looked a lot at consent gestures as one way of doing it, the kind of overt way of doing it, or at wearable items that may have privacy preferences embedded within them, kind of like a QR code, like a badge or something that has a small bit of data that can be parsed by the device and says, here's how you should treat my data, here's whether you're willing to be augmented or captured or volumetrically captured and so on. So I think there's scope there to derive those preferences and convey them to an AR headset. And then the AR headset could take into account your preferences, the context, its contextual knowledge of where you are, and the social relationship it may have toward you as well, which is probably going to be highly influential to whether an activity should be allowed. And I think if you could have a framework like that, and if you could ensure that it is sufficiently private and doesn't introduce other forms of abuse, like enabling stalking of the person because they have that badge or something like that, then I think it's feasible to envision a way where bystanders have agency over how their data is used in relation to other AR users and other AR devices, without having the heavyweight active mechanisms of managing consent. Now, we haven't done that research yet. That system doesn't really exist. We're working toward trying to design that in our research right now.
But my hunch is that that is one feasible solution that would require little to no engagement and would hopefully be quite usable. But it's not a solution without its caveats, because can you sufficiently describe someone's preferences and attitudes toward the plethora of activities that an AR device can potentially do with their data, or the ways they might be augmented by that device? I think that's quite challenging, because even if you had a question saying, well, what's your attitude to how your appearance might be augmented by others, it's very different if someone augments my appearance to further sexualize me, or turns me into some violent zombie, or does some kind of bullying or abusive behavior and renders signs pointing at me, mocking me. Augmentation of appearance alone doesn't describe the range of ways abuse might be enacted there. So then how do you draw that line? How you capture those preferences in a way that you can meaningfully act upon them is difficult. But yeah, so I think broadly, heavyweight consent mechanisms are a tool we're kind of using to explore attitudes toward consent. I want to see, like you're describing, some more implicit, lightweight, near-invisible mechanisms for handling bystander privacy, where bystanders are included in the conversation, albeit not in a way that requires them to accept and consent to every activity that pertains to them that they encounter in their daily lives.
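
To make the badge idea a little more concrete, here's a minimal sketch of how badge-conveyed bystander preferences might be checked against a requested activity, assuming a hypothetical preference payload (for example, decoded from a QR code) and a default-deny policy when no preferences can be found. The categories, field names, and decision logic are all illustrative assumptions, not the design McGill's group is actually building:

```python
# A minimal sketch of checking bystander preferences against an activity,
# with opt-out as the default when no preferences are conveyed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BystanderPreferences:
    allow_recording: bool = False
    allow_volumetric_capture: bool = False
    allow_appearance_augmentation: bool = False

def activity_permitted(activity: str,
                       prefs: Optional[BystanderPreferences]) -> bool:
    """Return True only if the bystander has positively opted in to the activity."""
    if prefs is None:
        # No badge / no preferences found: default to opt-out for any
        # bystander-affecting activity.
        return False
    return {
        "recording": prefs.allow_recording,
        "volumetric_capture": prefs.allow_volumetric_capture,
        "appearance_augmentation": prefs.allow_appearance_augmentation,
    }.get(activity, False)

# Example: a badge that permits being recorded but not augmented.
badge = BystanderPreferences(allow_recording=True)
print(activity_permitted("recording", badge))                # True
print(activity_permitted("appearance_augmentation", badge))  # False
print(activity_permitted("recording", None))                 # False (no badge found)
```

As McGill notes, the hard part is that a handful of coarse flags like these can't capture the difference between a playful filter and an abusive one, so any real scheme would need far richer, context-sensitive semantics.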

[01:30:41.179] Kent Bye: Yeah. A number of quick thoughts come up as you say that. I can imagine people wearing around a QR code in the future that sort of declares the range of what they want or don't want. I don't know what the fashion of that would be, but you can imagine the utility, once so many people have AR headsets, of almost like a Creative Commons license for permissions: you're granting permissions to do things. So maybe you want to be in a context where you want to go out and engage with other people in a certain way, and maybe that type of QR code would be able to enable those different types of connections. But I've also been at conferences like the Decentralized Web Summit, where there may be people who want to maintain their anonymity, and so they had different badge colors to signal to the wider community, hey, don't take any pictures of me because I have this red badge on. So there are ways in which, even within a private context, you have people declaring whether or not they want to have their photo taken. But in the United States, at least, I think there are certain laws where you can take photos of people in public spaces, but in other spaces you may need to get image release forms. It also may depend on how you're using the images, if you're selling them or whatnot. So I don't know all the existing dimensions of that, but there may be existing laws that can be built upon to say, okay, in a public space, maybe there are certain things that can happen, but if you're in a private space, maybe you have more rights over your image and likeness than you do in a public space, where you're consenting to be in public and have your image and likeness captured by anybody that's taking a photo. So there may be similar things like that happening within AR, and questions about what those standards are that we, as a culture, are consenting to.

[01:32:12.388] Mark McGill: Yeah, and I mean, if you have those kinds of preferences, they can be conveyed in some way. But what if someone does not have those preferences on them? What if they haven't defined those preferences? What should the default behavior be for someone that has not indicated anything there? Should it be an opt-out? Should it be, at the fundamental hardware level, that the camera data is going to obfuscate or remove the segmentation of anyone that's in the scene, unless you can actually get the preference data from them and determine whether they wanted to be opted in or not? I mean, that's one potential way you could go about it, where we are still enabling an AR headset that can get all this contextual awareness, but if it involves a person, then the default approach would perhaps be opt-out, unless they have consented to that activity, either actively or passively through the mechanisms that we've talked about there. I think that's maybe a point that hasn't quite yet been addressed by these kinds of devices. The prevailing way in which this data is handled is very much at that camera level: do you have the camera data or not? And I think that's just not sufficiently granular to encapsulate the kinds of things you can do with bystander data. And it's not just recording pictures of bystanders, either. If the existing legislation protects against the recording of imagery, okay, well, does it extend to whether I can segment your body and get the skeleton to track your movement? Or what other attributes of you can I feasibly extract from the API on that device without crossing that line into saying that I'm handling a picture of you? So there are probably lots of legal gray areas there where, because the law is fixated on recording and capture, it's probably not quite representative of the more nuanced ways in which you might process bystander data.

[01:34:05.758] Kent Bye: Yeah. And the last recommendation you have here, I think, goes back to identity and privacy: platforms should refrain from enabling the persistent pseudonymous identification or tracking of bystanders and their associated data; where there's a risk that requested sensor streams enable such tracking in violation of bystander privacy, such streams should be obfuscated by default, that is, making bystanders unrecognizable. So essentially blurring out the faces of people who are not consenting as a default behavior, which I think they showed in Project Aria. But yeah, as you're wearing these glasses and maybe recording things, those recordings should not be revealing people's identities if they're not consenting.

[01:34:44.362] Mark McGill: And there are ways that you can do that that don't necessarily degrade the experience of that imagery later, because obviously, you know, if you're on holiday, you want to take imagery or a volumetric capture of your environment. It's not going to be a very enjoyable replay if you then watch it back in VR later and you're seeing all these obfuscated faces of bystanders. But there are approaches where you can computationally generate different faces in there, right? They're strangers to you. You wouldn't necessarily recognize who they were. You can preserve the intent behind the imagery whilst removing a lot of the privacy invasiveness of it when it's being captured. But I think if we're going to have that obfuscation by default, then we need mechanisms for consensual usage of bystander data, because there are too many compelling use cases of AR where it will be really useful to have that bystander data. Think about someone who has difficulty recognizing the emotions of the person they're talking to, for example because of autism. They could use their AR headset combined with social signal processing to give them some insights that they may be lacking about what's going on in that conversation. That would be very useful. Maybe if you are blind, you want to have some awareness of where proximate others are relative to you, so you want to be able to track bystanders that are near you. Maybe you want the glasses to be able to tell you when they recognize someone that you know, giving you greater awareness of your environment than you might normally have had. There are compelling reasons to get that data. If we have an opt-out by default, well, we need to have some mechanisms for reasonably opting in as well, so we don't cut off a whole branch of capabilities there.
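
For a sense of what "obfuscation by default" could mean at the pixel level, here's a toy sketch that pixelates the face regions of any bystander who hasn't opted in before a frame leaves the capture pipeline. It assumes face boxes arrive from some upstream detector and uses plain NumPy for brevity; a real system would more likely use detection plus the generative face replacement McGill mentions rather than blocky pixelation:

```python
# A toy sketch of "obfuscation by default": pixelate the face regions of any
# bystander who hasn't opted in. Face boxes are assumed to come from an
# upstream detector; NumPy-only for brevity.
import numpy as np

def pixelate_region(frame: np.ndarray, box: tuple[int, int, int, int], block: int = 8) -> None:
    """Replace an (x0, y0, x1, y1) region with coarse blocks, in place."""
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1]
    for y in range(0, region.shape[0], block):
        for x in range(0, region.shape[1], block):
            region[y:y + block, x:x + block] = region[y:y + block, x:x + block].mean(axis=(0, 1))

def obfuscate_bystanders(frame: np.ndarray,
                         face_boxes: dict[str, tuple[int, int, int, int]],
                         consented_ids: set[str]) -> np.ndarray:
    """Return a copy of the frame with non-consenting bystanders' faces pixelated."""
    out = frame.copy()
    for person_id, box in face_boxes.items():
        if person_id not in consented_ids:
            pixelate_region(out, box)
    return out

# Example with a dummy frame and two detected faces, only one of whom opted in.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
boxes = {"person_0": (100, 80, 160, 150), "person_1": (400, 90, 460, 160)}
safe_frame = obfuscate_bystanders(frame, boxes, consented_ids={"person_0"})
```

Note that obfuscating the rendered pixels does nothing about the derived data streams discussed earlier (skeletons, heart rate, identity embeddings), which is exactly why the recommendation targets the sensor streams themselves and not just the saved imagery.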

[01:36:28.310] Kent Bye: Or erase someone from a scene and you run into them because you don't see them. Exactly. Well, the last three sections here. So the next section, 2.5, is Worldscraping, "Live Maps" and Distributed Surveillance. You mentioned this a little bit earlier, but I just wanted to reiterate the point about the sometimes relational nature of privacy, because you may be in a home where you are the device owner, and if you do a scene capture of, say, your living room, there may be objects or things in that living room that reveal certain private aspects of the people you are living with. So there's a collective dimension to some privacy aspects, especially when you start to get into world capture, worldscraping, live maps, and what you call distributed surveillance. So I'd love to hear a little bit of your reflections on this section and chapter.

[01:37:18.197] Mark McGill: Yeah, so this is really dealing with the concept that, when we see significant adoption of everyday AR, everyday XR, with these distributed devices, that data can obviously be aggregated and processed to generate insights not just about your own individual life, but about the environment that you live in, effectively. And this isn't a fantasy. Niantic, the creators of Pokemon Go, were involved in the creation of this Planet-Scale AR consortium, which was intended to focus on this distributed AR sensing, because obviously it will generate these incredibly rich maps of the world that can then be exploited to enable, in their use case, multi-user, anyplace, anytime experiences. So that's an incredibly compelling usage. But in creating the platform for that, we are effectively creating a platform for distributed surveillance, where if there are one or more AR headsets in a particular location, their data can be pooled, aggregated, and processed to surveil that location and whoever is in it. Now, as for how that might be used, well, you can easily imagine that certain states would love to have that capability to be able to surveil their population en masse without having to deploy CCTV and the like as they currently do. So I think that is a particularly risky use case of XR sensing here, but it's one that I think these companies will gravitate to, because if you look at Google Maps, for example, and the kind of user-generated, user-captured data there, this is a whole other level of capability. We're talking about generating a planet-wide 3D mesh topography that's being updated in real time based on the events going on there. Incredibly useful, but also incredibly invasive, and it has great capacity to be misused by irresponsible actors, be they companies or states.

[01:39:25.246] Kent Bye: Yeah, one of the initial reactions I have is thinking about China and their social credit system, meaning that if someone jaywalks, as an example, that may lower the score that they have according to the government, and that may mean they don't have full access to, say, getting on a bus, or travel, or purchasing certain things. And so there's a way in which that totalitarian-structured society has this all-pervasive surveillance and information in the hands of the government, which is using some algorithm to assign you a score, and then that score is dictating what you can and cannot do within a society. So when I think about this distributed surveillance, I think about those types of things, because even within the context of, say, VRChat or Rec Room or even Horizon Worlds, they've already had this type of implicit social score within these worlds, to be able to track trust and safety, to see whether there's abuse or harassment, and to kind of give a baseline so that if there are problematic people within their platforms, they're able to take more aggressive actions on those reports and either ban or suspend people. But those types of social score systems are happening within the context of these platforms. If the government all of a sudden goes to those platforms and says, let me take all the behavior that's happened within this virtual world, now all of a sudden we're going to aggregate that into some collective score that dictates whether or not you can get onto an airplane, because we might think you're a terrorist. And so there's this dimension of the public spaces that we have and these different actions. If you have this type of distributed surveillance, you're going to be able to start to track people in a way where who knows what that data is going to end up being used for, not only just locating and doxing you, in terms of, you're at this place in this location. Whether that's in a company's hands, where there could be certain harms in terms of tracking all of your locations, or whether that information leaks out onto the internet somehow, or people are able to tap into the collective CCTV to track people down, or it gets into the hands of a government that starts to use these algorithmic ways of creating scores for people that dictate what they can and cannot do in society, those are some of the issues, some of the different harms, that start to come up when you are capturing these spaces relative to the people that are in those spaces and the time and place that they're in.

[01:41:42.305] Mark McGill: But there are potential benefits to that technology as well. And that's where we get into a very tricky area, because you may find that someone makes a compelling case to say that that kind of distributed surveillance will make you safer. You know, the headset will detect antisocial behavior that will allow police to be more reactive, that will allow first responders to be more reactive to events that happen there. So you could find that a particularly authoritarian government might put forward a compelling case to say, give us this data because we will make you safer, we'll make your lives better. And in doing so, then as you say, they might then exploit it more toward the social credit system. And I think that kind of tension is very difficult to manage. I think that comes up persistently when we're talking about AR. It's that balance between the benefits versus the cost to society and trying to minimize that cost to society. I think in the distributed surveillance sense, it is tricky to minimize that cost because once you open that door and give them that data, people will naturally exploit that further and further.

[01:42:46.921] Kent Bye: And with your recommendation, you say that the right to privacy should be extended to protecting against real-time surveillance of homes, businesses, and public spaces. So yeah, you mentioned the CCTV, and in China as well I think they've made that argument in terms of the safety benefits. But I guess in the UK, they've already had CCTV, and there are cameras in the United States; it's just that they're not always necessarily connected to the government or police as much. Maybe that's changed. Maybe there's more and more of that, at least with audio and detecting gunshots and whatnot. You know, the Fourth Amendment in the United States means that when you're in your house, you have a right to privacy and protection against unreasonable search and seizure. But when you're out in public, there's not as much of that. And then when we talk about companies documenting where you're going in public, storing that in a database, and the government having access to that, I feel like the persistence of that data, the volume of that data, and what can be extrapolated from that data start to raise the question: where do you draw the line relative to how we already treat public spaces? And when you have this type of surveillance, how do you ensure that we're not crossing some sort of threshold where it feels unreasonable to have that much data out there? So I don't know how to draw that line based upon our existing understanding of public spaces versus private spaces. But yeah, it does seem like AR is maybe taking something that used to be considered a public space, and if it's persistently tracking us, connecting those things to our identity, and storing that in a database within a private company, then that starts to feel a little bit more like an encroachment onto our commons and our public spaces, which is what Applin and Flick were talking about in their critiques of augmented reality as a technology: seeing these public commons areas that we have being colonized by these different companies, seizing that type of data and extrapolating from it for uses that go above and beyond our understanding of what the commons means.

[01:44:36.753] Mark McGill: And I mean, our acceptance as a society of the prevalent CCTV in the UK, for example, might be the means by which distributed surveillance in AR becomes acceptable, because they'll say, well, you know, we're getting the same benefits that we have from this infrastructure, but we're extending those benefits. We're extending the sensorial range of our surveillance here. I think many people might find that a reasonably compelling argument. Others would hopefully see it as rather more dystopian and push back against it. But given that CCTV has continually expanded in the UK in particular, I think this is the natural next expansion of that kind of surveillance. I'm not sure which way UK society, for example, would go there.

[01:45:21.042] Kent Bye: Yeah, there definitely seem to be some cultural differences here in what already exists in terms of the infrastructure for this type of surveillance, especially surveillance by the government, and how that continues to expand. Recommendation eight is that the capture and processing of non-personal real-time data regarding public and private spaces needs to be regulated in the same way that personal data is through the GDPR. I guess that's applying the insights of GDPR to this type of data that's being captured in public spaces, especially about bystanders. But as we talk about public spaces, are you talking here about the personal data or the data about the spaces themselves?

[01:45:54.443] Mark McGill: I think it's probably both, but maybe the potential privacy violations of the space itself are slightly less evident than those of the data pertaining to the bystanders. Because if you have the ability to ascribe a known identity to the person that's being tracked, and they are constantly in view of an AR headset as they traverse the city, that kind of surveillance effectively enables the complete tracking of their activities, what they do, where they visit. That's incredibly privacy invasive. I know that CCTV can do that right now, but they don't tend to do that automatically. They might do it selectively, where there's an event and then they retroactively go back to the footage and try to track the individual and their movement patterns across the city. But I imagine that for this kind of system, you now have the capability to do that automatically in a distributed sense with a high degree of accuracy, because there is no stone left unturned there. So I think for the individual, it's deeply problematic. For the capture of the data pertaining to the space, I'm less sure about what the invasions would be there immediately. I can see the benefits in terms of just the mapping, knowing what events are going on there, knowing how the space evolves over time, knowing how busy places are. For the privacy invasions, I'm not sure. I'm sure you can think of some, and I'm sure many others will think of some. Again, that's where these kinds of emergent vulnerabilities will come out in time.

[01:47:25.532] Kent Bye: Yeah, there's a certain way in which, in the United States, on the private property of your home, you have rights to privacy and protection against unreasonable search and seizure. But if you are tracking everybody in public spaces, then you may be able to extrapolate across the boundary between the public and the private. In other words, you can know where someone lives based on doing this type of tracking. So the boundary between the public and the private is probably where that comes into play, because of the doxing kind of effect of being able to detect people's patterns of where they're going and their behaviors. Even if you're only watching the public area, you can figure out a lot of private information, because there may be only one private spot where they could go from the public and disappear into the private realm. Well, maybe it's good to go to the last two sections here. This augmented perception and personal surveillance is interesting, because you start to get into more of the transhumanist applications of extending our senses, using these devices to have something like super sight or super hearing. And if you have people walking around with incredible super sight and super hearing, how does that change our relation to our body proximity relative to the types of information that we may be privileged with? We already have cameras with something like a hundred-X zoom, but these glasses are more like a hidden camera: you may have these superhuman capabilities, but no one around you is aware of them. So how do you signal, and how do you navigate, this ability to have perception above and beyond human limitations? And what are the different personal surveillance and privacy implications of that?

[01:49:05.884] Mark McGill: Yeah, so I think with the augmented perception perspective, it's kind of going back to what we discussed in terms of the different capabilities of the sensing. So, you know, I have a camera that is trained on you. It can infer insights about how you're reacting, the social signals, the non-contact physiological signals, and that can then enable some new insights around how you're reacting to a particular situation. So that's one way in which having that camera data can augment my perception of a particular person. But as you're saying, that camera could be incredibly high resolution, with an incredibly wide field of view. So immediately, in real time, that enables the capability to do some segmentation, do some computer vision, do some machine learning, to identify salient items that you want to track, or to do the tracking for you, or to show you distant items. And these kinds of technologies, again, I think these are inevitable, because there are obvious accessibility benefits to them. There will be industries, there will be situational and permanent impairments, where these kinds of capabilities will be incredibly useful. But then, as you're saying, okay, if we introduce that capability, how do we regulate and control it so that we're not just enabling some form of individual personal surveillance, where I can see something from a hundred yards away, or I can use my directional microphone array to eavesdrop on the conversation that's going on in the bar in the distance? I think that goes back to the idea that you need some kind of contextual privacy protections built into the devices to regulate when the use of these capabilities is acceptable or not. But the problem with that is that if you're trying to classify the acceptability of using supersight, or of using the glasses to tell me your current emotional or affective state, there's a lot of ambiguity there. There's a kind of gray area where it may be difficult to determine whether an activity is permissible or not. But again, that goes back to the fact that the capability has a lot of value. So we can't simply ban it. We can't simply remove that capability. We can't simply say as a society that we're not going to support it, because there are many valid use cases for it. So instead we have to have more intelligence in the device to determine when it is acceptable or not, and when there's a particular socially unacceptable or privacy-invasive usage of that capability.

[01:51:36.763] Kent Bye: Yeah, and I think that feeds into recommendations nine and ten. Number nine is essentially that if you're going to be doing this kind of super-sensory perception through sight or hearing involving third parties, they should either consent to it or have a way of automatically opting out. And recommendation ten is that for people who do have an accessibility need, we balance enabling those technologies with trying to make sure they're not harming the people around them, while giving them abilities that maybe put them more on parity with the people around them in terms of just being able to function in the world. If they have impaired hearing or sight, they can use augmented super sight to exist in the world around them, and you just want to make sure that you're not preventing those use cases.

[01:52:19.775] Mark McGill: Yeah, and inevitably there will be some abuses that I imagine get through the system there, but that's the price that we pay in order to enable powerful accessibility features potentially. For the supersensory hearing example in the bar, well, maybe it detects when there is a social relationship between the two of you. Maybe we introduced one of those heavyweight consent mechanisms, or maybe because of that known relationship and their privacy preferences shared from earlier, the device automatically determines, right, it's acceptable that you can do some selective segmentation and relay of that speech to you. So I think there are ways in which we can safeguard against these features, but that technology doesn't yet exist. And if we don't get to the point of having these kinds of privacy preferences, implicit consent, and these kinds of protections or privacy enhancing technologies, then there will be a pain point for society where we have devices capable of individual personal surveillance that is largely unregulated and unrestricted, which again goes back to the risk of social rejection and glasshole type interpretations of the perception of augmented reality.
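To make the kind of contextual gating McGill describes a little more concrete, here is a minimal, purely illustrative Python sketch of how a platform-side check might combine an accessibility need, a known relationship, and a bystander's previously shared preferences before enabling an augmented-perception capability. Everything here is an assumption made for illustration: the `PerceptionRequest` type, the preference fields, and the hard-coded registries are invented and do not correspond to any real headset SDK.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerceptionRequest:
    capability: str               # e.g. "super_zoom", "directional_audio" (hypothetical names)
    subject_id: Optional[str]     # None if the sensed person is unidentified
    subject_distance_m: float
    wearer_has_accessibility_need: bool

# Hypothetical stand-ins for platform services that would hold relationship
# data and privacy preferences bystanders have previously chosen to share.
KNOWN_RELATIONSHIPS = {("alice", "bob")}
SHARED_PREFERENCES = {"bob": {"allow_speech_relay": True, "allow_zoom": False}}

def is_permitted(wearer_id: str, req: PerceptionRequest) -> bool:
    """Toy contextual gate: permit an augmented-perception capability only when
    an accessibility need, a known relationship, or an explicit bystander
    preference makes the use plausibly acceptable."""
    # Accessibility uses are privileged, echoing recommendation ten.
    if req.wearer_has_accessibility_need:
        return True
    # Unidentified, distant subjects: default-deny long-range capture.
    if req.subject_id is None:
        return req.subject_distance_m < 2.0
    related = (wearer_id, req.subject_id) in KNOWN_RELATIONSHIPS
    prefs = SHARED_PREFERENCES.get(req.subject_id, {})
    if req.capability == "directional_audio":
        return related and prefs.get("allow_speech_relay", False)
    if req.capability == "super_zoom":
        return related and prefs.get("allow_zoom", False)
    return False

# The bar example from the conversation: a known contact who has opted in
# to speech relay, sitting fifteen metres away.
print(is_permitted("alice", PerceptionRequest("directional_audio", "bob", 15.0, False)))  # True
```

A real system would, as McGill notes, need far richer context modelling to handle the gray areas; the only point of the sketch is that the policy decision sits in the platform layer rather than in the application.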

[01:53:32.354] Kent Bye: Yeah. And as we get to the last section here on rights and protections, we've been talking a lot about GDPR and other laws. I wanted to just call out some points that jump out at me, read through the different sections, have you comment, and then dive into the recommendations. The big takeaway that you have in bold here is that the current system of digital privacy protection is no longer tenable in an extended reality world. That seems to be the big takeaway: that the existing rights and protections we have are not sufficient to handle a lot of these challenges we've been discussing up to this point. And so you talk about the existing rights that are out there, the consensual and induced erosion of rights, and the non-consensual erosion or circumvention of rights. And in the last two sections, you cover the sustainability of existing legislation and then the non-legislative protections and the need for transparency and consent. So we've been talking a lot about these things. The way that I think about it, at least, is that there are things like neurorights that are trying to establish fundamental human rights that can then feed into and inform the expansion of the other laws that we have on the books. I think GDPR really combined treating privacy as a human right with the contextual nature of privacy, making sure that the way the data are being used is disclosed to you and that you're agreeing to that very specific use. And as we move into this new realm, as we've been discussing throughout the entire conversation, those existing frameworks start to fall down in a variety of different ways. We have to come up with new conceptual frameworks for understanding what's happening with this data, and then put new legislative techniques in place to prevent harm. Or, if we're just going to let the companies regulate themselves, when they're self-interested in pushing the limit of how much they capture, how effective is that self-regulation approach going to be? That's kind of what we default to, to start with, but how do we start to put protections in place to address all these issues? So as I read through these last four pages, that's kind of the summary that I have, but I'd love to hear your expansion on where to go from here.

[01:55:46.516] Mark McGill: Yeah. I mean, so I don't have a legal background, I'm a humble computing scientist, but from my reading of GDPR, it's not that it can't necessarily act as a protective measure here, but perhaps that the interpretation and application of it doesn't yet exist. So I think, firstly, that's where maybe we need further engagement from legal scholars as to, okay, how does GDPR meaningfully apply to the scope and scale of activities that we've talked about in this paper and that we've talked about today? And then, on the basis of that, can we have some kind of interpretation of GDPR that can then be acted upon by the platforms, where they understand, right, here's what is and is not legally permissible by this interpretation? GDPR is a rather general framework. It's not prescriptive about what the data is. So it's not that it can't apply to augmented reality challenges; it's just that I don't think we have considered how to apply it yet. And I think if we do that, that's a start, because that will mandate certain protections that platforms will have to provide. The other thing would be the non-consensual erosion or circumvention of rights. The idea there is that even if we think we have interpretations of GDPR that might be sufficient, do we have sufficient measures to enforce those interpretations? The likes of Meta, the likes of Microsoft, the likes of Google can afford to lobby for a different interpretation of this legislation, and they can also afford to pay the kinds of fines they might face if this interpretation is applied. So even if we have an interpretation of GDPR that applies to privacy in augmented reality, it may still be eroded in ways that, frankly, society has experienced repeatedly over the last few decades with internet technology and smartphones. And then there's the section on the suitability of existing legislation. The last point is really whether or not GDPR applies sufficiently to the kind of data that we're talking about, where there's a focus on personally identifiable information in particular. Okay, where's the line drawn if I'm using non-contact physiological measures to infer something about your affective state? It's not necessarily personally identifiable information, but it is privacy-invasive information that can be used to various unknown ends. So I think there's maybe a reassessment needed: is personally identifiable information the red line, and is that a sufficient red line? Or do we have to change that interpretation slightly to encompass the kinds of data we know these devices can capture from you that aren't necessarily personally identifiable, but still introduce significant privacy-invasive risks? That's where maybe the interpretation of these kinds of laws can be tweaked slightly. So I think, generally, GDPR is a monumental text. I'm not a legal expert, but I think it is entirely feasible that you can come up with an interpretation of it that applies to this technology. But I think it's a collective multidisciplinary effort to understand and come up with that interpretation.

[01:59:22.447] Kent Bye: Yeah, there's still a lot of work that needs to be done around things like the biometric psychographic information that's being extrapolated. If they already know your identity, then it becomes less about that information revealing your identity and more about how that information is being aggregated and used to model identities. Just to read through the last five recommendations here briefly: the first is that XR platforms need to adopt rigorous control over what sensor APIs applications can utilize and how that data is protected from unintended or unanticipated processing, where there might be risky requests for access. I think about eye tracking data in this case: who has access to that raw data? Because there's a lot of information that third-party developers may not need access to. So that part of the controls is more in the hands of the platform providers than it is a tech policy question. Recommendation 12 is that users are given the tools they need to retain agency over the device, its sensing activity, and the client applications using this data, including requiring informed consent for risky sensor data and providing continual awareness and feedback regarding device activity. On the web, you have a lot more application-specific prompts: okay, do you really want to give this application access to your camera? And as we move forward, maybe we need a little more of that insight, just like on an iPhone, where you have the ability to decide: given that this app is asking for all of this information, do you want to provide all of it? And does it gracefully degrade to the point where it can still operate without all of the information that it may or may not need but is just hungry for? So, having that same sort of design pattern. Before I go on, are there any other comments on those first two that you wanted to make?

[02:01:01.995] Mark McGill: So I think the predominant issue with how permissions are handled right now is that these headsets are largely based on Android, an operating system that is for smartphones, and that permission architecture has been ported over to VR headsets in particular. So you're using an Oculus Quest, you open up VRChat, and it will ask you for permission for microphone access, for example. Or if you have a headset that's not a Meta platform headset, you might be able to get access to the camera with a camera permission, fine. But I think that permission paradigm is not really suited to everyday augmented reality in the future. The idea that you would just give an application a blanket permission to access the camera seems, given the scope and scale of how that data can be processed, incredibly risky. So there's, again, a kind of usable privacy, usable security challenge here in trying to figure out how we design more granular APIs that can take into account the different kinds of processed data that can result from that underlying source. And then how do we give people agency over what permissions are granted to what application, in a way that is usable and that they can make sense of and understand? So the idea there would be that an app would not necessarily request permission for the eye-tracking data in its raw, unfiltered sense; it might get access to a more degraded or obfuscated version of it, maybe one that only occasionally has to know when you fixate on a particular target. We don't want to give that app the full sensor stream from which it might derive other insights we wanted to prevent. Or instead of getting access to the camera feed, maybe you would make a direct request to an API that only gives you the tracked skeleton of the user instead, so that you can do reality awareness mechanisms for a VR headset without exposing bystanders to further risks. So I think we're in a weird place right now, where the predominant API management is all based around smartphone architectures, and I think we need to evolve it toward more XR-suitable permission architectures.
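As a rough illustration of that shift from blanket sensor permissions to more granular, derived-data APIs, here is a hedged Python sketch. The scope names, the `request_permission` policy, and the fixation-reduction logic are all assumptions invented for this example; they are not drawn from any actual Quest, HoloLens, or Android API.

```python
from enum import Enum
from typing import Iterator

class GazeScope(Enum):
    RAW_STREAM = "gaze.raw"            # full sample stream: high risk, rarely granted
    FIXATION_EVENTS = "gaze.fixation"  # coarse "user fixated here" events: lower risk

def fixation_events(raw_samples: Iterator[tuple[float, float, float]],
                    dwell_s: float = 0.3) -> Iterator[str]:
    """Platform-side reduction: collapse raw (t, x, y) gaze samples into coarse
    fixation events so an application never sees the underlying signal."""
    start_t, anchor = None, None
    for t, x, y in raw_samples:
        if anchor and abs(x - anchor[0]) < 0.02 and abs(y - anchor[1]) < 0.02:
            if t - start_t >= dwell_s:
                yield f"fixation near ({anchor[0]:.2f}, {anchor[1]:.2f})"
                start_t = t  # emit at most one event per dwell window
        else:
            start_t, anchor = t, (x, y)

def request_permission(app_id: str, scope: GazeScope) -> bool:
    """Hypothetical platform policy: ordinary apps may receive fixation events,
    but the raw stream is reserved for allow-listed, audited system services."""
    if scope is GazeScope.RAW_STREAM:
        return app_id in {"system.accessibility"}
    return True

# Example: an ordinary app is refused the raw stream but can get fixation events.
print(request_permission("com.example.reader", GazeScope.RAW_STREAM))       # False
samples = [(0.0, 0.50, 0.50), (0.2, 0.51, 0.50), (0.4, 0.50, 0.51)]
print(list(fixation_events(iter(samples))))                                 # one coarse event
```

The design point is that the raw sample stream never crosses the API boundary: the platform reduces it to coarse events first, so an app that only needs to know what you fixated on cannot also derive gaze-based inferences you never agreed to.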

[02:03:23.360] Kent Bye: Yeah, in recommendation 13, you say that the companies should strive to adopt leading guidelines regarding XR privacy protections and standards and enforce those standards on their app stores and platforms. And I think each platform does have those, in terms of developer agreements about what third-party developers can do versus what they as a first-party developer can do. My concern is that a lot of the time, the things that they're not letting the third-party developers do, they're allowing themselves to do, especially Meta in that case. But what will be interesting for me to see is how Apple, as they start to come out with their rumored XR devices at some point, may differ in their privacy approach, because there may be legitimate approaches where Apple is much more privacy-centric, but maybe there's functionality that you're missing and that you can only get on a Meta headset. So it'll be interesting to see whether there will be degraded functionality with those different types of guidelines, versus what people will want to do and how much they're willing to mortgage their privacy. I think this will be kind of the existential question of our next era: if they take an extreme privacy-preserving approach, is it going to hinder the applications but preserve the privacy? And is that a trade-off that people are going to be willing to make?

[02:04:41.637] Mark McGill: Yeah. I mean, I guess I would suspect that people would go for the more fully featured AR headset that doesn't have to make that trade-off and gives applications unfettered access to these kinds of data streams. But that's where we can have the best of both worlds if we have better permission architectures, if we move away from camera access as being the main protection for privacy and instead start to break up the inferred insights from that camera data and make them directly available to applications, because then an application can hopefully request data within the scope of what it needs. Maybe an application can get away with taking low-resolution, low-sample-rate camera data instead. Maybe it can take camera data that's already obfuscated to protect bystanders. Maybe it doesn't need camera data at all; maybe it just needs the insights derived from the camera data, provided through the API. In that way, we can move away from this kind of one-size-fits-all, camera-or-nothing approach, and then enable the plethora of activities that we want to support. But that puts a heavy weight on the development of the platform. That's not up to the apps; that's up to Apple and Meta to institute that kind of architecture and provide these rich APIs to cover the common extracted data that you might want from the cameras. And you do get some of that. If you look at the Microsoft HoloLens and its depth camera array, you can get body tracking and face tracking data from it. And once you have that kind of granularity of API, you can have much more granular permissions. You can make sure that an application is not going beyond the scope of what it needs. So that's the difference that we need for AR, versus right now in VR, where it's still very much give me a microphone, give me a camera. That granularity is not really sufficient. And the way that Meta in particular deal with that right now, to my immense frustration, is to just block access to the camera. It took years for them to enable any kind of pass-through functionality that could be used on the device, and even then, we can't get access to the pass-through feed to do anything interesting there. In my group, we've done lots of research into reality awareness: the idea of how we bring bystanders into VR, how we promote safety in room-scale VR experiences. And that research was incredibly difficult to do when we didn't have headsets where we could get that data. We had to plug our own cameras in or use other platforms instead. So it would be a shame if we were to hinder the progress of how we use this technology. I think there's a halfway point between the two: we can design better permission architectures.
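As one more hedged sketch, here is what the "degraded or obfuscated camera data" idea could look like in plain numpy: the platform downsamples the frame and blurs regions where it has detected bystanders before any application sees it. The bounding boxes are assumed to come from a platform-side bystander detector, and the function names are invented for illustration, not taken from any real SDK.

```python
import numpy as np

def degrade_frame(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Lower the spatial resolution an app receives by simple subsampling."""
    return frame[::factor, ::factor]

def blur_regions(frame: np.ndarray, boxes: list[tuple[int, int, int, int]],
                 kernel: int = 15) -> np.ndarray:
    """Box-blur regions where the platform has detected bystanders, so an app
    gets scene context without identifiable faces or bodies."""
    out = frame.astype(np.float32).copy()
    pad = kernel // 2
    for (y0, x0, y1, x1) in boxes:
        region = out[y0:y1, x0:x1]
        padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        # Mean filter via a sliding-window sum (toy implementation).
        blurred = np.zeros_like(region)
        for dy in range(kernel):
            for dx in range(kernel):
                blurred += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
        out[y0:y1, x0:x1] = blurred / (kernel * kernel)
    return out.astype(frame.dtype)

# Usage: what the platform might hand to an ordinary app instead of raw video.
raw = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
safe = degrade_frame(blur_regions(raw, boxes=[(100, 200, 300, 350)]))
```

An application doing reality awareness could work from the `safe` frame, while identifiable detail about bystanders never leaves the platform layer.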

[02:07:25.580] Kent Bye: Yeah. And the last two recommendations here are connected in different ways. There's recommendation 14, which is that industry, legislators, and researchers need to define an extended reality privacy rights framework that can inform future legislation and provide voluntary standards for XR privacy protections as a stopgap. Meaning that in the United States, for example, we don't have a new federal privacy law encompassing all these things, and so we need an industry-supported framework. I think the neurorights are a great start, in terms of the right to mental privacy, the right to identity, the right to agency, the right to fair and equitable access, and the right to be free from algorithmic bias. I think those are all good, though they're maybe in different contextual domains; the first three are really connected. But I feel like how to actually translate those rights into policies is the challenge, in terms of either the legal framework and how it's enforced, or the ways in which the companies are designing these frameworks without any oversight or assurance that what they're actually doing is living into those rights and what those rights mean according to the context. And recommendation 15, just to throw it in there because I think it's related: because there will be shortcomings in the legislation and those guidelines, the rights of the victims of digital harms and privacy violations should also be addressed. So maybe that means looking at different populations that are unduly impacted by some of these transgressions and ensuring that design considerations put those concerns at the forefront as we move forward. In the case of facial recognition and AI discussions, the film Coded Bias really shows the ways in which different facial recognition algorithms may be unduly impacting certain demographic populations more than others. So, as we move forward, it seems like there's not a lot of legislation out there, and we need some sort of human rights framework as a baseline that is maybe adopted by the companies, while also considering other harms that may be out there. Not sure if you have any other thoughts on top of that.

[02:09:22.032] Mark McGill: No, I think with recommendation 14, around the voluntary standards as a stopgap, that's incredibly important, because legislation obviously takes years to decades to potentially enact, and some countries may not enact that kind of legislation. So if we can have some common standards that the major platforms subscribe to, then we can ensure a base level of privacy-enhancing technology and privacy-respecting XR regardless of where in the world that headset is used. It shouldn't be that the headsets adapt to the particular laws that apply in each country, enabling more functionality or more privacy-based functionality just because they can. We should be trying to make sure that there is a baseline there, and then hopefully legislation can catch up and maybe surpass that in time.

[02:10:16.820] Kent Bye: Awesome. Well, that's kind of the roundup of this paper. There's a lot that was in here, and it took a while to get through; I appreciate your patience in going through all of this. And I'm curious, for you, what do you think the ultimate potential of XR technologies might be, and what might they be able to enable?

[02:10:34.162] Mark McGill: So the thing that I come back to is that, in time, everyday augmented reality in particular (and I focus on augmented reality a bit more than virtual reality, I guess) will eventually supplement and then supplant the existing computing devices that we're using. It will become the predominant means by which we interact with virtual content and how we experience our digital lives. And I think that is fundamentally transformational to society. It will enable us to take our productive workspaces with us when we're traveling or in transit. It will enable us to have telepresent conversations as we're walking down the street with people who are distant from us. It will enable a whole breadth of capabilities that we probably can't even quite envisage at this point. So I'm incredibly enthusiastic about that potential, and I just want to make sure that as a society we're in a position to make full use of it without risking social rejection, without leading to the harm of individuals, without leading to some kind of degradation of society. We've already seen issues around technology adoption previously. You look at the impact of social media on society, and then you extrapolate out from that and look at what the potential impact of augmented reality might be. It leads to some concerns about how we relate to other people, how we might be misled, how our behavior might be modified over time given these kinds of devices. So I think this is a good time to be discussing this, because the hardware is not quite there yet. It's still at least maybe a couple of years away. I don't know; rumors seem to fly around based on Apple's and Meta's developments, but it feels like there's still a gap in time before the iPhone 1 of AR, the device that really takes it to the potential of everyday, all-day adoption. But once we get to that point, we may find that we are in trouble quite quickly if we don't deal with some of these issues around privacy, safety, security, behavior manipulation, all the vulnerabilities and potential for harm that are wrapped up in the adoption of AR. So I'm broadly incredibly enthusiastic about the technology, but also incredibly keen that over the next couple of years we make some real concerted progress toward voluntary protections, moving toward actual legislation that enforces this and sees adoption by all the major companies involved in the creation of these headsets.

[02:13:14.994] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[02:13:19.963] Mark McGill: I'd also just like to acknowledge the funding that we received that helped me do some of this writing and has helped us do some of the research into privacy and XR in particular. So thanks to REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, and the SPRITE+ Hub on Security, Privacy, Identity and Trust Engagement. Both are funded by EPSRC, and they both helped to fund some of the research we've been doing in this domain.

[02:13:47.057] Kent Bye: Awesome. Yeah. And you have a preprint that you passed along, "Privacy-Enhancing Technology and Everyday Augmented Reality: Understanding Bystanders' Varying Needs for Awareness and Consent," which we were also covering in this discussion. And there's an additional paper that I think is coming out soon that will also address a lot of this; the graph that I pulled up is the one that we were unpacking. So, well, Mark, thanks so much for joining me here on the podcast to do this real in-depth breakdown of the XR white paper on the erosion of anonymity and privacy that you were the lead author on. I was involved in the discussions, and there were some other folks I just want to give a shout-out to who are also mentioned, including Michael Middleton, Monique Morrow, and Samira Khodaei, who were also involved in the discussions. I think you took the lead role in writing this up, and there were different feedback phases, but you did a great job of laying out the landscape. It's a big, huge topic, like I said, but I think focusing on the technology, what's new and what's different, is at least a good baseline for other folks to look at what's happening with XR and start to reevaluate all these ongoing discussions, even within the United States, in terms of the need for a new federal privacy law, the extension or expansion of GDPR, to what degree the AI laws that are being discussed need to consider some of these things, and the neurorights discussions. All these things are a big complex of things together, and I think it's a good way to at least provide a baseline to help orient us to this as a topic. So thanks again for writing this up and for joining me on the podcast to help break it all down. So thanks, Kent. So that was Mark McGill. He's a lecturer slash assistant professor at the University of Glasgow in Scotland in the Glasgow Interactive Systems Group. So I have a number of different takeaways from this interview. First of all, this is such a huge, huge issue that I don't think there's any clear direction as to what exactly to do. There are a lot of different recommendations that involve both what the tech companies are doing and, from a legislation perspective, trying to fill in a lot of the gaps. That's probably more likely to happen in the European Union than in the United States. If there is going to be a new federal privacy law, a lot of the discussions that are happening right now aren't even starting to consider the possibilities of XR technologies, and everything that's de-identified or that's not personally identifiable information is still not taking into consideration biometric psychography, the concept from Brittan Heller, to plug the gaps and deal with a lot of the implications of XR technology. A lot of the emphasis that I got from Mark is that we're not creating systems with the kinds of stopgaps that would let people feel able to accept these immersive technologies. With augmented reality, there could be social rejection, which is probably the most likely thing to stop some of the different potentials of augmented reality, like the whole glasshole backlash that happened.
So there's a lot of potential for creating superhuman perception, especially for people who would use it for assistive technology and could use augmented reality for that functionality. For me personally, I think of virtual reality as being used in more of a private context, usually in your home or at work. But when you're using augmented reality, you're including lots of other people, so there are whole other dimensions of bystander privacy that are still a big open question in terms of how to actually do the consent models and what happens to that data. It's much more than just video recording. And I start to think about all the different ways you can extrapolate different dimensions of your biometric and physiological data to do all sorts of inferences: what's happening with your actions and behaviors, with your active presence, your mental and cognitive load, what's happening with social dynamics, your emotions and microexpressions, and then other dimensions of embodied experience like your sensory experiences, your eye gaze, what you're paying attention to, muscle fatigue, lots of different dimensions like that. So when you add all that together, you start to get to what I point to as the neurorights, in terms of your mental privacy, mapping your identity, and being able to nudge your behavior: the right to mental privacy, the right to identity, and the right to agency. There's a whole other white paper that will be diving deep into the identity aspect. And the aspects of intentional action, again, are about trying to define all these different concepts and where the boundaries are. There's a big contextual dimension as to what is OK and not OK, depending on the context. And that's the challenge: sometimes you want to give access to that information, you consent to it, and then you're basically handing all that information over. Contextual integrity from Nissenbaum, I think, is a key concept here: having declared uses and purposes, as in GDPR, rather than just general purposes where what people are consenting to isn't specified and they aren't able to withdraw that consent. One of the things that Mark McGill talks about is this concept of consensual erosion of privacy, meaning that people are willing to make these different trade-offs to have access to the technology and that they're consenting to it. What does it mean to be moving down this path of people choosing to let go of aspects of privacy? I think that's a larger issue as we continue down this path, because all the different legal structures are built around that consent. Even in the previous conversations, we were talking about these different dimensions of predatory inclusion: trying to subsidize the technology to make it more accessible and equitable, but at the cost of mortgaging our privacy. Is that a form of predatory inclusion? That's a part of all of this as well. As for the larger dimensions of surveillance capitalism, we'll be digging into those in a whole separate interview on the business, economics, and finance dimensions, which is much more around big tech, the financial dimensions, and ways to counter that. There are lots of other privacy issues in the context of medical XR and education and pretty much every other white paper that we're dealing with. XR privacy is an all-pervasive issue.
There's still not a clear answer for how to address it, and there are still a lot of gaps in terms of policy and regulation. There are various technical solutions, like homomorphic encryption and differential privacy, that are put forth within the white paper, but from my perspective, at the end of the day we're still going to need some level of policy recommendations, because self-regulation is not necessarily going to do anything different from what is already being done to really handle this as an issue. We'll be digging into a lot of these other discussions around this topic as well. There's another preprint that Mark was a co-author on, on privacy-enhancing technology and everyday augmented reality. He has this concept of all-day, everyday augmented reality, which is this idea of a future where everybody is wearing augmented reality glasses all the time, and of understanding bystanders' varying needs for awareness and consent. In there, he has this big graph that shows that as you start to detect other people and segment them, you have all sorts of different ways of starting to detect other people's biometric information. What are the implications for bystander privacy, where you start to extrapolate lots of biometric information about other people as you're walking around? There's a technological trajectory that's not only detecting what's happening within your body, but also detecting other people in the context of AR. He's mapping out all those different sensors and the progression as you start to have more and more intimate information that's able to extrapolate different variations of psychographic information or personally identifiable information, both the characteristics of people and their identity, and if you have those connected, then there are lots of different privacy implications there. Anyway, this was a wide-ranging discussion, and I think it's probably a good encapsulation of a lot of the different debates and open questions that are still yet to be figured out. So hopefully other folks will be able to listen and continue on with trying to figure out what the next step is and find some sort of solution that's able to really handle this as an issue. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
