#1176: XR Privacy Landscape & Data Flows with Future of Privacy Forum’s Jameson Spivack

I interview Jameson Spivack, a senior policy analyst at the Future of Privacy Forum who leads their work on immersive [XR] technologies of VR/AR/MR, as well as neurotech, BCIs, biometrics, ad practices, and regulatory frameworks. We talk about the gaps in existing privacy frameworks here in the United States, and the work that he's doing to help educate consumers and technology companies about the current policy debates. The Future of Privacy Forum isn't advocating for any specific legislation, but sits as an intermediary between technology companies like Meta, who are funders, and the rest of civil society and academics keeping tabs on privacy discussions. They end up doing lots of consumer education efforts like this infographic that maps out XR technology data flows, and blog posts that elaborate on XR functions and various privacy and data protection risks and mitigation strategies.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that's looking at the future of spatial computing and unpacking some of the different ethical and moral dilemmas with the XR technologies. You can support the podcast at patreon.com slash voicesofvr. So in today's episode, I'm continuing my series of looking at XR privacy and talking to Jameson Spivack. He's a senior policy analyst at the Future of Privacy Forum, leading the work on immersive technologies, XR, VR, AR, MR, as well as neurotechnologies, brain-computer interfaces, and biometrics. So he's giving a bit of an overview of what's happening here in the United States and some of these different issues. The Future of Privacy Forum is not necessarily making policy recommendations; they're more educating both the consumers as well as the implementers of the technology about the rules and regulations and how to comply with those. So that's what we're covering on today's episode of the Voices of VR Podcast. So this interview with Jameson happened on Wednesday, January 18th, 2023. So with that, let's go ahead and dive right in.

[00:01:13.030] Jameson Spivack: My name is Jameson Spivack. I'm a Senior Policy Analyst at the Future of Privacy Forum, where I lead the organization's work on immersive technologies, which we define inclusively to include XR, so things like VR, AR, MR, as well as other similar technologies like immersive gaming platforms, certain neurotechnologies, and brain-computer interfaces. And that's how we define immersive technology. And as you can imagine, a lot of the focus has been on XR because that's a really big thing right now. So we've been doing a lot of work looking at what some of the privacy risks and privacy implications are of immersive technologies like XR, as well as how they intersect with other issues such as biometrics, youth privacy, ad practices, and then also how existing or emerging regulatory frameworks either apply to or maybe don't necessarily apply to the kind of data that is required for immersive technologies to function.

[00:02:18.914] Kent Bye: Nice. Yeah. Maybe you could give a bit more context as to your background and your journey into the space of this intersection between tech and policy.

[00:02:26.341] Jameson Spivack: Yeah, absolutely. So I actually started out as a political science major in undergrad, and I wanted to do international development. And so I worked in communications for an international relief and development organization for a couple of years. And I got really interested in the different kinds of uses of data, specifically data that was being used in communications and digital marketing. And at first, it wasn't a really critical approach to that. I was just kind of in awe at what data could do. And then I eventually went back to grad school where I studied tech policy, essentially, and took a more critical lens to what previously had really impressed me. And so it was during grad school and shortly after grad school that I became interested in police use of face recognition technology. And so for a number of years during grad school and after grad school, I worked with the Center on Privacy and Technology at Georgetown Law, which at the time was being led by Alvaro Bedoya, now FTC Commissioner. And at the Privacy Center, I focused on police use of face recognition, what the privacy risks are, and how it has a disparate impact on historically marginalized and vulnerable people. And then I sort of started expanding beyond police face recognition to look at algorithmic technologies in the criminal legal system overall. So not just police use of face recognition, but also risk assessment scoring, predictive policing, automated license plate readers, things like that. And then I heard that the Future of Privacy Forum was looking for someone to get their work stream on immersive technologies started. And I honestly didn't have that much of a background in it. I had done a little bit in grad school looking at the potential for VR to be used in mental health treatments, but this was years ago. But I was always interested in the cutting-edge uses of certain technologies. So first it was police use of face recognition, and now it's shifted to XR and other immersive technologies. So that's kind of a long-winded background of how I got to where I am now.

[00:04:31.390] Kent Bye: Okay. Yeah. And the Future of Privacy Forum I've come across, and I've seen that it's sort of an interesting mix, in that there are a lot of funders that are coming from industry. So as an example, Meta is a part funder of the Future of Privacy Forum, but you also get other grants. And so I always try to figure out the orientation of these entities, as to how you position yourself when you're getting funding from industry, but also from outside academic sources, and how you make sense of what the mission of the Future of Privacy Forum is.

[00:05:03.760] Jameson Spivack: Right. And that's a fair question. So like a number of other organizations in the space, we receive money both from industry as well as from philanthropic foundations. But we occupy kind of a unique space in the sense that we are a civil society organization, and we sit between industry, academic researchers, and advocates. And so we see ourselves more as conveners and consensus builders that try to work across these different sectors to build consensus, as opposed to lobbying for certain legislation or for certain outcomes. And so we do receive funding from industry supporters who are members, and we work with them to try to understand some of the challenges that they're facing as companies. We typically work with chief privacy officers and privacy compliance people within companies, or public policy people who are trying to implement and practice what the law requires of them, and they're kind of struggling with how to do that. And we try to help them understand the law and how to solve some of the challenges that they're coming across. But we also work with academic researchers and other people in civil society to highlight, beyond just what the law says, what the actual privacy risks involved are. And so it's kind of a unique space in that we work a lot more with people in industry than maybe my former organization, for example. But we're independent, and we try to be more about building consensus as opposed to lobbing bombs in either direction. So it's kind of a unique space.

[00:06:47.650] Kent Bye: Yeah, that helps elaborate a few things. And I have a few follow-up questions, because when I think about this tech policy space and all these ethical and moral dilemmas, I often refer to what Lawrence Lessig calls the pathetic dot theory, which I think could probably be described more as a socio-political-economic theory. It has these different spheres of influence where, at the largest sphere, you have the culture, and then the culture dictates what kind of laws are being set, and those laws dictate what kind of economic processes are happening with what's legal or not legal, with antitrust and whatnot. And then at some point you have the technological architecture and the code that is addressing these issues. And so from what I heard you just say, I understand that you're not necessarily trying to address these from a legal perspective. So you're not saying that we need any laws per se, or maybe we do need laws, but you're not specifically recommending any laws. You're looking more at these other domains of the tech architectures, and maybe some more of the economic or cultural approaches, and educating the people who are using the technology, as opposed to advocating for one law over the other? Is that an accurate representation?

[00:07:54.267] Jameson Spivack: So we engage in policy work to the extent that we understand how existing law or proposed law works, or we try to understand how it works, and we try to help our members or other stakeholders understand what the impact of a certain piece of legislation will be. I personally don't do any lobbying. We submit comments to the FTC, and when states are drafting privacy regulations, so for example in Colorado, we might submit comments. So we are involved in that process, but we're more about trying to understand what the practical impacts are going to be of a particular law. And we have a kind of unique position because we actually interface with people within companies, the people who are in charge of actually implementing the letter of the law. And so some of the challenges that a policy person outside of it might not understand about how a law actually comes into play, we might understand: okay, well, if you do this, then the company will be forced to do this, but it's going to be really hard to do it. And so we can provide that perspective that maybe would be missing. We do have our own mission and our values, and we do value, obviously, privacy; that informs the work we do. But we try to be practical about, okay, what's the actual effect of the law going to be? So hopefully that answers your question a little bit better.

[00:09:26.849] Kent Bye: Yeah, I think so. And I think as we continue to talk through some of these issues, I'll get a little bit better beat on some of the work that you've been doing, because back on October 31st and November 17th, you had published this flowchart that has the ecosystem of XR technologies. Those are some of the first things that you've been involved in publishing at the Future of Privacy Forum, but maybe you could take a step back and give a bit more context for the Future of Privacy Forum's work regarding XR. Because I know they had done a few initiatives, or at least did some webinars, and started to think about the intersection of XR. And then maybe when you came on board, what was the remit for starting to look at this issue, and why is that such a specific focus that you're looking at?

[00:10:08.807] Jameson Spivack: Yeah, absolutely. So as I'm sure you know, the Future of Privacy Forum has done a few projects on immersive tech. We put out a paper a couple of years ago on AR and VR that Joe Jerome and Jeremy Greenberg, now at Meta, worked on. We did a series of blog posts about brain-computer interfaces and a report on brain-computer interfaces. And so it's something that FPF had kind of been circling around for a while. And then over the past year or so, it crystallized that this is a hugely important bucket of technologies, XR and other kinds of immersive tech, and we just needed to be part of the conversation. So I joined in September of 2022, so I've been here at FPF for about five months. And so far, a lot of what I've been doing is, one, immersing myself, so to speak, in the world of XR and immersive tech and learning the language as much as I can from a technical perspective, but also from a policy and legal perspective. And then identifying two things. One, what are the kind of pain points for people that are developing or using these technologies? And then also, what are the unique privacy implications of them? It could potentially be argued that in some ways, certain parts of XR and immersive tech look sort of similar to what we've seen with traditional online media, with social media, with gaming in the past. And then it could be argued that there are unique challenges related to privacy that arise in XR and immersive tech. And then trying to identify what those unique risks are, and then how we can address them, both from a regulatory or legislative perspective, but also what can companies that are developing these technologies do? Even in the absence of requirements from legislation, what can they do? What can they build into their product to prevent or mitigate some of these risks that we're highlighting and that people have been starting to highlight for the past couple of years? So that's kind of what we're focusing on so far.

[00:12:20.833] Kent Bye: Okay. Yeah. And I had a chance to meet you at this symposium that was just at Stanford, at the Cyber Policy Center. It was on existing law and extended reality. So there was a wide range of different perspectives looking at the various different issues that come up. You know, Eugene Volokh was saying that most of the things that are happening in VR are not going to be different than, say, any other technology, whether it's intellectual property law or other existing ways that we are engaging with technology. But I do think that the issues of privacy, and the newer rights or newer privacy issues around the type of biometric and intimate data that you're able to gather from XR, are actually going to be different, to the degree that maybe we do need expansions of state laws. Illinois, for example, has very specific definitions in its biometric law that tie it to identity. And so finding ways to see if we need to have extra protections, or whether this should be considered medical data, which is what Rafael Yuste from NeuroRights has suggested, or if we need a more comprehensive federal privacy law to be able to address some of these issues. So I'd love to hear some of your initial take, you know, being at the symposium and hearing some of the different discussions that were there, and where you stand in terms of how to actually address putting some privacy guardrails around this XR technology.

[00:13:38.810] Jameson Spivack: Yeah, a lot to unpack there. First of all, I will say that Eugene Volokh and Mark Lemley's article on XR and existing law from a few years ago was really instrumental in my preparation for this panel and for thinking about this issue broadly. I agree with him that in certain cases, existing tort law or existing criminal law might be applicable to XR, but there are definite tension points, even setting aside privacy, which I'll get to after this. There are definite tension points in how we currently think about harms, how tort law can address those harms, how they exist in the physical world or in online spaces, and then how they exist or happen in XR, and specifically in VR. I think VR is kind of more what I focused on. So the major example people think of is assault and battery in VR. So VR is kind of the weird middle ground between the physical world and the virtual world. And in the physical world, assault and battery is pretty cut and dry. Assault requires the threat of physical contact or the fear of imminent contact, understood physically. And then battery would be the actual act of physical contact. That doesn't exist in social media or traditional online spaces because there is no fear of or actual contact. But a lot of the research on VR shows that certain experiences, a lot of experiences in VR, are more similar to their physical-world manifestation than they are to online. So assault in VR, a lot of people would dismiss it and say, oh, it's just like yelling at someone online, it's not an actual harm. But a lot of the research so far shows that psychologically, it feels like it's real. And so then the question is, okay, well, how should the law of assault deal with that? And so that's still an open question. And it definitely has not been decided. And I think that that'll continue to evolve as more research comes out about how our bodies and our minds react to stimuli in VR, and there's more public recognition that, okay, this is not just someone on social media yelling at you. It's different. So that's my reaction to some of that panel, which was also very fascinating. In terms of privacy, XR relies on large volumes and varieties of data about users' bodies, their environments, the devices that they're using, how they're using the devices, and, in the case of some head-mounted displays, even bystanders potentially, if you're using it in public and someone else walks by and it collects data about them. Not only does it collect this data, it relies on it. The technology doesn't function nearly as well, it's not nearly as immersive, if it doesn't collect this data. This data is what allows it to create these really captivating experiences, but it can also be used to provide an incredibly intimate view of people's interests, beliefs, behaviors, and potentially characteristics about their body. There have been a number of studies that show that data collected in XR, particularly when combined with other XR data or with external data, can be used to make inferences, whether they're accurate or not, about personal characteristics, such as a person's gender, their age, their race or ethnicity, their socioeconomic status, health conditions, so sensitive information. And obviously, in the hands of nefarious actors, this data could be used to make discriminatory or otherwise harmful decisions about people.
So if you look at how existing laws, specifically privacy laws, but I can also talk a little bit about tort law, might cover the data that's collected, used, and shared by XR devices and apps, there are a number of things you see. So for privacy law specifically, as you probably know, in the US we don't have a comprehensive federal privacy law. Instead, privacy legislation is mainly sectoral, so it applies in specific contexts. It applies in the context of a relationship between a health care provider, or in some cases a third party, and the patient, or in the education setting. So it's only in a very specific context that it applies. A few states have their own comprehensive privacy laws, which may cover some of the data or some of the uses of the data in XR. But generally, these kinds of laws rely on a notice and consent or notice and choice framework. The company or the entity provides notice of data collection or use. The user or consumer checks a box, says yes. Done. That's it. I can talk a little bit more about that later. So separate from the privacy laws, you can look at biometrics laws. This is relevant because a lot of the XR data is collected from our bodies, or it's about them. There's no federal biometric privacy law. A few states have biometric privacy laws, but generally speaking, these only apply to data used for identification purposes. So a fingerprint, a face print for face recognition; it's generally tied to identity. There might be some exceptions to this, and this is relatively new, so we're keeping an eye on case law to track the progress of this. But if you look at Illinois' Biometric Information Privacy Act, which is the most illustrative of where biometric privacy case law is going, it suggests that certain features within XR that use certain kinds of data may actually be covered. Maybe. It depends. So virtual try-on apps, a lot of AR applications specifically, you know, allow you to try on glasses or makeup. They might, in some cases, be regulated as a biometric because of the way that BIPA, the Biometric Information Privacy Act, defines biometrics. It includes a scan of facial geometry. And so there have been a few cases that have tried to argue that because the virtual try-on app includes a scan of your face, a scan of facial geometry, it should be regulated and have to ask for consent before collecting the data. A lot of these cases reach a settlement, so it's unclear what the precedent is. But it's something that we're watching. It's something to keep an eye on as an indication of where things are going. So that's privacy laws, biometric laws. And speaking of case law and biometrics, this is the case not just for the virtual try-on apps that scan your face, but also potentially voice data that's collected, or eye-tracking data. Again, unclear what the case law would say, but these cases are being brought, at the very least. Looking at tort law, so bringing civil action for a harmful act, there could be two potentially liable actors. There could be the company or the platform that is collecting the data, in the case of someone alleging a privacy violation, a privacy tort, or there could be another user. The issue with both of them is that, for the company, the company will typically disclaim any legal liability for conduct on or associated with their product, and they will disclaim this liability in the terms of service.
So when you click yes to the terms of service, or you use it and you consent to it, you've waived any claims; they have disclaimed liability. This is usually enforceable. Not necessarily always, but it usually is enforceable. Courts are likely to view a user putting on a headset as consenting to whatever is in their terms of use. And Section 230 of the Communications Decency Act provides immunity from liability for companies in the case of other users' or third parties' conduct on the platform. So in a lot of ways, the companies themselves are not going to be able to be held liable. In the case of other users, Professor Volokh talked about this in his panel at the symposium and has talked about it in his paper: the Bangladesh problem, as he calls it, where it's really hard to find users to name as a defendant in cases of harmful conduct. So if another user has done something to harm you, including privacy violations, it's going to be really hard to find them in the first place. So that kind of limits the applicability of the privacy torts in this space. So that's a very in-depth overview of things. Happy to go into more detail about any of them, or yeah.

[00:22:30.287] Kent Bye: Yeah. Just a quick comment on the term, the Bangladesh problem. Brittan Heller and I suggest maybe a better term could be the global jurisdiction problem, which I think is a little bit less connected to any specific country. But yeah, absolutely. There are a couple of points there. I wanted to dig into a little bit of this whole idea of notice and consent, and whether or not what we're having with these adhesion contracts is actually informed consent, and whether or not there's purpose limitation. In terms of GDPR, there seems to be a little bit more of a paternalistic approach, which, you know, Dr. Anita Allen, who is one of the founders of the philosophy of privacy, in her presidential address that she gave at the American Philosophical Association in 2019, which I happened to be at, she's speaking about the philosophy of privacy and digital life and pointing to these more paternalistic approaches, like GDPR, that are trying to establish more of a human rights framework, taking a little bit of the onus off of the user and saying, hey, maybe there are some generalized protections that the user should have, since they don't necessarily know what they're always consenting to. It seems like this notice and consent is a bit of a loophole for a lot of these different types of laws: if you can just get the user to agree to whatever terms you have, you can use the data however you want. It's almost like this data colonization, where they measure the data, they own it, and they own different aspects of your identity and can use it for pretty much any context that they want, pretty much whatever they list. It's hard to see, from Helen Nissenbaum's contextual integrity perspective, how they're limiting what the use is. I can understand with some of the immersive technologies that you need to have the data in order for the technology to work. But that's a whole other question when you're taking the eye-tracking data and then being able to do real-time biometric inferences, tracking what I'm looking at, or what Brittan Heller calls biometric psychography, which is doing this contextual relationship of what my likes, dislikes, and preferences are, and doing this whole psychographic profiling and mapping out my identity in a certain way. So there seem to be these fundamental human rights questions of what is our right to identity, our right to mental privacy, our right to agency and to be prevented from being nudged. And this consent framework of informed consent is, from what I can see, a bit of a loophole for the companies to do whatever they want. And there are basically no privacy protections after that.

[00:24:46.152] Jameson Spivack: Yeah, absolutely. It's becoming increasingly clear that, in and of itself, by itself, the notice and consent, notice and choice framework is just insufficient to address the risks of immersive tech. And not just of immersive technologies; it's increasingly not sufficient to address the risks of really all tech and all data collection, as you said. And plus-one-ing everything that you said: privacy is relational. You know, it can't always properly be controlled by individuals controlling their own data, because so much data is about interactions between people. And so trying to regulate it on a personal level doesn't take into account this relational aspect. Privacy is contextual, as you said with Helen Nissenbaum; it's affected by particular environments and the participants in them. And data collected for one purpose can sometimes be repurposed and used in a different context or for a different purpose for which consent was not necessarily given, in which case consent is kind of meaningless. A lot of times, users will not read privacy policies because they're really long. And even if they did read them, they wouldn't necessarily understand them. They don't really know what they're providing consent to. There's consent fatigue; you just end up consenting to everything. And that kind of renders this idea of consent a little bit meaningless in some cases, by itself. When you apply that to immersive tech specifically, these technologies rely far more on potentially intimate data than do websites and traditional online spaces. You have eye tracking, you have hand and body tracking, you have facial expressions, wingspan, height, all this data that the technology itself relies on. You can't get around collecting it if you want to use a lot of the functions of these technologies. And certain immersive technology applications necessitate or more easily facilitate inferential processes, so using algorithms to make inferences about people, their likes, their interests, behaviors. This is already true to some extent in online spaces for things like marketing and ad targeting, but it becomes even more granular and all-encompassing within immersive technologies like XR. And so there's this deeper question of, do we need to reconceptualize our notion of data privacy, particularly as improvements in machine learning and more granular data collection render this distinction between sensitive and not-sensitive data less meaningful? If a certain piece of data that's collected, that seems innocuous or that a user doesn't know about, could be used to reveal something more sensitive when combined with other data or used in other contexts, if that could be made into sensitive data, this distinction between sensitive and not sensitive sort of starts to break down. So do we need to shift our understanding of data regulation away from looking at types of data that are collected and user controls over that data, and more towards actual uses of data? So what inferences are being made, or what purposes is that data being used for? And data controllers, the people that are controlling that data. Which is not necessarily to say that user controls, such as rights to access and deletion, the ability to move your data, the ability to opt in or out in certain cases, that that's meaningless and that we should get rid of it, or that it's not useful. But increasingly, maybe that's just not enough, and we need to focus more on the responsibilities of data controllers. Maybe that would be more effective.
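To make the shift from regulating data types to regulating uses a bit more concrete, here is a minimal sketch of a purpose-limitation gate, written in Python with entirely hypothetical names (this is not an FPF proposal or any real platform's API): each collected record carries the purposes that were disclosed to the user, and downstream code has to declare a purpose before it can touch the data, so undisclosed repurposing fails loudly instead of silently succeeding.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Purpose(Enum):
    """Purposes a data record may be processed for (hypothetical taxonomy)."""
    RENDERING = auto()          # e.g. foveated rendering from gaze data
    AVATAR_EXPRESSION = auto()  # syncing facial expressions to an avatar
    SAFETY_MODERATION = auto()
    AD_TARGETING = auto()
    EMOTION_INFERENCE = auto()


@dataclass
class DataRecord:
    """A piece of collected data tagged with the purposes it may serve."""
    kind: str                      # e.g. "eye_gaze", "facial_expression"
    payload: bytes
    allowed_purposes: set[Purpose] = field(default_factory=set)


class PurposeLimitationError(Exception):
    pass


def use_record(record: DataRecord, purpose: Purpose) -> bytes:
    """Gatekeeper: every downstream consumer must declare its purpose.

    Using data for a purpose that was never disclosed or consented to raises
    an error instead of silently succeeding, which is the point of
    use-based rather than collection-based control.
    """
    if purpose not in record.allowed_purposes:
        raise PurposeLimitationError(
            f"{record.kind} data not authorized for {purpose.name}"
        )
    return record.payload


if __name__ == "__main__":
    gaze = DataRecord(
        kind="eye_gaze",
        payload=b"\x00\x01",
        allowed_purposes={Purpose.RENDERING},
    )
    use_record(gaze, Purpose.RENDERING)  # fine: disclosed purpose
    try:
        use_record(gaze, Purpose.EMOTION_INFERENCE)  # blocked: undisclosed repurposing
    except PurposeLimitationError as err:
        print(err)
```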

[00:28:16.072] Kent Bye: So I guess, where do we go then? Because it seems like we have a broken system, and even the proposed laws still aren't fully covering it. It seems to me that the human rights approach that the EU takes, treating privacy as a human right grounded in dignity, is key here. You know, there was a paper that Dr. Anita Allen pointed to called Towards a New Digital Ethics, written by the EU in 2015, leading up to the GDPR, talking about how they wanted to have this feedback loop between these fundamental rights of privacy and dignity and how those were feeding into the privacy architectures. And from my perspective, that seems to be probably one of the most effective changes that I've seen since I started covering this in 2016, in that GDPR actually forced a lot of these architectural changes from a tech perspective, but there's also the enforcement that is still ongoing in terms of whether or not some of these different existing practices are illegal under GDPR, and will that be enforced? So we have the US, which is a whole other issue. But it seems to me, as a metaphor, that what the EU is doing is a good five to ten years ahead of what the US is doing. And so we're playing catch-up to some degree. Maybe the state laws like California's are mimicking different aspects of GDPR, but still at that point, you have a patchwork of all these different states, and depending on what state you live in, you get different protections. And so it seems like this is a fundamentally broken problem. Either surveillance capitalism is going to flourish and continue and we're going to have no privacy, or there's going to be pushback from the regulators that are trying to stop some of these different, more pernicious aspects of surveillance capitalism. But at that point, they need to find new business models in order to exist, and the access to the technology is going to be potentially diminished. And so what's the path forward with what seems to be an already broken system? And what are the solutions to start to make it so that our privacy is actually protected?

[00:30:07.107] Jameson Spivack: I mean, that's a great question that I wish I had an answer to. I think that there are certain things that GDPR does that work well and that are right and that would apply specifically, and I'm thinking here about XR and immersive technologies. So for example, data minimization and purpose limitation. So XR providers must limit the collection of data to what's necessary for the functioning of the technology and ensure that it's not further processed in an incompatible manner. This is something that we recommend; we have a set of best practices that we recommend to companies that are developing or deploying XR to better protect privacy, even if it's not legally mandated. The GDPR, though, legally mandates the data minimization and the purpose limitation, and I think that that's right. One of the things that we also talk about is local, on-device processing, which allows for this data to be collected for the purpose of enabling a certain feature. So for example, with eye tracking, storing that eye-tracking data on the device would allow the eye-tracking function to exist, but not send that data back to the server of the company that developed it. That's a way to allow the technology to be used, but protect against some of the more pernicious uses, some of the surveillance capitalism, as you said. So there are certain technical things that can happen that are not legally mandated at the moment, but that I think could really help stop some of the more pernicious practices that we see. And yeah, I'm honestly not an expert on GDPR, so I can't speak as much in detail about the specifics of it or about how it would apply in the US. So yeah, if you ask some more specific questions, I can get into it, but I'm not an expert on GDPR.
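As a rough illustration of the on-device processing pattern Spivack describes for eye tracking, here is a minimal sketch, again with hypothetical names and no relation to how any actual headset vendor implements it: raw gaze samples live only in a short, automatically expiring local buffer, and the only thing exposed to applications, or to anything off-device, is a coarse derived signal.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean


@dataclass
class GazeSample:
    x: float  # normalized screen coordinates, 0..1
    y: float


class OnDeviceEyeTracker:
    """Keeps raw gaze samples in a short-lived local buffer only.

    Nothing here ever transmits the raw sample stream; the only output is a
    coarse derived signal (which region of the view currently has focus),
    which is enough to drive features like foveated rendering or gaze UI.
    """

    def __init__(self, window: int = 30):
        # Old samples fall off the back of the deque automatically.
        self._buffer: deque[GazeSample] = deque(maxlen=window)

    def ingest(self, sample: GazeSample) -> None:
        self._buffer.append(sample)

    def focused_region(self) -> str:
        """Derive a coarse, lower-risk signal: which quadrant the user looks at."""
        if not self._buffer:
            return "none"
        cx = mean(s.x for s in self._buffer)
        cy = mean(s.y for s in self._buffer)
        horiz = "left" if cx < 0.5 else "right"
        vert = "top" if cy < 0.5 else "bottom"
        return f"{vert}-{horiz}"


if __name__ == "__main__":
    tracker = OnDeviceEyeTracker()
    for _ in range(10):
        tracker.ingest(GazeSample(x=0.2, y=0.7))
    # An app can react to this without ever receiving the raw gaze trace.
    print(tracker.focused_region())  # -> "bottom-left"
```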

[00:32:09.238] Kent Bye: Yeah, I think it just points to a system that I think is working, and there are other states that are taking inspiration from some of that. And I have some other things that I'm doing that are digging into more of the nuances of that. But one of the things I wanted to maybe elaborate on was what I see as this paradigm shift between seeing privacy as just connected to your identity versus privacy as connected to the different types of inferences that you could be making based upon that data. So identity-based harms versus inference-based harms. And I think one of the things that I've noticed, at least as a rhetorical strategy that Meta will use, is that they will focus on, say, there are a lot of scary things about eye tracking, and here's how we personally are going to implement that by only having on-device data. There's data that's on-device, and there's a big concern around that data potentially having personally identifiable information in it. So they're really focusing on the identity harms of leaking that information, because you could be identified based upon that information. But the other thing is that there are all these real-time biometric inferences that you could be doing on that data, and I haven't seen anywhere where they're preventing themselves from doing that, aside from maybe the blog posts where they say they're not, declaring their intention. But at the same time, they can still do all this real-time biometric processing. So it's a bit of a shell game, in a way, of really focusing on only having on-device data, but not preventing themselves from doing real-time inferences, because it's kind of treating it as an object-oriented aspect of identity harms, while ignoring the inference-based harms. The real-time inferences are really where the gold is in terms of what type of information they could get from that. And as far as I can tell, the focus on on-device processing doesn't prevent these other things that are maybe even more harmful.

[00:33:50.640] Jameson Spivack: Yeah, absolutely. So I will say that data, and the use of data, that is uniquely identifying, as you mentioned, so like face recognition, which is tied to identification, is different from data that is derived from and about our bodies but is otherwise not uniquely identifying or used for identification. I can understand separating them and treating them differently in terms of regulation, because they pose different risks. For uniquely identifying data, it's the literal piece of data, so a face print, that's sensitive. Whereas for other kinds of body-based data, it's the inferences made based on that piece of data that are sensitive. So using eye gaze data to infer someone's sexual orientation or something: it's the inference itself, not necessarily the piece of data. But as algorithmic systems evolve, it is likely that they'll just get better at taking the body-based data that we currently think of as not uniquely identifying or not particularly sensitive and making it either uniquely identifying or sensitive, because it can reveal sensitive information. So if you take the example of gait, there's some research showing that you can identify people by gait, or you can identify certain characteristics about them, but it's not really used that way on a wide scale. But it absolutely could be someday. So right now, it might not be seen as sensitive or identifying; in the future, maybe it could be seen as both, or either identifying or sensitive. So we're not quite there yet, but the question is, how can we craft regulations that evolve as the technology does, recognizing that technology is improving in that way? And that is something that we need to be thinking about. Speaking of the GDPR, in their approach, biometric data is still tied to data that is uniquely identifying. However, it also requires data controllers to get explicit consent from users when making inferences. So it's still the notice and consent model, but at least when the inference is being made, they provide notice to the user and require consent. I think it's for racial or ethnic origin, political opinion, religion, sexual orientation, and a few other traits that are covered under that. But my speculation is that, and I don't work for a company, so I don't know, my speculation is that the dynamic that you highlighted about really emphasizing the identification aspect, or when Meta says, we do not store raw eye image data on the device, but then doesn't say anything about what is done with the processed data, like where does that go? Who is it shared with? What purpose is it used for? I wonder if that has something to do with privacy compliance within the organization, or within organizations, because there is such an emphasis on complying with the letter of biometric privacy laws. And so that becomes the emphasis, as opposed to, okay, well, what about these other uses and these other concerns that are not strictly covered by biometrics? And obviously, companies have an incentive to have less data defined under the statute as biometrics and have less data be regulated. But I wonder if that dynamic that you highlighted is somewhat a result of just trying to comply with the letter of the law.

[00:37:06.725] Kent Bye: Yeah. And I think that's my complaint, is that, you know, I'd love to have a chance to talk with Meta at some point to really dig into some of these issues, because, you know, I might be wrong in some of the conclusions that I'm coming to. But I think you're probably right in the sense that they're looking at the ways that biometrics are currently defined, and therefore they're following the existing laws. But I think with Brittan Heller's work on biometric psychography, she's trying to point out that there are actually gaps in the existing laws that aren't covering this type of data and that we actually need to update those laws. And talking to Daniel Leufer from Access Now, there's actually a lot of pending legislation in the AI Act that is going to potentially have new definitions of biometric data that could influence the definitions of GDPR. So there are ways in which that's still yet to be passed and determined, because they're still in the trilogue deliberation process of actually finalizing that. But there are sections in the AI Act that start to elaborate other definitions of biometrics that maybe are more generalizable than just tied to identity. So that's still yet to be determined. So from the EU perspective, there could be ways of addressing some of those gaps, but here in the United States, there are still those huge gaps. I wanted to ask a little bit about privacy harms generally, because I've talked to Katitza Rodriguez from the Electronic Frontier Foundation, and she was saying that, you know, privacy harms are invisible. It's hard to always know what kind of harms are happening, because sometimes you don't always know what types of things are happening behind the scenes. Like if you don't get a job, as an example, that could have been based upon information that was leaked out, and you may have no idea why you were discriminated against. But I'd love to hear from your perspective if a big part of trying to address these issues is creating a taxonomy of these different types of harms. Rather than doing a human rights approach of naming these human rights, are there ways of creating a list of all the different harms and trying to create the legislation around those harms?

[00:39:00.685] Jameson Spivack: Yeah. So one thing that comes to mind is actually, you just mentioned the AI Act, and their approach to regulating AI is a risk-based framework, looking at what the high-risk uses of AI are and trying to regulate around that, as opposed to just all uses of AI writ large. And so I think that that is a potential approach. You mentioned a taxonomy. So trying to identify what uses of particular kinds of data might be high risk, and then identifying what those risks are, what the harms are, and who is harmed by them. That's something that I've actually been thinking about specifically in the context of eye tracking and body-based data more generally. If you look at the actual data that's collected and used in eye tracking, some of it's probably lower risk than other data, or other uses rather. So the use of facial expression data to sync your facial expressions with your avatar may be a lower-risk use. Using facial expression data to infer someone's mood or internal state is probably a high-risk one. And so thinking about it in that sense, high-risk uses, and then identifying what the actual risk of harm is and who could be harmed by it, including, you know, is there a disparate impact? Is there a bias involved? Including those harms as well. That is a potential approach to regulating. Even companies could implement this internally if they wanted to. They don't have to wait for the law to tell them to do that, if they really wanted to get the goodwill of people. I think that that is a potential approach, and it's something that you see with the AI Act, for AI specifically.
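One way to picture the risk-based, use-focused approach Spivack sketches here is a simple lookup table from data-type-and-use pairs to risk tiers, in the spirit of the AI Act's tiering. The classifications below are purely illustrative, drawn from the examples in the conversation, not anything a regulator or FPF has actually published.

```python
from enum import Enum


class Risk(Enum):
    LOW = 1
    HIGH = 2


# Hypothetical mapping from (data type, use) to a risk tier. Real
# classifications would come from regulators or an internal review
# board, not a hard-coded table like this.
USE_RISK = {
    ("facial_expression", "avatar_sync"): Risk.LOW,
    ("facial_expression", "mood_inference"): Risk.HIGH,
    ("eye_gaze", "foveated_rendering"): Risk.LOW,
    ("eye_gaze", "interest_profiling"): Risk.HIGH,
    ("gait", "avatar_animation"): Risk.LOW,
    ("gait", "identification"): Risk.HIGH,
}


def requires_review(data_type: str, use: str) -> bool:
    """High-risk (or unknown) uses get flagged for extra safeguards or consent."""
    return USE_RISK.get((data_type, use), Risk.HIGH) is Risk.HIGH


if __name__ == "__main__":
    print(requires_review("facial_expression", "avatar_sync"))     # False
    print(requires_review("facial_expression", "mood_inference"))  # True
    print(requires_review("eye_gaze", "something_new"))            # True (default)
```

Defaulting unknown uses to high risk is one deliberate design choice here: a new inference has to be reviewed before it is treated as benign.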

[00:40:48.817] Kent Bye: I'd love to hear any other comments in terms of tech architecture best practices. One of the things that has come up is different aspects of differential privacy or homomorphic encryption, which could require additional processing but provide extra privacy protections. As far as I can tell, a lot of these companies have resisted adding some of these different things that some privacy advocates are calling for. So I'm just curious to hear if you have other technical architecture approaches to be able to address different layers of privacy within XR, things like differential privacy, homomorphic encryption, or anything else.

[00:41:24.190] Jameson Spivack: Yeah, absolutely. I mean, there are definitely practices that organizations can adopt themselves. So I mentioned before, on-device processing and storage: when possible, to ensure that the data remains in the user's hands, that it's not necessarily sent back to a server or to a third party, not necessarily accessible to them, process and store it on the device. Purpose limitation and data minimization, so only collecting the data that's absolutely necessary for specific purposes. Implementing certain privacy-enhancing technologies like you mentioned, so end-to-end encryption or differential privacy or using synthetic datasets. And admittedly, this is an area I'm not as familiar with, and I know it's easier said than done to say, oh, just do differential privacy, and then, you know, the computer scientists and computer engineers will be like, well, what does that mean? You can't really do that in this case. So ideally they would implement these, but I realize it's easier said than done. Other things that they can do are implement certain protections for bystanders, so people that are not using, like, a head-mounted device but might be incidentally captured: automatically blur bystanders' faces so that their face isn't collected. I think having strong cybersecurity practices to prevent hacking of potentially sensitive data is important; cybersecurity is important in any context, doubly so if the data that is collected about people is potentially as sensitive or intimate as it is with XR. And then I think that there can be default settings on these devices that are more privacy-protective or more safety-protective as a default. So requiring you to opt in to something like eye tracking, as opposed to setting the default so that you're already using it, or defaulting to safety bubbles around avatars to prevent harassment or assault. And then this is not really at the architecture level, it's more at the internal organizational policy level: actually enforce third-party data policies. A lot of organizations have policies for third-party developers or others who are creating apps on their platform, in terms of how the data can be used and everything. But actually enforcing that and making sure they comply with it can be really, really challenging, and it requires, I think, a lot of resources to do so. But without enforcement, you don't really know what third parties are doing with your data. And so investing in that is also something they can do.
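Of the privacy-enhancing technologies mentioned above, differential privacy is one of the easier ones to show in miniature. Below is a minimal sketch of the classic Laplace mechanism applied to an aggregate telemetry count before it leaves a device; the function names and the epsilon value are illustrative assumptions, and a production system would also need careful privacy-budget accounting across repeated queries.

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise (the classic differential privacy mechanism).

    Noise scale = sensitivity / epsilon; a smaller epsilon means more noise and a
    stronger privacy guarantee. This is meant for aggregate telemetry, e.g.
    "how many sessions enabled hand tracking today," not for raw per-user data.
    """
    return true_count + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    # Report an aggregate usage statistic off-device, with noise added locally first.
    print(dp_count(true_count=1423, epsilon=0.5))
```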

[00:43:49.531] Kent Bye: Yeah, because when third-party developers have access to a lot of the same data, you know, Meta may be following certain protocols, but that doesn't necessarily mean that the developers will automatically be following those same protocols. So yeah, that's right, another potential loophole. I think looking at things like Cambridge Analytica, we can see how, even though Facebook had a policy of not selling data, when they get data into the hands of a third party and that third party sells it, then, you know, it's still functionally the process of selling data. So yeah, I wanted to share my screen briefly and have you just make some comments on this big, epic flowchart where you're able to look at all these different things, called Extended Reality Technology and Data Flows. I'd love to hear you talk about some of the highlights that you have in this. There are a lot of things happening in it, but what was the intention of trying to map out all these different things and present this to the wider community?

[00:44:40.820] Jameson Spivack: Yeah, so the purpose of this is to try to illustrate what's a really complicated web of data and processes and sharing. And so the way that it's organized is, at the top level, you have the kinds of data that are collected and the different kinds of data. So you have sensor data, which is collected by sensors. That includes IMUs, which measure the orientation of your device. Inward-facing cameras that capture your face, your eyes. Outward-facing cameras that can capture your body movements. Microphones that capture your voice. So sensors that are just collecting data about you, how your device is moving, and potentially your environment. Also on the same top level, you have usage and telemetry data, so that's how you're using the apps on an XR device. The XR device data, which is information about the actual device itself, not just how you're using it, but the device itself. And then location data, so precise or approximate geolocation data. And so the purpose is to show that these are all the different data sources that come together to provide certain XR use cases. And those XR use cases are what's central in this illustration. So, you know, shared experiences, so that you can play virtual table tennis with someone. Optimized graphics, where a lot of the sensor data, or some of the sensor data, is used to make sure that what you see is crisp and clear and it's not blurry and you don't get motion sick and things like that. User authentication, in the case of iris or retina scanning, so that you can log into your profile. Personalized content based on body-based data. And then expressive avatars, which is looking at how you move your face and then mapping that onto your avatar, so that when you're interacting with someone in XR, there's more embodiment and you feel like you're actually communicating with someone. So that's kind of the base layer. And then below that, we tried to show that once you have all this data, it has to be processed. And so there are certain algorithms and certain data processing systems that take all this data and make meaning out of it. And those are powering the functions of these different technologies. And then beyond that, there are also the risks that are associated with this data collection and data usage. And we go into more detail about what those are in one of our blog posts, but we wanted to include that as an important element of the conversation. So that's kind of a general overview of the graphic. We're just trying to take a really complex web of data and processes and sharing and use cases and show it in a way that is actually easy to understand.
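For readers who think in data structures, the top of the graphic can be restated as code: data categories feeding into XR use cases. The groupings follow Spivack's description above, but the exact identifiers and the specific source-to-use-case links are illustrative shorthand, not a faithful copy of the FPF infographic.

```python
# Data categories and XR use cases, loosely following the description of the
# FPF data-flow graphic. Identifiers and the exact category-to-use-case links
# are illustrative assumptions, not the infographic's actual contents.
DATA_SOURCES = {
    "sensor": [
        "imu_orientation",
        "inward_camera_face_eyes",
        "outward_camera_body_environment",
        "microphone_voice",
    ],
    "usage_telemetry": ["app_usage", "session_metrics"],
    "device": ["hardware_model", "firmware_version"],
    "location": ["precise_geolocation", "approximate_geolocation"],
}

USE_CASES = {
    "shared_experiences": ["sensor"],
    "graphics_optimization": ["sensor", "device"],
    "user_authentication": ["sensor"],   # e.g. iris or retina scanning
    "personalized_content": ["sensor", "usage_telemetry"],
    "expressive_avatars": ["sensor"],    # facial movement mapped to an avatar
}


def sources_for(use_case: str) -> list[str]:
    """List the raw data items that feed a given XR use case."""
    return [
        item
        for category in USE_CASES.get(use_case, [])
        for item in DATA_SOURCES[category]
    ]


if __name__ == "__main__":
    print(sources_for("expressive_avatars"))
```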

[00:47:34.508] Kent Bye: Yeah, that's really helpful. And I think you did a great job of pulling in all the different sources, and yeah, with the blog posts to provide more details on some of those things. The thing that comes to mind as I look at this is that, you know, there are very purpose-driven uses of that data, but there are also a lot of ways that all that data could be used for other purposes, and as of right now there are no constraints as to what kind of inferences are made on any of this data. So yeah, that's the thing I'd point out as the gap that still needs to be solved, the thing that we were kind of talking about earlier. Yeah, so as we start to wrap up, I'd be curious if you have any final thoughts on what the ultimate potential of all these technologies is, and what needs to happen, from your perspective, in order to kind of live into the most exalted future of this tech without sleepwalking into some sort of Big Brother dystopia.

[00:48:21.039] Jameson Spivack: Yeah, absolutely. So, I mean, first off, I think these technologies are really exciting. They're just really cool. I mean, they have a lot of potential, not only for entertainment and gaming and things that are really fun, but also in the health setting for treatment, for diagnostics, for training doctors and nurses, in industry and manufacturing to train people, in education. There are just a lot of really awesome applications. But at the same time, the level of specificity that you can get into with the data collection, and the intimacy of some of the inferences that can be made about people, is worrying if it's left unchecked in the way that it kind of is right now. And so I think that, you know, a starting point is comprehensive federal privacy legislation in the US, whether that is ADPPA-style legislation or something else. I think that's a good start. It's kind of the baseline that we need, and then we can build on it, much in the same way that in the EU, the GDPR and the ePrivacy Directive serve as a foundation, and then they have the Digital Services Act and the Digital Markets Act that might fill in the gaps in certain places, and it can be built on. But we need that federal privacy legislation in the first place. And I'm not going to predict anything about 2023 or 2024, because I honestly have no idea. I'm not hopeful, but also I'm not ruling it out. So, I mean, I can only hope that that's coming our way.

[00:50:02.877] Kent Bye: Yeah. To be determined, I guess. Awesome. Well, I appreciated hearing all your thoughts here and the work that you're doing at the Future of Privacy Forum to try to at least map out these ecosystems and identify some of the different things that need to be done, potentially at the tech level, or in terms of consumer awareness of some of these different issues, as people look at these different graphics and become aware of ways that they might want to advocate for their privacy. But I think also there are other tech policy or legal things that ultimately are going to have to come in one way or another, like you said, with a federal privacy law, or the GDPR, the AI Act, or other things. But yeah, I just wanted to thank you for coming on the podcast and helping to elaborate and break it all down.

[00:50:37.733] Jameson Spivack: Awesome. Thank you so much for having me.

[00:50:38.854] Kent Bye: It was great talking. So that was Jameson Spivack. He's a senior policy analyst at the Future of Privacy Forum, and he's leading up their work on immersive technologies, including XR, VR, AR, and mixed reality, as well as neurotechnologies, brain-computer interfaces, biometrics, ad practices, and regulatory frameworks. So a number of takeaways from this interview: first of all, I think this gives a good overview of what's happening here in the United States in terms of this issue, and it's kind of a wait-and-see to see if we actually have some movement on a federal privacy law. There does seem to be some effort to have a uniform law that would maybe preempt all the different state laws. But as of right now, as far as I can tell, there doesn't seem to be much political will to come up with any radical breakthrough when it comes to a comprehensive federal privacy law. I put most of my hope, when it comes to this, in looking at what's happening in the European Union, which seems to be anywhere from 5, 10, 15, to 20 years ahead of where the United States is in terms of regulating some of these issues. And I'll be digging into some more EU-specific analysis here in the next couple of interviews after this one, just to kind of dig into what's happening there. And also, the Future of Privacy Forum is a bit of an enigma to me, in terms of how they're sitting in this interesting space where they're sponsored by a lot of these companies like Meta, but they're also doing more privacy advocacy, though not necessarily advocating for any specific policy or lobbying Congress or anything. They're sitting in this interesting middle ground where they may be acting as a liaison between the academics who are doing the research and the companies they're interfacing with, but not necessarily proposing anything super radical in terms of what we should be doing with privacy. And when it comes to policy, they're mostly reacting to what's happening in the landscape, tracking it and understanding it, but also doing some of these explanations. So they have a whole flowchart for understanding extended reality technology and data flows, the privacy and data protection risks, and mitigation strategies. And so I have that graphic; I'll link to it as an image, and you can also go look at the high-resolution version of it to see the many different ways they're trying to map out the different dynamics of the ecosystem. So they're kind of tracking this beat in terms of, you know, what the cutting-edge discussions are, but I didn't necessarily get any insights in terms of, like, how the neurorights are going to actually be implemented here in the context of the United States. Again, it's a little bit hard for me to know exactly where they're oriented in terms of some of these different discussions, but it's always good to hear a little bit of an overview reflecting on what's happening in some of the different technological architecture, what we can do in terms of consumer education, the different challenges of informed consent, and, yeah, generally how to think about these issues of privacy and intersections with XR. I think it's one of the biggest issues, and there are some unanswered questions. I put more hope into what's happening from the EU, and maybe some of those EU regulations will be filtering down into the technological architecture level. But in terms of all the consumer protections, we're not necessarily going to get the same protections here in the United States.
So I'll be digging more into the EU perspective in the next couple of episodes, because, like I said, the EU is really on the frontier of pushing these issues forward. There's a lot of discussion that's happening within the context of the AI Act, as well as some potential modifications that need to happen with GDPR, that's the General Data Protection Regulation, which California is mimicking in different ways. And at the state level, there are going to be potentially more movements over the course of the next year that start to provide more consumer privacy protections in the United States. And maybe as you have more of a fragmented landscape amongst all these different states, that'll put more pressure to have more of a comprehensive federal approach. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
