Ellysse Dick is a policy analyst who has written 10 technology policy publications about VR & AR over the last year for the Information Technology & Innovation Foundation. The ITIF is a non-partisan, non-profit tech policy think tank, which says its mission is to “advance innovation,” believes “disruptive innovation almost always leads to economic and social progress,” has a “considered faith in markets and businesses of all sizes,” and believes in “deftly tailoring laws and regulations to achieve their intended purposes in a rapidly evolving economy.” In other words, they lean towards libertarian ideals of limited government in order to avoid reactionary technology policy that might stifle technological innovation.
While the ITIF is an independent organization, their tech policy positions align strongly with the types of arguments I’d expect to hear from Facebook itself. The ITIF lists Facebook as a financial supporter, and Facebook has listed the ITIF as one of the organizations that is part of Facebook’s Political Engagement. But Facebook also says “we do not always agree with every policy or position that individual organizations or their leadership take. Therefore, our membership, work with organizations, or event support should not be viewed as an endorsement of any particular organization or policy.” And Dick says that she maintains editorial independence over the type of tech policy research she’s doing within VR and AR. That all said, there’s likely a lot of alignment between the ITIF’s published tech policy positions and the implicit and often undeclared policy positions of Facebook.
Ellysse Dick has written about XR privacy issues in these three publications:
- Why Facebook’s Project Aria Makes the Case for Light-Touch Privacy Regulation (October 26, 2020)
- How to Address Privacy Questions Raised by the Expansion of Augmented Reality in Public Spaces (December 14, 2020)
- Balancing User Privacy and Innovation in Augmented and Virtual Reality (March 4, 2021)
One really interesting insight Dick offers in her December 14th piece on augmented reality and bystander privacy is that there are already a lot of social norms and legal precedents for the different types of data collection. Here’s the taxonomy of data collection that she lays out:
- Continuous data collection (non-stop & persistent recording)
- Bystander data collection (relational dynamics of recording other people)
- Portable data collection (the mobility & ease of recording anywhere)
- Inconspicuous data collection (notification & consent norms around capturing video or spatial context)
- Rich data collection (the geographic context & situational awareness)
- Aggregate data collection (combining information from third-party sources)
- Public data exposure (associating public data with individuals in a real-time context)
Dick says that the combination of the real-time, portable, aggregate, and persistent nature of data recording may create a new context requiring either new social norms or new laws.
I wanted to talk with Dick about her take on XR privacy, why she sees the need for a US federal privacy law, some of the concerns around government surveillance and the Third-Party Doctrine, and how biometrically-inferred data should be a key part of the broader discussion about a comprehensive approach to privacy. She calls this data “computed data,” while Brittan Heller refers to it as biometric psychographic data.
Dick is not as concerned about the near-term risks of making inferences from physiological or biometric data in XR, and cautions against a “privacy panic” that catalyzes reactionary technology policy and leads to technologies being banned. I guess I’m on the other side, having a reasonable amount of privacy panic, considering that technology policy analyst Adam Kovacevich has estimated the odds of a federal privacy law passing at 0-10% for the more controversial sticking points, or around 60% if the Democrats compromise on the private right of action clause.
Dick says that the ITIF follows the innovation principle, which is to not overregulate in advance for harms that may or may not happen. Creating laws too early has the potential to stifle innovation, to not have the intended effect, or to quickly go out of date. Dick recommends soft law, self-regulation, and trade organizations as the first step until policy gaps can be more clearly identified. The end result is that the most likely and default position is to take no pre-emptive action on these privacy concerns around XR, which will likely leave us trying to rein things back in once they’ve gone too far.
Dick seems to have a lot of faith that companies will not go too far with tracking our data and serving ads that could lead towards significant behavioral modification. For me, the more pragmatic assumption is that companies like Facebook will continue to aggregate as much data as possible to track our attention and behaviors, creating an asymmetry of power when it comes to delivering targeted advertising.
Overall, the ITIF generally takes a pretty conservative approach to new technology policy, suggesting that we either wait and see or rely upon self-regulation and consensual approaches. Dick and I had a spirited debate on the topic of XR privacy, and in the end we agree on the need for a new U.S. federal privacy law. I think we’d also agree that we need the right amount of urgency to make it a public policy priority, without sliding into a reactionary panic that leads to technology policy banning certain immersive technologies.
In the next episode, I’ll be diving into how philosopher Helen Nissenbaum defines privacy as appropriate information flows within a given context in her contextual integrity theory of privacy. She also argues that the notice-and-consent model of privacy is broken, and her contextual integrity approach may provide more viable and robust solutions for ensuring users have more transparency into how their data are being used.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series of looking at some of the ethical and privacy implications of virtual and augmented reality, today's episode is going to be doing a deep dive into the tech policy perspective. So we're talking to Ellysse Dick, who is a policy analyst for the Information Technology and Innovation Foundation. So, who is the ITIF? They're an independent, non-profit, non-partisan research center that is looking at tech policy. They get a bunch of funding from lots of different tech companies, including Facebook, but they assert that they have editorial independence. They're really focusing on a number of different goals. If you read their About page, their number one goal is to seek innovation, because they believe that whenever you have innovation, that's going to lead towards a significant increase in per capita income. They also believe that disruptive innovation almost always leads towards economic and social progress, and that they have a considered faith in markets and businesses of all sizes. They believe in deftly tailoring laws and regulations to achieve their intended purposes in a rapidly evolving economy. Deftly means with as little intervention as possible. In other words, trying to be very specific and strategic with all the different policy interventions that you have. So in other words, they kind of have more of a libertarian take on things. They want the market to do its own thing. They don't want the government to set policy too soon. And they want to just take a step back and kind of see how things evolve on their own and rely upon a lot of soft law or self-regulation or these industry standards, and then identify the different tech policy gaps and then step in and say, OK, this is where we need to have a little bit more legislation.
So when they do stand up and say, we need legislation on this, you know that a certain bar has been met: that in order to really have that level of innovation, there do need to be ways in which the public interest is protected. So their take on privacy is that they do want to see some sort of federal privacy law, but it's also very conservative. If I were to pick the perspective out there that I'd imagine is closest to what Facebook themselves would be saying, then I'd imagine there's a lot of parity between the types of arguments the ITIF is giving and what Facebook themselves would be saying. That said, they're also an independent entity, and they may actually have some different takes and different nuances around how they're articulating some of these things where Facebook themselves would maybe have some different opinions. But more or less, I think it's safe to say that this is really prioritizing the technological innovation above all else. Now, when it comes to privacy, by letting all of these different innovations go forward, there are all these risks to privacy. So what do we do in order to actually protect that? So that's a lot of the heart of what I try to get to with Ellysse Dick, unpacking a lot of the different policy papers and then trying to see, OK, what are the recommendations and how do we progress from here? So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Ellysse happened on Tuesday, June 8th, 2021. So with that, let's go ahead and dive right in.
[00:03:13.767] Ellysse Dick: My name is Ellysse Dick. I'm a policy analyst at the Information Technology and Innovation Foundation, where I lead our work on augmented reality and virtual reality and public policy. So looking at the ways that these new technologies are going to be influenced by policy and vice versa. I sort of came into AR/VR, I think, much like most people, accidentally. My background is actually more in the digital rights space. Lots of content moderation work and self-regulation of platforms. And I started to realize that AR/VR is really where the next stage of that conversation is going to be and the next stage of communication is going to be. And I saw an opportunity to work on a technology that hasn't figured out the kinks yet, but still has some time to. And now I'm here and talking with people like you and trying to figure out how we can make it work this time.
[00:04:04.992] Kent Bye: Yeah, well, over the past year, you've written quite a lot of different policy-related pieces for the Information Technology and Innovation Foundation, the ITIF. So maybe you could give me a bit more context as to what the Information Technology and Innovation Foundation is, how they're related to this overall ecosystem, and what their take might be.
[00:04:25.158] Ellysse Dick: Yeah, ITIF is a tech policy think tank based here in Washington, D.C. So what we really do is connect the innovation world with the policy world and help policymakers understand what's happening in technology and technologists and developers understand what's happening in the policy realm. So I see myself and my colleagues as sort of a conduit to facilitate these conversations and help people understand how all these things are related, how public policy could impact the AR, VR industry and other technology industries, and also vice versa, how these technologies can enhance government, enhance policy.
[00:05:01.014] Kent Bye: Great. Well, the first time that the ITIF really came onto my radar was during Facebook Connect back in October of 2020. There was the announcement of Project Aria, and they actually pointed off to the ITIF, where there was some information about Project Aria and about some of these different privacy implications. And so maybe you could describe a little bit more of this relationship between Facebook, Facebook Reality Labs, and the ITIF.
[00:05:26.434] Ellysse Dick: Sure. So ITIF works with a lot of different industry actors, Facebook Reality Labs being one of them. It's part of our job to be part of that conduit and to do the informing on both sides. With Facebook Reality Labs specifically, we do participate in their stakeholder conversations and we give feedback and provide our reports and other information back to them. But we aren't actually part of Project Aria. I just want to make that clear. We've just commented on and helped to shape how we're thinking about these things.
[00:05:56.786] Kent Bye: One of the things that was striking to me was that there was the announcement of Project Aria, where they were essentially going to be doing a social science research project by putting these augmented reality glasses that are doing egocentric data capture on employees, who as employees can consent to that, but they were sending them out into public to figure out what the ethical implications were. Kind of like a move-fast-and-break-things approach when it comes to privacy. I talked to social scientists Sally Applin and Catherine Flick, who had a lot of issues with the way in which this type of research project was being conducted. And the thing that was striking to me was that Facebook, in some ways, was looking at some of the different ethical implications of these AR glasses. I mean, I imagine that they took a look at what was happening with Google Glass and said, hey, you know, there was a lot of public resistance from social norms being violated in different ways. So they want to be ahead of that conversation around the privacy issues with these technologies that shift different aspects of the social norms. But at the same time, they released an RFP, a request for proposals, for other researchers to take a look at this issue, and in some ways outsourced it. They were kind of like, we haven't figured this out, but let's hand the baton over to the larger research community and have them kind of figure it out. But we're going to continue on forward. And in terms of responsible innovation principles, there's a little concern: okay, is this already going to happen? And is this just a check-boxing exercise where they're going through the motions of looking at these different ethical implications, versus really figuring some of this stuff out before they dive into it?
Now, in terms of innovation and ethics and all this stuff, this is a little bit of a chicken and egg problem where you kind of have to like create it in order to like evaluate it. So without doing it, then you can't know it. So there's a little bit of that iteration that does need to happen, but maybe you could talk about from your perspective, how you start to enter into this larger discussion and debate.
[00:07:47.273] Ellysse Dick: Yeah, I mean, look, if AR glasses are coming, which I think at this point they are, the train has left the station, there has to be a way to figure out how to do them right. And I do think that companies finding ways to look into that is good, and it's a step in the right direction. It's better than just releasing them out into the world, into the consumer market with no research into this, without even thinking about what these bystander policies might look like, what wearer policies might look like, and how the technology is going to work. I don't think there's necessarily a problem with bringing in the research community either. I think this is a broad conversation. There are a lot of important questions. Whether companies choose to reach out to the research community and to academics is going to impact how they develop their products. They might not take every recommendation. They might choose to go on their own path. But by getting that consultation, at least they're going forward with a level of knowledge that they wouldn't have otherwise. And, you know, with Project Aria, they did start with, like you said, Facebook employees on Facebook campuses. And it's just a question of, you know, at what scale do you start it? Do you just start with a person in a room and one other person who consents to be a bystander? That's not really going to work. So I do think as we iterate the technology, we also have to iterate how we research and how we look into questions of ethics and privacy. And I think Project Aria is a good start. I think there's certainly room for adjustments, but there's going to have to be innovation in the research as well. Yeah.
[00:09:13.872] Kent Bye: Well, one of the things that's, I guess, as I take a look at some of these issues, both with augmented reality but also with virtual reality, in terms of the type of data that are starting to become available, there's a little bit of trying to look at our existing models and concepts of privacy and seeing whether or not they're robust enough to be able to handle this new technology: whether we just kind of continue doing the same thing with the existing models that we already have, or we need a different approach given some of the different threats and potential harms that could be done. And as I've been talking to different people, there's a range of perspectives, everything from, you know, the libertarian approach of just treating this as a property right, that our data and our privacy are something that we can essentially buy, sell, and trade, given through these adhesion contracts or terms of service or privacy policies that we sign to give our consent over this information. And on the other extreme, you have maybe a little bit more of a paternalistic approach saying that, hey, this is something that should be a human right. We should be treating this data and our privacy like an organ that you can't really buy, sell, and trade. So we need to have the government step in and protect our rights to privacy, because it's a fundamental human right and people aren't informed enough to be able to make these decisions for themselves. And then there's maybe a middle-of-the-road position, which is Helen Nissenbaum's contextual integrity theory of privacy, which is that it's all depending upon the context. And it all depends on the appropriate flow of how this information is flowing and what's happening to this data. And so I'm curious where you fall in this debate, in terms of privacy and how we should think about it, especially with these emerging technologies.
[00:10:39.169] Ellysse Dick: So there's a couple of things that I think about when I think about privacy. First of all, is the contextual nature of privacy, right? I may be more comfortable disclosing information about myself to a close group of friends than to you or to my employer or to the world at large. So even at the individual level, there is a certain comfort with sharing information about yourself, whether that be sensitive information or just simply identifying information. I also think about the changing nature of privacy. As you know, I've written about how concepts of privacy and personal space have evolved over time as digital technologies have done the same. And the same is true as we go into this new phase. So there are going to be social shifts in perceptions of reasonable expectations of privacy and what we do and do not want to share with the world that will change as the technology available changes the way that we communicate and that we share information about ourselves. But I do think the context aspect is probably the most important part to think about as healthcare information is going to be different from my social media information, but also the fact that digital technologies do let us, when we are given the ability, do let us decide what we want to share to an extent. Obviously, user privacy controls aren't everything, but I think there's a lot of unique opportunity to really decide how we present ourselves to different groups of people when we're talking about digital communication specifically.
[00:12:02.990] Kent Bye: Yeah, I do agree on the contextual dimension, especially as a lot of the existing laws that are made are very granular in terms of dictating what you can and cannot do with, say, medical data or education data or data around children, data around video rentals. There are all these sort of specific contexts, where the GDPR maybe takes more of a human rights approach and has a higher-level framework that tries to apply across all these different contexts. I guess the thing that I run into, let's say very recently, there was Rafael Yuste of Columbia's NeuroRights Initiative, who, in collaboration with Facebook Reality Labs, held a whole day-long symposium and conference with lots of different neuroscience researchers, looking at the implications of neurotechnology. Rafael Yuste has looked at this issue as a neuroscientist, knowing the degree to which we can start to decode information that's in our brain. We're on this technological roadmap where, over time, we're going to get better and better technologies that are going to be consumer grade. There are going to be these brain-computer interfaces that we put on our heads, and they're essentially able to kind of read our mind, or at least decode our thoughts in working memory. And given that we have this roadmap towards neurotechnologies, in combination with XR and VR that has this type of intimate data, what Rafael Yuste is saying is that it threatens to start to undermine our right to mental privacy, our right to identity, our right to agency, our right to have access to this data, but also to be free from algorithmic bias. But I think the three main points are identity, agency, and mental privacy: that there are new threats coming from this technology that are different than what we've had before. And I'm not quite sure if the notice-and-consent model that we have right now is really going to be robust enough to handle that.
And Rafael Yuste is saying we need neurorights, a set of human rights principles, maybe passed and added to the UN Declaration of Human Rights, but potentially then feeding into our federal laws. But I'm curious, as you look at the implications of this technology and where this is all going, with essentially mind-reading technology as we move forward, whether or not our existing concepts of privacy are going to be robust enough to handle the potential harms that could come from where all this is going?
[00:14:06.130] Ellysse Dick: Yeah, I think that one of the big issues with privacy, when we talk about privacy harms, is that I think we should always center that around questions of personal autonomy. And so I do think that that neural data, that BCI data, is bringing in new questions about potential harms to personal autonomy, to mental wellbeing, and these kinds of things that can have really tangible effects on people. So I don't think we need to necessarily change our concept of privacy, because I do think that idea of personal autonomy is still there. But I do think we need to think about what that means in terms of the information that we reveal about ourselves or share with third parties. So I do think there need to be some kind of safeguards around the collection of BCI data and the use of that data. And, you know, you can only inform users to an extent what that's going to do. And we have to make it very clear what information you're providing and how it is being stored, collected, and used, and work within those parameters. We don't want people to find out 10 years down the road that their BCI data is being used for something that they don't believe they consented to. So we do definitely have to think really hard about what those safeguards are going to be, as well as what user information and consent looks like, and user transparency, because it's going to be different.
[00:15:20.090] Kent Bye: Well, I would argue there are actually two pretty significant paradigm shifts that we're going through. One is the shift from the existing model of privacy, thinking about controlling information, towards the appropriate flows of information that are contextually dependent. That's sort of one paradigm shift. And the second paradigm shift, I'd say, is what Brittan Heller has termed biometric psychography, and what you've talked about in terms of observed data versus computed data. So collecting this data about ourselves, kind of the raw biometrics and physiological data, then being able to make inferences out of that and being able to make judgments about someone's personality, their character, what's happening in the state of their body. And again, there are no existing laws right now that are really covering any of this computed data or biometric psychographic data that is being inferred. So those are the two main things: both not being able to really have control over the data, but also the shift into the real-time processing and the biometric psychography that goes away from just thinking about privacy in terms of identity.
[00:16:18.555] Ellysse Dick: Absolutely. And I've argued, as Brittan has, that we should be differentiating biometric identifiable data from biometrically derived data in our law and policy, and making sure that we clarify that biometric privacy includes that inferred data. I don't want people to find out stuff about my sensitive personal life based on my eye tracking information or emotion information. And that can have real privacy harms on people, even if it has nothing to do with biometric identification. So I think that you make a great point that that computed data, that biometric information that is inferring things about you as a person that you did not disclose to a third party, really needs to be closely looked at, especially as we're developing more privacy laws here in the United States. If we are not defining biometric information to include that, and talking specifically about, you know, what rights we do have around that data, it's going to get dicey.
[00:17:14.027] Kent Bye: Well, one of the things I'm confused about with the ITIF is that the end result is that we shouldn't stifle innovation; it's the Information Technology and Innovation Foundation. So you're really preferencing innovation, but yet I'm not seeing any laws that are actually implementing anything that you just said in terms of how we take care of any of this stuff, because that in some ways would require new laws and would potentially stifle innovation. So how do you resolve this? I totally agree with what you're saying, but some of the conclusions that the ITIF and some of your papers are coming to are the exact opposite, which is that we shouldn't regulate anything. We shouldn't do anything to stifle innovation.
[00:17:48.563] Ellysse Dick: So we follow in our policy approach, the innovation principle, which basically says that we recognize that there are trade-offs, but we don't want to over-regulate in advance for potential harms in the future that may or may not occur. We want to balance the ability to innovate and to build on these technologies with the trade-offs of privacy and security and risks of harm. So I wouldn't say we're not calling for no regulation. What we're saying is we don't want to regulate right out the gate and A, make regulations that might be completely obsolete a few years down the line because this technology is moving so quickly. If you come out with a laundry list of certain types of data or certain types of technology that you're going to regulate or ban, they might not even be relevant five, 10 years from now. So you're going to end up with gaps if you do that. Second of all, we don't want to regulate for things that we might want to iterate on, we might want to build on. So we do want to consider harms. We want to talk about the sensitive data and the real potential for harm that exists in this space. But we want to allow there to be space for policymakers, for developers, and for the organizations that might want to use these technologies to really explore the possibilities. And we don't want to limit that right out of the gate.
[00:19:02.995] Kent Bye: Yeah, I was at an artificial intelligence conference, the International Joint Conference on Artificial Intelligence. It was in Stockholm, and they had a debate talking about a similar issue in terms of AI: when you deploy it, whether or not the AI is going to potentially bring more harm, versus needing to deploy it in order to learn, and then you have to iterate and adapt. In some situations, you have to put it out there and adapt. In other situations, if there's a loss of life involved, you want to really make sure before you put it out into the world that you're not going to cause something that could have people lose their lives. So there are different scales depending on the context of the application. If it's something critical enough that it could lead to the loss of life, then you have a lot higher standards, whereas other things you're just putting out into the world. And I think what I see generally in terms of Facebook as a company and what's happening with social media is that there's been a little bit of really preferencing innovation without thinking about the unintended consequences of what happens when this stuff goes to scale, right? And then you have things like Cambridge Analytica, you have things like the potential of democracy being hacked, or the genocide in Myanmar because there are people spreading misinformation and fake news, but also inciting violence against people, having content moderation issues. And so you have all this stuff with the negative side effects of networked social media at scale. And now, as we move into the next iteration, there are a lot of people that are thinking, hey, maybe we should think about some of these ethical implications and put in a little bit more guardrails before we start to put this mind-reading technology and information out in the world without any sort of oversight or any way to prevent it.
And then hope for the best, and hope that people who have bad intentions don't misuse it. Which I think is where we're at in terms of looking at the existing harms that have happened through these types of networked social media at scale. What's the approach as we move forward to really make sure we have those guardrails, without just taking a step back and having this technology pacing gap, which Thomas Metzinger talks about: that the technology is about 10 to 15 to 20 years ahead of where the policy is at in terms of even conceptually understanding it, let alone trying to figure out how to make some sort of policy that modulates and puts on some sort of safeguard. So where's the middle ground? How do we balance this so that we don't sleepwalk into a dystopia?
[00:21:15.805] Ellysse Dick: Absolutely. We need guardrails. That's for sure. My approach to this is that we don't necessarily need guardrails specific to this technology every single time. So one of the things I've called for, because we're so early in this innovation process, is just reviewing existing laws and providing guidance based on existing laws, just as a starting point to understand them. We don't really even understand how our current laws apply to AR/VR, because we don't really have broad case law on this topic, or any specific legislation, or really any concrete examples of the application of certain privacy and anti-discrimination laws, et cetera, to this technology. So, you know, much like technology, policy should be iterative. So I think coming out of the gate with a strong policy banning certain technologies that aren't even really in use yet is not where we want to be. We want to start looking at how current policies apply. So that's the first step. From there, I think we do need to look at where technology is heading in terms of personal data, in terms of civil rights, everything else we're thinking of, and really think about how we can apply policies that are more technology-neutral, or policies that address the actual harms that can occur, instead of, say, banning a specific technology or only allowing a certain use of the technology or mandating one specific thing. Because like I said, once you do that, by the time a policy comes in place, it might not even be relevant anymore. So I think we need to be looking at this in terms of harms and what we envision those ethical concerns being, and not necessarily the technology itself. Because doing it that way, it's going to slow innovation, but it's also going to leave a lot of gaps down the road.
[00:23:04.153] Kent Bye: Well, I think part of the problem with the way that a lot of these tech policy laws are made is that oftentimes they don't have sunset clauses, or, once they're made, they're basically there forever. And if you make a bad decision, then it could actually bring more harm, or do something that's around regulatory capture, which is that it actually prevents smaller players from being able to enter into this larger ecosystem. So I think there's a risk there. If you jump in and make laws too early, you could actually unduly benefit the biggest players, which is, I think, a strong possibility that you have to take into consideration. And I know that Facebook put out a white paper in July of 2020, where they were saying that we need to have maybe some sort of way to rapidly prototype public policies, where we bring together small clusters of subject matter experts, but also maybe deploy things in these small contexts and have different laws that are regulating different aspects. But this whole aspect of rapid iteration doesn't really apply very well to how laws are made. You basically have to do all or nothing and get bipartisan consensus. In terms of privacy, everybody wants a privacy law, but there are all these little nuanced things, like the private right of action, or to what degree the federal law is going to supersede the state laws. So you have all these different nuances that basically can kill anything from happening. But yet, what's your vision for how to actually take a more iterative approach to these things? Because the way that the political systems are, with all the polarization and just the slowness and the misunderstanding, it seems like, like I said, five, 10, 20 years behind.
And to really put in some of these guardrails, it doesn't feel like the existing political system as it stands today, at least in the United States, is really fit to be able to have some of these new models. So how do you get there? How do you do a little bit more of an iterative approach on this stuff?
[00:24:52.325] Ellysse Dick: So first, I think that soft law has a really big role to play here: soft law and industry standards and self-regulation. Obviously that's not regulation, it's not laws, but it does take a more iterative approach, and it gives us a chance to talk about what's technically feasible, what different stakeholder groups are looking for in terms of privacy protections and civil rights using these technologies, and to develop maybe a preliminary framework to build off of. And that's something that ultimately can inform policy. If it works, that can inform regulation. And if it doesn't work, or if there are flaws in soft law approaches, those really identify the regulatory gaps that we have and where we do need more rules and laws at a more formalized level. So I think that approach is a really good place to start, because we don't really know where the policy gaps are yet. So we have to identify those. Once you have clear policy gaps, it is easier to create regulations and legislation around some of these issues. There obviously will be debates as to the best approach. But if you have a clear objective, it's easier than just saying privacy should be protected, and then everyone has a different idea about what that even means.
[00:26:03.618] Kent Bye: All right. Well, I'd love to dive into some of your papers and some of the different aspects that you've been digging into. But first, before I do that, I'm really curious about your process, because you've written on a variety of different topics, everything from bystander privacy to privacy in general, to diversity, equity, and inclusion, and to innovation principles and how VR could be used in different contexts for innovation. So how does that usually work? Do you get an assignment from the ITIF? Does Facebook come to ITIF and say, hey, it would be helpful to talk about this? Or do you have your own research agenda? And then how do you go about actually getting ramped up and covering the existing public policy debates, and making sure that you're able to give an overview of what the current landscape is, but also what your personal take might be?
[00:26:50.821] Ellysse Dick: Sure, so first I want to be clear that ITIF is an independent research organization, so we're not taking research direction from anyone. I lead our AR/VR workstream, so I generally come up with the research agenda and then work with my colleagues who are working in other related areas of tech policy to flesh out what our different pieces are going to look like. To do that, I usually keep my finger on the pulse of the broader technology debate, because I think what's really important is that we bring AR/VR into these broader conversations that are happening. Everything that people are talking about right now, intermediary liability, privacy, intellectual property, broadband connections, all of this relates back to AR/VR, but rarely makes it into the conversation. So I really want to make sure that I'm bringing that piece in and filling that gap. And from there, I talk to people. It's a great community, the AR/VR network, and people always have amazing ideas and an incredible passion for these issues. So I try to talk to as many people as I can in my brief research period. I just released a paper on diversity, equity, and inclusion, which is based largely on stakeholder roundtables with people with lived experiences and expertise in inclusive AR/VR. So I think that bringing in the different stakeholder perspectives is probably one of the more important parts of my work. And then going forward, I hope that more questions will come up. As I do this more, I'm getting more feedback from the community about what other people would like to see. I'm certainly taking that into account, and hopefully these technologies will make it into broader policy conversations and I can keep nudging them forward.
[00:28:27.332] Kent Bye: Okay, great. So I want to dig in, and there are a number of different papers that talk about privacy, including bystander privacy as well as other aspects of privacy in VR. But I'm just curious: we talked about the context-dependent nature of it, but if you were to just define privacy, do you have a formal definition for how you conceive of it?
[00:28:47.278] Ellysse Dick: I really think of privacy differently depending on the context, but generally speaking, I would say it's an individual's ability to protect their personal autonomy through the information that they provide or that is inferred about them. When I'm thinking specifically about AR/VR, that's the sort of lens that I'm coming from.
[00:29:05.534] Kent Bye: Okay. Because I think there's an aspect here of bystander privacy that also gets into different aspects of public versus private, as well as what happens with the information that is going to these companies versus information that is going to governments. When I talked to Sarah Downey, she talked about how the Fourth Amendment defines what is private and what is public: enclosed spaces and different ways that they define where you have a reasonable expectation of privacy. There's the third-party doctrine of the Fourth Amendment, which essentially says that any data given to a third party has no reasonable expectation of remaining private, and so the government has the ability to get that information without a warrant. And so the risk that I see, at least for some of this stuff in terms of public versus private, and AR especially, is that you have these devices that are going around in these already public spaces, but where people still have some level of reasonable expectation of privacy. Something does change when you have these camera devices: if everybody's wearing them, you essentially have CCTV surveillance cameras, and that sort of changes the dynamic of what kind of privacy we have in these public spaces. But also, it's not just the public having it, it's these private corporations that are having all this data that you may not have consented to. So there's the level of consent from these bystanders, but there's also this level of the third-party doctrine, meaning that you're aggregating all this information, and if you're recording it, then that means that the government can have access to it. So let's maybe start with the government aspect, because I think about the level of abuse that could happen even in the United States.
I'd be curious to hear your thoughts on this difference between information that is stored by a private corporation versus the implications of that getting into the hands of state or local or federal governments.
[00:30:47.746] Ellysse Dick: Yeah, so one of the things that I've thought about, just to build from the ground up here, is government use of AR devices, especially if we're talking about AR devices that can overlay real-time information. And I think that changes the reasonable expectation of privacy, because even if they're pulling from publicly available information, it can reveal computed data, sensitive data about an individual or an area that wouldn't otherwise be immediately available to them. So I think that raises some interesting questions about what investigative tools can be used using AR capabilities. I think there needs to be a review of how law enforcement specifically is using AR devices, whether law enforcement officers are wearing devices or using AR capabilities on mobile phones or whatever else. So there's that. Government access to that data is also really important. In the United States, we do have quite a bit of case law building up around Fourth Amendment protections of digital media and metadata. I think that is going to continue, and eventually we will probably see some cases around either using someone's own device data against them or using data from other devices. Again, I would like to see more guidance around that before we get to the legal challenge phase, but I imagine we'll see something around that. In terms of the third-party doctrine, I think that is one of the biggest areas where the reasonable expectation of privacy could just completely shift. You know, this is all so speculative, because these devices aren't widespread yet and we don't know exactly how people are going to use them and what their capabilities are going to be. But this is a lot of information that could become available to law enforcement, again in the U.S. context, which is my specialty, but in other countries as well, where there may be fewer constitutional restrictions on the information governments gather.
I think that companies also need to start thinking about what their responses and policies are going to be around government requests for user data. Much like they have for social media, we need to start thinking now about how companies are going to handle that, because obviously we can't regulate how they handle that, but it's something that certainly needs to be thought about sooner rather than later.
[00:33:05.030] Kent Bye: So in one of your articles here, about how to address privacy questions raised by the expansion of augmented reality in public spaces, published back on December 14th of 2020, in response to some of this bystander privacy, you have this taxonomy, which I think is very interesting. And I'd love to dig into this a little bit more as to how you think about each of these different types of data. You talk about continuous data collection, bystander data collection, portable data collection, inconspicuous data collection, rich data collection, aggregate data collection, and public data exposure. So I'd love to maybe go through this a little bit and have you explain this taxonomy of all these different types of data collection when it comes to augmented reality.
[00:33:46.527] Ellysse Dick: Yeah. So actually, the point of this list originally was to show that all the technologies that AR devices, even very advanced AR devices, do or will use already exist in some other form. What makes AR unique is that it combines all of these in one single technology. So when we're talking about regulation and legal challenges and all of that, a lot of the laws and rules that already apply to some of these other technologies could also apply to AR. So that's just something to keep in mind when we're going through the list. Continuous data collection is that persistent recording that AR devices can do. If you have wearable glasses, for example, they are going to be always on, right? They're going to be continually bringing in and collecting data. Bystander data collection builds off of that to say, you know, if they're collecting visual data or even spatial data about a user's surroundings, they're probably going to be collecting information about bystanders as well. The extent of that information obviously depends on what the device is doing. What we have right now, with police body cams, is bystander data collection. But perhaps in the future we'll have AR devices that are gathering more information than just audiovisual. We have portable data collection, which basically just means that you can bring them anywhere. It's not stationed in one spot. It's not on a pole somewhere. It's not on a big TV screen. It's with you at all times. And then it's also inconspicuous. So that means that with most AR, when we're talking about wearables especially, you don't know that it's necessarily recording unless there's some sort of indicator. So that obviously raises some concerns about consent to recording information. There's also rich data collection. So that's what I was talking about before. It might not just be audiovisual.
It could use spatial sensors, GPS information, and other sensors to gather data that is not just pictures of your surroundings, but actually really rich data about where you are, what's around you, and perhaps what bystanders are doing as well. And then there's also aggregate data collection. So that means that it can collect data from a bunch of different sources. Maybe that just means it's collecting data from your various social media profiles, but maybe it also means that it's collecting data from a lot of other sources, including, which is the last one, public data exposure. So if there is publicly available information, such as your political contributions or your gun registrations, that information, while publicly available, when it's collected, perhaps in aggregate, in an AR situation, could reveal more information about you in real time. So I think that persistent, aggregate, and real-time aspect is really what makes AR so different from the other technologies that I listed in this report as examples.
[00:36:35.603] Kent Bye: Awesome. Yeah. Thanks for that. That's really helpful, because I do agree that a lot of these are happening, but then you're adding them all together at the same time. The things that I didn't see in there that I would add are the things that Facebook was talking about in terms of this egocentric data capture, which is both the capture of what you see from your perspective, but also your eye tracking information and what you're looking at in the world. So there's all this other biometric data that may be coming from the individuals, but also other people's biometric information in terms of what they're feeling, being able to detect their body temperature and their facial expressions. And Boz had made this comment internally about, oh, well, maybe we should do facial recognition on augmented reality; it'd be really helpful for people with prosopagnosia who can't recognize faces. But at the same time, there are two implications. One is identifying people in ways that may bring undue harm against them, in terms of being targeted for harassment or abuse, or marginalized communities that may find that more harm is brought to them if they're identified with these devices, or misidentified by these devices. But there's also this whole aspect of being able to start to identify people at all. Once you identify who those people are, then you're all of a sudden putting them in a location or place and being able to extrapolate all sorts of other information about those people. There's this whole idea of bystander consent, whether or not they're consenting to having all this tracking done on them, especially with facial recognition. And going back to this third-party doctrine: if they're being identified and that's recorded on a server, then that's something that the government could get access to, whereas previously there wasn't any of that information.
And so I think there are a couple of things in terms of the identifiability of these people. If you are identifying them and all the biometrics, both from the individual but also the collective, and you are fusing all those things together, then even though there are not existing laws covering this, when you do combine all this stuff together, I would argue, yeah, it does make a difference. And it does change all these different social norms. And I don't think that the existing laws would necessarily be robust enough to prevent some of these harms that could be done, both from the perspective of what the company is doing with this data (going back to contextual integrity, the appropriate flow of the information, in terms of people having a reasonable expectation for what data may be radiating and what happens to that information), but also getting back to the third-party doctrine: we're making all sorts of data available to the government that previously wasn't available to the government. And how do we make sure that somebody has some sort of choice in terms of how this all plays out?
[00:39:03.449] Ellysse Dick: Yeah, I think this is one of the things that I talked about in my other paper about user privacy, but it applies to bystander privacy as well. Each of these individual data points isn't necessarily a reason for concern until you add in multiple, including identifying information. Like I talked about in my paper, even a bank account number doesn't mean anything until you associate it with someone and use it to hack their account. Each individual data point isn't necessarily a harm in and of itself. It's the different ways these can be combined, and potentially maliciously misused or used by government and law enforcement, that really need to be looked at. I really do think, I mean, you keep drilling down on it for a reason, I think you're right: the third-party doctrine is going to be really sticky with this. And I'm curious to see if it holds up with these technologies, because if you have enough devices out there that are recording the world, you could theoretically go to a place and gather all the data that existed at a specific time from a bunch of devices, or from a single person's egocentric data collection. You know, that raises real questions about Fourth Amendment rights. Like I said, I think something's going to come up eventually on that, and we're going to have to rethink how we think about that for sure.
[00:40:24.263] Kent Bye: Well, that was one of my big complaints in terms of, you know, doing this as a social science experiment with Project Aria, just going out and recording all this stuff without really fully being aware of the third-party doctrine of the Fourth Amendment. There was total blindness to that. It was like, oh, we're going to figure out the ethical implications. You don't have to do it to find out; I can tell you there are ethical implications in terms of recording all this data and saving it to a server. There are going to be these Fourth Amendment implications. And, you know, this is an ongoing debate around the Fourth Amendment, especially with the Carpenter case, in terms of maybe not having just a blanket rule that any data in any context that goes to a third party is automatically something the government can get access to without a warrant. But as it stands right now, any data that is on the internet or on the web or on any third-party server, according to the government, is public data, even though it may have private contexts like email or whatnot. The government can still go and get access to it, except for some exclusions carved out by some of these different cases, like the Carpenter case with cell phone location data. You know, it gets into these appropriate flows and what we reasonably expect in terms of how this data is going to be made available and to whom. We need this data in order for some of this technology to work. But at the same time, we have this translation of all these digital technologies into this massive surveillance state that is justifying larger levels of government surveillance. And I think that, for me, until we have some reimagining and reinterpretation of the third-party doctrine of the Fourth Amendment, all of these technologies are just going to accelerate this crisis of privacy and the relationship between these governments and this data.
There doesn't seem to be a deep understanding of this stuff, even from these companies, who are kind of like, we're going to move fast and break things, not realizing how they're breaking the Fourth Amendment.
[00:42:03.033] Ellysse Dick: Yeah, I mean, I really think there's a role for policy in this. I think that this is when policymakers ask, what should we be thinking about when it comes to AR, VR technologies? I think that this government use of the data and law enforcement use of the data should be one of the first things that come up on the list because it is government and it is something that we can provide guidance on or we can regulate and legislate around how we use this data and how it's collected. So I think that that's a really important entry point for policymakers as well, not just companies.
[00:42:34.640] Kent Bye: So you have another paper that goes into some of the privacy issues, Balancing User Privacy and Innovation in Augmented and Virtual Reality, published March 4th, 2021. And you have another taxonomy there, of the different types of user information that's collected in AR and VR: observable data, observed data, computed data, and associated data. Maybe you could go into how you're breaking down each of these different types of data.
[00:43:00.664] Ellysse Dick: Yeah. So this is actually a taxonomy that ITIF has used before in relation to informational injury and privacy harms, so I used it for this paper as well. We break it down into four different types of data. There's observable data, which is data that can be observed and replicated by third parties. That would be something like a photo. There's observed data, which is information that you provide or generate, which third parties can observe but not necessarily replicate. That would be something like biographical information. And then there's computed data, which is what we've really been talking about here, which combines observable and observed data into new information. So this is not user-generated or user-provided information. It is inferred information that is used to further the service. Probably one of the best examples we've used in the past, for non-AR/VR applications, is an advertising profile based on your browsing and social media history, right? That's computed data, inferring information about your interests, your personality, and your demographics that you haven't necessarily provided. And then finally, there's associated data. This is data that on its own, like what I was talking about before, a bank account number, a library card number, with no context doesn't tell you anything about the person. Even an address, if you have no contextual information about the layout of the city or how to get there, doesn't actually provide new information about a person. So this data becomes dangerous or potentially harmful when it's combined with those other three types of data, with something that identifies the user or makes it possible for malicious actors to cause harm, like providing a username and a password, which alone couldn't necessarily tell you anything about a user, but combined could be used to hack their accounts or steal money from them or commit fraud.
When we're talking about AR/VR, all of these are really important, not just the computed data, but also the observable and observed information that users are providing, because that's ultimately what feeds into the computed information. That's things like how you present yourself in virtual space. What does your avatar look like? It's things like, are you providing accurate demographic information about yourself? But it's also your movements. It's your eye tracking data. It's the raw data that's gathered by all of those sensors. Those fall into the observable and observed categories, and they're then used to generate the computed information. And what makes computed information different is that the harms that come from it are not necessarily about malicious misuse, but about how it is used. Because generally speaking, the only party with access to your computed data is the third party that is generating it, unless that information is hacked or otherwise accessed without authorization.
[00:45:46.337] Kent Bye: Yeah, I think generally, at least when you look at privacy law and how these companies look at some of this information, it's either personally identifiable information or it's de-identified data. They don't necessarily have a class for treating this other data. It either identifies you or it doesn't, and even if it can identify you through AI, it's still sort of treated as de-identified data. But I do think that there is this new class of information that you're calling computed data, and that Brittan Heller is calling biometric psychography, which is this inferred data that comes from all this information. So I do think that there's PII and a lot of the stuff that is handled around identity, but this computed data and biometrically inferred data does seem like it's a new class that's not covered in any existing laws. So how do you propose that we should be handling this type of computed data or biometrically inferred data?
[00:46:37.265] Ellysse Dick: Well, I think, as I've said before, we need to start with the potential harms from them. So I always start from a place of personal autonomy harms. For example, say an employer is using an AR training application for screening job applicants, and it reveals to the employer that an applicant has a disability or belongs to another class protected against discrimination, and they become subject to discrimination. Now, we do have anti-discrimination laws in place that forbid that from happening, but it's still a violation of the user's autonomy over their ability to disclose that information about themselves. So, you know, the first thing I want to think about is how we can put in guardrails for how that computed data is generated and accessed. Should there be guidance for employers about how they can derive data about applicants? Should there be guidance for educators about what information they can gather about students, or for employers about what information they can gather about their workers? So I think that's a really important place to start. I do think, as you've said, we need to separate this biometrically derived data from biometrically identifying data, for a couple of reasons. First, they're not the same, right? Technically, biometrically derived data without identifying information doesn't tell you anything about me personally; I have to combine it with the identifying information. But also, when used for practical purposes, not for malicious purposes, they're used for very different reasons. Biometrically identifying data tells you who I am. Biometrically derived data tells you what my interests are, or where I'm positioned in my room if I'm using a VR headset. So if you regulate biometric data as a whole and don't think about the different applications of biometric data, you're either going to preclude the use of certain types of data, or you're going to leave major policy gaps around certain types of data.
My main recommendation here is this: I do support a national privacy law, and I think if there is biometric data language in that law, there needs to be a distinction between biometrically derived data and biometrically identifying data, because they serve two fundamentally different purposes.
[00:48:49.750] Kent Bye: So I think generally I agree with what you're saying. The caveat here, though, is that anybody that's in virtual reality, especially with Facebook, has to use a Facebook user ID. They have your IP address, they have your location, they have all this other identifying information. So it is going to be potentially correlated, depending on how they save it. But I think the point here is that a lot of this biometric psychographic information is going to be correlated to your identity. I mean, this is what I complain about with the existing paradigms of privacy: it's all around identity. And so if you assume that it's stored on a server and it's just hanging out there, not connected back to an identity, then yeah, that's one thing, in terms of doing aggregate work like federated learning or homomorphic encryption to train AI datasets. You know, there's the risk of that getting leaked out onto the internet and then getting connected back to identifying information, so that something you may have thought was anonymous all of a sudden reveals very specific identifying information. So that is one use case. But there's this whole other dimension: in the context of Facebook itself, you know, they're moving towards this contextually aware AI, but also all this information that is driving these real-time inferences. And because they're coming up with this map and model that's essentially creating this digital twin, there are then risks of this model of ourselves being used to manipulate or control us by subtly changing our environment.
So I think, getting back to human rights, you have the human right to identity, agency, and privacy. The aggregation of all this biometric psychographic data over time, with a more robust model in all these different contexts, has the potential to start to undermine different aspects of our mental privacy, our identity, and our agency. And so I understand that it should be treated differently, but how do you prevent these potential harms to our identity, agency, and mental privacy from happening?
[00:50:39.938] Ellysse Dick: Well, I think you're right that it does come down to the uses, right? It's not about the data that's collected; it's about how it's used. I don't want to leap too far ahead into a distant future where we imagine the kind of manipulation that you're talking about, because I disagree that that's where we're headed. I know others will fight me on that. I think that what we need to look at right now is the collection of this data. We know that there can be harms to people from discrimination, from harassment and abuse. We know that it can reveal potentially sensitive information or undermine their personal autonomy and their ability to disclose information about themselves, and that when certain computed information is correlated, like you said, with identity, that can have real-world implications. I think that when we're talking about BCI, which seems to be where we're heading in this conversation, it's too early now to be regulating against certain uses of it. It's barely on the market as it is. And again, I'll go back to soft law approaches. I think that we need to start small and just start talking about what kind of guidance we need, what kind of standards we need, and what best practices are. From there, we can find where there are policy gaps or real areas that need to be regulated. But BCI is way too early on now to start banning certain potential uses that are well down the line and that haven't manifested yet.
[00:52:10.251] Kent Bye: I guess my reaction to that is that at the IEEE VR conference earlier this year, in 2021, there was a research paper from Facebook Reality Labs that was looking at how to potentially extrapolate eye gaze information by only looking at hand pose and head pose. Now, they did it with eye tracking information in a very specific virtual environment: they were able to see what people were looking at and how they were moving their hands and their head. But then they were able to train AI to extrapolate that same information by only having access to the head pose and the hand pose. So what does this mean? Well, it means there's a trajectory towards living in a world where, even if they don't have eye tracking within the VR headset, they're going to be able to extrapolate and infer that based upon just what your hands are doing and what your head's doing. So you don't necessarily even need to have these biometric markers. And so what does that give you? Well, once you have your attention within a virtual world, you're able to potentially pay attention to other things. And I do think there are going to be lots of different biometric markers for some of this information. And as I've looked at the psychographic information, there are certain things about your context and your identity and other things that are immutable.
But I think with these real-time biometric inferences, I'm just going to read through a list here of things that we can already start to see happening with just VR alone: our behaviors, our intentions, our actions, our movements. And when we get into BCIs and other ways to dig into this, we have our mental thoughts and our cognitive processes, where cognitive load may be able to be extrapolated; our social presence within environments and how we're relating to other people; looking at our face and our emotions and extrapolating our affective state, our emotional sentiment, our facial expressions, and our micro expressions. And then finally, when I look at the body and different aspects of the body: extrapolating things about our stress and arousal rates, our physiological reactions to things, our eye gaze and attention, our body language and muscle fatigue. So this is just one way of looking at where all of this information is going. And I think the thing that is concerning is that there is no audit trail in terms of whether or not the information that is coming out is appropriate, because it's basically consent where you sign over in the terms of service, and then they have all access to this data and can do whatever they want with it. I think that's the issue: the way things are set up now, there's nothing preventing all this information from getting into the hands of anybody who owns the technology, and there's no oversight. There's no way for users to say whether or not they're actually really consenting to where this data goes, because we have no ownership over our data. It's sort of a digital colonization model where we sign over the terms of service and the data is handed over, but we have no way of knowing whether it's appropriate or not, or even what context this information is going to be going into, not only for Facebook, but also for third parties.
[00:54:55.398] Ellysse Dick: I mean, this is why I think a national privacy law is so important. It's not going to solve all of the issues and the concerns, but it would at least create a baseline of privacy expectations among companies and among users. And I think that until we have that, it's going to be so difficult to answer a lot of these other questions, because we won't have that baseline to work off of. So I really think, as we're looking at these new technologies coming up, a national privacy law has got to be on the table. We have to figure out the differences in opinion around privacy and find some kind of middle ground to have national privacy legislation on the books as these new technologies come up. I'd also just say, I caution against the privacy panic when it comes to these technologies. The unknown is scary, and there is a lot that is possible. I don't necessarily think that because it's possible, it's going to happen at the scale and in the dystopic vision that some people want to put forward. Obviously, we have to have guardrails, and we have to look around corners and make sure that we're not running into massive potential for harm. But I don't want to stop these things out of the gate just because we're worried about the long-term potential misuse of this information. I think I've said it before, but we have to wait and see where this evolution goes a little bit more before we start talking about regulation around it.
[00:56:19.192] Kent Bye: I guess the dystopic panic comes from observing the past behaviors of major companies like Google or Facebook, and the fact that there is this existing model of aggregating all this data on us. Over time, it's gotten to the point where, if people really knew how bad it already is... just wait until we open up the floodgates to all this other information that's only going to make it worse. A lot of the stuff that we give over in our relationship to technology has some level of conscious awareness, in terms of knowing that we're giving over the data. We're entering it and typing it on a keyboard. We're giving information: there's text and words, but there's also lots of behavioral information that's already being tracked. I guess the issue is that there's going to be an exponential amount of unconscious biometric data that we don't even know we're radiating that's going to be made available. People, in terms of informed consent, are not going to be really informed, and it's only going to get worse. I guess I'm looking at it from that pessimistic panic position, because that's a pragmatic observation of where we're at, where people seem to have a lot of apathy around this issue. They seem, on the surface, happy to give over and mortgage their privacy for access. But I think, you know, when we look at larger collective consequences in terms of information warfare and hacked elections... I don't know. You talk about the federal privacy law, but I just saw someone do an analysis saying that the odds of this actually passing are somewhere between zero and ten percent, because there's so much focus on antitrust. Privacy is really like the third or fourth legislative priority when it comes to how our government is going to relate to these technology companies. They're really focused on antitrust.
They're not really thinking about privacy, and we're probably not going to have a federal omnibus approach to privacy for another year or two, maybe, but it's not happening in the short term. And so I guess for me, I'm looking at all this and saying, okay, I can already see where this is going, and it's not going in a good direction.
[00:58:12.962] Ellysse Dick: I mean, I think in addition to, you know, guardrails, like we've been saying, we also need to have touch points. And I'll be honest, I don't have a complete vision of what that looks like, but there needs to be an opportunity to revisit, whether that's sort of a multi-stakeholder working group situation or more formalized legislation. We need to have the ability to look at how the technology is evolving and reassess whether our approaches are correct. And that's why I think, you know, more iterative approaches to policy are really important, because the technology could shift and go in a completely different direction than what we expect to see. There could be another massive global event that shifts the way that we use communications technologies and consequently changes the way that we use AR and VR. So I think we need to recognize the iterative nature of technology, especially when we're talking about communications technologies, and have the ability to go back and look at that. I do think that, look, there are going to be data collection issues. And without a national privacy law, there's just not a lot of legal framework to work from. And so it's a challenging question, but I just really don't want to see us banning certain types of technology. I guess that's my concern: that the privacy panic leads to banning, which, in my opinion, is not the right solution for any of these technologies or any of this type of data collection.
[01:00:29.774] Ellysse Dick: I mean, I definitely think users need to have a clear set of known rights over their data. They need to know what they do and don't own. I do think for data that is collected by third parties, especially when we start talking about this biometric data and this computed data, more transparency and a really straightforward understanding of where the information goes and where it stops is going to be really critical, as well as a better understanding of what it's used for. I think a lot of the concern, especially around things like BCI data, but also biometrically derived information about individuals, is that it's going to be used in a way that people did not believe they consented to. And so I think we really need to look at the informed consent model we have right now. We need to look at how we need to change that model, or what we need to add in addition to it, to make that data collection and use really clear. When it comes to other information, I'm not as focused on that area, but when it comes to biometric information, especially information revealing things about an individual that they didn't provide, I think there need to be some clear parameters around that.
[01:01:43.975] Kent Bye: What do you think about this debate and argument that happens around how notice and consent doesn't protect our privacy? Helen Nissenbaum talks about the transparency paradox, which is that you have these privacy policies, and in order to really dive deep into the nuances, they would get way too complicated. So there's this overview of stuff, but then it's kind of vague, and people don't really know exactly what's happening. And so it doesn't seem like notice and consent as a model for protecting privacy is really actually working. Do you agree with that? Or do you think that it is working and that it will continue to work, even when we add even more information about our biometric and physiological data?
[01:02:20.006] Ellysse Dick: I mean, I do think notice and consent is a good starting point. It certainly has its issues, but I do think it is at least a good entry point to figure out how we're dealing with users' data. I do think that, as we're talking about especially more immersive data, we need to find ways to improve on that model or, like I said, find another way to gain that consent. But given the way that privacy is set up, especially in this country, I don't see us completely changing the notice and consent model. And, you know, you can debate whether that is a good model or a bad one, but it's what's in place. And I just don't see companies completely overhauling their approach to user transparency and choice. Instead, I see them adding more transparency and user privacy controls, but maintaining the notice and consent model as a baseline.
[01:03:12.488] Kent Bye: I'm actually going to be talking to Helen Nissenbaum, who has the theory of contextual integrity of privacy, in about a half hour after I finish talking to you. I'll be asking her about this. I think she's got a whole different paradigm in terms of appropriate flows that gets away from this controlled access of information. I think she has a lot of complaints. I don't know how much of her approach would be translatable into laws that would be a viable alternative, but I'll be very curious to hear both from Matt as well as how things are evolving in terms of the new implications of this technology. But as we start to wind down here, I want to also just talk a little bit about some of this, because you just released three big papers on diversity, inclusion, and equity. Maybe you can talk about some of the big takeaways that you had from these three new papers that you just released within the last week or so.
[01:03:58.625] Ellysse Dick: Yeah, absolutely. So I just released a series on diversity, equity, and inclusion in AR/VR. It's a three-part series. The first report looks at the opportunities, so the ways that AR/VR can enhance diversity and inclusion efforts, as well as open up new opportunities for vulnerable, marginalized, and underserved communities. The second one looks at some of the risks and challenges to vulnerable and marginalized communities, because I do think it's critical that we weigh both of those. And then the third makes some recommendations, particularly for policymakers, but I think a lot of them also apply to developers and civil society and implementing organizations who might be considering using these technologies. And my main argument behind this series is that if we don't make AR/VR accessible and equitable, it's not going to get adopted to the levels that a lot of enthusiasts hope it will. Just from a pure legal compliance level, if your technology isn't accessible, it's not going to be able to be used in workplaces or in certain government applications. And if it's not reaching a broad user base, you're just going to have one single demographic able to use this technology going forward. So, you know, diversity, equity, inclusion: it's a nice thing, it's great to have, but it's also critical for innovation and for adoption going forward. So some of the key takeaways that I have from this series: first, and this one is the obvious one, any of these efforts have to integrate the lived experiences of people in these communities and people who will be using these technologies. That includes people with disabilities, but also people from marginalized communities like LGBT communities, communities of color, people with less access to broadband internet, people who maybe are in different educational systems, people who are at different levels of employment. All of these communities need to be included in this conversation.
Another takeaway is that we need to have a holistic approach to inclusion. That means that inclusion isn't just making sure that people with disabilities can use your device, or making sure that people who have different physical needs can use it, or that people are able to represent themselves accurately in virtual space. It means making sure that inclusion also looks at non-AR/VR options. AR/VR is not going to be the best answer for everyone, and making sure that we have inclusive and equitable spaces that are not virtual is just as important. So those are two of the really big takeaways. And accessibility, obviously, is the last big one. Accessibility means accessible devices, but it also means making sure that we consider different means of access, recognizing that not everyone can wear a headset and not everyone has access to a headset. So having WebXR options and mobile device options, in addition to those non-virtual options. And then I made several recommendations for policymakers, and I won't dive into all of them because there are many. But the general idea is that I do think government needs to be focusing on this and investing in this. I would love to see more government investment around accessible technology, and also around this question of whether the empathy capacity of AR/VR is useful in terms of implicit bias training and all of that. So I would love to see more government focus on this, and I would love to see government start implementing inclusive technologies as an early adopter, because I think there's a great opportunity there as well.
[01:07:19.583] Kent Bye: Yeah, I do think that there's a level of having an emerging technology and getting it to work for a critical mass of the early adopters and the innovators that are in that cluster. But as you expand, you definitely have to realize how it doesn't work for everybody. There's always going to be a fundamental incompleteness: you can't serve everybody's needs immediately, but you can definitely actively seek out these marginalized or underrepresented communities to make sure that the designs are being as accessible as possible. It has definitely been one of those things with emerging technology that it doesn't always come first, but we're at the phase of maturity where I think it is definitely very important. So I think that's an important aspect.
[01:07:59.792] Ellysse Dick: I think we are at the stage now where, if we get too far past it, it becomes the "oh, we should have thought of that" stage. You know, now is really the time to be thinking about some of these inclusion issues. And, you know, when we're talking about the vast possibilities of AR/VR devices to serve as assistive technologies, to bring people closer together who would otherwise face social isolation, these already vulnerable communities can really benefit from these technologies, but not if the technologies aren't accessible and secure and easy for them to use. So I think you can't consider the vast possibilities without also thinking about the risks and challenges that exist.
[01:08:36.663] Kent Bye: Yeah, one of the things that I've found, even looking at some of these different ethical issues, is that there are trade-offs between these different things in terms of who you're serving. And as you make these engineering decisions, they may be biased towards one of these communities or not. And when we look at the human rights approach from the NeuroRights Initiative, with Rafael Yuste and his co-authors from the Morningside Group, they have five different neurorights: the right to identity, agency, and mental privacy, and also the right to have equitable access to the technology, as well as to be free from algorithmic bias. So right there, embedded within those five, you have ways in which marginalized communities may be suffering from algorithmic bias, as well as those who may not have the resources for equitable access. Because there's a moral implication: if you have technology that's able to do neuromodulation, or to improve your consciousness in different ways, and that technology isn't equitable or accessible to everybody, then you may be accelerating the digital divide that we already have. And I think what's interesting, looking at Facebook's approach, is that on one hand they really are prioritizing, above all else, making the technology affordable to make it as equitable as they can. But there may be other trade-offs with privacy, identity, and agency. Other companies like Apple may be prioritizing other things in terms of mental privacy, but their technology is very expensive. And so you have different market dynamics happening there. But I guess there is a moral argument for what Facebook is doing, and they actually do bring up that they're doing what they do because they want a billion people in VR. That's one of their stated goals.
Now, my question is whether or not the trade-offs to identity, agency, and mental privacy are worth that goal of getting that many people in at this accelerated rate, which may bring other unintended consequences for society because we're sacrificing and mortgaging our privacy in different ways. So I think, you know, with these ethical issues there are trade-offs, and I do see that some of these trade-offs may be funding the ability for some of these companies to make the technology more equitable. And I would just caution that as we start to talk about these things, we should ask how we bring those trade-offs into question, and whether or not we are making a judgment as a society as to whether some of these decisions we're making are worth it in terms of the potential harms.
[01:10:59.228] Ellysse Dick: And that's why I think that first principle of bringing in people with lived experience is so important, because people are going to raise some trade-offs that we might not have thought of ourselves. You know, certain privacy concerns that they might have that we wouldn't otherwise have, or safety concerns or accessibility concerns. And maybe that level of trade-off is different for different groups of people. So I think starting that conversation with the people who are going to be most impacted by these technologies, not just on the developer side but also on the policy side, is really important. That's why we have to start there.
[01:11:34.202] Kent Bye: Well, we've talked about harms. Is that something you're also looking into in terms of trying to create a taxonomy of some of these potential harms when it comes to virtual and augmented reality? Because it seems like if we are ever going to have any policy in the future, we have to get some sense of the landscape of potential harms that are coming from this.
[01:11:50.879] Ellysse Dick: Yeah, I mean, if you look at my body of work over the last year, that's sort of what I'm trying to do. Maybe not formally as a taxonomy, but just starting to build out this landscape of harms. My target audience is the policy side. So, you know, I know a lot of you and a lot of your listeners already know all of these things, but I'm really trying to contextualize it in the broader policy discussions so that we can understand how these potential harms, and also the opportunities that these technologies present, feed into other policy discussions and policy priorities. So I'm moving forward on other topics in the next year or so, and hopefully building out that discussion.
[01:12:28.153] Kent Bye: Okay. So yeah, as we start to wrap up here, I'm curious for you, what type of experiences do you want to have in either virtual or augmented reality?
[01:12:36.900] Ellysse Dick: Oh, so many, uh, I do currently have virtual check-ins with my boss in VR. We have our weekly meetings and that's already so fun. I can't wait until VR workplaces become more usable and less like you have to hack your workspace every time you try to log on. I'm really looking forward to VR collaboration tools. I do think AR is super exciting. I would probably get AR glasses if they come out and they're good. I think that, you know, the consumer potential for this technology is so interesting, and I'm just really excited. My background is really in communications technologies, and I'm just so excited to see how these technologies change the way we talk to each other, and I will probably dive in on every opportunity to use that.
[01:13:23.630] Kent Bye: Great. And, uh, and finally, what do you think the ultimate potential of virtual and augmented reality might be and what it might be able to enable?
[01:13:33.626] Ellysse Dick: I mean, the thing that I talk about most is the ability to remove barriers from physical distance and space. I don't think we're going to be living in a fully virtual world. I think that's not going to happen. But I do think that that transformative ability of the way that we communicate and collaborate has so much potential. I think it's going to increase economic opportunities by making virtual collaboration so much easier and not requiring people to relocate. It's going to really make workplaces more engaging and efficient. It's going to make government services more accessible to people. I would love to see, you know, VR and AR based service provision in public service. You know, I really think that collaborative potential is what makes these technologies so exciting. And I'm really excited to see what comes up from here.
[01:14:19.987] Kent Bye: Great. Is there anything else that's been left unsaid that you'd like to say to the immersive community?
[01:14:27.292] Ellysse Dick: I mean, I say this at most of the talks that I do, but, you know, I am really working on translating issues into policy. So my email is public on the ITIF website. I am on all the social media. If there are issues that you think need to be coming up that haven't yet, that's what I'm here to do. And I'd love to keep driving the conversation into the policy space.
[01:14:49.004] Kent Bye: Yeah, and just a quick follow-up on that. Have you had these discussions with different lawmakers? And, you know, what's the response at the policy level in DC? Are people paying attention? I know there's the Reality Caucus, but, you know, this work that you're doing, how's it being received by Congress?
[01:15:03.803] Ellysse Dick: I mean, there's definitely more interest right now, I would say, in sort of the federal agencies, right? Your NSF and your NIST and, you know, those areas. Slowly but surely, I think it's working into policy discussions. And that's why I said earlier, you know, it's really important that we're using these broader tech policy discussions, so things like privacy, as touch points to bring AR/VR into the conversation. I would like to see it recognized more, acknowledged more, in some of these tech policy conversations. And that's what I'm working to do.
[01:15:31.876] Kent Bye: Awesome. Well, I'd highly recommend people go onto the ITIF website, click on your name on one of your articles, and look at all these other articles that we've been talking about. I'll probably end up actually linking to a lot of these that you've written. Of all the different people that I've found, at least, you're the most prolific policy writer when it comes to all these XR issues. And number two is probably Joe Jerome, who I know has been writing on different stuff, but you've been really digging into a broad range of different topics. And I just really enjoyed being able to sit down and kind of unpack some of these with you today. So thanks for taking the time to not only dig into all these issues, but to come onto the podcast and chat about it. So yeah, thanks for joining me today.
[01:16:08.533] Ellysse Dick: Yeah, this was great. Thanks, Kent.
[01:16:10.634] Kent Bye: So that was Ellysse Dick. She's a policy analyst at the Information Technology and Innovation Foundation. So I have a number of different takeaways about this interview. First of all, as I listen back to this conversation, like I said at the top, I imagine when I'm having these conversations that I'm talking to Facebook themselves, because a lot of the different arguments that Ellysse is giving are sort of the same line of argument that I would expect to hear Facebook making. When you read through the different aspects of the privacy law and the nuances of what they want, there is a debate around the federal privacy law. They want to have a federal privacy law that actually preempts all the other state laws, in some sense, because they want to consolidate all the decisions down into one law rather than having 50 state laws with different nuances, where in California you have to do this, in Illinois you have to do this, in Washington state you have to have these restrictions. If there's just one omnibus federal privacy law that takes care of it for everyone, then you don't have this fragmented, geolocated approach to privacy within the United States. And if anybody wants to do business in the United States, then they just have one stop: this is the law that you have to follow. It's sort of analogous to the GDPR in that sense, where there's just one approach that's more globally applied across all these various different contexts. So I guess one of the takeaways I have from this conversation is that, you know, we do need guardrails for a lot of this stuff. However, again and again and again, the theme that I heard was, well, you could just look at the existing technologies, see where it's similar, and then just follow the existing things that are already there. So, in other words, do nothing.
In a lot of these different cases, I mean, we do need a federal privacy law, but even the suggestions are, hey, maybe we should wait for a little bit and see how things play out. And she was really cautioning against this privacy panic. And I will cop to that: there is a bit of panic that I have around where the state of everything is at, just because I'm observing all the behavior that's already happening in these existing models. And I see that there's step-by-step, incremental progress towards a world where there's going to be all this physiological and biometric data that is trying to extrapolate our attention, what we're paying attention to, what we like, what we dislike. Being able to extrapolate your attention is a big part of where we're going, because we're in an attention economy. And I was really quite surprised to hear Ellysse be somewhat skeptical that there would be a desire for these companies to start to integrate all of this tracking technology into the existing surveillance capitalism models that are there. Facebook has explicitly said that they're creating contextually aware AI, which in some ways is creating a deeper context to be able to do all this tracking. If we're able to do computing that is aware of our context, then we can perhaps have seamless integrations with the world around us with augmented reality, but also in these VR environments. So there's a way in which there's an experiential element, but there's also a whole element of how that data is going to be used to model our attention and come up with these different models of our biometric psychography, to then make decisions on what ads to serve us. So I was surprised to hear her say, "I don't want to leap too far ahead into a distant future where we imagine the type of manipulation that you're talking about."
I don't know if we have to imagine too far into the future, because I think that's already happening. There's already an enormous amount of tracking on us, and as we move into these new media, it seems obvious to me that they're just going to continue to look at new ways of tracking our attention and new ways of taking these existing models and porting them over. I mean, just this week there have already been announcements that they're starting to do that with advertising within Facebook and within virtual reality. You know, the whole model for Facebook is that they're essentially an ad company that creates these social experiences that provide enough value to get enough context about people, as they pull in all sorts of other aggregate information and data and create these really sophisticated models of us. So I think that's already happening. And, you know, I may be wrong; maybe this is not going to happen, and suddenly there's been a big paradigm shift within the company and they've found new economic models. But, you know, even with the apps that you pay for, they're going to have advertising within that context. So they're starting with the app store model, where you buy the application, but even within that context we're still going to have the opportunity to have ads. And when that context is created, there are going to be new incentives to do all this additional tracking. So she's constantly against the privacy panic. And I guess in some ways I'm advocating for us to have a little bit more panic, just because this is happening and nobody seems to be doing anything about it. And the federal privacy law is such a low priority within the legislative agenda that this is not even on the radar for what's happening in Congress right now.
So in order for it to actually be addressed and for us to have a federal privacy law, we need to have the public and the constituents talking to the different legislators, really informing them and keeping them up to date, and providing some alternatives other than just, like, let's do nothing and let the market decide what's going to happen here. So, I mean, this is the dilemma: whether or not we do need to have some sort of intervention from the government. Because from the perspective of ITIF, you don't want to have too much panic. That panic drives decisions that are made too hastily, and then you start to prevent technologies from developing and evolving without having other ways of mitigating some of these different potential harms and potential concerns. Let's sit back and wait and see how things play out: that seemed to be a theme again and again and again. But there was this interesting aspect of taking a look at existing systems and then seeing how some of these technologies are similar. And I think this taxonomy that Ellysse was coming up with had a number of different things, where it was the aggregation of all these things together that made it new, specifically the persistent, aggregate, and real-time nature of all this data. So there was the continuous data: nonstop collection of data, 24/7, all the time, this persistent data recording. Then the bystander data collection: looking at the relationality of how we are gathering information about the people around us, which is a whole thing with the persistent recording and the people that are around us.
Then you have portable data collection: the devices are on us, wearable technologies that are recording all the time. Then there's inconspicuous data collection. Usually we have notifications like the click of a camera shutter, but even those get taken away over time; there used to be a recording light when you were recording, even on phones, and that's been taken away. So when you have these wearable devices, do we need some way of signaling to the people around us whether we're recording, or live-streaming everything that's happening in real time? Some sort of indication that recording and transmission is taking place. Then you have rich data collection: geolocated information and situational awareness of the place around you and the people who are there. Aggregate data collection is when you start pulling in all sorts of other information and tying it together in combination: everything that's already coming from, say, our Facebook account, which pulls in our mortgage information, consumer buying information, or anything you can get from data aggregators, all coming in together. And as our identity is tied to it, that's going to be fed in as well. And then there's public exposure: public data about financial donations you may have made, or other information on the public record, that is then associated with you in a real-time context. When you add all these together, the continuous and persistent data, the bystander data, the portable data, the inconspicuous data, the rich data, the aggregate data, and the public data, then you have perhaps something that's genuinely new. It goes above and beyond what's covered in all of these existing laws. So that is something to be considered.
And then there's the third-party doctrine. If all of this is being recorded and stored, you essentially have a distributed CCTV surveillance system, and given how the third-party doctrine of the Fourth Amendment is currently interpreted, there are some serious things that need to be addressed there as well. Then there are the four types of data. There's observable data, which is externally visible and can be replicated. There's observed data, which can't be replicated and is more like personally identifiable data; it's a bit confusing to call it observed versus observable, so I'd just describe observed data as information tied to your identity that can't be replicated. Then there's computed data, the information that's inferred from all of that, which is what Brittan Heller calls biometric psychographic data. And finally there's associated data, which doesn't mean much on its own without context, but which lets you infer additional information once it's combined with other data sources. So again, I think whatever new federal privacy law comes along needs to address this computed data and biometric psychographic data, because that, I think, is the new paradigm of real-time processing. Even if you're doing real-time processing and not storing the raw data feeds, the concern is less about the data getting into the wrong hands and more about the types of inferences they're able to make in real time, whether those are good or bad inferences, and how much they're going to be treated as gospel if the government wants access to them. If Facebook has made some computed assertion about who we are and what our personality is, is that going to be enough evidence to hold against us if someone decides to target individual citizens for whatever reason?
So that's an example of how some of this could be abused. But also, as they gather all this psychographic information, what counts as an appropriate flow of information? Helen Nissenbaum's contextual integrity theory of privacy says that an appropriate flow of information depends on the context. Here there's a stripping out of that context, and a lack of ability for us to control any of this information. I think that's a big concern that I don't really have a good solution to. The notice-and-consent model, I don't think, is really a good model; it's what we have, and at least Dick says it's a good place to start. But in terms of appropriate flows, under the existing model there's no way of actually verifying whether a flow is appropriate or not. The adhesion contracts, privacy policies, and terms of service are so vaguely written that companies can pretty much do whatever they want, and there's no recourse for us to opt out of anything. "Provide the controls that matter" is what Facebook has said, but there are a lot of controls that they say don't matter that might matter to us, and a lot of ways in which we're not going to be able to control the nuances of how this information plays out. So I guess an overall takeaway is that if we do absolutely nothing, a lot of what was discussed in this conversation is how things are going to play out: innovation is going to be preferenced, there are going to be few or no guardrails, and only when it's too late will anyone try to step in and reel things back. At the end of the day, it's about balancing innovation against either dealing with harms that are already arising or preventing harms that may or may not happen and could be more speculative. That's a harder problem.
It's easier to just say let's do nothing, declare the existing laws sufficient, and maybe hope for a unified federal privacy law. It would be great to have that, but there's so much partisan polarization over the private right of action, and over whether state laws should continue to exist or whether a federal law would preempt them, that those two issues alone are basically derailing any possibility of a viable federal privacy law here in the United States. So to lean on a federal privacy law as the solution, when the larger political landscape shows we're so far away from that feasibly happening, means a lot of work would need to happen within the larger culture to push this forward. Which is why, I guess, I'm preferencing this privacy panic position relative to ITIF's position of, hey, we should just trust them not to take this in a really dark direction. Like I said, they could be completely right, and I could be overreacting. But I think it's at least worth talking about this and trying to see what could potentially be done. I'll be talking to a few other perspectives. I'll be talking to Helen Nissenbaum, one of the founders of the contextual integrity theory of privacy, to get a different take on notice and consent, but also on appropriate flows of information. When it comes down to it, again and again I come back to Nissenbaum's contextual integrity, because it is about appropriate flows. And how do you ensure that a flow is appropriate?
And with these new contexts that are emerging, how do we make sure that users of virtual and augmented reality can see that these flows of information are appropriate, and that they're able to consent or exercise a bit more targeted control? Some of the operating system changes that Apple has implemented really emphasize this aspect of contextual integrity: when I'm using the app, sure, you can do whatever you want, but when I'm not using the app, don't be tracking me all the time. I think that's a decision most people are making. With that, there's an attempt to cut down on the kind of surveillance that intrudes into our lives and potentially undermines aspects of our autonomy, agency, and identity when other people are asserting who we are. Our right to identity is in some ways in contradiction with this drive to build psychographic profiles of who we are based on our behaviors and actions across these different platforms. Also, the Electronic Frontier Foundation, the EFF, led a whole session at RightsCon, and I'll be talking to four of the lawyers and representatives from EFF to unpack some of the lessons learned there, and to look at this from a human rights perspective. Their perspective is that a lot of privacy harms are actually invisible: you often can't see them, so documenting them becomes a paradox, because the harms that are happening can't necessarily be observed. If there's a privacy breach and you don't end up getting a job because you're discriminated against, there are a lot of ways that can be hidden or occluded, because they just said no and you don't know why; a lot of what happens there is not transparent. There are ways in which privacy breaches and privacy harms can be invisible.
So how do we deal with this? The opposite of the libertarian approach would be the more human rights oriented approach: do we need deeper protections for privacy as a human right within the laws of the United States? So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue delivering this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.