Mozilla’s Diane Hosfelt is the Privacy and Security Lead on Mozilla’s Mixed Reality Team, and I had a chance to sit down with her again two months after our SIGGRAPH panel discussion. Hosfelt has been spending a lot of time writing academic papers looking at privacy on the immersive web, including this piece she posted in May titled “Making ethical decisions for the immersive web.” I sat down with Hosfelt at Mozilla’s View Source conference in Amsterdam on October 1st in order to get some updates on her latest work on helping to define the landscape for privacy on the immersive web.
She talks about some of the legal frameworks for privacy, including some of the cultural differences between privacy law in the United States compared to the UK and other countries around the world. She also talks quite a bit about this concept of “privacy engineering,” which is a relatively new discipline that looks at the intersection between technical architectures, public policy, and the sociological impacts of technology on civil liberties. She shares some of her takeaways from the new 2019 USENIX Conference on Privacy Engineering Practice and Respect that she attended in August, including that privacy engineering is hard: there are no perfect solutions, and it’s an emerging discipline where it’s difficult to connect the dots between technological implementation, sociological impact, and the potential harms caused.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So continuing on in my series on XR ethics and privacy, today I have a conversation with Diane Hosfelt. Diane was on the panel discussion that I had at SIGGRAPH at the end of August, and then about a month later, Mozilla had sent me out to the ViewSource conference. That was their developer relations conference, but they also used the ViewSource conference to bring in lots of different people from the open web, people from the W3C, and it was just a great collection of people that were both working on the open standards, but also thinking about the open web in general. So privacy is a huge topic that is happening right now on the open web, especially with all the surveillance capitalism and how to mitigate against different aspects of being tracked online. And so the browsers are really at the front lines of that battle. So it was a great conference for me to attend and talk to a number of different people, but it was also great just to catch up in a more extended format with Diane Hosfelt, who I originally met at the VR Privacy Summit and did a very brief, like 10 to 15 minute interview with her. 
She was on the panel discussion, and now I just had a chance to sit down with her, because she's been doing a lot of digging into privacy law textbooks to try to figure out the strategy for her own operationalized approach to privacy. She mentions attending this conference on privacy engineering and the difficulty of all these different trade-offs, and what makes privacy engineering so unique: technologists are very used to doing trade-offs on technical specs, but what about these more sociological and cultural aspects, around undermining democracy or different aspects of privacy and civil liberties, mashed up with elements of being able to empower people with their own biometric data? So there are lots of very challenging trade-offs that come with immersive technologies and privacy engineering. And that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Diane happened on Tuesday, October 1st, 2019 at the ViewSource conference in Amsterdam, Netherlands. So with that, let's go ahead and dive right in.
[00:02:16.515] Diane Hosfelt: I'm Diane Hosfelt, and I'm the Privacy and Security Lead for Mozilla Mixed Reality. Right now, my focus has really been talking about mixed reality privacy, security, and ethics. It's a new technology, an emerging technology, and we need to be thinking now about the emerging risks and potential mitigations. However, at the same time, it's a technology that people are actively using and actively developing in. So right now, I'm here at this particular event to talk to developers in particular about how we can create privacy-preserving prototypes. How can we start as we mean to go on?
[00:02:57.830] Kent Bye: Yeah, just coming from Oculus Connect 6, it's really striking to me to see Facebook get up on stage and, just in passing, say, oh yeah, by the way, we're gonna record all of your space and all of the space in the entire world and just make that available, without really talking about the deeper context of the ethical and privacy implications of that very specific feature. So, for me, I think it left a lot of people a little unsettled, not really feeling like Facebook had earned a lot of trust of being good stewards of this immersive future when they're not honestly talking about some of the ethical and privacy implications. And so I was talking to a lot of different people that were kind of unpacking that, because everybody was very excited about the technology and the roadmap of where it's going, but yet there seems to be a little bit of an almost move-fast-and-break-things type of mindset that is still being applied there. So it feels like a threshold is being crossed. And at SIGGRAPH, we were on a panel together talking about those ethical and privacy frameworks to be able to set some guidelines. And it feels like as those thresholds get crossed, then they get defined. And so how do you start to define what those thresholds are and to come up with a framework for privacy?
[00:04:12.402] Diane Hosfelt: It's definitely interesting that you bring up Facebook. I know Oculus Connect just happened, and sadly I wasn't able to make it. But you know, I followed it from afar, and one of the things that really struck me was how, in his keynote, Mark Zuckerberg said something about how, as the ecosystem matures, the privacy and security aspects will also mature. And, you know, that's just not how it works. That's not what happens. What happens is that you have to actively work on these things. It's easy to defer privacy features until later down the line, right? Like, they aren't necessarily money-producing features. Now, should they be? Maybe, maybe not. We don't want privacy to be something that only the wealthy can afford, because we see that in plenty of things. For example, a Washington Post subscription. It costs, I want to say, $20 more for the GDPR-compliant version. So if you're in the EU, you automatically have to pay an upcharge in order to have the more private version of the Washington Post online, right? So it's one of those things that we have to actively work on to make sure that we're integrating privacy in, so that we're being private by design. It's not something that happens without human intervention. It's not something that just matures, right? You have to guide it. You have to work hard at it. And I think it was a missed opportunity, personally. I think that they really could have focused more on, you know, how are we going to help it mature, and what are we going to do? And so with that lacking, it doesn't inspire much more trust for me in Facebook that they're going to do things right, especially if they're going to be owning this legless oasis that they're creating with Horizons, which, by the way, I have some issues with not having legs. I like my legs. My legs are important to me. They get me places. Legs are a miracle, right? Like, we walk upright. 
They're really cool. Don't take them away.
[00:06:31.529] Kent Bye: Yeah, I know that I had a chance to do the demo and also talk to Ken Perlin of NYU. They're working on legs, I think the actual tracking of the legs. So I can understand not showing your own legs, but it's possible to show other people's legs so that you're not just like these floating bodies. But I guess one of the things that I was thinking a lot about, both before and after, was that somebody asked me, what's your wild prediction? I was like, my wild prediction is they come out and say they're going to do a privacy-first architecture, because that would be really wild. That would be really surprising and shocking. But they obviously didn't do that. And when we think about those thresholds of privacy and how to define it and describe it, at SIGGRAPH, we were talking about how, philosophically, it's kind of an open question as to exactly how to define a comprehensive framework for privacy. It's so context-dependent. It changes with culture. But from an engineering and design perspective, you have to collapse the infinite potential into an actual decision, a design decision. And it's usually values-driven: you set up some values to say, these are some principles that we're going to operate around. So is that kind of your approach, to take that values-driven operational approach to trying to come up with some guiding principles to be able to do a privacy-first architecture?
[00:07:50.003] Diane Hosfelt: Well, you know, it's hard, and I hate to say that, but privacy is hard. Even with context and even with principles, it's constant iteration. It's constant re-examining of the trade-offs. You're never done when it comes to privacy. And so I think that it can be very difficult even just to say, hey, we value your privacy, because what does that mean? Do we value your privacy when it comes to personally identifying information? Well, that gets us into the rabbit hole of what is personally identifying information. And so for trying to build a framework for privacy, particularly when we're talking about unique domains like mixed reality, usually my answer is just that it's hard and it's going to require constant iteration. Where I try to start, personally, is kind of inspired by how regulatory agencies look at whether something's a monopoly, right? Is it causing consumer harm, or is the harm to the consumer outweighing the benefits? Because the way this all started was we were creating a bunch of extra data that then Google realized we could feed back in and use to improve people's experience. And then they realized, oh, well, we can also sell this stuff. We can make a lot of money on it, which is why Google's an advertising company. And so at some point in time, all of that was done just to improve people's experiences. And it really was done within the framework of don't be evil. So where was the inflection point where that changed? Maybe it was a change in society where we said, hey, wait, this isn't okay. Or, for companies who rely on advertising and selling data to third parties, was there a change in how they did it or at what scale? You know, we can talk about the impacts of big data and the impacts of applying machine learning ubiquitously to the big data that we've gathered, right? 
And those are all parts of this huge problem that is privacy. So for designing for privacy first, where I like to start is looking at consumer harms, or not even consumer, user harms. Like, what is the potential harm? I grew up with a mom who was a lawyer. And so one of the things that she's pretty iffy about, that I got from my upbringing, is that I always ask myself, well, what's the worst that'll happen? Which, as you can imagine, when you're a teenager, is maybe not the best question to be asking. But you know, I didn't get arrested as a teenager, so it worked out at one level, right? And so when we're talking about what's the worst that can happen, if the worst that can happen is that someone dies, that's a bad thing. And you know, that's a very real possibility. There are some countries, for example, where, if it's known that you are gay, you could die from that. And we know that some of the technologies involved, not just in mixed reality, like tracking across the web, can identify your sexual orientation based on what you search on. And so when we're collecting and selling data that is consequential to people's very existence, it introduces a new dimension. That's not to say that anything less than death is totally fine, but we need to look at what is the harm at the end. It's not just what does this sensor do and what are the risks of this sensor. It's what do all of these things taken together, what is the risk and what is the benefit? Like, what are people getting out of it and what aren't they? And what we really need to do is kind of form this taxonomy of harms, so that we can all talk about it in the same language. And there are different taxonomies for privacy harms. Personally, I'm interested in creating one that's specific to mixed reality, because there are so many unique considerations.
[00:12:15.447] Kent Bye: Yeah, it reminds me of the conversations I've had with behavioral neuroscientist John Burkhart, where he said that in the neuroscience literature, you look at the same literature to be able to predict behavior and then to also control behavior. So there's this threshold between creating a model, some sort of statistical data-driven model, to be able to understand behavior, and then, at some unknown ethical threshold, being able to control that behavior. And talking to Tristan Harris of the Center for Humane Technology, he's talking about how this is really kind of like a fiduciary relationship that we should have, that there's this asymmetry of power. These companies have so much power over us and our psyche, and the way that they're using it could potentially go against our consent, trying to subtly influence us in ways that aren't a full-consent relationship, because they have so much more information than we do. But the company also has their own fiduciary relationship to their shareholders, so they're trying to maximize profit at all costs. Whereas we used to have these contexts where you would have a doctor be able to give you information. You know, we can imagine what it would be like if your doctor was a data broker trying to sell information from your body to an insurance company, and he was getting a cut for getting you kicked off insurance. You wouldn't want to go to a doctor like that. And that's kind of what we have with these companies: we have all this information that they're being the stewards of, but it's not a fiduciary relationship. So they're not looking at our best interest. They're looking at their own best interest for profit.
[00:13:53.713] Diane Hosfelt: Well, it's interesting you bring up that example because, you know, they're already doing that. For example, manufacturers of connected CPAP machines used to treat sleep apnea will send home data about your usage. So if you aren't considered a compliant user, your insurer might not cover your next, again, life-saving device. And so then, back to the specifics of mixed reality, we're collecting a lot of biometrics, a lot of biometrically derived data that can tell us a lot about the health and well-being of people. And it's something where we can't just say, oh, well, we put this technical stop in, so this isn't going to happen. Unfortunately, it's going to happen unless essentially there's regulation to prohibit it. And so I think the really hard part about privacy, especially for engineers (you know, I'm an engineer, right? Like, it's hard for me), is that the solutions aren't necessarily technical. We're used to there being an answer. It might not be the optimal answer, but we're used to answers existing. And a lot of the problems in the privacy space don't have answers. Sometimes they're just a choice between bad and worse. And which one's bad and which one's worse? It depends. And it's especially hard when you have to admit to yourself that there are no solutions to the technical problems you're having. Like, this is existential. We are collecting data. We must collect this data for technical reasons, but we don't want this data to be misused, but we have to have it. So there's clearly not some hard technical stop that we can just code in, like, oh, don't do blah, blah, blah, right? And I think that's really difficult for engineers to accept and to really appreciate: that not everything can be solved with engineering. And that's one of the things that makes privacy engineering so interesting. There's a new USENIX workshop that I went to this year called PEPR. 
It's Privacy Engineering Practice and Respect. P-E-P-R. PEPR. It was a really great workshop, and I learned a lot. But the main theme of it is that privacy engineering is hard.
[00:16:28.885] Kent Bye: So one of the things that I would, I guess, challenge is the idea that they absolutely need to record a lot of the data. I mean, sometimes there are things that they do need, and maybe at low frequencies they can get enough. But I guess the mindset is to grab it, to seize it, take ownership of it, and then maybe you'll figure out a use for it later, and you'll have this goldmine of information. For me, it feels like that's fine if they're this completely safe trust that is never going to get compromised. But I think about 5 to 10 years from now: what happens if 10 years of biometric data that's correlated to different experiences leaks out onto the dark web? How could that information be used, especially if you can go into a VR experience and watch people and maybe look at the different signs of how they're moving, their gaits, their bone structure, their lengths, how they move their body? I mean, there's a lot of information that is not personally identifiable as of right now, but very well could be within just a few years, even if it's somebody who is observing and co-located with you in the same immersive environment being able to scan you, do photogrammetry reconstruction, figure out the different algorithms to maybe match that with leaked data, and then kind of unlock that information, and then who knows what other information you could get out of that. 
So I feel like there's a mindset of wanting to record as much data as you can and then run AI algorithms on it. And on the one hand, there are going to be some absolutely amazing medical applications for that, and I'm sure there's going to be stuff that I actually want. You know, I want to hack my own consciousness, but I don't want to give that power over to Facebook, especially with brain control interfaces reading my thoughts and transcribing what I'm thinking, and having a whole intimate profile of all of my emotional reactions to content, what I'm looking at, what I'm paying attention to, what I find interesting. All that sort of metadata that we're going to get from this biometric data is going to be like this goldmine, these keys to my psyche. And I guess I want to see a self-centered identity, or a way where it could be recorded, but maybe I'm recording it and I'm controlling it. And there are other issues that come up with that. But to me, that seems like a much better solution: if I want to get the power of that data, to be able to own it and not give it over to Facebook or to Google or to any of these companies, not knowing what could be done with it.
[00:18:55.905] Diane Hosfelt: It's really interesting that you bring up what's going to happen in 10 years when there's a breach of all the collected biometric data, because do you remember the 10-year challenge that happened earlier this year, where people posted a picture of themselves from 10 years ago next to one from today? Someone asked me, oh, well, you know, you're a security and privacy person. Do you think that this was all an elaborate ruse by Facebook to study aging, like, to do facial recognition across ages, right? And I was like, no, of course not. They already have that data. And the idea of anything being completely secure is unfortunately ludicrous. Nothing is perfectly secure. I'm just going to say practically nothing is perfectly secure, and theory doesn't work in practice, so none of that "well, theoretically" nonsense. So, talking about how people like to just vacuum up all the data they can get their little hands on and then find a use for it later: our computing abilities today, if you look at a decade ago, if you look at two decades ago, it is amazing how far we've come. Our processing power, our storage capacity, it's mind-boggling. And sometimes I think, you know, maybe we'd have been better off if we hadn't made all this progress, because we wouldn't be able to collect nearly as much data. We wouldn't be able to analyze all of this data at scale. And would we be better off as a society because of this? I don't know, but I think it's a real possibility, particularly since we know that the algorithms we use are biased. You know, if they aren't biased inherently, then they're biased based on their training data. And so instead of using these so-called impartial algorithms to reduce bias in society, all we're doing is making it worse. 
So that's a big concern of mine, especially when we're talking about biometrics and, like you say, gait and all of these things. Is there a particular gait that is, quote unquote, more attractive? Are people with a certain gait going to be promoted more on dating websites or something? Is this going to give people a complex where they start to try to change their gait? We see with things like Snapchat filters that people are going to plastic surgeons with a picture of their filtered face or filtered body, and they want to look like this filter. It's having real societal impacts that I would argue are negative. Sometimes I look at filters of myself and I'm like, damn, she's pretty. And then I look at myself in the mirror and I have to remind myself that I like myself. I like myself without a filter. I don't need to change myself. I can just be me. And so has all of this power, all of this storage capacity, been beneficial? Yes, definitely. But does it have downsides that we haven't fully reckoned with? 100%, and we're just scratching the surface, honestly. Even here, where you and I have gone fairly deep into some of these things before, like at SIGGRAPH, we're just scratching the surface. So I think part of it is, we need to embrace lean data, right? Instead of big data, let's embrace lean data. And that's a step one for things that are more privacy-preserving. Honestly, with new waves of regulation, it's easier and cheaper for companies to store less and collect less, because you never know what's going to happen with evolving definitions of personally identifiable information. Information that you collect today might tomorrow be written in as personally identifiable, and then you have to protect it as such, rightfully so. But that's all expensive. We talk about companies profiting off of this data, but one of the roles of regulation is to kind of balance that out by requiring protection of it. 
That is expensive, data breaches are expensive, and compliance is incredibly expensive. One worry with regulation is always, are we going to be tamping down startups and smaller companies? Are we going to make it something that only the big players can play in? And so I think that embracing lean data helps to solve that. And, you know, I don't know if I've said this to you before, but gaze is my absolute favorite example for everything, because gaze is such a powerful nonverbal communicator. It's a powerful biometric. It's something that we have conscious control over, but there's a huge subconscious component that we don't have any control over. For example, we all know somebody who's just a really good salesman, right? Well, part of that is that they use nonverbal cues to see when you're wavering, see what resonates with you, and, not necessarily in a negative way, manipulate you into buying whatever it is. I've gotten caught by that, and I bought something that I did not need, and did not use, and spent way too much money on, because they were just a really good salesman. Salesperson, I suppose I should say, because on the whole I think women are probably better at reading body language than men. But that's just me. And so gaze is also a really good opportunity for us to work with advertisers to do privacy-preserving advertisement. I think that the future of advertising is actually going to be context-aware versus user-aware, right? Like, right now it's based on, oh, this person bought a toaster. Let's show him a hundred more toasters. No, I just bought a toaster. I do not need another 20 toasters. Quit showing me toasters, right?
[00:25:18.188] Kent Bye: Ideally, if your financial transactions were private, they wouldn't know what you bought. So you can understand why they might have that error.
[00:25:26.664] Diane Hosfelt: Oh, yeah, true. I guess buying something is a bad example. Ideally, everything should be private. But you can see how, if I'm doing some, I don't know, basketball immersive experience, right? We can reveal that, oh, hey, I'm doing a basketball experience. And we're OK with revealing that some anonymous user is in a basketball experience. And then Nike's like, ooh, we've got these awesome new basketball shoes. We want to advertise them. Well, the way that you can tell that I've engaged with an ad like that is if I look at it. So going back to the gaze data, gaze data provides a way to get very accurate engagement information. But going back to that whole nonverbal communication and great salesperson problem, we don't want advertisers to be able to manipulate us into engagement and further things. So what we really need is to build abstracted APIs that only expose a certain level. And that way, people can make money off of the content that they create by integrating context-aware, privacy-preserving advertisements. Because, yes, we need to look into new ways of incentivization on the web, new ways of paying people for their content, right? Creators deserve to be paid, and the way that it's done currently is advertising. And there is a world where we can have good privacy-preserving advertising and where it can coexist with other sorts of financial incentives that people other than me are looking into right now. And so I think it's really interesting that by embracing lean data and abstracted data, instead of just the whole vacuum swoop of everything and then figuring out what we can do with it later, it can actually be really beneficial.
[00:27:25.047] Kent Bye: Yeah, I know that the FTC is the regulating arm for a lot of these privacy issues, and Facebook had a whole violation going back to Cambridge Analytica and a lot of the press that came up around that. I read somewhere that their penalty could have been upwards of like $2 trillion, if you would have calculated it out. They ended up getting a fine of $5 billion, which seems like a lot, but compared to $2 trillion is really just pocket change relative to how much money they make each quarter. But it's still quite a ding. And I feel like, what I sense, and I haven't been able to confirm this for sure, is that there are obligations of that consent decree. I'm not sure what they are. But I've also heard that there are quite a lot of cultural shifts that are happening within the company. But I guess my frustration is that as I go to Oculus Connect 6 and try to put in the request to talk to somebody about biometric data privacy or any of these issues at all, I really haven't got much traction. It's sometimes difficult to engage in some of these conversations. And so I feel like trying to have that dialogue becomes limited if both parties aren't willing to come to the table and actually talk about it. And so for you, it seems like you're taking the approach of trying to do the research and write the academic papers to put that out there, onto the record, to push the conversation forward, to perhaps lead to data that could be used as a basis and foundation of legislation, to have that more legal policy side in there. 
Because like I was talking about before, if a company has a fiduciary responsibility to their shareholders to profit above all else, then who's looking out for the interests of the users unless there's some sort of entity? Like at the Privacy Summit, we talked about an independent review board for privacy, although without any sort of regulatory teeth, I'm skeptical to see how far that could actually go. The other way to actually create a context for real change of behavior would be for the market to decide to eject them. But they're the market leader by a pretty significant amount relative to the quality of the products that they're doing. They do great products. You know, they're really innovating in a lot of ways, but that also gives them the leverage to kind of do what they want, without any recourse to market pressures against this dream of surveillance capitalism that they could achieve with biometric data. So unless they come out and say to the public, we are changing our whole MO on surveillance capitalism, we have a new business model that we're going to be experimenting with, I think people are going to be a little hesitant. But from your side, I'm just curious to hear a little bit more about your strategy of trying to do the academic work to put that information out there so that it could be leveraged in other ways.
[00:30:11.836] Diane Hosfelt: Well, so one question is, is the FTC who we want our regulatory body to be in these things? And, you know, the current answer is yes, because that's all we've got. So we'll take what we can get. But the FTC, like I was mentioning, is really focused on consumer harm. So for us, as people who are interested in privacy, in order to communicate with regulators, we need to speak their language, and we need to teach them our language, and we need to be able to interpret between the technologists and the regulators. Like I said, I grew up with a lawyer mother. She does not have any idea what I do. So one of the things that I've had to work really hard on is, how can I talk about this with someone who's very, very smart and very knowledgeable about her thing, but who has no idea what I talk about? How can I have this two-way conversation where we're talking about the same thing, we're on the same page, coming at it from our different disciplines of expertise? That's part one of it. One thing that we need to be worried about is that we don't want to do preemptive regulation. If we do draconian regulation too quickly, it'll kill the technology. It'll do a few things. First, it'll kill off small companies and reduce market competition, which will in turn empower the bigger players, but it also has the potential to just kill off the technology completely. We saw this happen a couple decades ago when VR first exploded onto the scene. At that point in time, it wasn't preemptive regulation that killed it off, it was that people were getting physically ill from the experiences. But it's very easy to see the parallel: on that side, people physically couldn't handle it, and on this side, companies can't fiduciarily handle it, right? So we need to strike this balance between protecting our users and also creating an environment for the technology to thrive and for creators to be able to monetize. 
And there's a role that regulation plays in this, particularly on the protecting-users side, when we don't want to create models like pay-for-privacy, and we don't have that kind of competition on privacy features. And realistically, is competition on privacy features a good thing? Like everything else, it's a trade-off, right? Because chances are something that's more privacy-preserving will require more money, because they aren't making the money from advertisers, which then means that privacy is the domain of the wealthy, which we don't want, because it's unethical that only the wealthy can afford to have any privacy. And that's where, in my opinion, the role of government comes in: to give everyone some baseline protections, whether you're rich, whether you're poor, whether you're somewhere in between. We all deserve some sort of baseline, right? And we deserve a baseline that is sensible and that experts in the industry can agree with, or rather cannot disagree with wholeheartedly. And so really what I've been focusing on is: how can I bridge that gap? How can I take this technical knowledge and make it accessible to people? How can we do investigations that are meaningful from a technical perspective (like, can we do gait-based fingerprinting? We all think we can, but we haven't done the research yet), and how can we then translate this into something that people from a policy background, whether they be regulators or not, can understand and can take action on? And so that's a really important part for me. And coming from Mozilla, we are very much a values-driven organization, and privacy is one of our number-one values. It's actually number four in the manifesto, but still, it's in the top five.
[00:34:20.701] Kent Bye: I've got a lot of thoughts about that. But one technical question about the gait-based fingerprinting: I know that Jeremy Bailenson at the VR Privacy Summit showed some research saying that that's totally possible already. So isn't that possible already?
[00:34:32.895] Diane Hosfelt: We know that it's possible. We don't know exactly how far we can take it and exactly how. There are tons of different scenarios. Like one, can you link a virtual identity to a real person? Two, can you link a virtual identity to another virtual identity? And like three, can you identify someone from their virtual identity instead of just doing a matching? So we've done the initial work, but there's still iterations to be done on it.
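The matching scenario Hosfelt describes, linking one virtual identity to another, can be pictured as a toy nearest-neighbor match over motion-derived features. This is purely illustrative: the feature choices, the simulated traces, and the function names below are invented for this sketch, not taken from any published fingerprinting study.

```python
import numpy as np

def features(trace):
    """Reduce a motion trace (N x 3 head positions) to a small feature
    vector: mean head height, height variability, mean step energy."""
    heights = trace[:, 1]
    steps = np.diff(trace, axis=0)
    return np.array([
        heights.mean(),
        heights.std(),
        np.linalg.norm(steps, axis=1).mean(),
    ])

def match(query, enrolled):
    """Return the enrolled identity whose feature vector is nearest
    (Euclidean distance) to the query trace's features."""
    q = features(query)
    return min(enrolled, key=lambda name: np.linalg.norm(features(enrolled[name]) - q))

# Two simulated "users" with different head heights and movement scales.
rng = np.random.default_rng(0)
alice = rng.normal([0.0, 1.75, 0.0], 0.02, size=(200, 3))
bob = rng.normal([0.0, 1.60, 0.0], 0.08, size=(200, 3))
enrolled = {"alice": alice, "bob": bob}

# A fresh session from the same "person" as alice links back to her.
new_session = rng.normal([0.0, 1.75, 0.0], 0.02, size=(200, 3))
print(match(new_session, enrolled))  # prints "alice"
```

Even this crude sketch shows why the data is sensitive: no login is needed, only telemetry that any immersive app sees by default.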
[00:34:59.517] Kent Bye: I see. My suspicion is that it's just a matter of time before all those are answered yes, that you can do that, but we'll need to do the research and due diligence to get to that point. Well, just taking a step back to the larger context, it's very striking to me to look at the FTC, the Federal Trade Commission, really in this economic context of looking at the market dynamics and trying to protect the consumer. It makes me think of the VR Privacy Summit, where there was a lot of talk about pulling insights from the medical field, which has a little bit of a right to privacy. I mean, I think generally there's not a universal right to privacy for individual citizens. I don't know if the GDPR is the equivalent of that right to privacy for European nations. There seems to be more of an emphasis on liberty and sovereignty given to group organizations. I mean, even to define a corporation as a person gives these different rights to corporations and these groups to be able to protect their interests in the free market, rather than putting consumer rights as a priority. And I feel like there's a little bit of a shift of that with GDPR, where a lot of companies have to implement those features, but then basically have a flag that says, oh yeah, and if you're in the United States, then you don't get to have all these extra features that we've already architected for. So it feels like we're in a situation where the technical architecture is there, but there's not a supporting business model to replace the existing model of surveillance capitalism, whether that ends up being subscription models or something completely different. I feel like we're in need of a completely different mindset around that.
And I do worry that if there's not a breakthrough in that, then we're going to be relegated to having the government come in in a way where they're not necessarily informed of the nuances, with the potential to have the opposite effect, which is to actually embolden these huge companies to continue to dominate the market, because no one else could even start to compete if the regulation means they have to meet all these different obligations. So I don't know if it's a matter of pulling in different insights from HIPAA, or the medical context, or different law around privacy. Abortion law, for example, has different aspects of the sovereignty of your body and what happens in your body. So I don't know if it's a matter of kind of piecemealing all of the existing laws together into a comprehensive framework for privacy, but it kind of feels like we're ready for a breakthrough in this. Maybe we're not completely ready. Maybe we need another number of instances of seeing the unintended consequences of data getting out there and leaking. With more and more identity theft and things like that, it feels like a constant nuisance for some people who experience it, but it's not a universal thing. And it's also a little bit out of sight, out of mind, where a lot of these power asymmetries, and the ways in which we are potentially being controlled and manipulated in subtle, unconscious ways, are things we couldn't even necessarily articulate or point a finger at. But in terms of the policy aspect, I don't know if you've thought about trying to piece together all the existing laws to come up with the grand unified theory of privacy, the comprehensive framework of privacy that has yet to be defined.
[00:38:18.551] Diane Hosfelt: Oh, that's definitely hard. So first, I want to say something about one of my favorite retailers, ModCloth, from back when I used to live in England. When I was living there, I had to use a VPN to shop on it, because while they would ship abroad, they weren't GDPR compliant on their website. So I had to use a VPN and pretend I was in the US to shop there. And let me just say, they're owned by Walmart. Walmart has the resources to make it all GDPR compliant. But, you know, that's a personal pet peeve of mine, because it was a pain to shop there when I was living abroad. Yeah, so it's interesting, when we're talking about a universal right to privacy: a lot of constitutions, particularly newer democratic constitutions, actually do enshrine a right to privacy, and sometimes specifically a right to informational privacy. We have it in the UN Declaration of Human Rights that privacy is an intrinsic human right. We do have a lot of countries adopting this constitutional right to privacy, whereas in the U.S. it's an implied right to privacy, mainly via the First, Fourth, and Fifth Amendments. There's the right to freedom of expression: if you already knew everything about me by looking at me, then how do I have any right to self-expression? Because I don't have the ability to choose what information I reveal to you, how, and when. So that affects my self-expression, right? That's our First Amendment. And then unreasonable search and seizure goes along the same way, and self-incrimination. So all three of those come together to create this kind of implied right to privacy, but we don't have any sort of explicit right to privacy. And then again, in the U.S., our regulatory framework is sector-based in multiple dimensions. On the one hand, we treat government intrusions much differently than we treat intrusions by private companies. One of my favorite examples of this is private companies.
They can do whatever they want, because the implied right to privacy in the Constitution only applies to government intrusions, except where we have explicitly chosen sectors to protect further. Years ago, during one of our contentious Supreme Court nomination hearings, a reporter got a hold of the potential nominee's Blockbuster records and then published them. So there's actually a law on the books prohibiting the unauthorized disclosure of video rental history, which now applies to things like Netflix and such. But does it apply to your music streaming, or your online book reading? No, it doesn't. We're also very sector-based in what we protect from private industry. That's why I say that the US is sector-based: we treat private companies and government entities differently, but we also treat different sectors of private industry differently, which is very different from the so-called omnibus legislation that's more common in Europe, as we see in GDPR. And there's a really interesting essay in an information privacy law textbook (because that's what I do now, I read privacy law textbooks for fun) where it talks about the origins of privacy in Europe and the origins of privacy in the US. In the US, we're very concerned with government intrusions, and much less concerned than we should be about private intrusions. Whereas European ideas of privacy are more rooted in these feudal ideas of the right to dignity and the right to control how you're presented to the world, not just in a self-expression sense; the libel laws are much stricter in Europe, for example. And so it's a different approach. And then you have different approaches in Middle Eastern and Asian countries as well. In China, I can't remember what the word is that they use for privacy, but it's essentially linked to shame.
It's been an uphill battle in China to get people to recognize that they do have this intrinsic human right to privacy because it's so linked to shame in their culture. And so this whole universal framework, it has to take into account all of these different historical factors, not just the current legal and regulatory factors, but it's really fascinating from a sociological perspective as well.
[00:43:07.276] Kent Bye: Yeah, I know that in China and the Eastern cultures there's an emphasis on the collective, the family, the rituals, on dissolving your ego because you're in relation to a larger whole. So I could see how emphasizing that individuality may be shameful, because you want to try to put the collective above your own personal needs.
[00:43:29.419] Diane Hosfelt: And it's interesting, because there actually is a right to privacy in the Chinese constitution. China has it and the U.S. doesn't.
[00:43:37.863] Kent Bye: It seems like what's happening with WeChat and everything being fed into like a firehose to the Chinese government seems like that would be a violation of the constitutional right. No?
[00:43:48.148] Diane Hosfelt: I don't disagree there. But I'm just saying they have the wording there.
[00:43:54.307] Kent Bye: And I know that recently India just got the right to privacy, so it seems like even these cultures that have been around for quite a while are adopting it. I don't know if you were tracking that at all, or what kind of catalyzed that.
[00:44:05.075] Diane Hosfelt: Oh, it's really hard to count. I actually did try recently to count the number of countries that have an explicit right to privacy in their constitution, and then I realized that I didn't have the couple of weeks it was going to take for me to read all the language. Maybe that'll be my next project for vacation: I'll just read a bunch of constitutions. But yeah, you know, it's a lot. I would say a majority of countries actually have an explicit right to privacy in their constitutions, or in their equivalents to our Bill of Rights.
[00:44:43.396] Kent Bye: Well, I could definitely see the influence of your mother on all of this diving deep into the legalese of all the nuances of these laws, and with your technical background, I think it's a great combination. One of the things I think of is how immersive technologies, and technology in general, serve as this kind of interdisciplinary melting pot, bringing together all these different disciplines. And I feel like the challenge with trying to come up with a comprehensive framework for privacy is that we have all these different contexts in our lives, and technology has actually found a way to embed itself into nearly all of them; it's all-pervasive. And in some ways, looking through the lens of technology actually helps us crystallize this variety of different contexts. So when I was listening to Dr. Anita Allen talk about her mandate at the American Philosophical Association, where she laid out the steps that we need to take in order to come up with a comprehensive framework for privacy, I was kind of mapping it to my own sense of how I map out the contexts of virtual reality. Because as I ask my interviewees what they think the ultimate potential of immersive technologies is, they usually answer in one of the different contexts or domains of human experience. So they'll say things like entertainment, or medicine, or hanging out with your romantic partners or business partners. You have dealing with death and grieving and trauma. There's aspects of your spirituality, your religion, your beliefs, your philosophy of life, higher education, your career.
There's aspects of your friends, your community, being isolated, ways in which you're exiled from yourself or your community, accessibility, your identity, your sense of embodiment, your finances, your resources and values, your communication, private communication, the way that you travel around in your local communities, early education, and then finally your home and family and your ancestors. So you have all of these different contexts where I feel like technology has kind of entered in. And I feel like, with Gödel's incompleteness theorem, anything that tries to be complete is going to be inconsistent, or if it tries to be consistent, then it's going to be incomplete. So it's sort of like asking: can you have a complete framework for privacy without any inconsistencies? Well, you may have to bucket it, so that you try to come up with higher-level things that are bundled together. I don't know, I feel like by trying to map things out in that way, we can start to say: well, I don't want you to track my biometric data. I want to have sovereignty over my body, and if stuff that's happening inside my body is quantified and stored in other places, that data should be ephemeral. You shouldn't have access to it, or if you do, only do real-time processing on it. My financial situation or my transactions shouldn't be something that's available or able to be purchased; nor how I'm traveling around, moving about; who I'm communicating with; what I'm saying; what's happening in my early education; what kind of beliefs I'm being embedded into as I'm growing up; where I live; where my family's from; my race, ethnicity, my ancestors, my hobbies, my sexual preferences; and then my medical information. That to me seems like stuff that should all be private. And then maybe for other stuff, there's ways in which there's a marriage license that you have to get, records of your relationship if it goes to an official level.
You know, there's death certificates when you die. With your religion, when you go to church, people can see where you're going, and there may be religious garb that you're wearing that also shows that. Where you work is something that's somewhat on the public record; sometimes that's private, depending on where you're working. Your friends, who you're hanging out with, is something that can be seen as you're moving about. And if you ever go to jail or prison, that's something that's going to be on the public record as well. So it feels like there's ways in which I see those areas as trying to define the public versus the private. And I don't know, I have hope that there could be legal frameworks to try to codify that in some way.
[00:48:42.601] Diane Hosfelt: Um, yeah, I hope so too. It's all about empowering people to choose when they get to reveal information about themselves. Like, I recently bought a house, and my credit reports are always frozen, so we had to unfreeze the credit reports and everything so that we could prove to our lender that, you know, we're going to pay our mortgage. And so imagine a world where I didn't have that ability, where a lender could just be like, oh yes, we want to give you money, because we know you're going to pay it back, without me approaching them, right? Or imagine a world where you can't even get a lender to talk to you because your financial credit history is too bad, right? And so it was empowering, albeit annoying, to unfreeze my credit and be like: I am allowing you, for this purpose, to look at my financial history and decide that yes, I'm going to pay my mortgage, so that you'll give me the money to buy this house. I need somewhere to live. And so I think that what you said is completely true. Things should be private by default, and we should be able to open them up. One thing that annoys the hell out of me is Venmo, how Venmo's default is public. Make the defaults private, people. And so that's what I've been focusing on a lot lately: what can developers do to start now in a way that will be copied? All of technology, all of development, is copy-paste in one way or another. Somebody writes the first prototype, and we just copy-paste it over and over again. Earlier today, someone mentioned the Lena picture that's used throughout computer vision to evaluate algorithms and to do examples on. And, you know, that's the image that I used when I was doing computer vision. And I didn't know it was a Playboy centerfold when I first used it.
And then when I found out afterwards, I was like, yeah, maybe it's time to retire it. It has some really great properties, and that's why we use it for computer vision: it's got good contrast, it has good edges, it has a face. So it has all of these aspects that we want to be able to evaluate and look at when we're doing processing on an image. But there are a lot of images out there. We can pick other ones with the same qualities, maybe even better. And so it's these choices that are initially made that persist for decades. And so it's the decisions that we make now, whether we choose to suck all the data up or whether we choose to do lean data, whether we choose to provide abstracted gaze data or whether we give access to raw gaze data, right? It's the decisions that we make now that are going to persist. And there's no taking back of biometric information. Once it's out there, you cannot claw it back. You can't really claw any data back. But unlike a password, I can't change my irises. I can't change my fingerprints. Well, I can burn them off, but then I'm going to get a lot of questions at customs.
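One way to picture the "abstracted gaze data versus raw gaze data" choice Hosfelt mentions: an API could report only coarse dwell counts per named region and discard the raw sample trajectory entirely. A minimal sketch, with invented region names and data (not any actual browser or WebXR API):

```python
from collections import Counter

def abstract_gaze(samples, regions):
    """Collapse raw gaze samples (x, y points) into coarse dwell counts
    per named region, discarding the raw trajectory entirely.
    `regions` maps a name to an (x0, y0, x1, y1) bounding box."""
    counts = Counter()
    for x, y in samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                counts[name] += 1
                break
    return dict(counts)

regions = {"menu": (0, 0, 100, 50), "content": (0, 50, 100, 200)}
raw = [(10, 20), (12, 22), (50, 120), (55, 130), (60, 140)]
print(abstract_gaze(raw, regions))  # prints {'menu': 2, 'content': 3}
```

An application built on the abstracted output still learns what the user engaged with, but the fine-grained trajectory that could feed biometric identification never leaves the function.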
[00:52:04.096] Kent Bye: Right. And with eye tracking and all the other data, yeah, you can't change stuff that's under your unconscious control. So yeah, there's so much about that that you will never be able to change.
[00:52:13.284] Diane Hosfelt: Exactly. And so it's one of those things that I would much rather we go cautiously and iterate cautiously than the whole move fast and break things thing. My favorite thing about move fast and break things is the XKCD for it. So it's got a list of jobs I've been fired from. My motto is move fast and break things. And at the bottom of the list is massage therapist. And then if you select the alt text, it says I almost got fired from my job as a funeral hearse driver. But then they realized how much business I was drumming up for them.
[00:52:53.769] Kent Bye: Oh, boy. Well, for you, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?
[00:53:03.535] Diane Hosfelt: I mean, biggest open questions, what the hell is privacy? What is privacy? What is identity? What is identifiable? And none of these really have one answer. All of these have answers that are going to continue to evolve with technology and that we're going to always be asking ourselves. And I think that that's the way it should be. We should always be asking ourselves questions. So for me, a privacy framework isn't a framework of principles. It's a framework of questions. It's what questions do we ask ourselves? How does this harm people? What happens in this situation using this data from this sensor? What are the benefits? What are the harms? And what's the ratio there? What's the acceptable benefit analysis? It's a huge benefit for medical researchers to be able to use a commercial off-the-shelf device to conduct research into detecting early onset Alzheimer's, for example. Instead of having to build their own setups, it's a huge benefit for them to be able to use an already existing device. However, do we need our social networks to be able to be like, oh, hey, you might want to say goodbye to this friend early. They aren't going to be around very long. Or like giving you funeral advertisements, right? Because they've detected a health problem. Like, it's funny, but in a very deeply depressing way.
[00:54:35.499] Kent Bye: Definitely into the Black Mirror scenarios there. Well, just to kind of wrap things up here, I'm just curious what you think the ultimate potential of immersive technologies might be and what they might be able to enable.
[00:54:51.386] Diane Hosfelt: Oh gosh, that's a hard question. You know, you should have just asked me, what is privacy? The ultimate potential, I think, is I think it's allowing us to approach problems in new ways. It adds a new way of thinking for me. Immersive technologies have really broadened my horizons. looking at the potential, looking at what other people think the real potential is, and looking at the risks and the different potential mitigations. For me, the potential is that it adds, pardon the pun, it adds a new dimension to the way that we think. And I think that that can be really, really transformative.
[00:55:44.747] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:55:49.952] Diane Hosfelt: The only thing that I'd like to end with is, you know, let's start as we mean to go on. Let's embrace lean data. Let's embrace privacy-preserving technologies. Because the decisions that we make now, the decisions that we make today, at this pivotal stage, they're going to stick with us. They're going to stick with us through the life cycle of this iteration of immersive technologies and possibly beyond. You know, just like Lena stuck around in computer vision community, even though we recognize that maybe we shouldn't have stuck with her, these decisions are going to last. And so let's do our best to make the least harmful decisions now.
[00:56:30.052] Kent Bye: Awesome. Well, Diane, I just wanted to thank you for all the work that you're doing in this space and continue to do the research and write the papers and to go around the world speaking about this topic. And yeah, keep fighting the good fight. And yeah, thank you for joining me on the podcast today. So thank you.
[00:56:45.716] Diane Hosfelt: Thank you so much, Kent. It's always great speaking with you.
[00:56:48.837] Kent Bye: So that was Diane Hosfelt. She's on the mixed reality team at Mozilla as the lead of privacy and security. So I have a number of different takeaways about this interview. First of all, again, Diane is making this point that privacy engineering is hard, that there are a lot of trade-offs between a lot of different things, and it spans across lots of different contexts. So it's trying to encompass the entirety of the human experience. And being at the point where technology has transgressed these different boundaries, going too far in terms of taking our privacy and focusing on the surveillance capitalism aspects, there have been all these unintended consequences. And so there's been this collective recalibration process happening in the entire technology industry of trying to figure out where these new lines and boundaries are. And when she went to this USENIX Conference on Privacy Engineering Practice and Respect, that's PEPR, it brought together lots of different people from lots of different domains. And I think, again, the takeaway from that conference was that there's no single easy answer to how to navigate these trade-offs. I myself was trained as an electrical engineer at Rose-Hulman Institute of Technology, and I actually worked in the military-industrial complex for a number of years as a radar systems engineer on the F-22. And so I know that whenever you're working on any system, it's all about these different trade-offs: how you weigh different aspects and how you're trying to optimize for certain things, but you're going to be giving to one angle and then taking away from something else. And so when it comes to privacy, you're essentially talking about the entirety and the complexity of the human experience, and different aspects of not knowing what those unintended consequences might be.
So encryption, for example, may actually be empowering people to propagate things like child pornography or sex trafficking. There are social implications to a technology that may in itself be agnostic, but within a certain cultural context may not be the best thing for what people need. Just as an example: you give a lot of open and free communications technologies to somewhere like Myanmar, and if they're not at a level where they can handle that open communication, then it can be abused by people who use it to incite violence with dangerous hate speech and to catalyze the genocide that happened in Myanmar. As technologists, we have to think about those types of dynamics as we're building all these different systems. It also sounds like Diane's been looking at all these different constitutions from different countries and these different laws. In the United States, it's very sector-based. We do have the First, Fourth, and Fifth Amendments that get connected together to form this kind of implicit right to privacy. There's the right to freedom of speech: if you know everything that I'm already going to say, then is that really freedom of speech? And not only free speech, but also the freedom of assembly and association, which is also included in the First Amendment. It explicitly says that the people have the right to peaceably assemble and to petition the government for a redress of grievances. So we have the right to associate and to assemble with different people, and if all of that is being tracked for bulk collection and potentially used to prevent people from getting together, then that is violating our right to assemble. Then there's unreasonable search and seizure.
So there's being able to have certain elements of your life that are private. If companies are doing all this data hoarding of all the different aspects of your life, then that, combined with the third-party doctrine, which says that any data you give to a third party is no longer reasonably expected to be private, is eroding our Fourth Amendment rights against unreasonable seizures. The government could go to those companies and say: oh, you know, this is already not private, so just go ahead and hand it over to us, even if we don't have a warrant or any specific reason, just for this bulk collection. We just want to take in as much information as we can. And then there's the Fifth Amendment, around self-incrimination. You have the constitutional right to plead the Fifth and not give information over, but with the inherent nature of these technologies, if they're just automatically aggregating all this stuff, then what do you do with that? Like Diane said, it's taking aspects of the First Amendment, Fourth Amendment, and Fifth Amendment and trying to extrapolate all these implicit rights to privacy. But there's this whole movement in other countries, with the GDPR and other aspects, that is giving you the explicit right to privacy. In fact, a lot of the more recent constitutions are giving a more explicit right to privacy. And that was one of the research projects that she would love to do if she had time: to try to discern which countries have an explicit right to privacy embedded within their own constitutions. So I think from Diane's perspective, she's looking at these different legal and policy angles, but also trying to think about the more pragmatic aspects of the engineering practices, things like lean data rather than big data, trying to minimize the amount of data that you're collecting.
And there's just the way that Mozilla Hubs is being architected: looking at how you are able to come into this private social VR experience without a lot of data being collected. But as she explains here, there are these different trade-offs. Because Mozilla Hubs is meant to be primarily private and invite-only, you are giving explicit invitations to the people that you want to come into the space, and you can either control that access through Discord or have different ways of privately giving that information to people. But then you're limiting yourself to people that you already know, and you don't have as many of the open-world interactions. So you have these private backchannel ways of sharing access to these virtual spaces, but if people get in there and they want to harass you, it's really difficult to identify them and prevent them from coming back again and again. And so there are these different trade-offs. I expect that as we move forward, there are going to be lots of different approaches, whether focused primarily on these private social VR spaces or on public social VR experiences, and we'll just have to weigh the trade-offs. If anything, it's important to have a plurality of many different approaches. That's one of the things that Jennifer Granick of the ACLU was saying: the problem with the way that the technology has moved forward is this dominance from companies like Facebook or Google or Amazon or Apple, any of these huge mega-corporations that are controlling these different social dynamics online. The important thing is to try to have a plurality of many other approaches out there.
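The private, invite-only access pattern described above is often implemented with capability-style URLs: an unguessable token that acts as the invitation itself. The sketch below shows only the general pattern; the function name and URL are invented for illustration, and this is not the actual Mozilla Hubs implementation:

```python
import secrets

def new_room_invite(base_url):
    """Mint a capability-style invite link: anyone who holds the full
    URL can join, and the token is long enough to be unguessable."""
    token = secrets.token_urlsafe(16)  # 128 bits of randomness
    return f"{base_url}/room/{token}"

# Share this privately (e.g., over Discord); possession of the link
# is the credential, so no account data needs to be collected.
invite = new_room_invite("https://hubs.example")
print(invite)
```

The trade-off discussed above still applies: anyone who forwards the link forwards access, which is part of why identifying and banning harassers in such spaces stays hard.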
And, you know, there are lessons from Lawrence Lessig about taking a holistic approach: looking at the market dynamics, the culture, the laws and regulations, as well as the technological infrastructure and the architecture of the code of the different systems being put out there. We're in a situation that has been skewed towards this centralized consolidation of power, and it's a process of trying to figure out how to decentralize it in different ways, or to find other alternatives that are out there. Part of that is a cultural obligation for people to look at what different types of communication systems they're personally using, to start to change both the cultural dynamics and the economic dynamics of this whole situation. And there may need to be some level of regulation that comes in to start to break things up, although, you know, how do you start to draw the lines with some of this stuff as well? The risk, as Diane was saying, is that if you start to implement too many laws that are trying to break up these different things, then you may inadvertently hit targets that are a lot smaller, and you don't want to create regulation that's going to stop the innovation at the same time. So again, there are no perfect solutions to a lot of this stuff, but from her perspective on privacy engineering, she gets excited about it just because it is so challenging and you do have to take so many different things into account. So definitely keep an eye on a lot of the work that Diane is doing. She's publishing different academic research articles. She published a piece on arXiv called "Making Ethical Decisions for the Immersive Web" that was submitted back on May 14th, 2019. That had just come out right before my AWE talk, and a big part of that talk was going through a lot of it and trying to map out different aspects that she brought up within the piece.
And I know she's working on a lot of other articles as well. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. Just $5 a month is a great amount to give, and it allows me to continue to travel around and have these different types of conversations. I personally think that this is one of the hottest topics out there, one that we need a lot more discussion about. This is something that I've been working on for the past seven months or so, having these different conversations, and I'm glad to finally get it out there. I'm hoping that it'll help catalyze a larger discussion about a lot of this stuff as we move forward. So if you want to see more of this and want to support the work, then please do become a member of the Patreon. You can become a member and donate today at patreon.com slash voices of VR. Thanks for listening.