#840 XR Ethics: Privacy-First Architecture for the Open Web

Selena Deckelmann is the Senior Director of Firefox Browser Engineering, and she gave a great talk at Mozilla's View Source conference titled "Our privacy and the web" covering much of her work to protect user privacy. Privacy and security are hot topics at the W3C standards body and among the browser vendors, as surveillance capitalism has eroded user privacy through many ethical transgressions. Deckelmann emphasizes that privacy and surveillance are inversely proportional: in order to increase privacy on the web, all of the browser vendors are trying to curtail third-party tracking and fingerprinting, and to make the open web a safe place to travel.

I had a chance to talk with Deckelmann at the View Source conference to get a sense of what she's working on to implement a privacy-first architecture, as well as some of the privacy engineering tradeoffs she has to navigate. As we move into the immersive web, there will be even more privacy and security implications that have yet to be solved. But hopefully, by exploring some of the lessons learned from the 2D open web, the immersive web will have a stronger foundation to build upon when it comes to implementing a privacy-first architecture.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So continuing on in my series of looking at XR ethics and privacy, today's episode is with Selena Deckelmann. She is the Senior Director of Firefox Browser Engineering. So Mozilla had sent me out to the View Source conference to do an interview about WebXR and talk to Diego Gonzalez from Samsung Internet. And I happened to be there at this conference where there were a lot of other people from the open web, people from Mozilla, but also people from all the different browser vendors, people from the W3C talking about open standards. And it was just a great conference to gather the entire community from the open web. And one of the big topics was around privacy and security. I think we're just seeing in the larger tech sphere all these transgressions of what's been happening on the open web. And I think a lot of these folks are trying to come up with specific architectural decisions for how to mitigate some of these different aspects of surveillance. So one of the things that Selena says is that privacy and surveillance are kind of like on the opposite ends of the spectrum: in order to have more privacy, we actually have to have these different mitigating technologies in order to stop the level of surveillance that's happening out there. And so they're just trying to create all these anti-tracking technologies. There's certain aspects of ad blocking, which is like just blocking everything. After having a number of different conversations with different people at this conference, I came home and started using an ad blocker. I'd never really used one before, just because I thought, oh, well, these companies that are out there, they're trying to have ad-based revenues, and I wanted to support them just through these ads, even though I hardly ever click on any of the ads anyway.
But when I turned on the ad blocker, there were some aspects in which, you know, the website didn't even actually work unless you were consenting to being tracked in different ways. And so I think part of what they're trying to do at the browser level is find different ways of sandboxing these different trackers so that they can maybe curtail some of these different aspects of the surveillance that's happening. And so the ad blocker is maybe the first step in a lot of this movement, but they're trying to actually embed some of these different architectures within the browser itself. And this is happening both with Safari and even on a lot of the Chrome browsers as well, and with Mozilla. And one of the things that's been happening in the larger web industry is that there has been a bit of this consolidation around the Chromium browser and Chrome and Blink. We have Microsoft using it, Samsung, Magic Leap. There's all sorts of people that are just using the Chromium browser. So there's a certain amount of dialectic that needs to happen from Apple and Safari, as well as with Mozilla and the Gecko renderer. Lots of different approaches. And, you know, Selena is just giving her perspective. She had just earlier in the day given a whole lecture about privacy. And so I just wanted to talk with her because in a lot of ways, the future of immersive technologies is going to need to learn from a lot of these lessons and architectures that are happening on the open web, to not repeat a lot of these same mistakes. And so I think it's really important and vital to hear about some of these architectural decisions and how this is moving forward, and the different unique insights that come from doing things like progressive web apps, that's PWA. So being able to have an application that's contained within the architecture of a browser, rather than having a native app on your phone, for example, where you have no idea what type of things they may be tracking and what they have access to.
And you may be consenting to give it access to certain things, but it may be opening up permission for lots of different contexts that you have no idea about. So the web and progressive web apps have a very specific security model. And so I think that was one of the things that she talks about here as well. So we'll be covering all that and more on today's episode of the Voices of VR Podcast. So this interview with Selena happened on Tuesday, October 1st, 2019 at the View Source conference in Amsterdam, Netherlands. So with that, let's go ahead and dive right in.

[00:03:52.885] Selena Deckelmann: My name is Selena Deckelmann and I am Senior Director of Firefox Browser Engineering. I work from Portland, Oregon, and what I do right now is mostly strategy around the browser, and I work a lot on privacy and security issues for Firefox.

[00:04:11.210] Kent Bye: So we're here at the View Source Conference that's put on by Mozilla here in Amsterdam. And so this morning, you gave a talk about privacy. So what were some of the big messages that you were trying to tell this gathering of browser engineers and web developers here?

[00:04:24.483] Selena Deckelmann: Right, so the title was Our Privacy and the Web, and so I was trying to kind of set some context for, you know, the past, you know, where we've been with the web and privacy, and particularly surveillance, and where we're kind of at right now and where we might go to. And the main message that I wanted to get across is that privacy and surveillance are very related, and we can't have privacy unless we're free from surveillance. So I went into some detail on that, talked a bit about the state of tracking on the web today, and tried to give people some things that they could do to fight against it, basically.

[00:05:07.705] Kent Bye: So what's happening with tracking on the web? Why is it happening, and how can you get some insight into what tracking is actually being done?

[00:05:16.093] Selena Deckelmann: So today, most of the real-time ad ecosystem is based on targeting individuals with these things that are called profiles. And they're sort of like second- or third-generation things that are created from algorithms that just analyze people's behavior, things that they've posted online, stuff like that, to form an impression of what things... it's actually probably better just to use a specific example. So I had one slide, you know, that was describing the different kinds of attributes that an advertiser would guess that you had, and they were particular ailments, diseases, and also things that I was kind of surprised by, like, you know, impacted by incest or depression and things like that. So ads that would target you on very personal, very specific things, and then those attributes would be passed potentially between sites, even. So, you know, you might be browsing for something on one website, navigate to another website, and that advertising tracker could follow you, including those attributes. So that's a pretty uncomfortable thing, I would say. And, you know, none of those things that I mentioned are like illegal or scary necessarily, just maybe creepy in the US. But say, for example, that an advertising system decided that you were gay and you're in a country where that's illegal. And then that marker starts following you around on websites that you're visiting, you know, while you're a citizen of that country, in that country. So that could actually be quite dangerous to an individual. So I think none of these systems were necessarily created to endanger people, but the effect of them has been to create significant problems that need to be solved.

[00:07:00.782] Kent Bye: Well, just in terms of different medical conditions, whether it's like infertility or the other things that you listed there, there seem to be potential side effects: if the algorithm is deciding and inferring this, it could be wrong. It's probably often completely wrong about a lot of it. But still, if it makes these inferences, then that information gets passed off to people who are in charge of insurance coverage or something. It seems like this information could actually be used against us in different ways. And I'm just trying to get a sense of how you quantify how that is either already happening or how that might happen.

[00:07:38.388] Selena Deckelmann: Yeah, I think, you know, in certain spaces, particularly... you know, I know a little bit about the U.S. I'm not a lawyer, I'm not super familiar with all of the laws. But there, you know, there are things that we've done to restrict how medical information can be used for insurance, for example. But there's a lot that we haven't restricted, such as sending people political ads; you know, there are no restrictions really. And there are some advertising restrictions for campaigns, but in the past they were not really enforced online. Now, you know, that's changed with the 2016 election. There's a lot more interest in enforcing that, but I think that we've seen there a real-world and sort of unanticipated impact from advertising being used to suggest things, sometimes things that are not true, and to encourage people to vote or even not vote. These campaigns are quite sophisticated. I linked in the notes to my talk to a report that was made about the Russian disinformation campaign. They went through and collected all of the different social media entities they could find in the posts that they were making, and they deconstructed the memes that were produced and the intention behind them. And that's, I think, a really interesting case study to get a little bit of insight into both the social media parts of this that are happening, which are not paid ads, and also the leveraging of the ad ecosystem, and the interplay between the two. So in terms of measuring, like quantifying, that impact: it's going to be really challenging, because some of these issues are fundamental. I mean, to me, I think our election was clearly influenced by the use of this technology, in the U.S., I should say. And I think that we're going to just see more and more of that. And I have seen it.
You know, there have been reports of other disinformation campaigns, you know, either to encourage people not to vote or to vote a certain way, sometimes using fake information to do so. So it's a very adversarial environment right now. I mean, obviously, you know, this technology is also just used to sell shoes. But I think when we're discussing the impact, it has to be very contextual. There are certain ways that this is being used that are clearly, I would say, not ethical. And we need to kind of explore those spaces and come up with good solutions for managing that. I think you and I, you know, we discussed a little bit about the VR implications of this, and one of the lessons that I think the VR world could learn from, you know, the mistakes that we've made on the web is, you know, we created all these APIs to collect information without really considering how that information might be used in an adversarial context, how it might be used against the people that provided the information. And we also did not consider the secondary and tertiary uses of that data. So there's this organization, I think it's called Privacy Now, that has this great infographic where they talk about personal information as like the core of the data that's being shared. And then there's this secondary layer of conclusions drawn from that personal information that's shared in aggregate. And then there's this third layer of the profiles that are generated with that. And it's really difficult to get a handle on how your personal information has been used throughout that entire ecosystem. And that's, I think, an area of policy that is really worthwhile to dig into, thinking through not just the secondary use but that tertiary use case, and how VR information might end up getting used in this way as well.

[00:11:14.117] Kent Bye: Well, there seems to be potentially even a fourth level, which is the sociological implications for how you as an individual are connected to a community or in relationship to larger groups of people. Because, as you said in your presentation, the web community didn't necessarily consider the threat model when they were developing all this, and it's not only used for the market dynamics, the market economy, but potentially for individual psychological manipulation, targeting individuals specifically to try to subtly influence them in specific ways, and also this sociological shift: trying to create these memes to maybe shift the election if people are in the middle, to identify people who might be open to hearing or voting in a particular way and specifically target those individuals. So creating those psychographic profiles, it feels like we're kind of in a state where the genie's out of the bottle. We have a whole market economy that's based upon surveillance capitalism. And for me, I guess the frustration is that there's not anybody that's necessarily stepping forward and saying, OK, here's a completely new economic model, whether it's a subscription model or new ways of exchanging value, because everything on the web up to this point has been given away for free. So in talking to people like Vint Cerf, he was like, well, back in the early days, when people were gated in terms of how much web usage they got, it was kind of like you were on the clock, and you were constantly evaluating whether or not the time that you were spending on the web was worth what you were giving. And I think there's value in having people just freely explore and look at the web without trying to quantify whether it's worth it or not. And I think that actually creates different dynamics. But having a subscription model, at least, would be like, OK, you're going to pay this much to get unlimited carte blanche access to everything.
But I feel like that also benefits people who have the resources to be able to pay for that. So there's a certain paradox in how you provide universal access to all human knowledge for free, but yet still expect some sort of market economy there. So I know that Firefox has been very good about coming up with these different ways to get a little bit more transparency, to be able to see what's happening with all these different surveillance capitalism tactics, like the trackers, and maybe prevent them from tracking you. But I guess I'm wondering, what is the antidote? What's the economic solution to really think about this holistically? What are some ways to get around this existing market economy?

[00:13:31.894] Selena Deckelmann: Yeah, I think we have some models. One model is a library. And the thing about libraries is they're publicly funded. So I do think that there is a place for the state to be involved and to fund basically a level playing field for web creators. So that's one direction that I think is promising, honestly. Beyond that, I think there are a bunch of proposals out there right now of people trying to figure this out. But there's so much money in this ecosystem that without some resistance, without some, like, you know, sand in the gears, basically, I don't know that we're going to see a change, because it requires people to basically not make as much money as they have been. So my thinking on the value that I can bring is to draw attention, draw transparency, get consumers concerned about this, and act in the interest of the people who are using these systems rather than the corporations that are extracting value from them. And I think that if you take that perspective, then, you know, hopefully that drives change. And I don't think Mozilla has a perfect answer here yet. And we've been quite open about that, that we're not really sure what the next step is. We've just been clear that from our point of view, the principles that we outlined when the company was first founded are still highly relevant today. And we have even more problems, basically. And so solving those is going to take a massive collective effort. And for my part, I really think that it's important that there's a democratic approach to this, that it's not just technology companies making decisions, you know, for their own self-interest, but that our governments fully engage. And we're starting to see that; you know, I think the EU regulations and their responses to abuses, I think they're great, and I think there should be more of that.
And I think it's going to be quite uncomfortable for the existing companies that are profiting tremendously from the way that the world works now. But I just, you know, I think it absolutely has to change because of this threat to not only our individual autonomy, but to society. You know, the paper that I referenced in the talk from Julie Cohen talks about this idea of modulation, and optimizing for stability and smoothing things out and profits. And I think that, well, first of all, I believe that will stifle creativity, innovation, vitality, you know, in society. And second of all, I don't think that that should be how we run our society. I don't think that those should be our core values. So yeah, personally, I like money, I like making a living, I like having a job. But also, I would like to balance that against what is good for society. That's what I hope people start to bring more to their work in technology. I think the values are inextricable, and some people try to ignore that. But I really think we've got to bring that back into our thinking every day, about the decisions that we're making and the work that we're doing. You know, and yeah, that's what I hope for. Anyway, that's what I hope people do.

[00:16:49.888] Kent Bye: Well, as you paint out these different spheres of everything from our individual private information, these individual quanta of behaviors, that then drive different, you know, larger predictions that say, OK, you're likely going to do this behavior in the future. And then from there, like a deeper element of our character, a psychographic profile that is trying to describe the essence of who we are. That to me seems like a very private and intimate type of thing that's not necessarily subject to my consent, nor are the inferences that they're making necessarily even accurate. So we have these companies creating these shadow profiles of us that, really, we don't have any transparency or control or sovereignty over. It feels like a little bit of a colonial mindset: these companies going into our private lives and seizing our pieces of data and aggregating them into this mosaic, this picture of who we are, the essence of our character. That feels like crossing a threshold of what should be reasonable for our own sovereignty, our boundaries of what should be private. But according to the laws of the United States, that seems to be perfectly legal at this point, where there's no universal right to privacy or legal framework that gives a comprehensive accounting of what should be public and what should be private, or even of this tactic of aggregating all these little bits of information into these psychographic profiles. So do you think that there's also a legal perspective, some legislation or right to privacy, or ways that we could model something maybe like the GDPR, that we could bring to the United States? Or what are the other policy angles that should be taken into consideration here?

[00:18:28.383] Selena Deckelmann: Well, there's a California law that's just unfolding, and people haven't really assessed the impact of it. But yeah, there's a new California law that I think is going to force us to explore some of this. But yeah, I definitely think that, you know, like I said, there's a place for the state to intervene here. We do have a fundamental right to privacy in the U.S., but I think the specifics of that have not adapted to what can be done with very large data sets and really fast computers. So it doesn't really address the nudging, you know, the just, like, targeting of people who might be persuaded by a piece of information, and that information might be true or not, you know, and we just don't really have much that can fight that particular battle right now. It's kind of on us, the technologists that have enabled this ad ecosystem, to shine a light on it and help people understand what it is, so that then we can all make an informed decision to allow it to continue or not. And, you know, I don't think that it's going away tomorrow in any sense. But I do think that we can start to limit the data that's collected. And there are lots of different ways to do that. One of the things that I've been talking about with Diane Hosfelt, who's here and going to be talking about VR this afternoon, is which APIs should we expose, and what kind of information should we expose? And this is happening right now for mobile devices. Should we expose viewport size to the web? Because it can be really highly identifying for people. So just taking that look at it, and looking at it through the lens of here's this adversarial use case, I think really helps people to think through the possible downsides to enabling access to things that could be used to very, very closely identify an individual person. And it's just a different way of thinking, you know?
I don't think it comes... I don't want to say naturally, but I don't think it's something that happens automatically. You know, a co-worker of mine always says, when his preschool said, don't bring the kid to school if they've got a temperature over 100, his first thought was, well, I'll just never take the kid's temperature then. That's not the way most people think. But that's what's needed in order to really understand how an API could be used in a way that's not for a purpose that helps people, but might actually go against their interests. And that, I think, is the role of Mozilla right now, for sure.

[00:21:16.090] Kent Bye: Talking to Diane, one of the things she was saying is that a lot of the constitutional protections, whether it's the First Amendment, Fourth Amendment, or Fifth Amendment, used in combination with each other, are really addressing the government's relationship to the citizens rather than these private corporations, which are kind of their own entities that aren't necessarily bound by the same standards as some of these First, Fourth, and Fifth Amendment rights. So it's more about the relationship between an individual citizen and the state rather than an individual citizen and a private corporation, which is considered in some interpretations of the law to be just another person. So I feel like there are potential ways of resolving that legally, but there's also a whole other market dynamic of people deciding not to participate, seeing the unintended consequences, and having more of a market impact on the culture. But there's also the culture of the design principles, to say, here's a way to approach design on the web that's ethical and has some privacy-first architecture in mind, and then to actually do that architecture and be transparent in some way. And that's one of the things in talking to the architects of Mozilla Hubs: having things be open source and having the values be embedded into the architecture, and then being able to go and audit it yourself, seems to be a good approach. With the web and the view-source mentality, being able to actually audit and look at it gives you the opportunity to ensure that there's some level of accountability and transparency in how you're actually implementing it. Now, you can put code on GitHub, and what's actually deployed to a server could be different. So it's not like you're able to audit the code on a server, which I think would be the ultimate transparency.
But I guess the larger point is that there are ways in which, by doing the open-source approach and doing a privacy-first architecture, and speaking about it, you can, I guess, cultivate trust, have the audience have that trust, and have ways that they can independently verify it up to a certain point. They might not be able to get down to the code on the server. But Mozilla seems to have put forth a set of principles for privacy and design. Maybe you could talk a bit about some of those design principles that you use to do privacy-first architecture.

[00:23:25.254] Selena Deckelmann: Sure. I think one of the big ones that's relevant to this conversation is the lean data practices. At the code level, any data that we collect has to be reviewed by what we call a data steward, who is someone who's been trained to think through, you know, this adversarial mindset, essentially: what could this data be used for? What could it be combined with? And what's the possible negative impact of that? They also just ask really basic questions like, how long do we actually need this data for? You know, should it be persistent? Should we maybe only collect it in what we think of as an opt-in environment, like a nightly or a beta browser, and not in release? And we apply that system to every piece of data that we collect. It also is being applied to, you know, our internal data collection and, you know, to our employees. So there are a lot of systems inside of companies where they collect HR-related information and, you know, don't discard it after a certain period of time, and we apply similar limits there. For the people that work in that part of our organization, they find it strange, you know, because they come from organizations that retain a lot of personal information about the people that work for the company. And we also have these practices internally. So, you know, I think it's radical in the U.S. to apply that kind of a lens to your own operations as a corporation. But I think that's part of what's necessary: to really start thinking of this as not just a consumer protection thing, but an everyone-protection activity, and making sure that in all aspects of what we do, we really do limit that data collection. In terms of other design principles, you know, I'm mostly, I'll just admit, I'm mostly a backend developer. I work on APIs and databases. That's where I come from.
And my team is, you know, primarily folks that work on the backend of the browser. But where my team intersects with frontend UX, a lot of the time what we do is focus on educating the user. That's generally where we try to go first, and then striking that balance between spewing an incomprehensible wall of text at them and guiding them through a very clear set of steps that will help them protect their privacy. And that often means that we have to test a lot of different messages with sample audiences and get feedback, you know, on how we frame things. Like explaining DNS to people: very difficult. Most people are like, why should I care? That's actually their question, and that's also very difficult to answer in just a sentence or two. So we put a lot of effort into essentially field-testing things, and we make adjustments all the time as we find better text to explain things to people.
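The data-steward questions Deckelmann lists (how long is the data needed, should it be persistent, should it only exist on opt-in channels like nightly or beta) can be sketched as a simple declarative policy check. The field names, channels, and retention limits below are hypothetical illustrations of the idea, not Mozilla's actual tooling or telemetry categories.

```javascript
// Sketch of a "lean data" retention policy: every collected field declares
// which release channels may record it and how long it may be kept.
// All names and limits here are made-up examples for illustration.
const POLICY = {
  crash_stack:   { channels: ['nightly', 'beta', 'release'], maxAgeDays: 180 },
  search_counts: { channels: ['nightly', 'beta', 'release'], maxAgeDays: 90 },
  session_urls:  { channels: ['nightly'],                    maxAgeDays: 7 },
};

// Returns true only if this field may be kept on this channel at this age.
function mayRetain(field, channel, ageDays) {
  const rule = POLICY[field];
  if (!rule) return false;                        // unreviewed data: never keep
  if (!rule.channels.includes(channel)) return false;
  return ageDays <= rule.maxAgeDays;
}

console.log(mayRetain('session_urls', 'release', 1));  // false: opt-in only
console.log(mayRetain('crash_stack', 'release', 30));  // true
console.log(mayRetain('telemetry_xyz', 'nightly', 1)); // false: never reviewed
```

The useful property of this shape is the default: anything not explicitly reviewed and listed is dropped, which inverts the usual collect-everything posture she describes.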

[00:26:11.671] Kent Bye: Well, one of the trends that I see is, with GDPR, there's a bit of disclosure that has to happen that says, OK, we're tracking your cookies, and you have to consent to it. Or it's not really even a choice a lot of times. At least, it's like, hey, we're doing this, and then click this box if you're OK with it. And I would love to see: are you OK with this, yes or no? And then click no. And then even have a browser flag just to say no every time this comes up, so that the browser can know that this is my preference. You know, maybe if there are certain features that I'm missing out on, then I can turn those on contextually. But there seems to be this permission fatigue as a phenomenon, also within the context of mobile apps and even VR. I think of when you install the app, it basically says, OK, we want to have access to all these different things on your phone. And then, in order to post things on, say, Instagram, they need access to your camera, and then they could potentially be turning that camera on. They have patents to be able to say, when you're scrolling through Instagram, we're going to turn on your front-facing camera to basically harvest your emotions as you're looking at this content, without even disclosing that. And that's a lot different than, you know, having access to my camera to be able to post pictures. Because I clicked that button, what is the difference between them doing this passive turning on of the camera to harvest my emotions, versus consenting to say, do I want this as a feature, to have you have a sense of what I am emotionally reacting to? I could see a future where that's part of the consent, but I can also see a part of the future with socially immersive technologies where you're going to need all sorts of really intimate information from these sensors.
And I don't want to have these apps in an environment where I click once on install and they have it all the time. But at the same time, I'm not sure if every time I want to use all these things I should go through a dialog box of twenty checkboxes, reductively giving access to all of these things. So finding ways to bundle those. But there seems to be this trade-off between permission fatigue and informed consent, and how sometimes those can be in opposition with each other.

[00:28:21.535] Selena Deckelmann: Oh, for sure. I mean, the GDPR, I think, is definitely stretching the limits of people's patience to click yes, OK, I consent. And I've seen some really interesting UX. Actually, in my presentation, the second slide I had was of this bird whose eyes got blacked out, and it's just an amazing story. But the website that was hosting that had a very interesting GDPR UX: they had this little thing that looked like a little fingerprint that you could touch, and it would pull down a menu that allowed you to just slide off, like, all of the trackers. And I thought that that was actually quite an interesting UX that, you know, I hadn't seen, and some people that I work with hadn't seen before, but it was quite intuitive and nice. So I think we're just in the early days of this particular kind of permission granting. And over time, I do think it'll evolve into something that's more sane, because right now it is a bit rough. Another thought that I had while you were talking about that was Safari's approach to expiring cookies. They have identified different types of cookies that will only last for, like, a week, or hours, or whatever, and they're kind of experimenting with that idea of applying extremely contextual expiration to certain types of permissions that you might grant. It's a little complicated, because cookies aren't really standardized, and so the browser wouldn't necessarily know what a given cookie was being used for. But it could become, like, a de facto standard for web developers, or something that could be built into frameworks, right, that ensured certain types of cookies were used for certain types of permissions, and, you know, maybe we have some adaptations there. Overall, I think that the issue is context. And, this goes back to Julie Cohen's whole thesis about privacy, it's extremely contextual.
And you cannot assume that permission granted in one context is permission granted in another. That's a pretty fundamental change to the way that people have approached privacy policy in the past. It's like, we're going to grant permission to this device, and that's not contextually aware. You have to say, for this purpose. And the way that we have it right now, where we're presenting people with a wall of checkboxes, that also is not contextual. So I think overall there's just going to have to be this evolution in how we request and grant permission. And I honestly don't know what the future of that is, particularly in VR. I think the user experience there is evolving very rapidly, and we really don't know what that's going to look like. But I think it's a very interesting problem to solve. And I am just super fascinated with all of the things, like this thing on the news website; it's a great UX that I saw them use there. And as more things like that emerge, I think web developers are really looking for great solutions like that. So the more we get stuff like that out there, the better. Something that I really want my team to do is amplify more of these good solutions that we see, so we're starting to work on that kind of thing more now.
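Deckelmann's point about contextual, time-limited grants can be made concrete with a toy permission store where every grant carries a per-category lifetime and silently expires. Everything below, the class, the categories, the lifetimes, is hypothetical; it is not how Safari's Intelligent Tracking Prevention or any shipping browser actually implements this.

```python
import time

# Hypothetical per-category lifetimes, in seconds. Real browsers would
# tune these very differently; the numbers here are purely illustrative.
LIFETIMES = {
    "camera": 60 * 60,                 # one hour
    "third-party-cookie": 24 * 3600,   # one day
    "notifications": 7 * 24 * 3600,    # one week
}

class PermissionStore:
    """Grants are keyed by (origin, category) and expire on their own."""

    def __init__(self, clock=time.time):
        self._clock = clock
        self._grants = {}  # (origin, category) -> expiry timestamp

    def grant(self, origin, category):
        ttl = LIFETIMES.get(category, 0)
        self._grants[(origin, category)] = self._clock() + ttl

    def allowed(self, origin, category):
        expiry = self._grants.get((origin, category))
        if expiry is None or self._clock() >= expiry:
            # Expired grants are forgotten: permission must be re-requested
            # in a fresh context rather than living on indefinitely.
            self._grants.pop((origin, category), None)
            return False
        return True
```

The design choice being modeled is that a grant is scoped to a purpose and a time window, rather than being a one-time checkbox that lives forever.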

[00:31:36.492] Kent Bye: Well, there was a talk earlier this week about the browser ecosystem and how there is a lessening of diversity among all the different options. We essentially have Chrome and the Chromium ecosystem, we have Apple's Safari and WebKit, and then we have Mozilla and Gecko, and then there may be some smaller independent browsers. But for the most part, that covers a large section of all the different browsers that are out there. So in terms of Mozilla and Gecko and the browsers that you're putting out with Firefox and Firefox Reality, what approaches do you take to privacy and privacy architecture, the design decisions that you've made, that you see as different from some of the architectural decisions that the other browsers have made?

[00:32:20.872] Selena Deckelmann: Well, I think tracking protection, the way that we implemented it, is through a feature called origin attributes. What we found when we were originally implementing tracking protection in the browser, which is like a network-level block in this subsystem called Necko, the networking component in Gecko, was that we were having to change lots of different code in lots of different places in order to effect the control. So the innovation there was to just look at the data structure for an origin, which is like a site, and attach a piece of data that could hold basically a key-value store. And then what we're able to do inside the browser is, as new information emerges about what we want the browser to know about that origin, we can just attach it there. So for tracking protection, all we have to say is, we've labeled this as a tracker, and then all the other subcomponents know that; it just travels with the data structure of the origin. That made it a lot easier to implement, really to re-implement, because it was a change to the code, and it's made extending those features and controlling what happens with individual origins much easier down the line. So we were able to implement this thing called first-party isolation, which Tor uses to ensure safety and security for their users. It's also something that you can enable, but it breaks a lot of stuff, so your mileage may vary if you flip it on. But it's part of a layered approach to security, and it's something that's fundamental to Gecko. I don't know at this point if Chromium has adopted a similar approach. But we finished our implementation, I think, in early 2016.
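The origin-attributes idea described here, attaching a small key-value store to the origin's own data structure so that a label like "tracker" travels with it to every subsystem, can be sketched as a toy model. The Python below is purely illustrative (Gecko's actual implementation is C++ inside Necko and Gecko), and the class and function names are invented here.

```python
from dataclasses import dataclass, field

@dataclass
class Origin:
    """A site's origin, carrying its own key-value attribute store."""
    scheme: str
    host: str
    port: int
    attrs: dict = field(default_factory=dict)  # the attached key-value store

def label_as_tracker(origin: Origin) -> None:
    # The networking layer labels the origin once...
    origin.attrs["isTracker"] = "true"

def may_write_local_data(origin: Origin) -> bool:
    # ...and any other subsystem (storage, cookies) reads the same label
    # straight off the origin, with no duplicated detection logic.
    return origin.attrs.get("isTracker") != "true"
```

The point of the design is that new per-origin facts can be attached later without touching every consumer, which is what made features like first-party isolation easier to layer on.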

[00:34:09.932] Kent Bye: So it's basically sandboxing out some of these trackers that are coming in. And how do you determine whether it's a tracker or just some sort of JavaScript that's running? How do you know what is malicious and trying to surveil you versus something that is just trying to compute?

[00:34:25.197] Selena Deckelmann: Yeah, so the main way that we do it is just through a list. We have a system that scrapes the web, and it's not just us: this company called Disconnect does it as well. What we do is look at these sites, look at the behavior of the scripts, and then we label them. So there's a list of all of the trackers that we've detected, and Disconnect compiles and maintains it. If a site is mistakenly put onto that list, they can appeal it, and it's a fairly rapid process to get them out. But for the most part, that's not what happens. So we're just manually maintaining it as a list. We're also interested in what Safari, and it sounds like Edge as well, are working on: intelligent tracking protection, where they use heuristics and machine learning to adjust the response in the client browser. That's a less consistent experience from my perspective, but I do think it's an interesting approach, and it's probably something that we would consider implementing in the future. But for now, we're really going for the most compatible and least disruptive experience that we can give people, and the list-based approach is what we found to be the most effective.
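At its core, the list-based approach she describes reduces to a domain lookup per request. Here is a minimal sketch with made-up domains; the real Disconnect list is far larger, categorized, and matches registrable domains with much more care.

```python
# Illustrative blocklist; a real one (e.g. the Disconnect list that
# Firefox consumes) has thousands of entries and is updated continuously.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_tracker(host: str) -> bool:
    """True if host or any parent domain appears on the blocklist."""
    parts = host.split(".")
    return any(".".join(parts[i:]) in TRACKER_DOMAINS
               for i in range(len(parts)))

def should_block(request_host: str, top_level_host: str) -> bool:
    # Only third-party loads are candidates for blocking: a site can
    # still talk to itself, even if it also operates an ad network.
    third_party = request_host != top_level_host
    return third_party and is_tracker(request_host)
```

A curated list trades coverage for predictability: every user gets the same behavior on the same page, which is the "consistent experience" she contrasts with heuristic approaches.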

[00:35:37.175] Kent Bye: Does it block you from being tracked? And I guess you said if you turn stuff on, it breaks things. So when you're using this, are you breaking the website, or are you just preventing the surveillance from happening?

[00:35:48.583] Selena Deckelmann: It depends on how the website was implemented. For the most part, the most strict version of tracking protection totally blocks the network load, so essentially nothing happens; the script can't execute. In the least strict mode, the script can still execute, but it can't write data locally. And if we discover that a tracker is trying to circumvent our policies, we'll sometimes strip cookies. In either of those cases, for the most part, we don't see any breakage, because the scripts will fall back gracefully when they can't write. The breakage that we do see is things like parts of the page won't load with the most strict version, or just the ads. But we're not really trying to block the ads exactly; we're just trying to stop the data collection. Advertising networks that we've worked with look at it, and they're able to adjust their scripts to collect less information. And that's really what we're after here: to stop the routineness of the data collection. That's the surveillance. It's just that it's routine; it's part of the air that we breathe. So if we say, actually, no, we're going to get some of the smog out, it does change behavior.
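The enforcement tiers described here, strict mode blocking the network load outright, and the lenient mode letting the script run while denying local writes and sometimes stripping cookies, can be summarized as a small decision table. The mode names and the `Decision` fields are invented for illustration; this is not Firefox's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    load_resource: bool       # may the network request proceed at all?
    allow_local_writes: bool  # may the script persist data locally?
    strip_cookies: bool       # should cookies be removed from the request?

def decide(is_tracker: bool, mode: str, circumventing: bool = False) -> Decision:
    if not is_tracker:
        return Decision(True, True, False)
    if mode == "strict":
        # Strict: the load never happens, so nothing can execute.
        return Decision(False, False, False)
    # Lenient: the script runs but cannot write data locally, and if it
    # is caught circumventing policy, its cookies get stripped as well.
    return Decision(True, False, circumventing)
```

The graceful-degradation observation maps directly onto this: most scripts tolerate `allow_local_writes=False`, so the lenient tier stops collection with little visible breakage.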

[00:37:04.304] Kent Bye: And I guess there's also a way to visualize some of these trackers. Is there an extension that you can use to visualize, in a network graph, all the stuff that has been tracking you as you navigate around the web?

[00:37:14.592] Selena Deckelmann: Yeah, there's an extension called Lightbeam, and you can use that, and it'll show you, in sort of a neat graphical format that you can drag stuff around in and look at. We also have, in Nightly and Beta right now, and soon in Release, the protection report. That's more of just a graph of the trackers that we've blocked for you, with links to more information about that. And we're hoping over time to add more information about these different services as people get more interested. We're kind of in the early days of this; people aren't even aware of a lot of the ways that they're being tracked. So we're putting something out there initially, and we'll see what kind of feedback we get from users about where we go next with it.

[00:37:55.772] Kent Bye: So for you, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?

[00:38:03.982] Selena Deckelmann: Well, one huge problem is what we do about the web on mobile. If you saw my talk, at the end I was like, just uninstall your apps. And I know that's not realistic for most people. We need some apps, and some apps are not uninstallable. But I think urging people to use the web more will make the web better. And it'll help us solve more problems on mobile if people use the web more on mobile. That's mainly what I am thinking about these days: how can we enable more there? And part of that story, I think, is PWAs and finding ways to support them. I think there's a very user-focused solution here, which is not only a PWA ecosystem, but a way for users themselves to say, I would prefer to use the web version of this, and I just want it bookmarked, and maybe it looks like an app on my phone. Because I think there's something fundamental about that UX that's really valuable to people. We wouldn't install these apps if they weren't fun and useful. So is there some way that we can enable that same behavior on phones? I'm really interested in that problem, and it's something that our teams are talking a lot about right now.

[00:39:12.068] Kent Bye: Is it because the threat vector of native apps is so much higher, because they have more access to your hardware, that you have more opportunities for them to be surveilling you not only when you're using the app, but also when you're not using the app and it may be calling back? And so the idea is that you should just delete the app, because you can't trust them?

[00:39:29.423] Selena Deckelmann: Well, there are multiple kinds of problems. I think one of the big problems is that there's not a well-understood privacy framework for apps. There's nothing. So whatever is happening there, it's really up to the discretion of the developer. And it's not that every developer is evil; I think they're maybe just not thinking about it. Whereas the browser has a well-defined privacy model. We have tracking protection as an option that users can enable. We have restrictions on different sites looking at each other's content and manipulating each other's content when they're interacting with one another, via site isolation. So it goes back to the original question of what we have learned over time with the web. We've learned that good fences make good neighbors when it comes to websites. And that's just not the case yet, I would say, for apps. So until we get there, and I think we can get there, I do think that the available web technologies offer something for privacy and security that apps do not today.

[00:40:33.589] Kent Bye: Right. And finally, what do you think the ultimate potential of the open web and the future of spatial computing might be and what they might be able to enable?

[00:40:44.902] Selena Deckelmann: Well, I think that what the 2D web has enabled already is so amazing. Just connecting people from all over the world, enabling access to information and people that just really wasn't possible before. Like the opening talk, where Henri was sharing about all of these people who are in emerging economies, who are using feature phones, and yet they're so connected with the rest of the world. That's the future. And so I would hope that for the VR world, what we're looking at there is more of that: more connecting people with each other in really enriching ways, in ways that fill up their spirit. That's, I think, the future. And I've seen some amazing things, and also just, like, super fun games. That's fine, too. But that's what I would hope for it, that we think about how we are enriching people's lives and adding something to them and connecting them to more people in ways that they want. That's what I would hope for it.

[00:41:48.804] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the web community or immersive community?

[00:41:56.349] Selena Deckelmann: I don't know, there's a lot of great work to do. Some of this can get kind of dark. It can be kind of a heavy thing, but I think it's a really, really important conversation to engage with. And you engage with it, you think about it, and then you apply it to your work and you do the work. And that's where my team's at today. We're just really excited about all of the ways in which we think that we can still help. So I would just say, don't be discouraged. It's fine. These are the good problems that we're here to solve. So yeah, just do the work. Yeah, I really have hope that everything's going to be fine in the end.

[00:42:31.854] Kent Bye: OK, great. Well, thank you so much for joining me on the podcast. So thank you.

[00:42:34.974] Selena Deckelmann: Yeah, thank you so much. It was great. Thanks.

[00:42:37.695] Kent Bye: So that was Selena Deckelmann. She's the Senior Director of Firefox Browser Engineering. So I have a number of different takeaways about this interview. First of all, one thing that came to mind as I was listening is that I forgot to mention, in the Diane Hosfelt conversation, this concept of contextually aware advertising. What does it mean to do advertising that is based more on the context that you're in and the content that is there, rather than on you as an individual? And I think that maybe one of the ways to pull back from this whole surveillance capitalism model is to focus more on those contextually aware aspects of the ads rather than focusing on you as an individual. Whenever you do browser searches within Google, that is one example of a little bit more of a contextually aware approach to advertising; the next level would be looking at the actual content and seeing what may be of interest to people who are also interested in that content. But one of the things that Selena was saying is that in order to really have good privacy, you actually have to have good mitigating factors for surveillance, because they're on polar opposite ends of the spectrum. The more surveillance we have, the less privacy we have. And if we want more privacy, then one of the things to do is to mitigate these different tracking technologies that are out there. So this is something that the entire web browser community is looking at and trying to mitigate. I guess I was surprised to a certain extent. I think that may be in some part due to some of the GDPR implementations, perhaps.
I'm not sure of the exact reason why there would be companies like Google who are also trying to mitigate some of these different tracking technologies, especially because Google itself is doing all sorts of different levels of surveillance capitalism. But this is a very adversarial environment, which is what Selena said: these companies are out there trying to track all the data of who you are, and that goes across all these different contexts. One of the things that she showed in her presentation was these different aspects of the medical conditions that you may have experienced. You may be a rape survivor, or you may have survived sexual abuse as a child. These are the kinds of things that ads could start to be targeted to, things that can be discerned based upon your behavior online. So it starts to get into these really weird ethical transgressions of identifying specific medical conditions that you may have, and having a whole economic ecosystem around that. She showed some of those examples during her presentation, and it was like, oh, wow, that just feels wrong, that you'd be able to identify that about somebody and actually target ads at them if they've gone through some sort of trauma in that way. But that's kind of the level that we're at. She said there are three levels of data: you have the personally identifiable information, then you have the different conclusions you're able to make from that, and then you have these larger aspects of your character. So there's you as an individual, what identifies you; what type of behaviors you have and are likely to have; and then different aspects of your core character that try to describe what you value and who you are, so that that could be added in.
And then I said there's probably even a fourth level, which is the sociological, social dynamics: if you wanted to do a whole level of information warfare, being able to target people based upon those relationships and those different cultural aspects of what type of political beliefs they may have. So obviously there are a lot of different discussions happening right now. Facebook is saying that they're not going to do any fact-checking of political ads, really taking a strong free speech stance. But then Twitter's Jack Dorsey, within this past week, said they're going to take a step back, saying that paid speech is different than free speech, and they're not going to let politicians use their network to share misinformation. Whereas, on one hand, Mark Zuckerberg is taking an extreme free speech angle, there are unintended consequences of how that goes wrong that can undermine different aspects of democracy. I don't know if he's going to change his mind; he's taking a pretty strong stance on free speech. But in some ways, free speech and paid speech may be different. So there are these different ethical implications when these huge communication networks can be used to have all sorts of sociological impacts, these unintended consequences for democracy. And this is a primary ethical decision that is happening right now in the culture. But this is in some ways just a microcosm of privacy engineering: trying to weigh all these different aspects of the ethics, trying to weigh the import of giving people the right to free speech against mitigating the harms that are done. And so to look at not only the benefits, but also the costs. And I think the costs are pretty huge.
So we'll see what happens when it comes to all these elections and the future of democracy when you have these huge communication networks. This is a whole new realm of how we navigate all of these different dimensions of our free speech rights, with the unintended consequences of the harms done by malicious bad actors who can use these communication networks to spread misinformation, disinformation, dangerous speech, hate speech, all that stuff that needs to be mitigated in some ways, and trying to draw all those lines. So we talked a little bit about different potential economic models. She mentions the library and publicly funded information. When it comes to the entirety of all the people on the earth trying to find ways to sustain themselves, there may be other aspects of microtransactions or subscriptions, or maybe government funding needs to be a part of it. Maybe there need to be aspects of the profits that come from these companies being reinvested into these different things. I know that Facebook was trying to advocate for these currencies, the Libra currency. There's a lot of talk of Mark Zuckerberg going in front of Congress and being asked all these questions. And there could be aspects of trying to redistribute some of this currency into the hands of the people. But yet that undermines different aspects of the sovereignty of the US dollar, and trying to create a global currency can be ripe for money laundering and terrorism.
I mean, there are all these different aspects of, once you start to get to this scale of a currency, how do you even begin to manage that in a way that mitigates all the different harms that could be done at a collective level, when you're talking about a communications network that has over 2 billion people on it, and the impossibility of trying to have a law or a policy that works for everybody in the world. So maybe there's this decolonized impulse where it's actually good to have these different countries out there with different approaches, focused on regional needs, rather than this more colonial mindset coming in with universal global currencies and global governments. So anyway, I think it's an interesting discussion, and there are these different impulses happening at different angles. So, getting back to this specific interview and some of the other conversations: she said that Mozilla really doesn't have an answer to these different dilemmas of the fundamental economic business models for the future. I know that Coil was there, using Web Payments, ways of exchanging value using the web and that protocol, so microtransactions may be a little bit easier. Coil was talking about different ways of maybe having these different subscriptions, so that based upon your actual browsing behavior, you could redistribute a fund of money in that way. So there are these new kinds of innovative models coming up, and they had announced this whole collaboration between Coil and Mozilla, trying to find ways of implementing these new alternative web payment processes, really trying to invest some money and some capital into coming up with alternative solutions there that may create alternatives to surveillance capitalism, as an example.
So, just a couple of other things. In terms of the permission fatigue that happens once you click yes on "I accept these cookies": that's just a result of GDPR, and there's not really a good, robust way to opt out of it. So having new models of turning off the sliders, turning off the trackers, actually gives you choices about how much you want all these different ad companies to track you in different ways. Like I said before about the ad blocker, once you turn that on, that can cut off a lot of the funding model for a lot of these companies. So it's about finding a good balance between using advertising to support the people making the content and expressing your sovereign right not to be tracked across all these different contexts. There are specific privacy architectures, with the origin attributes and the first-party isolation approach, trying to sandbox all these different trackers, and the different browsers, like Chrome and Safari, are taking different approaches. But for me, a big takeaway was that Selena was saying at the end of her talk that she's deleting pretty much all of the mobile apps on her phone, because with the privacy policies you have no idea, no transparency, about what any of these different apps are doing on your phone. There are a lot of first-party apps that you can't even delete at all, though there are different ways to toggle different aspects of location and whatnot. But in essence, anytime you download a mobile app onto your phone, you have no control over what it is doing. You could grant the camera permissions for that app, and who knows if those apps are passively turning on your camera and looking at your emotional reactions. This is an actual patent that Facebook has: as you're browsing Instagram, they could potentially turn on your camera to look at your emotions and harvest your emotions.
And when you give Instagram the ability to use your camera, you're not expecting them to passively turn it on and start to harvest your emotions while you're looking at content. So there are these different questions of what the context of these permissions is, when you actually give permission to use them, and whether you need some level of informed consent once apps start to passively turn things on without telling you. This is, by the way, not just Facebook; Apple and Snapchat and Google probably all have different aspects of this same type of technology. It's the idea that you have this app, you give it permissions, and you don't really know what the full context is when those apps are using those permissions. So the approach that Selena was taking was to say, hey, actually, the web has a really good security model. If you have a progressive web app, a PWA, and you have it more sandboxed, then once you open up that context, it's activated, and you have a little bit more assurance that it's not going to be getting access to different aspects of your phone. I think the big insight that I got from talking to Selena is thinking about the future of XR, immersive technologies with virtual and augmented reality. Once you give an application on your device all these permissions, there's all sorts of really intimate biometric data, your movements, information being read out from your body, and if you're just opening that up for anybody to have access to, then there are certain risks that come with that. And I think that's been a big consideration for WebXR when it comes to specifying all these different aspects: making sure that there aren't these unintended security gaps, and making sure that you can't spoof different aspects of payment processing.
And while the security model for the web has really evolved and matured, once you introduce this whole level of immersive technologies, it introduces all sorts of really tricky and difficult problems that have yet to be solved, to the point where I think it's still very difficult to be within an immersive environment and go from one website to the next, because how do you discern and determine if you're actually going to the website that you intended? With the web, you have a URL that gives a little bit more transparency, but there's not that same level of transparency when you're in an immersive environment. So really thinking about different aspects of phishing campaigns and all these different risks that have been mitigated on the web with very specific techniques: a whole other can of worms opens once you bring that to the immersive web. But thinking about the future, as we move forward: the importance of the progressive web app, of having different aspects of the security and privacy model be embedded within the browsers themselves, and the importance of these open web technologies, WebXR, WebVR, WebAR, and of thinking about the progressive web app when it comes to these immersive technologies as well. So that's all that I have for today. And I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
