#958: A Candid Conversation with Facebook’s AR/VR Privacy Policy Manager: New Potentials for Community Feedback

There is a lot of sensitive data that will be captured by virtual reality devices, presenting a wide range of ethical and moral dilemmas that I’ve been covering on The Voices of VR podcast since 2016. During Facebook Connect, Facebook released their responsible innovation principles, started talking to the media about these principles, & Facebook CEO Mark Zuckerberg told The Verge that “One of the things that I’ve learned over the last several years is that you don’t want to wait until you have issues to be discussing how you want to address [ethical issues]. And not just internally — having a social conversation publicly about how society thinks these things should be addressed.” However, the public record showed that hardly any of these ethical discussions about XR have been happening publicly.

Most of the ethical discussions Facebook has been having have been happening almost exclusively in private contexts, under non-disclosure agreements, or under Chatham House rules that preclude any public transparency or accountability. I have been asking Facebook for the past couple of years to get some privacy experts to come speak about the ethical implications of biometric data, but they’ve been resistant to going on the record about some of these more thorny ethical issues around the future of XR. The good news is that this seems to be changing, as I was given the opportunity to speak on the record with Nathan White, who is Facebook Reality Labs’ Privacy Policy Manager for AR & VR.

White has an impressive history of advocating for human rights and technology policy, including helping to reform US surveillance law while working with Dennis Kucinich, and he was motivated to bring about change from the inside by working at Facebook, over the objections of some of his friends who thought that it would be a morally-compromising position. White calls himself a privacy advocate within Facebook, and his role is to try to synthesize the outside perspectives on privacy implications from a wide range of privacy advocates, academics, non-profit organizations, and civil society: generally the types of privacy & ethics discussions that are happening here on the Voices of VR podcast.

Part of White’s role is to collaborate with external organizations and experts on these issues, but most of these opportunities for outside counsel have been happening behind closed doors and under NDA. He hopes to be more engaged in these conversations in a public context, because it’s these organizations that are going to collaboratively help set some of the normative standards for what we do with data from XR, well before the government enshrines any of these boundaries within some type of legal framework. The ethical boundaries and frameworks are more likely to come first from a collaboration within the XR community, from organizations like the XR Safety Initiative, the Electronic Frontier Foundation, and other tech policy non-profit organizations.

Is that enough for me to be assured that Facebook is doing everything they can to be on the right side of XR privacy? No, not yet. We still need more mechanisms for transparency and accountability that go beyond community collaborations and listening to what the culture is saying. Privacy advocate Joe Jerome told me that the trap is that feedback provided to a company like Facebook can feel like it’s just a “box-checking exercise” for them, so that they can say that they talked to privacy advocates. An example is Facebook Reality Labs VP Andrew Bosworth saying, “Consulting with experts across privacy, safety, and AR/VR from the very start is crucial to our product development process to ensure that we have the right frameworks as the technologies we build continue to evolve.” It’s great that experts were consulted, but there’s no transparency as to what exactly any of these privacy experts told Facebook or the degree to which any of their advice was implemented.

This is part of the reason why Jerome advocates for strong enforcement mechanisms in order to have a satisfactory level of accountability when it comes to privacy issues. In the absence of a strong oversight mandate and the ability to impose consequences, there’s no way for consumers to be assured that companies like Facebook are doing everything they can to take consumer privacy seriously.

I trust that White is going to serve as a strong voice for consumer privacy, but at the same time there’s no way for anyone on the outside to know to what degree those consumer privacy concerns are outweighed by competing business interests, or whether data is used for secondary purposes that fall within a broad range of interpretations of the open-ended and vague language of Facebook’s privacy policy. It’s also an open question what Facebook needs to do in order to build trust that they’re being good stewards of XR data.

But a big takeaway that I get from my conversation with White is that he doesn’t want to be the lone voice and sole advocate for privacy within Facebook, and that he’s interested in building more connections and relationships with other tech policy experts who are ramped up on the implications of data from virtual and augmented reality. There’s a big role for things like XRSI’s Privacy Framework, my XR Ethics Manifesto, or XR ethics & privacy conferences and discussions. There are a lot more open questions than answers, and it’s reassuring to know that there are people within Facebook who are both listening and participating in these discussions. Given this, it’s now more important than ever to continue to work on a broad range of foundational ethics and privacy issues in the XR space.

My closing thought is that there are still a lot more things that I personally will need to see from Facebook when it comes to having more transparency and accountability that they’re moving beyond these discussions and actually putting this type of advice into action. There are also a lot more open questions that I have about the relationship between the public and companies like Facebook, which are becoming more and more like governments. But the type of government is more like a technological dictatorship than any sort of representative democracy with established protocols for how to interact with and respond to the will of the people. At the same time, I’m at least encouraged that these open dialogues are starting to happen, and I hope to continue the conversation with Facebook on many other fronts as well. Overall, it’s a move in the right direction, but I think we all need to see more evidence of how Facebook plans on taking action on this front, and how exactly they plan on living into their four new responsible innovation principles.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So ethics and privacy is something that I've been covering on The Voices of VR since 2016. Since then, it's been somewhere between 40 and 50 different podcast interviews, panel discussions, just trying to facilitate a broader discussion about a lot of these ethical and moral dilemmas within virtual and augmented reality. And it's honestly been a bit frustrating because I feel like I've been having that conversation within the community, but not had any good opportunities to really engage directly with some of the major players, specifically Facebook. So hopefully that is changing within this conversation and especially moving forward. But then Facebook Connect happened. There's a new privacy policy. I was in dialogue with Facebook. Mark Zuckerberg actually did an interview with The Verge where Casey Newton, who was still working at The Verge at the time, asked him about Google Glass and some of the ethical debates that happened around Google Glass, and how Facebook was planning on addressing them. And Mark Zuckerberg said that, I think the first thing we should do is just talk about more of the issues up front. One of the things that I've learned over the last several years is that you don't want to wait until you have issues to be discussing how you want to address them. And not just internally, having a social conversation publicly about how society thinks these things should be addressed. So I saw this quote and I was like, great, Facebook wants to talk about this publicly. I've been talking about this publicly for the last four years. Wouldn't it be a great time to start engaging about some of these very issues? So after a number of different back-and-forths and discussions, I did have a chance to sit down with Nathan White. He's the privacy policy manager for Facebook Reality Labs for AR and VR, to see what his role is in terms of being the privacy advocate within Facebook. And it's a very candid conversation in terms of his approach of trying to do the right thing by privacy. So there's still a lot of open questions that I have at the end of this conversation in terms of, you know, how that actually gets implemented. And I'll be responding to some of the stuff that we talk about here throughout the course of this podcast. But I think I'm just happy to begin to start this public conversation, because there have been a lot of behind-the-scenes conversations that Facebook has been having, but it's all under NDA, Chatham House rules, or they're not talking about what the results of those conversations are publicly. So while they're talking to public interest advocates, it's not what I would classify as a public conversation. So this is maybe the beginning of a public conversation, and I would like to see it expanded out into many other contexts, not just on my podcast, but in other ways that more people and more representatives from the public can start to engage in these issues more directly. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Nathan happened on Friday, October 30th, 2020. So with that, let's go ahead and dive right in.

[00:02:51.058] Nathan White: My name is Nathan White and I am currently the Privacy Policy Manager at Facebook covering FRL for AR and VR.

[00:02:59.962] Kent Bye: Great. So maybe you could give me a bit more context as to your background and your journey into VR.

[00:03:05.125] Nathan White: Yeah, absolutely. Happy to have this opportunity to chat with you too, by the way. I've been listening to the podcast since I started this job, even before I took the job, and some of your interviews and some of your writing have been hugely influential in the way I think about this. So I probably shouldn't start with just flattery. Let me tell you how I got here. I guess since we have a little time, I'll start with my original career path. I thought I was going to be a diplomat. I wanted to go live in foreign capitals around the world and see the world. Then in, I can't remember what year it was, the United States invaded Iraq, and that was hugely, profoundly impactful for me, in that I thought it was just absolutely wrong, that it was the wrong thing to be doing, and it made me realize that there was no way that I could go be in a foreign capital and defend that action. I thought it was just immoral. So it changed my entire career path, and I thought, okay, well, I guess I'll just go to law school, and I'll go to DC, and I'll become a DC person, and I'll make sure that we don't go into other wars. I decided it didn't really make sense to go to law school and spend all that money, go to law school for three years and then not practice law. That wasn't what I wanted to do. Instead, I found a program in Boston that focused on political communications. I got a master's degree in marketing communications and advertising with that focus, thinking I would go into politics and I would use modern digital tools to inform people what was going on in the world so we wouldn't go into dumb wars anymore. Immediately after getting that master's degree, I started working for a really small digital PR firm. Actually, the first digital PR firm that I know of that existed in DC, and my first client was Dennis Kucinich. I worked for Dennis Kucinich, who was running for Congress, but also running for president. He'd actually just stopped his presidential campaign and was focusing on his congressional campaign. But he was still getting all these calls from people around the world of, we want to hear from you. We want your leadership. We want stuff. But he was focusing on his congressional campaign. So I was able to help him using digital tools of, well, they want to hear from you in Seattle. Why don't we send a video? Why don't we do a web conference? Why don't we talk to them? Just because you can't go to Seattle doesn't mean that we can't talk to them. And we got along pretty well, and he asked me to join his congressional office, first as press secretary, and then communications director. And I jumped at the chance. I had been working for him, his congressional campaign, for about three months, and my first grown-up job was to go work for Congress and to learn how things work. For an anti-war advocate, it was a dream come true. And it really was a great job. I really enjoyed working for Congress. I really enjoyed learning about how Congress works and what incentive structures motivate members of Congress to actually do things. Also, working with Congressman Kucinich gave me a bunch of opportunities to work on really interesting, meaty problems that were really important to the country. I worked with him on health care reform. I worked with him on the bank bailout during the Great Recession of 2008. I worked with the Oversight and Government Reform Committee on a huge investigation of Bank of America's acquisition of Merrill Lynch as part of that bailout.
Also, because of who I was and what my interests were, I maintained an interest in tech policy, which back then, in 2008, 2009, 2010, there really wasn't a lot of going on in Congress. In fact, at that time, members of Congress were not even given permission to use Facebook. Members of Congress are allowed to talk to their constituents, but they're not allowed to do self-promotion. They can't advertise to the entire country about how great they are, but they can talk to their constituents. So I was actually the one who requested an ethics waiver so that members of Congress could start using Facebook, because that's where our constituents were. I also worked on things like SOPA and PIPA, which came up on copyright, and things like the Computer Fraud and Abuse Act, which was very complicated. But my main focus continued to be on things about communication. So I got to do things like work with the Democratic caucus on how to use Facebook to talk to people, how to use the internet to talk to people, what is the internet and why is it important. One of my small claims to fame is Reddit was a really interesting platform. And there were people on Reddit who would talk about Congressman Kucinich because he had run for president. And I thought this would be a great platform to interview people. And so we reached out to Reddit and said, hey, do you want to do an interview with Congressman Kucinich? You want to come out here? We'll ask people to vote on what questions you want to ask him and we'll just do it. And that ended up working out, and it worked out so well that Reddit created the Ask Me Anything team that is now a huge part of Reddit. It's one of my small little claims to fame. Because I was a bit of a techie person and was seen as a techie person, I also worked with Congressman Kucinich on an area that was a big passion for him, and that was the Patriot Act. The Patriot Act, I'm sure you know, but for people who don't know, was a set of laws that were put into a package and passed pretty quickly after 9-11. They were packaged as a response to 9-11, but really it was a wish list of things that were being talked about before 9-11. Congressman Kucinich was convinced that it was an overstep. And so we were looking at what might the government be doing and how were they using these new authorities? And so we were looking at what is surveillance law? What is international surveillance law? What are norms in these spaces and how do we think that they might be abused? We had lots and lots of conversations. He tried to repeal the Patriot Act a couple of times. We obviously weren't successful on that. But it gave us quite a bit of focus on international surveillance norms, which were quickly becoming digital surveillance. After I left Congress, I worked as a political consultant for a few years. And one thing that happened that really changed my career was Edward Snowden came forward and leaked a bunch of documents to the media. And suddenly my years of studying the Patriot Act and digital surveillance and digital surveillance law suddenly became hugely important and relevant to what was going on in the world. So more of my political consulting moved away from issues like health and safety, where we did a lot of work on agricultural policy and studying GMOs to make sure that they were safe, and how people create GMOs and the pesticides that they use, and focused more on tech policy and surveillance.
I was doing a lot of work at the time with Demand Progress, who I had started volunteering with after Aaron Swartz committed suicide, and was getting deeper and deeper into the general tech policy world, but my area of focus was on surveillance reform and surveillance policy. From there, I ended up closing down my shop and went to work for an international tech policy organization called Access Now. Access Now's mission is to defend and extend the digital rights of users around the world, particularly those at risk. I like to describe them by what they're not: they're not a tech group that would help you install a printer. That's not what they do. They focus on unique threats. If you are, let's say, a gay rights activist in Saudi Arabia, a journalist in, let's say, also Saudi Arabia, or a human rights investigator in Thailand or Syria, you have unique threats and unique concerns, and those are the things that we would focus on. They have a 24/7 global helpline for users at risk around the world. They follow the sun and hand people off between offices so that they work with people to resolve the issues. And I was one of their people in Washington, D.C. A layman's way of thinking about it is I was essentially their lobbyist, focusing on their issues before Congress. Lobbying is a legally defined term by the IRS. I wasn't actually a lobbyist. But if you think about just somebody who's advocating for a position, that was my job in Washington, D.C., to advocate for tech policy with a human rights focus. And given my history, I also did a lot on surveillance reform, continued to think about the Patriot Act, continued to study Section 215, Section 702, how to reform them, work with those communities and those folks in Washington, D.C., both at the grassroots level, at the grasstops level, and also directly carrying that message to members of Congress. Encryption was an area where I spent quite a bit of time. Folks might remember the famous Apple versus FBI case that got so much attention. That was an important issue for us and one that I was quite active on, because we brought a unique perspective to it. Generally people think, is encryption good for computer security or is it good for criminals? Is it some combination of both? The perspective that we were able to bring is, if you are a human rights activist or trying to document abuses in Syria, encryption is literally your first line of defense. If you were trying to get evidence of war crimes out of a war zone and you were stopped at a checkpoint, you don't know who's stopping you. You don't know what side they are on. And if you have digital media, they would likely search it. And if it was unencrypted, regardless of which side, your life was at risk. And so the perspective we were bringing is that this is a much more nuanced issue, that there are pros and cons, that yes, bad people might use encryption, but also good people use encryption. And so don't throw the baby out with the bathwater. I loved that job. I worked with Access Now for almost five years. Great group of people. I can't say enough nice things about the work that they are doing in the world. But then I started talking to Facebook about the opportunity for the job that I have now. And the job that I have now is to help the company figure out how to build AR/VR products in the most privacy-protecting way that makes sense to the world and makes sense to the company. And I can actually break that down.
It sounds somewhat complicated, but it's actually a really, really fascinating job that I've had for now almost exactly two years. I'm two months shy of two years. And it's really been a great chance for me to sink my teeth into some really interesting issues and have conversations with people and start to build things that will have huge impacts. So let me explain kind of what my job is. Since 2011, 2012, Facebook has had a privacy program. The privacy program is that any time Facebook uses new customer data, or collects new customer data, or uses customer data in a new way, it has to go through a privacy review. The privacy program is driven by a privacy program manager who brings everybody together, fills out the documents, makes sure that everything is pushing forward to a decision, and then prepares things that ultimately get audited. But the meat of the privacy program is that a privacy lawyer needs to look at what the use case is and attest: this is legal in all the jurisdictions where we operate, and it's covered by our privacy policy and our data policy. It's essentially, this is okay, we're allowed to do this. The other wing is a privacy policy person, who I like to refer to as a privacy advocate person, who says not just, is this legal today, but is this the right thing to do? Is this consistent with what regulators, academics, and thoughtful people are telling us is the right way to go for where Facebook should be? Those are the two parts that sign off on any new use of data. So for the last year and a half, I was the policy advocate or the privacy advocate who was watching and participating in discussions about any new use of data or new collection of data, which gave me a huge aperture to see what Facebook is building and how we're thinking about it, both today for products that exist and also the long term for where we are going. I want to add, how I do that is really important. It is not me saying, I am Nathan White, and I know what's best, and this is where the world is going, and this is how you must build it. Frankly, I think if I were doing that, I would be really, really bad at my job. My job is not to give dictates on what I think is right, but my job is to be an avatar for the external community, of saying, this is what regulators think. This is what advocates think. This is what Kent was talking about on his podcast last week. This is where the conversation is going and where we need to be. And how I do that is by talking to people, is by talking to the regulators, talking to the privacy advocates, talking to policy advocates, talking to youth protection advocates, and actually talking to them and saying, here's what our roadmap is. Here's what we're thinking about building. Here's what we have. How do you think that we should do it right? And so sometimes that is very open-ended on a large time frame, of like, we're building AR glasses. How do you think we should approach thinking about it? And sometimes it's also very specific, of, we have a feature that we've built and these are the protections that we've built into it. Do you think it makes sense? Do you think these are sufficient? And so my job is not just to talk to the product teams, but to reflect the external debate of where things are going. And to do that, I need to help people externally understand what Facebook is doing internally.
That if I were to approach you and say, hey, Facebook is building glasses that have cameras on them, that might come off weird. But if I explain to you why we're doing that, where we think we're going, what is required for AR glasses to work in the future, if you understand that context for where we're going, that makes more sense and we can have a more productive dialogue. So a lot of what I do is not just internally focused on Facebook, here is what you must do, but also trying to make connections and develop external connections between people. So, people in the privacy community talking to people in the VR community, because so far there hasn't historically been a huge overlap between the professional policy class in Washington, D.C., I'm talking about the ACLUs, the CDTs, the EFFs, and the VR activist space, where I'm thinking about XRSI, I'm thinking about the Open AR Cloud, I'm thinking about folks at the Stanford VR Privacy Summit a few years ago. Building connections between those two communities so that they can start to think, how does privacy law and privacy norms and privacy expectations, how should that apply to VR? How does that work when the technology that makes VR and AR work is entirely novel types of sensors and types of data that haven't been considered in the fair information privacy principles of, what is that, 1978? So that is a really long-winded way of explaining my journey and what I do. But I really do think I have pretty much an amazing job of helping people to understand what is required in this space, what is required to make it work, and also to help guide decisions so that we are thinking about the long term of what we are building and what we are doing for society.

[00:17:13.375] Kent Bye: Yeah, thanks for that. That's a great overview of both who you are and your journey into this space. And for me, I'm also just glad to have an opportunity to talk to you, because I've been sort of wanting to have these types of conversations for a long, long time. And so it's just nice to be able to start this conversation in a public sphere. First, I want to say there seems to be a number of different stakeholders here. And you mentioned there's the law and the process of figuring out what the law is. I talked to Joe Jerome, who's a privacy advocate. He's concerned about these issues and is looking at things like tracking the federal privacy law discussions that are happening in Washington, D.C. There's ongoing discussions about what the legal landscape is going to look like in terms of, is there going to be a federal privacy law? California has the CCPA. Washington is thinking about doing some sort of privacy law. Illinois has a specific biometric privacy law. So there's both state and federal law, and internationally, the European Union has the GDPR, which set forth privacy as a human right, and there's all sorts of laws around the world. And then you have the stakeholders within Facebook, which is, you know, how do you negotiate the different trade-offs between the business interests and the privacy interests? And so let's first start, though, with the law, because you say you want to live up to the law, but the law in this space right now is actually still evolving. And so, in the capacity of your job, are you interested in having a strong opinion about what that federal privacy law might look like? Or is that something that you're outsourcing to someone like the XR Association, to sort of have a consortium of different groups within the entire XR industry come to some consensus with, say, Microsoft and HTC and Sony for what that law should be? So maybe you could just start with, because the law defines a lot of these boundaries, how you start to interface with that, and whether there's other separate people that are concerned about what those laws look like.

[00:19:07.760] Nathan White: Sure, so first I would zoom out just a little bit and say there are way more stakeholders than that. I sort of think simultaneously on two different time horizons: what the world is today, what the community is today, and where the community is going to be 10, 20, 50 years from now. Oftentimes, they are similar. Sometimes, they are not that similar because new things are coming on board, new technology is being invented. We can't really see the future. But if you zoom out on that time horizon of, let's say, 20 years or 30 years, if we are successful in this industry, the world is going to change, not just VR, not just tech. The world is going to change. The Facebook AR/VR org recently changed our name to Facebook Reality Labs. And I think of that as a throwback to, like, Xerox PARC and Bell Labs, the teams that were building the computers that we understand today. Things like a mouse input, things like a graphical interface for how we interact with computers. We're thinking about changing that structure for how humans interact with computers. So rather than we are sitting at a desk and peering into a computer, the computers can live around the world and augment our existence. If that is successful, that will change the world. I went to Tunisia for RightsCon a couple of years ago, and I was in awe that everybody who was on the internet was using Facebook. If you had a company or a restaurant, you had a Facebook page. That was just how they used it. That was not how Facebook was designed. It originally was designed to bring college students together. But it changed the way that people around the world interact with and expect technology. So I think about that on the same scale, that if we're successful, we're going to change the world, not just for the people who are currently using our products, but for everyone in the world. And that means we need to be listening to everyone in the world and thinking, what about people who may not have even considered VR? How could it impact them and what are their concerns? So mostly what I'm thinking about when I think about that scale is not the particular voices of, we've got to check this box, we've got to check this box. My first priority is, how do we get more voices? How do we hear more people? How do we have more people participating in this conversation? Because as I said, Nathan shouldn't be making some of these decisions. Nathan should be saying, this is what the community expects, and the community should be the ones to make those decisions. Over time, norms will emerge where people will expect certain things. I think about your cell phone. It used to be when you took a photo with a cell phone, it made a sound and there was a light on it. That was because people were concerned that everybody carrying around cameras would be privacy invasive, so they wanted it to be transparent: this is happening, a photo is being taken. We don't need that anymore, because now if I hold up my phone like this, you know, okay, that person's probably taking a photo right now. That norm and expectation has evolved and developed. Now we can standardize that, whether it's standardized in best practices, or standardized in self-regulation set up by an industry association like the XR Association, or whether it comes from a legislative body.
Eventually, those norms are going to emerge, and it's more important to me for those norms to emerge from bringing sophisticated stakeholders together to decide those norms, rather than me saying, this is what I think those norms should be. I certainly have a perspective on it, but I only have one perspective, and the number of perspectives in the world and the number of people who are impacted by this are too numerous to count. Does that answer your question about how I think about this?

[00:22:32.525] Kent Bye: Yeah. Well, I guess part of the subtext of that is that, you know, you have laws that you're following, say, what the definition of biometric data is, as an example, where there's a very specific definition that says it has to be identifiable. But at the same time, Stanford University just came out with a study saying that if you just have head pose data with hand tracking, and you're able to record like 10 minutes total worth of data to train machine learning algorithms to be able to detect it, then you only need like a 20-second sample to be able to identify someone at 95% accuracy over a sample size of 500 people. That's statistically significant enough to say, well, maybe hand pose plus head pose data should be classified as personally identifiable, even if it isn't right now. And so there's a sense, as Joe Jerome said, that a lot of this data that's coming out of VR isn't even necessarily defined clearly by the law. There's going to be all sorts of stuff with eye tracking data, galvanic skin response, you know, all sorts of other ways that you're going to be able to extrapolate information, EEG and brain control interfaces. I mean, we're entering into this new realm where the law is like five, 10, up to 20 years behind in terms of where the technology is. And so Facebook is going to have an opportunity to potentially help shape what those laws are. And so there seems to be a bit of a dialectic there, where there's discussions about these federal privacy laws that either you are going to be directly involved with, or that Facebook as a company is going to be helping shape, but those laws are also going to be dictating how this...
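
To make concrete the kind of motion-based identification Kent is describing, here is a minimal, hypothetical sketch of the technique: summary statistics over short windows of head and hand pose feed a classifier that re-identifies a user from a fresh sample. The data is synthetic, and the 30 Hz capture rate, feature choices, and random-forest model are illustrative assumptions, not the Stanford study's actual methodology.

```python
# Hypothetical sketch: re-identifying "users" from head + hand pose telemetry.
# Everything here is synthetic and assumed for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_USERS, WINDOW = 20, 600  # 600 samples ~ one 20-second window at an assumed 30 Hz

# Fixed per-user "motion signature" across 18 pose channels
# (head + two hands, position + orientation).
BASES = rng.normal(size=(N_USERS, 18)) * 0.2

def synthetic_session(user_id, n_windows=30):
    """Fake pose telemetry, shape (n_windows, WINDOW, 18), around the user's base."""
    return rng.normal(loc=BASES[user_id], scale=0.5,
                      size=(n_windows, WINDOW, 18))

def featurize(windows):
    """Per-window summary statistics per channel: mean, std, mean |velocity|."""
    mean = windows.mean(axis=1)
    std = windows.std(axis=1)
    vel = np.abs(np.diff(windows, axis=1)).mean(axis=1)
    return np.concatenate([mean, std, vel], axis=1)

# "Enrollment": ~10 minutes of motion per user becomes labeled training windows.
X = np.concatenate([featurize(synthetic_session(uid)) for uid in range(N_USERS)])
y = np.repeat(np.arange(N_USERS), 30)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A single fresh 20-second window is enough to re-identify the synthetic user.
probe = featurize(synthetic_session(7, n_windows=1))
print("predicted user:", clf.predict(probe)[0])  # almost certainly 7
```

The point is not the particular model; it's that innocuous-looking telemetry becomes identifying once someone can train on it, which is why Kent argues it may deserve treatment as personally identifiable data.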

[00:24:05.365] Nathan White: Yeah, I think you're absolutely right. And it's why I think I have the best job at Facebook: my job is not to look at it and say, does this follow the law? My job is to look at it and say, is this the right thing to do? So as you say, with biometric information, usually we look at Illinois, Washington State, Texas, GDPR for what is the definition of biometric information. And it is a very specific definition. Usually it is information about your body that can uniquely identify you. So you're thinking like your iris or your fingerprints. But that's not the only thing that I think users would think of as biometric data. I think that there are far more buckets of data. There's certainly information that could identify you, but then there's information about your body that doesn't identify you but could be used to learn about you over time. For example, let's say an emotive avatar that will smile when you smile. If a computer system logs every time you smile or every time you frown, over time, that would be incredibly sensitive, I think, to most people. There's also then things that could identify you if you have something to compare it to, or over time, or with a sophisticated system. I think that's where you're talking about gait analysis of motion data, or things like where your eyes are looking, how your eyes might move around, something like that. Then there's probably other information out there that I think people would think would be less concerning, things about you that you can change. Maybe your hair color or your hair length. Some people can change that; for some people it might be medically sensitive. Or what you're wearing, you know, my shirt is white today. That's not particularly sensitive. I wore a white T-shirt today and I can just change it tomorrow. So, if we think about all of these different types of data, we need ways of putting it into a framework so that we can say, ah, this is a class one type of data. This is really sensitive. We should never, ever use that, or if we have to use it, it should never come off the device. Or this is class two data, where there are ways that we might use it, we might need it to deliver the experience, like having your avatar smile, but we should have controls on what we do with that data, and we need clarity, and things like that. But the law is not going to give that. The law is not going to create that. The law is only ever going to enshrine what the community decides is appropriate. So rather than get to an end state where it's enshrined in law, I hope that we can get to a place where the community is saying, this is a sensitive type of data, and so it's our expectation that you would use it in certain ways or you wouldn't use it in certain ways. And that is really important to me. That kind of conversation among sophisticated experts, with expertise in privacy and youth and victims of domestic violence and every other perspective that we can bring in, with experts from the VR community like you and Kavya Pearlman and XRSI, so that we can collectively determine what those frameworks are. Then we'd say, Facebook, follow those expectations. If there are laws in the world, we're going to follow the law. If there are best-practice norms, like how we use cameras, we're going to follow that. But if they don't exist, what should Facebook do? Should they just make it up as they go along? I don't think that's a great idea. Should they hire people like me and have them tell them what to do?
I think that's a better idea, because it means I get a job, but I still don't think it's the right thing to do. I think the right thing for us to do is to focus on the community as a whole, and build together. How we do that is certainly complicated, particularly in the time of COVID. We're continually trying to get better at it, and we want to be better at it. We want to have more conversations. But I think that's really what motivates my perspective: getting to norms set by the community of people who care about it, are passionate about it, and have expertise. Then the law will follow. I might be a little bit skeptical of the law, because I worked for Congress and I advocated before Congress and regulatory bodies for so long. But they're slow. They're slow. And if we wait for the FTC or FCC or one of the alphabet soups in Washington, D.C. to come out with the standards, it's going to be too late. I really do think that it's going to come from the community, that we're going to set norms in consultation and conversations with each other. And it is already happening. I mentioned XRSI a couple of times. They recently put out a privacy framework with what they recommend. It's similar, well, I wouldn't say it's similar, it's in the same vein as the NIST privacy frameworks for how large corporations should think about protecting privacy. And that is the kind of thing that is far more interesting to me than a regulatory body saying you must protect biometric information. OK, yeah, we'll do that. We've got the lawyers to make sure we follow the law. But is that really enough? Is that what the community expects from a company like Facebook? I don't think so.
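
As an aside, the tiered framework Nathan sketches above lends itself to being written down as policy-as-code. The sketch below is purely illustrative: the tiers, example data categories, and handling rules are invented for the example and are not Facebook's actual classes.

```python
# Hypothetical data-sensitivity tiers with machine-checkable handling rules.
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    CLASS_1 = 1  # e.g. raw gaze or neural signals: never leaves the device
    CLASS_2 = 2  # e.g. expression events driving an avatar: needed for the
                 # experience, but with tight controls on secondary use
    CLASS_3 = 3  # e.g. mutable appearance like shirt color: standard handling

@dataclass(frozen=True)
class HandlingRule:
    may_leave_device: bool
    may_be_retained: bool
    allowed_purposes: tuple

RULES = {
    Tier.CLASS_1: HandlingRule(False, False, ("render_current_frame",)),
    Tier.CLASS_2: HandlingRule(False, False, ("drive_avatar", "safety")),
    Tier.CLASS_3: HandlingRule(True, True, ("drive_avatar", "personalization")),
}

def check_use(tier: Tier, purpose: str, leaves_device: bool) -> bool:
    """Gate a proposed data use against the tier's handling rule."""
    rule = RULES[tier]
    if leaves_device and not rule.may_leave_device:
        return False
    return purpose in rule.allowed_purposes

# Uploading smile-frequency logs for ad targeting would be rejected twice over:
assert not check_use(Tier.CLASS_2, "ad_targeting", leaves_device=True)
```

The design choice worth noticing is that the restriction travels with the data class, not with the product feature, which is the property Nathan is describing when he says class-one data should never come off the device.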

[00:28:41.535] Kent Bye: Well, I wanted to give an example that I think is at the heart of some of my concerns around not where it's at now, but where it's going to go in the future. I mean, there's certainly things like being able to track head pose and hand pose, which could potentially be personally identifiable information, and what happens to that and the implications of that. But I think something like brain control interfaces, where UCSF did a study putting ECoG electrodes on the brain to be able to essentially do speech synthesis. So essentially translating your thoughts into text, being able to read your mind, essentially. And as I went to the Future of Neuroscience conference that was put together by the Canadian Institute for Advanced Research, I was able to talk to some neuroscientists working on this very issue, and there was information that was talked about saying that within the next five to 10 years, a lot of what is being done with invasive electrodes is going to be possible with non-invasive EEG plus machine learning. So because they're able to put these electrodes on the brain, they're able to train up these neural network systems so that you could go non-invasive, put a thing on your head, and it's able to read your mind. So within the next five or 10 years, we're looking at a future where you have brain-reading technology. And I guess there's this existential fear that I have, like, what's it mean to have not only Facebook be able to read my mind and what happens to that data, but with the third-party doctrine, that means that if you're recording my thoughts, the government could come in and say, let's have an audit of Kent Bye's thoughts for the last year, that there's that type of information that is coming from my body and synthesized by these machine learning algorithms. That to me seems to be like an extreme edge case, of like, okay, there's going to be all this information that's coming out there. Eventually, we're going to be able to read my mind, and then have a log of what my thoughts are. And that's going to be stored on Facebook servers, and then the government could get that. So there's two things there. One is the sensitivity of some of this biometric data that may be coming off my body. And number two, the relationship between Facebook and the government, in terms of them coming to you with a warrant or without a warrant, saying we want this information, whether that's biometric data or social scores. So there's a certain amount of trust that you have, of how trustworthy do we find each of these people. And that essentially is a kind of social score that Facebook is keeping within its own internal systems to be able to reduce harm and abuse, and different ways of just having a way to know where people stand. But in China, they have social scores that are applied to people that dictate what kind of services they have. So you have these things that are maintained by a private company, but then what happens when that type of information gets into the hands of the government? So let's just start with the brain control interface as a use case, biometric data that's reading my thoughts.

[00:31:20.322] Nathan White: Yeah, absolutely.

[00:31:21.462] Kent Bye: What happens?

[00:31:22.765] Nathan White: So I love working in this space for a lot of reasons. Everything is new and interesting. I've worked in areas where there aren't novel questions, where people argue the same things over and over again. So one, I just love working in this space. I love that we can have these conversations. It's really fascinating to me and exciting to think, where can this technology go and what can happen in the future? And it's the cone of uncertainty: the further you get out, the more uncertainty there is. So I do think it's important to take a step back of, like, today, 50 years, where we're at in the process. And computers being able to read your mind and your thoughts are certainly something that's important to think about. But I think the issues you're bringing up will come up much faster than that. In the future, we're going to be able to have devices and services out there that are more realistic, because the beauty of VR is the feeling of presence. And if you are a blocky avatar, you don't have the same sense of presence as if I'm looking at you in the eye. Already some of the more expensive devices out there, mostly used for consumer research, do eye tracking, because they want to know where people are looking in a space. Eventually, I suspect that that technology will find its way into more commercially available uses, because people are going to want to make eye contact. You're going to want to see that I'm smiling a little bit or that I'm frowning a little bit. You're going to want to have those emotional experiences with people. And so if you built a system that can know where you're looking, whether it's for foveated rendering or for more personal communication, and then also know if I'm smiling or if I'm frowning, you can, in some ways, potentially read somebody's mind without knowing what's in their mind. That if you look at a red Corvette and you, ooh, that's exciting, you smile a little bit, your eyes widen, you can tell that person's excited by that red Corvette. And if somebody were to observe that and then use that information, is that acceptable? Is that mind reading fair game? Or is that, no, that's something that somebody may not have even known about themselves? I don't remember it, it's probably on your Twitter feed, but I saw a couple years ago somebody did a study that said just by eye-tracking data, if somebody walks through a crowd, you can identify their sexual orientation. These are the kind of things that we need to build frameworks for, how we think about it, before we get into invasive brain-computer interfaces. We need to think about it before we have those kind of things, of, all right, if we're going to have eye tracking for these purposes, because they power these services, what are the restrictions that we should put on ourselves for the right way to use that data, in a way that is comfortable for the world around us? And that may sound too happy-go-lucky, of, you know, we'll all figure it out, but I think it does have to happen, especially as we get out of VR and into AR. For AR to be successful, people will have to be comfortable wearing sensors. We all have sensors in our pocket in our phones right now, but if we're going to put computers around us and augment our reality, then we're going to have to be comfortable that we're wearing them and other people are wearing them. And so for people to be comfortable with that, you need to have assurances.
You need to know what they are doing, and you need to know what they're allowed to do and how the data is processed. So these are frameworks that I think we are going to have to create collectively as a community. And then it'll be people like me who convince and encourage and require Facebook to build to those principles. And then at the end of that, or probably at the same time, that's the conversation where we say, with all these experts, this is the framework that we think makes sense, this is what we're comfortable with. Then, if legislation comes along and says, these are the best practices and you must meet these minimum standards, that's great. That raises everybody's boat, for smaller developers and smaller folks who may not be able to invest that much, or at that level, into the socialization and development. I think that that comes not from, how do we stop computers from reading our brain? I think that comes from developing frameworks: new types of computers will observe new things and new experiences, and we need to decide what appropriate restrictions on them are, so that people feel comfortable, so that you're not having your innermost thoughts read without your permission. Maybe we get to some point in the future where you want to be able to tweet without talking and you want it to read your mind. I don't know. Maybe people will want it at some point. But we need frameworks for how to use those sensitive types of data long before brain-computer interfaces. I think we're going to get there in probably the next few generations of, I would guess, VR headsets. Then you also brought up the issue of the third-party doctrine and government accessing data. Yeah, that's important. That's something I think a lot about. And sometimes I wonder if my history in international surveillance law might have been relevant when I was getting hired. It wasn't something we talked about in interviews, but in the role, I think about it a lot, particularly with what we've seen from the Snowden leaks and what I think are fairly aggressive data grabs from law enforcement. DOJ in particular is pretty aggressive about what they want to get their hands on. There's an epic debate going on between privacy and law enforcement of, are we going dark, because encryption is making it harder to get data that a couple of years ago was easier to get? Or are we in the golden age of surveillance, where there is so much information about people available that if law enforcement knows to go ask for it, then they can get access to it? Joe mentioned this on your podcast: there's an epic paper on this called Tiny Constables, which was written by Ashkan Soltani and actually Kevin Bankston, who works with me at Facebook now on our AI team in a similar role to mine, where they examined this. And the thesis of Tiny Constables is essentially that surveillance used to be really hard and really expensive. If you wanted to know where somebody was going all day, you had to have a team of people handing off to follow them all day. Now you can put a GPS tracker on somebody's car and follow where they're going for months at a time. It's suddenly really, really cheap. And so there's a tension in that: is it becoming too cheap? A big question that people bring up is, how much law enforcement can we afford, or how much law enforcement can we tolerate as a free society? How much of that is appropriate? What right does law enforcement have to data just because it does exist somewhere?
I don't have great answers for that. I don't have great answers for where we are going to be 10 years from now. But I think about it a lot. And it drives some of the way that I think about things, of basic principles of privacy product development, of data minimization. If you don't need it, if you don't need it for a specific purpose, if it's not useful for something, don't collect it. Because, yeah, sure, maybe someday somewhere it'll be interesting. But it also is just a big target, that law enforcement might say, ah, you've got a lot of data there. I would like to access this. We see it, for people who are paying attention, with Apple: you know, they're very secure about what's on your phone, but things that are in the cloud are not necessarily encrypted backups, and more access is available there. Photos stored in the cloud are huge targets for law enforcement of, in Pike Place between these times, what photos are available? But I'm very aware of law enforcement's desire to do their job, stop crime, protect people. And I do think that there's a tension there, that as we create more data as we go about our lives, not all of that data is even fair game for law enforcement, and I do think about how we need to address those questions. I don't have great answers for what we're going to do 10 years from now, when these new kinds of data come around. But that's exactly why I'm so thrilled that EFF just a couple of weeks ago wrote a blog post on how a warrant should apply to the VR maps that you create of your space. We need people out there like EFF who are thoughtful in this space, who understand government surveillance, who understand privacy law, but also understand the VR technology, so that we can have that healthy debate about what is and isn't appropriate and what should be off limits. I also hope the courts will eventually provide a little more clarity on things like Carpenter, where there's, even among older judges, a recognition that things collected at scale are much more invasive than an individual data point. I think I've heard you use the term mosaic theory, the idea that one phone call or one location data point may not be that revealing, but if you have my location data 30 times a day for 10 years, you know everything I've done. You know a lot about my life. So I think we need more clarity from the law, and we need to have that healthy debate with law enforcement. And I think it's happening mostly right now in the encryption conversation, about going dark versus the golden age of surveillance. And then just quickly, the last thing you mentioned was the situation in China, with the credit scores or the social credit scores, the social scores, I forget what they call them. I always get it mixed up with the Black Mirror episodes. Yeah, frankly, I'm concerned about that. I don't have any control over what they do in China, but I think about some of these conversations, and I'm passionate about these conversations, and I'm passionate about making sure that we drive this forward. Because if folks like you, and Kavya, and EFF, and Joe, and everybody else we've been talking about, if you're not the ones who are driving this conversation and creating what these norms are, the norms are going to develop in places like China, where they're going to throw a lot of investment at it, and they're not going to have that same ... They might have thoughtful conversations, but I'm not a part of them.
I know if we do it here, we're going to have thoughtful conversations, and I'd much rather us be the ones creating what those norms and expectations for a global market are than leaving that territory to somebody else. One of the reasons why I was convinced to come over here is that if these conversations don't happen now, we're going to regret it 10 years from now. We're going to think, how did we miss this? How did we not create these frameworks and create these standards? How did we not have these discussions? And then we're going to be in a place where advocates are in a bunch of different places trying to change an entrenched system. I want to make sure that we can build that system in a way that is inclusive and diverse and considering of all those perspectives and all those opinions, so that we don't have to rebuild it later. We can build it right the first time. Of course, there's some hubris in saying that: even if we do our best to get as many voices in, we're not going to get it right the first time. We're going to have to iterate. We're going to need to bring in new voices as VR expands, as there's more audience, as there's more use cases, as there's more technology bringing new sensors and new types of data into it. We're going to continually have these conversations. It's not an end point, but it's urgent to me that we start having these conversations, and publicly, as soon as possible.

[00:41:35.155] Kent Bye: Yeah. And I think the underlying concern that I have is, what does it require for Facebook to really build trust with the community that all of this stuff is really happening? And just as an example, when I started talking to Joe Jerome, he said, you know, sometimes he'll be consulted by companies like Facebook, and he doesn't know if that's just a box-checking exercise. If Facebook is talking to someone like Joe, talking to someone like Kavya Pearlman and XRSI, so they can say, we're in dialogue with these experts, then to what degree is the advice they're giving actually being implemented into the day-to-day practices? And I think that's the underlying concern that I have, just even from talking previously with the privacy architects back in 2018. You know, they're like, okay, we are not doing this today. We're not doing this today. We're not doing this today. And it's like, okay, great. You're not doing that today. But according to the privacy policy, it says you could start doing this tomorrow, and you're under no obligation to tell me what you did. So it's sort of the challenge of, because the privacy policy is saying, here's what we're allowed to do, I have to assume that you're going to take it to the full extent of the worst possible interpretation of what those words mean. And so when things like conversations are being recorded, or movement tracking data, you know, all this biometric data, I have to assume that as much data as is made available according to the privacy policy, I'm consenting to have used, but there's no limitation on what those contexts are. In terms of Helen Nissenbaum's contextual integrity as a theory of privacy, there are no bounds in which that privacy policy is saying, we're only going to be using this data for this specific context. It's basically up for any secondary use that you'd want, as long as it's for the business interest or the use of the technology. And Joe said he doesn't know the degree to which this information is absolutely necessary for each of these X, Y, or Z applications. And so you have this challenge of, okay, how do you build trust with the larger community that you're doing the right thing when the laws aren't there to enforce it, and you have to do your due diligence by talking with all these people? But at the end of the day, there's all these other business interests and other things like that, where there could be tensions, like the interest of serving data for advertising purposes, as an example, and being able to mine information to get psychographic profiles on us. So I guess that's the concern that I have: I have a lot more trust from having this opportunity to have this conversation with you, but still, at the end of the day, as everything sort of gets worked out, there's a certain amount of lack of transparency that is just behind the private corridors within Facebook. And I don't know what the solution is to be able to say Facebook's living into the most exalted potential of really taking care of all this data, given all the different things that are afforded within the privacy policy, and the potentials for how this data could be mined to get all this information about us that could be used for, like, advertising purposes.

[00:44:21.745] Nathan White: Yeah, glad you brought it up. Really, really important. The first thing, about checking the box and the consultations, let me address that one first. In this conversation, I've been talking at a pretty high level about how we as a community need to develop these frameworks. We as a community, I keep saying that, and that community needs to continually expand. And I really believe that. I think that that is important. It's why I think your podcast is so important, why I think conferences are so important, why I think places like RightsCon are so important. But that is hard to do in a narrow way when you have a product or a feature. Take, for example, Project Aria, which I worked on. I worked on Project Aria for over a year, building an end-to-end way of securing data and being transparent about what we were doing. But ultimately, that is me working with a product team, and that is me giving them advice based on my experience with the privacy community and these vague, higher-level conversations. But at the end of the day, it should not be, well, it's okay because Nathan told us we can build it this way, he said that this was fine. We want to actually talk to people outside of the company who have unique perspectives and say, hey, this is what our idea was. This is what we thought. We thought this was the right way to do it. What are your thoughts? And so sometimes we do do those black-box consultations, where we go to people and say, here's what we did, here's what we're thinking. Can you give us your expertise? Do you think that we did this right? Are there things that we should have done? Are there things that we should change? What are your thoughts? And then we go back to product teams and say, okay, we learned from experts in the field that we're really going to have to make sure we're transparent about what is going on, so we need to do an extra piece of transparency in this particular way, or we need to make sure that this type of data is really secure. Sometimes we do get feedback that they might not want to share publicly, and so we do have private conversations. Also, when we're having conversations about things like Project Aria, I did ask people to sign NDAs so I could tell them all the details of what I was building and what was in it, so that we could share it. And yeah, I get that that's a black box. I get that most of the world doesn't see it, and we're kind of saying, oh, trust us, we do these things in the background, but we don't tell you about it. I totally hear that that is not sufficient, and it's not enough. But I do want to assure you, to the extent my assurances mean anything, that it's not a check-the-box. It's not a comms-driven process where we say, we want to tell people that we consulted privacy experts, so let's go consult some privacy experts. I'm sure there is some comms value to doing that, and I'm not going to pretend there isn't, but that's not why I do it. That's not what's interesting to me. What's interesting to me is, did we get it right? Does this make sense from your unique perspective? This is a podcast, so: I'm a relatively privileged white man who flies back and forth between DC and San Francisco. I have a pretty privileged way of viewing the world. There are a lot of perspectives that I don't naturally see. I try to understand, but I don't naturally see them. And folks like Joe, we keep bringing up Joe, his ears must be burning. He works for Common Sense Media.
His day job is youth protection. And so we look at it and say, from your day job, what did we miss about this? Is this right? And we are so grateful to those folks who give us their time, because they are giving us their time. They're not asking for money. They're giving us time because, I hope, they believe that it's influential and it results in a better practice. So no, it's not to check the box. I don't think people would talk to us, it wouldn't be worth their time, if it was just, oh yeah, you talk to me so you can say that you talked to people. Why would anybody want to do that? We're all on Zoom all day. We don't need another 30 minutes on Zoom to do that. We're having those conversations because we really do want to hear from people. But those aren't the only conversations that we want to have. Those are the before-we-launch-something conversations, once we've got an idea, or once it's getting kind of baked. But that isn't the only thing that we need from the community. We want to be driven by the community: what is socially acceptable, what does the community want, what features do users want, and what does the community expect from us? We do need to have more of those conversations, and they do need to be less opaque. I hear you that it does seem a little bit of a black box that I'm telling you I had conversations with people and I'm not telling you who I had conversations with. That's frustrating to me as well. But I think of that as just being one piece of the recipe. That's just one thing that we are doing. In my mind, I don't want to stack-rank levels of importance. It's one of the things that we're doing. It's in the batter, it's stirred in, but it's not the only thing that we want to rely on. We also want to have conversations in public. And the way I think about that is, if you think 10, 20 years from now, there's going to be an Electronic Frontier Foundation, there's going to be a Center for Democracy and Technology, there's going to be a New America Open Technology Institute that have divisions that focus on VR. There are going to be people who look at this the way that we look at bias in news or the way that we look at fairness in algorithms, people whose day jobs are to pay attention to this stuff and who have expertise. But for those people, for Joe to turn his passion into a day job, there needs to be enough of a community where there is reason to do that work. If you write a research paper, there needs to be a conference that you can go present it at. You need to know that you didn't just waste your time presenting it; because you were there, it's going to raise your visibility in the field. There might be future job opportunities. There might be other media opportunities. There needs to be an incentive for people to develop this into their day job. I think that that naturally is going to come, but my fear is that it will come in response to industry rather than developing collaboratively with industry. So I am constantly looking for new ways to have those conversations and to bring more people in.
Some of the things that we've done, also a little bit behind the scenes: we've been holding what I've been calling policy forums, where we bring in experts from other areas of policy and just talk to them about AR/VR, what the roadmap is, what the technology looks like, and who the people are in the field, so that they know who to go talk to and where to ask more questions, to try to build up that level of sophistication, so that they can communicate with VR experts like you and others. That way we can really kickstart that and have more of those public conversations, more of those roundtable discussions, more conversations that Facebook doesn't necessarily need to be part of for us to still learn from, for us to still watch. I don't need to be on the stage; I can be in the audience and I'm still hearing what the people are saying. And then I think there was another point that you had brought up that I wanted to mention.

[00:50:39.130] Kent Bye: The other aspect of that is, at the end of the day, once this is implemented, I have this conversation with you, I've had previous conversations, but moving forward, I have no idea, no transparency, in terms of whether or not the types of data that you're collecting from VR are going to have secondary uses that I may not even be aware of, and that, if I were aware of them, I might object to. So in terms of building trust with the community, there's a certain amount, again, not only of the history of the discussions that you've had, but also just the implementation: do I trust what data is going into this big sucking-in of whole lots of data into Facebook, potentially directly from VR and these immersive technologies? I don't have any idea as to what actually is happening to that data, what kinds of machine-learning-driven inferences are made about me from these online behaviors and actions. And so I guess that's this underlying distrust, or skepticism, where I have to kind of assume that it's the worst-case scenario, because I don't have any other accountability or transparency in terms of what actually happens to any of this data.

[00:51:40.943] Nathan White: Yeah, I think that's fair. I think that Facebook understands that, and that we collectively, the management, but we collectively, want to do right. We want to get this right. As I said earlier, AR is not going to be successful if people don't feel comfortable wearing sensors and being around sensors. That requires comfort. If people are not comfortable, they will reject them. People will put up signs and say you can't wear them in here, the same as happened with Google Glass 10 years ago. I think that there is a deep and widespread understanding that we have to get this right. We have to be truthful. We have to have people who trust that we are doing the right thing, or we're not going to be successful. I think the company really does understand that. How we demonstrate that trust is also important. How we show up every day is also really important. And so I hear what you're saying in that: why should we trust you, and why should we not assume the worst? I hear you. And I think we do need to show up every day and demonstrate that we're not doing the worst, that we are committed to doing the right thing. That is challenging. And let me give you some examples of why that's challenging. The first is, you said earlier, we sometimes say, well, we're not doing that today. That might sound like a shaky answer, but to me, that's the only answer we really could give, in that Facebook and other companies are inventing technologies. We have an idea where we think we're going to be in three years, where we think we're going to be in five years. We think that this is going to come together, but we're still inventing things. One, we don't know what the technology is going to do. And we also don't know what features and services people are going to demand of that technology. I did not predict Twitter 20 years ago, but it's something people want, to be tweeting while they watch the Oscars. People might want to do things in the future. There might be reasons to do things. We don't know what the technology is. And so when we say, ah, we would never do X, that's tempting fate that something will come along and it's, ah, well, everybody wants X, how could you not do X? There's also an issue for a company like Facebook: if we say something, if we put something in our terms of service, we are bound by law to that. Joe explained this on your great podcast, which, by the way, is must-listening for anybody interested in privacy in this space. Go listen to Kent's podcast with Joe Jerome. Though I guess if you're this deep into this podcast, you probably already did. What he was talking about is that privacy law in the United States is determined by the FTC. And the way that they generally do that is, I forget the exact term, but: are you misleading consumers? Basically, were you transparent about it? Did you do something that you weren't transparent about? So if we say we do X, or we don't do X, then to change that, we have a duty to recommit. We call it a re-ToS: we have to change the terms of service to cover these new use cases, which is very difficult. It's very expensive. It's very complicated. It's very long-term. And so generally you want to build it up; you want to do it as little as possible. So there's, in my mind, a kind of understandable reason why you wouldn't want to make commitments about things that haven't been invented yet and that you don't know what you're going to want to do with.
At the same time, yeah, the community also wants commitments, because if you think of that cone of uncertainty into the future, there are a lot of really, really terrible things that you can imagine using this technology for. I mean, the same things that are so amazing about VR, the fact that you can do the, was it the clouds of, I can't remember the name of it, but the empathy that comes through, that you can put yourself in somebody else's shoes and experience somebody else's being. That is so profoundly valuable, that you can put yourself in somebody else's shoes and understand their perspective. But imagine, you know, let's say a repressive government somewhere in the world starts using VR to educate their children with state propaganda about how great the fearless leader is. That same kind of experience that makes it feel real could also be used to horribly abuse society. And so I totally hear that we need to be more certain, we need harder rules, we need more guarantees, we need more clarity. I think the solution is not, Facebook, tell us your grand master plan, which doesn't exist yet. I think the solution is that we build this together, so that when we come to those decisions of what is the right way to do this, or should this data, as you said, be available for ads or not, it's not just Facebook making that decision by itself; it's the community making that decision in collaboration with Facebook. And do you believe that that's happening? Do you trust that's happening? I totally hear that some people don't. They're not going to believe it. Some people do not trust Facebook as far as they can throw them. I have many, many friends who feel that way. I have many friends who question whether or not it was ethical or moral for me to even come work for Facebook. Ultimately, I think it's too important to get this right. We have to have those conversations, and I'd rather give the benefit of the doubt and get burnt than not try. Because if we don't try, that's effectively the same thing as just getting burnt, even worse.

[00:56:33.229] Kent Bye: Yeah, we're at the top of the hour, and I just want to ask one final question, or I could go on and on and on.

[00:56:42.783] Nathan White: I would love to go on and on and on. I actually do need to drop off like right at time, but we can do one more question.

[00:56:47.805] Kent Bye: Okay, sure. Well, uh, just to kind of wrap things up here, what do you think the ultimate potential of virtual reality might be and what it might be able to enable?

[00:56:58.727] Nathan White: I love that you asked that question, because the answer is universal. It's, what are you passionate about? What do you care about? What's exciting to you? The ultimate potential of VR is versatility. Sure, we can have the greatest games. We can have the greatest travel. We can have the greatest work experience. But the ultimate goal of VR is that it can do all of those things. It can be something for everybody: it can be an art platform, it can be a music platform, it can be a sports platform, it can be a work platform. It could also be really useful for DoD and military applications. But the beauty, and what is exciting about it, is that the end goal is what we make it. And that's pretty dang cool.

[00:57:39.480] Kent Bye: Nice. Well, Nathan, I just want to first thank you for joining me here on the podcast. And, you know, my closing thought before we drop off is that Facebook as an entity is becoming more and more like a government in the scale that you're operating at. You're talking about different aspects of a deliberative process for having input, but I guess I would like to see things like a Freedom of Information Act equivalent: more transparency, more accountability. And if you really want to make this as diverse and open as you can, then think about not just having a lot of these behind-closed-doors meetings with certain people that you're talking to, but having a bit of a paper trail in terms of the types of discussions that are happening there. That's what I would put forth as a challenge, in order to really think about how to start to cultivate a deeper trust around these issues. Because while I'm very grateful to be able to have this conversation, still, at the end of the day, when it comes down to how this actually gets implemented, there's a certain amount of opaqueness, such that I can't say I completely trust that Facebook is doing the right thing. Even though I know that you're there fighting and advocating for this, from the outside, as a journalist, I'm still not able to independently verify that, if you know what I mean. But thank you for providing the opportunity to chat, and to talk about some of these different issues and how to navigate them.

[00:58:53.998] Nathan White: Thank you for letting me come on, and I really thank you for everything that you're doing for the community. I don't want to flatter you too much on your own show, but your influence in this community and your way of thinking, your podcast, the voices you've elevated, the platform you've given people to come together, I think is one of the biggest pieces of glue in this community. And so I'm just grateful to you for all that you do and are continuing to do for this community, and for holding Facebook's feet to the fire. We need people to do that, too. For norms to emerge, it can't be a bunch of people who love Facebook and love everything we do saying how great we are. We need a chorus of voices, and that includes people who are honest and tell us when we're making mistakes. And so I'm just very grateful for you and everything you do for the community, and for allowing me to come on your show and chat with you and your audience.

[00:59:39.121] Kent Bye: Awesome. Yeah, thank you. So that was Nathan White. He's the Privacy Policy Manager for Facebook Reality Labs for AR and VR. So I have a number of different takeaways from this interview. First of all, I'm just happy to have this conversation happening publicly and to have more engagement, and I would just like to see a lot more of this, especially this kind of candid conversation that really gets at the heart of some of the complicated nuances of this issue. It's not an easy issue, and I think there are a lot of things that need to be navigated. Now, Nathan's strategy for responding to this seemed to be: let's be engaged with the community, and let's not rush to enshrining things into law, because anything that does get enshrined into law is going to come from the community first, and we'd rather be in direct collaboration and cooperation with the community to have that conversation. Now, it seems to me that this actually is happening, but it's happening all behind closed doors, under NDA, under Chatham House rules. And so there's no transparency or accountability for any of those discussions, or even the outcomes of those discussions. And so, as somebody who is really advocating for all these different privacy positions, it's hard for me to see what's actually happening, what those conversations are, and what their outcomes are. And even if those conversations are happening and continue to happen, then you get that same dilemma that Joe Jerome described: you have no idea whether or not a discussion is actually going to go anywhere. How do the recommendations coming from the community actually get implemented within the context of Facebook itself? Now, Nathan's response to that is that he genuinely wants to hear from the community. But from the outside, we still have no transparency or accountability to see: are you actually implementing the privacy framework from XRSI and Kavya Pearlman? Are you listening to what Joe Jerome says? Are you actually changing the policies? And that, to some extent, is why Joe Jerome says that enforcement is so, so important: you need to have something in law, some sort of accountability, somebody looking after it to see how you're actually implementing this. One of the ideas that came from Ryan Calo, and also came out of the VR Privacy Summit, is that you would have some sort of institutional review board for privacy that would give independent researchers from academia and elsewhere full access to the data and everything that's happening, and then come in and do an internal audit of what's actually going on, to see if they're living up to the obligations of their privacy policy. That's something that I think would probably require some sort of law, because I don't think that Facebook is just going to do it willingly. And for it to have the type of independence that you would need, I think it would actually have to come from some sort of outside entity. But I think it's going to take something like that for me, at least, to have the sense of, okay, there's at least some entity out there that has some oversight capabilities.
And then, of course, the big question is: does that oversight community have the ability to be corrupted in some way? Any time there's one person in the chain who is responsible for everything, nobody really wants to be that person, because they don't want to just give the thumbs up and say, whatever you're doing is okay. Nathan himself, as the privacy policy manager and a privacy advocate, doesn't want it to be: just because Nathan says it's okay, it's okay. He wants to get as many different perspectives and viewpoints from the community involved as possible. But I think the underlying mechanism for how that actually plays out is missing. There are no details for how that actually works in practice. Lawrence Lessig says that there are four major dials you can turn to shift collective culture, and he lists them out: the law, culture, market dynamics, and the technological architecture. Here, we're not really talking about the technological architecture or the market dynamics; we're really talking about one of two options. Either it's going to be a law that's passed that forces this to happen, or it's going to come from the culture, and from Facebook's relationship, as an entity, to the culture, and how to really facilitate that type of conversation. And, like I said at the end, Facebook is kind of becoming like a government, so what does the Freedom of Information Act equivalent look like, to get some sort of paper trail or documentation? Because that's how you see what those conversations actually are, and how you get some sort of checks and balances there. Otherwise, it again risks falling into that category of having the conversations without them actually taking root or bringing about any significant change. I think the other thing that we didn't really have time to get into is that Nathan, in his position as a privacy advocate, is also working internally against all these other stakeholders with business interests and whatnot. And so there are going to be interests in sustaining the company, and those have to be weighed against some of these privacy decisions. It's not always a clear decision in terms of the value of the data for certain types of tasks, and, like Nathan was saying, we have no idea where this is all going to go, so it's difficult to really set clear boundaries. There are going to be different trade-offs between the business interests and what actually happens to some of this data: the secondary uses, whether it's going to be mined for psychographic profiling, or whatever else ends up happening to a lot of this data. And there are also just the open questions around the government and the third-party doctrine. You know, I'd like to see a lot more from Facebook on the concern that any data they record on a server, the government can come to them and ask for. So it's basically eroding our Fourth Amendment protections for privacy. When you have all this information and data being collected on us, it's essentially outsourcing a lot of surveillance that the government doesn't need to do, because Facebook's just doing it. If the government really wants that information, they just go to Facebook and ask for it.
So that's the third-party doctrine aspect, where I don't necessarily feel like I got a satisfactory answer, you know, that Facebook's going to really fight for protecting and expanding and advocating for a federal privacy law that enshrines the right to privacy and the requirement to ask for warrants. You know, like I was talking about with Joe Jerome, a federal privacy law that has the capability to start to expand how the Fourth Amendment should be interpreted with all this digital information. In the absence of that, it's basically like everything you give to a third party is considered to be public by the government. So there's literally no privacy for any data that you give to a third party, except in some conditions, like in the Carpenter case when it comes to cell phone data. But pretty much there's essentially a blanket statement right now, at least, that any data you give to any third party, the government can just get access to. And I think that's a huge issue as well. Nathan's done a lot of work on government surveillance and knows the implications of the third-party doctrine, but it's still like a fire hose of information that needs some sort of legislative way of being addressed. So that's a huge issue that I'd like to see Facebook take a stronger stance on, rather than, oh, we don't know what's going to happen there, it frustrates him as well. And, you know, when he talks about the need for these different frameworks, I think some of those frameworks are starting to come out, whether it's the XRSI privacy framework, or Helen Nissenbaum's contextual integrity framework, which I think actually starts to establish that there are specific contexts, and that, given this context, you are able to use information in this way. Like I said, with a privacy policy there's no boundedness in terms of what context information is going to be used in. So I think that's an issue for just trying to draw some boundaries around, for example, medical information that can be derived from some of this biometric data, like being able to determine someone's medical conditions by looking at their movement data. There's going to be all sorts of stuff like that where maybe the government needs to step in and start to say, okay, given that, we need to take these certain considerations into account. I'm not convinced that the private market, as well as the culture, is going to be able to sort it out on its own. I do think that eventually there will need to be some sort of law that comes in to start to enforce some of these different boundaries, because they're just not going to do it on their own without that, especially without any sort of enforcement mechanism there. But there are also other frameworks and things happening in the works. I was happy just to hear from Nathan that the work that I'm doing on the podcast has some influence in the wider industry. There's the XR Ethics Manifesto that I put forth, which tries to put together some framework. Part of the reason why I did that was because I went to the VR Privacy Summit, and the aftermath was that we tried to come up with some sort of recommendations at the end of it, but it was such a complicated issue that there was really no comprehensive framework for privacy, or anything that we could hand over to industry to start to implement.
It was such a huge issue that I really went in to look at the different perspectives and stakeholders, and to at least try to map out the list of ethical and moral dilemmas into some sort of framework. That's also part of the reason why I've been involved with the IEEE Standards Association. They have this whole initiative around ethically aligned design, and they have a new industry connections group for XR ethics that's going to be starting up here. I'm one of the founding members, and there will be more information about that within the next couple of months or so. But as that gets launched, I think it's going to be a place for folks who are really interested in this issue to come forth, and a neutral, academic place to set forth some of these frameworks and ideas, to try to find out how we can set some boundaries and some larger frameworks, just to help people make these different trade-offs around ethics in XR. And I hope that, as we move forward, we'll have more direct engagement with companies like Facebook, because, like Nathan said, they are having these conversations, but it's up to the community, to some extent, to come together and start to hold these different conferences and conversations where they can come in and participate. You know, it's easier for Facebook to come and talk to us if Facebook themselves isn't the one organizing; it just has more neutrality and more credibility if it's done by the community and they can come participate in it. So there's just going to be a real need for the community to come together to be able to talk about these issues. And there is an open dialogue that I think is happening right now. I still have a lot of questions in terms of, to what degree does that actually take root in changing how things are actually done within Facebook? And so that's a persistent question: what is our relationship, as individuals, to these big major tech companies, when they're not really democratic institutions, they're fundamentally private corporations, but the impact that they have on society is more like a government's than a private institution's? What are the democratic processes to be implemented, and how do we actually feed back in some organized way, beyond Facebook choosing a couple of people they want to have these behind-the-scenes conversations with, metaphorically checking that box of getting feedback? What is a more robust way of being as open and inclusive as you possibly can, taking any and all input from the community? That, I think, is still a huge open question: what does that actually look like? I like the intention, but I just need a lot more details. And there is this sense of what's happening in China: we don't want a lot of the normative standards to be developed by what is happening with social scores, turning a blind eye to a lot of the deeper ethical and human rights issues.
So I trust Facebook in that sense more than China, but at the same time, it just needs a lot more accountability and transparency for me to really trust them, because at the end of the day, there's still a history there. Going back to November 29, 2011, the FTC settled charges that Facebook had deceived customers by failing to keep privacy promises, and there was a consent decree that was signed in 2012. So a lot of the stuff that started to come up around privacy around 2011 and 2012, a lot of the things that Facebook created internally, was in response to that FTC lawsuit and the settlements that came out of it. And there's been an ongoing consent decree since 2012, which was violated after Cambridge Analytica. That's why there was that $5 billion fine. And so there is a lot of stuff that comes from that level of enforcement. There's a history of Facebook deceiving customers by failing to keep the privacy promises that they made; the FTC has already established that. So, because of that, as we move forward, what are the things that Facebook needs to do to really build trust with its customers and its community that they really are on the right side of privacy, especially when it comes to augmented and virtual reality? What are the things that you need as a consumer to really build that trust? That is still, to me, an open question, and I got a deeper sense that at least there's an advocate on the inside who's fighting for it. But we need the paper trail to prove, not only to me, but to the wider community, that Facebook's on the right side here. How do you build that trust and accountability and transparency? So I think that is a big open question, and we'll see, as we move forward, how they actually answer that and try to cultivate that trust. Starting to have more of these open conversations is a start, and I'd personally like to see a lot more. I could have talked for hours and hours about all the different questions that I've had over many years. So I just hope, for me personally, that I have a continued opportunity to engage in this conversation, but also, more broadly, that other people in the industry have access, and that the community at large has more of this dialogue facilitated. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And I'd like to just take a moment and ask for support for the work that I'm doing here. As Nathan said, it's making a big impact, not only in the larger community, but within Facebook itself, and so to me that is certainly a validation that the work that I'm doing here is having an impact. I need support to be able to continue to do this type of work, and I rely upon donations from people like yourself and the community who are listening. So if you wanted to contribute in some way, then supporting my work on Patreon would be a great way to do that. Just $5 a month is a great amount to give to help me continue to do this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
