There is a lot of sensitive data that will be captured by virtual reality devices, presenting a wide range of ethical and moral dilemmas that I’ve been covering on The Voices of VR podcast since 2016. During Facebook Connect, Facebook released its responsible innovation principles and started talking to the media about them, and Facebook CEO Mark Zuckerberg told The Verge that “One of the things that I’ve learned over the last several years is that you don’t want to wait until you have issues to be discussing how you want to address [ethical issues]. And not just internally — having a social conversation publicly about how society thinks these things should be addressed.” However, the public record showed that hardly any of these ethical discussions about XR have been happening publicly.
White has an impressive history of advocating for human rights and technology policy, helping to reform US surveillance law while working with Dennis Kucinich, and he was motivated to bring about change from the inside by working at Facebook over the objections of some of his friends, who thought it would be a morally-compromising position. White calls himself a privacy advocate within Facebook, and his role is to synthesize outside perspectives on privacy implications from a wide range of privacy advocates, academics, non-profit organizations, and civil society, and generally the types of privacy & ethics discussions that are happening here on the Voices of VR podcast.
Part of White’s role is to collaborate with external organizations and experts on these issues, but most of these opportunities for outside counsel have been happening behind closed doors and under NDA. He hopes to be more engaged in these conversations in a public context, because it’s these organizations who are going to collaboratively help set some of the normative standards for what we do with data from XR well before the government enshrines any of these boundaries within some type of legal framework. The ethical boundaries and frameworks are more likely to come first from a collaboration within the XR community, from organizations like the XR Safety Initiative, the Electronic Frontier Foundation, and other tech policy non-profit organizations.
Is that enough for me to be assured that Facebook is doing everything they can to be on the right side of XR privacy? No, not yet. We still need more mechanisms for transparency and accountability that go beyond community collaborations and listening to what the culture is saying. Privacy advocate Joe Jerome told me that the trap is that the feedback provided to a company like Facebook can feel like just a “box-checking exercise” for them, so that they can say they talked to privacy advocates. An example is Facebook Reality Labs VP Andrew Bosworth saying, “Consulting with experts across privacy, safety, and AR/VR from the very start is crucial to our product development process to ensure that we have the right frameworks as the technologies we build continue to evolve.” It’s great that experts were consulted, but there’s no transparency as to what exactly any of these privacy experts told Facebook or the degree to which any of their advice was implemented.
This is part of the reason why Jerome advocates for strong enforcement mechanisms in order to have a satisfactory level of accountability when it comes to privacy issues. In the absence of a strong oversight mandate and the ability to impose consequences, it’s hard for consumers to be assured that companies like Facebook are doing everything they can to take consumer privacy seriously.
But a big takeaway that I get from my conversation with White is that he doesn’t want to be the lone voice and sole advocate for privacy from within Facebook, and that he’s interested in building more connections and relationships to other tech policy experts who are ramped up on the implications of the data from virtual and augmented reality. There’s a big role for things like XRSI’s Privacy Framework, my XR Ethics Manifesto, and XR ethics & privacy conferences and discussions. There are a lot more open questions than answers, and it’s reassuring to know that there are people within Facebook who are both listening to and participating in these discussions. Given this, it is now more important than ever to continue to work on a broad range of foundational ethics and privacy issues in the XR space.
My closing thought is that there’s still a lot more that I personally will need to see from Facebook when it comes to transparency and accountability showing that they’re moving beyond these discussions and actually putting this type of advice into action. There are also a lot more open questions that I have about the relationship between the public and companies like Facebook, which are becoming more and more like governments. But the type of government is more like a technological dictatorship than any sort of representative democracy with established protocols for how to interact with and respond to the will of the people. At the same time, I’m at least encouraged that these open dialogues are starting to happen, and I hope to continue the conversation with Facebook on many other fronts as well. Overall, it’s a move in the right direction, but I think we all need to see more evidence of how Facebook plans to take action on this front, and how exactly they plan on living up to their four new responsible innovation principles.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:02:59.962] Kent Bye: Great. So maybe you could give me a bit more context as to your background and your journey into VR.
[00:17:13.375] Kent Bye: Yeah, thanks for that. That's a great overview of both who you are and your journey into this space. And for me, I'm also just glad to have an opportunity to talk to you, because I've been sort of wanting to have these types of conversations for a long, long time. And so it's just nice to be able to start this conversation in a public sphere. First, I want to say there seem to be a number of different stakeholders here. And you mentioned there's the law and the process of figuring out what the law is. I talked to Joe Jerome, who's a privacy advocate. He's concerned about it and looking at other issues, like tracking the federal privacy law discussions that are happening in Washington, D.C. There are ongoing discussions about what the legal landscape is going to look like, in terms of whether there's going to be a federal privacy law. California has the CCPA. Washington is thinking about doing some sort of privacy law. Illinois has a specific biometric privacy law. So there's both state and federal law; internationally, the European Union has the GDPR, which set forth privacy as a human right, and there are all sorts of laws around the world. And then you have the stakeholders within Facebook, which is, you know, how do you negotiate the different trade-offs between the business interests and the privacy interests? And so let's first start with the law, because you say you want to live up to the law, but the law in this space is actually still being worked out. And so, in the capacity of your job, are you interested in having a strong opinion about what that federal privacy law might look like? Or is that something that you're outsourcing to someone like the XR Association, to have a consortium of different groups within the entire XR industry come to some consensus with, say, Microsoft and HTC and Sony about what that law should be?
So maybe you could just start with how you interface with that, because the law defines a lot of these boundaries. And whether there are other, separate people who are concerned about what those laws look like.
[00:19:07.760] Nathan White: Sure, so first I would zoom out just a little bit and say there are way more stakeholders than that. I sort of think simultaneously on two different time horizons: what the world and the community are today, and where the community is going to be 10, 20, 50 years from now. Oftentimes, they are similar. Sometimes, they are not that similar, because new things are coming on board, new technology is being invented. We can't really see the future. But if you zoom out on that time horizon of, let's say, 20 years or 30 years, if we are successful in this industry, the world is going to change, not just VR, not just tech. The world is going to change. The Facebook AR/VR org recently changed its name to Facebook Reality Labs. And I think of that as a throwback to Xerox PARC and Bell Labs, the teams that were building the computers that we understand today. Things like a mouse input, things like a graphical interface for how we interact with computers. We're thinking about changing that structure for how humans interact with computers. So rather than sitting at a desk and peering into a computer, the computers can live around the world and augment our existence. If that is successful, that will change the world. I went to Tunisia for RightsCon a couple of years ago, and I was in awe that everybody who was on the internet was using Facebook. If you had a company or a restaurant, you had a Facebook page. That was just how they used it. That was not how Facebook was designed. It originally was designed to bring college students together. But it changed the way that people around the world interact with technology and what they expect from it. So I think about that on the same scale: if we're successful, we're going to change the world, not just for the people who are currently using our products, but for everyone in the world.
And that means we need to be listening to everyone in the world and thinking, what about people who may not have even considered VR? How could it impact them, and what are their concerns? So mostly what I'm thinking about when I think about that scale is not the particular voices of, we've got to check this box, we've got to check this box. My first priority is, how do we get more voices? How do we hear more people? How do we have more people participating in this conversation? Because as I said, Nathan shouldn't be making some of these decisions. Nathan should be saying, this is what the community expects, and the community should be the ones to make those decisions. Over time, norms will emerge, and people will expect certain things. Think about your cell phone. It used to be that when you took a photo with a cell phone, it made a sound and there was a light on it. That was because people were concerned that everybody carrying around cameras would be privacy-invasive. So they wanted to be transparent: this is happening, a photo is being taken. We don't need that anymore, because now if I hold up my phone like this, you know, okay, that person's probably taking a photo right now. That norm and expectation has evolved and developed. Now we can standardize that, whether it's standardized in best practices, or in self-regulation set up by an industry association like XRI, or whether it comes from a legislative body. Eventually, those norms are going to emerge, and it's more important to me that those norms emerge from bringing sophisticated stakeholders together to decide them, rather than me saying, this is what I think those norms should be. I certainly have a perspective on it, but I only have one perspective, and the number of perspectives in the world and the number of people who are impacted by this are too numerous to count. Does that answer your question about how I think about this?
[00:22:32.525] Kent Bye: Yeah. Well, I guess part of the subtext of that is that, you know, you have laws that you're following, say, with the definition of biometric data as an example, where there's a very specific definition that says it's covered if it's identifiable. But at the same time, Stanford University just came out with a study saying that if you just have head pose data with hand tracking, and you record like 10 minutes total worth of data to train machine learning algorithms, then you only need like a 20-second sample to identify someone at 95% accuracy over a sample size of 500 people. That's statistically significant enough to say, well, maybe hand pose plus head pose data should be classified as personally identifiable, even if it isn't right now. And so there's a sense of what Joe Jerome said, that a lot of this data that's coming out of VR isn't even necessarily defined clearly by the law. There's going to be all sorts of stuff like eye tracking data, galvanic skin response, you know, all sorts of other ways that you're going to be able to extrapolate information, EEG and brain-control interfaces. I mean, we're entering into this new realm where the law is like five, 10, up to 20 years behind where the technology is. And so Facebook is going to have an opportunity to potentially help shape what those laws are. And so there seems to be a bit of a dialectic there, of like, there are discussions about these federal privacy laws that either you are going to be directly involved with, or Facebook as a company is going to be helping shape, but those laws are also going to be dictating how this all plays out.
[00:24:05.365] Nathan White: Yeah, I think you're absolutely right. And it's why I think I have the best job at Facebook: my job is not to look at it and say, does this follow the law? My job is to look at it and say, is this the right thing to do? So as you say, with biometric information, usually we look at Illinois, Washington State, Texas, GDPR for the definition of biometric information. And it is a very specific definition. Usually it is information about your body that can uniquely identify you. So you're thinking like your iris or your fingerprints. But that's not the only thing that I think users would think of as biometric data. I think that there are far more buckets of data. There's certainly information that could identify you, but then there's information about your body that doesn't identify you but could be used to learn about you over time. For example, let's say an emotive avatar that will smile when you smile. If a computer system logs every time you smile or every time you frown, over time, that would be incredibly sensitive, I think, to most people. There are also things that could identify you if you have something to compare them to, or over time, or with a sophisticated system. I think that's where you're talking about gait analysis of motion data, or things like where your eyes are looking, how your eyes might move around, something like that. Then there's probably other information out there that I think people would find less concerning: things about you that you can change. Maybe your hair color or your hair length. Some people can change that; for some people it might be medically sensitive. Or what you're wearing. You know, my shirt is white today. That's not particularly sensitive. I wore a white T-shirt today, and I can just change it tomorrow. So, if we think about all of these different types of data, we need ways of putting it into a framework so that we can say, ah, this is a class one type of data.
This is really sensitive. We should never, ever use that. Or if we have to use it, it should never come off the device. Or this is class two data, where there are ways that we might use it, we might need it to deliver the experience, like having your avatar smile, but we should have controls on what we do with that data, and we need clarity, and things like that. But the law is not going to give that. The law is not going to create that. The law is only ever going to enshrine what the community decides is appropriate. So rather than get to an end state where it's enshrined in law, I hope that we can get to a place where the community is saying, this is a sensitive type of data, and so it's our expectation that you would use it in certain ways or you wouldn't use it in certain ways. And that is really important to me. That kind of conversation among sophisticated experts with expertise in privacy and youth and victims of domestic violence and every other perspective that we can bring in, with experts from the VR community like you and Kavya Pearlman and XRSI, so that we can collectively determine what those frameworks are. Then we'd say, Facebook, follow those expectations. If there were laws in the world, we're going to follow the law. If there were best-case norms, like how we use cameras, we're going to follow that. But if they don't exist, what should Facebook do? Should they just make it up as they go along? I don't think that's a great idea. Should they hire people like me and have them tell them what to do? I think that's a better idea because it means I get a job, but I still don't think it's the right thing to do. I think the right thing for us to do is to focus on the community as a whole and build together. How we do that is certainly complicated, particularly in the time of COVID. We're continually trying to get better at it, and we want to be better at it. We want to have more conversations.
But I think that's really what motivates my perspective, is getting to norms set by the community of people who care about it, are passionate about it, and have expertise. Then the law will follow. I might be a little bit skeptical of the law, because I worked for Congress and I advocated before Congress and regulatory bodies for so long. But they're slow. They're slow. And if we wait for the FTC or FCC or one of the alphabet soups in Washington, D.C. to come out with the standards, it's going to be too late. I really do think that it's going to come from the community, that we're going to set norms in consultation and conversations with each other. And it is already happening. I mentioned XRSI a couple of times. They recently put out a privacy framework with their recommendations. It's similar... I wouldn't say it's similar. It's in the same vein as the NIST privacy frameworks for how large corporations should think about protecting privacy. And that is the kind of thing that is far more interesting to me than a regulatory body saying you must protect biometric information. OK, yeah, we'll do that. We've got the lawyers to make sure we follow the law. But is that really enough? Is that what the community expects from a company like Facebook? I don't think so.
[00:28:41.535] Kent Bye: Well, I wanted to give an example that I think is at the heart of some of my concerns around not where it's at now, but where it's going to go in the future. I mean, there's certainly things like being able to track head pose and hand pose to potentially have personally identifiable information, and what happens to that and the implications of that. But I think something like brain-control interfaces, where UCSF did a study putting ECoG nodes on the brain to be able to essentially do speech synthesis. So essentially translating your thoughts into text, being able to read your mind, essentially. And as I went to the Future of Neuroscience conference that was put together by the Canadian Institute for Advanced Research, I was able to talk to some neuroscientists working on this very issue, and there was information talked about saying that within the next five to 10 years, a lot of what these invasive nodes can do is going to be possible with non-invasive EEG and machine learning. So because they're able to put these nodes on the brain, they're able to train up these neural network systems so that you could go non-invasive: put a thing on your head, and it's able to read your mind. So within the next five or 10 years, we're looking at a future where you have brain-reading technology. And I guess there's this existential fear that I have: what's it mean to have not only Facebook be able to read my mind, and what happens to that data, but with the third-party doctrine, that means that if you're recording my thoughts, the government could come in and say, let's have an audit of Kent Bye's thoughts for the last year. There's that type of information that is coming from my body and synthesized by these machine learning algorithms. That to me seems to be like an extreme edge case of like, okay, there's going to be all this information that's coming out there.
Eventually, we're going to be able to read my mind and then have a log of what my thoughts are. And that's going to be stored on Facebook servers, and then the government could get that. So there's two things there. One is the sensitivity of some of this biometric data that may be coming off my body. And number two, the relationship between Facebook and the government, in terms of them coming to you with a warrant or without a warrant, saying we want this information, whether that's biometric data or social scores. So there's a certain amount of trust that you have, of how trustworthy we find each of these people. And that essentially is a kind of social score that Facebook is keeping within its own internal system to be able to reduce harm and abuse, and different ways of just having a way to know where people stand. But in China, they have social scores that are applied to people that dictate what kind of services they have. So you have these things that are maintained by a private company, but then what happens when that type of information ends up in the hands of the government? So let's just start with the brain-control interface as a use case: biometric data that's reading my thoughts.
[00:31:20.322] Nathan White: Yeah, absolutely.
[00:31:21.462] Kent Bye: What happens?
[00:31:22.765] Nathan White: So I love working in this space for a lot of reasons. Everything is new and interesting. I've worked in areas where there aren't novel questions, where people argue the same things over and over again. So one, I just love working in this space. I love that we can have these conversations. It's really fascinating to me and exciting to think, where can this technology go and what can happen in the future? And it's the cone of uncertainty: the further out you get, the more uncertainty there is. So I do think it's important to take a step back and think about today versus 50 years from now, and where we're at in the process. And computers being able to read your mind and your thoughts is certainly something that is important to think about. But I think the issues you're bringing up will come up much faster than that. In the future, we're going to have devices and services out there that are more realistic, because the beauty of VR is the feeling of presence. And if you are a blocky avatar, you don't have the same sense of presence as if I'm looking at you in the eye. There are already some of the more expensive devices out there, mostly used for consumer research, that do eye tracking, because they want to know where people are looking in a space. Eventually, I suspect that that technology will find its way into more commercially available uses, because people are going to want to make eye contact. You're going to want to see that I'm smiling a little bit or that I'm frowning a little bit. You're going to want to have those emotional experiences with people. And so if you built a system that can know where you're looking, whether it's for foveated rendering or for more personal communication, and then also know if I'm smiling or if I'm frowning, you can, in some ways, potentially read somebody's mind without knowing what's in their mind.
If you look at a red Corvette and you go, ooh, that's exciting, you smile a little bit, your eyes widen, you can tell that person's excited by that red Corvette. And if somebody were to observe that and then use that information, is that acceptable? Is that mind reading fair game? Or is that, no, that's something that somebody may not have even known about themselves? I don't remember it, it's probably on your Twitter feed, but I saw a couple years ago somebody did a study that said just by eye-tracking data, as somebody walks through a crowd, you can identify their sexual orientation. These are the kinds of things that we need to build frameworks for how we think about before we get into invasive brain-computer interfaces. We need to think about it before we have those kinds of things: all right, if we're going to have eye tracking for these purposes because they power these services, what are the restrictions that we should put on ourselves for the right way to use that data, in a way that is comfortable for the world around us? And that may sound too happy-go-lucky of, you know, we'll all figure it out, but I think it does have to happen, especially if we get out of VR and into the AR sense. For AR to be successful, people will have to be comfortable wearing sensors. We all have sensors in our pockets in our phones right now, but if we're going to put computers around us and augment our reality, then we're going to have to be comfortable that we're wearing them and other people are wearing them. And so for people to be comfortable with that, you need to have assurances. You need to know what they're doing, and you need to know what they're allowed to do and how they're processed. So these are frameworks that I think we are going to have to create collectively as a community. And then it'll be people like me who convince and encourage and require Facebook to build to those principles.
And then at the end of that, or probably at the same time, that's the conversation where we say, with all these experts, this is the framework that we think makes sense, this is what we're comfortable with. Then, if legislation comes along and says, these are the best practices and you must meet these minimum standards, that's great. That lifts everybody's boat, including smaller developers and smaller folks who may not be able to invest that much, or at that level, into the socialization and development. I think that comes not from, how do we stop computers from reading our brain? I think that comes from developing frameworks: new types of computers will observe new things and new experiences, and we need to decide what the appropriate restrictions on them are so that people feel comfortable, so that you're not having your innermost thoughts read without your permission. Maybe we get to some point in the future where you want to be able to tweet without talking and you want it to read your mind. I don't know. Maybe people will want it at some point. But we need frameworks for how to use that kind of sensitive data long before brain-computer interfaces. I think we're going to get there in probably the next few generations of, I would guess, VR headsets. Then you also brought up the issue of third-party doctrine and government accessing data. Yeah, that's important. That's something I think a lot about. And sometimes I wonder if my history in international surveillance law might have been relevant when I was getting hired. It wasn't something we talked about in interviews, but in the role, I think about it a lot, particularly with what we've seen from the Snowden leaks, and what I think are fairly aggressive data grabs from law enforcement. DOJ in particular is pretty aggressive about what they want to get their hands on.
There's an epic debate going on between privacy and law enforcement: are we going dark because encryption is making it harder to get data that a couple of years ago was easier to get? Or are we in the golden age of surveillance, where there is so much information about people available that if law enforcement knows to go ask for it, they can get access to it? Joe mentioned this on your podcast; there's an epic paper on this called Tiny Constables, which was written by Ashkan Soltani and Kevin Bankston, who works with me at Facebook now on our AI team in a similar role to mine, where they examined this. And the thesis of Tiny Constables is essentially that surveillance used to be really hard and really expensive. If you wanted to know where somebody was going all day, you had to have a team of people handing off to follow them all day. Now you can put a GPS tracker on somebody's car and follow where they're going for months at a time. It's suddenly really, really cheap. And so there's a tension in that: is it becoming too cheap? A big question that people bring up is, how much law enforcement can we afford, or how much law enforcement can we tolerate as a free society? How much of that is appropriate? What right does law enforcement have to data just because it exists somewhere? I don't have great answers for that. I don't have great answers for where we are going to be 10 years from now. But I think about it a lot. And it drives some of the way that I think about basic principles of privacy product development, like data minimization. If you don't need it for a specific purpose, if it's not useful for something, don't collect it. Because, yeah, sure, maybe someday somewhere it'll be interesting. But it also is just a big target, and law enforcement might say, ah, you've got a lot of data there. I would like to access this.
We see it, for those who are paying attention, with Apple: they're very secure about what's on your phone, but things in the cloud are not necessarily encrypted backups, and more access is available there. Photos stored in the cloud are huge targets for law enforcement: at Pike Place between these times, what photos are available? But I'm very aware of law enforcement's desire to do their job, stop crime, protect people. And I do think that there's a tension there: as we create more data as we go about our lives, not all of that data is even fair game for law enforcement, and I do think about how we need to address that. I don't have great answers for what we're going to do 10 years from now when these new kinds of data come around. But that's exactly why I'm so thrilled that EFF just a couple of weeks ago wrote a blog post on why a warrant should apply to the VR maps that you create of your space. We need people out there like EFF who are thoughtful in this space, who understand government surveillance, who understand privacy law, but also understand the VR technology, so that we can have that healthy debate about what is and isn't appropriate and what should be off limits. I also hope the courts will eventually provide a little more clarity, with things like Carpenter showing, even among older judges, a recognition that things collected at scale are much more invasive than an individual data point. I think I've heard you use the term mosaic theory: the idea that one phone call or one location data point may not be that revealing, but if you have my location data 30 times a day for 10 years, you know everything I've done. You know a lot about my life. So I think we need more clarity from the law, and we need to have that healthy debate with law enforcement. And I think it's happening mostly right now in the encryption conversation, about going dark or the golden age of surveillance.
And then just quickly, the last thing you mentioned was the situation in China with the credit scores, or the social credit scores, the social scores, I forget what they call them. I always get it mixed up with Black Mirror episodes. Yeah, frankly, I'm concerned about that. I don't have any control over what they do in China, but I think about some of these conversations, and I'm passionate about them, and I'm passionate about making sure that we drive this forward. Because if folks like you, and Kavya, and EFF, and Joe, and everybody else we've been talking about, if you're not the ones who are driving this conversation and creating what these norms are, the norms are going to develop in places like China, where they're going to throw a lot of investment at it, and they're not going to have that same ... They might have thoughtful conversations, but I'm not a part of them. I know if we do it here, we're going to have thoughtful conversations, and I'd much rather us be the ones creating those norms and expectations for a global market than leave that territory to somebody else. One of the reasons why I was convinced to come over here is that if these conversations don't happen now, we're going to regret it 10 years from now. We're going to think, how did we miss this? How did we not create these frameworks and these standards? How did we not have these discussions? And then we're going to be in a place where advocates are in a bunch of different places trying to change an entrenched system. I want to make sure that we can build that system in a way that is inclusive and diverse and considerate of all those perspectives and all those opinions, so that we don't have to rebuild it later. We can build it right the first time. Of course, there's some hubris in saying that; even if we do our best to get as many voices in, we're not going to get it right the first time.
We're going to have to iterate. We're going to need to bring in new voices as VR expands, as there's more audience, as there are more use cases, as new technology brings new sensors and new types of data into it. We're going to continually have these conversations. It's not an end point, but it's urgent to me that we start having these conversations, publicly, as soon as possible.
[00:44:21.745] Nathan White: Yeah, glad you brought it up. Really, really important. The first thing, about the check-the-box consultations, let me address that one first. In this conversation, I've been talking at a pretty high level about how we as a community need to develop these frameworks. We as a community, I keep saying that, and that community needs to continually expand. And I really believe that. I think that that is important. It's why I think your podcast is so important, why I think conferences are so important, why I think places like RightsCon are so important. But that is hard to do in a narrow way when you have a product or a feature. Take, for example, Project ARIA, which I worked on for over a year, building an end-to-end way of securing data and being transparent about what we were doing. But ultimately, that is me working with a product team, and that is me giving them advice based on my experience with the privacy community and these higher-level conversations. At the end of the day, it should not be, "Well, it's okay because Nathan told us we can build it this way. He said that this was fine." We want to actually talk to people outside of the company who have unique perspectives and say: hey, this is what our idea was. This is what we thought. We thought this was the right way to do it. What are your thoughts? And so sometimes we do do those black-box consultations, where we go to people and say: here's what we did, here's what we're thinking. Can you give us your expertise? Do you think that we did this right? Are there things that we should have done? Are there things that we should change? What are your thoughts? And then we go back to product teams and say: okay, we learned from experts in the field that we're really going to have to make sure that we're transparent about what is going on.
So we need to do an extra piece of transparency in this particular way, or we need to make sure that this type of data is really secure. Sometimes we do get feedback that people might not want to share publicly, and so we do have private conversations. Also, when we're having conversations about things like Project ARIA, I did ask people to sign NDAs so I could tell them all the details of what I was building and what was in it, so that we could share it. And yeah, I get that that's a black box. I get that most of the world doesn't see it, and we're kind of asking, "Oh, trust us, we do these things in the background, but we don't tell you about it." I totally hear that that is not sufficient, and it's not enough. But I do want to assure you, to the extent my assurances mean anything, that it's not a check-the-box exercise. It's not a comms-driven process where we say, "We want to tell people that we consulted privacy experts, so let's go consult some privacy experts." I'm sure there is some comms value to doing that, and I'm not going to pretend there isn't, but that's not why I do it. That's not what's interesting to me. What's interesting to me is: did we get it right? Does this make sense from your unique perspective? This is a podcast, so you can't see me, but I'm a relatively privileged white man who flies back and forth between DC and San Francisco. I have a pretty privileged way of viewing the world. There are a lot of perspectives that I don't naturally see. I try to understand them, but I don't naturally see them. And folks like Joe (we keep bringing up Joe; his ears must be burning) work on this: his day job is youth protection. So we can go to him and say: from your day-job perspective, what did we miss about this? Is this right? And we are so grateful to those folks who give us their time, because they are giving us their time. They're not asking for money.
They're giving us their time because, I hope, they believe that it's influential and that it results in better practice. If it were just checking the box, I don't think people would talk to us. It wouldn't be worth their time if it was just, "Oh yeah, you talked to me so you can say that you talked to people." Why would anybody want to do that? We're all on Zoom all day. We don't need another 30 minutes on Zoom for that. We're having those conversations because we really do want to hear from people. But those aren't the only conversations that we want to have. Those are the before-we-launch-something conversations, once we've got an idea or it's getting kind of baked. But that isn't the only thing that we need from the community. We want to be driven by the community: what is socially acceptable, what does the community want, what features do users want, and what does the community expect from us? We do need to have more of those conversations, and they do need to be less opaque. I hear you that it does seem a little bit of a black box that I'm telling you I had conversations with people and I'm not telling you who with. That's frustrating to me as well. But I think of that as just being one piece of the recipe. That's just one thing that we are doing. In my mind, I don't want to stack-rank levels of importance. It's one of the things that we're doing. It's in the batter, it's stirred in, but it's not the only thing that we want to rely on. We also want to have conversations in public.
And the way I think about that is: 10 or 20 years from now, there's going to be an Electronic Frontier Foundation, a Center for Democracy and Technology, a New America Open Technology Institute with divisions that focus on VR. There are going to be people who look at this the way that we look at bias in news or the way that we look at fairness in algorithms, people whose day jobs are to pay attention to this stuff and who have expertise. But for those people, for Joe to turn his passion into a day job, there needs to be enough of a community that there is reason to do that work. If you write a research paper, there needs to be a conference that you can go present it at. You need to know that you didn't just waste your time presenting it, that because you were there, it's going to raise your visibility in the field. There might be future job opportunities. There might be media opportunities. There needs to be an incentive for people to develop this into their day job. I think that that naturally is going to come, but my fear is that it will come in reaction to industry rather than develop in collaboration with industry. So I am constantly looking for new ways to have those conversations and to bring more people in.
Some of the things we've done, also a little bit behind the scenes: we've been holding what I've been calling policy forums, where we bring in experts from other areas of policy and just talk to them about AR/VR, what the roadmap is, what the technology looks like, and who the people in the field are, so that they know who to go talk to and where to take their questions. The goal is to build up that level of sophistication so that they can communicate with VR experts like you and others, so that we can really kickstart more of those public conversations, more of those roundtable discussions, and more conversations that Facebook doesn't necessarily need to be part of for us to still learn from, for us to still watch. I don't need to be on the stage; I can be in the audience and still hear what people are saying. And then I think there was another point that you had brought up that I wanted to mention.
[00:50:39.130] Kent Bye: The other aspect of that is that, at the end of the day, once this is implemented... I've had this conversation with you, I've had previous conversations, but moving forward, I have no idea, no transparency, as to whether the types of data that you're collecting from VR are going to have secondary uses that I may not even be aware of, and that, if I were aware of them, I might object to. So in terms of building trust with the community, there's a certain amount, again, not only of the history of the discussions that you've had, but also just the implementation: do I trust what data is going into this big sucking-in of a whole lot of data into Facebook, potentially directly from VR and these immersive technologies? I don't have any idea as to what actually is happening to that data, what kind of machine-learning-driven inferences are made about me from these online behaviors and actions. And so I guess that's this underlying distrust, or skepticism, where I have to kind of assume the worst-case scenario, because I don't have any other accountability or transparency in terms of what actually happens to any of this data.
[00:51:40.943] Nathan White: Yeah, I think that's fair. I think that Facebook understands that, and that we collectively, the management, but all of us collectively, want to do right. We want to get this right. As I said earlier, AR is not going to be successful if people don't feel comfortable wearing sensors and being around sensors. That requires comfort. If people aren't comfortable, they will reject them. They will put up signs and say you can't wear them here, the same as happened with Google Glass 10 years ago. I think that there is a deep and widespread understanding that we have to get this right. We have to be truthful. We have to have people trust that we are doing the right thing, or we're not going to be successful. I think the company really does understand that. How we demonstrate that trust is also important. How we show up every day is also really important. And so I hear what you're saying: why should we trust you, and why should we not assume the worst? I hear you. And I think we do need to show up every day and demonstrate that we're not doing the worst, that we are committed to doing the right thing. That is challenging, and let me give you some examples of why. The first is, you said earlier, we sometimes say, "Well, we're not doing that today." That might sound like a shaky answer, but to me, that's the only answer we really could give, in that Facebook and other companies are inventing technologies. We have an idea of where we think we're going to be in three years, where we think we're going to be in five years. We think that this is going to come together, but we're still inventing things. One, we don't know what the technology is going to do. And two, we don't know what features and services people are going to demand of that technology. I did not predict Twitter 20 years ago, but it turns out people want to be tweeting while they watch the Oscars. People might want to do things in the future.
There might be reasons to do things. We don't know what the technology is. And so when we say, "Ah, we would never do X," that's tempting fate that something will come along: "Ah, well, everybody wants X. How could you not do X?" There's also an issue for a company like Facebook: if we say something, if we put something in our terms of service, we are bound to it by law. Joe explained this on your great podcast, which, by the way, is must-listening for anybody interested in privacy in this space. Go listen to Kent's podcast with Joe Jerome, though I guess if you're this deep into this podcast, you probably already did. What he was talking about is that privacy law in the United States is enforced by the FTC, and the way they generally do that, I forget the exact term, is: are you misleading consumers? Basically, were you transparent about it? Did you do something that you weren't transparent about? So if we say we do X, or we don't do X, then to change that we have a duty to get people to recommit. We call it a re-ToS: we have to change the terms of service to cover these new use cases, which is very difficult. It's very expensive. It's very complicated. It's very long-term. So generally, you want to build it up; you want to do it as little as possible. So there's, in my mind, an understandable reason why you wouldn't want to make commitments about things that haven't been invented yet and that you don't know what you're going to want to do with. At the same time, yeah, the community also wants commitments, because if you think of that cone of uncertainty into the future, there are a lot of really, really terrible things that you can imagine using this technology for.
I mean, the same things that are so amazing about VR... the fact that you can do the, was it "Clouds Over..."? I can't remember the name of it, but the empathy that comes through, that you can put yourself in somebody else's shoes and experience somebody else's being. That is so profoundly valuable, that you can put yourself in somebody else's shoes and understand their perspective. But imagine, you know, let's say a repressive government somewhere in the world starts using VR to educate their children with state propaganda about how great the fearless leader is. That same quality of experience that makes it feel real could also be used to horribly abuse a society. And so I totally hear that we need to be more certain, we need harder rules, we need more guarantees, we need more clarity. But I don't think the solution is, "Facebook, tell us your grand master plan," because that plan doesn't exist yet. I think the solution is that we build this together, so that when we come to those decisions about the right way to do this, or whether this data, as you said, should be available for ads or not, it's not just Facebook making that decision by itself; it's the community making that decision in collaboration with Facebook. And do you believe that's happening? Do you trust that's happening? I totally hear that some people don't. They're not going to believe it. Some people do not trust Facebook as far as they can throw them. I have many, many friends who feel that way. I have many friends who question whether or not it was even ethical or moral for me to come work for Facebook. Ultimately, I think it's too important to get this right. We have to have those conversations, and I'd rather give the benefit of the doubt and get burnt than not try. Because if we don't try, that's effectively the same thing as getting burnt, only worse.
[00:56:33.229] Kent Bye: Yeah. We're at the top of the hour, and I just want to ask one final question, or I could go on and on and on.
[00:56:42.783] Nathan White: I would love to go on and on and on. I actually do need to drop off right on time, but we can do one more question.
[00:56:47.805] Kent Bye: Okay, sure. Well, uh, just to kind of wrap things up here, what do you think the ultimate potential of virtual reality might be and what it might be able to enable?
[00:56:58.727] Nathan White: I love that you asked that question, because the answer is universal. It's: what are you passionate about? What do you care about? What's exciting to you? The ultimate potential of VR is versatility. Sure, we can have the greatest games. We can have the greatest travel. We can have the greatest work experience. But the ultimate promise of VR is that it can do all of those things. It can be something for everybody: an art platform, a music platform, a sports platform, a work platform. It could also be really useful for DoD and military applications. But the beauty, and what is exciting about it, is that the end goal is what we make it. And that's pretty dang cool.
[00:57:39.480] Kent Bye: Nice. Well, Nathan, I just want to first thank you for joining me here on the podcast. My closing thought before we drop off is that Facebook as an entity is becoming more and more like a government in the scale that it's operating at. You're talking about different aspects of a deliberative process for gathering input, but I guess I would like to see things like the Freedom of Information Act: more transparency, more accountability. And if you really want to make this as diverse and open as you can, then think about not just having a lot of these behind-closed-doors meetings with certain people, but having a bit of a paper trail for the types of discussions that are happening there. That's what I would put forth as a challenge, in order to really think about how to start to cultivate a deeper trust around these issues. Because while I'm very grateful to be able to have this conversation, still, at the end of the day, when it comes down to how this actually gets implemented, there's a certain amount of opaqueness, such that I can't say I completely trust that Facebook is doing the right thing. Even though I know that you're there fighting and advocating for this, I'm still, from the outside as a journalist, not able to independently verify it, if you know what I mean. But thank you for providing the opportunity to chat about some of these different issues and to navigate them together.
[00:58:53.998] Nathan White: Thank you for letting me come on, and I really thank you for everything that you're doing for the community. I don't want to flatter you too much on your own show, but your influence in this community, your way of thinking, your podcast, the voices you've elevated, the platform you've given people to bring them together, I think is one of the biggest pieces of glue in this community. And so I'm just grateful to you for all that you do and are continuing to do for this community, and for holding Facebook's feet to the fire. We need people to do that, too. For norms to emerge, it can't be a bunch of people who love Facebook and love everything we do saying how great we are. We need a chorus of voices, and that includes people who are honest and tell us when we're making mistakes. And so I'm just very grateful for you, and for everything you do for the community, and for allowing me to come on your show and chat with you and your audience.