#516: Privacy in VR is Complicated & It’ll Take the Entire VR Community to Figure it Out

When I was at the GDC VR Mixer, Jim Preston struck up a conversation with me about his concerns about privacy in VR. He works at the VR eye-tracking company FOVE, but wanted to speak on his own behalf about some of the deeper philosophical questions and conceptual frameworks around the types of intimate data that will become available to VR headsets. As more and more biometric data streams are integrated into VR, there are a lot of complicated and complex ethical questions that he thinks will take the entire VR community to figure out.


Preston says that VR is challenging the long-standing Enlightenment model of mind-body dualism, and that VR can perform a sort of “redirected thinking” by completely controlling all aspects of someone else’s reality. This is a lot of power to put into the hands of performance-based marketing companies that already have an extraordinary amount of data about our private lives, and he worries that this data could start to be used to drive consumer behavior in unconscious ways.

The technological roadmap for VR includes integrations with new biometric data streams including eye tracking, facial tracking, galvanic skin response, emotional states, our voice interactions, and eventually EEG brainwave data. This kind of data has typically had tight privacy controls, either within the context of medical applications or in market research that requires explicit consent, but it’s now being captured within an attention-driven consumer market where many other vectors of private data have already been collected and connected to your personal identity.
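
To make that roadmap concrete, here’s a minimal sketch — all names hypothetical; no headset vendor exposes exactly this API — of what these streams look like as data, along with the consent metadata that a privacy-respecting capture layer might attach to each sample:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Set
import time

class Stream(Enum):
    EYE_GAZE = "eye_gaze"            # where the pupils are pointed
    FACE = "face"                    # expression landmarks: brows, mouth, crow's feet
    GSR = "galvanic_skin_response"   # arousal/stress proxy
    VOICE = "voice"
    EEG = "eeg"                      # brainwave data, further out on the roadmap

@dataclass
class BiometricSample:
    stream: Stream
    timestamp: float
    values: List[float]             # raw sensor readings
    user_id: Optional[str] = None   # None = not tied back to a personal identity
    explicit_consent: bool = False  # did the user opt in to capture of this stream?

def capture(stream: Stream, values: List[float],
            consented: Set[Stream]) -> Optional[BiometricSample]:
    """Drop the sample entirely unless the user opted in to this stream."""
    if stream not in consented:
        return None
    return BiometricSample(stream, time.time(), list(values), explicit_consent=True)

# Usage: a session where the user consented only to eye tracking.
sample = capture(Stream.EYE_GAZE, [0.12, -0.40], {Stream.EYE_GAZE})
dropped = capture(Stream.EEG, [4.2, 9.1], {Stream.EYE_GAZE})  # -> None
```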

Here are some of the open questions around the future of privacy in VR:

  • Do we need to evolve the business models in order to sustain VR content creation over the long term?
  • If not, then what are the privacy tradeoffs of using the existing ad-based revenue streams, which are built upon a system of privatized surveillance that we’ve consented to over time?
  • Should biometric data be classified as medical information and protected under HIPAA?
  • What is a conceptual framework for what data should be private and what should be public?
  • What type of transparency and controls should users expect from companies? (See the sketch following this list.)
  • Should companies be getting explicit consent for the types of biometric data that they capture, store, and tie back to our personal identities?
  • If companies are able to diagnose medical conditions from these new biometric indicators, then what is their ethical responsibility to report this to users?
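
On the transparency-and-controls question above, here’s a minimal sketch of what real user-facing controls could look like at the storage layer — a hypothetical API, not any vendor’s actual mechanism. The key design choice is that deletion is a hard purge rather than a soft-delete flag:

```python
from typing import List, Tuple

class BiometricVault:
    """Toy storage layer with the two controls users keep asking for:
    see everything held about you, and delete it for real."""

    def __init__(self) -> None:
        self._rows: List[Tuple[str, str, bytes]] = []  # (user_id, stream, payload)

    def store(self, user_id: str, stream: str, payload: bytes) -> None:
        self._rows.append((user_id, stream, payload))

    def export(self, user_id: str) -> List[Tuple[str, bytes]]:
        """Transparency: a complete dump of one user's data, on demand."""
        return [(s, p) for (u, s, p) in self._rows if u == user_id]

    def delete(self, user_id: str) -> int:
        """Control: actually purge the rows, not just hide them."""
        before = len(self._rows)
        self._rows = [(u, s, p) for (u, s, p) in self._rows if u != user_id]
        return before - len(self._rows)
```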

Preston has a nuanced view of what VR is going to enable: he thinks it’s not going to be either a totally dystopian or utopian future, but that our future is going to be complicated and complex. Much like centaur chess teams of humans and AI are able to beat any standalone AI program, this type of cooperation between humans and machines is going to enable all sorts of amazing new capabilities while also introducing challenging new problems.

The future integration of biometric data into immersive technologies raises an array of complicated and complex questions that go beyond what any single company or individual can figure out, but Preston says that this is something that the VR community as a collective should talk about in an attempt to answer some of these open questions.

I’ll be featuring more information from biometric experts from the Experiential Technology Conference on the Voices of VR podcast, as well as an interview with Oculus’ Nate Mitchell. For my previous coverage on privacy in VR, be sure not to miss Sarah Downey’s take on privacy in VR and the relationship between the 1st and 4th Amendments, Tobii’s recommendation for explicit consent before recording eye tracking data, HTC’s Dan O’Brien, these two interviews with Google with some open questions about Google Earth VR & Tilt Brush, as well as my interview with Linden Lab’s Ebbe Altberg.


Support Voices of VR

Music: Fatality & Summer Trip



Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. So when I was at the GDC VR Mixer this year, I had Jim Preston from an eye-tracking company come up to me and strike up a conversation about privacy in VR. Now, because Jim is working at an eye-tracking company in VR, he's intimately familiar with the types of information that eye tracking is going to be able to capture and record. So in today's episode, we're going to discuss some of the ethical implications of this new biometric data that's going to be captured with virtual reality, and some of the needs and open questions that we have around that, including: what are the conceptual frameworks around how we understand this? What should be public and what should be private? And I really appreciate Jim's nuanced perspective of not going towards the utopian or dystopian futures, but to have some sort of complicated and complex combination that is going to really take a community-wide effort in terms of really discussing these wider issues. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by VRLA. VRLA is the world's largest immersive technology expo with over 100 VR and AR experiences. They'll have tons of panels and workshops where you can learn from industry leaders about the future of entertainment and storytelling. I personally love seeing the latest motion platforms and experiences that I can't see anywhere else. Passes start at $30, but I actually recommend getting the Pro Pass so you can see a lot more demos. VRLA is taking place on April 14th to 15th, so go to VirtualRealityLA.com and get 15% off by using the promo code VRLA underscore VoicesOfVR. So this interview with Jim happened at the VR Mixer at GDC that was happening on Wednesday, March 1st, 2017 in San Francisco. So with that, let's go ahead and dive right in.

[00:02:11.382] Jim Preston: My name is Jim Preston. I am a director of strategy and business development at a company called Fove. We are the makers of the world's first eye-tracking VR headset. We just shipped in January and we're excited about 2017. For me, what was so fascinating about Fove, and what's so fascinating about VR, is it's a form of human-computer interaction we really just have never seen before. And in fact, I don't think we've really truly contemplated deeply just how it's going to affect our lives really quite radically, really quite quickly. So that's what's sort of going through my head right now.

[00:02:47.593] Kent Bye: Right, so we're here at GDC, we're at the VR Mixer, and we were just having this really deep conversation about privacy and the future of privacy in virtual reality, and I just wanted to allow you to step out of your official role at Fove and just kind of speak on your own behalf about some of these deeper issues and concerns that you've been having about privacy, because when we think about eye tracking, that's something that's very intimate, and I imagine that you can get all sorts of different information. So you're really on the cutting edge of thinking deeply about the future of where the technology trajectory is going and the implications on privacy. So just curious to hear some of your thoughts on that.

[00:03:24.852] Jim Preston: Sure. Thanks for that. And to be clear, I'm representing only myself here. I'm not representing Fove in any of my thoughts or comments. And part of this is prompted by the fact that recently in philosophical circles — schools of thought, epistemology — there's been a sort of rethinking of what the model of the mind is. And so, for example, you and I probably grew up with what's typically called sort of an Enlightenment conception of the mind, which is: you have a mind, there's a little guy inside, right? And he kind of looks out through his eyes and he sees the world the way it really is. And he thinks about things and he says it out through your mouth, and you have a single identity inside your head, and that's just how it works. But actually VR kind of reveals very clearly that that conceptual model, that mental model of this little homunculus inside your head, is actually not really that accurate. In the same way that, you know, we do redirected walking in VR — the ability to sort of change how people walk in the real world based upon changing their orientation in the virtual world — that same thing is true of thinking. Just in the same way you cannot walk in a straight line blindfolded, you can't think in a straight line in a vacuum. You think in an extended world around you. You think in a language. You think when you talk to people, when you look around; you are always interacting mentally with the world around you. And so for VR, there's a chance to do what is almost like a kind of redirected thinking, in the same way that we do redirected walking. You truly are controlling someone's reality and the sort of context in which they're thinking. So that in itself makes it very powerful, makes it very exciting, but also I think it's a much more accurate representation of the way the mental world actually works for us. But for me, for eye tracking specifically, to sort of drill down into that, eye tracking is so fascinating because we know exactly where your pupils are looking, and the eyes are the representation of what you're actually valuing right now, right? Truly what you're looking at and what you're focusing on. And the question is: is that private or public information, what you're looking at? If it's an expression of your thought, if what you're looking at truly is an expression of your values and your thought, then it's a kind of mind reading, right? It really is a kind of knowing exactly what you're looking at. But at the same time, it's obviously very public. We're making eye contact right now. In fact, if I were to look away from you, you'd be kind of offended, right? I mean, eye contact is extremely important. It's a kind of handshake that's always going on, right? So eye tracking has this incredible, potentially difficult question of, like, how deep does this go, right? How much into the mind? So now you extend it out to other things. So pretty soon we're going to have, in the industry, face tracking. We'll have mouth-facing cameras, we'll have edge detection of the face, we'll be able to see the crow's feet, the furrow of the brows, the shape of the eyes, the shape of the mouth, those sorts of things. And from that we can already extrapolate, with a high degree of confidence, a lot of emotions — you know, four to six. So things like surprise, delight, disgust, that sort of thing written on your face, right?
It's like you've got your heart on your sleeve, and it's very easy for the machines to sort of look at that, do some machine learning, and go: you look confused, or you look lost, or you really are disgusted or alarmed right now. These are very interior emotional states, and in the future the machines are going to have a pretty good guess. They're not always going to be accurate, and you'll still have people with poker faces. They'll be able to beat the machines. But for most of us, the machines that we'll have in our homes — all the things that will be looking at us, all the things that will be listening to us — will have a pretty high degree of confidence of what we're feeling. And so just in the same way that we've sort of taken our higher cognitive functions and extended them out into the world around us — through Slack channels and through Google Maps and the fact that my phone knows all my phone numbers. I don't know my phone numbers. My phone knows them. I have just sort of offloaded parts of my brain function to my apps. There are going to be parts of my emotional interior state that I'm offloading, or that I'm going to be sharing with the machines around us. In some sense, that's going to be great. Right now, for example, Alexa doesn't really know what I'm feeling, doesn't anticipate anything. She just sort of reacts to whatever I ask her. And she'll get better and better over time. And in some sense, that's very good. But in a lot of other ways, there are going to be very troubling questions around: do we really have truly private interior states? So, for example, you and I were talking earlier that when we grew up, you know, we could have a crush on a girl or we could have a crush on Wonder Woman, right? And it would still be a very private interior state. I think for a lot of young people that are growing up in this world of eye tracking, face tracking, AR, MR sorts of machines, those interior states are never going to be fully interior anymore. So for those of us that are part of this business, those of us that are helping create these awesome new technologies, we're kind of mortgaging the next generation's concept of privacy in a way that they won't ever really know. I don't want to get too sinister here. I don't want to make a Big Brother kind of argument. In the same way that we've all voluntarily given up some amount of our privacy in exchange for the sort of convenience of, like, Google Maps — which I love, which I use every day; Google gets to know exactly where I go every day — I think the same thing is going to be true of our emotional states. We'll be willing to export them because there will be so many conveniences that arise from them. But just in the same way that the younger generation today has grown up with just about every mistake they've ever made being Instagrammed or recorded or tweeted in some way — they didn't have a chance to do stupid things at parties — in the same way, the next generation is really not going to know that same sense of interior life, fully interior private life, that you and I grew up with. And I think that's something all of us need to grapple with. And that's just one of the questions that I think these technologies are going to force upon us in the next three to five years.
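
To sketch what Preston describes here — extrapolating a handful of emotions from face-tracking features — a deliberately crude, rule-based stand-in (hypothetical thresholds; production systems would use classifiers trained on facial action units, not hand-written rules):

```python
def guess_emotion(brow_furrow: float, mouth_open: float, smile: float) -> str:
    """Map normalized (0..1) face-tracking features to a coarse emotional label."""
    if smile > 0.6:
        return "delight"
    if mouth_open > 0.7 and brow_furrow < 0.3:
        return "surprise"
    if brow_furrow > 0.6:
        return "confusion" if mouth_open < 0.2 else "alarm"
    return "neutral"

print(guess_emotion(brow_furrow=0.8, mouth_open=0.1, smile=0.0))  # -> "confusion"
```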

[00:08:48.060] Kent Bye: Yeah, and I've sort of seen this writing on the wall as well, because I see the technological roadmap. I see that we're starting with hand tracking and body movements. Once we open that up into eye tracking and emotional states, then it feels like that's a clear shift: now we're moving into our internal subjective realities, these really private, intimate details about ourselves. As Sarah Downey says, the more that we surrender this information and data over to companies, the more that it weakens the Fourth Amendment protection rights of what is reasonably expected to be private. There's the big companies that are out there — you refer to them as performance-based marketing companies — that are actually gathering all this data on us, and they're saying that they're going to be personalizing these experiences for us, so that they can put us into these different categories to know what kind of ads they can serve to us, so that the more they can learn about what our interests and values are, the more they can connect that data to other people. So maybe you could expand on that idea of performance-based marketing, and this kind of — what I see as this privatized surveillance that we've surrendered our sovereignty to, because it's giving us these clear benefits.

[00:10:00.778] Jim Preston: Sure, yeah, there's a lot to unpack there. One thing at a very high level I want to say is I want to avoid the temptation to talk in apocalyptic terms, to suggest a Big Brother sort of analogy here. It's very easy to fall into this trap of crying that the sky is falling, to make sort of a Luddite argument that we should be very careful, that we're heading down a dark path. What I think is we're heading down an extraordinarily complicated path, and we currently lack the conceptual tools to really understand what we're doing. Let me give you a very crude example. We lack the legal tools to understand what we're doing. So right now we have EULAs, End User License Agreements, that no one reads, right? They're like 64 pages long. It's kind of a joke. We all know that none of us reads them. They're there for legal reasons, right? But the EULAs that we have now are comically inadequate for the privacy issues that we're talking about that are coming in three, four, five, seven, eight years, right? So just having a 100-page document doesn't cover things like, for example — some of these performance-based marketing companies will know that you have glaucoma before you do. They will know your sexual orientation with a high degree of confidence. In fact, they could know it before the actual user knows it, right? They could know certain things about certain people's sexual predilections just based upon what they're looking at and what they're feeling when they're looking at it. By the way, I include more than just eye tracking. There's going to be a host of biometric data in the future, around galvanic skin response and so on, and those are being worked into headsets right now. So I don't want to suggest it's eye tracking alone. Anyway, to return to your original point about performance-based marketing companies: these PBM companies, the most successful ones, these wildly successful companies that we're seeing today — Facebook, Google, Amazon, Apple. Obviously, Microsoft is working very hard to move from a traditional sort of software company into more of a data-centric company. You know, these are the companies that understand that we live in an attention economy. And they understand the gold of the modern economy is the data, the personalized user data. And I personally feel that Facebook didn't spend $2 billion, or close to $3 billion, to acquire Oculus because they love video games. I think their attitude towards video games is pretty clear. I think they spent that money because they love data. And VR devices are vacuum cleaners for personal data. And they will just get more and more in the future. And I include AR in there as well. And that's not necessarily a bad thing, because I think Facebook delivers a value to me. I use Facebook every day. I use Google Maps every day. I use a lot of their products. I'm on Amazon every day. They deliver a value to me such that I'm willing to make that exchange. But there's a lot of people that haven't gone through that tradeoff, that are going to be making an exchange around their privacy that they haven't really thought deeply about, right? They're going to be giving away a lot of this information. And so I think it's really up to us as a community, as the VR community in these early days, to make sure we lay out these conditions, these questions, as clearly as possible. And I consider what you do here on the Voices of VR podcast to be a part of this larger effort.
And we have to sort of solve these as a community, because if we don't come together and sort of agree on certain level practices, then they're going to get solved for us: either the marketplace is going to vote — and certain vulnerable constituents, young women, young children, will not be allowed to use it because of a negative experience — or it's going to be solved through the courts and that sort of thing. So I would like to see these sorts of questions honestly addressed. But look, there is a very, very complicated future around a world in which internal emotional states are going to be shared with our machines. And what does that mean, to your point about reasonable presumptions of privacy? Those are just starting to fade. What counts as a reasonable idea that you should not be listened in on? And I think what makes it so complicated is that we're all sort of party to giving that away. We all take pictures of our food. We all check in at the airports and tell anyone who cares, you know, where we're actually flying to this weekend. We post pictures of our children. We're giving away, freely, of our own free will, all sorts of data because we're getting something back in return. So that's why I want to stay away from really negative tones, like Google or Facebook or Sony being in some sort of sinister cabal to grab data. I think they do want to make it a highly personalized experience, and I think for somewhat innocuous reasons. But also, let's be honest: we live in an advertising- and attention-driven economy. And advertisers are keenly interested to know what you're looking at and what you're feeling when you're looking at it. So you take the mobile economy that we see in video games now, a $50 billion business — it's a fairly crude sort of advertising experience, where you're looking at sort of a 2D ad or something like that, and they're willing to pay extraordinary sums of money to purchase, for example, middle-aged women in North America who play puzzle games, that sort of thing. And they pay enormous amounts against the LTV of those users. Imagine a future in which advertising is so much more compelling. It's so much more involved. It's not just watching a 2D image. You're interacting with it — opening up a 12-pack of Coca-Cola, looking for that legendary Hearthstone card. Meanwhile, Coca-Cola is looking to see what your eyes are doing, to see if your cheeks are flushed, to see if your mouth opens up, because they want you to have this sort of positive experience. And Google wants you to have that positive experience, and you want to have that positive experience, right? So they want to know what you're feeling so that in real time they can deliver the most optimal experience, and they will pay enormous sums of money to make sure that that particular customer has that great experience, right? And is that a terrible thing? Well, personally, I like ads that are tailored to things that I'm interested in, and I know that Google is tracking me all the time, and that's why I use Google when I could be using DuckDuckGo instead, right? And I think there are other issues in this future where the face tracking and the eye tracking and the whole emotional experience is going to get even more complicated. So you consider the concept of avatars — the ability to have some sort of separation between yourself and your presence online, with yourself and other people — but also chatbots. Brands are going to have avatars.
They're going to be represented as anthropomorphic things in the world, right? Right now, Alexa is basically a black can of tennis balls that sits on my desk, right? But very soon, she will be a character model. And it's very likely that she will look different for me than she will for you, because through machine learning, they will notice that I've got a thing for redheads. And that, you know, Alexa will very likely be a redhead of a certain height and a certain body build. But for you, Kent, I don't know — maybe you're a brunette kind of guy, and she will look different for you. Right, because literally they will change — palettize the hair, the eyes, right, the character model. They can change the shape within the bounding box. You'll see a different model, right? Sort of like the Asari in Mass Effect, in the same way that they kind of appeal to different races across the galaxy, who each kind of see the ideal form of them. That's the kind of world we're looking at when they have that emotional data. You could literally see sort of a different character, even in the same virtual space, than what I'm seeing, right? And is that necessarily sinister? I don't know, but there are things here that we really haven't thought of that are going to be on top of us before we've really had a chance to think them through.

[00:16:47.825] Kent Bye: Yeah, and I think that my impression is that there's a certain element of the engineers and people that are creating the future with these virtual reality headsets and technologies that actually have really good intentions, and they don't really want to suck in all this data. But yet there is a different level of the legal and sort of business side that I think is a bit of a pressure. And I think that right now we have this situation where there's a lot of earnest people in the industry that are really just trying to create the best VR experience. And sometimes they're just using that data to be able to improve the experience. And I think that is totally legitimate, and that's sort of the line that I've been told. Oculus is being treated as an independent company right now, and Oculus is sort of off on their own doing things, and, you know, when I talk to Oculus, I sort of get this, like, "oh, I can't speak on behalf of Facebook." And so they're kind of maintaining this sort of, like, "we're independent, and, you know, we're not necessarily doing that." However, if you look at their privacy policy, they've architected for a future such that they can implement all sorts of things to do just that: to collect all the data, to get all the physical movements, and to tie it back to our personal identity — and not only that, to use other services to pull in information from our mortgage data, our financial history, and to create, in essence, a super profile of who we are and what our spending potential is. So, what does it look like to architect with privacy in mind? What are the things that need to happen now? Technology is always ahead of the legal; the legal is always kind of just catching up to what's happening on the bleeding edge of technology. And it's meant to be that way — law is supposed to move slower. You know, we're not trying to overregulate things before the technology is even there. And so there is a sense of, like, let's see how this unfolds. Let's see what benefit we get out of it. But yet, how can we architect with privacy in mind from the beginning? And what are the types of pressures and demands that we can make to get the level of transparency that's needed?

[00:18:38.377] Jim Preston: Yeah, those are all great questions, and I'm afraid right now I have just simplistic thoughts that I can share. To your first point about the folks at Oculus and Vive and Sony: they're doing pioneering work right now. Right now — and I think you feel the same way, Kent — we're all enthusiasts in VR. None of us are in here because we have grand schemes about building information architectures. We all love VR, we love what it's capable of, and I've talked to a lot of folks, I have good friends who work at Oculus, and, you know, they're very proud of what they're making. They're very conscientious, thoughtful people. And, you know, I feel that when you're at something like here today — we're at, you know, SVVR — we're with people that feel a passion for this. And so I want to be very clear that I don't see any sinister intentions from anyone right now. It's just, there are some dangers of overlooking certain things. To address the second part of your question, of what can we possibly do in the short term: what I would suggest to the executive producers, the general managers of the world who are making this content, is take a look at some of the mistakes we made with the COPPA Act. And for those who don't know, C-O-P-P-A, the Children's Online Privacy Protection Act — that came online, and it's been revised a couple of times, most recently in 2011, I believe. It's United States law, so it doesn't apply internationally, of course. But this was the law behind the now-famous under-13 rule. If you tracked any data for anyone under the age of 13, you had to just not even record it. You couldn't even bring it to the servers and then throw it away. You weren't even allowed to store the data. And at the time, I was working on the Madden NFL product at Electronic Arts. And, you know, we just grabbed data, right? We didn't really know what your age was, so we had to put in an age gate. We had to go in and sort of mark data, right? And this is what I would suggest to a lot of people working on this right now: really start to make sure we're flagging this data that we're starting to get, because we will be able to uniquely identify people very soon — through iris patterns, through retinal patterns, through certain eye shapes, even through voice. There are other things besides eyes by which you can uniquely identify people. Be able to know that; be able to attach all sorts of data to it. So if a court comes and says — look, say there's a very unfortunate incident that happens in three years with a young woman online, and they want all information on, and I'm just being hypothetical, you know, 14-year-old females, all of it has to be discarded. If you don't have that data flagged, you don't have the ability to actually purge that from the system and show that; you might have to purge all your data, right? And that could be a very crushing blow to an organization, especially if you're a developer whose lifeblood is your ability to maintain your contact with your customers. So that's the short term, but even that, I think, is just a band-aid, right? I think there might be one solution where — will technology offer us a way out of this? In other words, will there exist a technology in the future where we can kind of overwhelm the recording, where you could have multiple different sorts of identities online? We're constantly flooding the system with what are, I guess, unique identities, but they still attach to you in some ways, right?
And so instead of trying to hold back from the system what your eye shape is, or what your voice sounds like, what your face shape is, what your emotional sort of thumbprint is — to try and flood the system with more and more data, so that you become sort of muddied out in the whole thing and not so crystal clear and identifiable. Maybe there's a technological solution in which you can go online with the same headset but present a sort of different profile to the system. I'm just speculating right now whether there could be a technological solution to these problems, but I think there are going to be challenges here that the courts are, as you point out, just conceptually unprepared to answer. We really are going to have to answer these as a community, or the marketplace will sort of answer for us.
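
To make Preston's COPPA lesson concrete — flag data at capture time so it can be selectively purged later — here's a minimal sketch under assumed, hypothetical field names; nothing here is EA's or any headset vendor's actual schema:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Record:
    user_id: str
    age_at_capture: int         # gathered via an age gate, as COPPA forced for under-13s
    stream: str                 # e.g. "eye_gaze", "voice"
    uniquely_identifying: bool  # iris/retina/voice prints can re-identify a person
    payload: bytes

class FlaggedStore:
    def __init__(self) -> None:
        self.records: List[Record] = []

    def purge(self, predicate: Callable[[Record], bool]) -> int:
        """Selectively purge matching records.

        Without per-record flags like age_at_capture, the only way to prove
        compliance with a targeted order would be to destroy the entire dataset.
        """
        kept = [r for r in self.records if not predicate(r)]
        removed = len(self.records) - len(kept)
        self.records = kept
        return removed

# Usage: a court orders deletion of all data captured from users under 14.
store = FlaggedStore()
store.purge(lambda r: r.age_at_capture < 14)
```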

[00:22:12.303] Kent Bye: I think the biggest challenge that I see is that, you know, in talking to Sarah Downey, what she said is that there's these two different classes of information. There's personally identifiable information, the PII — that's anything like your name, your social security number, your phone number, your address, your IP address — and that is a special class of information that has to be protected and not shared or revealed, because it's private information. And so we're starting to talk about these things like physical movements and eye tracking and emotional states, and a lot of this is a bunch of numbers that, if anybody looks at them, seem anonymous — you can just kind of look at that data and not really trace it back to anybody. And so you just store it away in the database. So then we move forward and we start to add in EEG. In talking to Conor Russomanno from OpenBCI, he said, actually, you know, some of this biometric data may have like a unique fingerprint, such that if you apply the right artificial intelligence, you can kind of unlock the personal identity. So then all of a sudden you have these huge swaths of databases with all these numbers that before were not PII and are now suddenly PII, and you've completely de-anonymized that anonymous data. So Sarah Downey, what she says is, well, the solution is simply: don't record anything, and if you don't need it, don't store it. Which serves up a dilemma, which is, as people who are architecting systems that are trying to train AI systems — you know, data is gold to be able to train AI. So we could go into VR and move around, and then be able to train AI neural networks to be convincing as humans because they're mimicking how we're moving. And so there's all sorts of ways that you could take that data to train AI, or to improve and personalize the experience. And so there's this question of, like, okay, there's a trade-off: on one side, the personalization — amazing types of experiences and interactive narratives and all sorts of crazy, amazing things that we can't even imagine yet in terms of what we're going to do — and on the other side, the privacy cost of storing what is now anonymous data but could be unlocked with the right algorithms in the future.
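
A toy illustration of that de-anonymization worry: feature vectors that read as meaningless numbers can still act as a biometric fingerprint. Here, simple nearest-neighbor matching links an "anonymous" session back to a known user — the data is invented, and real gaze or EEG re-identification would use far richer models, but the mechanism is the same:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Feature vectors distilled from labeled sessions (e.g. saccade timing,
# fixation durations, blink rate) — collected back when the data seemed "anonymous".
labeled = {
    "alice": [0.81, 0.10, 0.43, 0.25],
    "bob":   [0.12, 0.77, 0.30, 0.61],
}

def reidentify(anonymous_vector, threshold=0.95):
    """Return the best-matching known identity, if the match is strong enough."""
    best_id, best_sim = None, 0.0
    for user_id, vec in labeled.items():
        sim = cosine_similarity(anonymous_vector, vec)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None

print(reidentify([0.80, 0.11, 0.44, 0.24]))  # -> "alice"
```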

[00:24:13.266] Jim Preston: Right. I think that is correct, but incomplete. In other words, the challenge is not just the types of core data. The challenge is also going to be the extrapolated data that comes out. So, for example, there might be something that you and I agree is completely innocuous, normal data that you're ready to give away freely — your GPS coordinates, for example, and what type of OS your phone has. And we're not too worried about that. Or you're okay giving away your zip code, right? And you're okay with giving away your age. And from those pieces of information we can extrapolate things: we pretty much know what your average salary is, and once we know your salary, we can put an LTV against you and we can try and separate you from your money, right? There's a lot that we can take from some very base data that we would want to record — data you would probably regard as relatively innocent — that can extrapolate some really complicated and difficult things. So to her point: I think one reasonable solution is, yes, let's not even record personally identifiable information that seems unnecessary. But even with things that are necessary and might seem innocuous, it doesn't take that much for a machine that is constantly listening to you to sort of extrapolate out what it thinks your education level is, right? To extrapolate: okay, I now know your zip code. I have a pretty good idea of what your gender is. And so given that you're a white male that lives in this zip code in San Francisco and uses these words, I pretty much know what you probably make. And I'm going to show you these kinds of ads, just for example. And each one of those individual sources of data you'd probably be okay with, but added up, there's a whole sort of meta layer of data that that solution alone won't address. So I want to agree with her point that, yes, being very judicious about what data is actually retained is one part of the solution, but there's a whole other problem with extrapolation that it doesn't solve. Getting back to another core point that you make: I think the real challenge is one at a community level. And let me give you an example: Twitter. None of us knows where Twitter is. We know where the building is — in fact, we're a few blocks away from it right now. What I mean is, literally, where in space is Twitter? We both know that language differs based upon where you use it. There are things you would be comfortable saying in the locker room that you wouldn't say in the boardroom. There are things that you and I would be comfortable saying in the bedroom that you wouldn't say in the classroom. The location matters, and so does what we're allowed to say there, right? So for a long time in the early days of Twitter, people made these stupid jokes on Twitter. They made these awful, sort of embarrassing jokes, because they kind of thought it was a locker room. But the rest of society was like: no, actually, we think it's a boardroom. Therefore, you're fired, right? And none of us had agreed on that, right? Much in the same way that I talk to some of my female friends, and they're like, no, you can't break up with someone over text. That's just not allowed. I'm like, okay, now I know. The community, I guess, sort of got together and decided this. What we haven't done as a community is: there's this new data coming, and we haven't decided what kind of data it is and how comfortable we are with it, right? Some of it is obviously very private.
Your health information — that's not particularly controversial, right? But there are things like what you're looking at. Are you allowed to freely look wherever you want? That's an expression of, like, your absolute free will. And if something is trying to catch your attention and track your eye movements — is that eye movement your personal interior data? Or is that public data? It's your eye movement, right? So the only way I think we really solve this — it's not going to be the lone philosopher on a podcast. I really think it's going to be a community that's going to decide. And it's going to be things like the Khronos Group, that are working on things like hard technical standards, but also the VGLA, which is working on soft technical standards. But it's going to be leaders at Oculus, leaders at Google — folks that you're talking to on a regular basis — who are going to have to force this issue of: we're going to have to make some decisions about this stuff, at the base level of code, of what you're willing to record, and at the upper level of policy. Someone like Clay, at the upper levels, or Jason Rubin — these sort of true leaders of our industry — they're going to have to make some uncomfortable decisions on what are we going to track, what kind of games are we going to have, what kind of experiences are we going to have. And that's the role of leadership: to make those difficult judgments.
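
To make the extrapolation point concrete: each input below seems harmless in isolation, but combined they support inferences the user never offered. A minimal sketch with made-up lookup tables and thresholds — real ad-tech pipelines use learned models, not dictionaries:

```python
# Each input seems innocuous alone; together they yield sensitive guesses.
MEDIAN_INCOME_BY_ZIP = {"94110": 110_000, "94124": 62_000}  # invented figures
EDUCATION_WORDS = {"extrapolate", "epistemology", "homunculus"}

def infer_profile(zip_code: str, age: int, recent_words: list) -> dict:
    profile = {}
    if zip_code in MEDIAN_INCOME_BY_ZIP:
        profile["estimated_income"] = MEDIAN_INCOME_BY_ZIP[zip_code]
    if 35 <= age < 45:
        profile["demo_bucket"] = "35-44"  # ad-targeting age band
    # Vocabulary as a crude education proxy, as Preston describes.
    if len(EDUCATION_WORDS & set(recent_words)) >= 2:
        profile["estimated_education"] = "postgraduate"
    # Income estimate -> lifetime value -> how much an advertiser will bid.
    if "estimated_income" in profile:
        profile["estimated_ltv"] = round(profile["estimated_income"] * 0.001, 2)
    return profile

print(infer_profile("94110", 41, ["epistemology", "extrapolate", "coffee"]))
```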

[00:28:16.650] Kent Bye: So the thing that you were talking about reminded me of something that Sarah Downey said, which is essentially that in order to have the full freedom of expression of your First Amendment, you need to have that Fourth Amendment privacy, because there is that context switch between what you say in private and what you say in public. When you're in public, on the record, then you may have a less authentic way of expressing yourself, because you may be in the context of representing a company, or it just may be something that you may not want to be official and on the record. And I feel like this point about what's private and what's public is sort of getting to this sense that slowly we're eroding all of our private spaces, to the point where more and more it's harder to have those private contexts to really have authenticity — to be completely in alignment and just authentically say what is on your mind. What we're talking about right now is communication in a public sphere, in social media or whatnot. That's already started to happen in the Twitters and the Facebooks of the world, where you go on there and there's a certain amount of pressure to put forth your best face, because, you know, you want to get the likes, and there's sort of a conditioning that happens that puts forth a certain behavior. But sometimes being completely, radically authentic is not necessarily rewarded or smiled upon. And so what we're talking about here is a future where these immersive technologies — the more that we use them everywhere and wear them all the time — are potentially eroding those private places, and then switching the context so that what used to be private is now public.

[00:29:50.040] Jim Preston: Sure. Let me try and strike a different note than a sort of dystopian/utopian binary, a better-or-worse world. Let me just strike a note of complexity we're not ready to handle. So let me give an example. I believe it was either last night or a couple of nights ago: the CEO of Uber was recorded in one of his own cars basically berating one of his own drivers, right? And this video went out and he was suitably shamed, and he wrote a very contrite sort of message to his team, and it was a mea culpa, and it was "I need to do better, I need to be better as a leader." And I look at that and I don't say, oh man, that's too bad there was a dash cam there and that poor guy's privacy was violated. There's something to be said for that old proverb that, you know, you can judge a man by what he does when he thinks no one's looking, right? And so I think it's true, we can all be somewhat disingenuous when we think we're being recorded, and maybe are being a little fake. Maybe sometimes that's a good thing. Maybe there'll be a little less catcalling in the world. If guys know they're going to constantly be recorded, maybe they cut that shit out, right? Maybe that's not such a bad world after all, where we're all kind of being watched a little bit more and we're all on our best behavior a little bit more. Of course, it's easy to imagine the reverse, especially in light of the current political situation: more of a police state, where everything we say is being recorded. It's easy to imagine those two polarities. I want to encourage us to get beyond that sort of simple, facile thinking of, oh God, we're going to be living in a police state, or we're going to be living in some great, wonderful, transhumanist world where suddenly the singularity comes and I'm evolved into a beam of light. No, no, no. Here's what it's going to look like — and I take the world of chess as an example, right? You know, obviously Kasparov was beaten by Deep Blue; it was a momentous time in the development of AI, and obviously recently a Go player was beaten by Google's AI. And some people might react and go: it's over — if we're beaten by machines at chess, why are we even bothering playing chess anymore? We're done as a species, let's hang out by the pool. What ended up happening — what's going on in the chess world right now — is so fascinating, because you're starting to see this hybrid chess. You're starting to see teams of human chess players combined with a really strong chess AI. And those teams together will beat any individual chess AI out there right now, right? And so what we're seeing is a combination of the human and of the machine. It's a more complicated, unusual version of chess than what we're normally used to, but that's the complicated world that chess is in, and that is what's coming for all of us. We're heading toward a world in which I will have interior states that my machines will be aware of, and in some cases that'll be great, and in some cases it's going to be kind of frustrating, and it could be quite compromising, and it could be quite uncomfortable, but it's going to be really, really complicated, and we need to start thinking through this now. So I try to avoid this binary. Like, yes, it is possible to imagine the abuse of this, but I can also imagine a world in which all my ads are really compelling, entertaining ads for stuff I actually want to buy. And that doesn't sound so bad to me, right? So I really want to strike that note.
It's a complicated world, and it takes us decades as a social species to kind of figure out what's legit and what isn't. But things are going to change just so much quicker. And that's why I think we need to start thinking through a lot of this stuff as a community — as a VR, AR, MR community — as fast as possible.

[00:33:05.678] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of virtual reality, and what it might be able to enable?

[00:33:14.086] Jim Preston: Yeah, your famous question. So having listened to your podcast, I've always wondered to myself, what would be my answer to this question? Because I've never heard a fully satisfying answer. You know, I think there was the woman from Unity — I forget her name — right, Timoni West. I think she gave a really profound and beautiful answer, as I recall. I think hers has come the closest. But I've been trying to duck this question for as long as possible. And, you know, I think the ultimate potential is that — in the same way that I used that chess metaphor of the human and the machine working in tandem to really be better than anyone individually — I really do think that's going to be true of our future world. With human-computer interaction, we are going to have machines that are so much more empathetic than anything that we would ever expect, and it could actually make us better people. Imagine a machine that was very, very patient with you. A machine that — you told it to scold you when you don't go to the gym, but in a kind of loving way, right? A machine that would know when you had been lazy, or a machine that would know when you were down, right? Or something like that. We could potentially be made much, much better by these machines and these interfaces with them, in ways that are really sort of beyond our imagination right now. And so the virtual reality that we're talking about today, I really think, is nothing like the world that we're going to inhabit 10 to 15 years from now. And that's the ultimate promise: suddenly your interior state is just going to be extended out, like a network, to the machines that are going to surround us. And hopefully it'll be a much more emotionally rewarding world. But you've got a big smile on your face, so I don't think you're buying it.

[00:34:50.221] Kent Bye: No, I love it. I love it. I think that part of what you're saying here is just that these centaurs, these combinations of machines and humans working together. For me, what I see is that VR is like the training wheels, where we're going into VR and we're learning what it means to be present. And the more that we can learn how to be present, the more that we can be present in our real lives. With VR, the more we learn about VR, the more that we can learn about what it means to be human, the more that we can learn about perception, the more that we can learn about experience. And with AI, the more that we study AI, the more that we learn about what human intelligence means, and what consciousness means, and these different levels of empathy. And I think that the vision that you're painting is one that's very nuanced and complicated, and not either utopian or dystopian, but a little bit of a mix of both. It's like the best of times and the worst of times. And yet that is possible — it's a vision where we can actually come up with a balance of both, and have the protections in place that enable all the amazing futures that we all want to see happen. But we have to ask those difficult questions that we're asking now, like: what is the framework to be able to enable that to happen? What are the protections that we as a community want to demand, to have that level of transparency, to ensure that we can have the trust to give over our data — and that if we want them to delete it, then we can say, okay, hey, I want to hit the delete button, and I want you to actually delete it, not really store it forever? These levels of transparency and controls that we have over this data that's being collected can enable these amazing futures of this collaboration of man and machine.

[00:36:25.710] Jim Preston: Well put, Kent. I think we should both get a drink now. Awesome. Thank you so much.

[00:36:31.274] Kent Bye: My pleasure. I really enjoyed it. So that was Jim Preston. He works at an eye-tracking company, and he was just speaking on his own behalf about some of the larger ethical concerns that he has about the future of privacy in VR. So I have a number of different takeaways from this interview. First of all, I just really appreciated Jim's nuanced approach here. I mean, I know he's working for an eye-tracking company, and so he's obviously participating in the VR industry and really believes in the potential — and that we don't necessarily have to cry wolf and say everything's going to be terrible and we need to stop everything right now, but that it's going to be a complicated future, and there's a lot of open questions that we as a community have to really start to figure out. So some of the big takeaways that I got from this is that we're dealing with performance-based marketing companies, that we live in an attention-driven economy, and that data is gold — this is what the companies want, and this is what they have been going after. If I step into the shoes of these big companies, I can see why they'd want to collect the data: to be able to train AI algorithms and to eventually do all sorts of personalized advertising. And that's essentially their existing business model. So there's just a larger economic question as to whether or not we want this privatized surveillance, which has been slowly eroding all sorts of different domains that used to be private and now are somewhat semi-public because of these companies and our decision to consent to surrendering this data over to them. So I guess as a community, we have to ask: is this the business model that we want? Is this what we want to support? Do we want a completely vertically integrated solution where we pay for everything that we get? Or, if there is going to be an experience-based metaverse that's driven on WebVR, is there going to be an ad revenue model that's going to be able to support some of these experiences? Or are there cryptocurrency solutions that could also be there? But I think one of the things that really was striking to me was this question that Jim asked, which is: where is Twitter? Is it a public context or a private context? And I think, for the most part, most people these days would say any social media is pretty much public. There are private contexts with direct messages, but more or less, you know, it's a public context. But it sort of raises this other question about trying to think about a larger conceptual framework for, well, what should be private and what should be public? So I've looked at the list of what is classified as personally identifiable information, and things that are kind of reasonably expected to be kept private. And I'm going to try to paint the different contexts under which, you know, you may be sharing some of this information. So, let's just start with the body and your appearance. If you go to a dating site like OKCupid or Match.com, you're essentially listing off all sorts of different specifications about your height, your weight, your body, pictures of yourself — usually not your name, because that's personally identifiable and there's a somewhat level of anonymity. But, you know, if you have your picture out there, then it's sometimes easy to do a reverse image search and find out who you are.
Once you have that name, then you have all sorts of ways of connecting the information that you may give to these sites, and it's now in this more public context. Then there's the bank. If you go to a bank, you may share how much money you make, your tax information, your credit card number, the purchases that you're making. You may have a mortgage on a house or financial investments. This is the type of information that I think is probably some of the hardest information for these social media companies to get, but they can purchase it from third-party companies. And so ProPublica has reported that Facebook has actually gone out and started to purchase some of this financial data to be able to tie it back to your personal identity. So let's move on to your personal communications, your email. Over time, I think we've seen more and more back channels for communicating with people. You know, there's SMS and text, but we see a lot more people using things like Facebook Messenger, Twitter direct messages, Gmail, Snapchat, and Instagram — all these other ways that we're privately communicating with people. All of these are things that we kind of think of as our private communications, but at the same time, a lot of the companies are still tracking all sorts of information and tying it back to our identities. And then we have our home and where we live. So, you know, you have your address and your IP address and your ethnicity, where you're from, your parents, where you grew up, your mother's maiden name. These are all things related to your home and family that are either used as security questions that you're trying to keep private, or — once you have an IP address — anything that you do in a browser can be tied back to your personal identity. A lot of these social media companies are tracking your location with the GPS on your phone. You know, if you're able to track someone's movements and see where they're spending a lot of time, then you can determine from those locations and movements where they live, and then connect where they live to their identity. Then you have the kinds of things that you do in private contexts, in terms of your hobbies, what you do in your bedroom. This could be anything that you're watching on Netflix or Amazon Prime or YouTube — your viewing habits, your sexual preferences or kinks or pornography preferences. These are also all things that we tend to keep private, but yet things that we find entertaining and are looking at and clicking on. Because there are so many different social media share buttons spread all over the internet, there are hooks for all these different companies to also track your web browsing behaviors and what you're looking at in different ways. And then finally, I think there's just medical information. In the context of a doctor, you may talk about your symptoms. So if you're starting to Google different symptoms, then there starts to be some of this leakage of private medical information that could be tied back to your identity. So given all these different domains of privacy, like Jim was saying, you can start to take little bits from these different domains and start to extrapolate and put together a picture of who you are and your demographic information.
And ultimately, what these companies are trying to do is determine your underlying values, what you're interested in, and what you're willing to pay money for. And if they can figure that out, then they can start to direct advertising to you that is going to be very highly personalized. Like Jim said, there's a lot of great benefits to being able to have personalized advertising, but it also may be coming at a cost: the trade-off is that the more we surrender data over to these third parties, the less we're able to have a reasonable expectation for this type of information to remain private. So there's an actual weakening of the Fourth Amendment that can happen with this. Now, when you paint this picture and look at what's already happening, a lot of people will just kind of throw up their hands and say: okay, well, this is already over, we've lost this battle, so what's the point? And I think the difference is that there are these new frontiers of biometric data and information that have yet to be unlocked and connected to our personal identities. And I think it's going to enter into a whole other new realm. If I look at what's happening with VR, I feel like some of the last frontier of data that VR is going to start to enable are all these different physical and biometric markers that are going to start to unlock what we're looking at, our attention, our level of engagement, our level of interest, our emotional states, perhaps even our thoughts and what we're thinking once EEG gets to that point. So I just got back from the Experiential Technology Conference, where I had a chance to talk to all of these different neuroscience researchers who are looking at these different streams of biometric data. And for most of them, they're doing it within the context of medicine, where there are HIPAA protections and privacy rules controlling a lot of that data. It's also being used in the context of market research, but even in those studies, there are very explicit consent forms that people sign, where they know that they're going to be tracked in certain ways. And sometimes they're even paid for their participation in these studies. So the thing that I see in VR — just to paint this conceptual picture — is that this next frontier of data that these companies really want really has to do with the body, and this same biometric data is going to start to be made available to these performance-based marketing companies. And Oculus has already started to lay down their privacy policy framework such that they can start to tie all this information back to our personal identity. And there are these questions of: what parts of that should be private? Should this be classified as medical information? There's a lot of different biomarkers from which you could start to determine all sorts of different medical conditions. And so what are the ethical implications of a company knowing about a medical condition before you do? Should they report it to you? Should they tell you? What should we do in order to have the level of transparency, or the feedback mechanisms, to be able to talk about this as a community? What are the trade-offs here? So, you know, I just want to call back to the nuance that Jim was bringing to this: it is a complex and complicated future, and it's something that goes beyond what any one lone philosopher on this podcast can really figure out.
But I think, as a community, this is a larger discussion that we need to start to have. So in the future, I'm going to be having more biometric data experts talk about what is possible to capture and what the implications of that are, as well as some comments from Nate Mitchell from Oculus, talking about some of his thoughts on privacy in VR, and some of my reactions to that. So, that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast, and if you enjoyed the podcast, then please do tell your friends, spread the word, and become a donor. Just a few dollars a month makes a huge difference. So, donate today at patreon.com slash voicesofvr. Thanks for listening.
