Avi Bar-Zeev has been involved with immersive technologies for over 27 years, including working on the very first prototypes of Microsoft’s HoloLens back in 2010. He’s mostly been working on secret projects for the last 11 years, and so I was able to catch up with him at AWE to talk about his journey into the XR space. We also talked a bit about the article he wrote for Vice called “The Eyes Are the Prize: Eye-Tracking Technology Is Advertising’s Holy Grail.”
Bar-Zeev lays out the following eight ethical principles to help navigate the privacy concerns around eye tracking and other biometric data:
- Eye-tracking data and derived metadata are considered both health and biometric data and must be protected as such.
- Raw eye data and related camera image streams should neither be stored nor transmitted.
- Derivatives of biometric data, if retained, must be encrypted on-device and never transmitted without informed consent.
- Apps may only receive eye-gaze data, if at all, when a user is looking directly at the app, and must verifiably follow these same rules.
- Behavioral models exist solely for the benefit of the users they represent.
- EULAs, TOS, and pop-up agreements don’t provide informed consent.
- Don’t promise anonymity in place of real security, especially if anonymity can later be reversed.
- Users must be given an easy way to trace “why” any content was shown to them, which would expose to sunlight any such targeting and manipulation.
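As an illustration only, here is a minimal sketch of how a headset runtime might enforce the consent and app-focus rules above: apps receive only derived gaze data, and only when the user has given consent and is actually looking at that app. This is not code from the article; all class and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    # Derived gaze direction only; raw eye images never leave the tracker
    # (per the principle that raw camera streams are neither stored nor sent).
    x: float
    y: float

class GazePolicy:
    """Hypothetical gatekeeper between the eye tracker and applications."""

    def __init__(self):
        self._consented_apps = set()  # apps with informed consent on record
        self._focused_app = None      # app the user is currently looking at

    def grant_consent(self, app_id: str):
        # Consent would come from a real informed-consent flow,
        # not a EULA or pop-up agreement.
        self._consented_apps.add(app_id)

    def set_focused_app(self, app_id: str):
        self._focused_app = app_id

    def sample_for_app(self, app_id: str, sample: GazeSample):
        # An app may only receive gaze data when it has consent AND the
        # user is looking directly at it; otherwise it receives nothing.
        if app_id in self._consented_apps and app_id == self._focused_app:
            return sample
        return None

policy = GazePolicy()
policy.grant_consent("notes_app")
policy.set_focused_app("browser")
# Consent granted, but the user is looking at a different app: no data.
print(policy.sample_for_app("notes_app", GazeSample(0.1, 0.2)))  # None
policy.set_focused_app("notes_app")
print(policy.sample_for_app("notes_app", GazeSample(0.1, 0.2)))
```

The point of the sketch is that the gate sits in the platform, not in the app: an application never sees the raw sensor stream at all, only policy-filtered derivatives.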
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So continuing on in my series of looking at XR ethics and privacy, today's conversation is with Avi Bar-Zeev. Avi has been working in the XR space for 27 years now, since 1992. He's a pioneer in the space, having worked with Disney, working on the Aladdin Magic Carpet VR experience there. He worked on Keyhole, which ended up getting sold to Google and turned into Google Earth. And then he says for the past 11 years, he's been working on all sorts of different projects, but the only one he can talk about is the HoloLens. And he's also recently been working at Apple, so who knows what he's been working on at Apple. But I wanted to try to unlock as much as I could about Avi and his journey into this space, and flesh out a little bit more about the evolution of the HoloLens and what he's been working on. But the day before I had talked to Avi, he had just released this really big article in Vice called "The Eyes Are the Prize: Eye-Tracking Technology Is Advertising's Holy Grail." So Avi did this deep dive into eye-tracking technologies and did this whole write-up and came up with these eight guiding principles for how to handle biometric data. The article had just come out, so I didn't get a chance to do a super deep dive into it because I hadn't had a chance to read it, and I also wanted to just get more context as to Avi and his journey into this space. I'll be unpacking his eight principles at the end of the conversation, since we didn't have time to dive into them in this conversation. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Avi happened on Wednesday, May 29th, 2019 at the Augmented World Expo in Santa Clara, California. So with that, let's go ahead and dive right in.
[00:01:55.768] Avi Bar-Zeev: My name is Avi Bar-Zeev. I have been working in the realm of augmented and virtual reality for about 25 to 30 years, almost. I guess about 1992 was when my first startup was. I guess you could say I'm kind of a storyteller. I started out wanting to be a writer, and was working in a startup and also had some tech skills, and then decided to just build things that I thought were interesting, that I thought people would like. 92 was a little bit too early to do a lot of the things that I wanted to do, but not too early to think about AR contact lenses and all the cool things that we'll eventually have. I was lucky enough to meet the right people and get intros to get really cool jobs that I wouldn't have known about otherwise. Things like, in 1994 I got hired into Disney to work on a totally secret VR ride called Aladdin's Magic Carpet Ride. It had already started, but I came in and helped them in a bunch of ways, and it was just a great experience to be able to try things that are possible today, but to try them in the 90s, when they were too expensive. They were just really amazing and good learning experiences. And so I've just been able to keep trading up to better and better positions and jobs and interesting projects, and try to stay as much as possible on the forefront of things, and work on things where it seems like nobody really knows how to solve this problem yet, but let's try it, let's prototype it, let's see what we can figure out. And that's when it's the most interesting to me. So I've been spending the last 11 years or so working on projects that were always secret at the time. The only one I can really talk about much is the HoloLens, and only the things that have been made public, really. But it was a super fun project. I got to help convince people that AR was going to be the big thing. Even then, I think when we started it in 2010, VR hadn't really had the resurgence that it's had. But we had these discussions about what should we do? Should it be AR, VR?
What are people going to really like? And we were able to convince enough people that AR was going to be kind of a 10-to-1 kind of a thing. We were just sort of following the ratio of how much of your day you spend consuming entertainment and escapism versus work, versus talking to people, versus doing things in the real world. And it just seemed natural that that would be the place where all the value would be. So I've spent my time for the last 10 years just trying to figure out where the value is and what are those experiences that people are going to decide to ultimately pay for, and in many cases put hardware on the most sacred real estate, which is their face. So it has to be really important. It has to be really useful. So I've been thinking and working a lot on that over the years.
[00:04:23.568] Kent Bye: Can you give a bit more context as to how you entered into the VR industry, like the first time you saw VR, and whether the Aladdin project was the first piece, or if there were other things that led up to it before you got into VR?
[00:04:35.673] Avi Bar-Zeev: I'm trying to remember back. I think it's been a while. But I met up with the CEO of what eventually became a small company called World Design. I met him online. Back then it was Usenet, which was the Reddit of the day. And I met him, I liked the guy. He's still a great guy. Naively, me and a few other people drove cross-country. I was living in New York, and drove out to Seattle for my very first startup, which was going to be the world's first VR design boutique. We'd use whatever hardware we could get, not invent anything, but help people with design problems, figure out what their challenges were, talk to architects, talk to home builders, talk to whoever else needed to plan in 3D and space and would be benefited by it. The same use cases that exist today. We tried in the 90s, and no one was willing to pay for it; it was too expensive back then. The hardware cost hundreds of thousands of dollars per piece, which we mostly had to borrow. But it was a great experience to just sort of have the whole future open to us and to say, what can we do? Let's scrape it together. And we did. We ran out of money really fast in 92. We had too many people. I didn't know anything about business back then. So we had way too many people on staff, and I didn't know enough to raise the red flags and say, hey, man, we're going to run out of money. Wound up living in the office like a lot of people do for their first startups. And so I literally took over one of the offices and turned it into a bedroom. Stayed there for another year after the money ran out, but in that time was able to scrape together some pretty interesting projects. For example, the one that got me the attention from Disney was, we managed to get $30,000 from a casino to go prototype what we called a virtual environment theater. Kind of like a CAVE, but it didn't have to be stereoscopic. It was just a very simple, but immersive, theater.
And so we built the whole thing out of wood and acrylic and borrowed the projectors and borrowed the SGI computers and made a really cool experience like a virtual hot air balloon ride. You go in there and you just sort of feel transformed. It was pretty amazing how we could pull it off without having to wear anything. So the connection was that somebody from SGI, who we'd borrowed the computers from, came to see what we did with their computer, and then they were also working with Disney, and so that's how I got connected there. And the company eventually went away. All the people that were in it have gone on to do fairly interesting things, but it's sort of one of those things that it's just a once-in-a-lifetime thing that comes together, and had I been less naive, I might not have driven cross-country and done any of that stuff, and it would have been a very different story. The same is true for most of the things I do. I'm a little more seasoned at how I go find the jobs now. Now I know enough people in the industry that I can call somebody up and say, hey, who do you know that's the exact right person at this company to talk to about this thing or that thing? It makes it a lot easier. But I'm not the most social person, but it does come down to networking at the end of the day.
[00:07:13.997] Kent Bye: So what were you working on between Aladdin and when you started working on the HoloLens in 2010?
[00:07:20.552] Avi Bar-Zeev: A lot of things. So, I'm trying to remember all of them. At Disney I worked on a bunch of interesting projects. There was this thing called DisneyQuest that closed, I don't know, 5-10 years ago. The last one actually closed more recently. It was meant to be a theme park in a building that had a lot of VR experiences. I was the technical lead on a few of those things. One of them was this virtual jungle cruise ride where you use oars and row. I was very fascinated with multi-user controls. Like, you can't have four people steering a car, that doesn't work, but you could give four people oars and tell them to row and do the physics to kind of make it feel like you're rowing somewhere. So that was a lot of fun, and I helped them with a bunch of other rides there too, and did that as a consultant as well after I left Disney. I worked on it as a third party, and found interesting projects to do in the meantime. Around the late 90s, I worked for a company that was doing military simulators, so I came in and helped them optimize. There weren't that many people back then who understood 3D graphics, especially real-time 3D graphics, so I had a leg up on having that high-end experience early on, so I could help people optimize and make great experiences. And eventually I got hooked up with some of my former colleagues from SGI who had worked on the Disney stuff, and they started this company called Intrinsic Graphics. And that company ran out of money eventually and went away. But it spun off a company called Keyhole that turned into Google Earth. So essentially Google Earth started as this tiny little startup. And you'd know the names of some of the people that were in that. John Hanke was the CEO. He's now the CEO of Niantic. Phil Keslin was their CTO. And a bunch of other really smart people were in that startup and built this cool little app.
My job was more on the UX side, on the client side, writing code but also designing a lot of the interfaces. So I had a lot of fun with that. It was a really great project. It was a difficult thing to monetize. It was very expensive to run all the servers, and the company always struggled with how to make money at serving up these maps for everybody. But having it be acquired by Google at the end was actually a really great move. Google had the money. They had the scale. So it was a really good fit. So that's essentially where Google Earth came from. And when I was done with that, I went off and did consulting again and hooked up with Philip Rosedale and Cory Ondrejka at Linden Lab and Second Life. Helped them with a bunch of the technology; like, I wrote the code that creates all the objects in Second Life. It's a sort of offshoot of my academic work. Before I left school, I'd learned a lot about computational geometry. So I applied that and made some interesting code that would build objects in the world, helped them rewrite the rendering engine, things like that. I had a bunch of other small clients here and there. And then, you know, around 2008, I decided, well, I just had a kid, I need better health insurance, so I'm going to take a real job and stop this crazy consulting-and-startup life. And what was interesting was, I guess I was the only person at that time who had worked on both Google Earth and Second Life. And I was reading a lot of people saying, you should mash these up, you should make something that puts avatars on Google Earth. And I wrote a bunch of blog posts about how bad of an idea that was, like all the things that would go wrong with it. And it turns out Microsoft had had a secret project called First Life that was exactly that. And they were running into some problems with it, which were exactly the problems I had been writing about. I had no idea. But they decided, well, this guy seems to know something about this particular problem.
And so I was recruited into Microsoft to help them. And in typical Microsoft fashion, they canceled the project, saying this is not going anywhere. This was after we had gone and fixed it and figured out how to redesign it so it wasn't just avatars walking around on a giant empty planet. It was more like, hey, let's go build the back end for social, local, mobile apps. Let's figure out how to make e-commerce spatial. All the things that are sort of hot today, we were thinking about in 2008. And in true form, my boss at the time, not my direct boss but the VP, famously said, and I remember him saying this, mobile will never be big. I'm like, oh, pull my hair out. Some people got it, some people didn't get it. And so I spent a while at Microsoft moving around. I found a group called the Startup Business Group, thinking, oh, great, startups inside big companies. Let me go help with that. And we worked on a couple of cool things, interesting things. One of them they shipped, but it didn't do very well. It was called Avatar Kinect. We used the body tracking of Kinect in order to let you puppeteer an avatar. The problem was somebody along the way decided that we would not do full body tracking, that you couldn't walk around. So we had to plant you on a stage with other people sitting around. And we basically invented the world's most boring talk show format: people, friends, just sitting around, not being able to move. And the tracking was not quite perfected back then, so you would occasionally see an arm go above the head. It was better for comedy, I think, than anything else. But in doing that, I had started prototyping another approach to solving the same problem, which was to do more of a depth-plus-color fusion, which would create something that looked very much like you, like 3D video, also using a Kinect. And I had to borrow that Kinect from the people who were making Kinect at the time, which was Alex Kipman and the team.
And so I got to know them a little bit, and they recruited me to come over and help with a new project that wasn't really decided yet. Nobody really knew what it was going to be. They had been spending most of their time on, I guess you'd say, exploring 3D displays. They were trying to figure out, what can we do that is like the Kinect, where we get rid of the controller and your body is free to move, so it's not encumbered, it's not a headset, it's not worn in any way? They were really trying to figure that out, and I came in and sort of said, well, no, I think you pretty much have to put the display on the head. That's really the only way you're going to solve this. And I helped prototype and figure out a lot of those early experiences. I won't say too much about it because I think most of this isn't public, but I'll just end by saying that it started as, we're going to figure out the next-generation Xbox. So a lot of those early thoughts were about entertainment and what would be the next console and how would you make AR like a console. And I think history has shown that we're not quite ready for that yet, that AR isn't necessarily going to be the entertainment medium at the beginning, maybe ultimately. And so it's gone through a bunch of evolutions and motions within the company to try to figure out the right fit, and I think they're doing a great job now. They're focusing on applications that make sense, and, you know, more power to them.
[00:13:30.378] Kent Bye: Yeah, well, what's interesting to me is that if you go back to like 2007, you have the launch of the iPhone, which at that point, you know, is a brand new approach to phones, and you have BlackBerry and Nokia and all these other big phone companies, and it's not too many years after that where you have a complete mobile revolution. Then you have Apple with iOS and Google with Android really making good inroads into that mobile market. But it's interesting to hear from your manager, and probably overall within the culture of Microsoft, that they kind of missed the mobile revolution in some ways. They tried with Windows Phone, but it never really took off. And so in some sense, both Microsoft and Facebook have been left having to interface on the mobile platforms and to in some ways be lower down on the hierarchy from the architects of these systems, both Apple as well as Google. But in that situation, both Apple and Google have had very good business models that are pretty self-sustaining, and it's provided an opportunity for both Facebook and for Microsoft to be able to do a bit of a leapfrog into these immersive computing technologies. You know, when I got my Oculus Rift DK1, I bought it January 1st, 2014. And at the time, there was the Kinect 2 that was just coming out. But at the same time, it almost felt like there was no real support from Microsoft for it. And I always kind of wondered why, because it seemed like it was very compelling for academic use cases. But for the consumer application, maybe it didn't
necessarily get picked up as much by developers, and so there weren't a lot of actual experiences. And then they made a decision to not bundle it directly with the Xbox, so it was an extra peripheral, which meant that if you did want to build for it, it would need an additional attachment rate of people actually going out and buying that peripheral to add on. So it was a number of decisions by Microsoft that kind of made it a dead-on-arrival type of piece of equipment, but still useful. But it seems like in the background you have all this other stuff happening with the HoloLens, and, you know, the continuation of a lot of the people on that Kinect team, but also, I guess, furthering this completely new paradigm of spatial computing that was being incubated in the context of this mobile revolution.
[00:15:46.590] Avi Bar-Zeev: I guess the way I'd say it, and I think you're right about a lot of what you said, is that it's to the credit of the team and the people who were motivated that it survived as a product. And that's one of the reasons why I came over to work with that team: they had managed to convince the company to do something very hard with Kinect, something fraught with problems. Anything computer-vision-related is fraught with problems. There's this thing that happens, you've probably seen it with computer vision, where it works fine for me, works great in my office, and then you try it anywhere else and it completely fails. And it's very hard to make it robust, and very expensive. So it's pretty amazing that they were able to pull that off and build the Kinect and ship it. And it did really well at first, the fastest-selling piece of consumer electronics at the time. But I think you're right, you were kind of implying that the development model is hard, that developers were not used to building for something like this, and it required a lot of inside knowledge and careful iteration to get good games with that technology, and not everybody was able to pull it off and make something really amazing that was part of a whole new way of thinking about games. And I think AR will be the same when it gets around to entertainment again. It's going to require completely rethinking the way you build games, and anybody who has a lot of experience in the traditional way is going to have to go back to school and learn the new ways and figure them out again, which I think is exciting. Not everybody thinks that's exciting, but I think that's the most fun part, handling the uncharted territory. And yeah, they probably made some decisions that they would have done differently. I think you can look at a lot of those things from Microsoft in those days. And I think it wasn't a culture of trying to promote rigorous debate around these ideas.
It was a little more top-down in many ways. And so I think that's what it is. Hopefully they are changing their culture and having better results for it. They certainly seem to be doing better on the design side of things, for sure. So you've got to give them a lot of credit for that. But I think it's really to the credit of the team that they were able to have the project survive, make it out to launch, and continue to survive in an environment that's not one of rapid uptake, but a slow build. And I think you alluded to some Apple products too. I'm going to be even more careful about what I say about any Apple stuff because I have a lot of NDAs, but I think to Apple's credit, with something like the watch, I heard rumors of an Apple Watch probably three years before it launched, and even three years after it launched, it was still gaining momentum, right? It wasn't an overnight success; it had to find its sweet spot and the thing that people really wanted it for. Partly because I think a lot of us just stopped wearing watches. I mean, I still don't wear a watch even though the functionality is great, but it doesn't quite meet the threshold for me to wear an accessory. And so it's an uphill battle. It's a very hard thing to convince people to do something new. And I think whoever comes out with really good, all-day wearable glasses is going to have that same challenge. Credit to North for being brave and coming out with something, even before the technology is fully ready to do all the things that people really need these things to do. So I don't know if it quite has the value proposition yet that it's going to need to have, but it's a good attempt. It's a good take on it, and it could work. I don't want to be pessimistic, but I tend to think that it's going to need more utility. Even if they can get the style to look more or less like glasses, it's still going to need more utility before people really adopt it.
[00:18:58.243] Kent Bye: Yeah, and you had said that it takes a lot of precious real estate; your face is like the most precious real estate. And the thing that I wonder is if it's going to start from the Bose AR frames, or audio. I guess my hesitation with AR, and I've always been sort of focused on VR more, is that I think of AR as like, I'm going to do something to solve a problem. So if I was on a factory floor and I needed to have a spatial interface to be able to have overlaid information to give me more context as to what I was doing, then that makes sense. And I think that we've seen that Microsoft's really been focusing on those frontline workers as some of the initial use cases for AR. But the idea that I would be able to have something on my face to be able to check my email at any moment at any time, I'm not necessarily sure if I want to do that. And also, what are the social implications of me looking at you, but am I really looking at you? Am I looking at my email? So having the context switch of the social taboos of not knowing what you're actually looking at with social interactions, I feel like there's so many sociological taboos. Obviously all of these things can change; if people find it useful enough, they're going to start to do it. I mean, back when the Walkman first came out, it was probably pretty weird for people to be walking around with headphones on. But now it seems to be pretty commonplace that people already have a level of augmentation into their own little audio virtual realities as they're walking around. But I still have this intuitive gut hesitation about whether people want to wear these glasses around all the time.
[00:20:27.757] Avi Bar-Zeev: I think that's insightful. I don't always make eye contact right now. It would be freaky if I stared at you constantly, right? I will look away. But if I were looking away to read something, you would probably know that, and you probably wouldn't be happy with it. I feel the same way. And so I don't think we want to just take the current phenomenon of everybody staring at their phone down around their waist and just move it up a few feet. That's not an improvement, to have that same experience just higher. It's worse, I'd say, probably because now you're pretending to have the social engagement but not really having it. So I'm not a big fan of the school of thought that says we should be putting big information cards next to people's heads with all the things that we should know. I think there are better ways to help people remember things other than just putting up giant blocks of text. Just like I don't think labeling everything in the world, the sort of canonical common AR use case of labeling everything on the street, is a good idea; that's a terrible use case. I don't want to label everything. How many times do I really need to see that information? Maybe I want to see where the best restaurant is, but I have an intent, I have a purpose, and I might want help with that, but I don't just want to be inundated with information. And I think if you look back at, say, Google, why were they successful, right? They had a good algorithm, but they started by making sure that of the first 10 links you get, usually the first link is the right one. Being able to infer and guess what the right response is to any given user utterance or intent is really important, because I don't want to be searching to page six of search results. That's when I have a bad experience. So solving the problem of that utility and the ease of use is really hard. And whoever figures that out, I think, is going to do really well. But it requires solving a whole bunch of other problems.
I mean, once you say, I need intent, then you have to ask the question, how do I get it? How do I know what you want? And that's where a whole bunch of other technologies come in that have nothing to do with the displays at the end of the day, but they're more in the world of watches, smart watches, in terms of biometrics and very low power sensing of interesting events. So whoever figures that out I think is going to do very well.
[00:22:19.363] Kent Bye: Well, you said earlier that one of your primary driving motivating factors was storytelling and immersive storytelling. You consider yourself a storyteller. And it seems like a lot of these technologies have started with the intent to be in the consumer space of gaming and interaction, maybe story level. But it's kind of taken a pivot into a lot of more mundane, pragmatic, solving problems in the enterprise. And I'm just curious if you've been able to find a through line of storytelling through the work that you're doing.
[00:22:46.110] Avi Bar-Zeev: I think it's funny. So I guess Flickr started as a game and turned into a photo-sharing tool. And then Slack famously also started as a game and found that their tools were a better pivot than what they were trying to do as a game.
[00:22:57.860] Kent Bye: Twitter as well. Twitter started as a podcasting company.
[00:23:01.083] Avi Bar-Zeev: So maybe the learning from that is, if you want to go make money, start with the utility and go for it. But maybe you need a real-world application in order to incubate and develop that. And so having that additional project, even if it doesn't succeed on the entertainment side, is useful to have. And you could do that intentionally. You could say, yeah, we're building this game, but it's completely throwaway. I'm doing it just for the sake of building tools. And maybe some people have tried that. I think it's fair to say that we're not really satisfied with the tools that we have. We never are. There's always a struggle with how much effort do I put in to make the perfect tool versus how much do I get everything else done. And so it's no wonder that people who do a relatively good job at tools make a lot of money. People are ready to snap them up and start building with them. The question about entertainment is interesting because I started out wanting to make movies. I had a choice early on. Do I go the Pixar route and spend an hour per frame trying to render movies, and maybe try to make that a little faster but sort of eat that cost, or do I go real-time? And I said, no, I'm going to go real-time because it's more interesting. It's less known. It's harder. But then I had to give up on a lot of that cool storytelling and working with great creative people on design. And what I really wanted out of VR back then was the holodeck. I wanted the tools that let me just say what I want and express myself and build the world that I want to see around me. And that still doesn't exist. I mean, I've spent over 25 years at this, trying to work on projects that get closer and closer and closer to that ideal of being able to just tell my story in something other than typing on a typewriter and have it just work. And we're getting closer. Things are getting better.
I've seen some great projects, but nothing that really does everything that you'd need. And I think once that's figured out, I'll certainly be happy, because I'll go back and tell all the stories that I haven't had a chance to tell other than in prose. And I could just retire and start making movies. That'd be great. But it doesn't quite exist at that level yet. But the future is that everybody is a creator. Everybody has these stories. I'm not unique. Some people may not think of themselves as being creative, but they are. And I know it kind of sounds like a scene out of The Lego Movie, but they can come and build things and express themselves. And I think they'll be happier for it. I think it's what we're all made for.
[00:25:06.682] Kent Bye: Well, we've been having some virtual meetings together in the same virtual space in Mozilla Hubs, talking about sensory design and looking at different ethical issues around design. And I know that you recently wrote an article in Vice looking at eye tracking. And so I'm curious if you could recount how that article came about and what you were really trying to say in that piece.
[00:25:26.422] Avi Bar-Zeev: Yeah, so I've had the occasion to work with eye-tracking technology for a while now. As I said in the article, the original HoloLens was meant to have eye tracking from day one. It was important, and the original vision was sort of an evolution of the operating system toward being intent-based: it could understand what you're looking at, what you care about, and use gaze to make you more productive and happier with the results, so less having to use mice and windows, that kind of evolution. That's still going to take time to figure out. There are some really hard problems there. But I got to learn about it, and I got to learn about all of the challenges with it, especially the security challenges. And I'm now aware of a lot of research that's been done that says, well, okay, there are a few things that are important to digest before the consequences become really easily understood. One is that we don't really see what we think we're seeing. Our brains are reconstructing reality around us, but we really only see about five degrees in our fovea, really just the center of vision. Everything else is fairly low resolution, fairly blurry, and our brains are making us think that we're seeing it all because our eyes move around a lot. So tracking the motion of the eyes is a good way to tell what we're thinking about and what we're interested in. And if you look at the pupil of the eye, the dilation of the pupil isn't just for managing the amount of light coming in. It also, for some reason, reveals our emotions. Our pupils start to dilate when we get excited about things. And nobody's really sure why that happens, but it's consistent. So if you're tracking the eyes, you're getting all this information; you have a ton of insight into what people are thinking about, in terms of what they're reacting to in the world around them, and also how they feel about it.
And that's exactly what the companies that are collecting data on their users for marketing purposes want to know. It's the exact data that's ideal for them. They can theoretically put a piece of content in front of you and watch your reaction to it. Did you notice it? Did you stare at it? Did you look away? Did you like it? They have, effectively, the implicit like button for everything. If you're wearing these devices, especially out in the real world, they can very quickly map your brain in a way that would take years of disclosures and postings on a social network for them to discover the same information. They can do it very quickly. And then when you think about VR, there's another body of really interesting and positive research around redirected walking that shows very clearly that you can change the world very rapidly and people won't even notice. There's an analogous experiment where a guy on the street is interviewing people, asking them for directions, and he has someone interrupt by putting a big sign in between them, and then he swaps himself out for somebody else and continues the conversation, and the person on the street doesn't even notice that the entire person has been swapped for somebody different, maybe even a different race, maybe a different height, a different hair color. It doesn't matter. They're able to do this because people are not aware of a lot of the things that they take for granted and think they are. So it's pretty clear that a company could, if they were already interested in mapping your brain this way, accelerate the process by selectively changing things in the world and seeing how you react. Websites today do these A/B experiments where they put out two versions for different people, and they compare and see how people are reacting. Now you could imagine doing A/B experiments on the same person sequentially and seeing what you like.
They could learn a lot of intimate data, like your sexual orientation, what you like, what you're attracted to, but they can learn all this product information as well. The most chilling part of this, though, comes when people say, okay, so what? So they know this stuff. They'll just use it for marketing, and they'll show me stuff I like better. That's great; I'd appreciate not seeing ads that I wasn't interested in. But one of the most concerning parts of advertising, even traditional television advertising, was that if they know enough about psychology, they know how to push people's buttons. They know how to make you respond emotionally to something. And so with this emotional measurement device on your body that could, say, measure your pupil dilation, maybe your pulse, and see how you react to content, they can learn what triggers you, what makes you respond irrationally, with reactions where you maybe just wouldn't do that if you were thinking about it, but you respond defensively, or because you feel threatened in some way, or because you feel somebody you care about is threatened. If they push the right buttons, they can pretty much make people do almost anything. And the systems will be able to map out those responses by seeing how you act, and then be able to selectively push those buttons and verify that you actually acted the way they were trying to manipulate you. And if the goal is to get you to buy something, you wind up buying stuff that you don't like. The best example is something like QVC, where people watch it, and why do they buy this stuff? I don't know. A lot of this stuff doesn't seem that valuable, but they feel like, oh, I've got to buy it because it's time-limited, and it's urgent, and it's important, and this celebrity likes it. They find all the right buttons to push. Now they're going to be able to do that to a much greater degree. The good news is it's not really there yet. We have a chance to address this.
We have an opportunity to say to companies that are deploying eye-tracking technology, hold on, this stuff is dangerous, let's protect it. And my proposition in the article is, let's treat it like health data. It can be used to diagnose various diseases like Parkinson's and ADHD. It can be used, I think, both to determine if someone's autistic and maybe even to address some of the cognitive differences in autistic people. That sounds like a health application. This is health data, so why wouldn't we protect it as much as we're protecting our medical records or our DNA? I think we should, and I think we have to start from that position. There may be good reasons for developers to have access to this, but I think we can design safe ways for them to have access, ways that don't allow a Cambridge Analytica-type developer to come in and build apps that scrape everything, build these models about us, and do it without anybody's knowledge or consent. That's what we have to protect against, and we've already seen it happen from even 2D webpages. It's going to be ten times worse when everybody's wearing these things on our heads. So let's start now, and let's think carefully, inside the big and small companies, about how, if we're deploying this stuff, we make sure we secure it well enough. Here's a simple example, and I think most companies are very responsible with this kind of technology: any company deploying iris identification has a huge responsibility to protect that data. Because unlike a password, you don't change your eyes. If somebody gets that signature, that encrypted version of your eyes that's used to match against any new eye it sees, it could be used to impersonate you and pretend to be you. So you absolutely have to protect that data. And the best known practice that I'm aware of is to encrypt it on the local device and never upload it to the cloud, even though you might have convenience reasons to do so.
You might decide you want to allow multi-device authentication and make it easy so you don't have to sign in on every device all the time. Maybe you want backups enabled so you never lose this information. But that's not the best practice. The best practice is: encrypt it on the local device, make sure it never leaves the device, and make sure that nobody can actually get that data except by the most extreme measures. And I think most of the companies know this, but I want to make sure that everybody knows it, because it's so easy to say, well, they're doing it and I'm doing it too, and not take all the precautions that the really careful people take. And when it comes to the eye-gaze data, there are reasonable compromises that companies can make. Say, for example, that in the future an app is an object on my desk. Let's say it's a little clock, just for the sake of argument, that sits on my desk. Why does that clock need to know when I look at anything other than it? Maybe it only receives those eye-tracking events when I look at it, if it needs them at all. And third-party developer apps only receive those things when they absolutely need them. Or maybe they don't receive them at all. Maybe they receive an abstracted version that simply says, Joe is looking at you right now, without giving them the exact information about how Joe is looking at them or the precise angles of Joe's eyes, but just triggering a higher-level event. Those are safer ways of exposing that, so developers can't exploit the data. And I think that's fairly straightforward for companies to do. So I won't go through all the cases, but I listed about eight mitigations that most companies, even the ad-driven companies, can do in order to make sure that their customers are as safe as possible.
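The app-level abstraction Bar-Zeev describes, where an app receives only a coarse "the user is looking at you" event rather than raw gaze angles, can be sketched as a small broker layer. This is a hypothetical illustration, not any real platform API; all class and method names below are invented for the example.

```python
# Hypothetical sketch of a gaze-event broker: the system layer keeps the
# raw gaze angles to itself and forwards only a coarse "you are being
# looked at" event to the app whose region the gaze currently falls inside.

from dataclasses import dataclass

@dataclass
class AppRegion:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float  # normalized screen-space bounds of the app's window

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

class GazeBroker:
    def __init__(self, regions):
        self.regions = regions

    def dispatch(self, gaze_x: float, gaze_y: float):
        """Return (app, event) pairs; the raw coordinates never leave here."""
        return [(r.name, "gaze_entered") for r in self.regions
                if r.contains(gaze_x, gaze_y)]

broker = GazeBroker([AppRegion("clock", 0.0, 0.0, 0.2, 0.2),
                     AppRegion("mail", 0.5, 0.5, 0.9, 0.9)])
print(broker.dispatch(0.1, 0.1))  # only the clock app learns it was looked at
```

The key design point is that the precise gaze vector stays inside the broker; apps see only a named event, which matches the "higher-level event" compromise described above.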
And I would very much like to see all the companies that have eye-tracking publicly state, yes, we are doing these things, or whatever version of it they think is appropriate for them, but tell us what they're doing. Tell us how they're protecting the data. And if a company is going to come out and tell us that they're a privacy-first company, then show us. Show us exactly how they're backing it up. And show us in the terms of service, show us in the fact that they're going to ask informed consent when they're going to do experiments on this, that they're going to actually have us agree to be in those experiments and not do it on the sly. Those are all things that I would expect from companies that really respect privacy. So that's what I'm looking to see.
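The on-device storage pattern described in this answer, encrypt the iris signature locally and never transmit it, might look roughly like the sketch below. This is a simplified illustration using only Python's standard library: real iris matching is fuzzy (two captures of the same eye never match exactly), so a production system would use a hardware secure enclave and a distance-based matcher, not the exact keyed-hash comparison shown here.

```python
# Simplified sketch of the on-device biometric pattern: the template is
# bound to a device-local secret and never leaves the device. A real
# system would use hardware-backed keys and a fuzzy matcher; this exact
# HMAC comparison is only to show the data-flow boundary.

import hashlib
import hmac
import secrets

class OnDeviceIrisStore:
    def __init__(self):
        # Stand-in for a hardware-backed key that cannot be exported.
        self._device_key = secrets.token_bytes(32)
        self._enrolled = None

    def enroll(self, template: bytes) -> None:
        # Store only a keyed digest, never the raw template.
        self._enrolled = hmac.new(self._device_key, template,
                                  hashlib.sha256).digest()

    def matches(self, template: bytes) -> bool:
        if self._enrolled is None:
            return False
        candidate = hmac.new(self._device_key, template,
                             hashlib.sha256).digest()
        return hmac.compare_digest(candidate, self._enrolled)

store = OnDeviceIrisStore()
store.enroll(b"example-iris-template")
print(store.matches(b"example-iris-template"))  # True
print(store.matches(b"someone-else"))           # False
```

Because the digest is keyed by a per-device secret, exfiltrating the stored value alone is useless for impersonation on any other device, which is the property the best practice above is after.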
[00:33:51.866] Kent Bye: Yeah, I have a lot of thoughts on that. I just want to ask quickly, though: are you familiar with the third-party doctrine at all? Not so much. Tell me about it. Okay, so here's the thing. Any data you give to a third party has no reasonable expectation of privacy, as of right now. There's the Carpenter case that went to the Supreme Court, which said maybe we should change this for certain cell phone records, but essentially what it means is that any data you give whatsoever has no Fourth Amendment protection, which means the government can get access to it. But it also means that the more of that type of data we give, the more it changes the legal definition of what is collectively decided to be reasonably expected to be private. So the more we let any company, anyone, record any biometric data, we're collectively saying we no longer want this to be private. We're okay with anybody having access to this. We're okay with the government having all this information. I was just at a VR and neuroscience conference put on by the Canadian Institute for Advanced Research, and there was a researcher there who's working on technologies to decode brainwaves. And they said that within five years they're going to have it so you can put on a non-invasive EEG and they're going to be able to read your mind. So, you know, you talk about the eyes, and there's a certain amount of the poetic interpretation of, like, the eyes as the windows into the soul. It does actually get a lot of really intimate information about what you value and what you emotionally react to. But we're talking about this type of psychographic profiling that could essentially come up with an algorithmic approach to start to control and manipulate people.
And when I talked to a behavioral neuroscientist, John Burkhart, he said that the ethical threshold between being able to predict behavior and being able to control behavior starts to get completely erased. So for me, I'm utterly terrified that anybody would ever consider recording any of this data, because it's basically like a map to our psyche. It's like the Rosetta Stone to all the things that we value. So yes, I'm sure there are going to be very important medical applications for that, but I think all of it should be processed on-device, using some form of either differential privacy or homomorphic encryption. Because what the companies want is to storehouse this data for decades, so they can do AI training on it and get better and better over time. So I think we're in this very small window before Pandora's box is opened and everybody just starts storehousing decades' worth of biometric data on us. Which, if it gets out onto the dark web, might mean someone going in and being able to watch your behavior, tracking your gait or the different signifiers your body is radiating, where we think it's de-identified but it's actually personally identifiable. It's like they're unlocking the cryptographic key to this storehouse of information which, if it's correlated to the content that you've been looking at, basically gives you a roadmap to do whatever you want with large swaths of people.
[00:36:35.952] Avi Bar-Zeev: Yeah, and that's an important point that I didn't cover, which is: if you're giving them any biometric information that can personally identify you and connect your information to who you really are, don't listen to them when they say they're going to give you anonymity. They're not. They may do it temporarily, but they can reverse it in a second. Anybody who has access to both your identity and your biometrics can reverse your anonymity. So anonymity is only a pretend security, really, at the end of the day. The only thing anonymity is really good for is when you want to survey large groups and sort of see sentiments or learn things about groups. There, anonymity has a role, and it makes a lot of sense. But when your specific data is shared, giving you a unique identifier is complete nonsense. It's not security whatsoever. And everything else you said, I think, is completely right on. There is no good reason for companies to upload this data to the cloud. The reason they might give is that, to do additional learning, they need all the data for massive machine learning. But the trend is moving more and more toward the edge, especially to the phone itself, if the phone is the device, let's say, or whatever compute node you have on your body that's yours. And my hope is that that ensures a greater level of privacy: that it's your device, you've bought it, and it's considered your private data, even though there are all these searches that happen at airports and such where they require you to unlock things. I hope those things get worked out, but in general, if I'm buying this thing, it's my device, and there's really no legitimate reason for this stuff to be uploaded to the cloud. And I'd say for people who are very well-intentioned, or thinking about ways to offload work to the cloud right now because it makes the devices smaller and cheaper: be very careful. Be very careful as to what you upload.
It cannot be that sensitive information, even if you're encrypting it. And homomorphic encryption is very interesting, because it says, well, okay, we're going to compromise. We're going to encrypt this, send it to the cloud, have the cloud do some work on it while it stays encrypted, and nobody ever needs to decrypt it. Well, I worry a bit about quantum computers and the ability to break some of the traditional encryption algorithms in a few years. So I'm not sure that that's safe, even though it's very clever. I would still rather say no, it has to stay on the device, so you have to have physical access. Would that be more of a differential privacy approach? It's related. Differential privacy is a more complicated subject. But the simplest example I could give that's close to it is progressive disclosure. Let's say you and I wanted to meet up. We might start by sharing what country we're in, and we agree we're both in the same country. And then we share what city we're in, and we agree we're both in the same city. And then what building are we in, and what area of the building? There's no reason to tell you exactly where I am unless we're close to each other. So you can progressively disclose that. It's not exactly what differential privacy is, but it's an easy way to think about the ideas. So I think that helps. And there are always going to be things that have to be in the cloud. I mean, not have to, but, for example, it makes sense if you're looking at something like "if this, then that" functionality. It's stuff that lives a long time. It runs all the time on your behalf, watches for events, and then tells you when something happens. You'd probably rather not have that run on your phone all the time, because you'd like your phone to go to sleep and save power. And so having that run in a data center somewhere does make sense. But those kinds of queries are not that revealing.
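The progressive-disclosure example above (country, then city, then building) can be sketched as a comparison that stops at the first level where the two parties diverge, so neither side ever learns more precision than the comparison required. This is an illustrative sketch only; a real protocol would compare salted hashes or use a private set-intersection scheme rather than exchange plaintext levels.

```python
# Sketch of progressive disclosure: compare locations one level of
# granularity at a time, stopping at the first level where they differ,
# so neither party learns finer-grained detail than necessary.

LEVELS = ["country", "city", "building", "floor"]

def shared_prefix(loc_a: dict, loc_b: dict):
    """Return the levels both parties agree on, coarsest first."""
    shared = []
    for level in LEVELS:
        if loc_a.get(level) != loc_b.get(level):
            break  # stop disclosing as soon as the parties diverge
        shared.append((level, loc_a[level]))
    return shared

alice = {"country": "US", "city": "Santa Clara",
         "building": "Convention Center", "floor": "2"}
bob = {"country": "US", "city": "Santa Clara",
       "building": "Hyatt", "floor": "5"}

# Agreement stops at the city level; buildings and floors are never compared.
print(shared_prefix(alice, bob))  # [('country', 'US'), ('city', 'Santa Clara')]
```

The point of the sketch is the early `break`: once the parties diverge at one granularity, no finer-grained data is ever exchanged, which is the "no reason to tell you exactly where I am unless we're close" idea.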
In fact, in this case, the cloud can help you by being a proxy server. If your phone were making a direct request for the weather, whoever's serving up the weather could see that and learn something about you, from the phone's unique ID, from your IP address, or whatever. So being able to go through the cloud as a proxy actually helps obfuscate who's making the request. The proxy can also batch up many similar requests, so it's more efficient. But other than those kinds of things, in general, I think almost everything else should live on the phone itself, including things like natural language processing. The trend is to move more and more toward doing everything on the phone, partly for privacy reasons, but also partly for better user experience. I think it'll be much better when you don't have to wait a second or two for that response, even if the model is simpler.
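The proxy pattern just described, where a relay strips client-identifying fields and coalesces similar requests before forwarding them, can be sketched as follows. All names and fields here are illustrative, not any real service's API.

```python
# Sketch of an anonymizing, batching proxy: the upstream service sees only
# de-duplicated queries with a count, never client IDs or IP addresses,
# which stay on the proxy side purely to route responses back.

from collections import defaultdict

def relay(requests):
    """requests: list of dicts with 'client_id', 'ip', and 'query' keys.
    Returns (forwarded, routing): 'forwarded' is what the upstream service
    sees (no identifiers); 'routing' stays private to the proxy."""
    batches = defaultdict(list)
    for req in requests:
        batches[req["query"]].append(req["client_id"])  # kept proxy-side only
    forwarded = [{"query": q, "count": len(ids)} for q, ids in batches.items()]
    routing = dict(batches)  # used to fan one upstream response back out
    return forwarded, routing

reqs = [{"client_id": "a", "ip": "1.2.3.4", "query": "weather:95054"},
        {"client_id": "b", "ip": "5.6.7.8", "query": "weather:95054"}]
forwarded, routing = relay(reqs)
print(forwarded)  # [{'query': 'weather:95054', 'count': 2}] -- no IDs, no IPs
```

This captures both benefits mentioned above in one place: the upstream service cannot distinguish the two users, and two identical weather lookups become one forwarded request.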
[00:40:19.390] Kent Bye: Yeah, and I think there's going to need to be some shifting and changing of laws from the legal perspective, because the third-party doctrine to me is this big gaping hole in the legal framework that essentially allows a totalitarian government to have access to all these different types of information, without us having the ability to keep them private. Just the fact that any data you give to a third party, even when you're uploading it to the cloud, puts us in that situation. What is being lost, I guess, is my big concern: as we continue to share these things without plugging that gap, we're slowly eroding away our rights to privacy.
[00:40:57.267] Avi Bar-Zeev: I think the important thing is, even if the state of the law says that governments can, you know, raid servers and get the data, and companies can do what they want with it, companies don't have to do that. I mean, a company can be very progressive in its policies, like VPN companies that promise not to keep logs of what anybody's doing. And that's a step, a voluntary step. Or designing your software so it doesn't collect the data in the first place. That's a very good mitigation for some of those loopholes.
[00:41:20.728] Kent Bye: That's the best approach, is not to record anything. But if you do have it recorded and you're being faced with a court order, then I'm not sure if they have much ability to sort of deny that.
[00:41:29.748] Avi Bar-Zeev: I think if they're big enough, they do. I mean, I can fairly say that I had not considered joining Apple until the San Bernardino case, until I saw them stand up to the government and say, no, we're not going to break our operating system, even for the very good cause of finding information about an alleged terrorist. They said no. But when I was in a small startup, I tried to say no, even just to my board of directors. And they were like, no, it's never going to work; if somebody asks for this stuff, you're going to have to hand it over. So small companies won't have that same kind of leverage, because it's so easy for the government to put a company out of business if it wants to, just by the way it regulates. Once companies get to a certain size, they're a little more immune to it, but not perfectly. But I was very impressed by that, and it was shortly after that that I reached out to people I knew and said, hey, I'd like to come work at Apple. So I credit them for that. And I think they're serious about privacy. I don't think it's just talk. So that's good. But I think every company, even the ad-driven companies, can do this. I know I've talked a lot about how the business model drives them, and even though the employees individually are very ethical people, and I meet them and I like them all, the business model rewards the ones who do the best and make the most money for the shareholders with promotions and power. And that means taking advantage, in some ways, of the trust of the customers. But even those companies can come back and say, we're going to put procedures in place that value us based on the happiness of our customers, because that's a long-term play. Investors can't argue with that. Investors can't sue you when you say, we value the long-term stability of our customer base. We cannot afford to lose customers, so we're going to do the best by our customers that we can, even though we serve ads.
We're just going to make sure that we do the most we can to protect people's privacy. That's not an impossible thing to do. It's just that the downhill gravity is all towards the exploitation side. It's very hard to fight. So you've got to work at it.
[00:43:18.469] Kent Bye: So for you, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?
[00:43:25.208] Avi Bar-Zeev: That's an interesting question. I think I'm still very interested in the questions around what the right experiences are, and I'm looking forward to all the companies coming out with great hardware, and to prototyping things, even at home: ideas like, let's try this, let's try this other thing. My style is to rapidly prototype a lot of ideas, see what feels good, and work toward that. If any of those things feel good enough, I'll spend more time on them and maybe turn one into a company and go for it. But I want to spend some time just thinking about things and trying things. And I'm excited about things like spatial audio. For me, it's very exciting. There's better and better hardware coming out that allows us to do great spatial audio, and I think it's a very important component. As a very wise person said, it's often 10% of the cost of a project but more than 50% of the experience. I completely agree with that; it could be even more. If you take a movie like Star Wars, I'd say it's more than 50% of the experience. Just try watching it without the soundtrack and you'll see. So that's exciting to me. I'm very excited about all the communication scenarios; any kind of telepresence I think is very interesting. I'm excited about, but not sure what to do about, all the people working on the open AR cloud types of things. It's stuff I worked on a long time ago and prototyped early, and so I have a lot of thoughts and ideas around it. But I'm also very sensitive to the fact that people are working on these things in standards bodies, in a very deliberative way. So I don't want to just come in and say, hey, I solved all this stuff once, take my solution. No, it's got to be collective. It's got to be something that everybody comes to agree on. So I can offer some insights, but I don't want to be too pushy with that kind of thing.
But I'm very interested in a world that we make as a result of everybody bringing their data, the parts that they want to share, the parts that they're willing to share, and hopefully sharing them in a safe way. I don't think any one company is ever going to have enough money and resources to go build everything themselves. I think it's like cell phones: any one company that said, we're going to make a completely different cell phone that doesn't work with any other cell phones, would not have succeeded. That only really worked early on because, I guess, we had a monopoly that was able to set a standard, but today you'd have to have a bunch of companies coming together and agreeing on how we're going to interoperate. I think it's a very important process, and we have to make sure it plays out well. Hopefully we're able to design it to be more secure than the web was when it was first created, and more thoughtful about all the things that could go wrong. We've learned a lot, and I hope it's all going to work out. So those are the main areas for me. Ironically, the companies I'm spending the most time advising now are trying to solve interesting unsolved problems more in the hardware space than anything else. But that's why I stay an advisor: they're taking the big risks and putting their butts on the line. I commend them for doing so, but I know how hard it is to make hardware. So I'll come in and try to help them figure out new ways to do things, and encourage them, and be a good cheerleader. But I don't think I would do another hardware project at this point. I've seen what it takes to do well, and it's very hard.
[00:46:26.239] Kent Bye: And finally, what do you think the ultimate potential of immersive technologies are and what they might be able to enable?
[00:46:34.543] Avi Bar-Zeev: The easiest answer is that we don't even notice it's there. That you and I are having this conversation, and we're in different cities, and it feels and looks as if we're here in real life. Once it becomes everywhere, you wouldn't even call that magical. That would just be communication. That's the ultimate realization of it. And while we're talking, all the things that I'm saying are visible, and we're able to make these concepts concrete and draw them. When I wrote this article, I had to go into Keynote and build a bunch of graphics to explain the concepts. Why do I have to do that? I should be able to just say it, and a computer should be able to figure out what I mean and render these things for me, so that we can take abstract ideas, make them concrete, and enhance communication that way. And ultimately, I subscribe to the theory, although it's gotten a little cliche at this point, that what we're really trying to do is enable superpowers for people. What I mean by that is not Superman-type superpowers, where there's one person who saves the planet; I think that asymmetric stuff doesn't work too well. What we're really trying to do is collectively elevate people so that everybody is empowered to overcome the limits that they had before. It could be that one person's limit is that they're unable to determine the emotional cues coming from another person, if they're on the spectrum. And we can enable them to get to a place that we would call neurotypical, which most other people may have by default because of their psychology and upbringing, but which may be difficult for that person. We can help them with technology. And there will be other places where people are perfectly capable of intuiting social interactions, but we can provide them with other things that they might want enhancement with.
So the superpower idea in general is just that: every really successful technology has enabled people to overcome limits that held them back, going all the way back to fire. Fire was the ability to conquer the night. We were limited; we couldn't do anything at night because it was dangerous and dark and cold. So we just huddled, I guess, in our animal skins or whatever. I'm not sure; I'm not an anthropologist. But all the successive technological improvements can each be mapped to enabling people to do something they couldn't do before. And I think AR and VR are the most interesting of all of these, because they enable so many different things at the same time. The sky is the limit as to how we can enable people to overcome whatever limitations they're faced with.
[00:48:38.029] Kent Bye: Great. Do you have any last words for the immersive community?
[00:48:41.130] Avi Bar-Zeev: No. Keep at it. Happy. I've been at this a long time, and I feel like I've seen almost everything. But then I'm continually surprised and inspired by how hard people are working and all the new ideas people come up with. So just keep it up.