#971: Mark Pesce’s Book “Augmented Reality” Contextualizes AR with History of HCI & Future of Localized Metadata

Mark Pesce’s new book Augmented Reality: Unboxing Tech’s Next Big Thing was released on Friday, January 8th, 2021. It’s a lucidly-written look at the past, present, and future of augmented reality. He contextualizes AR within the history of computing and the evolution of human-computer interaction, while also looking at the underlying principles of adding metadata to space, spatial computing, and what it means for a physical space to go viral.

Pesce also looks at how AR is a technology that has to be watching, and at the open ethical and technology policy questions around privacy. He says that the closer a technology is to our skin, the more it knows about us and the more capacity it has to potentially undermine our agency.

He also points out that as you change the space around us, you’re also changing us. There will be a lot of emphasis on feedback loops: consumers wanting specific information and context about the world around them, and an aspirational aspect of the world providing that information. Pesce describes this interaction as a combination of the space itself and how it expresses itself by radiating out information, how the people present in that space interpret and make meaning out of that information, and how they then feed more information back into the space, changing the meaning of that space.

He is grateful for Netflix documentaries like The Social Dilemma that start to provide metaphors for some of the dynamics of technology companies and the role of algorithms in our lives. But that role is only going to become more important as augmented reality technologies become able to overlay context, meaning, stories, and metadata onto physical reality, where differing perspectives can collide physically in ways that were not possible in cyberspace.

I had early access to the book, and I was able to conduct an interview with Pesce back on November 19, 2020. Pesce wanted to contextualize AR within the history of computing and human-computer interaction, but also to catalyze some technology policy discussions around the privacy and inherent surveillance aspects of augmented reality. With consumer AR on the horizon in the next couple of years, there are a lot of deeper questions around how to navigate the relationship between humans and machines, and Pesce’s Augmented Reality book provides a lot of historical context, reaching back to some of the first discussions and writings on the topic, like Norbert Wiener’s The Human Use of Human Beings (1950) and J.C.R. Licklider’s Man-Computer Symbiosis (1960).

Pesce is able to not only contextualize the history of AR, but also give us some pointers to where the technology is heading in the future. If you’re interested in some deeper discussion and analysis of spatial computing, then this is a must-read account that’s grounded in the history and evolution of consumer augmented reality.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So in today's episode, I have Mark Pesce, who has just published a brand new book called Augmented Reality: Unboxing Tech's Next Big Thing. It's a really lucid account that gives a lot of historical context for the evolution of augmented reality, but also points to a lot of these existential open problems around privacy and surveillance: the degree of agency we have, and how our agency is in relationship to these technologies that we're using. So it's really both this pragmatic and philosophical take in terms of where augmented reality is within the context of the overall arc of the evolution of human-computer interaction, but also where it's going in the future and what we can do now to help wrap our minds around what this technology is and what it can do. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Mark happened on Thursday, November 19th, 2020. So with that, let's go ahead and dive right in.

[00:01:08.520] Mark Pesce: Hi, I'm Mark Pesce. I am the author of the recently released Augmented Reality: Unboxing Tech's Next Big Thing. I've been working in the field for, now that it's December of 2020, exactly 30 years. Oh, wow. And I was instrumental in bringing virtual reality to the web, working with Tony Parisi and Gavin Bell to create VRML, VRML 97, the MPEG-4 specification layer, whatever it was, 21, which was the integration of VRML in MPEG, and so on and so forth.

[00:01:43.040] Kent Bye: Great. Well, I had a chance to get an early copy of Augmented Reality and was able to read through most of it. So maybe you could just tell me the story of how did this book come about for you?

[00:01:52.863] Mark Pesce: So back in 2017, when I saw Mark Zuckerberg do the F8 keynote, the fake keynote, where he was really going all in on augmented reality, I saw him give this demo. And the demo, in a sense, seemed very innocuous. He said, we have this beautiful blank space at Facebook, and we're going to put some art up on the wall. And he just did it. And he showed what he was doing with augmented reality. He talked about the fact that spectacles would be coming, but they would be hard. But when he put that piece of art up on the wall, a switch inside of me flipped. Because at this point, Facebook was already being used fairly broadly as an engine of disinformation and an engine of coalescing all sorts of interesting and perhaps malevolent forces. And I realized that the ability to randomly put a piece of art on the wall also meant that you could randomly put a swastika on a synagogue. And that you're emphasizing one aspect, which is that you can put art everywhere, without emphasizing the fact that with great power comes great responsibility, and that all of this had been glossed. And that started me down a whole line of thinking about the implications of AR, but also the implications of AR as being presented from a commercial source, where it was going to be shaped to a narrative around the things that they were promising, rather than the things that the users would actually do with it. Because let's face it, Facebook says it connects the world. Well, yes, it has. And that's had all sorts of unintended consequences that mostly Facebook tends to just gloss. And so it was time to step back and go, all right, let's take a look at the world of technology as we use it, and then put AR into that context, and then maybe start to understand how we want to think about and use that technology.

[00:03:41.995] Kent Bye: Yeah. If there was one major theme that I think runs through this book, it's that it's sort of the worst of times and the best of times. You have these big major tech corporations that are building all this technology and that's going to unleash all this human potential of human computer interaction, do all these amazing things, but yet at the same time is also these tools for control. And so maybe you could sort of like, as you are trying to put this book together, there seems to be that as a message of trying to explore both of those potentials.

[00:04:11.403] Mark Pesce: And the thing that I did in the book that I didn't realize I was going to do until I was actually writing the book is I put it in context. The thing that I stayed away from was saying that AR is exceptional, that it's different. What I said is that AR, in fact, is the natural consequence of everything that we've been doing in human interaction design for 30 years, ever since we decided to do sort of A-B testing and survey the results, in other words, watch users as they used Google or some other website to understand whether they're clicking on the blue button or the pink button or whatever it might be. This started an avalanche of building surveillance techniques into our code, into all of our systems, first to fulfill user need and then to start to fulfill commercial need. And this is where it gets very kind of weird. Now, when you take that and then go, okay, augmented reality, and this is something that's just a basic statement in the second chapter of the book that I think needs to be repeated at every possible moment, which is that in order for AR to work, it has to be very aware of the world around it. Now, I met you at the Consumer Electronics Show back in 2017, which was the first CES that was just huge with VR. And I remember getting ushered into a private suite at Intel's booth to see, I think it was called Project Icarus or something, which was their head-mounted display that used inside-out tracking. All right, so it was using cameras, watching the world very carefully. It worked reasonably well. It was clearly getting on the way there. And now that's basically the way all Windows Mixed Reality headsets work. It's cheap, it's effective, and it's now become a foundational technology for augmented reality, because you have to be able to watch the world. You have to use SLAM, simultaneous localization and mapping, to be able to produce a map of the world, and then you can start to do augmentations in that environment.
So for AR to work, and this doesn't matter whether it's on a smartphone or whether it's on spectacles, it has to be watching. It has to be a technology of surveillance. It's not even like with a web browser, where a web designer can decide whether they're going to watch you click or scroll. They have an option there. With AR, there is no option. You have to be watching the world. And then there's this other aspect, which is explored in chapter three. And if you take a look at Facebook's Project Aria video that they released two months ago, what you see is that they've also turned the cameras inward and they're looking at gaze direction. And gaze direction is extremely privileged information. Gaze direction reveals things about you that you would never admit. It reveals your likes, your desires, your preferences, your hates. All of these things are revealed at a physiological level. And so you now have this idea that AR spectacles aren't just watching the world intensely and being surveillant of the world, but they're also simultaneously watching you and your reactions.
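Why the pose that inside-out tracking produces matters can be sketched in a few lines. This is a toy illustration, not anything from the book or a real SLAM system: once tracking gives you the device's pose in a world map, a virtual object stored in world coordinates can be re-expressed in the camera's frame on every frame, which is what makes it appear pinned to real space.

```python
import math

def world_to_camera(anchor_xy, cam_xy, cam_yaw):
    """Transform a world-anchored point into the camera's frame.

    The anchor stays fixed in world coordinates; only its
    camera-relative position changes as the device moves, which is
    why a SLAM-tracked pose lets a virtual object appear "pinned"
    to real space.
    """
    # Offset from the camera to the anchor, in world coordinates.
    dx = anchor_xy[0] - cam_xy[0]
    dy = anchor_xy[1] - cam_xy[1]
    # Rotate that offset into the camera's heading.
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

# A virtual painting anchored 2 m in front of the world origin.
anchor = (2.0, 0.0)

# As the tracked camera pose changes, the render position updates
# so the anchor appears fixed in the room.
ahead = world_to_camera(anchor, (0.0, 0.0), 0.0)       # camera at origin
closer = world_to_camera(anchor, (1.0, 0.0), 0.0)      # camera moved 1 m forward
turned = world_to_camera(anchor, (0.0, 0.0), math.pi / 2)  # camera rotated 90 degrees
```

A real system estimates that pose in six degrees of freedom from camera and IMU data; the sketch assumes a known 2D pose purely to show the anchoring step.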

[00:07:10.600] Kent Bye: Yeah. And as I read through this book, you're trying to tell the story of this technology, but you're also reflecting upon the deeper relationship between man and machine. And I think what was really striking to me was to go back and read through how you started to tell the story of the history. Because being involved within the VR industry, I've heard lots of different accounts of the history of AR and VR. I think you went back into a lot of things that I hadn't really heard of, this man-machine symbiosis and cybernetics, and you really are tying in how that led into the funding for Sutherland, and then the Sword of Damocles, and on from that time period for the last 50 years, this trajectory as to where we're at now and how we got here. So maybe you could talk me through a little bit of trying to tell this story of the history of AR and VR.

[00:07:59.389] Mark Pesce: So one of the things that is being represented is that AR is this very new thing. And in fact, AR is more than 50 years old now, and it has a trajectory that goes back to basically the foundations of modern computer science. In the 1950s, you get to a point, particularly with jet fighters, and I think the F-86A was the category example of this, where there are beautiful representations of cockpits. If you look at them online, it's just crowded with dials. And you realize that a jet fighter pilot had to be across all of the information that's being presented across all of these dials simultaneously, while also perhaps being in a dogfight at supersonic speed. And all of a sudden you get to this decision load for an individual pilot, which means that it's very easy for them to make the wrong decision when they're informationally overloaded, kill themselves, or lose the battle, whatever it might be. And so all of a sudden there's an intense interest from the military in how do we manage the incredible decision load that these incredibly powerful machines are offering to people. And this is then where J.C.R. Licklider comes in. J.C.R. Licklider being a psychology researcher who was at Harvard and then at MIT, and then becomes the founding director of the Information Processing Techniques Office, the IPTO, which is the part of DARPA that gave us the internet, that gave us AR and VR, and therefore real-time 3D graphics. In other words, that gave us the universe that we live in today. Licklider was the founder of that and started research in these areas. And then after two years, he passed the baton on to the most promising person he could find, this young, recently graduated MIT student called Ivan Sutherland, who had just done Sketchpad, Sketchpad being the foundational piece of interactive programming, right? Where you use a light pen on a display to draw things.
So it's the foundational bit of all interactive computing. So he's now in charge of the IPTO for two years, writes a paper called The Ultimate Display, where he basically says, look, we can get devices that can now measure the body so precisely that we should just be able to put the display all around us, like Alice through the looking glass. He hands off to, oh God, is it Bob? I'm trying to remember his last name. It's the fellow who actually gave us the ARPANET after that. But Sutherland then goes off to Harvard and builds the first version of that ultimate display, which is the Sword of Damocles. So there's this really clear sense that all along the way, the problem had been clearly outlined by Licklider in his Man-Computer Symbiosis paper, which is from 1960, before the IPTO had been founded. It sort of lays out, this is the reason for it. He goes and founds the IPTO in order to start to work on a solution to the problem, then brings in Sutherland to bring in part of the solution. Sutherland brings in the next person in order to be able to do the next part of the solution. You can see how these elements start to fall into place to give us a trajectory that was so ambitious, technically, that it took almost 60 years to fulfill.

[00:11:08.915] Kent Bye: Yeah, this whole time period is really quite fascinating. When you look at The Ultimate Display written in 1965, you have the Sword of Damocles in 1968. You have the Mother of All Demos also in like 1969... well, basically 1968.

[00:11:21.479] Mark Pesce: Oh no, they're at the same event. The Mother of All Demos and the Sword of Damocles are both shown off at the same event. This is the thing: the Fall Joint Computer Conference in 1968 is the single most important event in the entire history of computing, because it's not just those, but you also have the first papers about time-sharing operating systems. So it's the whole enchilada right there. Plus, of course, Sutherland not only shows off the head-mounted display, but writes another paper which describes the systems that create real-time 3D computer graphics. So it is literally the whole enchilada at that one event.

[00:11:57.849] Kent Bye: Yeah. It was really what Alfred North Whitehead would call this concrescence of all of these visions, planting a seed that would then continue to grow for the last 50 years. So it's really quite amazing, as I've been studying this time period, to watch the Mother of All Demos and just feel how all the things that he's showing there have been slowly coming to pass. And, you know, you have this man-machine symbiosis that led to human-computer interaction, and Sketchpad, also with Sutherland in 1962, innovating and creating this first real-time input. And I know that Sutherland got an award at the Proto Awards in 2015, and he gave a little speech. And his speech was interesting because it was a little bit like he was getting an award for the man that he was previously, like 50 years ago; he was getting an award for something that he had done way back then. And he's really kind of moved on and done all sorts of other things, so he's not really necessarily even wanting to talk about it.

[00:12:53.247] Mark Pesce: Which is a pity, because so much of our present hinges on his past, right? That's the thing. And he's not getting any younger. He's 83, 84 now, right? So I would hope. At the same time, he's also a Turing Award winner. And because the Turing Award is the Nobel of computing, right, any award you get after the Turing is kind of like, yeah, thanks.

[00:13:14.850] Kent Bye: Yeah. Well, just as you were writing in this book, I mean, you mentioned him a number of times, and just how groundbreaking a lot of his stuff has been and continues to be. And the other thing that you mentioned here that I think is worth elaborating on a little bit for this audience is that the Sword of Damocles is technically, a lot of people say, more of an augmented reality device than virtual reality, because it's see-through. You said in the book that really that was more of the first AR device, and that the first VR device didn't really come until like 20 years later, even though a lot of people cite the Sword of Damocles as the first VR device, which, you know, you could say is more of an immersive spatial computing technology. But yeah, maybe you could elaborate on that differentiation there.

[00:13:53.879] Mark Pesce: Yeah, because the lens system on the Sword of Damocles used half-silvered mirrors, so that it took in the space that it was operating in and then augmented it, right? Added things to that space. And because of its entire trajectory, again, you have to think of what's behind it, which is this idea that at some point someone in a jet fighter will use it, right? And the jet fighter is not in virtual reality, it's in real reality. So I don't think there was ever a particular sense there that these systems were going to be virtual; there was no conception around that. They were in fact solving a problem for the real world, and that was always going to be in the real world. It isn't really until you get to the VIEW system out of NASA, where they're using it for mission rehearsal, that you have this idea that you actually have to completely absent yourself from the real world in order to be able to rehearse something you're going to be doing in zero gravity when you're outside the spacecraft. So I feel as though, although we can see the difference between the two from our vantage point in 2020, in '68 they were solving a problem that they hadn't identified as augmented reality or virtual reality. They were just solving the augmentation problem.

[00:15:07.258] Kent Bye: Yeah. And when I went to the Decentralized Web Summit in 2018, it was put on by the Internet Archive, and they had a display booth there talking about all the history of cybernetics. And I talked to some people there about the history of cybernetics, and cybernetics seems to be something that's generally part of the history of computing, but I don't necessarily hear about it a lot when I talk about VR and AR. And you mentioned it here in the book, so maybe you could talk about how you see the role of cybernetics in the history of computing, especially related to spatial computing.

[00:15:36.114] Mark Pesce: Yeah. So it is interesting, because cybernetics comes out of the sciences, and particularly math, around the same time computing does. And they're closely related because they were embodied by a couple of people, Norbert Wiener being the big person in the space, who then coins the term cybernetics and, of course, gives us cyber-this, cyber-that, everything today, but also John von Neumann, so people are familiar with that. So there was a whole set of people there at a series of events called the Macy Conferences. And the Macy Conferences took place in New York. They started during the war, but they really took off from '47 to '52. And it was a round table of people from a number of different sciences. So you had Margaret Mead, the anthropologist; Gregory Bateson; you had Norbert Wiener; you had John von Neumann; you had famous psychologists and physiologists, because they were all basically looking at the familiar and common ways that all of these systems were controlled, managed, influenced. They were looking for basically meta-statements that you could make about all of these things: how does what's going on in a neuron mirror what's going on in a culture? And a lot of that then starts to show up again when we start working on human-computer interfaces. And the idea of man-machine symbiosis is an idea that flows directly out of the Macy Conferences. It flows directly out of Wiener's work. We're now at the 70th anniversary of a book that he published called The Human Use of Human Beings. And it's essentially an accessible version of the early work that he did, called Cybernetics, which is highly mathematical. Wiener was considered both one of the best mathematicians of his age and also such a good mathematician that at MIT, when he would teach his postgraduate courses, he would be writing things on the board and not a single one of his graduate students would really understand what was going on.
So he was a genius at maths, but he also had the ability to translate what he was doing into terms people could understand. So he wrote the book in 1950. He said, look, we're going to have this massively automated society. If we don't manage it carefully, the computers will end up managing us rather than the other way around. And this is why he called the book The Human Use of Human Beings: because we're going to build these systems, now let's start to give them some thought. And of course, 70 years later, that seems fundamentally prescient as we think about the ways social media may be manipulating us, or algorithmic choices, or AI systems. So in some ways, what happened was cybernetics became the background for producing the world that we're in, but that world tended to ignore the framework for being able to mitigate its influences, or to be able to manage those influences. And it feels like that's part of what we're coming back to. And that was part of my goal in writing this book: to go, here are these influences, here are these questions that came to us at the beginning; it's time to return to them.

[00:18:29.617] Kent Bye: Hmm. Yeah. And wasn't another big aspect of cybernetics feedback, trying to have iterative systems that take feedback from the environment and then adapt in some way?

[00:18:41.033] Mark Pesce: So the idea of feedback was a very new idea in the 40s and 50s. I mean, it kind of had been understood, but Wiener formalizes this idea of feedback, and then effectively Sutherland implements it in Sketchpad. So there's sort of this direct correlation between what Wiener's talking about in theory and in math, and what Sutherland then implements in code. And as soon as you get this idea that the light pen touching the screen produces something that is seen by the human visual system immediately, you now get this coupling of information and action that's both on the computer side and on the human side. And so through the mediation of software and devices, you now have a close coupling between whatever's going on in the computer itself and what's going on in the mind of the human being. And for the last 60 years since Sketchpad, what we've been doing is exploring all of that. Now with augmented reality, we seem to be approaching a point where our abilities with it are so potent that we need to think very carefully about how we deploy those abilities, because it's possible now. And I make the statement in the beginning, in the introduction of the book, that when you change space, you change us, right? When you put new things into a space, or take things out of a space, or change the way you highlight or represent a space, you are changing human behavior, you're changing human expectation. If that is now going to become a basic part of how we operate in the world, we now have an incredibly potent technology for changing how we think and operate, and we need to think carefully about that.
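The feedback idea Wiener formalized is simple enough to sketch in code. This is a toy illustration, not anything from Cybernetics itself: a system repeatedly measures the error between its current state and a goal, then acts to reduce that error, the same measure-act-measure loop that Sketchpad closed between human and machine.

```python
def feedback_step(state, setpoint, gain=0.5):
    """One cycle of negative feedback: measure the error between
    where the system is and where it should be, then act to
    reduce that error by a fraction of it."""
    error = setpoint - state
    return state + gain * error

# Iterating the loop drives the system toward its goal; each pass
# around the loop, the remaining error shrinks by the gain factor.
state = 0.0
for _ in range(20):
    state = feedback_step(state, setpoint=10.0)
```

With a gain of 0.5 the error halves on every cycle, so after twenty cycles the state has effectively converged on the setpoint: the stabilizing behavior Wiener's theory predicts for a well-tuned negative-feedback loop.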

[00:20:25.126] Kent Bye: Yeah. And I wanted to ask, because as you go through, you're laying out the story of AR and contextualizing it with the story that you're telling. I mean, for me, there are certain aspects of that story that I look at, like, say, the role of video games, or all the stuff that happened in the early nineties. So you're including some stuff that I hadn't seen, and there's other stuff that I'm familiar with that isn't a part of this particular narrative that you're telling. But then, you know, I'm more familiar with VR, and this is a book about AR. And so as you start to tell this story and history of AR, I'm just curious how you decided what to include and what not to include. As an example, the role of Oculus and Facebook isn't in this narrative at all, but for me, that's such a big part of the modern resurgence. So I'm just curious how you generally were trying to cast the limits and boundaries of how you told this story.

[00:21:13.479] Mark Pesce: So I mean, the entirety of Chapter 5 is about Facebook, so I didn't really avoid that. I simply chose to drop it in at the end, because so much of what had to happen happened prior to that, but also because the first truly realized modern AR device doesn't come to us from Facebook. It comes to us from Microsoft. It's the HoloLens. And so Facebook maybe takes that up and runs with it. Magic Leap maybe takes that up and runs with it. But the fundamental work on inside-out SLAM so that you can track around an environment, all of that original work, and the displays, and putting quote-unquote holograms into space, all of that work is Microsoft's work. So in some ways, that's the important thing. And you can see the line that I follow through the book is Microsoft with the Kinect. And it's really funny, because the Kinect, for a device that never truly succeeded on its own, is still probably one of the most successful ideas in terms of its implementation of all time, because all modern iPhones have their own version of a Kinect built into them. And all modern AR systems have some version of a Kinect built into them, right? This idea that your camera looks out into the world, you're using a depth map, all of these other things. And so you can see, from looking at the story of Microsoft and where Microsoft was going, how AR developed. Facebook does come in with the Oculus purchase, all right, and then sort of develops it. And then in 2017 it says, OK, we're going to make AR a basic part of all of our systems. Now, whether that's because they really reckoned that was the future for them, or because they wanted to basically nip Snap in the bud. Because remember, Snap was coming on very strong as a challenger to Facebook in 2017, because it was only the year before that Evan had turned down the offer to be bought by Facebook for I don't know how many billions of dollars.
And of course, Facebook tends to take its competitors and then simply produce the same set of features and drive those competitors out of business. It's been the business model for some period of time. So it was hard to know in the early days whether that was just a strategic move, but you can see, because of the amount of resources they're continuing to throw into it, that it's actually them considering the future direction. I think that it's a hedge for them because, as I say in chapter three of the book, it's pretty clear that by the end of this decade, so by 2030, a lot of the things that we're doing with the smartphone will be done via AR spectacles, right? And the reasons for that are several. Some of them are because it handles location and locative data and locative metadata in a way that a smartphone really doesn't. But the other reason is because it frees us from the looking-down problem. And if you remember when Google Glass was introduced, was it Sergey? He said smartphones are emasculating. And everyone wondered what he meant. And I remember this because that comment got a lot of play, and I know what it means. In other words, he finds the idea that a device could soak up so much of your concentrated attention as something that deprives you of agency. And the smartphone, as we stare down into it more and more, is in constant conflict with us being up and looking out into the world. And augmented reality removes that problem, but it removes that problem by putting a display around the entire world. So it's an open question whether it removes the problem by making the problem worse.

[00:24:35.078] Kent Bye: Yeah, and as you were telling this story, I was also surprised to see that Google Glass wasn't prominently featured, or, you know, I guess you could argue that it's more smart glasses rather than AR glasses. And you talk about Google Cardboard, but then, where does Google Glass kind of fit into this larger narrative here?

[00:24:53.455] Mark Pesce: Yeah. And I suppose it's because, underneath all of this, I'm operating from a very specific definition of augmented reality, which is that augmented reality is the ability to put located data into space and the ability to read that data back out again. And those are different operations, and they're done differently. Google Glass was more of a notification display for your smartphone, which allowed you to have what we would now call ambient notifications in your eye. And so in that sense, it wasn't, I think, fundamentally locative, although it is interesting and it certainly produced a lot of the foundational technologies. It's not exactly on the same path that I'm charting, where, you know, I talked about the foundational technologies being the Kinect, the smartphone, and specifically the smartphone's range of sensors, and then the kind of mapping applications that we got from Google and Niantic. When you put those three things together, you have the necessary confluence of technologies to be able to produce what we would think of as augmented reality.
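Pesce's two-operation definition, write located data into space and read it back out, can be sketched as a toy data structure. Everything here is hypothetical naming for illustration; real locative systems add altitude, orientation, permissions, and shared persistent anchors:

```python
import math

class LocativeStore:
    """Toy illustration of the two operations in Pesce's definition
    of AR: put metadata at a location, then read back whatever
    metadata is near you."""

    def __init__(self):
        self._entries = []  # list of (lat, lon, payload) tuples

    def put(self, lat, lon, payload):
        """Write located data into space."""
        self._entries.append((lat, lon, payload))

    def near(self, lat, lon, radius_m):
        """Read located data back out: return payloads within
        radius_m metres, using a flat-earth approximation that is
        fine at city scale."""
        out = []
        for elat, elon, payload in self._entries:
            dlat = (elat - lat) * 111_320  # metres per degree of latitude
            dlon = (elon - lon) * 111_320 * math.cos(math.radians(lat))
            if math.hypot(dlat, dlon) <= radius_m:
                out.append(payload)
        return out

store = LocativeStore()
store.put(-33.8568, 151.2153, "creature near the Opera House")
store.put(-33.8734, 151.2070, "lure at Hyde Park")

# Standing at the Opera House forecourt, only nearby metadata is returned.
nearby = store.near(-33.8570, 151.2150, radius_m=200)
```

The query scans every entry; a production system would index by geohash or tile so the read operation scales, but the read/write split itself is the point of the sketch.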

[00:26:08.032] Kent Bye: Yeah, I like how you start the book with the Pokemon Go moment. I remember I was traveling in New York City when it launched, and, you know, there were people clustering in Central Park and some of the scenes that you were casting there in this book. Was that your first-person account of when Pokemon Go launched? Where were you when this launched?

[00:26:25.977] Mark Pesce: So I was out at Ryden Roads, which is this little suburb outside of Sydney, because Australia got Pokemon Go about 48 hours before the rest of the world. Niantic was testing out its service, so it dropped here first, and we had it. I think it dropped in the rest of the world on sort of a Monday. It dropped here on like a Friday afternoon. And I had a friend visiting from Detroit, her first trip to Australia, and I'm taking her around. Of course, she's super jet-lagged, because she's just gotten off the plane. But we also noticed that, yes, you see this clustering: people clustering over at the Opera House Forecourt, or over in the city, or on the train, all talking around a smartphone. And of course, we actually talked to them, and they're all playing Pokemon Go. So it was a very magical period of time for that weekend. And this is something that I really state in the book: Niantic was able to place imaginary creatures into real space. And when they did that, they transformed space, because they transformed our behavior in space. And so for a period of time, Australians were caught, literally caught, in the spell that was being cast by Pokemon Go. And that is the first little sort of curtain opener that shows you how potent AR is as a technology.

[00:27:35.678] Kent Bye: Yeah, AR goes way back, but I do think that was a turning point in terms of the awareness in the mass consciousness of what it could mean. And there are a lot of other things that you go into in the book, in terms of Ingress and the way Niantic was able to gather all that information and map out the world. Maybe we could talk about that a little bit, because I think that's an important precursor to Pokemon Go: this game called Ingress, which was in some ways a similar kind of battle, where people were using their phones in sort of the imaginal realm, but overlaid onto the world with GPS. In some sense, they were able to find out the cultural places where people would gather and meet. I don't know if they had already figured that out based upon other GPS data, but some of those same spots seemed to carry over into Pokemon Go. So maybe just set the context for Ingress and what it meant for the history of capturing all this data.

[00:28:37.371] Mark Pesce: So, to set the context for Ingress, we need to bring in the other person who's really followed in the book. We have Ivan Sutherland, but we also have John Hanke, who is the founder of Niantic. He was a games developer who sold his games company in the late 90s and then founded a company called Keyhole. And Keyhole was basically designed to integrate all of the satellite data that was being photographed around the world and present it as a seamless, integrated whole. And if that sounds familiar, we'll come to that in just a second. Keyhole was used on television during the Gulf War in 2003 so that CNN could show its viewers where particular battles were happening. And of course, the Keyhole logo was on the screen all the time. That was probably a penny-drop moment at another company that was just getting going, called Google. Because Google had completely owned the lexical search space, using language to search for things, and had no offering for locative search. And so basically what they did was they bought Keyhole and used that as the foundation for Google Earth and for Google Maps. So the data set that Keyhole had been working on becomes the foundation for Google Earth and Google Maps. Hanke really starts to grow those divisions. And then at a certain point, when these things are up and running as massive products, he says to Google, look, I now want to start to use that massive locative data set to tell narratives, to tell stories. And Ingress, which starts as an internal project at Google, is a way to take the locative data set that they've gathered up and augment it with stories around this new secret form of energy and the two sides who are battling over control of it. And, you know, are you on the blue team or the green team?
And it's a capture-the-flag game where people are encouraged to use their smartphone to go to a particular location and seize that location for their team, right? So the data set they started with was basically the Google data set, the Google Maps data set. Niantic, because it now has users on site with their smartphones, which are gathering more data, builds up an even richer data set around those particular locations, which tend to be around public parks and public monuments. Then Hanke spins that out, Google lets him go, and that becomes Niantic. They bring Nintendo in to produce Pokemon Go. They now have the map from Ingress, and that becomes the beginning of the map for Pokemon Go. So the places that were the most popular in Ingress were the most well-described places in Pokemon Go. One of them happened to be a public park in Rhodes called Peg Patterson Park, which is really just a tiny little suburban park where you let the dog run and the kids play on the jungle gym. That's really all it is. It's a little square. I've been there. But there was a PokeStop there. And it's also quite close to a train station. And so within a couple of nights, people realized there was a PokeStop there. They would then drop some lures and attract some Pokemon, and then they would start messaging their friends. And over the course of a Tuesday evening, you get almost 1,000 people, young people, at 11 p.m. in this tiny little suburban park, which is surrounded by high-rise towers, and they're keeping all of those people awake. And the police have to come and break it up. Now, it's not actually a riot, but it was this accidental amplification. And I use this as the opening example in the book because Niantic didn't intend that. It emerged from the intersection of the augmentation of space and the way people behave. And this is why it's so important as an example, because if we get that at the beginning with augmented reality, goodness knows what's waiting for us.

[00:32:27.562] Kent Bye: Yeah, this is the other aspect that I think is really fascinating: the role of story and narrative and science fiction in inspiring all of this work. I know there are certain science fiction books, Neuromancer and Snow Crash, that inspired different people and technologies along the way. That topic alone, the role of science fiction in inspiring technological development, could be a whole encyclopedic book and study in its own right. But I do think there's a very interesting connection between the role of story and narrative in spinning these imaginal futures that then inspire people to go actually manifest them. And I don't know if you were adding a little hook there for Keyhole, whether that was directly coming from a science fiction novel as well.

[00:33:13.667] Mark Pesce: Yeah, that's coming from Snow Crash, from the Earth project in Snow Crash, which was a fundamental influence on me as well. I think Snow Crash definitely framed a lot of the possibility space for people. And certainly I immediately started to work on Earth visualizations, as did John Hanke. He had better resources than I did, but we both managed to do some version of that, and we've seen it grow since then. That data has become much richer and more accessible, so we start to see more and more interesting uses of it. But we're also now going to see, as people start to walk around wearing various versions of augmented reality spectacles, that the richness of that data set will increase by orders of magnitude.

[00:33:55.318] Kent Bye: Yeah. Well, the other aspect of the story, I guess, is that augmented reality itself, the way I think of it conceptually, is sort of like overlaying a platonic realm of ideal forms on top of our embodied experiences, so we're able to mash together two different realities. And part of that fusion is the process of telling stories, setting a deeper context, and being able to share meaning with each other, which I think is the essence of how a place is able to have these different histories and stories and contexts. The ways that you can start to travel back through time or recontextualize things show up in a lot of the AR art that I've seen, trying to dig into the story or the context of places that used to be slave trade locations in New Orleans, or the Columbus statue in New York City, trying to say, okay, well, Columbus was actually trading in slaves. So it's being able to take some of these different monuments that are put there by the state, or these artifacts that are produced by the culture, and really try to deconstruct them in different ways, or spin new stories, or cultivate new relational contexts and new meaning within these places.

[00:35:02.550] Mark Pesce: And you managed to sort of lay the groundwork for what I believe is going to be a big issue that we're heading into sort of blind. And this is where I really bring the book to its culmination. You're talking about recontextualizing history. Well, the recontextualization of history is a process. It's always happening, from the first time that there's a draft of history. And it's always contentious. It always makes people cranky, because people want a particular narrative that agrees with their own views of the world. And we very much live in a world now where people are fighting over narratives. I mean, we're recording this in the aftermath of the 2020 U.S. presidential election, which really does feature that as its most prominent feature. So you now have this idea that if you're recontextualizing a statue of Columbus or slave trading locations in New Orleans, what happens when someone comes along who has a fundamentally different point of view on those matters? What happens with their annotations? And how do we allow contesting views of what the truth is in a space to represent themselves, particularly when some of those views may be seen as abhorrent or provocative or simply not very helpful?

[00:36:28.855] Kent Bye: Or not true, no basis in any fact as well.

[00:36:32.473] Mark Pesce: Exactly. And so you have this idea that we are now getting to the threshold of a technology that allows us to augment pretty much any space anywhere we want. I mean, Minecraft Earth has that capability to some extent. So does the Facebook app, to some extent. But we're just really getting to this point now, and we haven't got any significant thinking about what it means to exist in a shared space. And the thing is, the locative space is a unified space. There aren't multiple locative spaces. I mean, yes, there may be parallel realities; we're not going to worry about that. I'm focused on the one we're in right now. There's one locative space. And it's funny, because 1968 gives us the Mother of All Demos, which is the birth of the human web, right? The human web of knowledge; it's the foundation for that. And we also get the Sword of Damocles, the foundation for AR. And now, in 2020, this is where they collide headlong. So we have the human web of information, which is now so rich, but completely not located in space. It's all in that weird non-space of cyberspace, which allows parallel narratives to coexist without intersecting. And it's now colliding with real-world space, where, if you have a collision of narratives, it's a real collision. And so you now have this new tension that will form around this, which is: I disagree with your view of whoever that person might be, and I am willing to present my own arguments. And you might find those arguments abhorrent, but who's going to arbitrate that? We can look at systems like Wikipedia for examples of how that can be arbitrated. But again, do we have this idea of a unitary authority over the management of space? And this comes back to the conversation that we had three and a half years ago at CES, which is this idea of who has the right to write into locative space. How is that managed?
Who is saying you have permission because you are the historical owner of this building, or historically represent some party to this building? You do not, because you only have the representations of your own, whatever, fictional reality, whatever it might be. So we get into an area that is not a technology area but is an area of culture, an area of law, right? And this is the cliff that augmented reality is running over right now without actually knowing it.

[00:39:09.540] Kent Bye: Yeah. For me, when I think about this issue, I take a philosophical lens. On the one hand, you have modernism, which tries, to some extent, to have these grand central narratives that are trying to come up with a capital-T Truth of what reality is. I think postmodernism is really pushing against that, saying, from a feminist perspective, that you have these situated knowledges, where lots of people in different places of power and privilege have different experiences and different points of view. To some extent, I'm not trying to say that we should strive towards a singular truth, because I just don't think that's feasible. I think the inherent nature of these different perspectives is that we are going to have a plurality of different ideas about stuff. A pluralist approach would be to try to have channels that let you go between these different reality bubbles, so that you could start to triangulate between them. But I think the challenge is trying, on one hand, to be comfortable with the paradox of things that conflict with each other, and letting people resolve that on their own. The other issue is this whole trajectory of complete propaganda and fake news and misinformation that is polluting the whole information ecosystem with stuff that doesn't necessarily have a solid foundation in actual facts or history. And so you have this complicated aspect, which is that there's a normal deliberative process for trying to figure out the consensus of what the truth is, which happens through the peer review process or the adversarial division of epistemic labor, as Agnes Callard says: it's impossible to avoid all falsehoods and believe all truths at the same time.
So to some extent you have to have some degree of dialectic process. But yeah, this is sort of the quintessential issue of our time: having this more sophisticated relationship to epistemology and knowledge and truth, and how we negotiate that. And so to some extent I'm okay with having a little bit of the messiness of it. It's just that all the fake news and the misinformation, and how you navigate that, complicates it all.

[00:41:07.649] Mark Pesce: And this is the central feature of our time, so we can't just gloss over it. But I do want to introduce another element that I make very much of in the book, which is the concept of agency. In fact, where we're going, space will speak for itself, right? Because we're building devices to both put metadata into that space and read that metadata back. So that space will speak for itself, and it will shape our behavior in it because of that. Where does that agency rest? And the interesting thing is, as we move to systems that are autonomous, in some cases space will just speak for itself: here's the temperature here, here's the wind speed, here's the number of people who are here. It's just sensor data that is being read to us. And so you can argue whether it's objective or not, but it's the space speaking for itself. When it is someone else speaking for a space, it's not just that there are contested territories here. It's that people actually have rights to privacy, and they also have rights to not be written over. And so this is where agency also intersects with all of the other narratives that we're telling, which is that, in fact, while narratives in the public sphere may be permitted, and we'll find ways to manage those amidst all of the fake news and the noise, narratives in private space are another matter. And much of the located sphere is private space, whether it's a building or a church or a home or your bicycle. And so we do need to consider that, in fact, it's not just this world of ideas, which is what we've gotten very comfortable with, because the web has gotten us used to that. It is now these things in space, and they have qualities that we cannot ignore.

[00:42:55.618] Kent Bye: Yeah. And I'm wondering how you start to reckon with this dimension of physical property. We have property rights over that, versus the right to have a metadata layer, which is in some sense in this etheric, imaginal realm that is non-dimensional. It doesn't have any dimensionality, meaning you could have as many of those different perspectives overlaid as you possibly want. So it's essentially like you go into a space, and for as many apps as are out there, you could have that many different reality bubbles that you fly into. But there's also the issue of public space. I think this is what we talked about three and a half years ago with the Mixed Reality Service: what is the canonical information, whether from the property owner or from some consensus-based decision, that determines what shows up? What would be the equivalent of the public web, where you go to just one interface, one app, and have some way to mediate that? Versus this other model, which is essentially like Pokemon Go, where you create your own app and you can do whatever you want, but you also have an explosion of a million of these different apps with those different reality bubbles. So I don't know if you come to any sort of conclusion in terms of whether property rights should win out over any sort of First Amendment rights, or whether everybody should have the right to overwrite, and who has the right to overwrite other people. I mean, this is sort of the essence of the debate here.

[00:44:21.331] Mark Pesce: Yeah. And in truth, I don't need to make any statements about that. I can simply look at how the courts are ruling, because the Pokemon Go problem has in fact ended up in the courts, where neighborhoods are upset because features have been re-emphasized because they show up in Pokemon Go. And so you get similar effects to the ones that you had in Rhodes. And in fact, I think the city of Rhodes insisted that Niantic remove the PokeStop from Peg Patterson Park. So there were rulings, I think, in Hollywood and rulings also in Wisconsin that are discussed in chapter five. So there is a body, you can't call it case law because most of these things get settled out of court to prevent the creation of case law, but there's a body now of at least cases that are going to judges, where the rulings are that the property owners themselves do in fact retain substantial rights here. And so we're seeing that, in some sense, it's going to be driven by litigation. And of course, the foundation of law in the West, particularly English common law, is property, right? I mean, yes, there's criminal stuff around murder and whatnot, but the rest of it is property. And so when you're talking about property, that's actually already something that the law understands really well. When you start dragging AR elements into it, it's always going to be measured against the thousand years of English common law that we have. So we can see the beginnings of that. We're going to need something that will give us something canonical enough. And I propose in chapter five the creation of a body that's similar to ICANN, the Internet Corporation for Assigned Names and Numbers, called ICLUM, the Corporation for Assigned Locative Metadata, which can oversee some canonical space there. And whether you use MRS or something else, it doesn't matter. I think we're going to need something there.
What is interesting is that the commercial responses to this are Azure anchors and ARKit anchors and, gosh, what's the Google one called? ARCore anchors, right? They all have their own basically publicly accessible databases that allow you to apply metadata to a located space, right? Which is that fundamental act on one side of what AR is. And that is going to be a shemozzle, because you're just going to have people populating these databases, and there's not going to be any way to arbitrate, I think, unless all of these groups set themselves up as the traffic cops: yes, you may publish this; no, you may not publish that. That introduces a second problem, because I, as a property owner, probably have the ultimate right to say whether something happens or not. And if these commercial firms implement systems that essentially deprive me of that right, I might be deprived of that property right before I am even aware that I have it.
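One way to picture the arbitration gap Pesce is pointing at is a toy permission check that a locative metadata registry could run before accepting a publish request. Everything here is hypothetical: the names, policies, and the idea that anyone consults the property owner at all, which, as Pesce notes, the current commercial anchor services do not do:

```python
from dataclasses import dataclass, field

@dataclass
class Parcel:
    """A hypothetical registered property parcel with a write policy
    governing who may attach located metadata to it."""
    owner: str
    policy: str = "owner-only"  # "owner-only", "moderated", or "open"
    approved: set = field(default_factory=set)  # publishers the owner approved

def may_publish(parcel: Parcel, publisher: str) -> bool:
    """Decide whether a publisher may write metadata onto this parcel.

    This is NOT how Azure, ARKit, or ARCore anchor services behave today;
    they accept anchors without consulting the property owner, which is
    exactly the gap being discussed in the interview."""
    if parcel.policy == "open":
        return True
    if parcel.policy == "moderated":
        return publisher == parcel.owner or publisher in parcel.approved
    return publisher == parcel.owner  # default: owner-only

# A suburban park whose (hypothetical) council owner allows one approved group
park = Parcel(owner="city-council", policy="moderated",
              approved={"local-history-society"})
print(may_publish(park, "game-publisher"))        # unapproved third party
print(may_publish(park, "local-history-society")) # owner-approved publisher
```

The design question the sketch surfaces is the one Pesce raises: who administers the `Parcel` registry itself, a single ICANN-like body, each platform as its own traffic cop, or the courts after the fact.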

[00:47:18.278] Kent Bye: Yeah, well, as we start to wrap up this discussion, there's what I would characterize as the worst of times and the best of times: the dialectic between the horrible surveillance aspects of all this versus the untapped latent human potentials that all of us could start to unlock. So maybe let's start with the worst of times, the more dystopic future. We have these big major tech corporations, especially two of them, Google as well as Facebook, who have surveillance capitalism as their primary business model. You have Apple, which uses privacy as a sort of excuse for the fact that they want to have all sorts of other closed, walled-garden business practices. I'm sure certain aspects of the privacy are legitimate, but it also reinforces this larger closed mindset that's antithetical to what is happening in the open ecosystems, the open platforms, the open web, and open standards in general. So you have this movement towards more and more of these closed walled gardens, but also these big major tech corporations are going to have access to some of the most intimate data, as Facebook says of Project Aria, this egocentric data capture, which is their vision of capturing not only what is in the world, but also our eye tracking data, to correlate what we're actually looking at and paying attention to and find valuable in the wider world, with these glasses that we're wearing 24/7. For me, when I think about this, I'm honestly like, you know what, I'm just going to stick with VR, because this is absolutely terrifying. And I have a hard time seeing how this is going to turn out to be anything other than the worst Big Brother dystopic reality.
So I don't know how you salvage this trajectory, or if people are just not going to care. You know, when cell phones first launched, people were freaking out that people were going to take pictures of people in the bathroom, and then over time that ended up being culturally acceptable. And so my fear is that all of this stuff that's terrifying for me is just going to become commonplace, and no one's even going to bat an eye, because we're just kind of sleepwalking into this dystopic future.

[00:49:18.124] Mark Pesce: I think one path to confront the dystopic future is to go all the way back to the beginning of cybernetics and take a look, in fact, at the relationships, the informational relationships, between ourselves and all of this new machinery. And the thing about an AR system, at least as we're thinking about them right now, is that it's not a two-element system; it's a three-element system. It's the human being, it's the spectacles, and then it's the fact that all of that data goes up to a cloud, where very non-transparently it gets transformed and then fed back to us. And that is exactly how the Facebook app works today. This is just taking it to its logical next level. And to the degree that these systems lack transparency in their operation, to that degree we sacrifice our own agency. And I feel as though understanding the systemic nature of those relationships is the first step. And it's interesting, Kent, because I've been trying to talk to people about this from 2017 onward, and then Netflix drops The Social Dilemma, and all of a sudden everyone is across this. And thank goodness everyone is across how this works and why it works. Maybe not everyone's starting to ask hard questions about it, but it's created a space for critical thinking and agency that allows us, the next time a technology is presented on a silver platter, to say, well, let's take a look at this in the context of what we already know about how these systems work. The project of preventing dystopia is not a one-off. It is a process. It is something that you simply have to keep doing. I'm trying to remember Barry Goldwater's line: extremism in the defense of liberty is no vice. So I think confronting the dystopian aspect means illuminating, as far as we can, all of the pieces and how they're working together, and how that working together is being used to deprive you of your own ability to make choices.

[00:51:23.527] Kent Bye: I've been working with this industry connections group for the IEEE. They're looking at ethically aligned design, and they're going to be creating this new initiative looking specifically at XR ethics, because they did a whole similar thing with AI and had an Ethically Aligned Design book. It's a new initiative that I've been a part of the committee to bring forth, because I see this big gap in terms of the larger ethical frameworks, and in trying to have interdisciplinary collaboration between academics and the rest of the industry to have this larger discussion. I just recently had a discussion with Nathan White, the privacy policy manager of Facebook, and I was asking him all these really hard questions, and he's basically like, yeah, those are good questions; I don't know what the answer is. So there's this big gap between these big major tech corporations, how their technological architecture and their code relate to civil society, and what role we as individuals and as a public have in what are essentially technological dictatorships rather than some sort of representative democracy where we have some say in all of this. I mean, we have the law, which could potentially step in and start to create tech policy frameworks that enable that, but it really requires a larger culture and awareness to be asking for that, or to even have that relationship set up. In the absence of that, we're essentially left with these big major tech corporations that are going to continue to create these algorithms that are driving the nature of reality, yet with no transparency, no accountability, and no way to do any sort of checks and balances: essentially technological dictatorships that don't have any of that relationship to the public.
So I feel like this is such a huge technology policy issue with so many ethical implications, and yet the technology just seems to have its own trajectory of what kinds of things people want to do. For example, with Pokemon Go, people want to have the next iteration, and that will continue to evolve and grow as experiences become compelling enough. But to me there seems to be a pretty big gap when it comes to the ethical frameworks, and how to make sense of what should be the role of government in tech policy and in governing all these companies. The role of the market is sort of playing itself out, and I don't put a lot of faith in the market sorting it out. So I feel like we're left with either the culture being informed enough to make different decisions, and/or governments stepping in to set larger policies here.

[00:53:51.062] Mark Pesce: And I think you basically touched on the reason why I wrote this book, because I knew that this was a good time, because we're really at the threshold here. Within the next 24 months, we will see the first consumer-level augmented reality systems hit the market, whether they're from Apple or from Facebook, and we know that they're both working very hard. We already have the Nreal, which is sort of being sold in Asia now; I haven't had any experience of how good it is. We're seeing the second generation of these systems right before they tip over into massive consumer adoption. So this is exactly the right moment to start to have all of these conversations, and to start to think about how to frame questions about agency, questions about privacy, questions about capacity, and turn those into policy, and then from policy turn those into activities. And, you know, I spent a fair amount of time working with regulators in the cryptocurrency space, which was a space that was largely unregulated, and because it was being used for terrorism financing and for money laundering, it has now attracted a lot of attention from the G20. And it was really interesting watching an industry that didn't want to have any regulation come to understand the value of regulation: that, in fact, it becomes less radioactive, it becomes more open. The problem with the tech giants is that the tech giants are already trillion-dollar companies. They don't need to buy in, because they can just buy the whole game. And so the question there becomes: what is the effective counterweight? And the only effective counterweight is the mass of people; it is democracy.
And so, while we're creating the place where the questions can be asked, the policies can be developed, and the solutions proposed, we also have to be building the social infrastructure so that there can be enforcement, because we can have the best policies in the world, and if we don't have enforcement, then we're going to end up much closer to your dystopia than either of us wants.

[00:56:00.542] Kent Bye: Yeah, I think that's a good summary of the dystopic parts. Moving into the more utopic areas, there was one passage that I wanted to read and maybe get your comment on, because it really jumped out at me, and you sort of alluded to it earlier. From page 111, you say: "In the age of augmentation, our view of the world veers towards a post-scientific animism. The world and all of its individual elements has interiority, agency, and feelings, both about itself and about us." When you mentioned this earlier, there was the part about human beings putting this information into the world: all of this is originating from people, the stories that people are projecting onto these places. But at the same time, what you're talking about here is this sort of animistic way in which places themselves are able to have their own agency, to some degree, to some capacity. So maybe you can expand on that, and on where you see this headed in terms of moving into a world that has this post-scientific animism built into the fabric of reality.

[00:57:01.292] Mark Pesce: So, clearly, I am now well old enough that I have lived half of my life in the web era and half of my life in the pre-web era. And I've seen how the human relationship to knowledge has fundamentally changed as a result. And we're at a similar inflection point now, but it's going to be around space rather than around knowledge. We are used to the fact that our feelings and our thoughts and our information about the world in some sense belong to us. We might refer to a smartphone to add some context, but that is not an integral activity. It is an additional activity, an activity that's drawn from need in a specific instance. When we get to a fully realized augmented reality, where we're wearing spectacles and walking around in the world, we will be consistently presented with what the world wants to say about itself. Now, that may come back to what some human being has decided the world is going to say about itself. But I think that underestimates the other part of this arc around technology, which is to build systems with higher and higher levels of autonomy. And so many of these systems will in fact be presenting autonomously the information that they want to say, or, after they've had a quick look at you, the information they think you ought to know. So there's this idea that there's not just the richness of us interacting with this locative data, but that in fact our presence in that world of locative data changes, in real time, the nature and presentation of that locative data. So there's another feedback there. And in the same way that you get the unprecedented and unexpected aspects of having the riot in Peg Patterson Park, you may have these other weird, unexpected, hopefully delightful, sometimes perhaps disturbing events happening as space also speaks for itself. And so there's this idea of the animism of the unexpected.
We think of the world as being inanimate. I'll go back to a story that I used in my last big book, The Playful World. This was in 1998, and the Furby had just come out. Sherry Turkle, the very famous foundational researcher on children and computing, gave Furbies to a bunch of four-, five-, and six-year-olds for two weeks and then took them away. A, the kids were not happy about this, but B, she asked them a series of questions. She said, all right, is your Furby more like your doll or more like your dog? Because she's asking a question that children sort out at this age, which is the difference between the animate and the inanimate. You know what the kids said? Neither. The kids created a new ontological class and they placed the Furby in that, because it had qualities of both, but it didn't belong to either class. And when I heard that story, that was when I knew I had to write The Playful World, because I realized the quality of interactivity changes our ontological relationship, our being in the world. And that is precisely what's about to happen now, as we add locative data and as we give the world the capacity to speak for itself.

[01:00:02.876] Kent Bye: Hmm. Yeah, and maybe you could sort of give a bit of a summary of what you sort of walk through in terms of the last chapter. You kind of give this vision of how this could all play out.

[01:00:14.143] Mark Pesce: Yeah, I mean, the conclusion is on a Christmas day, a few years from now, and someone unboxes their little spectacles and puts them on underneath the tree. And all of a sudden, of course, they're confronted by a riot of information. Everything has too much information in it. And what we see over the next 12 hours is how they learn to confront and interact with that information, and how the device and the systems that it is connected to out in the network are learning to modulate the information that's being presented, right? And so what we see is this strong set of coupled feedbacks. And this is something I've learned over my years in technology. You can tell how potent a technology is when you give it to someone, you watch them for the first 36, 72 hours they're using it, and watch how they couple, how it becomes integrated into their behaviors. A cell phone is one thing, a smartphone is another thing, AR spectacles will be another. They'll all have that quality associated with them. And I wanted to capture the arc of that, because we will learn what the world wants to tell us while these systems are learning from us what we want the world to tell us. And it's going to be a very interesting dance between those two. And what happens when the world wants to tell you something that you don't want to hear, or when you're asking the world to tell you something it doesn't want to, right? There are also these moments. Again, that's that idea of animism and agency. An animistic world has its own intention, and that intention can disagree with your intention.

[01:01:47.422] Kent Bye: Yeah, it's like the reintroduction of teleology and the final cause, with each object having a certain purpose and meaning and a trajectory of how it is going to manifest. And I think that, as it relates to human consciousness, we sort of have these, you know, emergences of different manifestations of these intentions as they pop up around us. That's at least conceptually how I think of it. It's like popcorn that's popping, and there's certain areas that may be exploding, just like there's things that go viral on the web. What does it look like for a location to go viral, and for that location to be aligned to a specific intention of a final cause that is aligned with other people who want to have that type of thing manifest? And so as I go to conferences, I would operate by this concept of serendipitous collisions, to be able to have these moments of concrescence where my intention would align with other people's intentions and we would be able to create something together that was fully emergent in that moment. And I feel like, you know, as you talk about this vision, it's a little bit of that going viral, these different intentions, and having those take place around the world in a very specific location. But what would it mean to have those locations gather people together in a co-located space to see what kind of occasions could happen?

[01:02:58.533] Mark Pesce: And this is exactly why I opened the book with the riot in Peg Paterson Park, because the very first example at the very beginning is something going viral. It wasn't designed to go viral, but it's that quintessential nature of people in a space, with a space that has been changed in meaning and also by their presence in it, because people were dropping lures, also changing the space itself. So it's that feedback, right? And so those viral moments are going to be truly emergent, because it won't just be the space itself. It's going to be the space itself, plus the way that space is expressing itself, plus the way we're interpreting and then feeding that back.

[01:03:36.974] Kent Bye: Wow. Well, I'll sort of end with a question, and we've sort of covered some of this already, but I'm just curious what you think the ultimate potential of augmented reality might be and what it might be able to enable.

[01:03:50.983] Mark Pesce: The promise, and this is the whole core of chapter four, because I'm not really just a nervous Nelly here, I am definitely very filled with hope, is that we will have the capacity to bring a true digital depth to all of the world. The entire built world possesses an enormous amount of interiority, information about itself, and yet it is mute. It cannot tell us what it wants us to know or what we want to know about it. So an example that I use: now, if you buy a modern Mercedes, say a 2020 Mercedes, and you open up the hood, what you see is a large plastic cover. Because the idea here is that the engine is not really meant for you to see or use, and there's no parts there that you can fix. It's just, we're going to cover it up, it's going to look pretty. You might be able to sit there and put some oil in or put some washer fluid in, but that's it. And essentially the entire world is set up like that, particularly technology. You can't look at a smartphone and really understand what it does. And there's a potential here for now being able to pull back that shade, to be able to take that cover off and actually see and understand. And in context, not just, oh, I need to know this for later, but I'm getting this now. And so that means that our capacity as people to operate within a world that is going to be very rich and very aware is going to be able to expand to meet the needs that will be placed on us, right? That we can get the information that we need, in context, when we need it, to be able to operate at our best. The potential of the web was always that we would have the best possible information to operate on. And we have that potential when we choose to make it available to ourselves. But we also have the reverse potential, which is to fill our heads full of garbage and operate on that garbage.
And augmented reality is going to take that to the next level because we will have that potential for everything in the world. All of the material world will be suffused with the richness of information that we associate today with the web. And it will be trying to share it with us in order for us to be able to make the best of it. And then it's going to be up to us to understand how we are going to navigate that world and make the best from it.

[01:06:14.063] Kent Bye: Wow. Yeah, I totally see that. And is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[01:06:23.409] Mark Pesce: This is a really good time to start thinking about what it means when there's 100 million of these devices out there, because that will happen before 2030. And not just the devices themselves, but I want people to really start to think about these devices as elements in a system, where a person is part of that system, the device is part of that system, and then the entire network computing infrastructure, with all of the algorithms and all of the databases and all of the profiling. All of those have to be thought of comprehensively. Don't point to any single element and say, well, here's the problem, man, here's what you've got to fix. We have to think about this as a comprehensive loop of information. And that loop is incredibly potent, because the closer a device comes to your skin, the more aware it is of you, and therefore the more agency it has over you.

[01:07:17.311] Kent Bye: Hmm. Wow. Well, Mark, I just wanted to thank you for joining me here on the podcast today. I really enjoyed reading through your book, Augmented Reality. The way that I think about it is that there's a lot of these existential issues of this potential dystopic or potential utopic future. And just like The Social Dilemma was able to tell the story of our relationship to technology, you're starting to, in this book, connect the past with the present and the future to tell the story of these spatial computing technologies and what lies ahead, helping orient us to where we've been, where we're at now, and where we're going, to create a larger context so that we can maybe bend the arc towards the more utopic manifestation of this rather than the dystopic one. So I think that's how I see this book and what you're able to do. It's a really fun read. And now that I've talked to you, I'll be able to go through and read the rest of it and fill in the gaps, because I think there's a lot of important details that help form this underlying narrative that you're trying to tell here. So again, thank you for writing the book and for joining me here on the podcast today.

[01:08:23.552] Mark Pesce: My pleasure.

[01:08:25.014] Kent Bye: So that was Mark Pesce. He's the author of Augmented Reality: Unboxing Tech's Next Big Thing, which was just released on Friday, January 8th, 2021. So I have a number of different takeaways about this interview. First of all, I think Mark does a really great job of looking into the past and looking at this evolution of human-computer interaction and modern computing, essentially going back to the roots of cybernetics and man-computer symbiosis, and seeing how the Fall Joint Computer Conference in 1968 was really the seed of so much of modern computing. There were some really amazing demos there. If you haven't watched the Mother of All Demos, I highly recommend you go watch it, just to see how much of modern-day computing was really imagined in that. There was also the Sword of Damocles, which was Ivan Sutherland's piece that was really the foundation of this augmented reality that eventually led to virtual reality implementations. There's this whole thread of looking at the history of computing through the lens of augmented reality and human-computer interaction, and how so much of making good user interfaces and user experiences has been about being able to track what people are doing, and then to improve that with that level of surveillance and monitoring, but eventually to the point where that surveillance and monitoring becomes a part of the commercial aspects of the technology itself. And what does it mean as we move forward that, in order for augmented reality to exist, it actually has to be surveilling what is happening in the world? And then the next iterations of that will be looking at what we're looking at in that world as well.
So there's certainly a lot of open questions here. In order for that to even work, we have to reimagine what our relationship is to our information, our private information, and to what degree having all that information available to these third parties is going to allow them to potentially undermine our own agency. And there's this feedback loop system that's going to be developing with augmented reality: we're going to be looking at the world and wanting to know information from the world, but at the same time, the world is needing to have some level of aspirational intention to be able to say, this is what we think you want, and then have some feedback mechanism to really hone in on the information that you need and want from the world versus what the world is actually telling you. There's a high likelihood of a mismatch there, and it's a matter of just trying to get on the same page about what type of information is even going to be useful. So this lays out this underlying dilemma in technology, which is that all these things are going to be amazing once they're all fully figured out. But at the same time, it's going to require us to come up with new conceptual frames to be able to make sense of the boundaries between our information, our privacy, our agency, our capacity, and our cognitive load, all these things that have to be iterated on. And in order to do that, you kind of have to commit to doing it. But at the same time, there's not a lot of ways, from a legal perspective here in the United States at least, to put boundaries around that information as to what is private and what is not. And I think that, to me, is one of the more concerning things: we're moving towards this future, but we haven't really fully worked out all the tech policy aspects of it.
And without really evaluating what has happened within the 2D realm, as we now jump into the 3D realm, we're risking making things even worse than they have been. And so Mark says that he's been trying to tell people about this for years and years. And it wasn't until The Social Dilemma documentary came out on Netflix that he was able to point to a cultural artifact that really tells the story of all these different trade-offs and dimensions that are happening. So I think that's a huge open question. That's a big reason why I've been involved with the IEEE's Industry Connections Group around XR ethics, which is going to be starting properly here in 2021. And if you are interested, I'll leave a link in the description if you want to sign up and be involved with trying to do some investigations of all the different things to take into consideration from a white paper perspective, and then how all that information feeds into tech policy, trying to balance all these things, not artificially stifling innovation, but at the same time putting some checks and balances on the information that is going to be available to these companies. So yeah, that's a big concern that I have as we move forward. And it's also part of the reason why I'm maybe a little bit more into virtual reality, because I think it doesn't introduce all these more complicated things, and I feel like a lot of it stays within the privacy of your own home while using these immersive devices. But once you go out into the wider world, then there's all these other really thorny ethical questions that start to come up. And Facebook's Project Aria is trying to push forward with that and trying to lay out some of those different potential ethical issues. But, you know, from a legal policy, tech policy perspective, it's just completely rife with open questions.
And I think that in the book, Mark starts to really lay that out and put it into a historical context, looking at this man-computer symbiosis, which was really starting to say, okay, if we don't pay attention, then we're gonna have these machines that are gonna be automating humans. And so it's about trying to be in that right relationship, where the technology is in service of humans rather than trying to control and manipulate us in any specific ways. So as we start to move forward, I think there's a lot of really interesting ways in which Mark is framing this issue in terms of the space: once you start to change the space, you start to change us, and he really sees it as this feedback loop cycle, where he describes it as this animism, where the space starts to have its own wisdom and starts speaking on its own behalf. But yet, it's not really on its own behalf, it's really in relationship to other people and what those people are saying. So I can understand what he means. But I think most of the time, what I see that meaning is the meaning that is emergent based upon the people that are there, rather than the space itself kind of speaking on its own behalf. I feel like it's difficult to have a space speak on its own behalf other than as a proxy for some other person speaking through that space. But what he says is that there's this feedback loop where spaces could go viral, and the space's meaning changes based upon the interactive feedback loops that are happening. So you have the space itself, you have the way that the space is expressing itself, and then you have the way that we're interpreting and feeding back, with those indications that the space is speaking feeding back into that space. And so it's changing the meaning of that space in its own right.
It's essentially the opening story that the book starts with, in which a space goes viral based upon Pokémon Go. Again, I don't know if it's the space speaking for itself; I think it's more the emergent relationships of the people that are in that space that are co-creating the meaning, rather than it specifically coming from the voice of that space in its own right. But I think it's a provocative way of framing it, to get us to really think about what's happening there. I think there's also this difference between what I would call substance metaphysics and process-relational metaphysics. What I mean there is that, because we have so much focus in Western philosophy around substance, we've put all this emphasis on property and property rights. What can you do and not do based upon ownership of that property? I think in some ways that's a way of thinking about the world in terms of these objects and these substances that you are taking ownership over, and that you may tend to think about augmented reality as properties of those objects that you're getting information around. But I'd like to maybe switch the context a little bit and get away from just the property rights perspective, which is still valid. I think that there's another way to look at it, which is that you have these unfolding processes that are happening, and it becomes more about the context and meaning that is co-created by the people of that community. And history is this constant evolution of that. That's what Hegel talks about in his philosophy of history: this thesis, antithesis, and synthesis. And as you move forward, you always have the ability to look back with this new additional context. It's this unfolding process of the context and the meaning of these spaces. And there are going to be conflicts. I don't think you're going to avoid people having differences of opinion about what the history and the context of that space is.
I mean, they're not gonna come to a universal consensus on anything. There's always gonna be disagreement. So I'm less concerned, I guess, about putting all of the property rights onto the owners, because you're essentially replicating the existing socio-economic dynamics, where the only people who have the right to speak would be the people who own property. That goes back to something like, you can't vote unless you are a property owner, and you get into all these various complications where the disenfranchised people who don't own the property don't have a voice to be able to speak. And I think, because it is this augmented reality layer, you can basically have anybody say whatever they want. It becomes more of an issue if you start to have these emergent dynamics, when you have these places going viral. What is the recourse for people to, you know, have some sort of control over whether or not you have a bunch of people outside your apartment, doing what is essentially this corporate-sponsored video game and making all this ruckus? How is that in relationship to the people around in that area, and how do you negotiate those relationships?
I think that is another way to frame it, rather than just saying whoever owns the property has the right to become the dictator who says what can and cannot go. And I think there is a certain element of that, especially when you come to Pokémon Go augmenting certain things, with the Holocaust Museum as an example. How do you start to negotiate those different types of context with those property owners? It's not as clear, I guess, is the takeaway I have. Because sometimes I do think that the property owners should have the right, because it's an area where you want to maintain a certain amount of decorum. But at the same time, you don't want to prevent people from expressing their free speech rights. I do think that augmented reality is an etheric sphere, an area where you should have as complete free speech as you can. The issue becomes whether or not you have the right to be on that property or not, and what happens if you start to have these different emergent dynamics, and what do you do when those dynamics are not in relationship to the people that are in that neighborhood. So it's still trying to take this relational approach, but not put the complete emphasis on the property owners, because I do think it's important for people to be able to express themselves. And I think, you know, you could just have lots of different people with lots of different apps. On the internet, you have one entity that is controlling the domain name servers and what name goes to what website. And so perhaps, in the same way, in physical reality you have the equivalent of GPS, and what is the GPS equivalent of whatever information is on, say, the public web, in terms of what everybody sees? But there could be as many apps as you want, to start to overlay all sorts of other information on top of that.
And Mark said that in the past, we've had these websites that live in cyberspace, this etheric imaginal space that doesn't have any spatial dimensions to it, in the sense that it's not connected to physical land. But as we move forward, augmented reality is going to have that. And when you do have conflicts, those collisions may happen in physical space, for people that have different stories and narratives as they're coming together and colliding with each other, to be able to start to negotiate some of those differences. So he sees that, just as the internet and the World Wide Web was this inflection point for information, now we're at this inflection point for space and how we negotiate this locative data and the metadata that is put on top of it. And there's all these different open questions around our agency, privacy, and capacity, and how to turn all that information into policy decisions that then dictate our activities within these spaces as well. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
