Here’s the panel discussion of Socratic Dialogue on the Future of AI and Immersive Technology with Alvin Wang Graylin, Kent Bye, Louis Rosenberg, and Leslie Shannon that was recorded on the main stage of Augmented World Expo on Thursday, June 12, 2025 in Long Beach, CA. See more context in the rough transcript below, and you can watch the original video here.
Here are some other relevant episodes that I’ve done recently in preparation for this debate on AI:
- #1563: Deconstructing AI Hype with “The AI Con” Authors Emily M. Bender and Alex Hanna
- #1568: A Process-Relational Philosophy View on AI, Intelligence, & Consciousness with Matt Segall
- #1585: Debating AI Project and Curating a Taiwanese LBE VR Exhibition at Museum of the Moving Image
- #1609: Framework for Personalized, Responsive XR Stories with Narrative Futurist Joshua Rubin
- #1610: Scouting XR & AI Infrastructure Trends with Nokia’s Leslie Shannon
- #1629: Niantic Spatial is Building an AI-Powered Map with Snap for AR Glasses & AI Agents
- #1630: Keiichi Matsuda on Metaphors for AI Agents in XR User Experience: From Omniscient Gods to Animistic Familiars
- #1611: Socratic Debate on Future of AI & XR from AWE 2025 Panel
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structural norms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my series of AWE past and present, today's conversation is from an AWE panel that I was on called A Socratic Debate on the Future of Immersive Technology, and we're really focusing on artificial intelligence. There were four people on the panel: Leslie Shannon from Nokia; Alvin Wang Graylin, who's formerly the president of HTC and is now at Stanford; and Louis Rosenberg, who's been a longtime member of the XR community and has been working on artificial intelligence at his company for a number of years. Alvin and Louis have actually written a book called Our Next Reality: How the AI-Powered Metaverse Will Reshape the World. I did an interview with them back on March 3rd, 2024, in episode 1353. So Alvin is really pro-AI. And Louis, I mean, he's working with AI, but he's also looking at the existential threats of AI, from privacy and identity to the world's most complicated persuasion machine. And so in their book, they're kind of going back and forth in this dialectic of the more positive and the cautionary perspectives. And then in the previous conversation with Leslie Shannon, we're talking a bit about her role at Nokia, looking at the role of compute within the context of the network, and really seeing how AI has a lot of momentum in terms of the needs for doing different types of compute for AI models, with large language models happening on the device itself, or distributing that out into the cloud, or having things in this infrastructure layer that doesn't really quite exist yet. And so in this conversation, it was actually, for me, an intention to start to dive into this topic a little bit more. Had I not had this panel, I may not have been doing some of these other previous conversations. I really wanted to reach out to Emily M. Bender and Alex Hanna, who wrote a book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. So this is really, for me, like the core of the arguments that are made in that book that I'm trying to proxy and express, you know, not as well as Emily or Alex would be able to, but I'm trying to at least give the essence of some of the critiques that are being articulated within this book. And so we're in this moment right now where it feels like the beginning of a new cycle, first of all, where AI is certainly going to be a big part of the future. I'd like to say that AI right now is probably like where the internet was back in 1989. You know, the World Wide Web hadn't come about until like 93 or 94, but the foundations of what would become the structure of the new communication modality that was going to be dominant in all of our society were really starting to be planted and take root. And so that's kind of where AI is right now, but also XR, smart glasses, robotics, BCIs, all these technologies are at a very nascent point. And there were a lot of articles being written like, oh, the internet's going to be a thing. And now we look back and we laugh, like, of course the internet was going to be a thing. Look how huge and pervasive it is in our lives. And I have a sense that AI is going to be a little bit like that.
Like it's going to be in some ways inevitable, although Bender and Hanna are hesitant to say there's an inevitability to what's going to happen with AI. But also, we can see all the impact of social media and all these other things on our democracy, and that hasn't always been a very positive impact on our mental health, our communication, our democracy even. So we're in this really peculiar time where now we have AI, which seems to be an automating technology. And also we have, in the political context, frankly, a lot of authoritarian powers that are having democratic backsliding and using things like artificial intelligence in order to justify their type of censorship or memory-holing. Or there's a whole executive order against woke AI, and there's all these things that say you can't have these different types of DEI in the large language models. This is kind of like ways in which the tools of authoritarianism are being blended into artificial intelligence in order to facilitate that. And we've already seen that with the ways that police entities have used artificial intelligence. I mean, the EU, in the AI Act, had a whole list of things of like, we should not be doing these things because of all the different harm that can come from doing them. But in the United States, we don't have that same list of here's the things that we should not be doing. And here we are just letting this unfettered technology be used by all these tech billionaires that are using it in a way that's not following the principles of responsible innovation, with having red teams or even thinking about some of the different ethical implications. It's very much a move fast, break things type of moment that a lot of these things are being pushed out in. And so there's the caution that both Louis and I are voicing, like, hey, what are we doing here? And then I think Alvin and Leslie are also digging into the positive aspects of these technologies. So at the end of this conversation, I think I really agree with a lot of what's needed in terms of regulation. But the current political environment right now is such that it's not an environment where anything is going to be regulated at all. It's basically like these companies can do whatever they want. And so we're kind of walking blind into this new situation, which just gives me this sense of unease. And so we're at this turning point right now in terms of which path this is going to go down. And I can already see a lot of writing on the wall that we're not going in a direction that is going to provide some protections for consumers, let's say. So anyway, this is a little bit of my rant on top to fully contextualize, and I'm very curious to give all the other participants their due to be able to articulate all their different points. But this panel discussion was a lot of how I was orienting and preparing. So there were a number of different podcasts I just want to call out that I think are worth also looking at. If you're interested in this topic, I'll put links in the show notes. Certainly episode 1563, Deconstructing AI Hype with The AI Con authors Emily M. Bender and Alex Hanna. I've mentioned that already. Whitehead scholar and process philosopher Matt Segall did a whole paper on his Substack titled A Process-Relational Philosophy of Artificial Intelligence, looking at AI from a process-relational perspective.
And so I did a whole deep dive conversation with him in episode 1568, digging into some more of these process-relational perspectives on AI. And then on Sunday, June 8th, 2025, I was in New York City for Tribeca Immersive, and I went to the ONIX Summer Showcase, and there was a piece there called The Great Debate. One of the co-directors was Michaela Ternasky-Holland, and we did a whole deep dive into that, kind of debating different aspects of artificial intelligence. So that's a conversation that I definitely recommend watching. There are my previous two episodes, with Joshua Rubin, where we talk around kind of deconstructing the magic of AI, and with Leslie Shannon, talking about some of her perspectives one-on-one. And the other thing that I'd point out is the two conversations that are going to be the last two in this series, episodes 1629 and 1630. So in Niantic Spatial's demos that they were showing there, they did have an AI agent demo that I'll be talking a little bit more about, kind of like this intersection between AI agents and geolocated spaces. And then also Keiichi Matsuda, who is talking around these metaphors for AI agents, looking at everything from monotheistic, omniscient gods to polytheistic, animistic familiars. So that's a good conversation that I'm going to end on, just because I think that Keiichi is looking at all these different ethical implications of the technology, but also at how we can start to work with these agents in a way that has us reimagine our agency, in a way that goes beyond our existing human-computer interaction paradigms when it comes to engaging with technology. So we'll be covering all that and more on today's episode of the Voices of VR podcast. So this Socratic debate on the future of AI and XR happened on Thursday, June 12th, 2025 at Augmented World Expo in Long Beach, California. So with that, let's go ahead and dive right in.
[00:07:56.337] Alvin Wang Graylin: Okay, so before we start though, I guess I want to give a little context. Even though I know the agenda says that this is a Socratic debate, I actually want to clarify: I think this should be a Socratic dialogue. And being a dialogue means that it's an interactive way not for us to decide who's right or wrong, but for us to co-discover a better truth, because I think that's the spirit that our community should be working in. And it's also something where we'll come up with a better answer when we think about it together. And I think it's key that we maintain the idea of mutual respect and mutual understanding, and that we maintain curiosity and a constructive nature in terms of how we dialogue, as well as having collaborative insights that we can come up with together. So I just want to put that out as the opening thought so that we're not getting overly combative. But we still might. Why are you looking at me, Alvin?
[00:08:55.926] Leslie Shannon: But Alvin, I think you move straight to the collaborative without mentioning the fact that we are actually coming from two very different camps here. Oh, yes. And that's kind of why we're all here.
[00:09:04.894] Alvin Wang Graylin: The right camp and the right... So we have the side of the light, the side of truth and justice. And the force. Wow, Alvin. Wow. And then we have the side that is a little bit more pessimistic about the future.
[00:09:19.565] Kent Bye: Or shall we say humanist?
[00:09:22.387] Leslie Shannon: Or realistic. Or maybe ostrich with head in the sand?
[00:09:26.689] Alvin Wang Graylin: Are we diving into the competitive side too much now?
[00:09:30.290] Leslie Shannon: Yeah, sorry, sorry. We instantly veered away from the collaboration. Apologies.
[00:09:33.512] Alvin Wang Graylin: That's right. So let's move back to the middle here. So I guess before we start, let's just have a quick one-minute intro from everybody so that we can set a base. So maybe Kent, you can start us off.
[00:09:43.636] Kent Bye: Sure. So I'm Kent Bye. I do the Voices of VR podcast, and I've been covering the XR industry for 11 years. And I also did about 122 interviews with AI researchers from 2016 to 2018. And so, yeah, I guess in terms of the opening statement, I don't believe that AI is going to be the savior, the thing that is going to make XR the thing that everybody wants to use. I think there are a lot of unethical aspects of AI, in terms of the data colonization, the impacts, the harms of AI, and it feels like, when I come to an event like this, everybody is kind of bypassing and overlooking some of the limitations and constraints of AI technologies. And, yeah, everyone's just sort of all in on large language models, when there are other types of symbolic systems that I think are much more nuanced. And so, yeah, I guess I'm taking a little bit more of a critical take, in terms of maybe we don't need the AI, or maybe the AI is not going to save us in that way. At the end of the day, we need to be in right relationship to the AI, and right now, it's not that. It's a tool of empire that is consolidating and automating lots of things, and consolidating wealth and power, in a way that can lead into other aspects of enabling authoritarian governments and control and surveillance. That seems to be a part of the discussion that doesn't happen too much in contexts like this. But I just want to enter in that AI could be a tool for fascism, and that we need to also be aware of what's being automated and who benefits.
[00:11:11.603] Alvin Wang Graylin: Louis, do you want to?
[00:11:12.764] Louis Rosenberg: Yeah, so I'm Louis Rosenberg. I've been involved in XR for 33, 34 years, starting in AR and VR way back. And then for the last 10 years, I've run an AI company, Unanimous AI. And so I've been thinking for a really long time about the intersection of XR and AI, and really my whole career has been focused on using technology to amplify human abilities. I'm also known for writing and speaking a lot about the dangers of technology, which is not because I'm against technology. I'm a technologist. It's that I can very easily see how these technologies can be abused and misused, and not necessarily intentionally, but accidentally. And so if I look at the trajectory of my career, you know, way back it was really very clear that there are so many things we can do to empower people. And really, over the last five years, I started to get more and more worried that we're actually developing technologies that will demoralize people or even ultimately replace people. And so when we look towards this future of XR plus AI, the world that I see is one where very soon we're all walking around with devices that can see what we see and hear what we hear, and guide us visually and also whisper in our ears, powered by AI. We're very close to that. A lot of people here will be very excited about that because it will sell a lot of XR devices. And yet we really have to think carefully about whether we're entering a world where AI whispering in our ears 24 hours a day as we're navigating our life will start to become demoralizing to us. Especially that day when we realize that the AI whispering in our ears is maybe smarter than us. What will that world be like, where the AI in our ears is maybe something we trust more than the voice in our own heads? And I know that sounds a little extreme, but it's absolutely possible based on the trajectory we're on.
[00:13:28.483] Alvin Wang Graylin: So I'm Alvin Graylin. I have also been involved in AI and XR for 30-plus years, and studied this at MIT and then University of Washington, and have founded four different companies, three of which were AI-related. In fact, I released the first natural language AI search company in 2005 in the world. And essentially what ChatGPT was doing two years ago, we were doing 20 years ago. So I've also been thinking about this for a long time. And actually, Louis and I recently wrote this book last year that talks about this. If you haven't read it, it's a good book. But it talks about this issue of how these two technologies are going to come together and change our world. And I also agree, actually: like any technology, there are going to be positive and negative use cases. It's always a two-edged sword. The key is to recognize what are the dangers and what are the benefits, and try to maximize the benefits as we minimize the potential downsides, because we know those downsides are there. And the fact that we recognize them ahead of time will give us the chance to actually reduce the negative impact. But then, knowing the positives are out there, we have the hope and the will to actually go make the investments, and maybe make some short-term sacrifices, to allow the benefits to arise. And we've seen over the last 10,000 years that technology has brought significant improvements to our life, whether it's giving us higher quality of life, longer life, better health, more food. There are so many things that it has done for us. So AI is really just the next phase of technology bringing that in a different form, although it may actually, in some ways, take away some of our agency if we allow it to. So it's really up to us how we utilize and apply this technology.
[00:15:16.725] Leslie Shannon: So, hi, I'm Leslie Shannon. I'm head of trend and innovation scouting for Nokia, and I've been involved in the XR community for about the last 10 years. I've written a book on XR called Interconnected Realities. And I've also written a book about how young people use digital media. I wrote that with Catherine Henry, and that one's called Virtual Natives. And so I've got two kids, and they're both in college right now. But watching them grow up in this digital world was a huge part of why I ended up writing the book with Catherine. And watching them and comparing that to my own childhood, you know, to use the now tired metaphor, but still quite accurate: I was in high school when the pocket calculator came out. And it was a huge scandal. My physics teacher would let me use the pocket calculator, but my chemistry teacher and my calculus teacher? Absolutely not, no way. Now, when my kids were in high school, it was required that they buy a TI-84 graphing calculator. I'm like, what the hell is this? You guys should be doing the graphing with paper and pencil like I did. But the bottom line is that this is a tool. And this tool arrived, and it meant that we no longer have to spend time on the two plus two is four. We can cut straight to the higher-order thinking. And now this tool has evolved, and teaching has evolved with it. And what I see, especially now with the large language models, is that we now have a tool that allows the same kind of facility and the same kind of support for language, and it's about bloody time too, and for coding. So let's get that lower level taken care of so we humans can spend more time on the higher-level thinking. And another thing that I discovered in the work for Virtual Natives (I'm sorry, I'm talking to these guys and not you), another aspect of this, is that through gaming, because kids today have really grown up gaming their entire lives, they have a sense of agency that no other generation has had, because they're used to accomplishing things in games. They're used to leveling up. And one of the things that frustrates them when they get out into the physical world, the adult world, is that they no longer have that sense of agency. And that's why you get young people founding things. That's why you get David Hogg doing what he's doing, challenging the Democratic National Committee, because this generation, thanks to gaming, thanks to electronics, has a sense of agency. And you put that together with AI, and I think amazing things are going to happen.
[00:17:38.958] Kent Bye: Alright, so this is sort of an unstructured Socratic dialogue, and I want to just throw out that it's very possible that we're all in a state of collective delusion around AI, and that there's a big con that's going on. There's a book by Emily M. Bender and Alex Hanna called The AI Con. And their basic thesis is that artificial intelligence as a term was coined by John McCarthy at the 1956 Dartmouth Conference on Artificial Intelligence. And even then, it was just a disparate set of technologies, used as a marketing term that is allowing us to project human capabilities onto something that really isn't as capable. And so the more we glob all these technology terms together and call it AI, it gives us the perception that it's actually way more capable, from computer vision to the large language models, all these things that we think of as this magic technology. And the more that we think about it as this magic technology, the more that we surrender our sovereignty to this magic trick that's happening. And at the end of the day, what's behind it? Karen Hao has a book called Empire of AI, and she's saying that this is a tool of empire. There's data colonization. All the data are stolen. All the intelligence that's coming from AI is actually from humans, human data that was given without consent. And so I feel like we're in the middle of a... you know, we look at the inauguration of Trump, and we see all these oligarchs, all these billionaires that are promoting AI, and AI is this tool of consolidation of wealth and power. And so we're in this realm now where those of us in the technology industry are collapsing into political influence, and what is being automated, what kind of data are being used, we're starting from a baseline of empire and a foundation of sand, in terms of not being in right relationship in an ethical sense. So I think, from there, it just seems like we're going down a really dark path.
[00:19:23.410] Alvin Wang Graylin: Did you want to respond?
[00:19:24.450] Leslie Shannon: Yeah, so when I look at the concept and the topic of AI versus human creativity, one of the things... and you're exactly right. I mean, I don't think there's any argument against what you're saying, but I think to make that the whole picture is reductionist, and it has the risk of throwing out the baby with the bathwater. Because for me, the issue with having trained everything on data that already exists on the internet is that it's all trained on yesterday's data. And you are not going to get anything original or new out of these large language models. It's going to be yesterday, necessarily. And so if you want to be human and creative and create something brand new, that does have to come from you. One of the points that she makes that resonates with me most strongly is that the voices of unrepresented people are not in that data. And so one of the greatest works that we can do is actually bring those voices in now, so that that becomes part of what we build on. But it's all the more challenge for us to create new things that will make this actually even better.
[00:20:26.894] Louis Rosenberg: So I'm going to agree with both of you, but also disagree, in that I feel like we are in a state of delusion, but I would say we're also simultaneously in a state of denial, because we're in denial about how powerful these systems actually are and will get. And we all kind of go through those states of denial a bit. But for people who are working regularly with these systems and really trying to extract the most you can out of them, you're amazed every day. And then you realize we all have this natural tendency to be in denial. And so I'm torn on this issue, because I know that these systems are drawing upon human content. And they're drawing on human content from the past, although they're much, much better now at grabbing real-time information. And yet every single day I'm surprised at what they're able to do with that. And so I think we are in a combined state of delusion and denial, and we're just going to be very, very surprised at the capabilities of these devices.
[00:21:35.600] Leslie Shannon: What's your greatest fear?
[00:21:37.521] Louis Rosenberg: What's my fear? Yeah.
[00:21:39.243] Leslie Shannon: I mean about this, not about bedbugs or whatever. Spiders.
[00:21:46.048] Louis Rosenberg: So my biggest fear is when we start to combine AI and XR in ways that we're going to, that are so interesting, where we embody these AI systems into avatars that we speak to and we engage with. Because we're not evolutionarily prepared to speak to an agent that looks human but is being controlled by an AI. Millions of years of evolution have given us reactions where, from the facial expressions and the vocal inflections and an earnest nod as you're talking to someone, we believe we can infer their motives. We believe we can infer their inner feelings. That's what we're designed to do. We are so good at interacting with another person and inferring so much from how they act. And now we're going to be entering a world where we're going to be interacting with AI agents that look human, that smile at us just like humans, that have vocal inflections just like humans, that nod along with us speaking just like humans. And it's going to undermine all of our evolutionary protections, all the things we have where we can tell, is somebody being earnest, or is somebody trying to deceive me?
[00:22:58.087] Leslie Shannon: Have you never been fooled or misled by an actual human? Because that's possible too.
[00:23:03.553] Kent Bye: I think this is like moving surveillance capitalism to scale, where we're going to be psychographically profiled by all these technologies, and by using avatars, it's going to be a way of using the affordances of our humanity to persuade us, basically undermining our cognitive liberty and our sense of self-determination. So I feel like we're on a path towards all the data coming together. But yeah.
[00:23:27.222] Leslie Shannon: But the advertising, does advertising fool you? No. No, but here's the thing. I mean, I think you're giving it too much credit.
[00:23:33.812] Louis Rosenberg: One at a time, children. If you're talking to an AI agent that looks photorealistic, powered by AI with all of the world's knowledge at its fingertips, it will be able to read you, read you with superhuman capabilities. It will read your expressions, it will read your microexpressions, it will read really, really subtle changes in facial complexion. Like, we can read when somebody blushes. It will be able to read things that we wouldn't even notice. So it can read you with superhuman capabilities. Your brain thinks that you can read it. But it's just a facade.
[00:24:07.685] Leslie Shannon: Like, we are going to be... But what's your fear here? What does this lead to?
[00:24:10.646] Louis Rosenberg: My fear is that we're developing the ultimate tool of persuasion. We're creating a tool that could be controlled by any outside entity, whether it's large corporations or state actors or anybody, but we're developing a tool that can be deployed at scale, can interact with us on a one-on-one basis, can have knowledge about us when it engages us, read us with superhuman abilities, We can't read it, but we think we can. That puts us at a really big power disadvantage.
[00:24:41.741] Alvin Wang Graylin: Actually, I agree. I agree with almost everything, I think actually everything, that Louis is saying in terms of what the technology is capable of. And I think the issue is, because it's capable of that, we have a responsibility as a society to actually have regulators, have government, step in, so that it cannot be misused, because the temptation to misuse it the way that you are saying is absolutely there. And we are already seeing it, as Kent was saying, in terms of the empire of AI. There are definitely a few companies that want to be emperors, or people that want to be emperors. And it's a very dangerous place, because right now they are going to be given superpowers that they will probably misuse, because they already are starting to do that. So I don't think we disagree on that. I think the key is, what do we do about it? Now we realize the risk is there, so how can we influence the government so that we can both have our cake and eat it too? That's really the question I think we want to answer.
[00:25:38.419] Kent Bye: So The AI Con details how a lot of these same people, like Sam Altman of OpenAI, will go in front of Congress and do all this AI doomer talk, like, this is such a superintelligent thing. And from Emily M. Bender's and Alex Hanna's perspective, an AI doomer and an AI booster are promoting the same thing, which is that AI is super capable, super intelligent. For the doomer, say, it's going to kill us all. And for the booster, it's going to solve all of our problems. So it's kind of techno-utopianism. And they both serve to promote this kind of over-inflated AI hype that is going way beyond what these technologies are able to do. But I think those two camps are one thing. Another thing: we have congressional Republicans coming out saying we should have a 10-year moratorium on all regulation of AI. The fact that we don't have a definition of AI means that these companies are trying to get unmitigated power to be able to do whatever they want without any sort of regulation. So when you talk about what regulation is going to save us, if you look at what's actually happening, all the tools of empire are basically pushing for it. They're avoiding any accountability at all. So that's not...
[00:26:41.040] Alvin Wang Graylin: I think we're after the same thing, because I think that's exactly the problem we need to solve: today these companies have too much power because they are able to lobby the government. So there's regulatory capture. They're using regulation to help themselves. What we need to do is actually educate the policymakers so they understand that what they're doing is actually not helping society. They think they are helping society. Well, that hasn't been working so well so far. The doomers that you're talking about, they're doing it because it makes them feel special. It puts them in front of the press, and they get a lot of clicks here.
[00:27:10.525] Kent Bye: They're using AGI, artificial general intelligence, as a boogeyman that is unspecified, to basically distract everybody with something that's not specified.
[00:27:20.129] Alvin Wang Graylin: That's the side effect. That's the side effect. The reason they are doing it is because it makes them feel special. It gives them recognition. And for the people on the other side, the hyper AI boosters, what they're doing is trying to make themselves important so they can get funding, so that they can then grow their business even more. So they actually have two different purposes. They're going to get somewhat the same effect in society. Both are actually negative. Right. But what we need to do is figure out how we can educate those people who are making real policies today to understand where the reality is, what the lines should be, what the guardrails should be, so that we can get the best out of these technologies. As Louis says, these things are getting more powerful every day. The things that were in The AI Con, I read the book, and it's all based on studies that were two, three, four, five years old. And of course, those things that were limitations then were real at that point. But the technology is progressing so quickly. Every week we have breakthroughs, where things that we thought were not possible are now possible. And given that speed of progress, our policies are definitely not able to catch up to that speed. And if we're looking backwards, we're going to be solving yesterday's problems when we need to be solving tomorrow's problems.
[00:28:31.912] Leslie Shannon: Well, and I think it's really important that we give agency, and that we recognize the agency of the public. When Elon Musk overstepped and went berserk on DOGE, Tesla share prices fell, because the general public saw that they did not like what he was doing, and they reacted against it. Now, so far, AI and the way it's been handled has been a little bit more out of the view. But if a company oversteps, the public does respond. We've seen this, and then companies react to that. So on this topic of agency...
[00:29:02.112] Louis Rosenberg: If I was trying to imagine what could I design that would take away people's agency, what would I come up with? I would say, let me put some cameras on them so I can see everything that they see, and let me put some microphones on them so I can hear everything that they hear.
[00:29:19.562] Leslie Shannon: Have them put the cameras and microphones on themselves. Even better.
[00:29:23.704] Louis Rosenberg: And then let me put an AI that's whispering in their ears. So it's watching what they're doing. It's watching who they're interacting with. And at any time, I can automate this to give them guidance. And that guidance will feel very helpful. But if I wanted to manipulate them, it's the perfect tool. So I also can see all the amazing positive things you could do with the same exact technology. And this is the problem, is that the same exact technology can have remarkable positive benefits. So I agree with Alvin that this is about, can we control it? It's just very difficult to see a clear pathway to controlling it unless the public is aware of the potential.
[00:30:08.960] Alvin Wang Graylin: Are you saying controlling the technology or controlling the people managing the technology?
[00:30:14.002] Louis Rosenberg: Well, it's controlling the people, but ultimately it's controlling the infrastructure that we've built. We're building this infrastructure that is going to allow this super personalized AI that basically is riding on your shoulder, going through life with you. And it can be a superpower. But unless we educate the public that this same exact infrastructure could be the most insidious tool of persuasion and manipulation and influence that humanity has ever created, we won't get the pushback. And I agree with you. The public has to demand that these things are safe. It's not going to come from the top down. It's going to come from the bottom up.
[00:30:57.547] Leslie Shannon: But, Louis, everything you say about manipulation and persuasion is also true of TikTok. Because we've got people right now who are totally anti-vaccination, despite all the science, despite all the evidence, because of an influencer who is a face that they trust, that they're looking at. We don't have to wait for avatars. This is here now.
[00:31:15.542] Louis Rosenberg: But imagine if that influencer, imagine if that influencer, instead of recording a pre-recorded video, was an AI agent and it was adapting its pitch to you personally. It was watching your reactions personally.
[00:31:29.359] Leslie Shannon: I think a lot of the evils that you talk about are already here.
[00:31:32.523] Louis Rosenberg: Well, they're here at what we will look at as a primitive level. We will look back and say, remember those quaint old days when TikTok influencers were what we worried about? We'll have AI influencers that will optimize their persuasion.
[00:31:43.835] Kent Bye: Go ahead, Kent. Let's go back to 1989, when the internet was first starting, and then we look at all the benefits of the internet, but all the downsides, right? We're sort of at a similar point now, where as we put all these technologies out, there are no checks and balances for any of this stuff, in terms of how to ensure that it's not going to totally destroy democracy in our society. And so we're in this sort of move fast and break things mentality. I do want to respond to one thing that Alvin said, which is this idea that if you wait for a couple of years, it's out of date. This is basically part of the cancer of the state-of-the-art, SOTA, mindset, which is that you have a bunch of benchmarks with numbers and quantification of performance, and everything gets reduced down to this number. So when I was at the International Joint Conference on Artificial Intelligence, I was talking to AI researchers, and one of them said to me, the empirical results are far outpacing the theoretical results. What does that mean? They have no idea how any of this is working at any sophisticated theoretical level. They're making a number go up. We don't have any transparency into any of these data sets. They're not telling us what they're training on. It's all stolen data. But also, in order for us to properly evaluate how these are performing, we need to have access to all the data, and we don't have that. And so there's a way that it gets quantified in a number, they optimize for that, they tune it for the test, we have the state of the art, and you have this perception that if it's not the best at that moment, it's somehow outdated. The Stochastic Parrots paper by Emily M. Bender and others collaborating on deconstructing large language models shows how it's just like a bunch of gibberish. There's no understanding. It's the structure of language; it's not the meaning of the language. And so there's still this over-reliance upon large language models. And the reason why they exist is because it's easy to just throw a bunch of GPUs and data at them and expect that scale is going to make it all work. But at the same time, there's no understanding, no comprehension.
[00:33:35.457] Alvin Wang Graylin: What you're saying right now was probably true for maybe up to about three, four years ago. But I think over the last few years, there's been significant change, not just relying purely on scaling. And I think scaling is kind of already hitting a wall of what we can do, because it's hard to get millions and millions of GPUs into a center. It's already probably at a few hundred thousand today. You have to 10x that just to get about a 2x increase in performance. Almost all the gains that are happening today are coming from changing techniques, changing algorithms, distillation, and using tools. So I understand that people are afraid of saying, oh, these things are getting so smart that at some point they're going to be smarter than us. And the evals definitely are showing that. But what's more important is you can see it's actually able to do real-world work now. There are agents now that you can hand off a task to, and they will do something that a college grad would take a month to do. They would do it in 10 minutes. And that's real impact. It's not just that it did an eval. I don't really care about evals, actually. But I do care that when I ask it to, say, read these 100 papers, find the key things that are relevant in there, write me a summary, and then tell me what the flaws in these arguments are, it will come back and give you very, very coherent answers. That was not possible two, three years ago. So what this allows is that now all of those paralegals that used to do this for cases, or all of those accountants that used to do this for people's filings, can now be replaced by these technologies. And that is going to have real economic impact and real job displacement in the workforce, and it will come in a very short time frame.
[00:35:15.009] Kent Bye: And I think that's very dehumanizing, in the sense that it's projecting higher capabilities onto the technology than what it's actually doing. Because we are people that are embedded within a context, people that have situated knowledges. And there are a lot of ways in which all technology has automation. But I think the trick of AI right now is to reduce humans down to a computer and their outputs of language, and expect that AI can replace them. And I think that's just a part of the dehumanization that's happening.
[00:35:41.451] Alvin Wang Graylin: I think I'm actually saying the opposite. Actually, the reason I'm saying this is because I want to make sure that people are taken care of. If you say that, oh, you'll never replace humans, then there will be no safety nets provided for those humans that get displaced. Because I talk to company managers and CEOs all the time, and I can tell you every one of them is saying, how can I use this to save money? How can I use this to reduce hiring? How can I use this to reduce my workforce? And the first moment they can do it, they will do it, because it will affect their bottom line. And that's what a capitalistic market is prioritizing. So it's not that I want to say this is replacing humans, but to a business person who's making financial decisions, this is how they will make their decisions. When it's good enough, they will use it to replace the human that used to do this work.
[00:36:26.938] Leslie Shannon: Well, but technology always brings change. And change always means that jobs change and the tasks that need to be done change. Nokia is a company that's based in Finland. I've lived a long time in Finland. My husband is Finnish. And Finland was a poor little peasant country that was colonized by the Swedes for 500 years and the Russians for another 100 years. And Finland itself didn't get independence until 1917. And their language was one of the last ones in Europe to actually be written down. The first novel in Finnish was not written until 1869. And so that means that in the 19th century in Finland, you still had the poets that were working in the same style as Homer. They were the oral poets who were still singing these incredibly long epics that they'd memorized, this very rhythmic thing. When written language, when the alphabet, finally came in and was used in Finland, they lost that tradition, and we're only a couple of generations away from people who still heard that. But it's gone now. And you know what's happened? The Finnish population now can all read and write, and they are literate, and there are all kinds of advantages that came, even though this art form has been lost. And in fact, I think it was in Homer's time that people were lamenting... oh, it was Socrates, I think, lamenting that writing would destroy our memory. So things are always going to change. But there's benefits. We need to lean into the benefits.
[00:37:52.370] Louis Rosenberg: I think that that's denial. I think that it's to say, and you hear this argument a lot, that there's always change and there's always new jobs that come as if this is the same.
[00:38:04.981] Leslie Shannon: None of our jobs existed 20 years ago.
[00:38:07.363] Louis Rosenberg: This is different.
[00:38:08.564] Kent Bye: But it's also being trained on the stolen data of those workers. So oftentimes it's based upon the intelligence of the people that are being replaced. So AI, large language models, wouldn't be anything without the human data. Steal the data and then replace the people. I think that's a different equation.
[00:38:25.322] Louis Rosenberg: Regardless of where the data comes from. Just really quick. We have never gone through technological change this quickly by an order of magnitude. I mean, a lot of us here have been working in high tech for three decades, which most people would say is the computer revolution. We've gone through the computer revolution. We've gone through the phone revolution. We've gone through the internet revolution. This is different. This is an order of magnitude faster. And it's progressing in a way that nobody really can predict or understand. And this is the first time through all those revolutions where everybody feels overwhelmed. Everybody who works on it every day feels overwhelmed by the speed that it's progressing and the lack of clear understanding of how far it will go. It's just...
[00:39:18.197] Alvin Wang Graylin: This is just different, and this is actually why we need to put safety guardrails in, so we slow down, so that we are actually able to have the economy and our society adjust to this type of change. And these changes are coming, and a lot of jobs will go away, right? I mean, you hear all the experts saying something like 90-plus percent of programming will be done by AI in the next 12 months. That's what they say; I think it'll probably take longer. But, you know, there are 30-plus million programmers in the world. That's a very large number of people that will be affected by that. Not to mention all of the customer service agents and, you know, the accountants and the other things.
[00:39:55.240] Louis Rosenberg: But how do humans adapt when it would... Well, none of those jobs existed 50 years ago, so those are all new jobs anyway. But programmers, that's a profession that takes many years to learn that skill. There are skills like radiologist; it takes a decade to learn that skill. Both of those jobs are going away. There will not be human radiologists, other than to be able to sign insurance documents.
[00:40:21.863] Leslie Shannon: No, actually, I have a friend who's a radiologist and he thought that was going to happen and he came back to me and he said, no, I'm good. I've got a job for at least the next 20 years.
[00:40:29.383] Louis Rosenberg: I mean, you may be wrong, but- Ask him in two years.
[00:40:32.524] Alvin Wang Graylin: Okay, we'll do, we'll do. The technology's definitely improved significantly.
[00:40:37.126] Louis Rosenberg: But if it takes a person 10 years to get a skill like that and an AI can replace it, what are they gonna get retrained to do
[00:40:45.509] Leslie Shannon: But why did it take them 10 years to discover that, to build that expertise? My father was an actuary, and he first did the actuarial training with a slide rule. Actuaries are gone, too. Oh, I know, I know. And calculators came in while he was working and really changed the nature and the importance, the hierarchy that his job had.
[00:41:08.014] Louis Rosenberg: Automating calculations is one thing. Automating planning and reasoning is where we are now. And it's a totally different thing. We're enabling these devices, these systems, to reason and plan at human level and ultimately beyond.
[00:41:23.558] Leslie Shannon: Well, wouldn't it be great? Before self-driving cars came out, I had a debate with a friend. And I was like, ooh, self-driving cars, I'm not really sure. He's like, are you kidding? Almost all accidents are the result of human error. Don't you think that a machine would actually drive more safely?
[00:41:38.931] Kent Bye: No, they have to be trained for lots of... Right, they do have to be trained.
[00:41:42.492] Alvin Wang Graylin: Once they are trained, they are significantly more safe than humans.
[00:41:46.653] Leslie Shannon: Exactly.
[00:41:48.374] Kent Bye: I think we should maybe do some final statements and then maybe... Actually, I do want to make another point though.
[00:41:55.236] Alvin Wang Graylin: I feel like right now we're still arguing over how we are viewing the world today, because we're so afraid of losing our jobs. I think we actually should be flipping this around to say, what happens... Let's just say, let's just say these things come, and now I don't have to work, and now the government's actually come in and given everybody universal basic income, schooling, housing, all these things. Look, I know in the US that sounds absurd, but in parts of Europe and other parts of the world, that actually is something that already exists. So the thing to remember is, if we don't do this, if we don't have a proper infrastructure safety net for our population, the US will actually be the worst off. Even though right now it is the most advanced country in the world in AI, it will be the worst off once this deploys, if it's not able to adapt to the changes that will come and the chaos that could ensue if we don't have these safety nets.
[00:42:51.638] Kent Bye: And a lot of people promoting these AI technologies are promoting more authoritarian types of government that are destroying social services.
[00:42:59.181] Leslie Shannon: Well, okay, I've got a rebuttal to that, because I work for Nokia, a Finnish company, as I've already mentioned. Within Nokia, we have made a pact with the employees to never fire anybody and replace them with AI. And because we have made that pact, in fact, we encourage our employees to discover how they can use AI to do their jobs better and faster, so they can actually do more with less time, and have it be the people whose jobs it is doing that discovery. That gives them ownership, they lean into it, and they are, from the inside, working on how to make us a better company that can serve our customers better using the tools that AI brings to us.
[00:43:41.727] Louis Rosenberg: No, we will probably hire fewer people in the future. But just think about how unprecedented that is. This is a technology where the company has to promise, don't worry, you're not going to get replaced by this thing that we know is going to replace you. Like, that's really what they're saying.
[00:43:55.282] Alvin Wang Graylin: Well, right, right.
[00:43:57.565] Leslie Shannon: But it's... No, it's going to replace the future jobs, not the current jobs. They just want you to not get too upset too early.
[00:44:06.611] Alvin Wang Graylin: No, at the end of the day, it will replace all of us, period. No, actually, but I don't think that's a bad thing. Wait, hold on a second. But I don't think that's a bad thing, because what it will allow us to do, what it will allow us to do, is to actually be liberated, to have time to do the things that we really want to do. Right now, 80% of people are not happy in their jobs. They're doing it because they need to be able to make ends meet. But if the government says, here's enough money for you to have a comfortable life, now, if you want to go back to school, if you want to travel, you want to take care of your children, if you want to take care of your elderly, spend time doing that, because that creates social value, societal value, that is not measured in today's GDP. And that's what we need. We need to switch our mindset to say what is going to be valuable tomorrow, not what's valuable today. Everything that's being counted in GDP today will, in five, ten years, actually become essentially free, because those things will all be automatically created. And when that happens, automation creates 100% efficiency, or essentially infinite efficiency. We don't need to be involved. And when we don't have to be involved, we have time to do the things that actually make life worth living.
[00:45:12.230] Leslie Shannon: Alvin, yes.
[00:45:13.171] Kent Bye: Utopia. That's a story that has been told around automation for over 150 years, and it hasn't happened, because the structures of capitalism are about the consolidation of wealth and power. That is just not what has happened. No, but that's exactly... It's a sort of deluded dream that's totally disconnected from the reality of what we have right now in this world. Yeah.
[00:45:33.990] Alvin Wang Graylin: It's a deluded dream based on the fact that you believe that capitalism is the end and only form of government. And I think that needs to change. We need to change the economy so that we start to value things that go beyond production. We need to be able to move on from having money as the only way to measure success. That's what we need to do. When we do that, actually, we will be able to then think about how we measure success in the future. I want us to be able to measure success in terms of how much contribution you are making to society. How much service are you doing for the people around you? How much new invention and creativity have you made? Not how many widgets you made this week.
[00:46:11.078] Louis Rosenberg: So you expressed that everybody's worried about losing their job, and I think they should be, but I'm worried that we're all losing our agency. And so, talking about what's the currency of being human? It's agency. And we are all very eager to put these devices on our heads and over our eyes and in our ears. And again, there are amazing things we can do with them. But if an AI is whispering in my ears, giving me guidance, and I'm talking to you, and the same thing's happening to you, I'm going to start to wonder, am I talking to you, or am I talking to the AI that's whispering in your ears?
[00:46:45.918] Alvin Wang Graylin: But see, if we use advertising, if advertising is still the model of how we run the future internet, then your problem is going to happen. So we have to move away from advertising, because advertising means somebody else is controlling what's coming to you.
[00:46:59.462] Louis Rosenberg: Instead of using the word advertising, which is really about products, let's just talk about influence. If the currency's influence, whether that influence is political influence from a state actor or economic influence from a business, we've built this world where the economic metric is influence, regardless of the monetary model below it. And AI is the ultimate form of influence for a human, like an AI that can adapt to you in real time.
[00:47:31.416] Alvin Wang Graylin: OK, Louis, I think Kent wants to do one quick statement each, and then we'll go to... Sorry, we've got seven minutes.
[00:47:35.862] Kent Bye: We should probably each make our final statement and wrap up here.
[00:47:38.726] Alvin Wang Graylin: Why don't you start us off?
[00:47:39.687] Kent Bye: So Alvin, I actually agree with you. And I do believe that in order to get to the vision that you have, we have to have a philosophical paradigm shift away from reductive materialism into more process-relational thinking. My latest interview with Matt Segall gets into more of these philosophical paradigm shifts. At the core of process-relational thinking is that we have to be in right relationship to this technology. The fact that we currently have AI as a tool for empire that is using techniques of data colonization without consent is not right relationship. So we're starting off in a place that is not anywhere near where we need to be, where we're in right relationship to all these technologies. And I don't want to be the type arguing against the internet or AI. You see all those articles of people saying, oh, the internet's going to be a thing? Of course AI is going to be a thing. It's useful for doing pattern matching at scale. It's useful as a synthetic text-extruding machine, as Emily M. Bender says, you know, trying to use terms that get away from anthropomorphizing the capabilities and powers of AI, and to be very specific, from computer vision to things that are task-specific, with bounded contexts. And I think that if we're able to create this fusion in a way that is in right relationship, we can really create these technologies in a way that is empowering and creates an opportunity for thriving for us. But right now, we're not in that place, and to think otherwise is to bypass the political realities of the current moment.
[00:49:05.193] Alvin Wang Graylin: Okay, great. Leslie, you want to take it?
[00:49:08.703] Leslie Shannon: Yeah, Alvin, thank you, because your statements about the utopia of where this could all take us kind of jolted me out of the wrestling with the momentary present. Because I think that these technologies are going to let us reconsider the nature of work. and the nature of education. Because right now, both work and education, as they play out in our society, are founded really in industrial revolution norms. The 40-hour week, I mean, that was a labor win of the 20th century, but that's when you needed 40 hours to stand in front of a machine to crank out a widget. We don't need that anymore. And education, what is it exactly that we are teaching people to do and why? I mean, when I got out of high school, I knew trigonometry, but I actually didn't know how a mortgage worked. We need to kind of rethink what we're teaching our children to think and why and what it's going to enable them to do. And I think these are going to be the tools and the AR part of it and the XR part of it is how we're actually going to be able to live our best lives in a way that is not bound by the 40-hour week or the perhaps 12 years, 16 years of schooling, but something that does actually help us reassess our humanity and take a more profound place in the universe.
[00:50:35.138] Louis Rosenberg: So, again, I'm a technologist. I believe in the power of these technologies to do amazing things. I do believe that XR and AI together can amplify human abilities, can give us superpowers, can allow us to make better decisions, and can allow us to do and experience things we could never have done before. The dangerous line is where we allow ourselves to lose agency, where the AI in the system becomes the thing that's controlling us. And it's a fine line. It's going to require social changes in how we think about the world. And it's going to require that the public actually is aware and demands that these systems are there to serve us, not us to serve them. And right now, we're at this point where it's not clear which way it's going to go. And one of the reasons we do things like this and write books like this is to get people thinking about AI as being different from every technology that came before. And it's not because we're debating whether the AI becomes sentient or anything like that. It's really that, regardless of who's controlling the AI and how it's working, it is a technology that can adapt itself to optimize its ability to interact with us. And that could be done to optimize our ability to learn, or it could be used to optimize its ability to control us. And as long as the public understands that and really pushes back and says, we want to make sure these tools are there to serve us, we can go in the right direction.
[00:52:09.418] Alvin Wang Graylin: So, I mean, clearly everybody has very strong feelings. And, you know, I guess I want to kind of summarize this with a sentence from E.O. Wilson. He said, you know, we have paleolithic brains, we have medieval institutions, but we are now creating godlike technologies. And that's very, very relevant, particularly to AI and XR, these things that would seem like godlike technologies to people just 100 years ago. And the problem is that our institutions are still designed for that world of the last century, the last few centuries. And in fact, our mindsets are something that has been created over the last 10,000 years. After the agricultural revolution, we started to go from a world of relative abundance, because we only had about 5 million humans on this planet, to now about 8 billion people over 10,000 years. And because of that, we forced ourselves into a world of scarcity, and our mindset has adapted to it, and our society and our institutions have adapted to it. So we want to hoard, and we want to grab money and resources and power, because that's what made us successful in this new world. But with the technologies that are coming down the road, whether it's robotics or AI or genetic engineering, we're going to bring a near-abundant society back to this world that we live in, with the population that we have. And if that's the case, we have an ability to give an abundant life to everybody on this planet without sacrificing anybody's rights or dignity. And if that's the case, then it does require us to change our mindset. It does require us to change our institutions. It's not going to be easy. I know you guys were laughing when I said this, but it really does. We have to change the way that we are organizing our society, and the way that we think about it, and the way we govern it. Because if we don't, then the technologies that can bring us this utopia will actually bring us our destruction. So I think maybe we have enough time for one question. So I guess right here, go ahead.
[00:54:09.148] Audience Question: Well, I just want to say, I think this is probably one of the best panels I've heard so far. Would you guys agree? Oh my gosh, wasn't this amazing? I really enjoyed all of your takes on all of this. It was just such great food for thought. And you all are right. And through teaching artificial intelligence at colleges and then writing my AI literacy handbook, there was something that I noticed that wasn't mentioned: how brilliant human minds are. Yes, yes. The human mind has 3.6 times 10 to the 14th dendrites in it. One dendrite carries as much information as the entire World Wide Web. It has the ability to reshape itself and gather information. When we were listening to the artificial intelligence discussion, there's a solution that I think may work: training individuals on vetting the information that comes from artificial intelligence. If we can train the human brain to not only look at artificial intelligence but also discern if it's real or not, we may be able to understand how to train individuals to think faster than it. Tell us your thoughts on that.
[00:55:34.230] Alvin Wang Graylin: I think we're out of time. We've just been given the ax. We can talk more afterwards. Thank you all for coming. One note before I finish is that the Virtual World Society is having their online auction. It ends at 5 p.m. today, so please log on and bid on something. Thank you for coming.
[00:55:55.050] Leslie Shannon: Thanks, everybody.
[00:55:58.547] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.