#1610: Scouting XR & AI Infrastructure Trends with Nokia’s Leslie Shannon

Here’s my interview with Leslie Shannon, Nokia’s Head of Trend and Innovation Scouting, that was conducted on Tuesday, June 10, 2025 at Augmented World Expo in Long Beach, CA. See more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my series of looking at AWE past and present, today's interview is with Nokia's head of trend and innovation scouting, Leslie Shannon. I was going to be on a panel at the end of Augmented World Expo with Leslie, Alvin Wang Graylin, and Louis Rosenberg, where we'd be debating different aspects of artificial intelligence. Leslie was on the pro side, and I was on a bit of the con or skeptical side, trying to promote more relationality in how we relate to these technologies. So this was an opportunity to understand what Leslie is even working on at Nokia in terms of these larger infrastructure questions and issues. The challenge with some of these things happening at the infrastructure level is that it's a chicken-and-egg flywheel problem: they have to create the possibilities, and then once the possibilities are there, people can start to use them in some fashion. There's been a long story with XR technologies that, say, the 5G networks were going to be doing all this type of rendering, and there were ideas in the early phases of VR that these new infrastructure tools would enable new types of location-based experiences. That may still happen, but it hasn't been the thing on the bleeding edge that people want first. The thing people seem to want now is to do all sorts of processing with AI models, whether it's large language models or other types of AI models, across a full spectrum of compute: at which of the five different locations on that spectrum are we going to do that type of computing? There are different trade-offs in terms of cost, energy, speed, and latency. And so, given that, they're building out the capacity in these networks to have more and more ways of doing distributed rendering or split rendering, or whatever you want to call having compute in many different places in the network, depending on the task and the latency requirements, as well as the energy and battery requirements. So those are the types of questions that Leslie is looking at, and AI happens to be the most compelling use case to develop and prototype some of these functionalities within the infrastructure layer. The question for me is still: who's going to pay for it, and what's the business model to make it even viable? One of the things around AI is that it's extremely hungry in terms of power and energy and cost, and a lot of the companies doing this right now are operating at a loss just to build up usage of these technologies. At this point it's totally unsustainable, not only from a profit perspective but also in terms of energy and everything else. But what Leslie's looking at is whether there are things you can do to create an infrastructure that enables new capabilities that aren't even possible unless someone goes out and builds it.
So we talk about that, plus a bit of a sneak peek at each of our orientations around the topic of artificial intelligence. In the next conversation, I'll be diving much deeper into the full Socratic debate that we had a chance to have a couple of days later, at the end of AWE on Thursday. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Leslie happened on Tuesday, June 10th, 2025 at Augmented World Expo in Long Beach, California. So with that, let's go ahead and dive right in.

[00:03:21.388] Leslie Shannon: My name's Leslie Shannon and I'm Nokia's head of trend and innovation scouting. My job is really to look at all kinds of new non-telco advances in other fields, trying to figure out how the networks, the telecommunications networks, need to evolve so that they will actually support these new things that are coming up. And I've been looking at the world of XR for the last decade, because one day all the XR and AR stuff that is out there is going to hit the network hard, and we need to be ready for it.

[00:03:50.768] Kent Bye: Great. Maybe you could give a bit more context as to your background and your journey into the space.

[00:03:55.250] Leslie Shannon: Yeah, so I've been with Nokia for 25 years, so always in the telco world. But I'm a personal VR enthusiast. I've been doing all of my fitness in VR since 2018 and still do it every day. Yeah, so I travel with my VR headset so I can do that. I've written a book called Interconnected Realities, which is about the development of VR and AR. It came out a couple of years ago, and in it I accurately predicted that AR, in which you remain in the physical world and have AI actually assisting you with it, will actually be the thing, as opposed to being in a fully digital metaverse.

[00:04:33.672] Kent Bye: So working at Nokia, maybe you could give me a sense of these networks, edge compute, or other trends that you're seeing out there or developing that you see eventually feeding more and more into what's happening here in the XR space?

[00:04:48.379] Leslie Shannon: Yeah, so first I should clarify that we at Nokia do not make any consumer devices anymore at all. We sold the phone business a long time ago. What we do make is the network equipment. We make the optical fiber, the routers, the base stations that are up on poles, and we sell all of those to phone companies and large enterprises, who build their networks with them. So that's what we do. We're infrastructure people through and through. And one of the things that our industry has been talking about for the last 15 years is the concept of edge computing: taking the actual network and putting computing in it that would then be available to non-telco workloads. However, it hasn't really taken off, because technically, sure, it's possible, but the monetization, who's going to pay for it and why, what's actually going to make that difference, has not been there until now. Now with the rise of AI, and AI being big and able to do all kinds of amazing things, and its conjunction with XR, and with AR glasses in particular, which I think are kind of the holy grail that we're all trying to get to, you've got this mismatch between very big computing and very small glasses, with things like power consumption and the need for a very slim form factor. Suddenly it really does make sense for computing to go into the network to support not just AR glasses, but any other kind of end device that benefits from being lighter, cheaper, and generating less heat, like drones and robotics as well. It's the same kind of architecture that can support a whole range of things. But the problem is that we in the infrastructure world move slowly, and it takes a while for us to get around to stuff, so we haven't actually started building any computing in the network yet. We at Nokia are working right now with Nvidia and T-Mobile to start exploring that space, and that's great, but we haven't really been talking about it very much. And so what we're seeing, and you and I are speaking on the first day of AWE here, is that all the keynotes were about moving the computing, getting the computing smaller and smaller and smaller. Both Qualcomm and Snap spoke about this, so that it can fit into the glasses without any external support. And that's great, but fundamentally we're ultimately going to need to use computing located in every node, all through the network, all through every device we have, so that we can have this ubiquitous AI that is an assistant with us at all times.

[00:07:11.178] Kent Bye: So I know that there have been a number of different movements in terms of remote rendering of XR experiences, and there was some thought that maybe these 5G networks and other edge compute could be used for rendering spatial scenes. I'm just curious to hear some of your thoughts on that: whether you see that that's still a potential use case, or whether AI has come along with all the compute needs of large language models and superseded what folks in the XR space had been theorizing might be one of those first use cases.

[00:07:40.785] Leslie Shannon: Split rendering is actually one of the strongest use cases that we have for using the mobile networks now. And that's actually going on, and there's one very specific use that's driving its pickup, and that's corporate IP security, or defense IP security. Because the problem is, if you're creating some kind of a digital twin, which is now becoming more and more common in complex industries, and BMW was one of the first ones to discover this, if you're doing a digital twin of your car to help plan the car, however you're rendering it on a headset, once you've rendered something on the headset, there's a copy of that thing on the headset forever. And BMW got very nervous about the fact that there were these headsets that could very easily go walkabout with a full copy of their latest engine part, you know, because that's their IP. So BMW worked with a German company called Holo-Light, who we at Nokia have also partnered with on many occasions, to do split rendering. Because if the rendering is done on a computer and it's just streaming pixels to the headset, then when the session is over, the headset has no memory of that thing that happened. And I know Holo-Light has worked with Lockheed Martin to do the same kind of thing, building things for the defense industry, for the same reason. So anytime you need to protect whatever it is your digital twin is of, that's when split rendering and pixel streaming are absolutely essential.
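To make the pixel-streaming idea concrete, here is a minimal Python sketch of a split-rendering session; all the names (render_frame, encode, Headset) are hypothetical stand-ins, not any vendor's actual API. The point is structural: the protected model lives only on the server, and the client receives and displays transient frames.

```python
# Conceptual sketch of split rendering / pixel streaming for IP security.
# All names here are illustrative, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # headset position (orientation omitted for brevity)
    y: float
    z: float

def render_frame(model_name: str, pose: Pose) -> bytes:
    """Server side: rasterize the protected 3D model for this pose.
    The model itself never leaves the server."""
    return f"pixels({model_name}@{pose})".encode()  # stand-in for a real renderer

def encode(frame: bytes) -> bytes:
    """Compress the frame (H.264/H.265 in a real pipeline)."""
    return frame  # identity stand-in

class Headset:
    """Client side: decodes and displays frames, retains nothing."""
    def display(self, encoded_frame: bytes) -> None:
        frame = encoded_frame  # decode step elided
        print(f"showing {len(frame)}-byte frame")  # frame buffer only; no model data stored

# One session: per-pose frames stream to the headset; when the loop ends,
# the headset holds no copy of the engine-part model.
headset = Headset()
for t in range(3):
    pose = Pose(0.0, 1.6, 0.1 * t)
    headset.display(encode(render_frame("engine_part_v7", pose)))
```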

[00:09:00.120] Kent Bye: And so if we look at AI, it seems to be using huge amounts of compute in these data centers with H100s from Nvidia, basically billions of dollars' worth of compute and energy and everything else just to even produce these models. So there's the production of the large language models, and then they have these weights that still need to be served at some number of tokens per second. And if you're running it locally, sometimes you can do that, but it might be really slow. So maybe you could explain a little bit of the landscape of what you're seeing: there are these different models, and they have different sizes. What do you foresee in terms of running it on a server farm, where maybe there's more latency? And if you have edge compute, is it a smaller model? And just talk a little bit about how the ecosystem of parsing out these large language models fits into this network edge compute paradigm that Nokia is really interested in.

[00:09:54.437] Leslie Shannon: Yeah, and actually, when I look at the full network, fully end-to-end, I don't use the term edge compute, because the computing will need to go everywhere. There are five places where I see computing naturally falling. One is on the very lightweight end device, so that's your glasses or a watch or something like that. Then the next step is a heavier device that is still owned by the end user, so that would be a computer or a VR headset or something like that.

[00:10:21.598] Kent Bye: Would the phone fit somewhere in between there?

[00:10:23.879] Leslie Shannon: No, the phone would be in that second category. Then the next step, now we cross over into the network, would be the base station, literally at the edge of the network. And the step beyond that would be a local routing or switching center. That one matters for data sovereignty, which is not so much a problem in the United States, but in Europe the idea of data sovereignty is really important, and the nearest big cloud may actually be across a border in the next country. In Finland, you point all your cloud gaming at Stockholm, because the big companies have no gaming servers in Finland. So there's value in having something local that's in your country and relatively close. And then of course you've got the far cloud, wherever that might happen to be, anywhere in the world. So those five different areas are all available to have compute put in them. And as for where workloads need to go, it's kind of a push and pull across that spectrum. Things that need to be low latency will be pushed towards the edge, but the computing becomes more expensive the closer you get to the end user, so economics will be pushing the other way. Very large language models would, of course, be way off in the central cloud, while the stripped-down models for specific purposes would be pushed towards the end user. So the world that we see is actually having AI in the network looking at the workload that's coming off of an end device, potentially breaking that workload up, having the compute done in different nodes across the network where it makes the most sense, both from an energy point of view and from a computing-heft point of view, and then tying everything back together again and presenting a unified user experience to the end user. That's the world we're visualizing for the future, but it's a decade away at least. We're starting to make the moves to build it now, though.
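As a rough illustration of that push and pull, here is a hedged Python sketch of a placement heuristic over the five locations Leslie lists; the latency and cost numbers are invented for illustration, not measurements. It scans from the cheap far cloud inward and stops at the first tier that meets the workload's latency budget.

```python
# Illustrative placement heuristic over the five compute locations Leslie lists.
# Latency and cost figures are made-up placeholders, not measurements.

TIERS = [  # ordered from closest to the user to farthest away
    ("lightweight device (glasses/watch)", 1,   10.0),
    ("user device (phone/PC/headset)",     5,    5.0),
    ("base station (network edge)",       10,    3.0),
    ("local routing/switching center",    25,    2.0),
    ("far cloud",                         80,    1.0),
]  # (name, round-trip latency in ms, relative cost per unit of compute)

def place(workload: str, latency_budget_ms: int) -> str:
    """Pick the cheapest tier that still meets the latency budget.
    Cost falls with distance from the user, so scan from the far cloud inward."""
    for name, latency_ms, _cost in reversed(TIERS):
        if latency_ms <= latency_budget_ms:
            return f"{workload} -> {name}"
    return f"{workload} -> cannot meet {latency_budget_ms} ms anywhere"

print(place("full LLM inference", 100))       # tolerant: lands in the far cloud
print(place("AR scene understanding", 20))    # tighter: pushed to the base station
print(place("head-tracked reprojection", 2))  # hard real-time: stays on the glasses
```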

[00:12:12.008] Kent Bye: Now, from what you just said, does that mean that there's like an AI traffic cop that's trying to negotiate where things get run, or is it algorithms that are helping to decide how to split up that load?

[00:12:25.118] Leslie Shannon: We don't know yet, because nobody's built it yet. Right now, everything's kind of at the ivory-tower whiteboard stage. It's as we get our hands dirty with actually experiencing how these things work out that we'll find the best way to do it, whether that's a traffic cop or whether it's an algorithm. We'll have to see. Because energy matters here a lot, and we have to find out what the most efficient use of all the network resources actually is.

[00:12:55.339] Kent Bye: OK. So what are you seeing in terms of the use cases that are really driving this type of distributed compute across those five different kinds of nodes?

[00:13:04.713] Leslie Shannon: Well, frankly, AR glasses are potentially one of the biggest ones, because of the opportunity, especially with forward-facing cameras, of doing analytics on that video. Like Google showed in their TED Talk a couple of weeks ago: a person asking their glasses, oh, I can't find my hotel key card, where did I put it? And the glasses answered back, oh, you put it on the shelf, and so on. The implication is that the glasses in that case were passively recording everything that person did all day. Where is that stored? Is it stored on the glasses? Does that make sense? The visual analytics, being able to do that kind of parsing and understanding of the full visual space, maybe that doesn't go right on the glasses, and maybe it's actually a little too big to even go on a puck. So where you're dealing with these larger things, those are the kinds of workloads that might need to go into the network.

[00:14:01.129] Kent Bye: That terrifies me.

[00:14:03.371] Leslie Shannon: Talk about privacy, yeah. Got to work some things out here, you know.

[00:14:09.956] Kent Bye: Yeah, I think, yeah, wow. Okay, so, I mean, I know Meta's been talking around contextually-aware AI for a long time, and part of their Ego4D benchmarks is, like, where did I leave this watch, you know? And what that implies is that we're being surveilled all the time. I mean, legally, under the third-party doctrine, there's a reasonable expectation of privacy, but when you give that data over to a third party, that allows the government to get access without a warrant. And then as the whole country is moving towards fascism, I just, I don't know, there are things where I'm hesitant about where this is all going in terms of the use cases. Yeah.

[00:14:46.099] Leslie Shannon: Yeah, well, I mean, think about Google. If you go through Google Maps, and actually I'm going to do a little plug here for one of my favorite VR apps, which is VZfit, where you're on an exercise bike, and in your VR headset you're in Google Street View, so you can cycle anywhere you want in the world. I have cycled around Midway Island, which was really fascinating, and then Pripyat, the city near Chernobyl. I mean, it's just the most amazing stuff. So I spend a lot of time in Google Maps, and Google Maps is already blurring out everybody's face and anything that looks like it could be a license plate number or any other kind of identifier. So Google's already pretty good at that. But I take your point about surveillance. I think Google's teasing this capability but not releasing it just yet. I think they're probably wrestling with the ethicists even now. That's just an outside guess.

[00:15:36.055] Kent Bye: Well, we're going to be on a panel discussion later this week where we'll be debating different aspects of the future of AI. I'm going to be taking a little bit more of a contrarian take, but I'm just curious to hear a little bit more about what is getting you excited around AI, your optimistic takes on where AI is at, and where you hope it goes in the future.

[00:15:54.485] Leslie Shannon: I think AI really has the potential to be the ultimate, well, the ultimate helper. And I keep coming back to the pocket calculator analogy, which is a little tired at this point, but I was in high school when the pocket calculator came out and everybody freaked out. My chemistry teacher let us use it, but my physics teacher and my calculus teacher did not. Fast forward a whole bunch of decades, and my kids are required to have a TI-84. And when I'm like, wait, you get to have a graphing calculator in calculus class? What the hell? You've got to do that stuff with pencil and paper. And my kids are like, why? The calculator's so much more effective. It's like, okay, you're right. And what that has done is it frees the human up for higher-level tasks, if you've got this computing thing doing the lower-level task. And now, with generative AI and large language models, you have something to handle that grunt-work layer. But the thing is, because large language models are trained on things from the past, if you want something new, you the human have to come up with it, necessarily. So I don't think it replaces the ability to think or anything. It just helps us skip some of the grunt work so we can think at a higher level. And so, you know, I'm actually really, really excited. My hobby is competitive trivia, and I've started using ChatGPT to help me write trivia questions for my flashcards. And it's great. You know: please write me a question, with historical context, about the Two Treatises of Government, to which the answer is John Locke. And it does. It's actually a way that I can, yeah, anyway, I love it.
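For the curious, here is a minimal sketch of the kind of answer-first prompting Leslie describes, assuming the openai Python package and an API key; the model name and prompt wording are illustrative assumptions, and as the next exchange notes, the output still needs human verification.

```python
# Minimal sketch of answer-first trivia-question generation.
# Assumes `pip install openai` and OPENAI_API_KEY set; model name is illustrative.

from openai import OpenAI

client = OpenAI()

def trivia_question(answer: str, hint: str) -> str:
    """Ask the model to write a question whose answer is fixed in advance,
    which makes the output easy to check against the known answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Write one flashcard-style trivia question, with historical "
                f"context, about {hint}, to which the answer is {answer}."
            ),
        }],
    )
    return response.choices[0].message.content

print(trivia_question("John Locke", "the Two Treatises of Government"))
```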

[00:17:34.517] Kent Bye: Are you able to verify it? Because I know that large language models can hallucinate and make up stuff.

[00:17:39.760] Leslie Shannon: Sometimes. Well, that's why I tell it what the answer should be. But yes, I do verify. But that's just the thing. You have to trust but verify. It's not just computers that you need to do that with.

[00:17:53.985] Kent Bye: Yeah, there's a number of different books that are coming out. One is the AI Con, Deconstructing the Line of AI Hype, but also Karen Howe's The Empire of AI. And I guess part of my orientation right now, this cross-section of AI coming in, is these little pop-ups of like, let me write this email. Or it's just like, no, I want to write my own email. I don't want to use this technology to be the replacement of how I communicate with other human beings. But also, Alexana was talking around this argument of AI as craft, is that when you are letting AI write your essays for school and college, then that's actually, your writing is your thinking, and so by offloading that, you're actually degrading your own thinking. I don't know, way back in the day, Socrates was against writing in general because he was like, this is live knowledge that needs to be engaged with and it's living and it has to be dynamically transmitted. And so if you write it down, it becomes dead was his perspective. And so now with AI, it feels like Socrates' worst nightmare of like, not only is it written down, but you're not even writing it anymore. It's just like the machine doing it for you. So I feel like there's all these things around like AI and AI slop and consolidation of power, just give me so much unease around everything.

[00:19:06.007] Leslie Shannon: Well, I mean, we obviously have to proceed with caution, as we always do with any new technology. But I have a 17-year-old son. He's just finishing high school and going off to university. As part of his university onboarding, he had to take a math placement test, and it needed to be proctored online. They let him have a calculator within the actual test, but he wasn't allowed to use his own calculator, so that they could see what he needed to use a calculator for. And it made me think, wow, that would actually be a good way to do essay writing. Okay, here it is online, and you're allowed to use ChatGPT, but we're going to see what you do. We're going to see what prompt you put in, we're going to see what you get back, and then we're going to see how you changed it. So I think we're going to need to recalibrate what we reward for, what we train for, because the reality is the tools are here. The tools are excellent for bringing poor writers, or people for whom English or whatever language is not their native language, up to a certain standard. But then, as I said already, if you want something truly creative, that has to come afresh from a human being. Yeah.

[00:20:10.506] Kent Bye: Yeah, I think there's this spectrum between, like, AI abolitionists who want to just send AI into the burning sun, versus other people who are boosters and using it for everything. And I feel like I'm probably more on the abolitionist side, maybe not all the way. I just feel that it's not accurate all the time, there are so many hallucinations, and so maybe we should just not use it. I feel like there are certain use cases for it, but there are certain use cases where we should just not be using AI. I guess that's part of my gut reaction as I'm hearing these different things.

[00:20:40.458] Leslie Shannon: Well, yeah, I mean, for me, I love writing. You're absolutely right that writing is kind of how I organize my thinking. And that's fantastic. But I also have my own style, and I am really crap at writing diplomatic emails. And so for the places where I am truly terrible, it is really helpful to have something to give me some inspiration. It's like, oh, okay, that's some nice softening language I could use. Okay, fine, that's not my natural style, but let me use that. Thank you, ChatGPT. So it can actually help extend your capabilities into areas where you're not naturally strong. And I think that's good. However, you are right: there are going to be a lot of people who are just lazy and use it completely, and all kinds of stupid stuff will happen as a result of that. But again, people use all kinds of technology stupidly, and you can't blame the technology for that.

[00:21:32.223] Kent Bye: So are you someone who has used ChatGPT to help write emails?

[00:21:35.982] Leslie Shannon: Yeah, when I have to write a really sensitive one to my manager, yes, yes. And it really has helped with the diplomatic language. But also, I was asked to do some judging for something recently, in an industry, in an area, that I have no experience in. I'm not really sure why they asked me to be a judge. And ChatGPT was actually able to help me understand: what's the format? What is expected in this particular industry that I'm not familiar with? What would a good judging response look like? So it helped bring me up to speed really quickly. And of course I double-checked what it gave me, but this is the kind of stuff, I see it as an extension of our own natural capabilities, very quickly and efficiently. And I think it's hard to ignore the utility of that.

[00:22:24.488] Kent Bye: Awesome. Well, just to come back to the network stuff as we start to wrap up, some of the stuff you were talking about is speculative or in the future. What's happening now with Nokia in spreading out the compute across this spectrum of nodes, from the edge to on-device to the cloud? I'm curious to hear where it's at now, where you see traction, and, yeah, trying to get a sense of what's grounded in this moment right now.

[00:22:50.441] Leslie Shannon: There's the public stuff that I can talk about. The most public is a project called AI-RAN. RAN is the network abbreviation for radio access network, so it basically means the mobile connection that goes over the air. And so on AI-RAN, there's an AI-RAN Alliance that we're part of, and then there's an AI-RAN project that we're working on with T-Mobile, Nvidia, and Ericsson, who is the other main provider of commercial radio networks. What we're doing within that is looking at switching out the CPU that's currently in a base station for a GPU, and then putting that GPU to work in three ways. The first way is for the GPU to actually support the compute that creates the radio access network in the first place. And that's a huge task, because the CPU software is written in C++ and the GPU software is written in Python, and the structure is entirely different. So just getting a GPU to generate the same radio that a CPU currently does, and has done since the beginning of mobile telephony, that's a huge task. So that's one. But then you've got all this compute left over in the GPU. And the second task is to use that AI for what we call orchestration, which is understanding where the workloads go, helping the network work internally more effectively and intelligently, so giving everybody better service. And then the third is the new part, which is the non-telco workloads that could run in the network. That's the area I'm most concerned with, because that's where the computing for AR glasses would go, for example, or video analytics for some kind of consumer device that has video analytics in it. All kinds of fabulous use cases can go into that third part, and that's the new part that's opening everything up. So that's what we're working on now.
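To visualize that three-way split, here is a toy Python scheduler over the GPU roles Leslie outlines; the priority policy and all the numbers are illustrative assumptions, not the AI-RAN Alliance's design. RAN processing is served first, orchestration next, and whatever is left goes to non-telco tenant workloads.

```python
# Toy time-slice scheduler for the three GPU roles in the AI-RAN description.
# The policy and percentages are illustrative assumptions, not the alliance's spec.

def schedule_gpu(ran_demand: float, orch_demand: float,
                 tenant_demands: dict[str, float]) -> dict[str, float]:
    """Allocate one unit of GPU time. RAN processing is never starved;
    orchestration comes next; third-party (non-telco) workloads get the rest."""
    alloc = {"RAN": min(ran_demand, 1.0)}
    remaining = 1.0 - alloc["RAN"]

    alloc["orchestration"] = min(orch_demand, remaining)
    remaining -= alloc["orchestration"]

    total_tenant = sum(tenant_demands.values())
    for tenant, demand in tenant_demands.items():  # share leftovers proportionally
        share = demand / total_tenant if total_tenant else 0.0
        alloc[tenant] = min(demand, remaining * share)
    return alloc

# Quiet cell: most of the GPU is free for non-telco workloads.
print(schedule_gpu(0.3, 0.1, {"AR glasses rendering": 0.5, "video analytics": 0.3}))
# Busy cell: the radio takes priority and tenant workloads are squeezed out.
print(schedule_gpu(0.9, 0.1, {"AR glasses rendering": 0.5, "video analytics": 0.3}))
```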

[00:24:29.318] Kent Bye: And would you expect a company like Meta to potentially come on and be a customer for that? Or do you get a sense that they've already got their servers and their own workflow, and they'd be happy with the latency that's there? Just trying to get a sense of what type of companies might be interested in these new edge compute capabilities on your specific network.

[00:24:49.164] Leslie Shannon: That's exactly the work that we're doing right now. Well, actually, step one is doing that porting of the CPU stuff over to the GPU to just get the radio access network working first, and that's going to take a little time. But once we actually have that in place, that's when we're going to be really interested in talking to anybody who is interested and thinks that they could actually have some kind of workload that could go there. It's been a chicken-and-egg problem so far, because we haven't built it yet, so everybody has been focused on putting the computing into the end devices, because there's really no place else for it to go. But once we actually prove the concept and show that we as an industry are serious about this, you know, once that first egg is hatched, then we expect a lot of chickens to come over. Now, that metaphor didn't really go the way I wanted it to, but you get the idea.

[00:25:35.967] Kent Bye: Great. And finally, what do you think is the ultimate potential of all these spatial computing technologies, with edge compute and AI and everything else tying together? Where do you see the XR space going here in the future, and what might it be able to enable?

[00:25:50.970] Leslie Shannon: I think it's going to be stuff that we can't even imagine right now. As far as AR goes, we're still in the super early days. It's kind of the equivalent of like, I don't know, the internet in 1992. There's this thing, and there's some geeky people who are using it, and they're sending emails to each other, and they've got a few chat rooms and stuff. And somebody standing in 1992, they've got no idea that Uber and Airbnb are down the road. They've got no idea that smartphones with all kinds of incredible utilities are coming down the road. And so that's where we are with AR right now. I think the use cases that we're doing are things that we can imagine. They're things that we've done before. And just like early television, at the beginning, early television basically filmed plays because that was the dominant paradigm. It was only with time they realized, oh, we can make this camera move. Oh, this is great. Oh, look what we can do now. And so I think that's where we are in AR. We're still doing a lot of stuff that's based on a smartphone. We're still doing a lot of stuff that's based on a two-dimensional screen. Once we actually have computing that is truly three-dimensionally spatial and portable and with us at all times, we're going to come up with stuff that nobody can even imagine, including me.

[00:27:00.085] Kent Bye: Anything else left unsaid that you'd like to say to the broader immersive community?

[00:27:05.031] Leslie Shannon: Everybody get out there and keep building. I love absolutely everything that everybody is doing. But please watch for the development of computing from the telco industry, and have a hard think: if there's anything you're working on that could actually benefit from having some of its workload offloaded into a nearby base station, or a nearby center that's closer than the big data center that could be quite far away, you know, you can get in touch with me directly.

[00:27:35.179] Kent Bye: Awesome. Well, Leslie, thanks so much for giving a bit of a sneak peek of where you see the infrastructure story going here in the future and how it's going to create these new possibilities for XR devices with AI mixed in. And I'm also excited to have this discussion with you and Alvin Wang Graylin and Louis Rosenberg here on Thursday, to have this Socratic debate on the future of spatial computing and AI, and, yeah, to see where that conversation takes us. But it's just really exciting to hear from Nokia's side what's happening on the infrastructure side of the story of XR. So thanks again for joining me here on the podcast. Thanks, Kent. It's been a pleasure. Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
