Here’s my interview with Alicia Berry (Executive Producer at Niantic Spatial) and Asim Ahmed (Head of Product Marketing at Niantic Spatial), conducted on Thursday, June 12, 2025 at Augmented World Expo in Long Beach, CA. Check out their announcement blog posts, including “Niantic Spatial and Snap’s Multi-Year Strategic Partnership to Build AI-Powered Map,” “Niantic Spatial Joins Khronos Group to Advance Geospatial AI and 3D Standards” (mentioned in my latest interview with Neil Trevett), and “Meow Wolf and Niantic Spatial Announce Plans to Explore an Expansion of the Meow Wolf Universe.” Also be sure to check out my interview with Keiichi Matsuda about Liquid City’s Parabrains system, which Niantic Spatial was using in the VPS guided tour demo they were showing at AWE. And you can see more context in the rough transcript below.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my series of AWE past and present, today's episode is with Niantic Spatial. Niantic Spatial had a number of different demos and announcements happening here at Augmented World Expo. They used to be Niantic Labs, and they split off and sold their Pokemon Go and gaming division to a different entity, and now they're just focusing on geospatial mapping driven by AI. They're collaborating with folks like Snap, where they're going to be developing this AI-powered map in collaboration with them, focusing specifically on the Spectacles and creating different applications that can be used on Spectacles. But they also have other game-like entities that they maintain control over, like Peridot, which is a way that they're rapidly prototyping and experimenting with these different devices. I've had a number of different interviews with them over the years, like at the Snap Partner Summit, where we talked about Peridot. It's always fascinating to run into the folks at Niantic Spatial and hear what they're working on. They're really hardware-agnostic, in the sense that they're on every platform, pushing the technology forward and really focusing on this geospatial aspect. They were also collaborating with some of the technology from Keiichi Matsuda, who's going to be the final interview in this series, just because Keiichi is doing some really fascinating stuff, reflecting on the ethical and moral implications of the technology, but also on these new metaphors that I think are really helpful as we start to think about AI agents. His whole thing is not thinking of AI agents as these monotheistic, omniscient, omnipresent gods, but more like puppies: polytheistic, animistic familiars that are much more limited in their capabilities, but also much more approachable. So just different types of metaphors for how you might start to think about these virtual agents. Previously, at AWE 2023, I talked to Keiichi Matsuda about Meet Wol, which was using Inworld AI, which was kind of an NPC character driver that provided an interface to large language models, allowing you to keep them bounded and prevent them from completely hallucinating. But Inworld AI was not robust enough for the different types of dynamic interactions and agent-based ideas that Keiichi was thinking about, so he's developed his own software to create the types of interfaces that really lean into the Kami OS idea that he wrote about a number of years ago. We'll be diving into much more of the theory and practice of what Keiichi built, but Niantic Spatial was actually using it within the context of their demo. Now, unfortunately, the internet went down before I had a chance to see this demo. So I did the interview, and then right before my debate, I went down and very quickly had a chance to actually try out the demo.
So in the demo, you're basically walking outside the Long Beach Convention Center, down to what is kind of like a bus stop, where there are these overhead structures shading you from the sun. On the ground, there are these different murals, and you basically walk from one mural to the next. You have computer vision that's detecting them within the context of the Snap Spectacles, and then you have this kind of virtual pet by your side that's hooked up to a large language model. So it's talking to you and asking you questions like, hey, do you want to see this or that? Can I explain this to you? And so you end up engaging in this large language model conversation where you're like, oh yeah, tell me about this or that. And then, based upon where you're located and where you're pointing, it's going to infer what you're looking at and tell you a little bit more about the mural in the sidewalk. So it's essentially a guided tour with this AI agent that you're walking next to. My experience of it is kind of hard to judge, because the internet may or may not have been fully back up again. At the end of each of its responses, the AI agent would ask me a question, and I found myself in these conversations where I know it's not actually interested in what I'm saying. Whatever I say, it's going to ask me another question, and again, it doesn't care about my answer. It just feels like I'm having a meaningless conversation with an entity that doesn't really care what I'm saying, because it's not really responding to me. So it was kind of the worst part of large language models: you end up feeling like you're having a random conversation with ChatGPT, without any purpose or reason other than that the AI is curious to ask you this question. So that was one part of it. But probably the more interesting part of the whole demo was that they had geolocated boundaries for where you could or could not engage with this AI environment. And so what's more interesting is how you're able to be in a specific location with the VPS, which gives a very precise position, down to, like, foot or millimeter accuracy, and then start to get information that's specific to that exact location. That's probably the more interesting concept, but it wasn't necessarily played out in a way that was interesting within the context of the demo. But I could see where things are going to be going here in the future. And you can go back to the interview that I did about the Space Time Adventures tour, which was taking a guided tour through Central Park and starting to use some of these different types of VPS to put virtual characters on top of statues in very specific locations. And so that's probably a little bit more of what's interesting about the ability for developers to add layers of story and layers of spatial context to deconstruct different aspects of what's happening in a place.
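To make the shape of that demo a little more concrete, here is a minimal sketch of how a geofenced activation zone like this might be gated. This is not Niantic's actual code; the names, types, and coordinates are all hypothetical, and a real system would hand off to VPS localization for precise placement once you're inside the zone.

```typescript
// Hypothetical sketch of geofenced activation for a VPS-guided tour.
// All types, names, and coordinates are invented for illustration;
// this is not Niantic Spatial's actual API.

interface LatLng {
  lat: number; // degrees
  lng: number; // degrees
}

interface TourZone {
  id: string;
  center: LatLng;
  radiusMeters: number; // the agent is only active inside this circle
}

// Haversine distance between two lat/lng points, in meters.
function distanceMeters(a: LatLng, b: LatLng): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Coarse GPS gates the experience; precise placement would come from
// VPS localization once the user is inside the zone.
function agentEnabled(userPos: LatLng, zone: TourZone): boolean {
  return distanceMeters(userPos, zone.center) <= zone.radiusMeters;
}

// Example: a zone around the murals outside the convention center.
const muralZone: TourZone = {
  id: "long-beach-murals",
  center: { lat: 33.7651, lng: -118.1896 }, // approximate, for illustration
  radiusMeters: 75,
};
console.log(agentEnabled({ lat: 33.7653, lng: -118.1894 }, muralZone));
```

The design point is that coarse GPS is enough to decide whether the agent should be running at all, while the centimeter-level VPS pose only matters once you're inside the boundary.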
A lot of the topics that I was covering within the context of Tribeca Immersive, with Founders Pillars, Kinfolk, and A Father's Lullaby in my previous series, involved projects starting to use augmented reality technologies to be site-specific and to deconstruct different stories, like the story of capitalism on Wall Street or the story of the slave market on Wall Street, or to create memorials to Black figures in history who have not been memorialized properly by our culture, using augmented reality as a stopgap. So there are lots of ways that I could see more interesting deconstructions or narrative or immersive types of experiences starting to be enabled. Another thing is that Meow Wolf is going to start using this type of technology to expand what they've been doing with their location-based immersive art installations and bring that immersive art ethos into the wider world, independent of wherever you are. They've already been exploring the Meow Wolf worlds with Walkabout Mini Golf, they have their different physical locations around the world, and now there's the collaboration with Niantic Spatial that was announced at Augmented World Expo this year. So I'm very excited to see where that goes. Anyway, that's a lot to say before we dive in, just because I didn't have a chance to try out the demo before we chatted, and I wanted to give you a few of my takeaways on what they're doing and explain some of the different things they're thinking about there at Niantic Spatial. So we're covering all that and more on today's episode of the Voices of VR podcast. This interview with Alicia and Asim happened on Thursday, June 12th, 2025 at Augmented World Expo in Long Beach, California. So with that, let's go ahead and dive right in.
[00:07:14.810] Alicia Berry: Hi, I'm Alicia Berry. In the realm of XR, I have been making AR and VR experiences for about nine years. Currently, we are working on bringing Peridot to the third generation of 3D maps. And so what we're really looking to do is illuminate the future and what's possible using the Niantic Spatial tech stack, but also bring some whimsy into the world through Peridot.
[00:07:35.409] Asim Ahmed: Hey, my name is Asim Ahmed, and I've been with Niantic for about nine years. But we just spun out into a new company, Niantic Spatial, and we're really focused on building the third generation AI-powered map of the world. And so I've been focused on AR for the past several years.
[00:07:51.995] Kent Bye: Maybe you could each give a bit more context as to your background and your journey into the space.
[00:07:56.296] Alicia Berry: Sure. So I started my career in this space, in the MMO space. I worked on Elder Scrolls Online as a senior producer, bringing this multiplayer experience to the world as the first MMO using shardless technology. Then I joined the Meta team to start up their social VR program, shipping Facebook Spaces, Oculus Rooms, and Oculus Venues, then made a hard switch to Oculus for Business. I was one of the founders of Oculus for Business, and I came over to Niantic Labs in 2021 to run the games program, the game tech program, launching NBA All-World, Pikmin Bloom, and Monster Hunter Now, and then taking over the Peridot IP about two years ago, supporting the Peridot team as it moved from the mobile phone into the realm of headsets. We launched four apps last year on headsets, and this year we're not slowing down at all. We have an experience on Snap that we've been building on, and we're going to be leveraging our VPS technology with Snap very shortly.
[00:08:52.890] Asim Ahmed: Cool. I lead marketing efforts, and I've been with Niantic for about nine years, like I said. I joined Niantic just out of college, so I started work on Pokemon Go, and I was able to support some of the other titles we've launched throughout the years. But I joined our Peridot team about four or five years ago, when it was really an early concept on paper. And we launched our first experience on mobile as a full-fledged game, leveraging the power of our Niantic ARDK. And as Alicia mentioned, over the past couple of years, we've really been focused on expanding our franchise into other mediums. So we're on Meta Quest with a fully gamified, really fun experience called Hello Dot. We're on the Snap Spectacles with an application called Peridot Beyond, which is really kind of reimagining our mobile game outdoors on AR glasses. And what we're here at AWE focused on showcasing is the magic when you can take AI, AR glasses, and our geospatial VPS technology and bring it all together. What can that world look like? We're showcasing a glimpse of that right now.
[00:09:56.673] Kent Bye: Yeah, and since the last time we had a chance to talk at the Snap Partner Summit, when you were still Niantic Labs, there has been a selling off of Pokemon Go and Ingress to another entity, and now you've rebranded into Niantic Spatial. The SDKs moved from Niantic Lightship into Niantic Spatial. So maybe just give a bit more context on this shift in refocusing on the spatial data as Niantic Spatial. I'm curious to hear about this transition since the last time I had a chance to speak with Niantic.
[00:10:27.051] Asim Ahmed: Yeah, we as Niantic have always been dreaming of building this third generation map of the world. If you know our CEO, John Hanke, he was kind of the original creator of Google Earth, which started as Keyhole and was acquired by Google, and he worked on Google Earth and Google Maps. Our CTO at Niantic Spatial, Bam, also kind of launched Google Earth. And so we've always dreamed of building this next generation of mapping that can enable technology to understand the world in the way that we understand the world. We have large language models that can understand speech, but now we need models that can understand the real world. And so we're really focused on building the map. The idea was kind of this two-headed beast for a long time, focused on games but also focused on the platform. We really wanted to give both sides of the business their best chance to succeed, and so that's kind of the decision to sell off some of those games to Scopely, an amazing company. Those games now have a long-term home where they can really thrive, and we can take the rest of our platform technology and really focus on building out that geospatial map.
[00:11:30.523] Alicia Berry: Any other comments? Yeah. One of the interesting parts about games is the technology use, and one of the reasons why I joined game companies to begin with is that they're pushing the boundaries of tech. And as we have learned from building these outdoor AR games, we are the industry leaders in understanding this UI language, the technology that supports it, the network. There's so much knowledge that we have from building games at scale that we can apply to other use cases. I'm really interested in how these two spaces overlap, specifically hanging assets in the real world, understanding AI, understanding our contextual space, and being able to give us the right information at the right time so we're not inundated with information overload. And it's also in front of us, as opposed to bending over and looking at it. I think that just the user interface will change pretty much everything, and I'm very excited to be at just one of the few companies in the world that has this outside thought. Whenever you talk about mixed reality and AR right now, people talk about the furniture, they talk about the rooms. But I think about the world as a character, and to understand this character, I think we need AI, we need the cameras, because if not, there's just going to be information overload. So that's kind of how I think of these two worlds. In some ways, I'm happy that we split so we can focus. In other ways, I'm super sad to miss my friends, and to have had the Pokemon Go franchise with us was great. But I'm also interested in this unexplored space of head-mounted displays or wearables, wrist-mounted displays or rings or, like, soles of your shoes. There's so many different ways to do this input modality, but I think you have to understand the context, or else we're just going to be wasting time with extraneous information.
[00:13:18.658] Kent Bye: And so you say that it's the third iteration, and there was just recently an announcement here at Augmented World Expo that Niantic and Snap are collaborating on this new map development. Maybe you could just give a bit of context on the previous two iterations and what's new with this third round of trying to map the world.
[00:13:36.341] Alicia Berry: We're thinking centimeter-level precision on foot is our 3D vision. And then the third iteration of the map is being able to create the language and the protocols for both people to understand it, but also machines, also AI. And with the integration with Snap, or the partnership with Snap, we're just really excited about it because we have the Snapchat application, where 400 million people are using this map every day, and the data on it is great for social, but it's not necessarily great for centimeter-level foot precision. On top of that, with the Snap Spectacles, now we have this peripheral that will allow us to really grok the world in a way where we can do navigation, we can look at people and potentially see that these are our friends and maybe put silly faces on them, and we can enable Lenses where we're playing real-world outdoor AR games very quickly, because building Snap Lenses is pretty fast. And then we can push the boundaries of just having a first-mover advantage in understanding what information comes up to people, what they are interested in, and iterating on both the 2D map, in terms of Snap Maps, but also the 3D map, in terms of not only rendering but understanding, and then the movement of people between these two. So that intersection is very interesting to me.
[00:14:48.001] Asim Ahmed: Yeah, I was just going to say, I think you'd imagine the last generation of a map to be something like Google Earth, where you can understand location. And Bam, in his keynote yesterday, talked about how Google Earth and Google Maps really enabled this deeper understanding of the world, but only to location accuracy. And as Alicia mentioned, what we're trying to build and what we're trying to enable is the VPS, the visual positioning system. That enables us to understand the world to centimeter-degree accuracy, so that you can layer on content very precisely: if I leave this very specific AR artifact in this place, someone else at some later point in time can come and find it in that exact same spot. Or we can localize with the VPS and have a very specific, interesting, immersive experience happen. For example, our VPS demo is right down here by these murals. When you put the Snap glasses on and you kind of stare at the ground and localize in a second, you'll see your Dot pop out, and it has this amazing contextual awareness of the place that we're at. And now you can ask it questions and get this really interesting understanding of the history of the place that you're at. Right now we only have that at one VPS location, but we also have millions of other VPS locations that we'll be able to enable over time. And the amazing thing about this partnership with Snap is that it'll enable Snapchatters, people that are using the Snapchat app, to help us build that map together. So the VPS is going to help enable these devices, as Alicia mentioned, like AR glasses, and enable AI agents, but also, in the future, enable things like robotics to understand the world as well.
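As a rough illustration of the persistence Asim describes, here is a minimal sketch of anchors keyed to a VPS location. This is not Niantic's actual SDK; every type, function, and ID below is a hypothetical stand-in for what a real visual positioning system would expose.

```typescript
// Hypothetical sketch of persistent AR anchors keyed to a VPS location.
// Names and types are invented; a real system would expose its own SDK.

type Vec3 = { x: number; y: number; z: number };
type Quat = { x: number; y: number; z: number; w: number };

// A pose expressed relative to a VPS location's coordinate frame, not the
// device. That is what lets a second visitor recover it later.
interface AnchorRecord {
  anchorId: string;
  vpsLocationId: string; // which mapped place this anchor belongs to
  position: Vec3;        // meters, in the location's frame
  rotation: Quat;
  payload: string;       // e.g. a reference to the AR content to spawn
}

// Store keyed by location, so a visitor only fetches anchors for the
// place they have localized against.
const anchorsByLocation = new Map<string, AnchorRecord[]>();

function saveAnchor(record: AnchorRecord): void {
  const list = anchorsByLocation.get(record.vpsLocationId) ?? [];
  list.push(record);
  anchorsByLocation.set(record.vpsLocationId, list);
}

// Once a device localizes (VPS reports which location it is at and the
// device's pose in that location's frame), each stored anchor can be
// transformed into the device's world space and rendered in place.
function anchorsFor(vpsLocationId: string): AnchorRecord[] {
  return anchorsByLocation.get(vpsLocationId) ?? [];
}

saveAnchor({
  anchorId: "mural-note-1",
  vpsLocationId: "long-beach-mural-03",
  position: { x: 1.2, y: 0.0, z: -3.5 },
  rotation: { x: 0, y: 0, z: 0, w: 1 },
  payload: "history-card:mural-03",
});
console.log(anchorsFor("long-beach-mural-03").length); // 1
```

The key design point is that the anchor's pose lives in the location's coordinate frame rather than any one device's, which is what lets a later visitor find the content in the exact same spot.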
[00:16:18.376] Kent Bye: Yeah, I was just at Tribeca Immersive, and there were a couple of projects that had AR components, and there were anchoring issues. One was anchoring on, let's say, the Wall Street Stock Exchange with the six columns, and then there'd be an ad that got put up, and they'd have to re-scan it because it was occluded and it wasn't detecting with anchors. Another would put up a poster to say, we're going to anchor on this, but then it wasn't legal to put the poster there, so it got scraped off. Or they'd try to put a QR code somewhere, and then the QR codes were erasing. So I had this visceral experience of trying to do these AR experiences anchored to things, when we live in an ever-evolving, changing, process-relational world where some things are not always very static. So how do you negotiate this challenge between the VPS anchoring to something that you know is going to be solid and have at least stability over longer periods of time, versus things that may be changing very rapidly? Just curious to hear how you negotiate trying to create that accuracy, but in the context of a world that's ever-changing.
[00:17:18.618] Alicia Berry: We've got a solution coming for that. It's our large geospatial model. And what we're finding with the visual positioning system is that it's a lot more resilient to those types of changes. I do think that with the QR code solution, which has been with us for a long time, we're trying to deliver an experience that is significantly better than that. And so the installation that we have to demo continuously localizes, like nonstop. So there is not one point you need to look at. There's not one angle you need to see it from where you can get a really great idea of where you are. And what makes it resilient is that because you're looking at the full space, you can change a poster and it doesn't matter. It can be snowing, it could be raining, it could be cloudy, and it's resilient. So with our large geospatial model, the overall goal is to be able to overcome this, and also to be able to not only interpolate, you are here, here, here, here, but also extrapolate. I think that the big benefit for us as a company is that when we have people outside exploring the world, we don't have to rely on GPS alone. We don't have to rely on looking at this poster or looking at this QR code. We will understand your space in real time, hopefully without interfering with any of your other expectations in terms of data collection. I think one of the more interesting use cases that I've been tinkering with in my head is: if I am here, and Asim is also here, we know that we are both here, not because we've shared our data, but because we are both localized in the same location. So all of a sudden, if I'm playing some sort of location-based game, I could potentially understand everybody around me without knowing any of their personally identifiable information. So I think it's a really interesting way to do meetups, create community, do multiplayer, without having to say, we need to now localize together, we need to share this information, or the app needs to know this about us. The app knows the place, and the people are at the place, and all of a sudden that becomes the tie, that becomes the primary key. So for me, I'm very excited about this large geospatial model, because I think it will expand humanity's human-to-human interactions based on place, not data.
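Alicia's idea of the place becoming the primary key can be sketched in a few lines. Assuming a hypothetical backend where each localized device reports only a random session ID and the VPS location it localized against, co-presence falls out of a simple lookup, with no personal data exchanged:

```typescript
// Hypothetical sketch of "place as the primary key": players are grouped
// by the VPS location they localized against, using only ephemeral
// session IDs, never personal information. All names are invented.

interface LocalizedSession {
  sessionId: string;     // random, per-session, no PII
  vpsLocationId: string; // the shared place this device localized to
}

const sessionsByPlace = new Map<string, Set<string>>();

function joinPlace(s: LocalizedSession): Set<string> {
  const peers = sessionsByPlace.get(s.vpsLocationId) ?? new Set<string>();
  peers.add(s.sessionId);
  sessionsByPlace.set(s.vpsLocationId, peers);
  return peers; // everyone currently localized at the same place
}

// Two people at the same mural end up in the same set without the app
// knowing who they are or exchanging any account-level data.
joinPlace({ sessionId: "a1f3", vpsLocationId: "long-beach-mural-03" });
const peers = joinPlace({
  sessionId: "9c2e",
  vpsLocationId: "long-beach-mural-03",
});
console.log(peers.size); // 2
```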
[00:19:22.179] Asim Ahmed: I was just going to add, we've been building towards this VPS for years, and we have millions of scans around the world, which has enabled us to turn on one million VPS locations. But these are scans of places that have happened in different seasons, from different players, from different angles. And so we have a good understanding of these locations. And to Alicia's point, when you feed that into the large geospatial model, the dream is that, over time, you can have as little information as possible, but you'd be able to build a pretty accurate resemblance of that place that we can then localize against pretty accurately. So that's the dream, and that's what the large geospatial model hopes to solve for.
[00:20:02.717] Kent Bye: I also find myself in this unique situation where I have chosen not to get a data plan on my phone, where I pay $3 a month instead of $50 a month, which means that I can't do a lot of AR experiences, because I often don't have data and they require it. And we're about to do this experience, and the internet goes down, and we can't do the experience. And so there's this question that I have, which is: are there fallbacks for some of these experiences in terms of having local data? Or is this the type of thing where, in order to really have the amount of data that it relies upon, you need these connections to the internet, the cloud, these deferred rendering situations, and it can't work offline? Just curious to hear any comments on that, especially as the internet is down and we can't do the experience.
[00:20:44.520] Alicia Berry: Yeah, great question. When I first started working in general, I had a Palm Pilot, and I learned how to write with the stylus, and at the end of the day, you would dock it, and your information would go into your PC, and that is how we got contacts, or what have you. I can see a world where we have on-device models, maybe even a large geospatial model, where we don't have to go out to the internet. And I continue to push, and I continue to hear feedback, that these devices are going to be able to host their own LLMs, which means that you can basically have an android, and I don't mean the Google kind of Android. You can have your own robot, and maybe it's your glasses or maybe it's your watch, and so you don't need to have the internet. As we've been building this experience, Wi-Fi and Bluetooth have been just a nonstop challenge, because it's just the nature of the beast right now. Most of this compute is in the cloud. I can't wait for the chipset to catch up with the cloud and to be able to have it on device. For example, our Hello Dot experience over here: we launched it originally without needing an internet connection, for this very reason. But then we wanted to create some sort of progression system, and all of a sudden we have to start saving things off the device and need an internet connection. So we want to be there. We just need the hardware to catch up with us.
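The offline-first pattern Alicia describes, where play keeps working locally and state syncs when a connection returns, might look roughly like the sketch below. The event names and the upload callback are invented for illustration; this is not the Hello Dot codebase.

```typescript
// Hypothetical sketch of an offline-first progression system: gameplay
// works without a connection, and state changes are queued locally and
// synced when the network comes back. All names are invented.

interface ProgressEvent {
  at: number;   // Unix ms timestamp
  kind: string; // e.g. "dot-fed", "mural-visited"
  data: unknown;
}

class OfflineQueue {
  private pending: ProgressEvent[] = [];

  record(kind: string, data: unknown): void {
    // Always succeeds locally; no network needed to play.
    this.pending.push({ at: Date.now(), kind, data });
  }

  // Called whenever connectivity is detected; `upload` stands in for
  // whatever real backend call the app would make.
  async flush(
    upload: (batch: ProgressEvent[]) => Promise<void>
  ): Promise<void> {
    if (this.pending.length === 0) return;
    const batch = this.pending;
    this.pending = [];
    try {
      await upload(batch);
    } catch {
      // Network failed again: put the batch back, in order, to retry later.
      this.pending = batch.concat(this.pending);
    }
  }
}

const queue = new OfflineQueue();
queue.record("dot-fed", { treat: "berry" });
queue.flush(async (batch) => console.log(`synced ${batch.length} events`));
```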
[00:22:00.100] Kent Bye: Yeah, back at Magic Leap's LeapCon in 2018 in Los Angeles, which was their very first consumer conference, Meow Wolf was doing a demo on Magic Leap. Since then, Meow Wolf has been collaborating with Walkabout Mini Golf to translate some of their IP into these immersive games. And now, here at Augmented World Expo, just yesterday, there was the announcement that Meow Wolf is collaborating with Niantic to start to use this VPS, centimeter-level accuracy to do augmented reality overlays for the marketing campaigns that they do once a year, or to have other immersive experiences that look at these amazing installations and add layers of augmentation and additional storytelling. So I'd love to hear any comments you have on this collaboration between Niantic and Meow Wolf and the use of the Niantic Spatial technology to do these kinds of localized augmented reality experiences.
[00:22:51.294] Alicia Berry: I have to say, I love Walkabout Mini Golf. I think it's a killer app in mixed reality and VR, so that's awesome. But from our perspective, the reason why Meow Wolf is such an interesting partner is that they really want to expand their museum experience into the real world, not just as a one-off or a marketing activation, but as an immersive, world-scale MMO. And we just think that we are... very primed to work with a great creative partner who really understands this cutting-edge, interactable art, and then we can bring this real-world understanding to them and build something compelling where you can bring Meow Wolf into your day-to-day. Because it is an interesting experience. Like, for me, you go to Las Vegas and you go to Meow Wolf because you're in Las Vegas. I would love to experience Meow Wolf in my hometown. I would love to experience these kinds of thought-provoking art pieces in my day-to-day, and think how much more inspirational that could possibly be. I don't know if you want to add anything more.
[00:23:42.438] Asim Ahmed: I was going to say something similar. I mean, they just do such amazing stuff, and the idea with this partnership is that we can bring their universe anywhere you are. But the other exciting part of what this collaboration can look to do is: how can we bring people from outside into these immersive experiences, and then they can take some of that away with them when they're leaving? There's not too much we can share at this early point in time, but we're really focused right now on a VPS-enabled experience at their Denver location. And we'll see, if that's successful, where the collaboration can go from there.
[00:24:15.089] Kent Bye: One of the other things I love about Niantic is that you're really using all the different platforms. It feels like with every new XR device that's launched, you're one of the first third-party apps, sometimes a collaboration with first-party access, producing some of these different experiences. And so I'm curious to hear some of your reflections on the current landscape of the different devices. We have the announcement that the Snap Spectacles, or I guess they're being called the Specs, are going to be launching next year. We have smart glasses that are out, a little bit less in terms of visual output, but with audio components, though actually most of them don't have the ability to run third-party apps just yet. But we also have Android XR along with other devices. So I'm just curious to hear a bit of your reflection on the landscape of the different devices and how Niantic Spatial's strategy is to really be a part of each of these different devices, and the insights you get from virtual reality, augmented reality, and mixed reality across the spectrum, and maybe even with smart glasses and AI all thrown in there. So yeah, just curious to hear any reflections on that.
[00:25:14.840] Alicia Berry: We just try to find a use case for every device. I mean, if you're just talking about the Peridot franchise, we're always trying to be by your side, no matter where you are. But with Niantic Spatial, we want to be where our customers are. And if our customer's on Apple Vision Pro, we're on Apple Vision Pro. If our customer's on Quest 3, guess what? That's where we are too. With Snap Spectacles, it gives us an opportunity to kind of push the industry forward outdoors. It's the first mass-market, well, next year it will be a mass-market device, but at least right now it's a device that will go outside, which really combines our mission with hardware. So for us, there's never been a better time to be a map-based company, especially now that we can hang assets with persistence, because all these devices are coming to the market at the same time, and they're all competing for the same kinds of experiences that we provide. We really want to partner with all of them, and we do partner with most. So we continue to try new devices, and we continue to try to find different use cases. But I think it really starts with us with the map. If we can get our map on the device, and if we can get camera access to be able to localize with VPS, we can either build experiences or work with partners to build experiences. And that's why we spun out as Niantic Spatial, because it's the right time.
[00:26:26.892] Asim Ahmed: I would just add, the thing I love about Peridot so much is that it's our own franchise, and it really does give us this opportunity to innovate, to push the boundaries. And that's what's enabled us to try out so many devices and explore development on those devices. As I mentioned earlier, we started on mobile. Five years ago, the ultimate dream was that we wanted to be on a headset. We wanted to be on outdoor AR glasses. The technology wasn't there yet, and so we started tinkering on what an always-on AR experience could look like on the mobile device. That gave us a lot of learnings when we wanted to translate that from mobile to headset. We started last year with Hello Dot on the Meta Quest, and that gave us this really big opportunity to bring our hands and the physical space into the experience, versus just looking through this small window through the phone's camera. And now we get to partner with Snap, and we get to be on their outdoor AR glasses, and we get to take the learnings that we have from this indoor headset experience and from thinking about spatial game design, and ask: how do we bring that outdoors? What are different input modalities? How do you interact with your creature? It's not just pushing buttons; maybe talking to the creature is one of the ways that you'll interact with it. And so with every opportunity and every device we're on, we're taking all these different learnings and different tool sets. Hopefully, a couple more years down the line, as these devices become more mass-market and we're really outdoors with true AR glasses that can understand and see the world the way that we do, we'll have that really amazing immersive experience with something like Peridot that can be with you everywhere, wherever you are, whenever you want. But, you know, I'm dreaming of the day that we have those AR glasses. We're getting closer and closer each year. I'm really excited to see what Snap brings out next year. But even where we're at right now with the Snap Spectacles, the fifth generation, we're getting a really amazing glimpse of what that future can look like. We also have other devices on the market, like the Meta Ray-Bans. I think that's a really interesting opportunity to explore. Augmented reality comes in so many different forms. A lot of people usually just think visual, but audio can be a really amazing opportunity, and you can think about audio experiences that get you outdoors exploring the world without having that visual HUD. So maybe there's an opportunity there to explore. But we have so many amazing devices. Smart rings are so cool. I could see that as another input. And yeah, we love watches. There are some people doing some cool stuff with, like, shoe soles. So it's going to be a really interesting and exciting world in the next couple of years.
[00:28:50.737] Kent Bye: So one of the big themes here is AI. And I'm actually going to be on a panel later today arguing against some of the AI hype that I see, just because I feel like so much of the industry is acting almost like AI is going to come in and be the killer app that's going to save us all. Just reading through Karen Hao's Empire of AI, one of the points that she's making, looking at OpenAI as an example, is that you have this tension between what used to be responsible innovation teams that would be red teaming or looking at trust and safety issues, and the pressure of launching and staying competitive, where a lot of those teams have been bypassed to just kind of get this stuff out into the world. And so I have this larger concern, as we continue to push forward, about issues around privacy, responsible innovation, contextually aware AI, and what the safeguards are in terms of information that's being shared. So I'm just curious how Niantic approaches this responsible innovation dilemma, which is to continue to push forward and innovate on the technology that's there, but also to have some red teaming or looking at the larger ethical implications or privacy implications, and how that gets negotiated between innovating but also making sure that it's done in a responsible way.
[00:29:59.087] Alicia Berry: Well, we have been in business for one week, so it's hard to say we've got this great program or whatever. Data in general is a hot topic, and it's something where we don't even think about doing anything until we understand where the data's going, how we're going to use the data, and where it could go wrong. We've got incredibly great security teams. In terms of trust and safety, I think we're in a great position with the real world in that we've already done a lot of the work with our VPS to obscure faces and license plates. Because of Pokemon Go and other games, we understand that this is a prison, we don't go there; this is a school. We understand restricted locations, and we have for many years, because, I don't know if you remember, in Pokemon Go there were some original... unforeseen activities, where people would walk out into streets or potentially fall off a cliff because they were just so into playing the game. And I think, as Jacques Ellul said, with every technological advance, there are unforeseen negative repercussions. And I think AI is no exception. And I agree with you. I think that the risks are very big. I think in addition to developing and innovating in the LGM space, we're going to have to innovate and develop the way that we keep it safe for everyone. That's on the roadmap. I can't say we've got a great solution at this time, but it's coming.
[00:31:13.038] Asim Ahmed: Yeah, I was just going to say, we care deeply about the safety and the privacy of our users. And in many cases, it's an opt-in to do the scan. The player knows that they're doing the scan. They know that the scan is coming to us. And we have that very clearly laid out in our privacy policy in terms of how we use that data. And as Alicia said, our VPS can obscure faces, license plates, and all these other things. We care really deeply about it, and that's always at the forefront of how we think about the technology that we build, and even the experiences that we build. How are we doing it in a responsible way? How are we doing it in a safe way? We want people to engage in the world in the safest way possible.
[00:31:49.726] Kent Bye: And finally, what do you each think is the ultimate potential for spatial computing, geolocated types of immersive experiences, and what they might be able to enable?
[00:32:00.738] Alicia Berry: I think it's the end of buttons and screens. I think it's just being able to interact with 3D objects. I'm sad that we haven't been able to do the demo yet, but one of the things that we kind of fought about internally is, what is the output? Is it going to be text? Is it going to be voice? I really wanted it to be 3D objects, or maybe 360 video. I don't think we're limited to text and voice anymore. So one of the things that I'm most excited about is a way to communicate that's so accessible to everyone that potentially language no longer becomes a barrier, or perhaps you don't need to hear. I'm also really excited about the time that we will save when we don't have to abstract through a QWERTY keyboard. Potentially, we can just communicate in a way where we don't have all these barriers. I'm pretty excited about that future.
[00:32:43.097] Asim Ahmed: You know, part of the reason we're building this geospatial AI map goes beyond just immersive entertainment, so we can also build solutions for warehousing and logistics and so many other things. But the thing I think I'm most excited about personally: I've been dreaming of the day when I can have AR glasses that can understand the world the way that I do, that can be with me by my side, that can light up really interesting facts about the world as I'm going through it. And I've been trying to convince Alicia, and part of the demo that we'd love to show you today is this idea of these really adorable creatures that can navigate the world with you, by your side, that can show you all these amazing, hidden, interesting spots around the world and teach you the history of the world. And so I would love to turn Peridot into these spatially intelligent AI agents. As I'm going throughout the world, landing in Chicago for the first time, I can say, hey Dot, I'm here for the first time, can you take me on a 30-minute adventure? And the AI knows my personality, it knows the types of things that I'd like, it knows the geolocation relevance, the spatially intelligent relevance of the place that I'm at as I'm looking around, and it knows enough about me that it can create this really bespoke, fun, 30-minute scavenger hunt adventure that I can go on, and I can basically have Dot as my tour guide to this place for the first time. So that's one of the visions that I have of what we can, I think, very easily solve over the next couple of years as, again, these devices become more mass-market and we have a consumer device out there. But there are so many possibilities beyond that. And I think you can really skin the beautiful world outside in so many different ways. Maybe you want to see the world through the lens of Disney. Maybe I want to see the world through the lens of Marvel, right? And I think there are so many ways, with AI and with this spatially relevant understanding, that you can make the world so much more magical than it already is.
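Purely as a speculative sketch of how the context for such an adventure might be assembled, here is one way to combine a player's stated interests with VPS-derived knowledge of nearby places into an agent prompt. The persona, data shapes, and example stop are all hypothetical, not anything Niantic has described:

```typescript
// Hypothetical sketch of assembling context for a "30-minute adventure":
// a prompt built from the player's interests plus nearby, VPS-known
// places. The persona and data shapes are invented for illustration.

interface PlaceFact {
  name: string;
  summary: string;
  walkMinutes: number; // from the player's current position
}

function buildAdventurePrompt(
  interests: string[],
  nearby: PlaceFact[],
  totalMinutes: number
): string {
  const stops = nearby
    // Only consider stops reachable within half the time budget.
    .filter((p) => p.walkMinutes <= totalMinutes / 2)
    .map((p) => `- ${p.name} (${p.walkMinutes} min walk): ${p.summary}`)
    .join("\n");
  return [
    `You are Dot, a friendly AR tour guide.`,
    `The player likes: ${interests.join(", ")}.`,
    `Plan a ${totalMinutes}-minute walking adventure using only these stops:`,
    stops,
    `Keep directions short and ask at most one question per stop.`,
  ].join("\n");
}

console.log(
  buildAdventurePrompt(
    ["street art", "local history"],
    [{ name: "Harbor mural", summary: "1930s port scene", walkMinutes: 8 }],
    30
  )
);
```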
[00:34:31.855] Kent Bye: Anything else left unsaid you'd like to say to the broader immersive community?
[00:34:35.339] Alicia Berry: Thank you, Kent. You're such a great, strong voice and have been for so many years, and it's an honor to chat with you.
[00:34:41.069] Asim Ahmed: Yeah, I just love being a part of this community, and it's so amazing to be at an expo like this where you get to see the innovation happening so quickly, and I'm excited to see what's to come.
[00:34:50.297] Kent Bye: Awesome. Well, Asim and Alicia, thanks so much for joining me here on the podcast to break down a lot of where Niantic Spatial is, a week old now, but moving through a long legacy of what was previously Niantic Labs. I'm very excited to see where you continue to take all this here in the future, especially because you're so connected to what is happening out in the world and understanding all these different relational dynamics of the contexts and the ways that you can start to put experiences on top of the world and encourage people to go out into the world and engage with each other. I feel like that's a key part of escaping out of our rectangular portals that are dissociated from our bodies, and of moving our bodies, being in space in relationship to each other and within the context of community, and finding new ways that technology can create these magic circles that open infinite possibilities for people to connect in new and amazing ways. So I'm excited to see where Niantic Spatial takes it all here in the future. So thanks so much.
[00:35:41.246] Asim Ahmed: Awesome.
[00:35:41.486] Asim Ahmed: Thanks so much, Kent.
[00:35:42.747] Alicia Berry: Thank you.
[00:35:44.208] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.