Here’s my interview with Ziad Asghar, Qualcomm’s Senior Vice President & General Manager of XR & Spatial Computing, that was conducted on Wednesday, June 11, 2025 at Augmented World Expo in Long Beach, CA. See more context in the rough transcript below.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my series of looking at AWE past and present, today's episode is with Ziad Asghar, who is Qualcomm's Senior Vice President and General Manager of XR and Spatial Computing. So he's essentially taking over Hugo Swart's job. Hugo is now doing Android XR, and now Ziad is in charge of all the things that are happening with XR and spatial computing within the context of Qualcomm. So it was really great to be able to get a chance to sit down with him and get a sense of, you know, some of the things that he's thinking around, a little recap of what's happening with the ecosystem, and a little bit of a sneak peek of this collaboration that's happening between Samsung and Google. Especially because, you know, Samsung and Google have both made entrances into the head-mounted XR space before, with the Daydream and with the Gear VR. And then they, for one reason or another, decided to take a step back from the hardware side, and now they're entering back in. And so just getting a larger context of that story and how Qualcomm is collaborating with Google and Samsung on Project Moohan. But also their booth ends up being like an exhibition of all these other types of projects that are there. I didn't spend as much time this year trying out all the different demos and everything else. I was just really trying to get a sense of what is happening in this industry in terms of what's the story. And so I spent more time, you know, just sitting down and having conversations with people, including one like this, just to get a sense of where things are at and where they might be going here in the future. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Ziad happened on Wednesday, June 11th, 2025 at Augmented World Expo in Long Beach, California. So with that, let's go ahead and dive right in.
[00:01:52.765] Ziad Asghar: My name is Ziad Asghar. I lead XR development, technology, and product. Super exciting times. I like to always say that, hey, this is probably some of the most interesting work at Qualcomm, and I get paid for it. But we're really focused on driving AI and XR and how those come together, but also all the different form factors that are evolving in the XR space, whether these are proper mixed reality devices or smart glass-like products on the other end of the spectrum.
[00:02:19.477] Kent Bye: Great. Maybe you could give a bit more context as to your background and your journey into the space.
[00:02:23.641] Ziad Asghar: Sure. Yeah. So, you know, I have been fortunate to actually see kind of the development of the application processor in a way on the mobile side. And that's kind of where it started in a way. If you guys recall, there was a time when it was just Symbian. And these are phones that used to be actually used for voice calls. Surprise. Well, they're not used for that anymore. And that advance has happened to these amazing application processors with best-in-class camera technology and gaming capabilities and, more and more so now, AI capabilities in them. So the smartphone evolved from just a device that we use for talking to a device that has basically integrated into it these 20 other products, like the radio and basically the camera and the location or the GPS device and all. Those are technology investments that we've been making at Qualcomm. For the last six, seven years I've driven the technology roadmap for camera, for graphics, for CPU, for AI, audio, video, interfaces. And all these technologies basically go into our different product lines like automotive, XR, mobile, and others. So that's kind of what has brought me to XR, because as I'd been driving those areas, it was very evident to me that there is an amazing opportunity today in terms of how we can take XR products and combine them with the AI goodness to be able to solve many of the problems and challenges that the XR space has had. And so that's what I'm doing now.
[00:03:47.628] Kent Bye: Yeah, and the way that I start to think around this moment in time in the XR space, it sort of feels like a new cycle with all these smart glasses that are coming online. But I'm curious to hear your reflection on that, from what we've had with standalone mobile AR and mobile VR devices, and then how it's kind of building up to now. We're starting to see the lightweight with just the Meta Ray-Ban glasses, but now more and more talk around having a form factor that is adding more pass-through, see-through AR and other aspects, but also just smart glasses in general. So just curious to hear some of your thoughts on this moment in time and what types of trends you're seeing.
[00:04:24.810] Ziad Asghar: No, absolutely. The device is absolutely going through an amazing innovative spurt of sorts, right? So basically what we're seeing on the MR side is people are realizing that the devices are heavy. They're ergonomically not that well done. So the result has been that people can't wear them for an extended period of time. So one of the things that we're seeing on the MR side is people are talking about a disaggregated device. And that means a much lighter sort of a visor with a puck, such that you offload a lot of the compute and the rendering onto the puck. And a person can wear that device for much, much longer because it's not as bulky and it's not as difficult to wear for a longer period of time. And especially for applications like, for example, fitness and medical and some of the other ones, you cannot have a very big device on your face. That's more on the MR side. But the other phenomenon that's happening on the mixed reality side is it's not just video see-through anymore. People are also exploring optical see-through-like devices, which by their very nature are lighter as well. But like you said, on the smart glass side, the Meta Ray-Bans have been a real breakthrough success, and that has really gotten people thinking about this form factor. Now, the way I explain it is there are basically three different modes of that device. In a way, one mode is basically when it's connected to the cloud. So this is the Meta Ray-Bans-like scenario, where basically you go through a device like the smartphone to the cloud, and the AI processing and all happens on the cloud mostly, but a lot of the video and the camera and the audio goodness all happens on the device itself. The second model that we are seeing is some of the work that we're doing with Android XR, with this huge platform that we have today, which is Android, and basically bringing XR to that platform. And what that means is now taking the smart glass and attaching it to a device like the smartphone. And in the future, maybe attaching it to a PC or an auto and so on and so forth too. But the third model over there is essentially you take a smart glass and attach it to a compute block, call it a puck. And basically that has all the compute and AI goodness in it. And all of these allow for very different verticals, very different end customers. And I think we are very excited because it really opens up our possibility to be able to reach a very wide swath of end devices and applications.
[00:06:33.450] Kent Bye: And we've also seen how artificial intelligence and being able to use large language models let you have conversational interfaces and speak into the glasses, you know, the glasses that end up being sort of like a Bluetooth headset, but that are also providing spatial audio or allowing you to have like a hands-free interaction with technology. So just curious to hear your take on the ways that AI is in some ways kind of driving the adoption of some of these smart glasses. On the other hand, Android XR has been very clear that these XR devices are very much built for Gemini and their AI platform. So just curious to hear some of your reflections on this relationship between AI and XR and how they're feeding off each other in different ways.
[00:07:17.287] Ziad Asghar: Yeah, I mean, this is absolutely the part that's going to bring this amazing inflection point that I talked about earlier, because actually there's benefit both on the MR side and on the AR and smart glass side. So we'll start with the MR side of things. So if you think about it, basically one of the challenges on MR and VR has been the availability of content. And like I mentioned actually in my talk yesterday too, the ability to use AI to generate content is going to really make the difference in the availability of all this content in there, right? You can use these amazing models to create 3D objects, like text to 3D or speech to 3D, and then some other models like Gaussian splatting and other techniques to bring 2D content into 3D content. Another aspect I think that's happening at the same time is the capture that's possible on most devices like smartphones today. There are multiple cameras, so if you could use them to capture spatial video, you have so much more content that people will be able to generate. So from the work that we've been doing on the mobile side, we're enabling a lot of that capability, and that's going to bring in a lot of content there too. And then maybe the last one is you can imagine that you're playing a game in MR, and now those NPCs that are over there, you could actually be having an LLM-like session with those NPCs. Or you can create content, say, create me a sword or a shield that's created exactly for your likes and dislikes, because we know what your likes and dislikes are on the device, not in the cloud necessarily. On the smart glass side, you know, you've seen the amazing work that people have done on LLMs, large language models, but also on agents. And I think the best device for doing agentic flows is really a smart glass, because it would be completely seamless. There is no mouse. There is no touchscreen. You actually interact with your device using voice. And at the same time, you have a camera, so the device gets to see what you see. It actually hears what you hear. With that additional context that you're able to get, you can do the best-in-class generative AI and language models on a device like the glasses. And that's why what I showed yesterday was a demo of a billion-parameter Llama model running locally on the glasses. Nobody else has been able to show a capability like that. But what that means is we have enough AI capability in products like this where the device can even function independently, even when it's not connected to the cloud. So it really opens up a whole plethora of applications there as well. But additionally, you can think of education applications. You can think of medical applications where people can use AI to be able to enhance all of those use cases. I like to say that smart glasses should give you superpowers. They should give you an ability that you don't have. So for example, if you go into a meeting and the other person doesn't speak English, if the glasses can translate for you and you can see the translation on the display, well, it allows you to converse and deal with people that you otherwise wouldn't have been able to. Or to be able to use it in the education realm, where a student is wearing those glasses, but AI is helping him figure out exactly what he's not able to understand and giving that information like a live AI tutor for him.
Or if you go into the medical domain, for nurses and other practitioners, it's able to, you know, bring in an X-ray report in front of them. But when it brings the report up on the display, it also tells them, hey, look at these two or three spots in the X-ray, I think those are the areas where you may want to focus. Or on a blood chemistry report, it says these three numbers are out of range, right? Again, it gives you powers, gives you abilities that you don't have, by the inclusion of AI. So I really think these two technologies are super synergistic, and it's like AI makes XR better, but XR and smart glasses actually make AI better in many ways.
[00:10:40.079] Kent Bye: Yeah, and coming to AWE, I think it's really distinct when I walk the showroom floor and just see how many of the different devices that are there are also using some level of a Qualcomm chip on the back end, for both XR and AR devices and even the smart glasses. Qualcomm has been able to help cultivate this amazing ecosystem of over 100 different XR devices now. And so I'd love to hear some of your reflections on what has ended up being kind of a blue ocean strategy for a lot of these companies to be able to cooperate with Qualcomm. But yet some of the stuff that they're giving you an early look at, in terms of what they need for their devices, is actually being fed back into Qualcomm's larger ecosystem of all these other devices that are out there. And Qualcomm is really at the center of that. So I'd love to hear some of your reflections on this ecosystem that's been cultivated by Qualcomm.
[00:11:28.008] Ziad Asghar: Yeah, I think I'm really excited about this very, very healthy ecosystem. With this very vast ecosystem, we have the ability to really bring some amazing innovation into the space. And, you know, we at Qualcomm have a very interesting vantage point because we really get feedback from all of these different types of devices, from the lightest smart glasses to the most complex VR- and MR-capable products. And we are able to bring those back into our solutions. We are able to learn from the work that we have done, for example, on the perception side, and look at those algorithms and look at the techniques that we need to harden in silicon to be able to get even better power consumption, for example. We look at the fact that, hey, a partner brought in this new kind of camera sensor, and we go and enable it in our silicon. Or, for example, we learn that, hey, we have the need for an even higher-end display capability or this unique feature on the graphics. For example, we have brought in full foveation on the graphics side, right? Techniques that only Qualcomm has been able to offer. But I think the real boon is the fact that with this very large set of people in this ecosystem, you can bring some amazingly innovative form factors that are not even there today. You know, these are smart glasses, but hey, maybe you can do earbuds with cameras on there, which allows those people that don't really want to use glasses to still have that capability, perhaps, right? We have technologies for that. So this innovation on the form factor side is, I think, going to really push us into this next revolution, which is the spatial computing revolution. And I think it's really, really blue ocean at this point in time, because there are, of course, the established players, but there is a whole long tail of companies that are small, scrappy, and coming up with some amazing, amazing experiences.
[00:13:09.215] Kent Bye: And I guess one of the challenges of being a company like Qualcomm is that you have to know what the trends are going to be many years into the future, in terms of designing and producing these chips and having the hardware, and then having the consumer market also catch up. And so, you know, there were a number of different AR pass-through devices, but then there was kind of a waning and not as much demand. But now we're seeing smart glasses. So just curious to hear around what are some of the trends that you're seeing now that you can sort of project out: what can we expect to see in how this industry is growing? What is really taking root?
[00:13:41.007] Ziad Asghar: I think one of the challenges in the past has been with the availability of, for example, great displays that you could scale and be able to mass produce. And I think that problem is getting solved to a great extent with some very, very good quality waveguides and micro-LED-like technologies. So I would say that I see a very clear path to, number one, smartphone OEMs starting to use smart glasses as an accessory to the smartphone. The great part about that is that basically we have a very strong business on the mobile and the smartphone side, and really those players are seeing that, look, this is the accessory of choice. So that's going to be one big trend that I see. The second big trend that I see is that glasses with displays that are actually very, very light, like the ones that I was wearing yesterday for my demo, are going to become more common, right? Because now we're getting to the point where you can actually produce those displays in large quantities and mass produce them. So that's going to be the second thing. I think the third thing, which probably we have not seen as much, is actually the capability of some very unique kinds of LLMs and domain-specific language models running on these glasses. Because, like the product that we announced yesterday, the AR1+, that can do a billion-parameter model. But what it can also do now is you can envision a domain-specific model, a model that's trained only on medical data or only on education data, only on tourism and that kind of data. It will actually do an amazing job. It will not hallucinate. And at the same time, it will be very, very small and compact. So I really think that part starts to pick up a lot more. And then finally, of course, we get to the agents that are going to be running on these devices as well. And then immersiveness really, really starts to pick up. And then maybe on the MR side, we'll see the NPCs becoming really your partners that you can talk to, but at the same time, seeing that MR devices get much more disaggregated, which really breathes new life into them, and people really start to use them much more and for longer periods of time.
[00:15:35.472] Kent Bye: I think one of the things that happened during the Google Glass era was that there was a bit of a backlash, in terms of like the "glasshole" effect of people not wanting to be on camera. And I think as we start to have cameras that are on these smart glasses, I suspect that we may face at some point some similar types of concerns around bystander privacy, or, you know, there's indicator lights on them. But I guess in terms of the privacy of these, do you foresee being able to use AI to do like inferences on the data rather than storing the stream of the data? And just some of the privacy architecture that you have to do at the chip level at Qualcomm in order to assuage some of those privacy concerns.
[00:16:15.414] Ziad Asghar: That's a great point. So basically what I showed yesterday is a path to do that, right? As much as we can, do more inferencing on the device, which means that that data and information never goes into the cloud. It stays only on the device. It stays local. So if you're at home, if you're with your family, all that information can stay really local as you're processing it just on the device. That's one. The second thing: you mentioned the first Google Glass. Now, there was a time, what, 10, 15 years ago, when even the phone cameras were not that capable. So I think one of the big changes that has happened from that time is that all of us have these amazing phone cameras in our pockets now that actually have typically three different sensors on them. There is an ultra-wide, there is a tele, and then there is a wide. So these cameras on the phones are way more capable than what you will have initially even on the smart glasses. So I think people have gotten used to that. People have gotten used to people blogging and doing video blogs at all points in time, and to people streaming in all the public spaces. So I feel that some of that concern has gone down because, honestly, a camera is on every person's body today already. This is, of course, something that can be more subtle, but I think the partners are doing a good job of having indicator lights if somebody's recording something. And I think these things will be there, such that they want people to adopt these technologies, which means they will offer them the ways to make sure that they are not being viewed as devices that kind of impinge on people's private lives.
[00:17:39.414] Kent Bye: And I think if we look at the operating systems for a lot of these different devices, a lot of them have some fork of Android or are directly using Android. And so now, with the collaboration that Qualcomm has with both Google and Samsung, wondering if you could comment on what the more tightly integrated cooperation is between the chipsets that Qualcomm is making and Android XR. What types of benefits can we expect as we move forward with this more tightly integrated stack of technologies?
[00:18:10.243] Ziad Asghar: I think the great part about this is that basically all of our customer base is extremely familiar with Android. It's this massive platform. But now bringing in XR gives it a very nice impetus, because people are familiar with the software, they know all the aspects, and we've brought that into XR. So if you think from the perspective of MR devices with the Moohan-like device, if you've seen some of the demos and all, it integrates the Google platform and Google assets like YouTube and Gemini extremely well, right? So in a way, that's the benefit that our partners like Samsung and us can accrue with the inclusion of Android XR. As you go into the smart glasses side, there is that added benefit that now you could take a very light device like a smart glass, which has limited battery life and all, but now you could offload some of the work onto a smartphone, which means you could do some degree of hybrid or distributed AI on the phone, and even some rendering on the phone. And that allows that device to last even longer. That allows that experience and the KPIs to be even better. And I think that is going to be a great multiplier for it. And like I mentioned, the smartphone audience already sees that as a great opportunity. And that's why they want to launch devices like smart glasses and all. So I think that is a very synergistic thing. Of course, we work on mobile semiconductor solutions also, in addition to XR semiconductor solutions. So we are in a very interesting vantage point of being able to create solutions which actually leverage the benefit and the strength of both sides, to be able to make this, you know, a situation where one plus one is equal to three. And those are some of the things that we're working through, especially as you hook in more devices than one. So we are in smartwatches, we are in earbuds, we are in smartphones, we are in smart glasses. I call this the constellation of devices around the person. But if you can actually do this sort of hybrid AI processing across all of these devices, then independent of whether there's a phone present or whether the cloud is there or not, you can still offer a pretty nice experience to the consumer by leveraging the assets that exist in this constellation.
[00:20:06.804] Kent Bye: And it seems like that latency is a big question as to where it is being rendered and how soon you need it. Can you talk about some of the trade-offs between the latency and power consumption and other things that are weighed, in terms of, you know, the traffic cop deciding, if you're doing split rendering or distributed processing, where it should get processed?
[00:20:24.611] Ziad Asghar: Yeah, I think it's almost going to be a reward function of sorts: what is the power and the performance impact of doing a use case, first of all, on a given device within this constellation versus doing it in the cloud. So, for example, the very, very large models, like some of the recommendation models and even models that are in excess of, say, 10 billion parameters and all, they'll most likely continue to be in the cloud, and that makes great sense. But what we are hoping is that with some of the on-device AI processing, for those queries that are very latency sensitive, we can actually leverage the AI processing on the device to be able to address that concern. I think people wait for, say, three, maybe a little bit more, seconds, but if it goes beyond that, they start to get turned off from an experience. We want to make sure that the experience is really, really good across these devices. So we want to use the strength of the cloud when we need it, and we want to use the strength of the on-device AI processing, but also the on-device AI processing across all of these products. So we have put AI capabilities in pretty much all of these product lines, whether it is our smartwatch or our smartphone or our earbud or smart glass, and then we can figure out exactly the best device to do the processing based on the query.
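As a rough illustration of the kind of "reward function" Ziad describes, here is a minimal sketch of routing a query across a constellation of devices based on estimated latency, energy cost, and model size. All device names, capacities, numbers, and weights below are hypothetical placeholders for the sake of the example, not Qualcomm's actual scheduler or APIs.

```python
# Illustrative sketch only: pick where to run an AI query (glasses, phone, or
# cloud) by scoring each feasible target on latency and on-body energy cost.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    max_params_b: float   # largest model size (billions of parameters) it can serve
    est_latency_s: float  # rough end-to-end latency estimate for this query
    est_energy_mj: float  # rough on-body energy cost (0 when offloaded to the cloud)
    available: bool       # e.g. phone present, network reachable

def route(query_params_b: float, latency_sensitive: bool, targets: list[Target]) -> Target:
    """Return the target with the lowest cost for this query."""
    def cost(t: Target) -> float:
        # Latency-sensitive queries weight latency heavily; otherwise energy dominates.
        w_lat = 3.0 if latency_sensitive else 1.0
        return w_lat * t.est_latency_s + 0.001 * t.est_energy_mj

    feasible = [t for t in targets if t.available and t.max_params_b >= query_params_b]
    if not feasible:
        raise RuntimeError("no device in the constellation can serve this query")
    return min(feasible, key=cost)

# Example constellation: a ~1B-parameter, latency-sensitive query stays on the
# glasses, while a 10B+ model falls through to the cloud.
constellation = [
    Target("glasses", max_params_b=1.0,    est_latency_s=0.4, est_energy_mj=80, available=True),
    Target("phone",   max_params_b=7.0,    est_latency_s=0.9, est_energy_mj=50, available=True),
    Target("cloud",   max_params_b=1000.0, est_latency_s=2.5, est_energy_mj=0,  available=True),
]
print(route(1.0, latency_sensitive=True, targets=constellation).name)    # -> glasses
print(route(30.0, latency_sensitive=False, targets=constellation).name)  # -> cloud
```

The design choice mirrored here is the one described in the answer above: small, latency-sensitive queries favor on-device processing, while very large models remain in the cloud, with availability of the phone or network deciding what is feasible at any moment.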
[00:21:34.963] Kent Bye: And so we've seen a lot of consumer-facing companies produce different XR devices, but there's also here at AWE a large range of enterprise devices that a lot of folks within the public may not have much access to. And so curious to hear, from your perspective, how you see the progress from what is happening in the enterprise space and how that is feeding into the consumer market.
[00:21:56.874] Ziad Asghar: Yeah, I think enterprise use cases are really very interesting, and the potential is actually very big, because typically enterprise is able to hit a very different price point than what basically consumer needs to do. So from a consumer perspective, I think with our partners, like the Meta Ray-Bans, we're doing a great job over there, but we have multiple partners on the enterprise side too, and applications like, for example, doing remote worker assistance, or helping with law enforcement-like applications. We actually have a partner that's looking at using the smart glass to be able to take the license plate information with the camera and then being able to send that over to the cloud and then figure out if there are violations for that license plate, or the car is perhaps stolen or something like that, and bring that information back. There are partners in oil and gas who want to be able to go and say, hey, how do I fix this thing? And we can actually overlay, you know, hey, these are the first two screws that you should take off, this is the first panel that you take off. All of that can be built into this thing. So an amazing amount of interest on the enterprise side, even on things like digital twins and some of the other vectors as well. And we're working with many different partners to make those use cases happen also.
[00:23:07.481] Kent Bye: And walking the floor at Augmented World Expo, at the Qualcomm booth, you actually have featured many different partners that are showing their specific headsets from around the world. And so just curious if you have any comments on the different things that you wanted to highlight in your booth this year.
[00:23:22.990] Ziad Asghar: I think from our booth perspective, it's great. You'll be able to see the Samsung Moohan device that's there. We have a very good partner on the education side, Prisms, that actually is showing full lesson plans from grade six to grade 10 in science and technology-like areas. We have a very good partner that's showing an enterprise application for interaction, where you basically have, I believe, an aircraft engine that people can actually interact with across many different areas. We have, of course, our on-device AI demo. So we have multiple partners. We've tried to cover kind of all the verticals that I talked about in my talk. So we have somebody from education, we have somebody from sports, with our partnership with Asido and HPS, where we announced the second cohort for our XR Sports Alliance. We have that part covered. We, of course, have health partners. And that's the part that I addressed in my talk also, where we're working with the VA to be able to leverage MR technologies for mitigating some of the pain and other aspects that people have. So we're working with those guys. So, yeah, it would be great. Please walk by. I think we have a very good booth this time, with many of the different form factors and many of the different applications being demoed over there as well.
[00:24:31.641] Kent Bye: I wanted to ask around Project Moohan and this collaboration between Qualcomm, Samsung, and Google. I've been covering XR now for 11 years, so I've seen the Gear VR from Samsung that came online. And I think with the Gear VR, you had, like, someone using their phone, but they're putting it into this VR device that completely drains their phone. So I feel like the battery management within a device that's also your primary device that you need was a problem. It seemed like a much better form factor with the Go and the Quest, and that is kind of where the industry went, so more of a self-contained device. And then with Google, you know, they came in and they started at the very low end with Google Cardboard, and then they had their Daydream. And then, you know, for one reason or another, they decided to take all their AR technologies and put them into mapping and other ways that it kind of lived on, but they didn't have any dedicated HMD. And so as Project Moohan's coming on board now, Qualcomm has been in a position to really understand what's happening in the ecosystem. So just curious to hear how you tell the story of both Samsung and Google, who had been entrants in this space, and there's, like, you know, questions that I have: okay, are they really committed? And what are the signals that either Qualcomm is seeing or all the partners are seeing that make them feel even more committed now, that now is the right time to re-enter into this space?
[00:25:45.029] Ziad Asghar: Yeah, I think the inclusion of our partner Samsung over there, in addition to Google, should give people a lot of confidence that, hey, this is something that we are committed to, because this is a dedicated device, like you pointed out. This is not a device that's basically reusing a smartphone to be able to do some of these experiences. Number two, the inclusion of AI. I think that really changes the experience from the days of Daydream and from the earlier times, which I think were marred with not having enough content and, again, not having these unique use cases, and the inclusion of Gemini to be able to interact with the device. And I think that's probably the second big factor. But I think the other aspect also is, as we look at the technology, how it's progressed from that time. We have these solutions like the XR2+ Gen 2. These are amazingly capable, fully integrated products that are actually able to do 4K resolution displays. They can actually do multiple concurrent neural models that are running on the NPU for hand tracking and eye tracking, and to be able to mitigate a lot of those earlier concerns people had with, say, motion to photon and, you know, the brain fatigue that people used to have with devices. All of those are gone. The experience is pristine on these devices, and I think that's really the part that gives me a lot of confidence. And actually, you can also see that we're actually expanding beyond that, right? So with Project Aura with XREAL, it's actually going to even different form factors like optical see-through in addition to video see-through. So I really think there is great commitment. We're working very closely with our partners to basically bring these devices to the customers. And I think once the customers use them, they'll really appreciate and enjoy the experience that they're able to get on them. And that's what's going to get this going.
[00:27:19.063] Kent Bye: And finally, what do you see as the ultimate potential for XR and all these spatial computing devices and AI in the mix and what they might be able to enable?
[00:27:27.961] Ziad Asghar: You know, really, the holy grail of this is basically having a very, very light device, a device that can do pretty much everything that the highest-end VR device can do. And I think that time is going to come. It's a little bit further out. We are driving the technology at every level, from the transistor level to the IP level to the level of use cases and everything, to be able to bring that future to fruition. But I think that's where it gets to be a single device where you could, for example, make the glasses opaque, and you have a full VR experience, and the glasses become light, and you have a full OST and very light experience. But I think, again, it has to be seamless. The experience has to be pristine. It cannot be something that works sometimes and doesn't work some other times. But I do see that there are multiple companies, very capable companies, bringing in the power of AI, bringing in the power of the perception stack, working with the work that we have done, to be able to make this future come to reality.

Kent Bye: Anything else left unsaid you'd like to say to the immersive community?

Ziad Asghar: No, I think please keep at it, making these amazing new devices, new experiences, and use cases that we might not have even thought about when we're launching these products. So please keep at it.
[00:28:35.016] Kent Bye: Awesome, Ziad. Well, thanks so much for joining me here on the podcast to give a little bit of a behind-the-scenes look at what Qualcomm is doing right now. And yeah, it's really at the center of this whole ecosystem. And so it's a real pleasure to get to hear some of your thoughts of where things are at now and where they might be heading here in the future. So thanks so much.
[00:28:50.769] Ziad Asghar: Thank you for taking the time. I appreciate it.
[00:28:53.271] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.

