On November 9th at the beginning of the Augmented World Expo (AWE), Qualcomm announced Snapdragon Spaces, a suite of AR tools striving to cultivate an “open, cross-device horizontal platform and ecosystem.” The Snapdragon Spaces tools include “environmental and user understanding capabilities that give developers the tools to create headworn AR experiences that can sense and intelligently interact with the user and adapt to their physical indoor spaces.” This specifically includes spatial mapping & meshing, local anchors & persistence, positional tracking, plane detection, image recognition & tracking, object recognition & tracking, occlusion, & scene understanding.
What’s especially interesting to me is that all of this is “based on the Khronos® OpenXR™ specification to enable application portability and is the first headworn AR platform optimized for AR Glasses tethered to smartphones with an OpenXR conformant runtime.” The Snapdragon Spaces platform will enable Qualcomm to start leveraging the additional computational resources of mobile phones in order to enable distributed compute and rendering capabilities that allow AR to become like an extended spatial display for existing mobile phone apps.
It’s also starting to build out the more software-driven potentials for innovation that come from specific application developers, who will be empowered to write OpenXR extensions and modules that not only benefit their specific application, but potentially the broader XR ecosystem. It’s really exciting to see Qualcomm go down this path of cultivating an open ecosystem, and it makes a lot of sense why they wanted to become a big sponsor of AWE 2021, with an opening keynote slot where Hugo Swart announced Snapdragon Spaces (registration required), as well as a couple of sessions by Steve Lukas: one diving into more specific details of Snapdragon Spaces in the Ramp to the Future of AR session (registration required), and another with more generalized tips on what types of AR applications they’re looking for in order to grow the AR ecosystem, Designing Your Mobile App for Qualcomm’s Tools (registration required).
They also announced an early access program for XR developers called the Pathfinder Program, a new program for Snapdragon Spaces “designed to give qualifying developers early access to platform technology, project funding, co-marketing and promotion, and hardware development kits they need to succeed.” General availability for Snapdragon Spaces won’t be until the Spring of 2022.
Going to AWE 2021 made really clear to me the impact that Qualcomm has had on the cultivation of the standalone VR and AR HMD market. So many of the latest standalone devices use either the XR1, including Snap Spectacles 4, Ray-Ban Stories, Lenovo ThinkReality A3, and Vuzix M4000 & M400, or the XR2, including Quest 2, Vive Focus 3, Pico Neo 3, iQIYI QIYU 3, Magic Leap 2, & HoloLens 2.
In fact, since 2016, there have been over 50 devices that have launched on either the Snapdragon 820 (announced September 1, 2016 at IFA), Snapdragon 835 (announced January 3, 2017 at CES), Snapdragon 845 & VR Dev Kit Reference Design (announced February 21, 2018 at MWC and shown at GDC in March 2018 + my previous Voices of VR interview with Qualcomm at GDC 2018 after seeing that reference design), and then their XR-specific XR1 chip announced at AWE May 29, 2018 and their XR2 chip announced December 5, 2019.
I had a chance to catch up with Qualcomm’s VP & GM of XR Hugo Swart during AWE on November 11th, where I was able to get more context for their new Snapdragon Spaces platform and open ecosystem they’re cultivating, but also to recap the evolution of standalone VR and AR devices since 2016 when the Snapdragon 820 was announced as being the first chip capable of handling the needs of standalone XR devices.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So last week was the Augmented World Expo that happened on November 9th, 10th, and 11th, and it was pretty much the first professional XR conference that happened in nearly two years now. And at that, on the 10th and 11th, they had the expo. And on the 10th, I spent all of my time trying to go and try as many of the different demos as I possibly could. It's been nearly two years since I've had any in real life gatherings within the XR industry. And there's been a lot of movement and devices that I just haven't had a chance to try out yet, just because it's much more difficult to try to get your hands on some of these different experiences when there's not like a regular cadence of gatherings within the XR industry. So there's a lot to catch up on. And the thing that I noticed was that at the heart of the most compelling of these different devices that I was seeing were the Qualcomm chips, either the XR1 or XR2. And so I actually happened to run into Hugo Swart, who is both the vice president and general manager of the XR program there at Qualcomm. He's really in charge of having the vision to see this new emerging market back in 2014 and 15, and started to create some customized chips that could be used in these standalone AR VR headsets. And he was like, hey, we should catch up and do another interview. And I was like, OK, let's set it up. And then we set it up for the next day, which I'm really glad about, because as I look back on the evolution of the XR industry, Qualcomm has played such a huge part. But as a journalist, it's sometimes difficult to cover what they're doing because they'll make announcements and then it'll be like a year later when you actually see what's happening with some of those announcements that they were making. 
And so it actually had me go back and look at the evolution of the different things that they announced. Back on September 1st, 2016, they unveiled the first chip, the 820 of their Snapdragon series, that was going to be good enough to start to be used in some of these standalone devices, and it ended up being in the Oculus Go a few years later. And then January 3rd, 2017, they announced the 835 at CES. That was the chip that ended up being in the Quest a few years later. They announced their VR developer kit and their reference designs on March 21st, 2018 at GDC. They created this whole reference design that they were showing around to different folks, and I had a chance to actually see the reference design and did an interview with them at GDC on March 30th, 2018. And then February 21st, 2018 was when they announced the 845. Now, the 845 was a chip that I don't think really actually ended up in many of the different XR devices, because at Augmented World Expo on May 29th, 2018, just a few months later, they announced the XR1, which was going to be a dedicated chip. But because the Go and the Quest ended up going with the Snapdragon series 820 and 835, the XR1 was kind of skipped over and they skipped straight to the XR2, which wasn't announced until December 5th, 2019. So that was like over a year after the XR1 was announced. And so sometimes it's just kind of difficult to know how things play out until I go to something like the Augmented World Expo 2021 and see pretty much all of the devices that are there are either using the XR1 or the XR2. So that's just a little context of me just catching up for myself, trying to be like, oh yeah, by the way, Qualcomm's been at the center of the heart of all of this standalone AR and VR for many, many years. And so I was glad to be able to catch up with Hugo Swart to be able to see, okay, where are they going in the future here? 
So on November 9th, 2021, the first day of Augmented World Expo, Qualcomm announced Snapdragon Spaces. Now, I had originally thought that this was just a chip, just because of some of the interactions I had, but it's actually a software layer where you're going to be able to do this distributed compute between not only whatever's in the headset, but, for augmented reality glasses in particular, I think this is where it's really going to start to come in, where there are going to be ways of using OpenXR extensions to be able to develop specific applications that are also using the compute power on the phone that's also rendering, or doing specific things of how to distribute computation between the phone and the headset. So this was the new thing that they were announcing: both Snapdragon Spaces, as well as their Pathfinder program for individual developers to be able to be involved. So not only are the OEMs involved in helping design the future of Qualcomm, but now it's also possible for the individual XR developers to have a little bit more of a say. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Hugo happened on Thursday, November 11th, 2021. So with that, let's go ahead and dive right in.
[00:04:27.850] Hugo Swart: Hi, I'm Hugo Swart. I lead our XR business at Qualcomm.
[00:04:32.733] Kent Bye: Great. And so maybe you could tell me a bit about your background and your journey into what you're doing now with XR.
[00:04:38.716] Hugo Swart: Sure. Yeah, I joined Qualcomm actually 18 years ago, but for the last 10 or so, I've been always incubating new businesses, going into robotics, drones, smart displays. And around 2014, 15, I saw the opportunity to enable VR headsets that did not depend on a phone, did not depend on a PC. And then back then we had our first chip, the Snapdragon 820, which we considered as having the minimum bar for immersion. And what we decided to do is, hey, why don't we create a reference design that does VR with a mobile platform, a mobile chip, but in a form factor that optimizes the experience without requiring any other product. So that's kind of, I think, when we saw the birth of standalone VR headsets. We announced our first reference design at IFA 2016. And from there, you know, we started to see products, you know, Oculus Go, even the Google Daydream standalone device. And we enabled many devices in China. Actually, from 2016 to today, we already launched more than 50 devices between VR and AR. And I think a good reason for that is our investments in chips and then in reference designs that enable people, you know, our customers, to quickly go to market.
[00:06:18.260] Kent Bye: So I know that the XR2 is in the Quest 2, and there's also been the Go and Quest 1, and so maybe you could talk about the evolution of what the Qualcomm chips were in the Oculus products, now Meta products, of their VR headsets.
[00:06:31.571] Hugo Swart: Sure. So the first two generations were still very much mobile chips that we optimized the software and made some modifications for VR and AR. But then in 2019, we saw that the market was growing, that there was the potential for much bigger volumes, and we created dedicated chips for VR and AR. Actually, we created two tiers, the XR1, which we call the high quality tier, and then the XR2, which is the premium quality. But then going back to, you know, how did we work with Facebook or Meta? The first Oculus Go was with the Snapdragon 820. The Quest was with Snapdragon 835. And then the Quest 2 is when we worked really closely actually with Facebook Meta to define requirements for that chip. You'll see features that you don't see in a smartphone chip. So doing this optimization is one of the reasons why we saw such a big performance uplift between the first Quest and Quest 2. So graphics, we doubled the GPU from the 835 to the XR2, on CPU as well. On AI, 11 times more. And of course, AI is super important for many of the perception algorithms that you need, all the tracking, hand tracking, position tracking, you know, environment and so forth. So a lot of AI is needed. We had a computer vision block, you know, for XR in this chip. We support seven concurrent cameras, if you want to make more advanced devices. So it was, I think, a big leap in performance that the XR2 brought. And now, of course, Meta, Facebook using it. And still this year, we had many launches with it. The Vive Focus 3, iQiyi 3, not sure how many are familiar with iQiyi, but iQiyi is a big premium video provider in China, part of Baidu. Pico with the Neo 3, now Pico of course part of ByteDance, and many other devices that probably many of you don't even know. But by using our chip, our software, our reference design, we can scale and enable these different form factors and innovation in the industry.
[00:08:59.435] Kent Bye: I guess I didn't realize that the XR1 and XR2 actually released at the same time. Is that right? Because I've mostly heard about the XR2, but here at the AWE, there's a Lenovo ThinkReality A3 that was using the XR1. I saw someone was mentioning that in the comments. Did those come out at the same time? And what's the difference between the two? And do you see more of a use case for some of the stuff with AR glasses or smart glasses with the XR1? Because it's obviously still out there and used, but I mostly hear from the VR world about the XR2, and don't hear as much about the XR1.
[00:09:31.273] Hugo Swart: Yeah, it is true that XR2 is more tailored for VR because it has a much higher performance bar compared to XR1. You can think almost like twice in every aspect comparing the XR1 and XR2. So you will see XR1 more in AR type headsets, even Snap. You know, the Spectacles 4 uses XR1. The Lenovo A3 uses XR1. We also work even with the Enterprise Edition of the Google Glass. You know, the last one they had was XR1 based. But it's also used in VR. The Vive Flow uses XR1. And it's really, it's just a tiering, right? It's a tiering. But then XR1, because of where we are today in AR, where the resolution and the amount of compute for immersiveness is not as demanding as VR, you'll see XR1 more in AR. I mean, Vuzix, another example, I mean, across their product line using XR1. So, yeah, it's, you know, both are gaining a lot of momentum, but XR2 gets the limelight mostly, I think, because of the Quest 2 and, you know, the higher performance that it has.
[00:10:49.017] Kent Bye: Yeah, so as I've been covering the XR industry, I've been mostly focusing on the creators. But when I come to an event like here at AWE, I think it's clear how much Qualcomm's a part of all this from the technology infrastructure. But I don't necessarily track or cover the day-to-day announcements or what Qualcomm is doing. And so just to give me a sense, with XR1 and XR2, should we be expecting the XR3 sometime? I mean, I know you obviously don't want to make an announcement. You also have the Snapdragon series that gets updated. And so I don't have a sense of the update cadence for when we should start to expect the next generation of immersive devices to have some sort of increase. I don't know if it would be called the XR3 or if there'd be new naming, but you know, there's also the Snapdragon Spaces, which, you know, seems to be an OpenXR integration. So maybe you could just give us a sense of your cadence for how you're iterating with these different chips and getting them into the hands of these different manufacturers.
[00:11:44.920] Hugo Swart: Sure. Well, you have to think about XR1 and XR2 as tiers, where we're going to have generations. You know, you're going to have a second generation of these, and we're working on the roadmap, but it's a little early to talk about when new parts will come. But it's really about whether the requirements of the devices that are coming out can still be met by XR1 and XR2 as the industry overall matures. From a display perspective, from sensors and content, we're trying to match when to release new chips with when the rest of the value chain or supply chain and ecosystem are ready. So we're of course working on new chips, but I think what's worth exploring is that we are introducing new capabilities still using the same chip. One example is video passthrough or mixed reality. That's something in hardware we had already included in XR2. But then we made software optimizations, lowering latency for what we call photon-to-photon latency, the time from when the light reaches the camera on the headset to when the display is updated with the real-world image. So we're able to achieve less than 10 milliseconds. And, you know, just for the audience to have a reference, the way we tested if this is, you know, good enough, is we put two people playing ping pong with VR headsets. And they were able to. So, we reduced latency almost three times. It's just the whole camera pipeline, we changed, you know, how you process the pixels and do things, you know, at the same time and just made it very efficient for this application. So then you see folks like Lynx showcasing that feature, so we work closely with them on that. But I can expect that more and more customers will be launching this functionality. And I think that's where you're going to see the improvement. But again, still using XR2. It's just that we haven't exercised everything that it can offer. But let me touch now on Snapdragon Spaces. 
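As a rough editorial sketch of the photon-to-photon budget Hugo describes, here's how the stages of a passthrough pipeline might add up against a 10 millisecond target. All of the stage names and timings below are invented for illustration; they aren't Qualcomm's actual pipeline measurements.

```python
# Toy photon-to-photon latency budget for video passthrough.
# Every stage timing is an illustrative guess, not a measured value.

PIPELINE_MS = {
    "camera_exposure_readout": 3.0,  # light hits the sensor, frame read out
    "isp_processing": 1.5,           # debayer, correction, format conversion
    "warp_and_composite": 2.0,       # reproject camera frame into eye view
    "display_scanout": 2.5,          # panel refresh latency
}

def photon_to_photon_ms(stages):
    """Total latency from light entering the camera to the display updating."""
    return sum(stages.values())

total = photon_to_photon_ms(PIPELINE_MS)
print(f"photon-to-photon: {total:.1f} ms "
      f"({'within' if total < 10 else 'over'} the 10 ms target)")
```

The point of the exercise is that the budget is the sum of every stage, so an optimization like the one Hugo mentions has to overlap or shorten stages across the whole camera pipeline, not just one step.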
I think with Snapdragon Spaces, it's really about offering the Qualcomm technology and innovation to developers. I mean, if you look into our history and everything I said through this conversation, it's about technology products to hardware manufacturers, to OEMs. And we feel that by giving it to just OEMs, it takes longer for us to get feedback on some of the features that we need to develop. Also, in particular in China, if each OEM has to build their own SDK, their own platform, it becomes very taxing for them, in the enterprise segment as well. So we see the opportunity to accelerate, you know, in particular AR, head-worn AR, with this platform. So Snapdragon Spaces is a developer platform, and the idea is that it's a cross-device platform. So, you know, developers will have access to our tools, to SDKs, following OpenXR, so we're Khronos runtime compliant, and then have open channels, open distribution. This is a contribution we wanted to bring to the industry to really accelerate AR and facilitate developers by creating once and then making it available across devices. So we're super excited about this initiative.
[00:15:23.377] Kent Bye: Yeah, so I know that at the latest Connect 2021, there was a session where they were talking about implementing Application SpaceWarp, and it was implemented as an OpenXR implementation, so you could go and actually copy some of that reference design and use it in other applications above and beyond whatever Meta is doing. And so it's this idea that with OpenXR, there's a common set of APIs that developers are writing their code for, so that it can be almost platform agnostic, so that it could be on many different platforms. I understand how that works with writing software, but I don't understand how that works with individual developers being able to work with Qualcomm to get something designed within the chipset itself. Is the idea that they would write an OpenXR implementation and then it would be further optimized and tuned for something that may work in a PC but maybe needs to have specific thermal considerations, something that the larger community is wanting above and beyond just what the major OEMs are doing? That it's a way for Qualcomm to potentially have more innovation by having more developers, above and beyond, like we're here at AWE, the lots of meetings where you're planning the future of the immersive industry? So is that the idea, that you would have more opportunities to have innovation from individual developers?
[00:16:34.288] Hugo Swart: That's certainly one of the key objectives. But of course, we don't want to create barriers for developers. It's the opposite. It's to remove the friction. So for OpenXR APIs, I mean, we want to be, of course, transparent to what's below the API. I think that's where I think we have the opportunity to shine, which is to provide the best performance on our chip using a common standard API. And then one difference that we are delivering or targeting is that we're focusing on AR, more on AR than VR, and we're focusing on what we call AR viewers, where you have, you know, a glass that has some computation, but then is tethered, either cabled or wireless, to a host. And the host, you know, of course, with Qualcomm's background, we're targeting smartphones. So the idea is that the runtime and applications are actually running on the phone. And then where you're doing the perception algorithms, like hand tracking, or position tracking, or 3D reconstruction, semantic understanding, occlusion, and so forth, it can be distributed between the phone and the glass. But of course, I mean, this is where it's about removing friction from the developers, because we don't want developers to care where these workloads are being executed, right? Is it on the phone, on the glass? Well, as a developer, you just want the API. I want occlusion. I want 3D reconstruction. I want the meshing and so forth. So that was a core principle of hiding the complexities from the developers and then using OpenXR. Of course, if there are APIs that are not yet in OpenXR, we can have a few additional ones. But also, we're working closely with Unity and Epic for this. At the keynote here at AWE, Epic was with us on stage, and we're making sure that the Unreal Engine and the Unity tools are all supporting and working together with Snapdragon Spaces. 
So Snapdragon Spaces is really just the perception feature set that plugs into the current tools that developers are already used to.
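The abstraction Hugo is describing, where an app requests a perception capability and the runtime decides whether it executes on the glasses or the tethered phone, might be sketched like this. All of the class and method names here are hypothetical toy stand-ins, not the actual Snapdragon Spaces SDK or OpenXR API.

```python
# Toy sketch of the workload-placement abstraction Hugo describes:
# the app asks for a perception feature through one API, and a runtime
# decides whether it runs on the glasses or the tethered phone.
# These names are illustrative, not a real SDK.

class PerceptionRuntime:
    def __init__(self):
        # Which device handles each workload; the app never sees this.
        self._placement = {
            "hand_tracking": "glasses",   # latency-critical, near the cameras
            "plane_detection": "phone",   # heavier compute, tolerant of lag
            "meshing": "phone",
        }

    def request(self, feature):
        """The developer asks for a capability; placement stays hidden."""
        device = self._placement[feature]
        return f"{feature} running on {device}"

runtime = PerceptionRuntime()
print(runtime.request("hand_tracking"))
print(runtime.request("meshing"))
```

The design choice is the one Hugo names: the developer codes against the capability ("I want occlusion, I want meshing"), and the runtime is free to move the workload between devices without the application changing.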
[00:18:50.919] Kent Bye: Okay, yeah, I had a little bit of confusion here at AWE, because I went to the Qualcomm booth, I did a demo of the Lenovo ThinkReality A3, and I said, is this using XR2? And I was told, no, this is using Snapdragon Spaces. So then I just assumed that it was a chip, and then someone corrected me on Twitter saying, no, actually it's using the XR1, and this is an OpenXR extension. So maybe you could explain to me exactly what's happening with the Lenovo ThinkReality A3 and how it's using one of the first implementations of this Snapdragon Spaces, and if it's something that's on the phone, and just how things are working, and what they did that is new and different that they couldn't do if they weren't using that.
[00:19:29.228] Hugo Swart: No, actually, I'm glad you asked this question. It's just, I think this is new to the Qualcomm XR offering to be putting software out there to developers. So it's just that these are two different layers, right? XR1, XR2, okay, that's the chip that is in the headset. But then you come with the software layer where you make applications use the chip, and that's Snapdragon Spaces. So the demo that you saw, it was the Lenovo ThinkReality A3 that has an XR1 on it. But if you see, it's plugged into a phone. And so, Snapdragon Spaces is this framework to enable that hardware configuration to work, so that you build the applications. Whether, you know, the headset has XR1, XR2, or future generations, or the smartphone, you know, has a given chipset, or the latest release, Snapdragon Spaces will abstract that. So that's how we should think about it. Here at the show, we also announced a developer initiative called Pathfinder, where developers can submit and apply for the program on qualcomm.com. And as more of these applications come out using Snapdragon Spaces, I think the ecosystem, the industry, they will have a better understanding of what Snapdragon Spaces provides. So, yeah, last night I had dinner with Figman, with Felix and Paul, Resolution Games, Tripp. I mean, they're all, you know, very excited to start working on Snapdragon Spaces. If you see, as part of the announcement, we had Lenovo, of course, you know, the first glass that supports Snapdragon Spaces. But you had Motorola, Xiaomi, Oppo to start. So that's, you know, we're starting to see the OEMs getting interested in this platform. And then with that, hopefully attract more developers. And then, you know, you have the cycle where the flywheel of content with Snapdragon Spaces moves fast and enables, you know, growth in the headworn AR space.
[00:21:42.535] Kent Bye: I was watching an announcement from Pimax recently where they were talking about what was called the Pimax Reality, and they're going to be releasing it next year. But they had, in their architectural diagram, a Qualcomm XR2 as a part of their system, but they're also doing rendering on the computer that's being sent over this WiGig connection or Wi-Fi 6. And so they have this concept of split rendering, or being able to potentially offload certain processing tasks onto another device, whether it's tethered or whether it's, you know, in the case of AR, your phone. So is that generally the idea, that there may be a splitting of different tasks that are happening and maybe being shown on the headset, but there's going to be some tasks that are done by either the XR1 or XR2, and then other processing that's happening on the phone that's then seamlessly synthesized at maybe the application layer that is then showing different things? Is that kind of the general idea?
[00:22:36.601] Hugo Swart: Definitely, yes. You know, I think as we move forward, I see we're doing more and more split processing. I don't even want to call it just split rendering, but distributed processing. And then it's something you have on your head, be it in VR in the case of Pimax, or even the Quest 2, you know, which also has this functionality. So be it VR or AR, then a proximate host, right, in the case of Pimax, you know, a gaming PC, in the case of what we are showing with Lenovo, a smartphone, and then the edge of the cloud, right? So you can think about a dynamic, you know, in the future, dynamic distributed processing. So if I have something in the proximity that has high compute capability, I can connect with low latency. It has the framework to run the application. I can just have the proximate communication and then offload tasks to it. In the case of rich graphics, that's the rendering case. But there are other things related to AI as well, where not always you can do everything on the client. And of course, you know, once you do it on the client, on the headset, that's the lowest latency. But there are thermal limits, there are compute limits, there are storage limits. And then, you know, having the proximate compute unit, you know, I think is a very desirable thing, as long as you can meet the latency requirements. And you always have to look at, well, if I'm transmitting data from one device to another, that also consumes power. You know, transmission and reception of data also consume power. So you need to make sure that, you know, the power you're using to transmit and receive is less than the power to do it on the headset itself. So, I mean, that's where I think Qualcomm has the opportunity to really help drive this because of our pedigree, our knowledge on both processing and communication. And then it can be in a proximate environment, it can be 6E, Wi-Fi 6E by using 6 gigahertz. 
In the past, we even looked into using 60 gigahertz, but 5G as well, right? So I think that's a great example on how 5G can be used to connect directly, you know, a headset to the edge of the cloud.
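The tradeoff Hugo lays out, offload a workload only when the link still meets the latency budget and the radio costs less energy than computing locally, can be put into a toy decision rule. Every number and name below is invented for illustration; real systems would measure these quantities dynamically.

```python
# Toy model of the offload tradeoff Hugo describes: move a workload
# off the headset only if (a) the remote round trip still meets the
# latency budget and (b) transmitting and receiving costs less energy
# than computing on the headset. All figures are illustrative.

def should_offload(local_mj, remote_ms, link_rtt_ms, radio_mj, budget_ms):
    remote_total_ms = remote_ms + link_rtt_ms  # compute + link round trip
    meets_latency = remote_total_ms <= budget_ms
    saves_power = radio_mj < local_mj          # radio energy vs local compute
    return meets_latency and saves_power

# Rendering a rich scene: slow and power-hungry on the glasses.
print(should_offload(local_mj=120, remote_ms=8, link_rtt_ms=6,
                     radio_mj=30, budget_ms=20))   # True: offload

# Hand tracking: cheap locally and latency-critical.
print(should_offload(local_mj=8, remote_ms=2, link_rtt_ms=6,
                     radio_mj=12, budget_ms=10))   # False: keep on-device
```

In the dynamic distributed processing Hugo envisions, a runtime would make this kind of decision per workload and per available host (phone, PC, or cloud edge), rather than fixing the placement at design time.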
[00:25:05.923] Kent Bye: Great. And finally, what do you think the ultimate potential of both virtual and augmented reality and our drive towards our visions of the metaverse, what the potential of all that may be and what it might be able to enable?
[00:25:19.544] Hugo Swart: Well, I think our vision is similar, I think, to the rest of the industry, that we're going to see more and more of our lives be augmented with the digital overlays and digital aspects on it. But it's all about removing screens, right? We're always with the rectangular 2D screens, you know, going from TVs to PCs to smartphones. So I think the potential is, well, I don't need a screen anymore. The world is really, you know, your desktop. And, you know, we've been talking for a while already about holographic telepresence. That's my personal favorite, of being able to have maybe an interview like this, or meetings as if they were in-person meetings, but in a virtual environment. So I definitely see the potential of having new economies in these digital worlds. I think all the features or capabilities that folks in general refer to in the metaverse, as persistence, you having an avatar of yourself, photorealistic. And I think that's coming. And I don't think it's too far off. It's just it's going to get better year over year. And who knows when the full realization of this vision comes to fruition, but I'm sure it's going to happen, and Qualcomm is going to be there to enable those experiences. Awesome.
[00:26:47.107] Kent Bye: Well, Hugo, thank you so much for taking time out during AWE to sit down and tell me a bit about where the future is going with Qualcomm and all the different stuff that you're working on. So thanks so much for joining me today on the podcast.
[00:26:57.873] Hugo Swart: All right. Thank you. Thanks. I'm looking forward to seeing you more often.
[00:27:01.759] Kent Bye: So that was Hugo Swart. He's the vice president and general manager of the XR program at Qualcomm. So I have a number of different takeaways about this interview. First of all, like I said at the top of my intro, you know, Qualcomm is really at the heart and the center of so much of the XR industry, especially when you look at all the VR and AR devices, except for Tilt Five, which is a completely different paradigm. And I'll have an interview with Jeri Ellsworth really digging into the innovations that she and her team have been able to put forth with the Tilt Five, which was one of the highlights for me. Another highlight was seeing the Lynx R1, which was this mixed reality platform using kind of weird-shaped lenses, but also very tuned and optimized to be able to blend together the different layers of reality. And I think Lynx was in collaboration with Qualcomm to be able to really develop this as a technique that is going to be optimized. One of the things that Hugo said was that there are a lot of things that are built into the hardware of the chip, but sometimes it takes time for the different OEMs and the industry in general to develop all the different associated software to be able to really take full advantage of those things that are within the chip. So I was a little bit confused, because he was saying that XR1 and XR2 should be thought of as tiers, and I had just thought that these were coming out at different times and that they were a progression. But I think what I got from this conversation, after going back and looking, is that, well, actually, indeed, the XR1 was announced at Augmented World Expo on May 29th, 2018. And then on December 5th, 2019, over a year later, the XR2 was announced. However, the Quest 2 kind of leapfrogged the XR1. 
They just used the 835 in the first one, and the Go was using the 820, and so the XR1 was never really adopted within the VR industry very much. The first one that I can really think of was the Vive Flow. And even with the Vive Flow, in some ways, they were so embarrassed about saying that it was XR1 instead of XR2 that they weren't even telling people what the chip was, because there was this kind of perception around it not being as good as the XR2. It isn't as powerful, but it's also a lower tier that maybe is tuned to whatever the media consumption tasks were that the Vive Flow was really trying to optimize for. I think they probably ran into this thing where people see the XR1 and XR2, and the XR2 being better, which it is, but it's also more expensive. Most of the augmented reality headsets were using XR1 because it's perfectly fine for a lot of these AR use cases, and it's just smaller and more thermally efficient. You know, for whatever the tasks that need to be done, it may be contextually dependent on whatever that device is, especially as they start to move into this new paradigm where they're doing distributed computation, either on the phone through Snapdragon Spaces, or if they start to do edge compute or different cloud rendering things, where it's actually transmitting this higher-level processing. So rather than only focusing on the specifications of a single chip without taking into account the larger ecosystem dynamics that are happening, I think they're trying to make this pivot from thinking about these as generations and more as tiers. Even though, generationally, they did actually come out at different times, in terms of the tiers of how they're used, I think he's making this subtle shift into saying, depending on whatever the context is, you should just use whatever makes the most sense. 
And because the XR1 is of a lower power than the XR2, to have something like the Vive Flow be way more expensive than the Oculus Quest, on top of the Quest having the more powerful and better chipset in general, means it's basically an older version of the chip that isn't as powerful, but yet more expensive. But there may be use cases for that that are just different than what the Quest is able to do. And Vive isn't subsidizing their XR hardware the way that Meta is, and so that's also creating these larger skews within the market dynamics of all the different processing power and everything else like that. So it's yet to be seen how all those different things will play out. But I think that may be part of why I was a little bit confused in thinking about the XR1 and XR2 as generations rather than as tiers. We'll see if they come out with an XR3, or if they call it the XR1 version 2 or XR2 version 2. I think that would actually be very confusing. Even if they do come out with the XR3, or whatever they use as the naming scheme, they'll also have to take into larger consideration the other ways compute can happen: is it tethered to a phone for compute? Is it tethered to other processing power from edge compute or cloud rendering? Just trying to take into consideration those different trade-offs that may be happening. Like Hugo was saying, it's not just a matter of being able to add all these things in. You have to look at the cost, thermally and energetically, of sending and receiving all that data versus doing it on the chip itself. These are the types of things that they're starting to move into, but I think they're trying to tell a larger story, which is to think about these as a larger ecosystem rather than just focusing on individual specifications.
How things all get tied together is going to be way different than any type of existing model that we have from our personal computers, with 5G, with all the different connectivity, and with Wi-Fi 6E and WiGig. It'll be interesting to see what the Pimax Reality 12K headset starts to do with some of that distributed rendering that's happening within the standalone-PC hybrid that they're building on top of what is capable within the XR2 chip. So I had a chance to watch some of the other presentations by Steve Lukas at Qualcomm to really get a better sense of how you start to close this gap between the different human-computer interactions that are available within 2D computing and then start to add in spatial computing, thinking about the existing ecosystem of 2D mobile phones and how to start adding spatialized interaction. So to be able to put a hologram within a 6DoF space with your phone, which on its own only gives you 3DoF: are you able to use the affordances of the augmented reality space where it has all the proper tracking, but you're not using the phone's resources to do that tracking? You're using the computer vision on the glasses and the onboard chip, and then you're able to do even more complicated rendering, kind of adding different aspects of augmented reality onto an existing 2D global ecosystem. So that's probably one of the other big themes that I saw at the Augmented World Expo this year: a lot of different devices were tethered to a phone. The Lenovo ThinkReality A3 was one of the first reference devices that was using Snapdragon Spaces, but there were all these other demos that were also using a tethered phone connected to the augmented reality headworn device. And so they're starting to think about how to offload different aspects of compute onto the phone and blend all these things together.
So some of the different things that they were announcing with Snapdragon Spaces were these different perceptual aspects: everything from spatial mapping and meshing, local anchors and persistence, positional tracking, plane detection, image recognition and tracking, object recognition and tracking, occlusion, and scene understanding. Eventually there will probably be semantic understanding as well, which is something that Hugo mentioned but that I didn't see listed in the slides. As we move forward, there's all these different ways of doing perceptual understanding through the features that are listed here. So like I mentioned, there's a Pathfinder program if you are a developer that's interested in getting involved with some of this. It is kind of interesting to think about the integrations with OpenXR. I ended up doing a whole binge watch of the entire Connect 2021, where I just stayed up for 22 straight hours from the beginning to the end, consumed all of the sessions, watched everything, and did a big, long, epic Twitter thread, because I wanted to see what else was there that I may have missed. And one of the things that I probably would have missed was the Application SpaceWarp that was being implemented by Meta within the OpenXR specification.
So the way that they actually implemented it was through OpenXR, which was great to see: even these big, large companies like Meta, who say they're trying to move towards this world of being interoperable, are actually living into that by doing some of these implementations within OpenXR. That means that as they start to implement things like Application SpaceWarp, it creates this repository for other people within the industry to use those same techniques and start to create these standardized APIs, modules, and extensions, to take advantage of, from a software layer, how to be the most optimized and get the most out of these immersive technologies. So I really love to see that OpenXR is going to be a big part of that, and the fact that Snapdragon Spaces is a platform that's based upon these open standards. And Steve Lukas made the differentiation that this is not an open source program where all the code is available, but it is an open platform that's using these APIs through OpenXR to create an open ecosystem. Meaning that once somebody creates something, it's going to be distributed amongst the entire ecosystem, the entire community. Rather than just focusing on the OEMs who are developing and producing this hardware, you're going to be opening up to all these XR developers, both augmented and virtual reality, to be interfacing with Snapdragon Spaces and developing these different OpenXR extensions. That not only is going to be applicable for the systems that they're building there at Qualcomm, but also for the entire XR industry. So it's really fascinating to see where that starts to go, to be able to start to blend and blur all these different layers of reality into the future of spatial computing. So again, like I said, I really feel like Qualcomm is at the heart of so many of these different applications.
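To make that extension mechanism a little more concrete, here's a minimal Python sketch, not from the talk, just an illustration of the negotiation pattern OpenXR uses: an application asks the runtime which extensions it supports, then requests the ones it needs by name when creating an instance, and creation fails if any requested extension isn't present. The extension name strings (like XR_FB_space_warp, Meta's Application SpaceWarp extension) are real OpenXR identifiers; the SimulatedRuntime class is a hypothetical stand-in for an actual runtime.

```python
# Illustrative sketch of OpenXR-style extension negotiation.
# SimulatedRuntime is hypothetical; the extension name strings
# are real OpenXR extension identifiers.

class ExtensionNotPresent(Exception):
    """Mirrors OpenXR's XR_ERROR_EXTENSION_NOT_PRESENT result code."""

class SimulatedRuntime:
    def __init__(self, supported_extensions):
        self.supported = set(supported_extensions)

    def enumerate_instance_extension_properties(self):
        # Analogous to xrEnumerateInstanceExtensionProperties:
        # lets the app discover what this runtime can do.
        return sorted(self.supported)

    def create_instance(self, requested_extensions):
        # Analogous to xrCreateInstance: every requested extension
        # must be supported, or instance creation fails.
        missing = set(requested_extensions) - self.supported
        if missing:
            raise ExtensionNotPresent(sorted(missing))
        return {"enabled_extensions": list(requested_extensions)}

runtime = SimulatedRuntime([
    "XR_FB_space_warp",        # Meta's Application SpaceWarp
    "XR_MSFT_spatial_anchor",  # persistent spatial anchors
    "XR_EXT_hand_tracking",    # hand tracking
])

print(runtime.enumerate_instance_extension_properties())
instance = runtime.create_instance(["XR_FB_space_warp"])
print(instance["enabled_extensions"])
```

A real client would make these calls through the OpenXR loader, but the point survives the simplification: a vendor can ship a new capability as a named extension without breaking applications that never request it, which is what makes the ecosystem extensible.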
I should say that Apple is making their own silicon, and Meta has definitely also said that they have the intention to do that as well. And so we have this interim time where, when I look at something like Augmented World Expo, I see that the larger ecosystem is being developed and that you have the big major players that are out there. What's odd about something like Augmented World Expo is that it is a big industry event, but you don't have all of the top-tier companies there representing their hardware. You don't see a Microsoft or a Magic Leap or a Google or Meta, or even Snap. None of these big major companies that are going to be driving the future of augmented reality were there on the expo floor showing their hardware. Some of them were there having meetings, and I heard through the grapevine that the Magic Leap 2 was being shown, but no one could take any photos of it or anything, so I didn't hear about it until after the conference was over. But the point is that there wasn't a lot of public presence from these major tier-one augmented reality and virtual reality companies. I see Augmented World Expo as this opportunity for the startups that either get acquired by some of these big companies, or for companies like Qualcomm, who's trying to really foster an open ecosystem with these other companies that are out there: Lenovo, Oppo, Vuzix, and HTC. Each of these companies is using their technologies, and they're trying to foster this larger ecosystem. It's really interesting to see how the approach that they're taking may actually give them an advantage over someone like Apple, given that Apple, up to this point, hasn't made any public declarations as to whether or not they're going to be using anything like OpenXR.
Apple may try to create their own self-contained, do-it-yourself approach, versus the alternative, which is this open ecosystem that's being fostered by something like OpenXR. It's yet to be seen what Apple is going to do. But I do think there's a little bit of a tension as we move forward: if Meta does eventually start to produce their own silicon, then what's going to happen to this larger ecosystem? Are there going to be all these other companies that are filling out and able to sustain the future of these immersive technologies? With Project Cambria coming out, one of the things that Hugo said is that they're not making any announcements in terms of what their new chipset is. In the past, they've usually announced something, and then later you see what products it's going to be integrated with. But at events like Augmented World Expo, you had some of these OEMs that were in collaboration and partnership with Qualcomm developing some of these different techniques. In this case, it was the Lenovo ThinkReality A3 being the first reference implementation of these new software specifications with the hardware, so you could actually start to try it out at these events. And as we move into thinking about what Project Cambria is going to be, the next iteration of the virtual reality head-mounted device, it's sort of colloquially been called the Quest Pro, which implies that it's going to be an incremental improvement, but maybe some aspects aren't going to be a full generational update like a Quest 3 would be. So I think it's yet to be fully announced whether Project Cambria is going to end up being the Quest 3 or the Quest Pro, and what the underlying chip technology is going to be: whether it's the next iteration, a Qualcomm XR3, or something called the XR2 version 2, if there's a tiered versioning.
I imagine that they're going to keep with an XR3, despite what Hugo is saying in terms of thinking about it more in tiers rather than in generations. But whatever that ends up being, I guess it's still yet to be decided whether they have a whole new naming scheme or whatnot. I could tell by being at Augmented World Expo, seeing all these different OEMs meeting with them, and just seeing all the different devices: the Snap Spectacles 4, the Lenovo ThinkReality A3, and the Vive Flow are all using the XR1, and then the Pico Neo 3, the Vive Focus 3, and of course the Oculus Quest 2 are all using the XR2 chip. So definitely keep an eye on what happens with Qualcomm. I'm really glad I had a chance to run into Hugo and just kind of catch up with where things have been and where they're going, and to have an excuse to really look back at the history. Because like I said, it's kind of difficult to correlate what is happening with Qualcomm with what happens in the future, because sometimes they'll make announcements and then it'll be like a year later when you actually see what's happening, until I go to something like Augmented World Expo 2021 and see that pretty much all the devices that are there are using either the XR1 or the XR2. And if you are a developer interested in getting involved and helping shape the future of these immersive technologies, then definitely check out the Snapdragon Spaces Pathfinder program and how to fuse together these immersive technologies. Because at this point, they really don't know how to get from all these new affordances of spatial computing, from where we're at now, to a future where people are using them a lot more frequently.
We're in this kind of liminal, unknown stage as to what the next steps might be, but it looks like this is a problem that Qualcomm is trying to solve, and this is a program where they're trying to get more involved at the software layer to help shape where spatial computing is going to go. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.