At its October 4th press conference, Google announced the new Pixel 2 phone and a range of new ambient computing devices powered by AI-enabled conversational interfaces, including the Google Home Mini and Home Max speakers, the Google Clips camera, and wireless Pixel Buds. The Daydream View mobile VR HMD received a major upgrade with vastly improved comfort and weight distribution, reduced light leakage, better heat management, and hybrid aspheric Fresnel lenses with greater acuity, a larger sweet spot, and a field of view 10-15 degrees wider than the previous version’s. It’s actually a huge upgrade, but VR itself received only a few brief moments during the 2-hour keynote, where Google focused on explaining the AI-first design philosophy behind its latest ambient computing hardware releases.
I had a chance to sit down with Clay Bavor, Google’s Vice President for Augmented and Virtual Reality, to talk about their latest AR & VR announcements, as well as how Google’s ambient computing and AI-driven conversational interfaces fit into its larger immersive computing strategy. YouTube VR is on the bleeding edge of Google’s VR strategy, and the VR180 live-stream camera can broadcast a 2D version that translates well to watching on a flat screen while also providing a more immersive stereoscopic 3D version for mobile VR headsets. Google retired the Tango brand with the announcement of ARCore on August 29th, and Bavor explains that they had to come up with a number of algorithmic and technological innovations in order to standardize the AR calibration process across all of their OEM manufacturers.
LISTEN TO THE VOICES OF VR PODCAST
Finally, Bavor reiterates that WebVR and WebAR are a crucial part of Google’s immersive computing strategy. Google showed its dedication to the open web by releasing experimental WebAR browsers for ARCore and ARKit so that web developers can build cross-compatible AR apps. Bavor sees a future that evolves beyond the existing self-contained app model, but this requires a number of technological innovations, including contextually aware ambient computing powered by AI as well as the Visual Positioning Service (VPS) announced at Google I/O. There are also a number of other productivity applications that Google is continuing to experiment with, but screen resolution still needs to improve from today’s roughly 20/100 visual acuity to something closer to 20/40.
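As a rough back-of-the-envelope on that acuity gap: a common rule of thumb is that 20/20 vision resolves about one arcminute, or roughly 60 pixels per degree, so Snellen fractions translate directly into the pixel density a headset would need. This is only a sketch under that assumption, ignoring optics, screen-door effects, and rendering resolution:

```python
def snellen_to_ppd(denominator: int) -> float:
    """Convert a Snellen acuity denominator (20/denominator) to
    approximate pixels per degree, using the rule of thumb that
    20/20 vision resolves ~1 arcminute (~60 pixels/degree)."""
    return 60.0 * 20.0 / denominator

def pixels_per_eye(denominator: int, fov_degrees: float) -> int:
    """Horizontal pixels per eye needed to match a given acuity
    across a given field of view."""
    return round(snellen_to_ppd(denominator) * fov_degrees)

for acuity in (100, 40, 20):
    print(f"20/{acuity}: {snellen_to_ppd(acuity):.0f} ppd, "
          f"{pixels_per_eye(acuity, 100)} px across a 100-degree FOV")
```

By this estimate, going from 20/100 to 20/40 across a 100-degree field of view means jumping from roughly 1,200 to 3,000 horizontal pixels per eye, which is why Bavor expects it to take a couple more display generations.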
After our interview, Bavor was excited to tell me how Google created a cloud-based, distributed physics simulator that could model 4 quadrillion photons in order to design the hybrid aspheric Fresnel lenses in the Daydream View. This will let them pursue machine-learning-optimized approaches to designing VR optics in the future, and it will likely have other implications for VR physics simulations and potentially for delivering volumetric digital light fields down the road.
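Google hasn’t published details of that simulator, but the core operation in any photon-level optics simulation, refracting each ray at a surface with Snell’s law, is embarrassingly parallel, which is what makes a cloud-distributed design run like that feasible. Here is a minimal, purely illustrative 2D sketch; the function names, lens index, and angle range are my own assumptions, not Google’s:

```python
import math
import random
from typing import Optional

def refract_angle(theta_in: float, n1: float, n2: float) -> Optional[float]:
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns the
    refracted angle in radians, or None on total internal reflection."""
    s = (n1 / n2) * math.sin(theta_in)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

def mean_refracted_angle(n_photons: int, n_lens: float = 1.5, seed: int = 0) -> float:
    """Trace photons hitting an air-to-lens surface at random incidence
    angles and average the refracted angle. Every photon is independent,
    so a simulation like this shards trivially across many machines."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n_photons):
        theta = rng.uniform(0.0, math.pi / 3)  # incidence up to 60 degrees
        out = refract_angle(theta, 1.0, n_lens)
        if out is not None:
            total += out
            count += 1
    return total / count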
Google’s vision of contextually aware AI and ambient computing has a ton of privacy implications, similar to my many open questions about privacy in VR, and I hope to open a more formal dialogue with Google to discuss these concerns, along with potential new concepts of self-sovereign identity and cryptocurrency-powered business models that go beyond its existing surveillance capitalism business model. Google’s latest AR and VR announcements didn’t receive a huge emphasis during the press conference, as AI conversational interfaces and ambient computing received the majority of attention, but Google remains dedicated to the long-term vision of the power and potential of immersive computing.
This is a listener-supported podcast; please consider making a donation to the Voices of VR Podcast Patreon
Support Voices of VR
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR Podcast. So I went down to San Francisco to Google's press event on October 4th, where I was able to listen to a variety of different leaders from Google talk about their latest hardware products, as well as the vision of their company of where they're at now and where they're going. So computing is changing, and we're moving away from a desktop paradigm and even a mobile paradigm. And I think that we're slowly moving into more and more of an immersive and ambient computing paradigm, one that's being driven by artificial intelligence and more natural, intuitive interfaces, with conversational interfaces and being able to use gestures and our full body within our interactions with technology. So those are the larger themes that I saw emerging from here, and the specific content of that comes out in things like ambient computing, where there's Google Home, and then there's Google Home Mini, and then there's Google Home Max. All of these are speakers that are going to be around your home, where you're going to be able to have these conversational interfaces with Google Assistant. Probably the biggest use case is playing music, but there's all sorts of other things, like calendar applications and integration with the Internet of Things, and being able to chain together different commands so that you can say, good night, Google, and it will turn on your security, alter your temperature for the evening, as well as tell you what's going to be happening tomorrow in your schedule. So these are conversational interfaces where you're just speaking into technology and things are happening. And then there's things like the phone, with the Pixel 2 and the AI assistant built in, as well as Google Lens, where you're starting to be able to take photos of things and use computer vision to do image searches and hook into the knowledge graph. 
And right now we're using the phone as the primary interface with the screen, but eventually, in the next five to ten years, we're going to be moving into these augmented reality headsets. They also released a brand new Daydream View, which is far superior to their previous Daydream View headset. They did another iteration that actually solved a lot of the problems that I had with the first one, and it's a lot more comfortable: better lenses, better field of view, better heat management, and all-around better weight distribution. It fits better on my face and gives a more immersive experience. I mean, it's so much better than the previous iteration. But right now, virtual reality and augmented reality are trailing behind the leading edge of the ambient computing, the conversational interfaces, and mobile phones, as well as the Pixel Buds, which are wireless Bluetooth headphones that let you interact with the Google Assistant with the phone in your pocket, without even having to look at the screen. So a lot of these ambient computing technologies are getting us out of the habit of having to always look at our computer screens. So I had a chance to sit down with Clay Bavor to connect some of the dots between this ambient computing and conversational interfaces and how that is feeding into their larger initiatives for augmented and virtual reality. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Clay happened at the Google press conference in San Francisco, California on Wednesday, October 4th, 2017. So with that, let's go ahead and dive right in.
[00:03:42.541] Clay Bavor: I'm Clay Bavor. I'm the VP of Augmented and Virtual Reality at Google. And, well, Kent, a big fan of your podcast. Delighted to be here. So we just had the Google hardware event keynote just a couple hours ago. And the thing that's exciting for me is my teams have been working on augmented reality and virtual reality for several years, with Cardboard, then Daydream, with Tango, and more recently ARCore. And all of those things are starting to come together in the announcements today and some of the products we announced. And so, first thing, the new Pixel 2 phones, they're Daydream-ready. We've optimized their displays, processors, and sensors for virtual reality. We've also done some really neat things for augmented reality to optimize the camera and the whole system for AR. We announced a new, updated Daydream View headset, which includes everything new: redesigned optics, it's far more comfortable, and it still has the kind of familiar, approachable fabric design that people really, really loved. And then we have some neat things in the AR area as well. Integrated into the Pixel camera is a new feature we call AR Stickers. And it includes collections of 3D characters and objects that you can, right from the camera, insert into the scene and have a lot of fun with. And so a lot of stuff going on that we're excited about.
[00:04:56.751] Kent Bye: Yeah, I just wanted to start with the Daydream View, because I had a chance to try out the latest iteration, and I have to say that it's so much better than the first iteration. I was really disappointed in the build quality of the first one, but also how it fit on my face, the light leakage, the lenses, all sorts of issues that just prevented me from having a good VR experience. The Samsung Gear VR was better than the Daydream View initially. But with this latest iteration, it fits my face a lot better, it's got heatsinks so it doesn't get so hot on my face, it's got better lenses, and a little bit higher field of view. So overall, it's matching that depth of a quality immersive VR experience that I would expect and be able to stand behind and recommend people check out to be fully immersed. So why don't you first talk about the big things that were changed and the feedback that you took from the first iteration to rapidly iterate and release a second one a year later.
[00:05:49.942] Clay Bavor: Yep. Well, first, Kent, thanks. Your stamp of approval means a lot. There were a lot of things we liked about the first one: it was approachable, the fabric design, it was comfortable, made out of soft materials. But we knew there were so many things, right, that we could make better. And we really spent the last year refining, polishing, and improving the core elements of the product. And that starts with comfort and how it fits. And as I think you noticed, we've completely redesigned the face pad, the materials, and so on, so it fits a far broader range of people and faces, reduces light leakage, distributes weight more evenly. We also added an optional top strap, if you're going to be in longer VR sessions, that distributes the weight even more evenly. And so big improvements to comfort, and I think you'll also notice that build quality and materials have really improved. It still has the fabric design, but we've used a more rugged and frankly just cooler-looking and -feeling fabric for the all-new version. So, comfort's first. Second, we made some huge improvements to the optics. And so, as you noticed, the field of view, I think there are many ways to measure field of view, as you know, but relative to last year, it's an increase of 10 to 15 degrees, which really makes a difference in increasing immersion. What's neat about them is that the lenses aren't the typical aspherical elements you'd find in mobile VR headsets. Instead, they combine Fresnel elements with aspherical elements into a really, really, really nice lens that, again, has a wider field of view, but also far greater acuity in the center region and a larger center region, a larger sweet spot. So it fits, and is really crystal clear, for a wider variety of people. And then the third big thing we wanted to work on is that VR pushes any device hard. For mobile phones, it pushes them really hard, creating these 3D environments and so on. And that means they get hot. 
And if your phone gets too hot, it starts to throttle back and the CPU and the GPU get slower. And that's an issue if you want to enjoy a longer VR experience. So, we designed a really innovative thermal solution into the front panel of Daydream View. So, there's an ultra-lightweight magnesium heatsink and then a special rubberized coating on the inside of the front door that basically wicks heat away from the phone into the heatsink and then uses convection, vertical airflow, to dissipate the heat extremely effectively. What's cool about it is it's more effective, meaning your phone runs several degrees cooler than it would if it were just exposed to the open air. And in typical use scenarios, as in you're not using VR in the middle of a desert in direct sunlight at noon, we've observed most Daydream-ready phones never go into thermal throttling. The Pixel 2, you can run for as long as you want. Battery permitting, obviously. So we're really excited about that.
[00:08:33.520] Kent Bye: Yeah, and the other thing that I noticed is that with YouTube as an application, the streaming seemed to be very seamless. And if you look at the different applications that people are using in VR, it seems like with Google and YouTube, watching video is something that people are already doing, and there's an increased level of immersion for some specific events, whether it's a live event streamed with the VR180 camera that you released, which has stereoscopic 3D. Optimizing for stereoscopy in live-streamed immersive content may be better than something that is full 360 but monoscopic. So I'm just curious to hear your perspective on how you see YouTube and video as a potential driver of adoption of these immersive technologies.
[00:09:19.818] Clay Bavor: Well, first, so we've been working for years on YouTube VR, all the way back to the first version of Cardboard, but really picking up in the last couple years, optimizing it for Daydream. And we just think it's one of the most exciting things there is to do in VR. And it turns out people love going places. People love seeing people, meeting people, getting access to places that they just otherwise couldn't in the world. And everyone has something they're excited about, a sports team, a celebrity, a band. And we found VR video, captured well, delivered well, can be really transporting and really resonates with people. It's one of the reasons we invested so early in Jump, our VR video capture system that uses a camera array and then some very sophisticated software running in a data center to create stereoscopic VR video. And we really do believe that stereoscopy, your left eye and your right eye seeing different things, which creates that vivid illusion of near things seeming near and far things seeming far, that's core to the VR experience and is really important. And as you noticed, we've made a bunch of optimizations to YouTube VR video delivery to really improve that experience. And I don't think you've seen the last of those optimizations. We've got some other things up our sleeves that we're pretty excited about.
[00:10:34.973] Kent Bye: Yeah, and one of the things that I noticed from Google I/O until now is that at Google I/O, a lot of the demos you were showing with AR were Tango-enabled. So you had depth-sensing cameras that were able to do very volumetrically enabled experiences in education. Apple announced ARKit at their developer conference, and then on August 29th, you announced ARCore, which was something that I would have potentially expected to have been announced at Google I/O, but it seemed like it was in some ways a reaction to ARKit, to ditch this requirement to have depth-sensing cameras and to get this AR technology into people's hands so they could start playing with it and innovating without having a hardware requirement. So maybe you could just talk through that process of what happened there, why ARCore wasn't announced at Google I/O, and where you see it going from here on out.
[00:11:22.540] Clay Bavor: Well, first thing, just to say again, on the Pixel 2 phone, we've actually calibrated every single phone's camera individually to pull out what are called the lens intrinsics. All of the kind of micro-variation in lenses, sensors, and so on. And you can imagine we've been working on that for a while. And so ARCore, AR tracking without depth sensing, has been in the works for a while. And the short answer is we announced ARCore when it was ready, when we were ready to share it. And as you can imagine, one of the strengths of Android, with 2 billion phones out there, is there's a wide variety of phones, and it was important that we be able to bring ARCore to scale. And there were some real algorithmic and technical breakthroughs that we wanted to make, that we wanted to make sure were part of the SDK, to really give developers a uniform, high-quality surface area to develop on and guarantee a high level of performance for users across a wide variety of phones. And so we've been working with partners to get ARCore optimized on, at launch, over 100 million phones that it runs on. And so, short answer to your question is, when it was ready.
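For context on "lens intrinsics": in the standard pinhole camera model, these are the focal lengths and principal point that map 3D camera-space points to pixels, and they vary slightly from one manufactured unit to the next, which is why per-phone calibration matters for AR tracking. A minimal sketch of the projection follows; the numbers are illustrative, not actual Pixel 2 calibration values:

```python
def project_point(x: float, y: float, z: float,
                  fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Pinhole projection: camera-space point (x, y, z) -> pixel (u, v).
    fx and fy are focal lengths in pixels, and (cx, cy) is the principal
    point; per-device calibration recovers these intrinsics, which differ
    slightly between individual phones and lenses."""
    u = fx * (x / z) + cx
    v = fy * (y / z) + cy
    return u, v

# A point 2 m ahead and 0.1 m to the right, with made-up intrinsics:
u, v = project_point(0.1, 0.0, 2.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
# u = 690.0, v = 360.0
```

AR tracking solves the inverse of this mapping, so small per-unit errors in fx, fy, cx, and cy translate directly into drift of virtual objects, which is one reason calibrating each phone individually helps.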
[00:12:27.246] Kent Bye: Yeah, and it seems like one of the things I've been looking at, both in the VR and AR industry, is really trying to figure out what is going to be the leverage point for adoption of these different technologies. And I noticed, if we take a step back and look at all the different products that Google was announcing today, you know, a year ago there was Google's entry into hardware, and now we have the next iteration. And the themes that I see are ambient computing and conversational interfaces being driven by artificial intelligence assistants. And so music, like with Spotify, actually seems to be driving a lot of the adoption of these technologies, being able to talk to your Google Assistant and say, play me music. And with all the speakers that you have, you're getting away from the screen and starting to have these conversational interfaces in the home, with all the artificial intelligence and Google Assistant built into that. But photo sharing also seems to be a huge part of that, with AR Stickers, for example, as maybe the entry point for ARCore into the mainstream, versus something educational or something that may require a Tango-enabled depth sensor to do more rigorous AR enterprise applications. And when you compare those two, AR Stickers seems, on the surface, like something that people who are really hardcore into technology may not get excited about, but it may actually be a highly leveraged point for adoption of these technologies. So I'm just curious to hear your thoughts on how you see all this ambient computing, artificial intelligence, and photo sharing playing into the adoption of these more immersive technologies.
[00:13:57.102] Clay Bavor: Well, really two questions there. One, what coherence do I see between some of the other things we announced, conversational interfaces, ambient computing, and then AR and VR? And second, what are maybe the introduction points or the scale points, as you described, in early AR and AR Stickers? On that first one, computing becoming more ambient, more contextual, more conversational, I see a lot of the work we've done in the Assistant, and more recently what we showed here with Google Lens, using vision, as deeply coherent with what I think of as immersive computing. AR, VR, computing that really feels real to you, whether it's integrated seamlessly into your environment or is all around you. And that's one of the, I think, foundations that we're approaching augmented and virtual reality with: how do we make computing feel more natural, more intuitive, where instead of learning the keyboard or mouse or the touchscreen, it just works like you'd expect the physical world to work. One of my favorite examples: instead of looking down on a two-dimensional representation, a mini-map of where you are with a blue dot, and then walking 20 feet one way to try and figure out, am I on this street or that street, you can imagine footsteps overlaid on the sidewalk taking you back to your car, or a character walking you there. That's how a guide would get you there. So I see a lot of coherence between ambient computing, computing that's woven into the environment around you, and computing that you're able to interact with much more naturally. On the second question, I think you capture it well, which is that augmented reality, to your average person, sounds kind of complicated. Sounds like, I don't really know what that is. And one of our goals with AR Stickers, and integrating them in a lightweight way into the camera, is to just introduce people to the idea. 
Oh, wow, I can take a digital character from my phone and basically push it into the environment and it kind of knows where it is and it reacts to things. Oh, that's cool. I wonder what else that could do. And if that's what we accomplish with this kind of first step into AR applications, then I'll consider that a great success. Just kind of getting people used to the core concept. If you do that, then you can build, I think, all of the far more interesting, more powerful, more involved applications that we can imagine, but the world might not be ready for yet. I see some similarity between what we're doing with AR Stickers, really approachable, simple applications of augmented reality, and the way we first approached virtual reality with Cardboard. It's a cardboard box. Oh, wow, that's neat. Oh, I kind of get it. I'd like to know more. That's what we're trying to do here, too.
[00:16:41.482] Kent Bye: Yeah, and wondering if you could speak to the long-term vision when it comes to WebVR and WebAR, because right now we have an app model where things are bundled into applications. But I feel like just having things available on websites may be just more of a seamless experience and may be better suited in terms of augmented reality. When you're in a physical location, you don't want to have to download something. You just want to have instant access to something. So just curious to hear your own vision of the future of WebVR and WebAR when it comes to Google's strategy.
[00:20:10.731] Kent Bye: I'm just curious to hear your thoughts on productivity and whether or not you can get into flow states within VR and different initiatives that Google may have to be looking at how to bring VR into the workplace and something you use day to day.
[00:20:25.790] Clay Bavor: Well, before working full-time on VR, I actually ran the product and design teams for many of our productivity applications: Google Docs, Google Drive, Gmail. It's a space, personally, I'm really excited about. And I do think there's something about VR's ability to take you places. And maybe embracing what in some applications is today something of a drawback of virtual reality, which is that you black out the world around you. Embrace that instead. And I think in particular for things like writing or programming, you really want to focus. You want to load up your working memory and get ideas, get code, out of your head and into a form that's useful. And so I do think there are some opportunities there in two respects. One, this idea of blocking out distractions, kind of headphones for your eyes, if you will. And number two, using space and spatial memory to extend your working memory. So if you look at memory competitions, the greatest memories in the world, they use this notion of the memory palace, where they actually imagine a physical space and place things they want to remember in that imagined physical space. And we exist in space, and so we've evolved for millions of years to be really good at navigating and remembering things spatially. Making use of that for something like programming, where instead of just having two giant monitors you can actually anchor concepts, pieces of code, libraries, documentation in space and refer to that, I think could be pretty powerful. I think what's holding productivity in VR back a bit is we're still below, I believe, the threshold of screen resolution you need in order to have a satisfying kind of text-based experience. Today's headsets are at 20/100 vision. I think you need something like 20/40 to get there. So we're making progress on that, in particular with some of the work we've been doing with partners on next-generation displays. But I think it'll be a couple years before you're there. 
And of course, it's not just the displays. You need to build the apps, the tools that actually make use of this stuff. And I've seen some neat things there. I don't think anyone's cracked it yet.
[00:22:34.689] Kent Bye: Anything else you'd like to say?
[00:22:36.572] Clay Bavor: No, Kent, I just really appreciate the time and enjoy your podcast.
[00:22:39.793] Kent Bye: Thank you. Awesome. Thank you so much. So that was Clay Bavor. He's the vice president of augmented and virtual reality at Google. So I have a number of different takeaways from this interview. First of all, there is a thing that Clay was super excited about, and we didn't have time to really dive into it, but he wanted to share it with me after we stopped recording. And it's essentially that they were able to create this digital light field simulator in order to create these Fresnel lenses, in a way that just sounds super impressive technologically. So the lenses are just so much better on the Daydream View. There are so many things that are just so much better on the Daydream View that it's on par now with the Samsung Gear VR in terms of having a super high-quality mobile VR experience. But the lenses are just a huge innovation. They may actually be some of the best lenses out there on the VR market. So what Clay said is that in order to do the physics simulation, it was too much computing power to do on a single computer. So they basically abstracted it to be simulated in the cloud, and they were able to simulate four quadrillion photons within this environment in order to design these lenses and really figure out the physics of it all. That is just remarkable for me to hear, that they were able to create a physics engine that was that specific and that detailed. And I think the next step, because it gets so complicated, is that they're going to start to attach machine learning and artificial intelligence to optimize the production of these lenses, and this super crazy future of having AI-designed products is pretty amazing. So Clay was super stoked about that, and I regret not having the time to dive into it. 
It was just a 20-minute interview, and there were too many questions to really cover everything. So the other thing is that it feels like Google started as a software company and has slowly been turning into more and more of a hardware company. They just acquired the HTC team that was working on the Pixel 2 phone to be able to continue to do this hardware development. I asked afterwards whether or not there was any specific virtual reality IP or anything related to this acquisition between Google and HTC, and there wasn't. It was basically completely separate and just having to do with the Pixel phone work at HTC. But Google overall is using all of their expertise in artificial intelligence to start to embed it into all of their different hardware. There's going to be all sorts of AI functionality built into the technology. So one of the things that Clay said was that Google Lens is actually foreshadowing where augmented reality is going. And then there's the visual positioning system, the VPS, which is this layer of being able to do computer vision at, say, a Lowe's store, which was the example they were giving, where if you're within a retail store and they're able to capture where things are at, then you're able to actually go find something without having to ask somebody who works there where it's located. You can just get directions to it. And so it's this idea that there's gonna be a layer, kind of like GPS, but a visual positioning system, that's gonna be overlaid on top of our world. And Google has been doing so much stuff with Google Earth and computer vision, and they're already basically creating that virtual map of the world. And that is so much of what Google's services have been able to do so well: to create this knowledge graph, being able to suck in all this data and make sense of it. 
They're able to teach and train their artificial intelligence so that, you know, you're able to add all of this information and context on top of your real world. So it seems like we're moving towards this future where the world is going to start to be a little bit like the operating system, where there's going to be context and information coming up based on where you're located, what you're doing, and who you're around. And it's going to be moving away from the application model that we have now, where you open up a specific application to do a specific function. The computing is just going to be ambient, and it's going to have all that contextual information and these different interfaces for you to interface with the world. So that's the vision of where things are going, and I think the technological roadmap involves things Google has already been showing they're working on, and they're probably the most well-suited to do it. I think the caution that I have personally is that there are so many privacy issues with Google that I don't even know where to start. It was difficult for me to know what the privacy angle is with virtual and augmented reality, because honestly, a lot of those questions about privacy are still so far into the future, as AR and VR are still getting sorted out. So as a journalist specializing in that, it's difficult for me to ask specific, concrete questions about it. But with Google Home and all of this AI assistance, we're creating a world where, in order for the AI assistant to do this level of interaction with us, it has to have more and more of our data. And so many different times during the press conference today, Google went out of their way to say: with your consent, you upload photos, and with your consent, we were able to tag and identify everybody that's in your photos. 
You're able to speak into the microphone with Google Home, and with your consent, we're able to model your voice and do voice matching to know exactly who you are when you speak. So all of these things are cases where you are giving more and more information about yourself and sharing data, and you're getting a real benefit from that. And that's been basically the trajectory of technology for the last 10 to 20 years. So the larger point, I think, is: what are the business models that are going to go beyond this surveillance capitalism? And I want to have a conversation with Google. I want to have an honest, embodied conversation and express all of my concerns, especially when it comes to the third-party doctrine, which is that whenever you give information over to a third party, you no longer have any reasonable expectation of privacy for it. So the more that we give these companies this level of data, if the government decides to go to Google or Facebook and say, hey, we want to know everything about this person, Google is going to have these dossiers with basically no expectation of privacy for that information. And if officials from the government ask for it, they have to hand it over. There are all sorts of other issues, like, what happens if Google gets hacked? I mean, Google hasn't been hacked as far as we know, but they're sitting on a goldmine of all this data, and if a state actor got a hold of that information, then who knows what they would be able to do in terms of information warfare, specifically targeting information to us in a way that was never intended by Google, because Google's incentives are mostly, you know, economic, not malicious or political. And so I have all sorts of questions when it comes to the deeper philosophical and ethical questions about privacy and our relationship to technology. 
I do believe that technology is a mirror for us, and so as we create these immersive technologies, and as we create artificial intelligence, it reflects back to us the things that we don't like to see. And there are some ugly shadow sides to how the lines of ethics are being blurred when it comes to what types of data are being collected on us and how that data is being used. So it may be that eventually we have new economic models with, you know, cryptocurrencies enabling an exchange of value, where whatever we're paying attention to may be mining bitcoins, for example, and that allows companies to have other ways of monetizing our attention. So rather than paying with ads, we're actually just using our computing resources for them to mine these virtual currencies. I think that's actually an interesting model to go beyond the advertising- or surveillance-based model, where you're recorded and these huge, very detailed dossiers are created about you.

And finally, just thinking about some of the future of productivity: Google actually has a separate team that is working specifically on these various productivity applications, and that's the same team that developed Google Blocks. Looking at stuff like Google Blocks and Tilt Brush, and at what native tools really work well within virtual reality, Google is really thinking about this in a holistic way. I think they're really pushing forward the state of the art when it comes to creating processes for immersive design and rapid iteration, and trying to really figure out the unique affordances of virtual reality. At an engineering level, I think Google's got some of the most forward-looking teams of anyone in the industry.
To me, the thing that is a little more concerning is the business side of Google, in terms of the various economic models of surveillance-based capitalism, which I just have some concerns about. As Google becomes more and more of a hardware company, are they going to be focused on creating these premium hardware products that have artificial intelligence built inside of them? Then what are the costs and trade-offs of how they're going to ethically manage all of these algorithms and think about the limitations of artificial intelligence, as well as how to holistically navigate these new realms? Are there potentially even new business models out there that could exchange our attention for something that gives them direct value? And are we going to continue to want advertising everywhere we go, or are we going to want to build new relationships with our technology?

I think of Tristan Harris and the interview that he did with Sam Harris, talking about our relationship with technology and how so much of technology has been tuned to make us almost addicted, so that companies can optimize the time we spend on screen and make it more probable that we click on an ad. That level of thinking has just created this really toxic and abusive relationship that we have with technology. And I personally see that this move towards ambient computing and immersive computing has the potential to liberate us from being so dissociated, from escaping into our screens and being disconnected from the rest of the world. Having these different levels of ambient computing and conversational interfaces has the ability to allow us to potentially be more present within our co-located space. But if that's still done within the same level of ethics and business practices that we have now, then there are going to be some problems.
These companies have potentially created these dynamics by tuning and optimizing algorithms in ways that yield behavior people either regret or feel unhappy about, and we need to be willing to actually sit down and express the anger that I feel, and that the community feels, towards them. That is the thing we need to look at: both the responsibility of what these companies are doing, but also our own responsibility for ourselves. There's so much of our own unconscious and compulsive behavior in which we're complicit as well. And in some ways, looking at artificial intelligence and immersive technologies is just a mirror for us to look deeply into what we own and what we have control over.

So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. This is a listener-supported podcast, so I rely upon your donations to be able to travel to these different events, talk to these different people, and try to go a little bit deeper than having to file a report right away. I have the time to really think about it, reflect on it, and just have access to these leaders to get the latest insights on where the future is going, and then the time and energy to step back and reflect on it deeply. So if you enjoy that and you want to hear more, then I encourage you to become a member of my Patreon. You can join today at patreon.com/voicesofvr. Thanks for listening.