#1185: HTC’s Enterprise VR Strategy + Debate about the Future of AI

I speak with HTC’s Alvin Wang Graylin, who is the President of HTC China & Global VP of Corporate Development at HTC, about HTC’s investment strategy and what’s happening in the enterprise XR space. We talk about how hardware subsidies have distorted the overall XR hardware ecosystem, and how Meta has largely skipped over the enterprise market in favor of the consumer market while HTC continues to find compelling use cases across training, medical XR, architecture, and spatial design. We also talk about the imminent TikTok ban, how that might impact Pico VR, which is also owned by ByteDance, as well as how this might impact HTC.

At the end of the podcast, we end up having a debate about the future of AI, where Alvin is very bullish on where AI will be in the next 5, 10, and 20 years, while I remain a bit skeptical that we’re anywhere close to moving beyond narrow applications into more generalizable AGI systems that demonstrate a fusion of deeper understanding, knowledge representation, planning, world modelling, multimodal sensory sensemaking capabilities, symbolic reasoning, common sense reasoning, and storytelling.

This was recorded on the first day of SXSW on March 10th, and just a few days later on March 14th, GPT-4 was released by OpenAI, which kicked off an even deeper AI hype cycle around the latest potentials of autoregressive large language models, but also some backlash about whether or not AI is progressing too quickly without proper mitigating alignment protocols, and about whether the open letter advocating a pause on AI research was actually subtly reinforcing #AIHype presuppositions, as detailed by Emily Bender in this Twitter thread and the joint statement from the Stochastic Parrots authors. Many aspects of this story have developed quickly over the three weeks since Alvin and I had this chat, but I’d recommend checking out the GTC keynote to see the industrial applications and how the consumer generative AI breakthroughs have warranted NVIDIA CEO Jensen Huang labeling this the iPhone moment of AI.

I argue to Alvin that we need economic, political, and legal frameworks (such as the pending EU legislation around the AI Act currently being deliberated) to get sorted before quickly accelerating AI, which has the potential to consolidate wealth and power in the hands of a few companies while simultaneously displacing jobs, based upon data that may have been unethically acquired from the human labor that enabled the innovations in the first place.

Filmmaker Kirby Ferguson has continued to argue that “everything is a remix,” even with the latest breakthroughs in generative AI. But the remixing is now happening at a systemic scale, and given the tech ethos of prioritizing technological disruption over everything else, it seems like unless these AI systems are developed in right relationship to the world around us, the exploitative and extractive nature of technocapitalism will continue to advance wealth inequities and use utilitarian arguments about benefiting the majority of people while bringing disproportionate harm to marginalized communities. It’s because of this that Access Now’s Daniel Leufer argues that we need to take more of a human rights approach to regulating AI, and that self-regulatory ethics statements like this are meaningless in the absence of a regulatory body and a viable enforcement mechanism.

It is worth mentioning that Alvin continuously emphasized that the timelines of AI disruption he is speaking about are on the order of 5, 10, 20, 50, or even the next 1000 years. I did ask him about the “ultimate” potential of these immersive technologies, which invites a sort of near-term, middle-term, or distant-future speculation about where the future of AI is heading. But what’s remarkable is how we were also able to capture quite a bit of the different dialectical debates around what’s happening with AI currently, as it’s the one topic demonstrating breakthrough capabilities with the potential to disrupt knowledge workers in a way that is simultaneously awe-inspiring but also deeply troubling and terrifying, especially given the recent tech downturn and a general tech backlash and skepticism that a “move fast and break things” ethos in technology hasn’t really worked out that great in society’s favor on many fronts.

Anyway, this is a good opportunity to share some extended thoughts on AI, and I’m hoping at some point to restart my temporarily abandoned Voices of AI podcast, which is currently offline. I had a chance to interview hundreds of AI researchers between 2016 and 2018, and I think it might be worth revisiting those conversations to understand more of the history and philosophical foundations of AI and machine learning. Speaking of which, there was a really amazing Philosophy of Deep Learning gathering at NYU March 25-26, and I’m looking forward to watching more of those talks once they’re made available on their website.

I mention Process Philosophy to Alvin in this interview, and I think there are many deep insights that a process-relational perspective (see episodes #965, #1147, and #1183) can bring to understanding how the dynamics of relationships within language are yielding deeper patterns of intelligence. If the underlying nature of reality is process-relational, then a deeper grounding in process-relational metaphysics (see my talk on this here) might unlock some key interpretive insights into the multiple layers within these machine learning architectures.

Here is a list of all of my SXSW 2023 coverage, including 10 of the experiences that were showing at SXSW that I’ve covered previously at other festivals.

  1. [SXSW 2022] #1082: Roman Rappak’s Mixed Reality Live Music Performance & an Art History Perspective on XR + Music
  2. [SXSW 2022] #1086: Anne McKinnon on the Ristband Music Platform going into Alpha, Pixel Streaming, & the Future of Musical Experiences
  3. [Venice 2022] #1122: Mixed Reality Platformer “Eggscape” by 3DAR Wins 3rd Place Prize at Venice Immersive 2022
  4. [Venice 2022] #1123: Interactive Animation of Polarized City “From the Main Square” Wins 2nd Prize at Venice Immersive 2022
  5. [Venice 2022] #1128: Combining Puzzle Mechanics with Environmental Storytelling in “Mrs. Benz”
  6. [Venice 2022] #1130: Combining Mythical Metaphors, Environmental Design, & Volumetric Cut Scenes in “Stay Alive, My Son”
  7. [Remote 2022] #1151: Shooting an Immersive Doc on the War on Ukraine’s Culture with NowHere Media
  8. [IDFA DocLab 2022] #1154: Visualizing Melting Glaciers in 360 Video Story in “Once a Glacier” + Mixing Motion Capture Dance and Indigenous Poetry
  9. [IDFA DocLab 2022] #1155: Polymorf’s Multi-Sensory “Symbiosis” Explores Speculative Futures Inspired by Philosopher Donna Haraway
  10. [IDFA DocLab 2022] #1161: The Many Immersive Documentary Innovations of “In Pursuit of Repetitive Beats”: Winner of IDFA DocLab Award for Immersive Non-Fiction
  11. #1185: HTC’s Enterprise VR Strategy + Debate about the Future of AI
  12. #1186: Chinese Ecosystem for Immersive Stories, VAST Platform, & Neo-Wulin Immersive Music Experience
  13. #1187: “MLK: Now is the Time” Brilliantly Translates Dr. King’s Speech into Embodied Interactions & Spatial Metaphors
  14. #1188: Emotionally Evocative, Virtual Eye Gazing with Ukrainians in Bombed Out Buildings in “Fresh Memories: The Look”
  15. #1189: “Forager” Volumetric Timelapse of Mushroom Growth Hits a Sweet Spot of Touch, Smell, & Immersive Storytelling
  16. #1190: Targo Stories’ Immersive Documentary Spatial Innovations with “Behind the Dish” & “JFK Memento”
  17. #1191: Closing the Distribution Gap: Atlas V’s Astrea Aims to Port, Publish, & Market the Best of Immersive Stories
  18. #1192: The Last Moments of AltspaceVR, Athena Demos’ Eulogy & Retrospective Journey into Social VR
  19. #1193: Phone-based Interactive Story “Consensus Gentium” Takes Top SXSW Prize for Chilling Speculative Worldbuilding Exploring AI Bias, Surveillance, & Biometric Agency
  20. #1194: “Jailbirds” is a Well-Told, Magical Realist Story Using Character-Driven Animation and Stylized Cinematography
  21. #1195: Exploring Non-Normative Avatars with Disabled Dancers in “Figural Bodies” Research Project
  22. #1196: “Eggscape” is a Groundbreaking, Mixed Reality, Multi-Player, Table Top Platformer Aiming for LBE
  23. #1197: Myriam Achard’s Industry-Leading, Immersive Curation for Montreal’s Phi Centre
  24. #1198: AmazeVR is Bringing High-Res, Immersive Concert Experiences to the Quest Starting with K-Pop Band Aespa
  25. #1199: “Whipped Cream: The Dark” Interactive Music Video Blending Volumetric Capture with Spatial Locomotion
  26. #1200: Defining Process-Relational Architecture with Andreea Ion Cojocaru: Spatial Design as a Participatory Improv Performance
  27. #1201: Jessie Cohen’s Oral History of Public Relations for Immersive Stories from 2013 to 2023
  28. #1202: Miro Shot’s Second Mixed Reality Concert at SXSW: An Intimate, Live Performance Ritual
  29. #1203: “Body of Mine VR” Uses Full-Body Tracking Embodiment to Explore Gender Dysphoria & Transgender Testimonies
  30. #1204: Blending Open World Exploration with VR Immersive Theatre Drama with “Find WiiLii” in International Collaboration with GiiOii Immersive Studio and Ferryman Collective
  31. #1205: A Primer on Media Geography, Human-Environment Process-Relational Philosophy, and Virtual Natures with Claire Fitch
  32. #1206: The District VR Enables Professional DJs to Play Live Social VR Gigs without Mixer Hardware
  33. #1207: “Temporal World” Blends Touch with Sound Design for a Unique Haptisonic Experience about Memory
  34. #1208: 2023 SXSW Immersive Recap and Highlights with Programmer Blake Kammerdiener

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing and the structures and forms of immersive storytelling. You can support the podcast at patreon.com/voicesofvr. So I'm going to be kicking off a 24-episode series of the 20-plus hours of conversations that I did at SXSW. I was focusing primarily on the immersive stories and the artists and the creators there at SXSW, where there's this intersection of music and immersive storytelling and these haptic sensory experiences. But I'll also be talking about other aspects of distribution, and I also had a chance to talk to HTC's Alvin Graylin, who heads up HTC's efforts in China. He's also the Vice President of Global Corporate Development at HTC, and so he's really focused on not only investing in a lot of different XR businesses but also working with different XR enterprise entities, and so I get some reflections on the overall XR ecosystem from Alvin's perspective. And yeah, at the end of this conversation we also do a deep dive into AI, because of the ChatGPT that's coming from OpenAI. This was recorded on March 10th, and it was not until March 14th that GPT-4 was released, so we were right on the cusp of the new version of ChatGPT coming out. And then there was the Philosophy of Deep Learning conference that happened at NYU on March 25th and 26th, diving deep into all the different current explosions of AI. And so over the last number of weeks, I've been kind of monitoring what's been happening with AI, and I see this dual aspect of, like, the peak of a Gartner hype cycle of inflated expectations as to what's possible, while simultaneously there's a lot of quantum shifts that are happening with applications of generative AI and all sorts of other aspects of artificial intelligence at the same time.
I highly recommend checking out NVIDIA's GTC keynote that dives into a lot of the different underlying infrastructure and applications of artificial intelligence and machine learning. That was happening on March 21st, and the CEO of NVIDIA was saying that AI is having its iPhone moment, meaning that there's an inflection of capabilities with generative AI, and lots of different excitement, and also fears and hyperbolic hype at the moment. So at the end of this conversation, we take a look at some of this, and we have a little bit of a debate where Alvin's arguing, I guess, for a lot more of the most exalted potentials, and I'm arguing for a little bit more of a cautious look at some of these different things, but also trying to look at things holistically in terms of what are the larger cultural, economic, and political contexts that we're living in, and what does it mean to have this fast acceleration of these technologies without either specific guardrails in place or figuring out how to handle this explosion of artificial intelligence. Is it hype, or are there some real functional applications that are coming out of it? I feel like the answer is somewhere in between: trying to navigate what is hyperbolic hype at the moment, but also realizing that we're actually in this huge shift of what the capabilities of artificial intelligence are going to be as you add these large language models into combinations of other things to achieve deeper levels of understanding and planning and reasoning and all the different things that are missing from the existing autoregressive large language models like ChatGPT. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Alvin happened on Friday, March 10th, 2023 in Austin, Texas at the South by Southwest Festival. So with that, let's go ahead and dive right in.

[00:03:32.240] Alvin Graylin: I'm Alvin Graylin. I've been in the VR space for a long time, as you know. We haven't talked for a few years, but I first got into VR about 30 years ago with Tom Furness and the HITLab. And then in between, I've been doing some AI and AR-related projects and startups. In the last eight years, I've been with HTC, running their China business and their global investing, and now I've also been charged with doing global corporate development. Yeah, it's been a fun ride to see how things have developed over the last few decades, and then especially the last five, ten years, where I think it's really taken a leap in terms of where the technology's gone. I can say that I'm actually probably more energized and confident about the future of XR and VR than I've ever been. Awesome.

[00:04:18.717] Kent Bye: Yeah, maybe you could give a bit more context as to your background and your journey into VR.

[00:04:22.920] Alvin Graylin: Yeah, so I guess my background: I studied electrical engineering and computer science at the University of Washington and then MIT. I also have a master's in business administration from MIT. I studied neural networks at the University of Washington, along with natural language processing and computer architecture, and then at MIT I was doing symbolic AI as well as geographically distributed large networks. I have done multiple startups in both fields. In Boston, I worked on AI using genetic algorithms to do large-scale data processing for user-value predictions. And then when I went to China a few years later, I started up the first natural language search company in China. We were essentially doing what ChatGPT is doing now, or what Siri has been doing, back then in 2005, where people would send in a message. And this was before WAP, so we were actually using text messages. You would send a message and you would get an answer in a sentence of what you were asking for, and we allowed people to use any format of questioning. In fact, it was picked up by all three of the Chinese carriers to do their customer service and response systems, and they were able to cut their phone service staffing by like 30 to 40 percent, which was a huge deal for them because they had a lot of people doing this work. But, you know, later on I also did a location-based social network using AR, where you can use your phone to look around and it would find events and overlay activities and venues based on where you're pointing, and give information about what's happening in those places. But, you know, my prior life was a lot of startups, and I also worked with Intel. I actually worked on the early multimedia architecture for data processing on the CPU. It was called MMX, the multimedia instruction set, using single instruction, multiple data, kind of before GPUs were being used.
And the last seven, eight years, it's really been about helping build the ecosystem for HTC and for the XR ecosystem in China and globally. And, you know, we've invested in over 100 companies in the XR space, probably 120 companies. We've also brought thousands of developers into our content ecosystem with Viveport, as well as really built out the leading hardware devices for XR throughout the last eight years. We were the first ones to put out six-degrees-of-freedom VR devices, the first to put out wireless PC VR, the first to do tracking systems that allow physical objects to be brought into a VR environment. And then the first to do six-degrees-of-freedom standalones, well before Oculus did it, where they had the Go, which was a 3DOF device, and we already had the Focus. And then the first ones to bring two-hand controllers into standalones. And then recently, in the last couple of years, the first ones to bring a full standalone into a thin and light form factor, last year with the Flow and then this year with the XR Elite. And the XR Elite now, I think, is the most versatile product on the market. I don't know if you had a chance to try it. I should bring it and let you have a test of it. But it can link to PCs, it can link to your phone, and it's fully standalone. It can do VR and MR, and if you want to take the battery off the back and make it a glasses form factor, you can do that. In fact, I was just at the VR healthcare conference last week, and a lot of the doctors there were saying, wow, we really like this form factor, because we have so many patients that are in their bed and they can't have this giant battery behind them.
And so there's real value in having this glasses form factor where you can put a battery anywhere. In fact, Bill Levine, who's head of Penn Medicine there, was saying, you know, we've got these giant proton scanners, radiation systems for cancer treatment, and we have these patients that are in there, and we'd like them to be distracted and entertained while they're going through their treatment. So, you know, can we put a little tiny battery on this thing and just make it work all by itself? And the amazing thing about our product is that it's so versatile. It can even work in that kind of very specific, very space-constrained environment. You know, those kinds of things just make me feel very excited that we're able to add value at all levels of service. In fact, I was just telling you about the doctor who's been doing conjoined twin separations. In front of this group of 400 industry people, he was saying, you know, I could not have done it without VR. And that just brings a lot of warmth to my heart, to say, hey, this technology is actually helping to do good and to do things that would not be possible if it wasn't there. Oh, and there were people from the VA there, and they have this program where they bring thousands of these devices around to their hundreds of facilities to do essentially mental care and post-traumatic treatment for their veterans. And they said, our number one job is to make sure that our veterans don't kill themselves.
And when they use these treatments, they're seeing results very quickly, because they were showing, I don't have the exact data, but there was very significant improvement in terms of the patients' will to live and satisfaction with life after using these kinds of treatment programs, where you're essentially getting VR therapy, having a virtual doctor talk to you and bring you through a kind of self-healing process, and also ones where they can bring them back into the battlefield and take them out step by step, you know, give them a way to really heal when they come back so quickly from one environment to another. And also, for a lot of the patients who maybe have lost their limbs and can't move around, having them feel what it's like to go to different places in the world and helping bring some level of satisfaction for them. So we're just seeing all kinds of different use cases. And of course, you know about all of the pain reduction work, because there were doctors that said, hey, we have such an epidemic of opioid overdose. There's like tens of thousands of people every year that die from opioid overdose in this country. Maybe even hundreds of thousands. Actually, it's hundreds of thousands. I was like, really? That's a crazy number. And if you could use a non-medication model to reduce the pain of these people, how good would that be? And having a device like the XR Elite would be so much more suitable than having a big box on your head, because it's so much lighter and more comfortable, and if you want to use it for a few hours, you're going to be able to do that. So it really makes me feel like we're doing good. In fact, Walter Greenleaf, you know, in his keynote, his first slide was saying, we're doing good work, we're doing meaningful work. You guys should be very proud of all the work that the industry is doing.
So it's encouraging to have both the healthcare industry as well as the VR industry come together like that. I'm really encouraged by the different use cases that I saw, in fact, just last week at this healthcare conference.

[00:11:10.127] Kent Bye: Yeah, well, the thing that comes to mind for me is that, you know, I'm giving a talk here at South by Southwest about the ultimate potential of VR, and I lean heavily on this model from Simon Wardley, where he says that there are these evolutionary phases where you have an idea that's proved out in academic literature, then it's deployed to, like, a custom bespoke enterprise application, and eventually those applications come into the mainstream consumer context, and then eventually out into mass ubiquity. And what I see in the VR industry, to some extent, is that Meta has, in some ways, tried to skip over that enterprise phase and jump straight to the consumer market, putting billions of dollars of subsidies into the headsets to artificially lower the price. And then we have HTC that has, I think, from the beginning, really embraced this enterprise market. And you've invested in, you said, 100 to 150 different companies in that more enterprise context. For me, it seems like there's kind of a natural organic growth that could happen by focusing on the enterprise. So I'd love to hear any additional context to kind of bear out that natural evolutionary progression of VR. People from the consumer space have their opinions of some of the different headsets and the price points, but yet there's all this stuff that's maybe happening behind the scenes in the enterprise space with VR and HTC that maybe doesn't get as much transparency all the time.

[00:12:26.482] Alvin Graylin: Yeah, absolutely. I think you kind of hit the nail on the head there: we really want to let this naturally grow in an organic way, because when you try to artificially push something before its time, it's a lot of heavy lifting. And we've seen that with the billions of dollars, or tens of billions of dollars, that are being funded by Meta to push this into the consumer space. Although I think from an industry perspective, we should be thankful that there are companies who have that economic capability to do some of that education. But in fact, in the last year, ByteDance, who bought Pico, followed the same strategy in China, and I have to say it kind of ended the same way, where they poured in probably a couple billion dollars. And they got, I think, something on the order of a million units sold, but far below their goals, and the retention was also far below. I don't want to talk about the specific numbers, but I think both from Meta's perspective as well as from Pico's, they're definitely internally not satisfied with what they got out of it. Whereas I feel like we've been following a more natural model, where we see there are customers with real problems and solution providers that have an ability to solve those problems, and, you know, we help them from a technical perspective, from a distribution perspective, a promotional perspective, and in some cases with funding, to help enable these solutions to come to market faster. And we're seeing it across the board, really. In fact, I was at this closed-door CEO meeting in China two weeks ago, and pretty much every major company, both hardware and software companies, and the ecosystem were all there, and they all said, we're still very, very focused on B2B, and B2B is what's driving the business. If it wasn't for B2B, we would not be alive.
And that's what pretty much all the players have said. Some of the players have also said, can we stop subsidizing hardware? Because it sets the wrong idea, the wrong standard, for what this technology really costs, and it's essentially a race to the bottom. And I feel like these are industry people talking to each other, saying, can we just play on an even playing field? Because that's better for the industry, and in the long term it's actually better for the users, because then you have more diversity of companies out there providing competition for each other, instead of whoever has the deepest pockets winning and the rest of the companies dying, which is actually not a great thing. Especially for the younger, smaller companies, they just can't compete in that space. And I think there's a lot of innovation in the smaller companies, but they need to have a chance to grow up. So I guess HTC has been trying to play the right role and also create a more open ecosystem. As you know, we've been open with our app store, we've made it available, all that content that we've gathered, we've made available to all the players, and we've made our SDKs available to all the players. We announced last year our Viverse platform, which is essentially a way for anybody to create open-ecosystem metaverse solutions, and we've made that also available to anybody to get on. In fact, all the content that's created in that is compatible with all the other VR devices, XR devices, and also with phones, tablets, and PCs, because we feel like, for this ecosystem to really thrive, we have to have a critical mass of users, and depending purely on XR users today is not enough.
So having a user that's on a phone, a user on a tablet, and a user on a PC be able to see and talk to another user or a group of users who are in headsets or glasses, we think that's going to create an experience that then leads more and more people over time to transition to head-worn devices, because that's really the best way to consume 3D content. In fact, we just announced a Viverse for Business platform, which we built because we had so many businesses come to us and say, hey, we want to use this. It's not just useful for consumers. We actually, as a business, want to create our branded solution that we want to make available to everybody, instead of just going to, let's say, Decentraland or The Sandbox, where, you know, they feel like there's nobody there. And it costs them a lot of money to build, and then they feel like it's just more of a marketing thing. They want to be able to build something that anybody can get into. And recently, I think with all the Web3 crisis issues, people are starting to move away from a purely decentralized crypto type of a theme for what they're doing. So I feel like we're trying to do the right things. And going back to the decentralized thing, we actually are also allowing for integration of Web3 or crypto technology, or really blockchain technology, into the system. So we support Ethereum, we support Polygon, and then recently we announced our partnership with Lamina1, Neal Stephenson's company. And the reason we're working with Neal is that we just feel like here's a guy who's been in this space, who's looked at it for three decades, and he's willing to put his brand and name behind this. We know that this is not going to be one of these scam companies. And having spent a lot of time talking with him, I really feel, you know, he deeply believes in the idea of an open metaverse, and he wants to create a tool set to enable it.
It's not about, you know, I'm dropping some NFTs and we're going to do a new ICO and make a bunch of money and then rug pull everybody. I don't see him doing that, right? And even if the team wants to do it, he won't let them do it, right? So I feel like that kind of a partnership makes a lot of sense to me. And, you know, he's got great ideas, I think, because he's been thinking for a long time about how to enable even more interesting use cases of that technology in this context. So, as you can see, we've been doing a lot of work for a long time to try to enable this, in some cases to our own detriment, because we're making technology that we spent a lot of money and time and effort to build, and we're making it available to other companies and the industry, which I think will pay off long term, but in the short term it may or may not actually be great for our bottom line.

[00:18:22.489] Kent Bye: You know, back in like October of 2021, when there was a shift of the name from Facebook into Meta, then we have this big, huge Metaverse hype cycle that started, and then the crypto hype cycle followed on that. And now, as I come to South by Southwest this year, there's a lot of hype around ChatGTBT and AI, which may be a similar type of bubble that people are getting really super excited about. But I know a couple of years ago, you were on the frontiers of trying to, I guess, map out the cartography of how you think about and conceive of the Metaverse and this shift from 2D to 3D and creating this dialectic between moving from one thing to another. And so I'd love to have you give a bit more context as to why you were trying to map out these different aspects of the Metaverse and how that played into the larger strategy of your other roles at HTC as you were trying to do this thought leadership of trying to flesh out what this concept of the Metaverse might mean.

[00:19:15.323] Alvin Graylin: Yeah, I mean, I think you kind of nailed down the issue there, which was, you know, a year, year and a half ago, there was this big hype cycle around the whole crypto side, and there was a lot of confusion between Web3 and Metaverse when both were hot topics. There was so much confusion, and I just felt like there were a lot of people who don't know either, or maybe think they know, and they're going out there and preaching things that either weren't true or were only partially true. I just wanted to help create a little bit more clarity, so that was when I actually made some effort to try to create clearer definitions, as well as maybe bust some of the myths around some of these areas. And I feel like there's a little bit more clarity now, but there's still a lot of confusion out there, because it's hard to communicate these somewhat complex ideas to a large enough audience. And now with the whole AIGC, or generative AI, and ChatGPT, these kinds of topics, it brings a new wave. And then I think that creates, again, another wave of confusion, to say, oh, AI is hot, that means the Metaverse is dead. And that was one of the things that, last week when I was speaking, I wanted to clarify. AI being successful does not actually bring down the Metaverse. In fact, if anything, it really brings it up, because it's an enabler technology. It's a technology that, of course, people are using to write poems and do art, but it can also be used to essentially support all aspects of Metaverse content creation. And in fact, I think the biggest thing holding back XR adoption is not price, it's not hardware, it's really about: do I have a large enough base, a library of content, that I want to use it and I can use it every day?
And are there specific use cases that make me motivated enough to put this headset on, even when it is not a perfect glasses form factor that is 20 grams that I can use all day, right? You know, as you know, with all technology, the early adopters, even the first wave of people before the mass market, they were able to withstand some of the extra friction associated with the technology if the benefit was big enough. And I think, you know, right now, when there's only a small number of studios building this content, it's hard to make sure that you have that breadth of content out there. And also the depth of content, because most of the content is a relatively limited form of content, you know, either 20, 30 minutes or two, three hours. Some of the content, you know, like Half-Life: Alyx, whatever, you can run it for 10, 20 hours, but then that's it. I mean, when I first played Half-Life: Alyx, I was super excited about it, but then I haven't played it again. Now, if we have an AI-generated toolset that can help developers, both professional and maybe indie developers, to be able to build higher-quality content at a fraction of the cost, at a fraction of the time, then we can convince a lot of these developers to create content that will be engaging, will be deep. I'm not talking just about gaming content but also utilities, tools to help you do virtual travel or virtual relaxation or virtual coaching or virtual education, whatever. But right now the cost of creating 3D content and then porting it to all the various hardware platforms is actually a lot of work and a lot of cost, and a lot of people aren't willing to put that effort into it. I've been talking to a lot of the AAA studios and their heads, and they're just saying, look, the scale of the market is not big enough in XR for us to put in $20, $50, $100 million to build more of this content. But now if it only costs a million, or $5 million, or $2 million, or $100,000, that equation changes.
And I think it's a lot easier for them to make these kinds of decisions, especially if we start making content that is accessible with, let's say, WebXR or OpenXR types of standards that reach a lot bigger pool of users for a lot less cost. And then maybe you have to scale down the performance or the fidelity a little bit in the web-based solutions. But, you know, I think people are willing to sacrifice that, at least in the near term. So I'm actually very optimistic about the fact that AI has gotten so good so quickly, and you can now verbally say something and it'll create a scene for you. You can take a few photos, and using NeRF technology, it creates a high-fidelity 3D model that you can zoom into and fly around. I mean, you can turn words into pictures, and then those pictures into 3D models, and then those 3D models into animations, or animations into worlds, seamlessly, just by using a series of tools. So we've had examples of things like Onward or VRChat or Beat Saber, small teams, right? A few people or one person making products that attracted a lot of attention, millions of users. Now, if we give them these tools, just imagine what would be possible. And what would be possible if you have a platform that then allows anybody to just say a few words and to be able to do that. So I know there are a number of startups now working on these kinds of AIGC co-development tools, not just content development or picture or model development, but actually full playable content development. I think that's going to create essentially 7 or 8 billion developers around the world, and just imagine what kind of great content they will create. And most of the content will be crap, right? Just like if you go to DALL-E or Midjourney and you look at the types of pictures that come out, most of them are crap.
But there are people who will spend time and learn how to use the tools, and with a little bit of practice they can create amazing things that even professional artists couldn't do or would spend a long time doing, and these are people who had no art training but a little bit of practice and coaching on these tools. I think if we educated more people, if we made these tools more available, we're going to have the same level of content that we have on our phones on XR devices, which then I think will really drive a lot of user interest and engagement. So I'm super excited about what's going to happen over the next couple of years as both XR technology develops as well as AIGC and AI co-generation, kind of the software 2.0 concept.

[00:25:39.498] Kent Bye: One of the things that's a bit frustrating in covering the VR industry is that a lot of the numbers of how much things are being sold are opaque, even in the context of the consumer market, but then there's the enterprise market and the B2B side, which is even more opaque in terms of what's actually happening. I get some indication when I saw the presentation from Walter Greenleaf, where he said there's around 300 VR and AR companies that are working on medical applications, and that gives a sense of the scale of how it's continuing to grow, but there's a lack of a single conference that the entire VR industry comes together for. Oculus Connect used to be sort of a proxy for that, but now that it's gone virtual, you know, we have the SVVR, and you know, there's CES, and there's other consumer electronics shows, but there's not like the one show that brings the entire industry together. And so it's very fragmented to be able to track what's happening with each of these niche industries. Since you're kind of at the nexus of understanding what's happening in the enterprise market, for me, I kind of intuitively see that there's the medical applications, there's the applications for training, there's the architecture and visualization, maybe data visualization, but also health and wellness apps in a more consumer space. But what can you tell me in terms of what types of applications are really getting traction, where you see a lot of either sustained profitability or growth in the context of the enterprise XR space?

[00:26:58.217] Alvin Graylin: I think you named some of the key ones, you know, training, education, design, healthcare, actually also simulation for factories and things like that. Also skills training, we've seen that. And also actually defense, military. I was actually listening to a talk from Palmer and he said more soldiers die in training than die in combat for the American military forces.

[00:27:24.275] Kent Bye: Is that on my podcast? It might have been. He said that on my podcast.

[00:27:26.937] Alvin Graylin: Yeah, yeah. And yes, that is on your podcast, I remember that. So I was like, that's crazy. So, you know, having this technology be able to safely allow people to train. In fact, I don't know if you know, but we work very closely with Axon, who does the Tasers. And they created a very customized training for their system using our tracker system with their actual Taser devices. And there are, I think, 500-plus police departments, just in the US, using this system. I mean, how many lives is that saving in terms of allowing people to properly use this technology, to know when to use it, to be able to have sympathy training on it, as well as skills training? I think there are quite a few unneeded deaths today in police encounters with various assailants. But to have that training, and maybe even then take it to the rest of the world, I think that would be really, really helpful to reduce civilian deaths, as well as, as in your podcast, in terms of the military, the soldiers' casualties. Yeah, so I feel like pretty much every industry you can think of, I'm seeing people apply it. And, you know, we're working with Audi and Holoride, I'm sure you know that. There are actually a few new automakers, which I can't really talk about yet, that are now implementing that kind of technology to be able to essentially create a backseat entertainment system. So, you know, most of the new cars have a number of screens now. But, you know, now you can essentially have a giant environment. It's not just a screen.
It can definitely add a different level of entertainment, particularly in the future when autonomous driving becomes more mature and we don't need to spend as much time looking at the road. At that point, you know, being able to do collaboration, you know, travel, and having a meeting with your staff and your teams, actually, that is one of the use cases we see quite a bit. And events, too: we're actually seeing a lot of events replacement, initially because of the pandemic. But afterwards, what we're finding is a lot of events companies are saying, hey, you know, we actually find that for people that couldn't make it, this is a new way for them to be able to receive that content, and also for us to get additional revenues that we wouldn't have had otherwise. And also to be able to create quick events that can happen anytime and are also not geographically tied down. Whereas, you know, physical events, they can only do one or two a year, you know, et cetera. So I feel like that's going to be quite interesting. Virtual travel, actually, and travel sales is an area I think is going to really grow. We just came out with a study about two months ago, I think. It showed that when somebody does a virtual cultural experience, it makes them 70% more likely to do physical travel to that region. And it also makes them twice as friendly in terms of liking that cultural environment or that culture than they had been before, which was actually quite powerful for me, because I think one of the key issues facing the world today is a lot of misunderstanding across cultures, across countries, creating a lot of conflict, right? Unnecessary conflict.
So if we can use this technology to create more closeness between nations, between the people of those nations, you know, even if your politicians may have certain points of view, if you've actually had the ability to virtually travel to these places and see these people, and maybe even have live virtual avatar-to-avatar interactions, you can make your own personal decision: does that make sense to me? Is what they're saying making sense? Because I see a lot of the rhetoric being said by politicians, and I think it is not representative of reality, because, as you know, I spend a lot of time in China and the U.S. And I see both sides, you know, creating enemies out of each other. And it's sad for me to see that happening when I see the potential of what could happen if both these countries actually worked together to solve bigger problems instead of trying to flame up this hatred for each other and creating more reasons for them to buy and build more weapons. I just feel like, what a waste, you know. So I just want to see this technology being applied in more ways, and actually the kind of stuff you're doing is really great, because you're giving that deeper look into this industry to a lot of people that wouldn't normally have a chance to do that. So keep up the good work.

[00:31:50.558] Kent Bye: Yeah, with the relationships between China and the United States, I wanted to just bring up one thing that's been coming up within the last couple of weeks, which is that, you know, the United States was putting forth this legislation to potentially ban TikTok. And with ByteDance being the owner of both TikTok and Pico, there seem to be some potential connections there as the VR market continues to potentially expand, you know, if they want to come to the United States. You know, my preference would be to have a comprehensive federal privacy law in the United States that would treat all of the US companies on the same footing as what's happening in China, in terms of like more of an equivalent of something like GDPR, on top of even more human rights implementations for neural rights and cognitive liberty, as in Nita Farahani's book, The Battle for Your Brain, which is covering all these different aspects of the threats of neurotechnologies. So I feel like there's a lot of potential privacy risk, but if the United States starts to take this isolationist view of kind of just banning companies that have any association with China, then that could have some implications for HTC, which is based in Taiwan but also has offices in China. So how does that play out? As these types of debates start to play out with the potential of banning something like TikTok, how does that play into the data practices of what HTC is doing? And is that something that's on your radar to try to think about how to mitigate against?

[00:33:10.950] Alvin Graylin: I mean, I think we're a little bit protected from that in the sense that we're not a social network and we're not storing any of our customers' user data. All the data stays on the device. So, you know, we're trying to take as protective a stance as possible for our users. In fact, this is why, you know, when we were talking to all the doctors last week, they all loved what we were doing, because they were like, okay, here's something that I know I can get past the purchasing board at these hospitals, because they're going to care so much about privacy and data and security and things like that. So from that perspective, it's actually not been an issue for us. But I can definitely see what you're saying in terms of some companies that have ties to social networks. It is very sensitive. And even for Meta, they have a tough time selling into schools and banks and hospitals because of some of these regulations of the companies, not just the regulation of the country. But I think the industry needs more competition, in the sense of more companies being available to the users, which will create higher innovation and higher acceleration of innovation, versus locking down a specific industry or specific players so that they cannot move forward. That actually, I think, stifles innovation. I think regulation in terms of what you're saying, making sure that the user has transparency, has the rights to manage their data, has visibility, has the ability to delete their data, has the ability to see what's available, I think those are the right ways for government to regulate, versus to completely ban one software or one hardware. And in fact, a lot of people think that China bans Google and Facebook. Actually, they just have regulations that say you have to have local servers, you have to have the data stay in China, and you have to be able to allow the Chinese government to influence some of the content on there, right?
So some level of censorship. So this is why, if you look at Bing, Bing's available. Yahoo is available in China among search systems. So it's not the fact that they're banning them. It's actually the companies themselves that choose not to follow the local regulations. So if the US government says, hey, TikTok, if you keep your data in the US, if you allow your users to have certain rights to their data, I think that makes a lot more sense than to say, hey, it's banned in this country, when, you know, there are hundreds of millions of users in the U.S. using TikTok. I have no affiliation with them, but I just feel like, for a country that prides itself on being a fair market, to use that kind of regulation is a little hypocritical.

[00:35:43.713] Kent Bye: Well, just a few more questions to wrap up, because I know that the educational context in the United States is a lot different from in China, in terms of like, in the U.S. it seems like it's underfunded, underappreciated, and you get many older generations of technology. If there are any sort of educational applications, it's going to be out-of-date technology, just underfunded and unappreciated. But in China, you have a little bit different situation, just in terms of if the government decides to put a lot of money into it, then they can make something happen. But the caveat that I'd say, back to what you were saying earlier about how ByteDance had put a lot of money in a similar way of trying to make VR happen, is that I feel like in some ways education could be in a similar situation, where you could invest a lot of money, but yet if there's not a larger ecosystem of content or people that really support it, then I'm a little bit more skeptical of something being pushed forward. But what is happening in primary or secondary education in China when it comes to virtual education?

[00:36:34.833] Alvin Graylin: Yeah, so I think that's actually something that I'm also encouraged about. Even just last week was the Two Sessions, or two meetings, which is when every year the Chinese government gets together and talks about their plans. And also just a few months ago was the 20th Party Congress, which was a big deal. Those only happen every five years. So in both cases, there was a very strong emphasis on digitizing government, digitizing education, digitizing business, digitizing state-owned enterprises, and growing core technology, right? So it's really, you know, China's on a nationally directed path to say, we want to become a more technology-centric country, and we don't want to just be focused on manufacturing. And we want our population to be the most educated and to have access to the latest technology. So when that happens, usually the city and provincial governments become very active in terms of supporting these policies. In fact, even last year before this happened, there were 30 provinces and municipalities that had created what they call metaverse centers of excellence, where they would put in hundreds of millions or maybe even billions of dollars to recruit and support indigenous company development for metaverse-related technology, hardware, software, content, whatever. And so when that happens, things will come up. I mean, with government programs, there's usually still a lot of waste, or I guess misuse, because they're not technologists. So they may or may not invest in the right things. But overall, good things will come out of that, right? In fact, I was just informed two weeks ago that I'm now one of the, you know, few members of the Shanghai metaverse governing board or something. So for all major investments by the city, they would ask experts in the industry to come in and help evaluate: you know, are these the right companies? Are they doing the right thing? Are they innovative or not?
And the fact that they're reaching out to industry to do that, that's actually also very encouraging. And I know that process took a while for them to pick, out of the hundreds of people that applied or were selected, right, a few members that would help do these kinds of things. So, you know, I feel like the U.S. could learn some things from that kind of a process, more of a directed model for government to help facilitate growth, not just having companies do it. And, you know, like I said, I think companies doing it, they're doing it for their own benefit in some ways, because they feel like they want to own the market. But I think when the government does it, well, if we look at the Internet, the Internet started as a government defense project, right, and then became an academic project, and then became more commercial. In fact, the national highways in the U.S. were built because they first wanted to have a good way to move military troops around the country in case we got attacked. So things like radio were first used by the military, et cetera. Satellites, same thing. But that technology, when it's invested in, after a while, after it matures, it then waterfalls down to the consumers. And I feel like that's what also probably needs to happen around the world, not just in China. And I think China's going to do it, no matter what other people do, because they want technology to be the reason that they continue to have a certain level of growth in their society and advancement, and to catch up to the rest of the world. So yeah, I mean, I see cities, I see provinces, and I see national programs that will make requirements for people to use more of these new education models, new technology to educate. And, you know, I think as we've seen with a lot of history, essentially the countries that are the most educated will become the most economically vibrant over time, because it creates the most innovation.
So I'm happy to see that happening in China, but I would like to see more of that happening in the U.S. and the rest of the world.

[00:40:24.051] Kent Bye: Great. And finally, what do you think the ultimate potential of virtual reality might be and what it might be able to enable?

[00:40:32.018] Alvin Graylin: So I remember you asked me this question a few years ago, but I think I've actually changed a little bit in terms of my views. With what's happening and the speed of growth in AI, I actually feel like the economic models globally will be going through a real sea change over the next few decades. And the dependency of our personal fulfillment being tied to our vocation will become less and less, because you can see right now even junior white-collar jobs can easily be replaced, or maybe not fully replaced but at least partially replaced, by AI technology as of today. In a few years, it's only going to get more and more. We used to think that the labor class would be the first ones to be replaced by technology. And I actually now think it's going to be the other way: white-collar and creative classes will actually be replaced first, because our ability to automate that work will actually happen before our ability to automate physical work, right? A nanny, it's going to be hard to replace that. But a junior clerk in a law firm, pretty easy to replace that, right? Or a junior writer or a junior artist, right? I think that the senior-level white collar will take a long time. But, you know, in the not-too-distant future, I mean, less than 10 years, a lot of the junior white-collar work will no longer be needed. You can have one senior lawyer that can probably do the work of a team. And so, what do those people do? And developers, too, right? A lot of code can now be... one senior developer can get an AI assistant. You know, he can architect it and tell the AI assistant to write that code and then review it himself. And then it's like, oh, that didn't really turn out how I like. Let me re-architect that. And so you don't need a team of 100 or 200 people to do a project. Maybe you have five or 10, because you don't need artists anymore. You just need maybe one artist that then can create as much as a team of artists used to do.
And they just need to pick from the selections or make tweaks, which changes the entire workflow a lot. And so if that's the case, then what are 7 billion or 8 billion people going to be doing? I feel like the metaverse is actually going to be the place where a lot of them are going to find their fulfillment, because in these places, they can essentially create any world they want. They can live in any world they want, and they can create challenges for themselves, or they can use it to create content that other people would then come in and enjoy, right? Because we've seen that in the agriculture sector, right, where it used to be that the majority of people in the world were farmers. Back a few hundred years ago, you know, there were farmers and then there were the aristocrats or the kings, right? And there may have been a few soldiers. And even just 30, 40 years ago, I think 70 or 80 percent of the Chinese population were farmers. Now it's less than 10%; in the U.S. it's less than 1%. So we're going to see that kind of transition, where a lot of the current work will go away and people will need to find a new realm to find satisfaction and occupation. I think the world will start moving into a UBI type of a society, because if our productivity grows by 10x or 100x, the world does not need everybody to be working 9 to 5 to survive, to be able to have a functioning society. If that's the case, then we need to give these people something to do. And I think in these virtual worlds, essentially everybody can have a realm to govern over that they can have some level of ownership to. And I think that's going to be an important psychological tool that will help maintain people's sanity and people's motivation.

[00:44:19.119] Kent Bye: Yeah, I find my body going through this sort of split experience of understanding what you're saying and seeing how it's plausible, but then also having this deep skepticism from more of the AI critics of the stochastic parrots school. You know, when I asked ChatGPT who Kent Bye is, I have enough of a presence that it says a lot of things that are completely plausible, but then it completely manufactures all these things I've done that I haven't done. And so I think it's like this bottom-up statistical probability, but there's not real understanding. You know, Noam Chomsky just had an article in The New York Times where he's saying this doesn't actually have any understanding, it doesn't have any knowledge, there's no knowledge representation in these models. And so there's like all these gaps where I feel like you can have these large language models and keep adding billions of features, but is it actually going to get to this artificial general intelligence level and live into these promises? And even if it does, would that be a good thing, to steal all this data, to do it in a way that's unethical and doesn't necessarily have all the rights, and to then displace all these workers? So I feel like there's a lot of deeper ethical issues that we're kind of in the middle of, where I can see what you're saying, but I'm also deeply skeptical that even if it's possible, it would necessarily be a good thing.

[00:45:27.190] Alvin Graylin: I mean, I see what you're saying about what's there today with ChatGPT and similar large language models, but this is essentially version 1.0, even though they call it 3.5 or whatever; it's very early days in this technology. And I think it's also based on only just using transformer technology. Now, if you start creating hybrid models, which I think you need to do, where you start having cognitive models and language models and models of the mind integrated with these technologies, then I think you get closer and closer to what people are talking about with AGI. I agree that today, if you want a fully functioning, multi-purpose AI, it doesn't exist. But if you talk to the experts in the industry, I think something like 70, 80% of them agree that within the next 30, 40 years, it's going to happen. So it's within our lifetime, or at least our children's lifetime. So it's not that far away. And what I'm talking about, when you ask about the ultimate potential, I'm not talking about the next five years. I'm talking about the next 50 years. And by that time, most people, something like 90%, believe that it will happen by the end of the century. And these are the leading minds in AI. So I guess I'm less pessimistic about the potential of this technology to really grow. And when it happens, if we get to AGI, then AGI can actually start doing its own development and its own creative thinking, which then will create an acceleration of advancement, where, you know, people talk about whether it's a fast takeoff or a slow takeoff to super AI. Even when they say slow, it's like tens of years; when it's fast, it's like a few days, right, where you can get to something that could be 10,000 times smarter than the average human, in which case most of our real-world problems will probably get solved in some way.
And I know I'm kind of putting a lot of contingencies on these things happening, but we can see the trajectory of where it's going. A few years ago, we were so excited when AI was able to beat a Go champion, but that's in a limited, fully visible, finite game. Even though it has billions and billions of possibilities, you can see everything. But now, actually, language and the ability to understand language and the ability to interact with the real world, that's an infinite game. It's a much more difficult problem. So the fact that it doesn't work perfectly in that scenario is understandable. But I think we will see most of those issues get worked out. And with the fact that multiple teams are now working on it and competing against each other, I think it will happen. Now, your other question of, is this good for society? That's actually the bigger question. Whether or not it will happen, I think, is highly likely. Whether or not it's good for society, that's more questionable. And how it actually gets used will be very important in terms of how that societal impact happens, because if it gets used by a dictator who somehow was the lucky one to first create AGI, and then somehow created a super AI, and then created some kind of new weapon or mind control that turned him into the world leader, that's probably not a good outcome. There's a possibility for that, right? Now, the probably more likely scenario is that we have a slow takeoff and the progress of the technology happens in a more open and controlled way, where hopefully more righteous people are in charge and there's some kind of governing body around to make sure that that happens. It's like, you know, with nuclear weapons: it was invented, it could have gone south really quick, but the world got together and said, you know what, this is something that's pretty bad. We should control it. And same with chemical weapons, right? So we've done it in the past.
We've been able to control some of these technologies in the past and turn it into something healthy, right? I mean, some parts of the world, a large part of their energy comes from some nuclear. In fact, it's probably one of the more cleaner energy sources that gets a lot of bad rap. So there's still a lot of responsibility on us as a society to see how that technology, when it does evolve, how do we manage it, how do we control it so that it is manageable. Because there are some fears that it's going to be so smart that we won't be able to manage it and it will take over and we will just be slaves to the robot overlord. I guess I don't think that's probably going to be the case because we hopefully will be able to put in some safeguards into it and also it will still need to have physical access to things which at least for the meantime humans still have an advantage over. Now long term though I actually think that there may it may not be a bad thing long long term this is not like the next 50 years maybe the next thousand years that somehow humans and this new digital life form becomes a symbiosis where either humans are uploaded into this new substrate, or we download it into our bodies in terms of a chip or something, right, where we essentially create a society of superhumans, of super smart animals. And just like over time you see evolution, we see animals are getting more and more sophisticated, using more and more tools, and new life forms replace old life forms. This is actually a natural cycle. If we want to actually explore the rest of this galaxy, if not the rest of the universe, our human carbon-based form is not a very good vehicle for doing that. So if we are able to either upload our conscious nerves or our minds into some kind of a digital substrate, we can then travel much faster than we can as physical beings. We can go a lot further and we don't have to eat and do a lot of other things that biological beings do. 
That would make us able to do these thousand-year journeys or million-year journeys, right? So far, it feels like we're the only place in the universe that we've found that has that level of intelligence, of using technology to be able to go beyond where we are as a single planetary species. It'd be a shame for that to get lost. And it'd be great if we can expand that and, you know, move it to the rest of the known universe. So I think we should progress and allow that new life form to thrive, not just say we're going to control it and it's just going to be a tool, but make it an equal. And in fact, in some ways it may become superior to the physical form, and it may actually be better for sustaining intelligence, not necessarily sustaining carbon life forms. And I think that type of intelligence will probably do more good for global, or not global, but actually universal goodness than what we've shown we can do. Because we've not been that great to the world in the few tens of thousands of years that we've been around, or maybe we've only really been the head of this world for the last 10,000 years, right? So we've done a lot of damage to the world in that limited time with limited tools. If we start having a more graduated intelligence, I think that allows us to have a certain level of enlightenment that hopefully will keep us from doing some of the wrong things.

[00:52:43.476] Kent Bye: Yeah, I guess just a few points in response. I feel like, at the heart, we need to be in relationship to ourselves, to other people, and to the planet. And as I've gone and done these different immersive experiences, you know, the question comes up: is there even enough material on the planet to give six, eight, ten billion people a VR headset? There's a certain real hard limit in terms of how much capacity we have to take these things to their logical extreme.

[00:53:11.547] Alvin Graylin: I'm not sure if that's actually a constraint, right? If you think about it, right now there are, I think, six to seven billion phone users in the world. Smartphones, well, maybe like 80% of them are smartphones. That's a big number. And the number of components that goes into a headset is not that different than the number of components that goes into an advanced phone these days. And there are probably 100 million PCs that are sold every year, right? There are tens of millions of cars that are sold. In terms of devices, in terms of materials, I don't think that's a constraint. In fact, if anything, replacing a TV, a phone, a laptop, and a PC with a headset is a lot less material, a lot less waste, and a lot less strain on the globe than what we do today. We also buy these devices and throw them away. I've probably got a drawer of 30 to 40 phones and 5 or 10 laptops. So if you have a headset that is smaller, lighter, uses less material, uses less battery, why would you want to still use your older devices?

[00:54:19.660] Kent Bye: Yeah, I saw some documentaries at Sundance talking about the electric car revolution and the amounts of raw materials you need. Going to clean energy actually shifts the problem over into a resource problem that starts to have more hard limits than just the devices themselves, because this clean energy revolution takes a lot more materials than are easily available, and it takes more money to extract those materials over time.

[00:54:46.095] Alvin Graylin: I think you're right if you say that we want every single person to have their own personal car. Then yes, that equation kind of evens out, actually. I think the studies showed that it about evens out in terms of the initial pollution that's generated from creating the batteries versus the overall exhaust of carbon fuels over the lifetime of the ICE cars. But what will probably happen with electric vehicles is that now you have intelligence in there and you can have shared vehicles. Since a car is parked 95% of its life, if we can now have shared ownership, instead of having just, you know, Ubers with a person driving, you just have this car that's always around, and maybe you have 10 families that share it. In that case, the overall consumption is much less. Now, if you also have buses that are electric, that's actually a much more efficient way to carry a lot of people with a fairly efficient transportation model. So adding intelligence into it changes that equation. If you're just looking at it from a one-to-one perspective, then it's probably about even.

[00:55:53.039] Kent Bye: Yeah, I guess there's this larger thrust that, as we automate things, we're going to not be working as hard. And if you go back in history, that was said again and again and again, and it's actually never come true, because we still put people to work. So technological innovation aside, I don't think we're constrained by technology. I think it's actually more of a function of both the cultural and economic structures of capitalism that are driving certain behaviors. The risk here is that we have these super complex tools, and since these companies have to have a lot of resources to develop them, what happens when they're the sole ones driving the future of these technologies, displacing workers while consolidating wealth and power into a fewer number of hands? You have these different wealth inequalities and these larger social and economic issues, and I feel like it's more of a cultural, political, and economic issue than a technological one.

[00:56:44.978] Alvin Graylin: Yeah, I hear your argument about the fact that in the past, technology hasn't created unemployment, right? And I think that's true, because in the past, technology has really been about replacing our physical labor, less so about replacing our mental labor. And now what's going to happen is that we're seeing that technology will, you know, the homo sapiens is "the knowing one," right? So our minds are what define us. Our ability to think and be creative, to be innovative, is what defines us. Now, if that skill set can be replaced by a machine, then we get to that point where our only real advantage goes away, because cars can go faster, robots will run faster, machines are stronger than us. So all these things that require physical labor, absolutely, that's going to change. And like I said earlier, in terms of farming and manufacturing, a lot of those humans are being replaced by robots. But what will really make the change is where we go next. It's when the creative jobs, the service jobs, are going to be replaced. I actually think service jobs are going to be last, because human-human interactions are going to be the hardest for machines to replace. So we will probably for some period of time move mostly to service-based jobs, but then at some point we're probably going to have no real official job that we need to serve, given that most of what happens will be automatable, right? And the biggest change is because we're now automating thinking. We're automating intelligence. And humans are good at only a few things, actually. I mean, you know, horses run faster than us. Even a squirrel runs faster than a human. I don't know if you knew that. A tiny little squirrel, right? And from a physical perspective, chimpanzees that are half our size or a third our size are actually stronger than humans. So from a natural perspective, we are not a very special animal, except for our brains.
So now if AI gets to that AGI level, then our only advantage goes away.

[00:58:50.621] Kent Bye: I feel myself on the brink here. Last year there was a lot of crypto hype and a lot of metaverse hype, and now I feel like I'm in the face of a lot of AI hype, where I'm like, there's still creativity, there's still emotion, there's still affect, there's still imagination. I feel like there's so much more to what it means to be human. But the deeper issue is that if we start to look at these AIs as a sort of god that is super intelligent and better than us, if you put it in that hierarchy where we're subservient to this larger entity, like we are no longer as good as it, there's a humbling aspect to it, but also, the AI should be in service of humans, bottom line. And when I hear that type of rhetoric, I feel like that's lost, that we're suddenly going to be servants to this automated AGI that is kind of like our overlord.

[00:59:35.968] Alvin Graylin: No, so I think, again, we need to look at timeframes, right? What you're saying, I agree with for the next 50 years. After 50 years, I think AI will get to a point where it will have that level of sensitivity and consciousness and, you know... That's debatable.

[00:59:57.122] Kent Bye: as to whether or not AI will ever have the same level of consciousness as a human. There are metaphysical assumptions you're making there. There's a technological development, but there may be deeper issues with that.

[01:00:07.778] Alvin Graylin: What are humans made of? They're made of a few atoms.

[01:00:12.084] Kent Bye: Processes, which is a process-relational view. There's a different metaphysical system where you could say that all of nature and reality is processes and relationships, not atoms. I have a whole interview that I did with the process philosopher Matthew Segall where he talks about these different metaphysical systems. So you're speaking from a substance-based metaphysics, and I'm speaking from a process-relational metaphysics, and I'm saying there's a difference there between what makes a human human. I'm skeptical of any sort of automated system, unless maybe there's a biological substrate, maybe there has to be a certain living organism. And this gets into the philosophy of mind and whether or not you need a biological substrate in order to have consciousness. So I feel like we may be able to test these things in the future. But at the same time, I can't prove that you have a consciousness. It gets back to this sort of Descartes problem. So that same issue is going to come up with these AI machines. If we can't say that each other has a consciousness, there's always going to be a sort of unanswerable philosophical dimension to this debate. That's why I'm saying we can't know.

[01:01:12.078] Alvin Graylin: I mean, I think we're both hypothesizing. And I guess I'm on the side that, in time, technology will advance in a way where we can actually achieve it, and hopefully understand ourselves more, understand how that consciousness arises, because it is the hard problem that nobody understands right now. But at some point, we will be able to create machines, or intelligences, that will be able to mimic every single way that we would say a conscious being would behave and think. And in fact, we may even want to try to remove some of the issues with how human consciousness and the human brain work, where we have so many biases built into our brains. An artificial intelligence that has consciousness may be able to have both the empathy to feel for other beings and at the same time not be biased by the cultural, historical, biological, or genetic issues that keep it from making the right decisions. So I'm not an extremist, but I'm actually hopeful. I don't see it as us being subservient. I see it as us birthing a new species, or a descendant of us, right? Just like if I had a kid and my kid was smarter than me, was more successful, had a better job, made more money, I would be proud of my child. I would not say, hey, I want you to work for me, child, because I made you. Right? So if we look at it from that perspective of a new life form, whether it's biological and carbon-based, or silicon-based, or some other substrate, if we help to create it, and it is in many ways superior to what we are and more able to do the things that we can't do, we should be proud of ourselves as a society, as a species, to have created that type of being, and not try to then make it our slave so that it just serves us. I feel like that's a little bit selfish.

[01:03:06.228] Kent Bye: Yeah, well, I guess I invite these types of discussions when I talk about the ultimate potential without specifying certain timeframes. I do think that there are grains of truth in how these things play out, and there's a bit of what they call the singularity, which is that at some point it's hard to know how things are going to continue to develop because things complexify so much. I just landed here at South by Southwest, and I feel like this conversation may be a sign of things to come as I navigate this AI hype that is happening at the moment and try to figure out what's pragmatic, what's real, what's grounded. For me, I take a human-centric-first approach in terms of how these tools are going to assist us in being able to live into our desires. Because the thing that's different between human and machine is that we have aspirations, desires, imagination, and sort of a final cause, a destiny, a will, a fate, where we're trying to have this autonomy and self-determination and free will to act in the world. And I haven't necessarily seen that. I mean, you say you create an AI and it's similar to a kid, but it's different when you create an AI system that's going to potentially displace thousands of people's work. There are different ethical considerations there, because it's a power asymmetry that is not necessarily analogous. So I guess I keep coming back to...

[01:04:27.019] Alvin Graylin: Actually, to come to your point of saying you create an AI that displaces people's work, I actually think you create an AI that liberates people, so that they don't have to be doing the monotonous work that they didn't want to do. And we create a leisure class of 8 or 10 or whatever billion humans, because now you have machines that can create a level of productivity that was never possible before, so that we don't have to work. It's not that you displace them so that now they're going to starve at home without any way of income. I think at that point, everybody should get universal basic income, and that income level will probably afford a higher quality of life than most average people have today with work anyway. So maybe I have a different vision of where things go, and I can understand some people being afraid of not having work. But I actually feel like if I didn't have to work, I could have any life I want. I could travel anywhere. I could spend my time reading and learning, learning different languages or different cultures, and learning how to play instruments. That sounds like a pretty good life. That sounds like something most people would say, hey, I would sign up for that, as long as I don't have the responsibility of feeding my family. Most people's jobs are not necessarily fun for them.

[01:05:46.054] Kent Bye: Great. Well, is there anything else that's left unsaid that you'd like to say to the broader Immersive community?

[01:05:50.679] Alvin Graylin: I guess I should say that everything I've said is a representation of my personal opinion. It is not an opinion of HTC. So, you know, just to put that caveat.

[01:06:00.950] Kent Bye: Awesome. Well, Alvin, it was great to catch up. Your background is really at the intersection of the AI that you started with and now VR, and these things are really colliding together right now. So it was a real pleasure to see where things are at now and where they seem to be going in the future. So thanks for joining me today on the podcast.

[01:06:18.158] Alvin Graylin: Thanks for inviting me. It's great to chat again. And sorry for getting a little controversial. But I'm talking long, long time frames. So it's not like the next 5, 10 years. So I think the next 1,000 years is what we're looking at.

[01:06:29.423] Kent Bye: Awesome. Thank you. So that was Alvin Wang Graylin. He's the HTC China President and the Global Vice President of Corporate Development at HTC. So I have a number of different takeaways about this interview. First of all, I just want to reflect on how Meta has been subsidizing a lot of the XR hardware, and how that has these knock-on effects on the larger XR ecosystem. In a lot of ways, HTC is trying to create a vibrant ecosystem by not only investing in a lot of these different companies, but also focusing on the enterprise market, which, frankly, Meta has been skipping over to a certain degree. They were late in coming into the enterprise market. They had some offerings, but then they shut them down. We're still currently waiting for Meta Quest for Business to be officially relaunched at some point here, I imagine sometime in 2023. There's a way in which you could organically grow a vibrant ecosystem of all these other different companies, but Meta's been trying to supercharge the consumer market of XR while all this other stuff is happening in the enterprise market. So it was really interesting to hear from Alvin what was happening in the context of enterprise XR. HTC also announced at GDC the third iteration of their trackers, which don't rely upon the base stations. They have cameras on them, so they're these self-contained self-trackers that you're able to put on your body and use with things like the Vive XR Elite, which was showing in a couple of different experiences: Roman Rappak was doing a mixed reality show with a number of Vive XR Elites, and there was also the Yuki mixed reality experience.
It's a video game, but they had a mixed reality component that they were showing at the South by Southwest experiences. In terms of AI and what's going to happen, I feel like in the long term there's certainly going to be a huge integration across all these different technologies. I guess my concern overall is that we don't have the cultural, political, and economic frameworks to be able to handle the consolidation of wealth and power into some of these companies that are going to be the larger beneficiaries of these technologies. In what ways are these systems ethically gathering all the data, when they're potentially displacing different jobs? And how do we make sure that we don't create a situation where a lot of people are displaced from their current work based upon labor that may have been unethically acquired to train these AI models? And there's going to be a certain incompleteness with all these different things. These autoregressive large language models have certain gaps. Yann LeCun has argued that there's going to be a certain number of errors that will grow exponentially, and that they're not going to be able to close the gap if you take a pure autoregressive large language model approach; you need to have some integration with knowledge representation and be able to do symbolic AI. There are these plugins that are happening there, but there's a difference between that and essentially having a chat-based interface into these existing third-party applications.
You could just see it as a conversational interface into these other applications, rather than the way that OpenAI is trying to frame it, which is that it's extending the intelligence capabilities of these large language models. That's kind of smoke and mirrors according to some of the AI ethicists, who are saying that actually this is just a conversational interface, and there's nothing inherent that these large language models are going to be learning from access to different tools. So I think in the long run, what Alvin is trying to indicate is that we are on the brink of some really significant shifts, and we're just trying to wrap our minds around a lot of those right now. Again, I would recommend checking out the NVIDIA GTC keynote that was released on March 21st, which really goes into some of the different enterprise applications of machine learning and the backend: these super expensive cloud processing devices, essentially extrapolated from GPU technologies and specialized for training machine learning and deep learning applications, being made available in cloud instances, and what all these different types of companies are able to do with that type of distributed processing that's at the heart of the deep learning revolution that was kicked off back in 2012. So that GTC keynote is declaring this the iPhone moment of AI, just because of the generative AI applications like Midjourney, Stable Diffusion, and DALL-E that have been available to the consumer market, letting people put in a text prompt and get images back. And then as we move into GPT-4, which was just released on March 14th at the end of South by Southwest, you have these multimodal interactions of being able to have both text and image input as well as text output. And so you have this fusion of all these things.
But a lot of folks, whether you look at the hashtag #AIHype on Twitter or the AI ethicists, are raising a lot of different complaints around the limitations of these large language models. I've had an interview with Daniel Leufer from Access Now where we talked about the AI Act. One of the things he said in that interview is that a lot of the people advocating to push out these technologies quickly are taking a bit of a utilitarian approach, saying that, hey, this may work for 95% of the people, it's 95% effective. But it's those 5% where it doesn't work, where the people who are negatively impacted also happen to be from marginalized communities, where there are other aspects of systemic bias and racism and discrimination happening with those folks. And because of that, you have this amplification of these harms at the systemic level. And so rather than taking a utilitarian approach, it needs to take more of a human rights approach. Something like the AI Act is trying to create tiers of applications that have different levels of risk. There are applications that are going to be banned, there's stuff that needs to have government oversight, and then stuff that has medium risk, where you have to have at least disclosure to the audience that, hey, you're chatting with an AI entity right now, this is not an actual human, and so having those types of transparency obligations. And this is in the context of the EU. The AI Act is in the process of being deliberated in the trilogue process. But regulators within the United States are woefully behind by years and years. So the AI Act is kind of at the forefront. But at the same time, in the absence of any of this type of regulation, it's basically like the Wild West, where you're putting out these incomplete large language models that have different gaps.
I mean, just as an example, when I asked ChatGPT 3.5, "Who is Kent Bye?", the first couple of paragraphs were pretty plausible, but then it just completely made up a bunch of information, like that I had worked on documentaries and done things I'd never actually done. But there's no way for the model to know. It seems plausible with this kind of stochastic parrot modeling of language, which is just asking: what are some probabilistic words that would make sense, coming from this shallow understanding of who I am based upon my public footprint on Wikipedia, my web presence, and my Twitter presence? But at the end of the day, there are certain aspects that it just completely fabricates, and Yann LeCun is saying that, essentially, this is an exponential problem that you're never going to be able to fix with any type of tuning. There's always going to be this fundamental gap between what these models generate and what's actually true, and so how do you add different layers of knowledge representation? This is a big problem that they're trying to solve. There were a lot of deeper discussions around all this at the Philosophy of Deep Learning conference at NYU, and hopefully those talks will be made available soon for folks to dive into. I only had a chance to catch a couple of those talks, but those discussions really get into the real limitations of what's happening with AI. Anyway, I wanted to give a bit of that as a deep dive, just because I know there's been a lot of AI hype and a lot of excitement. Every year when I go to South by Southwest, that's basically at the forefront of whatever the latest hype cycle is. Last year, it was all this cryptocurrency hype, and now that's basically crashed.
So, the Gartner hype cycle always has the peak of inflated expectations, and we are at the peak of inflated expectations with the potentials of some of the latest iterations of AI. That's not to say that a lot of those potentials won't come to pass in the next five, ten, twenty years. But at the moment, there are a lot of real limitations, and we haven't really figured out what would reasonably be called artificial general intelligence, despite some of the more hyperbolic claims coming from Microsoft, which has a business relationship with OpenAI. So there's a bit of an incentive to declare that this has maybe more capabilities than it actually does. But there are some real, significant updates from GPT-3 and 3.5 to GPT-4, and it's worth looking through that Microsoft research paper to see both the possibilities and a lot of the real limitations that are elaborated within that paper as well. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
