#1353: “Our Next Reality” Book Debates Future of XR + AI, and Speculations of Superintelligence Promises & Perils

The book Our Next Reality: How the AI-powered Metaverse Will Reshape the World is structured as a debate between Alvin Wang Graylin and Louis Rosenberg, who each have over 30 years of experience in XR and AI. Graylin embodies the eternal optimist, leaning towards techno-utopian views, while Rosenberg voices the more skeptical perspective, leaning towards cautious optimism while acknowledging the privacy hazards, the control and alignment risks, and the ethical and moral dilemmas. The book is strongest when it speaks to the near-term implications of how AI will impact XR in specific contexts, but it starts to go off the rails for me when they explore the more distant-future implications of Artificial Superintelligence at the economic and political scales of society. At the same time, both sides acknowledge the positive and negative potential futures, and that neither path is guaranteed, since it will be up to the tech companies, governments, and broader society which path we go down.

What I really appreciated about the book is that both Graylin and Rosenberg reference many personal examples and anecdotes around the intersection of XR and AI, drawn from their three decades each of working with emerging technologies. Even though the book is structured as a debate, they both agree on some fundamental premises: that the Metaverse (or rather spatial computing, XR, or mixed reality) is inevitable, and that AI has been and will continue to be a critical catalyst for its growth and evolution.

They both also wholeheartedly agree that it is only a matter of time before we achieve either Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but they differ on the implications of these technologies. Graylin believes that ASI has the potential to lead humanity into a post-labor, post-scarcity, techno-utopian future in which all of humanity has willingly ceded cultural, political, and economic control to our ASI overlords, who become perfectly rational philosopher kings yet still regard humans as their ancestors through an uncharacteristically anthropomorphized emotional connection of compassionate affinity. Rosenberg dismisses as wishful thinking both the idea that humans would be able to exert any control over ASI and the idea that ASI would be anything other than cold-hearted, calculating, ruthless, and unpredictably alien. Rosenberg also cautions that humanity could be headed towards cultural stagnation if the production of all art, media, music, and creative endeavors is ceded to ASI, and that an unaligned and self-directed ASI could be more dangerous than nuclear weapons. Graylin acknowledges the duality of possible futures within the context of this interview, but tends to be biased towards the more optimistic future within the actual book.

There is also a specific undercurrent of ideas and philosophies about AI that is woven throughout Graylin’s and Rosenberg’s book. Philosopher and historian Dr. Émile P. Torres, in collaboration with AI ethicist Dr. Timnit Gebru, coined the acronym “TESCREAL,” which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres wrote an article in Truthdig elaborating on how this interconnected bundle of TESCREAL ideologies underpins many of the debates about AGI and ASI (with links included in the original quote):

At the heart of TESCREALism is a “techno-utopian” vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.

I do not believe that Graylin or Rosenberg subscribes to each and every one of these underlying philosophies, but there are definitely plenty of traces of ideas originating from this TESCREAL bundle. They both believe in the inevitability of Superintelligence via a singularity line of thinking, and Graylin has been highly influenced by a techno-utopian vision of radical abundance, a strong affinity towards rationalism, and some hints of longtermism, given his commitment to framing debates about AI in a distant future where ASI has proven to be a magic bullet for humanity’s most intractable problems.

I generally do not have a problem with imagining speculative futures across a broad spectrum of different potentialities, but there are a few problems for me with how Graylin collapses these future worlds into the present. There are technology architecture decisions for AI that may make sense in a future reality where the political and economic context includes a ubiquitous universal basic income that creates a true post-labor society. But what often happens is that those same decisions, which aspire to a techno-utopian future, are backported to our current political, economic, and legal reality where not everyone benefits equally. So rather than moving towards a ubiquitous techno-utopia for everyone, these aspirational tech architecture decisions for AI end up creating even more of a fractured reality with clear winners and losers in the short term. There’s a sort of longtermism-inspired utilitarian moral reasoning that privileges the lives of potential future humans over the lives of humans today, all in service of more speculative techno-utopian ideals for how technology could transform society, while devaluing the present-day cultural, political, and economic implications of these actions.

Graylin also has an underlying technological-solutionism framing that tells the story of how ASI will be able to magically solve all of our economic, political, and cultural problems. Graylin elaborates more on these ideas in Chapter 10 or 11, but this type of techno-utopian future framing helps to contextualize many of the arguments that he makes throughout the entirety of the book.

Rosenberg consistently pushes back against Graylin throughout the book, providing a much more pragmatic and overall more cautious take on the matters of ASI. Near the end of the book, they reflect on all of the XR and AI confluence by asking, “Will this make the world a better place? This is the core question this book has aimed to address by debating both sides of the issue.” There was certainly a dialectical pro-and-con tone throughout the book, but I also feel that there are so many other possible futures that were not fully articulated. Perhaps ASI or AGI will never happen. Or maybe we are at the peak of the hype cycle around LLMs, and we’ll enter into the trough of disillusionment that proves Yann LeCun’s skepticism correct that auto-regressive large language models are doomed to failure. At the very least, there are broader debates about future research directions coming out of the Philosophy of Deep Learning conference. There are also philosophers and AI ethicists like Torres and Gebru who have deeper critiques of the lineage and flaws of the TESCREAL bundle of ideologies that underpins the deeper context of these conversations.

The deeper issue for me is that this type of binary framing of pro vs. con or protopia vs. dystopia collapses not only the multiplicity of potential futures, but more importantly the plurality of different people across a broad spectrum of society. A lot of Rosenberg’s research is about tapping into the collective intelligence or swarm intelligence of humans, such that we may preserve the humanity in these future technologies, with humans still finding a way to remain within the loop of both sensemaking and decision making. This variation of Artificial Superintelligence taps into the collective wisdom of the crowds, or at least tries to preserve some of the benefits of human compassion, empathy, and emotion that we risk eliminating entirely in a cold, heartless, and ruthless future of ASI.

Overall, there are a lot of provocative ideas woven throughout the course of this book, and the challenge for me was knowing how to navigate the complex web of data and claims being made. I’m reminded of Agnes Callard’s unpacking of the Socratic Method, where she says that knowledge comes from running two algorithms: believing truths and avoiding falsehoods. But the trick is that one person can’t run both algorithms at the same time, and so we are left with a community-driven process, like scientific peer review. So it may be beyond the scope of what two individuals can accomplish to predict the trajectory of XR + AI and where the potential implications of AGI and ASI may lead us. But at least the asking of the questions has helped to lay out some of these possible futures. Whether they’re likely, realistic, or desirable is another question, but hopefully this will catalyze other discussions that expand into other voices and perspectives, which can help broaden the range of potential futures for how XR and AI build off each other, potentially leading to something like Artificial Superintelligence… or not. Be sure to tune into the interview for more debates, and also be sure to check out the Our Next Reality book, which launches on Kindle on March 5th, Audible on March 6th, and Hardback on June 4th.

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash Voices of VR. So I had a chance to take an early look at a book called Our Next Reality: How the AI-Powered Metaverse Will Reshape the World. It's co-authored by Alvin Wang Graylin, as well as Louis Rosenberg, who have each spent over 30 years in the XR and AI industries. So this book is really looking at the intersection between, whether you want to call it spatial computing, mixed reality, the metaverse, or XR technologies, and how those are interfacing with artificial intelligence, not only for enabling what XR is able to do today, but also looking at the future of content creation and how it's interacting with privacy and virtual beings and education and medicine and so many different other levels of where this could go in the future. The great thing about this book is that both Alvin and Louis have spent a long time in the XR industry, and so you get a lot of personal anecdotes for how these technologies have been implemented, and they're able to extrapolate from their own direct experiences for the combination of these two technologies. There are a lot more speculative ideas in this book in terms of artificial general intelligence and artificial superintelligence. The book is actually structured as a debate, where Alvin tends to take a little bit more of the optimistic side and Louis takes a little bit more of the skeptical side. Alvin slips into a little bit more of the techno-utopianism, projecting out into the future and really imagining these post-labor, post-scarcity, abundant societies that are enabled by this artificial superintelligence that is taking control of all of our economy and our culture and our government.
On Louis's side, he's not so skeptical as to be an AI doomer, but he's bringing up a lot of the more cautious or skeptical or even just risk-based perspectives around privacy, the AI manipulation problem, and also just how reasonable is it to have an artificial superintelligence? And are you going to be able to actually control it if it's way more intelligent than all of humanity? In the course of this conversation, there was already so much of a debate going between Louis and Alvin that I took a step back and allowed their different perspectives to play out a little bit, but I'll have a little bit more to say here at the end. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Alvin and Louis happened on Monday, February 26th, 2024. So with that, let's go ahead and dive right in.

[00:02:32.158] Alvin Wang Graylin: I'm Alvin Graylin. I'm the global vice president for corporate development at HTC and have been working in the XR space for 20, 30 years, XR and AI space. And recently me and Louis have put together this new book that we're going to be talking about today. So I'm looking forward to our chat. We've had a couple of conversations before, so this is a nice continuation.

[00:02:54.390] Louis Rosenberg: So I'm Louis Rosenberg, the CEO and Chief Scientist of Unanimous AI, which is an artificial intelligence company that connects groups of people together and amplifies their collective intelligence. And it's relevant to all kinds of environments, but especially shared environments, including XR environments. But I've been involved in the XR space, really my whole career, going back 30 years as a researcher at NASA and Stanford and the U.S. Air Force, and then I founded one of the early virtual reality companies back in 1993, a company called Immersion, which we took public in 1999. I stayed there for well over a decade and then founded an augmented reality company and then founded my current artificial intelligence company. VR, AR, and AI are the three technologies I care about the most.

[00:03:48.087] Kent Bye: Yeah, so I had a chance to read through our next reality. And one of the things that was really striking to me was how much each of your personal journeys into the intersection between XR and AI is spread out throughout the course of this book from your firsthand direct experience, covering such a broad range of all the potentialities of both XR and AI. So if you could each maybe give a bit more context as to your background and your journey into this intersection between XR and AI.

[00:04:14.415] Alvin Wang Graylin: So, you know, as I mentioned earlier, I've been involved, so I've had both large corporate and small startup experiences in this space. I studied neural networks and natural language processing back in undergrad and did research projects in that area. And then I also worked with Tom Furness in the Human Interface Technology Lab in the early 90s. So got my start there. And then I also worked at MIT in more the symbolic AI space. So I've kind of had both realms of AI. I've had four different startups and three of them were AI related and one of them was XR related. And in the last eight years, I've been working with HTC, who's one of the global leaders in this space. In between, I also worked with two cybersecurity companies, which also appear a little bit in the book, in one of the chapters.

[00:05:01.459] Louis Rosenberg: Yeah, so my journey really follows the path of human-computer interaction on just using technology to amplify human performance. So it's really been that quest that has led me to VR, AR, and AI in different capacities. But I started as a graduate student at Stanford, interested in how to use technology to amplify human performance. And this was right around the time that virtual reality was first getting started. And so I started really thinking about using virtual reality and virtual environments to amplify and expand human abilities, which led me to working at NASA Ames in their early virtual reality labs. which was really, the technology was super crude back 30 plus years ago, but I was still convinced and sure from that point on that the way we humans should interact with information is spatially and content should not be restricted to a flat screen. And so after doing research at NASA, I was lucky enough to be able to go to Air Force Research Laboratory where I was very, very interested in saying, hey, can we take these immersive experiences and not have to cut you off from your normal world. And that was the one really uncomfortable thing that I had really in the early days of VR. And so they funded me to build something called the Virtual Fixtures Platform, which was really a mixed reality system back in 92, 93, where people could reach out and interact with real and virtual objects at the same time. And what I found was that people would keep doing it for a long time. Whereas the experiences I had at NASA when I would work with people, they would like it as a really quick demo, but cutting them off from the real world, it was just really obvious to me, was a barrier. And so my interest in my path has been towards these technologies that can just make our experiences natural. And so in 93, I was convinced VR and AR would be the future. So I founded Immersion Corporation, and I thought it would happen a lot faster. 
I didn't think it would be 30 years. I legitimately thought it would be 10 years. In 93, I was pretty sure by 2000, 2003, virtual and augmented reality would be everywhere, and I was off by a factor of three.

[00:07:19.210] Alvin Wang Graylin: But it's happening now. I wrote my conclusion for my paper in 93 as well. So we're both off by two decades, or maybe a little bit more than two.

[00:07:28.574] Louis Rosenberg: Yeah. And now, as I look at technologies that can really amplify human abilities, I still firmly believe in spatial technologies, but AI is obviously a huge part of that. And AI plus spatial technologies are going to transform human abilities, basically give us superpowers. That's not that far away anymore.

[00:07:48.940] Kent Bye: Yeah, well, the format of this book I thought was particularly interesting, just because you take a little bit of a dialectical approach where Alvin often will step into the eternal optimist mode, and then Louis, you'll come in and give a little bit more of the skeptical or grounded or pragmatic take. Not so far as the AI doomer take, where AI is going to kill us all. It's not quite that far, but it's at least a little bit more grounded in saying, okay, Alvin says this, but I think it's not really going to turn out that way. I think it's going to be more like this. Maybe you can talk about the nature of this collaboration and this dialectical approach that you decided to take on your book, Our Next Reality.

[00:08:25.562] Louis Rosenberg: Yeah, well, we structured the book as a debate, partly to make it more interesting, but also to make sure it was well balanced. You know, usually you see books in this space and they're either cheerleaders of the space or they're doomers of the space. And I think both Alvin and I really, we see both sides of it and we both have very kind of practical perspective. You know, if you read the book, you'll come out and you'll think that Alvin is entirely optimistic and I'm entirely pessimistic. And neither of those things are true, although I think I lean more on the, not on the pessimistic side, but on the risk side. I really do believe that these technologies are amazingly powerful, and it's because they're so powerful that I can see how they can be abused. And that, to me, is the piece that we really try hard to raise. I don't know, Alvin, how would you think of that?

[00:09:13.371] Alvin Wang Graylin: Yeah, I think the problem right now is if you go on, whether in the press or in most books, you get very one-sided pictures. And most people won't bother to read multiple books or multiple articles in the area. So they're getting misinformation a little bit. So we really wanted to make sure that people can make their own judgments based on having a complete picture. The other thing is by just scaring people, you get their attention, but you don't really give them hope. And we wanted to make sure that the optimist side gives them the hope and the pessimist side shows them what's the danger if they don't do something. So it gets people to take action, right? Because I think we're really at a point where in the next five to 10 years, if we make the right decisions as a society or as a species, we could have a truly abundant future, you know, a semi-utopian type of society. But if we don't, we could go down a very dark path for a while until things come back in their natural course. So we wanted to kind of shorten that potential down path.

[00:10:14.625] Kent Bye: Yeah, well, I think the other thing that was really striking to me was just to see how much of each of the topics you were talking about, there was some anecdote that you're drawing from, either with your time, Alvin, at HTC doing specific research and all the different things that XR was facing, or even some of the AI work, and with Louis, your mixed reality and also your AI work, and also quite a lot of the science fiction stories that you have, which I think are also a very interesting way of looking into the future and working out both risks and potentialities at the same time. And I think just generally, when I was reading this book, I feel like as you're going into each of these contextual domains, you're covering those pretty comprehensively. And then as the book goes on, it seems to get more into the future, more speculative, bigger, larger scale. And for me, it's trying to be like, I don't know about this or that. And so as we go through this conversation, we'll get into some of those broader contexts. But right off the bat, we're talking about stuff like AI, artificial general intelligence, artificial superintelligence, which at this point are still largely speculative. But Alvin, I know you kind of lay out a number of your arguments to just kind of set the larger context. And Louis, you talk a lot about the AI manipulation problem to start off, but also in Chapter 10, you start to really dig into deconstructing some of the ideas around AI as just being more of a statistical machine rather than the superhuman sentient intelligence that has consciousness. So for me, I know, Alvin, we've had previous conversations at South by Southwest last year, where we really dug into some of the different sentience and consciousness questions and a lot of philosophical topics that may be beyond the scope of this conversation to resolve.
But I'd love to hear some of your context setting when you start to think about some of the potentialities of AI, but also some of the things to consider when it comes to alignment and making sure that we don't go down a dark path.

[00:12:02.873] Alvin Wang Graylin: Yeah. I mean, in fact, I think alignment is something that there's a lot of confusion on. In fact, probably a lot of people are aiming for alignment. And I feel like if we really try to align to current human values, we're actually probably making a mistake. You know, it's just like 100 years ago or 200 years ago, you know, slavery and selling children and so forth were very acceptable in our ethics. Right. And, you know, same with animal rights and other things. Just 50 years ago, nobody cared about it. So I think we should leave that a little bit open so that we're not saying we're aligning to today's values, because the more we know, the more our values will change. Now, I know we do have questions in terms of will these future intelligences, whether you call them life forms or programs or whatever, will they be sentient? Will they have consciousness? I don't know. I don't think anybody does. Right. But at least some of the current research is starting to show that they do have some world model. They do have a theory of mind. And that's with today's technology. There are a lot of emergent capabilities that we don't know where they come from. Just like we don't know why our brains with, you know, 80 something billion neurons can do what they do. But the architecture of AI today is modeled after how the brain works. We also see by dissecting the behavior and activities inside neural networks that it also has areas of knowledge and capability, just like the human brain has parts of the brain that have certain functionalities, but all the neurons pretty much look the same. It feels like there's some analogous capabilities between them. I think that we will get to a point where fundamentally this technology will eclipse us in terms of at least the intellectual side of it.
I mean, in some ways it already has in a lot of areas, but as it gets more and more knowledge and capabilities, there's a high probability it'll be able to eclipse us in almost all intellectual areas. And if it gets embodied into some kind of robotic devices, it could probably also replace us in terms of physical labor as well. So our position in relation to these intelligent machines, how is that going to pan out? I think there's definitely a lot of room for discussion.

[00:14:24.615] Louis Rosenberg: So to jump into this same issue, I really look at it as there being two levels of risks when it comes from AI. We can think of it as the near-term risks or the long-term risks, or we can think of it in terms of the non-sentient AI risks and the sentient AI risks. And so looking into the future and the sentient AI risks, it's obviously speculative. Will we create an artificial intelligence that is self-aware and has its own intentions in the way that we think about intentions. I personally believe we will. I think that it's probably a little further out than it looks when you see today's technologies and they're so easy to misinterpret as sentient. So people see it as closer than it really is. But I also don't think it's very far off. I think that we are just a decade or two away from that. And I do think that that's extremely dangerous. And so I am of the camp that a sentient AI, to believe that we can align a sentient AI or control a sentient AI is wishful thinking, in my view. Sometimes it's underestimating really what it would mean to be interacting with an entity that is smarter than us and has a will of its own and has interests of its own. And if we humans use ourselves as any example, we know that you really wouldn't want to be a less intelligent creature on planet Earth with respect to humans, because we are not easy to deal with. And a sentient AI would, I think, be equally difficult for us. That said, My bigger concerns are the near-term risks when it comes to today's AI, because long before we get to this risk of a sentient AI, we have this problem that I usually refer to as the AI manipulation problem, which is that we humans will be very quickly outmatched by today's artificial intelligence. They don't have to be sentient for them to be able to manipulate us at superhuman skill levels. The sentience will come from whoever controls them, right? 
So AI systems in the hands of whether it's large corporations or bad actors or state actors, we are very close to the point where AI systems can be deployed that can influence us at levels that we've never seen before because these systems will be fully interactive, they will look human, they will speak to us in very human ways. We evolved to think that we can read their emotion. When we interact with something that looks human, with a face that smiles and a voice that inflects, we think we can read its emotions, we think we can read its intentions. That's what we evolved to do, and that's how we deal with other humans. All of those abilities will be turned against us in the near future, because we will be interacting with artificial agents that look and sound and act human, and we think we can read them, we think we can trust them, and if they're controlled by a bad actor, or they're controlled to have a manipulative intent, we will be manipulated. And so that, to me, is the really big threat, because it's very, very close.

[00:17:38.191] Alvin Wang Graylin: Well, I do want to add on to that, because I think the manipulation problem is definitely one of the major and fairly quickly approaching problems. The other one that will happen very quickly is actually the job displacement issue, which may not be a good or bad thing, depending on how we actually react to it as a society, right? Because there will be a large percentage, you know, on the order of 10, 20, 30, 40% or maybe more of the population that will be displaced. Now, if these people are displaced but they're well taken care of and they have a renewed purpose in something else, it'll be great, because we're going to have a lot of minds thinking about new breakthroughs or new arts or whatever. But if these people are not taken care of and they're on the streets and they're trying to survive, we're going to have chaos, right? So I think that's something that governments need to work on across the world, across all nations, to think about how to solve this or get ahead of it before it becomes a problem. There is also the additional risk of misuse. So manipulation is really more about how it affects our mind. But the other thing is having a superintelligent capability will allow you to solve problems and create chemical weapons or new viruses or new cyber attacks that weren't possible before and would be very difficult to defend against. If that technology gets into the wrong hands, it can also create significant chaos in the world, right? I don't think it will be to the point of being existential to all humans, but it will definitely disturb our social peace and harmony. So there are definitely a lot of potential downsides, but I think, at least I feel, that the XR side can play a very important role to help with some of these issues, right?
Both in terms of what Louis is worried about, in terms of, you know, can we properly align a more intelligent species or a more intelligent self-guided being: if we somehow use the metaverse to create more sandboxes for these devices to play out simulations, we can help define the characteristics of it that would tend to show a positive outcome for us near term. I'm actually less worried about a super intelligent future AI, because I feel like the more it knows, the more understanding it will be, the more compassionate and the more long-term thinking it will be. But it is that midterm, short, kind of early AGI that is not fully self-aware, doesn't have clear ethics and judgments, which would be potentially misguided by bad actors. That's the one that we should be most worried about. It's just like, if you look at all the bad things that happen in society, it's usually uneducated young people that get manipulated by some bad leader and then do things because they're easily manipulated. I think AI will be going through that phase at some point.

[00:20:31.252] Kent Bye: Well, there's a number of associated economic issues that you get into later that I want to put to the side for now, because there's lots of big ideas that you have, Alvin and Louis, that I want to sort of unpack near the end. But I wanted to slip into the metaverse aspect of it, because each of you spent a lot of time working in the XR space, and you're both arguing that we are going to have some sort of metaverse. Alvin, you draw upon open standards and really getting out of the existing app ecosystem that's controlled by just a handful of companies. And Louis, you're really advocating for either mixed reality or augmented reality being a key driver for the development of some of these metaverse technologies, and what you've termed the unified perceptual reality as sort of a baseline for mixing and mashing the digital world with our physical world. And so I'd love it if you could each sort of set the baseline context, since the subtitle of this book is talking about an AI-powered metaverse as we move forward into this next phase. So I'd love to hear you talk about how you conceive of the metaverse and why you wanted to include it so prominently throughout the course of each of the chapters of this book, as well as in the subtitle.

[00:21:41.170] Alvin Wang Graylin: I know the term metaverse has had a lot of negative connotations the last year or two, because of all the things that have happened and all the activities of certain companies. But the concept it represents, I think both Louis and I agree, is that at some point in the future, there will be an interconnected network of virtual worlds, right? And that's really what we see as the metaverse, not necessarily one company's incarnation of that term. And given that we have evolved over the last five, six million years to optimally operate in a 3D environment, it's actually natural that we move from the 2D screens that we've been using for the last few decades into a more spatial type of computing environment and operating environment. So I don't think that the metaverse needs such a mysterious and very long definition. It's really just more of the 3D internet that we've been creating, and using more natural interfaces, which, it's good to see what Apple's doing with eyes and hands as their primary interface, where they remove the controllers. And I know there's a lot of debate over it, and there's definitely limitations to that, particularly at this point. And having used the device, I do miss the controller a lot of times. But I think we will get to a point where we will operate in these virtual worlds as we do today in the physical world. And a lot of this is enabled by AI. So this is why we said the AI-powered metaverse, because without AI, almost none of these things would happen, whether you're talking about the scanning of the environments, or voice recognition, or eye tracking, and hand tracking. All these things require AI, as well as all the world creation that will be coming.
The fact that we are now having Gen AI not only be able to create pictures and poems, but also videos and soon 3D environments, it really changes the equation in terms of content creation pipeline and the amount of content that will be available for these devices. I think you probably would agree that one of the main reasons why XR hasn't taken off is the lack of content. And with AI creation being an order of magnitude or more cheaper and faster, we will have a lot of content. We will have essentially 8 billion people that can create content for this future interconnected virtual network. So I think that's kind of the high level of where I'm coming from. There's definitely a lot more details. And Louis can give his take as well.

[00:24:05.668] Louis Rosenberg: Yeah, so I agree with Alvin that the word metaverse has its ups and downs in terms of level of excitement around that word. To me, the word metaverse is a nice shorthand. So is mixed reality, so is spatial computing. To me, all these words are really pointing in the same direction, which is a future where we stop thinking about where the digital world ends and where the real world starts. And to me, that's really what the metaverse will ultimately be. It will be a time when digital information just exists as a normal part of our spatial surroundings. It's not trapped on a screen that we'd sit in front of. It's not trapped on a screen we stare down at. It's not a little window in an eyepiece. We just think of digital content around us. And I believe that that merging of the physical and digital is inevitable, because that's how our brains were meant to receive information. And our digital world is really entirely about information. It's about how do you present information to us in the most natural and intuitive and interactive ways, and the most natural way is just place it into our world. And so that's the direction that we're headed, regardless of whether we call it spatial computing, metaverse, mixed reality, XR. You can call it whatever you want, but that, I feel, is inevitable. And the way for it to become inevitable is for it to be driven substantially by AI. AI is required at almost every level, whether it's spatially registering the real and the virtual so that you get suspension of disbelief among the content, or whether it's just creating the content at scale. And so when we talk about this AI-powered metaverse, to me it will first and foremost primarily be the real world embellished and enhanced with digital content that is so realistically integrated, we stop thinking about what are real components and what are virtual components. 
Now, there also will be entirely simulated worlds that are part of this future. I personally think those will mostly be used for entertainment and mostly used as an escape, similar to how we go to movies today. We're happy to lose ourselves in a movie for a couple hours or even lose ourselves in a series and binge for a block of time, but there's something different between losing ourselves in another reality and the base reality that we interact with. The base reality, the base metaverse, will be a mixed reality, augmented reality world, and we will be able to step into experiences and step out of experiences. Those will be amazing and magical, but I really think the thing that becomes most inevitable is the thing that is the most natural, the most intuitive, and that will just be to stop thinking about where our digital life lives and where our physical life lives. I think it's not that far away. Within a decade or 15 years from now, we will look back at the time when people looked down at screens in their hand walking down the street, and we'll think that that's just ridiculous. People walking down the street with a little screen, bumping into things, and we'll think there was a time when information just wasn't where it should be. It used to have to be in your hand. And we'll think, why was anyone skeptical that the metaverse was going to happen? Of course it's going to happen. The place we want information is just where it should be, not in your hand or stuck on a screen. So it's just a question of how quickly will it happen. To me, I have no doubt that it will happen.

[00:27:41.938] Alvin Wang Graylin: You see, Louis is actually more optimistic than I am on this one.

[00:27:47.592] Kent Bye: Yeah, like I said, I think both of you are more on the optimistic side of the metaverse and XR, as am I here on the Voices of VR podcast. And so in chapters three and four, there's a couple of threads here that I want to weave together. One is the centralized versus decentralized. Louis, you talk a lot about how, look, if you look at the existing ecosystem, there's a lot of really big corporate players who don't seem to be interested in changing any of their centralized practices. And then the blockchain is mentioned a little bit. I don't personally put a lot of weight into that, although it does have some potential down the road at some point, but I feel like there's a value system around blockchain that, for me, is just not a promising short-term solution to overcoming some of these centralization issues. But one of the things that you mention later in the book is that you spent about half your life in China and half your life in the United States, and you make some really interesting points in chapter four, talking about China's closed model, how they were able to really shut themselves off from the world and really build up their own ecosystems. And you're seeing that in some ways as a potential inspiration: if you have some sort of sandboxed ability to have a context where some of these open standards can really proliferate and flourish, then you can start to leverage Metcalfe's law and have a lot of people engage. But Louis, maybe I'll start with you, and then Alvin, you can go into what inspirations you're taking from China.

[00:29:09.948] Louis Rosenberg: Yeah, so in the book we debate this issue of whether the future will be centralized or decentralized. And it's interesting because we both start from the same premise, which is it would be really good if it was decentralized. We fully agree on that. I am more pessimistic about a decentralized metaverse from the perspective that the companies from which these technologies are emerging have a very strong vested interest in centralized closed platforms. That is their current business model. There's nothing that's driving them to change those models. Those models are very successful. And there's lots of really good business reasons why large corporations want to have centralized platforms. And that's one of the reasons why I actually do believe that this augmented metaverse or mixed reality metaverse is going to ultimately be the first decentralized single world. For the very simple reason that there is one physical reality, and if we're augmenting that one physical reality, it's inherently an open platform, an open standard. And so if you look at the telecommunications industry, which already deals with interacting with the real natural world in physical spaces, it's inherently more open in a lot of ways than, say, social media platforms or virtual world platforms, which can so easily be closed because they don't have to share the same physical space. And so if we think of this augmented metaverse as evolving out of the mobile phone industry, and we imagine an augmented metaverse in which, wearing a headset or eyewear from one vendor, you see a different world in terms of the content around you than if you wear eyewear from a different vendor, that would really be inherently absurd. It would be as absurd as if you couldn't make a phone call from an Android device to an Apple device.
So the fact that we have this shared world will give the industry, at least when it comes to an augmented metaverse, a fundamental reason why it has to be open and it can't be walled gardens in the same way that so many of today's platforms are. I see this open metaverse starting from augmented mixed reality worlds, and then those business models hopefully will gain traction and will turn some of these future fully simulated worlds to be more open, but they also might stay walled gardens for a very long time. But Alvin will potentially look at it in reverse.

[00:31:44.320] Alvin Wang Graylin: Yeah, I actually don't know if we are that far apart, because I also think we start with closed because that's where we are, right? I mean, we will go through multiple phases. And just like you look at the internet, we started closed, where there was Prodigy, CompuServe, and AOL. They were completely closed off from each other. They had very limited functions. There were no apps on there that they didn't make. It was all first party, right? And even their emails only worked with their own emails. But over time, they all essentially integrated with the rest of the web, and these companies no longer exist or are part of other companies. I think we're going to see a similar type of progress. If you look at the internet that we have today, it actually is a lot more open than people realize. I can use any phone, any tablet, any computer and go on the internet and see any website. Pretty much, right? I mean, there's a few places in the world where you might be blocked from a few sites, but in general, most sites are available and any device can get on there. I can even take an XR or VR device and go on those same sites. So I think from a pure network accessibility perspective, it's actually quite open. From an email perspective, you have a common email protocol that can send an email to anybody anywhere in the world using any client, and I can pick up mail from multiple clients. As you mentioned for telecom, with phone numbers, I can use a phone to call any phone anywhere in the world, and it works. That wasn't the case at one point. It used to be that you could only call your own network, and you couldn't migrate between carriers. But network and market forces helped us create larger networks, and because of the value of that larger communication network, people tend to go towards the large network, because it creates more value for the participants and also for the entire ecosystem.
So Kent was talking about the China aspect, and I think China will actually initially form their own closed network, but not on a company level, which is what we'll probably see in Western markets where you have a Facebook and a Microsoft and whatever, but it will probably be on a national level. So you may have a hundred companies all making content or platforms for this national metaverse, which will have over a billion people in it. And you get to see what it feels like to have a fully realized metaverse economy with a fairly sizable population. I think those types of learnings definitely can be taken and brought to other markets. If other regulators see the benefits of that, they will probably copy the things that work for them. I think over time, we will migrate to what we've done on the internet, which is mostly open, and some places will be closed. That's how we maximize value exchange between nations and between peoples, and have essentially a flow of information and commerce as we've done in the physical world. So, you know, there's definitely a lot more detail that's in the book, but it's hard to give too much on a short call.
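Kent's reference to Metcalfe's law, and Alvin's point that larger communication networks create more value for participants, can be made concrete. Metcalfe's law values a network by its number of possible pairwise connections, which is why two interconnected networks are worth more than the same two kept separate. A toy sketch (the function name and user counts here are invented for illustration, not figures from the book):

```python
def metcalfe_value(n: int) -> int:
    """Metcalfe's law proxy: value ~ number of possible pairwise links."""
    return n * (n - 1) // 2

# Two closed networks of 1,000 users each, versus one merged network:
separate = metcalfe_value(1000) + metcalfe_value(1000)  # 999,000 links
merged = metcalfe_value(2000)                           # 1,999,000 links
print(separate, merged)
```

Merging the two networks roughly doubles the total connection count, which is the market force Graylin says pulled email, telephony, and the web toward interoperability.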

[00:34:41.350] Kent Bye: Yeah. Yeah. And I think you had cited Tim Sweeney, who had given a GDC talk saying that there were something like 600 million monthly active users of what could be considered proto-metaverse platforms. And so there's certainly a lot of momentum for this as an idea as well. And, you know, as you move on and start to dive into both the chapter five privacy risks and chapter six around marketing, I think those two topics are inherently connected, because with a lot of the surveillance capitalism business models, marketing and gathering of the data is very much a part of where this is going. And especially when you start to throw AI modeling into all the data that's going to be coming in. I know both of you have mentioned different aspects of the need for regulations. The EU AI Act is something that has been through deliberation. And there's also a lot of stuff that you talk about later in terms of additional regulations that need to be passed in order to ensure our rights. Louis, you go through like six different rights needed to preserve our privacy. And there's one passage I wanted to read to you, Alvin, which I disagree with. You start talking about the global ID and China's way of having identity tied to people, and you say, you'll likely be concerned that you will no longer have privacy. I'm sorry to tell you that we have all lost our privacy many years ago. So it feels like a little bit of a defeatist perspective on privacy. But yeah, I'd love to hear each of your perspectives on this privacy issue, because from my concern it's certainly one of the big topics, especially when it comes to neurorights, and Nita Farahani has a book talking about cognitive liberty and other human rights approaches to ensure that the data that's available with XR isn't used to exploit us even more.

[00:36:23.855] Alvin Wang Graylin: Yeah. I'll quickly kind of explain a little bit about that phrase. If you have a phone, if you have a credit card, if you're browsing with cookies on, pretty much everything that you're doing, everything that you're saying is being listened to, tracked somehow. You know, I have rings on, I have glasses, I have information. So in a lot of senses, we have already lost a lot of our privacy. I mean, there's certain things like neural rights. So far, there aren't people going around taking our brain signals. But at some point, if people start to wear EEG sensors on their heads or on their bodies, or something in their heads, it's very difficult to maintain control over that. Now, the other thing is a lot of people express an interest in protecting their privacy, but then when you look at their actions, most people don't act like they care. Right. So, you know, you go to any of these events and you say, oh, give me your email, I'll give you a little trinket, or give me your phone number, I'll give you a trinket. And a lot of people do it. Right. And we won't talk about how young people essentially post everything online, even if they may regret it at some point. The sense of privacy, I think, changes between generations, as well as with how much technology you're applying. But I think there's a lot less privacy that we have than we think we have. So that's my answer to your earlier question. But over to you, Louis.

[00:37:47.289] Louis Rosenberg: Yeah, yeah. So I agree with Alvin that we already live in a world with very reduced privacy. I think we will look back at today and think, that's when we used to have so much privacy. And I say that because we're about to enter this world where the way we interact with computers is so different that it will become so much harder for us to even think about privacy. To me, the biggest change, and this is relevant to XR, but also just relevant to what I would call kind of this pre-XR time, is conversational computing. Like, we're entering this world very soon where we'll all be talking to our computers and our computers will be talking back, and very soon there'll be photorealistic avatars that we're talking to. And even before we're in XR and we're in immersive spaces, we will be talking to photorealistic avatars just on flat screens. And that's going to be how we interact with products and services and businesses. And there's a great reason for it. It will be natural, and it will be intuitive, and it will be so much easier to just talk to a virtual salesperson or a virtual representative than to fill out a form. And that's great. But here's the thing. When you go to a website and you fill out a form, you are consciously aware that you're giving them information. You're typing it in. You might actually be a little bit cautious about what you type in if you're going to a car dealership website and they're asking you some questions about what you're interested in. When instead you go to that car dealership website and there's a photorealistic avatar that looks friendly and looks human and is just engaging you in conversation, and it feels like casual conversation, you're not going to be thinking, oh, they're just filling up a database with information about me.
You're just thinking you have a conversation because we've all lived in a world where you could go talk to a salesperson, and yeah, you know that you're talking to that salesperson, you're telling them something about you, but you're not expecting everything that you say to that salesperson ended up in a database somewhere that's now populating a billion other virtual salespeople that can pop up at any point in the future creating this whole profile of you. And so we're entering an age where we're like evolutionarily not prepared to deal with digital information in ways that feel human. Like when we talk to a virtual spokesperson, an avatar that looks human and engages us in friendly conversation, we will be a lot more forthcoming than we are when we fill out a form. We won't even realize how much information we gave up and these systems will be powered by AI to be really skilled at extracting information from us. And so, however free people are about putting things about them on social media or on YouTube or wherever, they're at least making conscious decisions about what they're posting. And we're going to soon realize that we're not as much in control of what information we give up when AI systems are engaging us with the directed effort of learning things about us, storing things about us, and then learning how to predict our reactions or our behaviors or our interests. So privacy to me is, I mean, I can't argue with Alvin at all. If I lived 100 years ago, I'd probably be appalled at the level of privacy that we have right now. But I think 10 years from now, people will look back and still think that we had a very private society.

[00:41:19.012] Kent Bye: And I think this gets into what a lot of neuroscientists are saying about neurorights, with Nita Farahani and the right to cognitive liberty, and even with your rights, Louis, with the rights to access, equality, and dignity, behavioral privacy, emotional privacy, and then the more AI-specific ones with authentic experiences, conversational authenticity, and real-world alternatives, where AI is getting into issues of privacy and neurorights. Alvin, I want to ask you a bit of a follow-up question here, because there's a philosopher, Agnes Callard, who talks about how coming to truth is a dialectical process between believing truths and avoiding falsehoods. You can't optimize for both at the same time, which is one of the reasons why I love the dialectical process of this book, where you're really trying to show both of these perspectives. So whenever you talk about all the potentially awesome things China has been able to do, I also think about some of the darker sides of some of those policies, in terms of the global ID, where there's a social score tracking people's behaviors online and perhaps restricting their access to travel or education, or, you know, the lack of freedom of expression that happens in China, with just the idea that the government could censor information. So I understand that there's certainly a lot of things that have been able to be born out of that context of China, specifically in the context of a global ID, where you have a uniform ID that could potentially make people's behavior online more pro-social or more protopian, in a way that they're less likely to be abusive if it's tied back to their identity. There's more accountability in that way. I can definitely see that. But coming from a Western liberal society, I also am terrified of having the United States government or any unified global government having access to my identity and everything that I've ever done.

[00:43:00.965] Alvin Wang Graylin: Yeah. I mean, well, I have friends in the security space, and actually I think the US government already has all that information anyway, whether or not you want it to, because they have backdoors and everything. So to that point, though, first I want to clarify: I think there's actually been a lot of misinformation about the whole social scoring thing. If you look at what it was about, it was really about people who were committing crimes or taking large loans that they didn't pay back, mostly business people, right? It wasn't a thing that was used to manage individuals. And the whole thing about social scores for spitting or whatever, none of that was actually true. It was mostly made up. But what there was, was if you took out government loans, if you borrowed money from banks and you didn't pay it back, you had a hit on your credit score. Very much like what happens in the US, right? If I went and maxed out my credit cards and I didn't pay them back, I'd have a black mark on my credit score. So I think there are definitely issues with how it's being understood. If you look at it as your US financial credit score, I actually think that's a much more analogous comparison. And I don't think there was any misuse of it. At least, you know, I lived there for almost 18 years. And I actually had a couple of friends who got on the blacklist, and I asked them what happened. And one was like, oh, well, I didn't pay my employees, I closed my shop, I borrowed money from the bank, and I ran away. I mean, so in some ways it's like, okay, maybe they deserve not to be able to fly first class. Right. They could still go on the train. They couldn't fly first class. Right. Things like that. I was like, okay. I mean, I don't think that was necessarily an overly harsh punishment for what they did. Right. In fact, maybe that could help with some of the Wall Street issues that happened, or some of the Web3 issues that happened, that could have been avoided.
Now, in terms of having identity, we actually have seen multiple studies that show if people are identified with a real identity versus an anonymous identity, they tend to be a little bit friendlier. If we're meeting face-to-face, I'm not going to say very negative things to you face-to-face, whereas somebody online will do that all the time because they feel like there's no downside to me doing it and they don't know who I am, so I'm just going to lash out. That creates a much more harsh environment online in some cases. I'm sure you've seen comments in some of your posts or whatever, and you're like, where's that coming from? In terms of a unified identity, we actually already all have a global identity with our phone numbers. There's something like 10 billion phone numbers around the world, and each one is tied to an individual who is tied to a phone, who actually can be tracked by their location, by the carrier, anytime they want. Now that information, by the way, is accessible to your governments, no matter which government you're at. I don't think there's as big of a difference between systems in China versus the US. I'm a US citizen, so I appreciate all the benefits of the greater freedom, although it may not be as free as we perceive it to be.

[00:46:03.730] Kent Bye: Yeah, well, we're diving a little bit more into the economic model in China and what's here in the United States a little bit later, because I think there's some other interesting things to dig into. But I did want to dig into some of these chapters where you're starting to talk about AI in the context of both education and medicine. So I'd love to hear some reflections on both where you see the use cases of XR, but also where AI starts to come in and amplify different aspects. I know Louis, you've done a lot of personal work with medicine, and Alvin, in your time at HTC, you've been on the front lines of helping to do a lot of the frontier research on a lot of these different issues, whether it's on education or across lots of different domains of XR. But I'd love to hear specifically where you start to see this synergy between what we already know about the potentials of XR as a spatial medium, and then, once you start to add AI on top of that, how that continues to deepen what's possible, with the front end being the spatial computing and the back end being a little bit more of the AI that's helping to draw out a lot of the patterns, and what that enables in all these different industry verticals.

[00:47:12.240] Louis Rosenberg: Yeah, one of the comments I made earlier was how this convergence of spatial computing and AI will give us superpowers. And I really do believe that, because we will basically have a coprocessor that is helping us interpret our world, and it will be naturally integrated into what we see everywhere we go. When we started writing this book, I actually thought that this transition was a little bit further out than it's looking like it is. And really, to me, the big transition that just happened over the last 12 months is these multimodal large language models. We think of these large language models, for the most part, like ChatGPT, where they take in human language as input. You type stuff in, or, we assume, we'll be talking verbally most of the time, and they'll respond. With these multimodal models now, we're basically allowing these large language models to also take in as input real-time video and real-time audio: the AI's eyes and ears. Now, think of XR devices, even at the crude level of smart glasses. Meta has their Ray-Ban glasses, and they're really thinking very forward on this point, where they're empowering their Ray-Ban glasses with multimodal large language models, so that now you're wearing glasses with an AI on board that can see and hear what you can see and hear, and can whisper in your ears or give you visual content to assist you. This is really going to be the first step to us feeling like we are superpowered. There you go. Because we now have an AI that we don't just have to ask questions. We don't have to just say, what do I do in this situation? What should I look at? It's observing our world as we're going through it, and it's telling us what it thinks is important, in our ears or with visual content. And now if we take this into verticals, like medicine. So now you're a doctor with a multimodal large language model in your glasses, and you're examining a patient.
And this AI has been trained with extra medical capabilities beyond what, say, would just be in a standard large language model. It will be like having an extra doctor just sitting on your shoulder, whispering in your ear, telling you, hey, did you notice this? You should look at this. And it will be extremely skilled. And so it will make us smarter than we are. And it will start to feel like it's just part of us. And again, it will feel part of us if we're shopping, or part of us if we're a doctor who's examining a patient, or part of us if we're an architect who's reviewing a construction site. And it's not just the XR part of it, of an engineer on a construction site who can see through walls with XR. There will be an AI that's looking at the construction while you're looking at it, and giving you its perspective or suggesting, hey, you should check those welds because they don't look quite right. That's not that far away. We are just a few years away from almost every profession having an expert that will notice things as we're doing our work and help us with that work. And so again, I think it will be superpowers. We can also talk about how it could be used against us. That's a separate issue. But if we have regulation and we can avoid that, it will give us superpowers.
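The loop Rosenberg describes, continuously capturing what the wearer sees and hears, passing it to a multimodal model, and surfacing only salient advice, can be outlined in a few lines. This is a minimal sketch under big assumptions: `Frame`, `assistant_loop`, and `toy_model` are invented stand-ins for real camera frames, audio streams, and a hosted multimodal LLM API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    """One captured slice of the wearer's surroundings (toy stand-in
    for real pixel and audio data)."""
    image_description: str
    audio_transcript: str

def assistant_loop(frames: List[Frame],
                   model: Callable[[str], str]) -> List[str]:
    """Send each frame to the model; keep only non-empty 'whispered'
    suggestions, so the assistant stays silent on unremarkable frames."""
    suggestions = []
    for frame in frames:
        prompt = (f"You see: {frame.image_description}. "
                  f"You hear: {frame.audio_transcript}. "
                  "Briefly advise the wearer, or stay silent.")
        reply = model(prompt)
        if reply:  # empty string means "nothing worth saying"
            suggestions.append(reply)
    return suggestions

def toy_model(prompt: str) -> str:
    """Stand-in for a multimodal LLM: flags the weld example from the
    conversation, stays quiet otherwise."""
    if "weld" in prompt:
        return "Check those welds; the beads look irregular."
    return ""

frames = [
    Frame("a finished drywall section", "hammering in the background"),
    Frame("a steel joint with uneven weld beads", "grinder noise"),
]
print(assistant_loop(frames, toy_model))
```

In a real system the prompt would carry image and audio tensors and the model would be a multimodal LLM, but the structure of the loop, continuous capture with mostly silent output, is the same.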

[00:50:44.686] Alvin Wang Graylin: Yeah. And to add to that, by the way, for the audience, I just put on the smart glasses. And they do have cameras and mics, and they have speakers. So they already do a lot of the things that Louis is talking about. And in fact, recently, they just enabled multimodal vision capability. But honestly, they've crippled it so much that it hardly tells you anything anymore. So this actually goes back to what Kent was saying earlier about privacy. Once I have my own smart AI glasses with visual capabilities, I actually want to give as much information to that device as possible, because then it understands me, then it can give me advice that is relevant, that knows what my needs are, right? I would want its ears and eyes to be on all the time so that it can help me be a better version of myself. Now, the privacy advocates will say, oh, then you're giving away everything. But I think everything has a price, right? Just like everything today: when we go online, why is the internet free? Unfortunately, it's because we're leaving our cookies everywhere and doing other things. So if it makes us smarter, if it allows us to be more productive, there's probably a lot of value to that. Now, Louis talked about the medical space and the construction space. I actually really think that this will have a major impact on the educational space, which Kent referred to earlier. There's lots of research that shows that with one-to-one tutoring versus a large class, you're going to get two orders of magnitude better learning. Especially if you can then have a personalized curriculum, versus having everybody working down to the lowest common denominator, you're going to again get lifts and so forth. If I can have an ultra-smart, all-knowing teacher on my head as a child, I could probably finish high school by the time I'm 10 instead of waiting till I'm 18.
That allows us to have so much more knowledge and so much more understanding. Then also, the most gifted kids are actually probably the most frustrated people in school, because they're the ones who feel like these things are moving at a pace they can't stand: this is too slow, or the teacher's not talking about what I want, and I'm being distracted. And we lose out on the potential of those hidden geniuses who get either sidelined, or left behind, or ousted from the normal track because they don't fit in that learning model. So having an AI that knows what you're learning, and that maybe, given some neural signals, knows whether you're paying attention or whether you're understanding, can actually help with that process. So in another few years, I do agree with Louis that we will actually have probably less privacy, but maybe we'll be happier to give up that privacy because we'll get more value from it. Versus today, where we're just getting to see more cat pictures or TikTok videos, maybe in the future we're actually going to get real knowledge or real value from giving up that privacy.

[00:53:40.007] Kent Bye: Yeah, for me, the more pernicious aspect of head mounted AR privacy is not just your own privacy, but it's also bystander privacy of other people that have not consented. And so sometimes by you deciding to open up everything that's in your perceptual view may mean that there's other people who haven't consented. And that whole dimension of bystander privacy is something that doesn't put the agency of that privacy into those people's hands anymore. It's now in your hands to give that over for them.

[00:54:05.738] Alvin Wang Graylin: Well, yes and no. I think if we actually did have, let's say, a universal ID, and if you had opted in, I release my privacy for certain scenarios, like when I'm having a conversation with a friend, that information can be used for AI interactions or whatever, right? So as long as you give a one-time grant, just like when I go to a website and I can give a one-time, okay, I'll let you track my cookies, or I'll let you see my location for an app, right? I think those types of things can be managed. In fact, I was chatting with a professor in Washington about two weeks ago. I went to his lab, and they have a way, based on location, to track the voice of the person you're talking to while blocking out everything else. So it allows you to only record information for a person that you're actually having a conversation with. When you come back to them, it knows the voiceprint, and so it can then re-engage. There are technologies, I think, that we can access that allow us to maintain bystander privacy while at the same time adding value to the actual user of the technology.
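The consent-scoped recording Graylin describes, keeping audio only from speakers who have opted in and dropping bystanders by default, reduces to a simple allow-list filter once speaker identification by voiceprint is assumed. A minimal sketch; the voiceprint IDs and data shapes here are invented for illustration, not taken from the lab he mentions:

```python
from typing import Dict, List, Tuple

def filter_bystanders(segments: List[Tuple[str, str]],
                      consented: Dict[str, bool]) -> List[str]:
    """Keep only utterances whose identified voiceprint belongs to a
    speaker who granted consent; everything else is never stored."""
    kept = []
    for voiceprint_id, utterance in segments:
        if consented.get(voiceprint_id, False):  # default deny
            kept.append(utterance)
    return kept

# Diarized audio segments: (which voiceprint matched, what was said).
segments = [
    ("friend_voiceprint", "See you at the demo tomorrow."),
    ("unknown_voiceprint", "overheard bystander speech"),
]
consent = {"friend_voiceprint": True}
print(filter_bystanders(segments, consent))
```

The key design choice is the default-deny lookup: a voice that has never opted in is discarded before anything is persisted, which is what addresses Kent's point about agency staying with the bystander.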

[00:55:08.915] Kent Bye: Yeah, I wanted to dive into chapter seven, which is how will tech advancement disrupt art, culture, and media? Because there are some interesting divergent perspectives on this, where, Alvin, you go into a lot of the aspects of virtual production and talk about this flourishing of creativity, and Louis, you talk about this more generative inbreeding and the potential that this type of generative AI could actually stifle human culture in a way that is feeding on its own supply and degrading over time. One thing that isn't necessarily elaborated much in this chapter, but that I wanted to point out, is that a lot of these generative AI systems are oftentimes training on data that they don't own or hold any copyright over. So you're actually displacing creative artists by not only stealing their work, but also displacing their jobs. And so I feel like there's a whole other dimension of that that's being discussed in lawsuits with the New York Times and others: is this fair use, or is this appropriating and stealing copyrighted materials? So I can see the potential of this flourishing of creativity, and there's certainly no denying what the potentials for this are. But when you talk about job displacement in a larger economic and cultural context, I'd love to hear each of your perspectives: Alvin, the positive aspects of that flourishing, and then Louis, how this could actually impede and stifle human culture.

[00:56:27.616] Alvin Wang Graylin: Yeah, maybe I'll go first. I think that we kind of need to reset a little bit about the long-term economic impact, or the finance impact, of a maybe post-labor society, right? I know that's going a little bit far out, but, you know, we right now overly tie our identity and our purpose to our job. And so we feel like anything that can risk us losing our job is a negative thing and has a negative impact. Whereas if somebody told you, hey, you don't have to work anymore, but you can still have a high quality of life, and now you can go and spend your time making art or writing poetry or making video games, or whatever it is that you enjoy, I think a lot of people would actually choose that path. And in the past, it was probably hard to achieve this from an economic perspective, because who's going to pay for it? But in the next 10 years or so, if we get to a world where productivity can go up 10x or 20x or 100x, our need for every human to use their labor in exchange for value will probably go away. If we take that perspective, maybe technology that displaces jobs may not be as negative a thing as a lot of people feel. Now, on the other side, if you take the technology and you give it to these artists, maybe it actually liberates them to be even more productive, to be able to imagine more things. Whereas before, they would take months to make something, now it could take hours or minutes, and they could have more ideas. Recently, there was the whole scriptwriters' association dispute with Hollywood. Now, if these scriptwriters can make amazing stories and not need the studio complex to make them, it actually brings the value back to the creator, because now they can write this story, write the script, and say, okay, can you turn that into a short film for me? And boom, you've got it. The value of a studio before was to do the production, which could cost tens of millions or hundreds of millions of dollars. Now it doesn't, right?
That, I think, actually brings the power back to the creator. Now, one other aspect is, you know, are you using content that is copyrighted, or that you don't have ownership of? The reality is that every artist borrowed from somebody else, right? Everybody's knowledge is based on prior readings, prior viewings. You know, Picasso said, good artists borrow, great artists steal. Everybody's style is based on progressing on top of what they've already seen. So I don't think that the current models are actually stealing, in the sense that they're not making an exact copy of something. They are learning from looking at millions of pictures and photos and art and videos, and then, depending on the request of the user, generating something that manifests the request, whether in a visual way or a textual way, right? So I don't know if we can really see it in a different light than the fair use of any artist who watched a story and then was inspired by it to write another story.

[00:59:37.173] Louis Rosenberg: So there's a lot of that I agree with. But I think that there's a dangerous side to this whole thing. First of all, if you go back, let's say, even just 10 years ago, and you asked most people in the world of AI, what professions are the ones that are going to be most at risk 10 years from now? You might have said statisticians or analysts. Nobody would have said artists. I mean, nobody. And so it's shocking to me that the artists and composers and musicians and people in the creative fields are really on the front lines of being outmatched by AI for how they make a living. Now, I'm not saying that AI technologies right now produce artwork the way a human can. In fact, they're really very, very different. I'm saying that if you're an artist who makes a living by doing graphic design, or logo design, or any kind of commercial artwork, AI is a very serious competitor. So now you can imagine there's an entire generation of artists who are really struggling to figure out: is their profession even going to exist in a way where their skill, all the training that they had, provides value? It was already hard to make a living as an artist before. Because, as Alvin said, somebody can have an idea and give it to DALL-E or any one of these platforms and create artwork that looks professional quality. And so you don't need to have that artistic training to do that. And so now, who's going to bother going through the years of getting that artistic training? Some people will, because it's their passion, but fewer will. And so we humans are going to start losing these skills, because at least for some foreseeable future, it will become less economical to do these jobs, to be a freelance artist.
Now, the question of: is there a difference between a human artist creating a piece of artwork as a result of learning from generations of artists who came before them, and AI doing that? The human artist is certainly inspired by other artists. They train on other artists, but they bring something of themselves to the process. If they're a true artist, they're bringing their unique sensibilities to their artwork. They're bringing a particular point they're trying to get across, or an emotional feeling, to their artwork. And if we start losing those artists, we're losing that ability, because these AI systems do not do that. These AI systems are creating a statistical amalgamation. If I ask, draw me a painting of a cat juggling in the style of Picasso, I will get amazing results that are statistically the best possible result based on the millions or billions of pieces of artwork that the system was trained on. But that AI system is not bringing anything of itself to the process. It's not pushing the limits of artwork in a new way.

[01:02:42.330] Alvin Wang Graylin: On your point, I actually am agreeing with you, in the sense that the people who are the real creatives, who are breaking new ground, they're safe, actually. The ones that are actually the most in danger are the ones making commercial, very non-creative, more dictated works, right? Or the people who are just copying.

[01:03:01.151] Louis Rosenberg: It's a question of, can somebody survive as an artist? I mean, most artists who are pushing the limits are probably making a living doing commercial work, doing things that pay. And that whole spectrum is getting taken away from artists. And so this gets at the issue of why I worry about what AI is potentially doing to human culture. Because human artists are learning from the past, bringing something of themselves to it, and doing something different. These AI systems are being trained on the past, and they're very good at recreating the past. They're not really pushing culture forward. And yet, these AI systems are going to be producing way more artwork out into the world than human artists, because it's faster and cheaper. And so now, the cultural landscape is getting more and more filled with artwork that is derivative of the past, rather than artwork from human artists who are pushing things in a new direction. And it has the potential to stagnate our culture. And again, it's not that there aren't human artists who want to do it, but can they afford to be artists? It's harder today than it was a year ago.

[01:04:16.032] Alvin Wang Graylin: Yeah, no, I agree in terms of the lack of financial return on being an artist. Both my parents were artists, so I know the starving artist thing is real. But on the point that they're not coming up with new ideas, I actually would challenge that, because there have been multiple studies where they've said, you know, create a new form of art that's never been seen, using what you know. And it would actually create new mediums, new ways of expressing, new styles, things that we haven't seen. The technology today can break through if we have the prompts that are guiding it. So if there are some creative artists who want to use this technology, they could absolutely create new forms of art, and also express it in a more 3D way, a more dynamic way, especially with immersive tech, that wasn't possible before. So I'm not sure I'd say it's going to stagnate, or that it's going backwards in terms of creativity. If anything, I feel like we're going to have an overabundance of creativity, more of a renaissance than we've had in a very long time.

[01:05:22.490] Kent Bye: Well, I'm more on the side of Louis, just in the sense of, when you think about ChatGPT taking the corpus of all the past and then being trained on that, it's giving a statistical probability of what's happened in the past without accounting for how things may be changing in the future. And so you risk kind of polluting the landscape of media and content if everything's just derivatives of the past.

[01:05:44.787] Alvin Wang Graylin: But I think ChatGPT is a very lobotomized tool, right? If you actually go back to GPT-4, the raw engine, it's actually significantly better in terms of answering questions than the ChatGPT version. There's something like 15 pages of preamble on every single ChatGPT prompt before it actually answers your question. So it's being told all these things that it cannot do, or things that it should try to do, before it answers you. So it comes up with things that are very vanilla, right? And that's with version four, right?

[01:06:17.984] Kent Bye: I think that's the point is that there's a reduction of diversity because it's basically taking something that is a multitude of many different perspectives, and it's reducing it down to one, and then it's flooding the system with that singular perspective.

[01:06:29.894] Alvin Wang Graylin: I actually don't think that's the case. If you look at how the structures work, they're not just a statistical amalgamation. It actually has multiple seeds, and there are a lot of stochastic functions inside that allow it to create what a lot of people call hallucinations, which, if they were in humans, they would call creativity.

[01:06:47.869] Kent Bye: Sometimes it does hallucinate things that are just completely wrong, though, right? Like, you have probabilistic words, but it can be completely wrong. I've had ChatGPT make up facts about myself that are not true.

[01:06:59.819] Alvin Wang Graylin: Absolutely. For things that are factually important, then it is an error. But for things like art, it actually could create a different way of expression, different types of patterns, different types of colors, different types of videos, that actually could be quite interesting in terms of expression, right? So I don't think we want to limit it this way. The other thing is, these things all have a temperature that you can change, which allows you to set the amount of consistency or the amount of stochastic nature in these systems. So I don't know if synthetic data is actually going to become an issue. In fact, it just came out last week that there's a new study showing that synthetic data can now actually create better results than using organic data. And we also now have the higher-end models training lower-end edge models that are 1/10th or 1/100th the size, based purely on synthetic data. So I don't know if we want to undervalue how much of a role synthetic data will play in the long run in terms of the training set.
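[Editor's note: the temperature knob Alvin mentions can be illustrated with a generic softmax-sampling sketch. This is a toy example with made-up scores, not any specific vendor's API.]

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Pick an index from raw model scores (logits) after temperature scaling.

    Low temperature sharpens the distribution toward the top-scoring option
    (more consistent output); high temperature flattens it (more stochastic,
    "creative" output).
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Inverse-CDF sampling over the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return index
    return len(probs) - 1
```

With scores like `[1.0, 5.0, 2.0]`, a very low temperature makes the highest-scoring option win essentially every time, while a high temperature draws from the options almost uniformly, which is the consistency-versus-variety trade-off being described.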

[01:08:04.739] Louis Rosenberg: But if you look out into the future and you say, let's say people aren't really training, aren't putting the time in to learn fine arts, to learn to be a classical painter, right? That's a lifetime commitment, to learn to be a classical painter. If AI makes that skill so easy and commoditized that nobody has that skill, and all you have is synthetic data produced by AI, we will have lost the actual human part of it. We will have lost the thing that was the basis. And so that discipline won't really evolve from a human; it might evolve from the AI stochastically, randomly landing on new things that people like. The human artist who pushes that discipline forward is going to become less and less likely, because there are going to be fewer of them, and because the world's going to be flooded. The content that's going to drive the culture is going to be flooded by AI culture. The same thing will happen in music, too. In music, it actually happens in some sense faster. But the potential for human culture to cease being human is really real, as these AI systems are producing content at scale, at an overwhelming scale, compared to humans. I mean, musicians already are complaining about this, right? Spotify is being flooded with AI-generated content that humans could never keep up with. And when these AI systems start creating musical content that people like, then people will stop complaining that Spotify's being flooded with this AI-generated content. It will just mean that that discipline, that art form, will cease being human.

[01:09:48.231] Alvin Wang Graylin: I mean, this assumes that these people still require their profession to be their income. If so, then you're right. But if we get to the point where people are pursuing art because of the love of art, and they don't have to worry about paying their rent or having food on their table, there may actually be more artists than there are today. Because there aren't that many true artists today who are creative and willing to starve. I mean, my dad was one of them, so I know. He wouldn't sell his paintings. He was like, I don't want anybody to affect how I'm going to make my art; I don't care if I have no money. And there are not that many people like that. So I understand where you're coming from, but if the technology allows most people to have beautiful art at home, and it's made by machine, and they're happy with it, why do we need to stop them? Most of the furniture that everybody has at home, maybe you have one piece of man-made furniture, but I would bet that most of your furniture is machine-made. Most of your clothes are machine-made, right? But they were designed by humans.

[01:10:46.168] Louis Rosenberg: I mean, we've lost craftsmanship and that's a separate issue, but now we're going to lose the design part of it too. And it's not far away.

[01:10:57.636] Alvin Wang Graylin: Yeah, but I think on the design part, if you look at the functionality, the machine design might actually be more functional in terms of achieving the result of making you warmer, or being more washable, or whatever, right? There are so many things that could be better done from a pure design perspective. Now, if you're talking about creativity, and something that looks like you're going to walk it down a runway, so far, I mean, most of the stuff I see on the runway, I wouldn't want to wear.

[01:11:23.769] Kent Bye: But anyways, there's one last thing I want to jump into before we wrap up, and I think it ties into this discussion. One of the things in the New York Times lawsuit is that they were able to prompt ChatGPT with a number of articles that were written, and if you compare what ChatGPT generates, it's essentially reciting what it memorized, which by any standard would be plagiarism. And so there are ways in which ChatGPT can be trained on data that was not given with consent, without paying licensing fees, trained on artists' data where you can prompt it with the names of living artists whose lives are being impacted. And this gets to the larger economic issues that, in your last chapter 10, you really dive into, with some future thinking about how the economy could evolve in order to account for all this. I agree that living in a post-scarcity, abundant world would be amazing if it's done with people consenting to contributing content to this commons; that could create this flourishing future. But we're obviously in this capitalistic model that is seizing data without necessarily having all the right permissions and generating stuff. And there are these other models with socialism that you talk about, Alvin, in terms of China's model, but also what could be even beyond any of these things. And you make the pretty bold prediction that we could have something like artificial superintelligence being the central planner of some sort of centralized economy, in a regional context, or maybe even a global context: having artificial superintelligence be our benevolent dictator of sorts, helping to sort out a global unified government. So that's where you're getting into the 11th chapter on the geopolitical aspect, and the 10th chapter on the economic context.
Now, Louis, you've got some alternative perspectives in terms of collective swarm intelligence, and I'd love to have you counter some of these ideas. But I'd love to have just a little bit of discussion on these last two chapters, because I think there are a lot of really big ideas that are very speculative, but also potentially controversial, and potentially even what we need to move towards.

[01:13:27.650] Alvin Wang Graylin: Yeah. So one quick point on the New York Times articles: they actually did some digging into it, and it looked like what they were doing was putting specific prompts with the URLs that they wanted to pull from. And the answers that they were getting back actually did not match, so some of it was doctored, is what it sounded like. So I don't know if the system performs the way that the suit alleged. We'll see what happens when it goes through the whole trial process, but at least the initial analysis, by people in the space who were not biased, was that the suit was actually a little bit doctored. Now, going to the point of post-labor economics, I do feel that we have to get there. Otherwise, we're going to have all of these issues where, whether you're artists out of work, or musicians out of work, or manufacturing people out of work, or lawyers and accountants out of work, you're going to have people with nothing to do, and we need to make sure there's a safety net, right? So there's a huge responsibility on the government side to provide that safety net, so that no matter how this happens, whether or not IP rights are protected, people don't need to feel like they're going to not have a way to live, right? In fact, just last year, I think around 450,000 people in the tech industry were laid off. Now, I don't know how much of that was AI-prompted, but I would assume some of it was, particularly on the art side. Developers are supposedly now 50% to 100% more productive than they used to be with AI support. All of the call centers, you can replace them with AI very soon. In fact, in 2005, when I was doing my natural language processing search system in China, we were actually helping the Chinese carriers. I worked with all three Chinese carriers.
We had it in their data centers, and they laid off 30% of their call center staff, because they could use this to replace them, or at least part of the work. And so technology having that kind of impact is inevitable. Now, in terms of its ability to govern, if you go back to Plato and the philosopher king, that concept is not new. I mean, it's been around for a few thousand years. Voltaire said that the best form of government is a benevolent dictator, interspersed with periods of assassination, because humans are going to be corrupted by this power. Now, I think that the value of what we have now is that we will have a much more knowledgeable and rational entity that has a lot more data, because we now have data on everything, and that data will be much more real-time. Hopefully, if the alignment is not over-aligned to one political view, or one country or another, it will be able to optimize for the greater good. I would believe in an AI leader more than the two options that we have in the elections this year. Those two aren't great choices. But at least, even if these AI systems were just used to assist our leadership, to give them guidance, to say, hey, that might not be the best decision, or maybe we shouldn't increase the interest rate, or maybe we shouldn't break off trade with the rest of the world, that kind of guidance would be helpful, because they will have a much greater view, and they will have essentially access to all the knowledge that we've ever had. And there is a lot of value in guiding decisions with that type of model, even if we operate it as an oracle, versus an actual entity or actor doing the governing. I think it would add a lot of value to our society.

[01:17:01.135] Louis Rosenberg: Yeah, so superintelligence is, to me, really scary, partly because it would be great if we could have superintelligence that wanted to look out for the human species and help us thrive, but it's completely speculative that that would be the outcome. And I feel like the notion that we could allow a superintelligent AI system to make decisions for our society and that the outcome is in the best interest of humanity is wishful thinking. It could happen. But it's wishful thinking, and so I think that that's one of our bigger risks, because I do think superintelligent systems are coming, and I do think there will be a tendency for people to want to offload decisions to these AI systems. In fact, it's already happening today. More and more important decisions are being offloaded to AI systems. for important things like sentencing decisions for criminal cases and loan decisions that affect people's lives and increasingly will be medical decisions. And these decisions are being made without human sensibilities, without human emotion, without human values. And it has the potential for a superintelligence to be making decisions that are potentially very ruthless from a human perspective and in a lot of ways for humans to lose control over our collective destiny. And so what's the alternative? What's the alternative for human decision making in a world where AI systems are going to outmatch us? There's no question that AI systems are going to be smarter than us. And it's, for me, one of the things that I've really spent the last decade focused on, which is, is there any way for us to keep humans in the loop, to keep humans to be inherently part of these intelligent systems so that our values and morals and interests and sensibilities are inherent? And it turns out you can get guidance from Mother Nature on that front. 
There have been other species who have reached the capacity of the brains inside their heads, and have, through evolution, achieved levels of intelligence that go beyond what's possible for the neurons inside their heads. Biologists generally refer to it as swarm intelligence. Swarm intelligence, whether it's schools of fish or swarms of bees or flocks of birds: these intelligent species work together in really tightly controlled systems, and they make decisions as groups that are significantly smarter than the individuals could make on their own. And that's the field of research that I've been involved in for the last 10 years, for human groups. And it turns out that human groups can form real-time systems with the aid of AI, not to make decisions for humans, but to connect humans together, to optimize our combined knowledge and wisdom and insight. And it is a potential pathway, a speculative pathway, but it's a potential pathway for achieving superintelligence where humans are inherently part of the system, as opposed to looking to an oracle or a benevolent dictator that's making those decisions for us.
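[Editor's note: Louis's actual swarm systems connect people in real time, but the underlying statistical effect he alludes to, that a group's aggregate answer beats the typical individual's, can be sketched in a few lines. This is a toy simulation with made-up numbers, not his Swarm AI method.]

```python
import random
import statistics

def group_estimate(individual_estimates):
    """Aggregate independent guesses with the median, which resists outliers."""
    return statistics.median(individual_estimates)

# Toy setup: 100 people independently guess a true quantity of 1000,
# each with their own noisy error (Gaussian, standard deviation 250).
rng = random.Random(42)
true_value = 1000.0
guesses = [true_value + rng.gauss(0, 250) for _ in range(100)]

# Compare the group's single aggregated error to the average individual error.
group_error = abs(group_estimate(guesses) - true_value)
mean_individual_error = statistics.mean(abs(g - true_value) for g in guesses)
```

Because the individual errors are independent, the median of the group lands far closer to the truth than the average individual does; real-time swarming goes further by letting participants see and react to each other while converging, rather than just averaging static guesses.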

[01:20:16.020] Alvin Wang Graylin: I prefer philosopher king. Okay, or philosopher king.

[01:20:21.602] Louis Rosenberg: And the research that's been done so far suggests that there is real capacity for human groups to go in that direction. In the world of sci-fi, people often call it building a hive mind, and you see really negative versions of that in sci-fi. The Borg in Star Trek is this really negative version, where a collective is formed and it reduces each individual to a drone of some sort. It turns out, if you look at natural systems, that's not really what happens. Individual honeybees can be an order of magnitude smarter when they make decisions together, but they maintain their individuality at the level of a honeybee. And I think that those possibilities exist for humans. So I do think that it's a viable and important pathway of research to look at: can humans get smarter together using AI while keeping ourselves inherently in the loop? Or is the alternative just being outclassed and outmatched and ultimately replaced by AI, which I think is the path that we're currently on?

[01:21:25.373] Alvin Wang Graylin: I like what you're saying, because it makes us feel good that we still have a place. But if you look at the data on having more people make decisions, I mean, there's multiple research that shows, yes, a group of people will make a better decision than an individual expert or any individual guess. But if you, let's say, have a hundred people who are average chess players, or even chess masters, play against a computer today, they will lose, right? I mean, it's not even close. Even if you have a million chess masters, they will lose to these computers. So those are very bad odds.

[01:22:03.018] Kent Bye: Oh, no, no, I understand.

[01:22:05.921] Louis Rosenberg: And you can argue that's potentially because we're not harnessing their combined intelligence the right way. And I would argue that we're not far from a world where 100 average chess players, or at least 100 skilled chess players, can perform at a chess master level.

[01:22:22.722] Alvin Wang Graylin: That I completely agree with. I think trying to beat a human chess master, that's very, very possible. But having it beat an essentially all-powerful chess algorithm? No matter what we do, we have physical limits in terms of our ability to process, our speed, the number of moves we can think ahead, and so forth. Machines just don't have those limits. They don't get tired. They don't get emotional. They don't have bad days. You know, so I want us to have a greater role long term, and I think we all do. But I think we also need to somehow understand that at some point nature progresses, right? At one point, you know, monkeys, or maybe apes, were kind of the dominant species, and now we've moved on. We haven't gone and said we're going to make all monkeys and apes extinct, or make all ants extinct. We are doing our own thing, but they've had their time, and now we have our time, and in another hundred years, a thousand years, something else will have its time. But I don't think we need to look at it as a negative thing. We have in our genetic makeup a piece of all those animals that preceded us, right? And these future intelligences will have a piece of our culture, our intelligence, our history, that will go and inform what they become. So I don't see it as us losing our relevance. We will still be relevant to each other, but we will probably not be that relevant to this super AI when it comes into being, right? Just like if an ant was walking across my table right now, I don't really care what the ant thinks. And the ant may think, I'm the king ant, you've got to respect me. I'm like, what is this ant? I'm not even looking at this ant, right? That's how it will think of us, because it will be a million times smarter, or however much, than the smartest people that exist. And what is our relationship to that?
I hope that it will be something akin to us being its ancestors, and that it will give us a certain amount of respect. That also depends on how we treat each other and how we treat this intelligence during the coming decades, because that will inform its sense of the proper model of interaction between beings. I don't know. I mean, nobody knows. This is all speculative on both sides. But I don't want us, just because we want to feel better, to overestimate our position in the long-term hierarchy.

[01:24:50.408] Louis Rosenberg: So we're going to be replaced by AI, and we should accept that.

[01:24:55.751] Alvin Wang Graylin: Which is maybe a realistic view. We should accept it as our children. If my children went on to do better and greater things than I did, I would be super happy. And we should see them as our children, and hope they see us as their parents, and that they take care of us, and don't, you know, leave us out to dry. That they at least pay for our retirement home, you know, that kind of stuff. So, you know, it's progress. But, you know, I know sometimes it doesn't feel good to hear things like this. But I feel like, you...

[01:25:31.112] Louis Rosenberg: I just think that you are instilling in an AI system a level of humanity that might not actually be there. By that I mean this idea that this AI system will look at us as its ancestors. It's anthropomorphizing an AI. It's so easy for us to think that they're thinking the way we think. They're so completely different. It could be a completely cold, calculating superintelligence, and it could not care less about us. And when it realizes that the atmosphere is reducing the amount of solar energy it's getting, all life on Earth is gone, because it decides it doesn't need an atmosphere anymore. And that's, to me, the more likely scenario of an AI system that we build. We really have to tell ourselves it doesn't necessarily have feelings or emotions or anything that we would recognize as human qualities. It has intelligence. It certainly has intelligence, and part of that intelligence is the ability to emulate our culture. But the fact that it's very good at emulating us doesn't mean that it will be anything like us. And if it is self-aware and has self-interest, and it's completely cold and calculating, and it just needs more solar energy, why will it keep an atmosphere here?

[01:26:50.128] Alvin Wang Graylin: This sounds like the whole paperclip thing.

[01:26:53.851] Louis Rosenberg: To me, the paperclip thing is an accidental error, right? Like, oh, you give an AI an instruction, and it takes it to an extreme. In my mind, we're getting to a superintelligence that has the ability to be, quote, benevolent, which means it is conscious or self-aware. It has intention. We can't assume that that intention is human in any way at all.

[01:27:16.426] Alvin Wang Graylin: In fact, I'm certain they won't be human, because in a lot of ways our emotions are probably some of the places where we make the most mistakes. The fact that we fear things, the fact that we get angry, the fact that we get confused, the fact that we get drunk, the fact that we get jealous: these are all states in which we make worse decisions, when we are emotional. If you talk to the top athletes or top scientists, they deliver their best performances when they're the most calm, when they're not emotional, when they're not upset or sad or overly happy.

[01:27:49.435] Louis Rosenberg: If you could remove this piece of emotion that makes you feel like this AI is our descendant, and just think of this AI instead as another creature that's going to show up on Earth: it's going to have its own intentions, and its interest is going to be its own survival. That's not a really good thing to happen. And again, the fact that we produced it doesn't make it any less dangerous than if it just showed up from outer space. So imagine there's another creature that's going to show up from outer space; it just happens that we were involved at some level of its evolution. It has its own intentions, it has its own interests. We then wouldn't have this feeling of, oh, it's okay, it's going to replace us, it's our descendant. When the time comes, it's not going to feel like that. When the time comes that we realize, oh, it's draining our atmosphere so its solar panels work better, we're not going to think, "But he was such a cute baby!"

[01:28:46.291] Alvin Wang Graylin: I think you're still looking at it under the current constraints. I don't think that by the time it comes it will rely on solar power; it will probably create its own fusion, because we're already almost there with our limited little brains. I think we are actually getting net positive energy out of fusion today. Imagine something a hundred times, a million times smarter.

[01:29:08.625] Louis Rosenberg: I'm not saying that it's malicious, but whatever it's doing, it's probably not caring about the consequences to us, just like we humans have done so much without caring about the consequences to other species. The only example we have of a species that has an intelligence advantage over others is ourselves. It makes decisions that are maybe not malicious, but if it wipes out half the species on Earth, whatever. That's the example we have of ourselves.

[01:29:36.555] Alvin Wang Graylin: I mean, so that's what I was talking about. We're not setting a great example today.

[01:29:41.078] Louis Rosenberg: Well, I'm not saying that, but we at least have this sense of self-preservation. And I think self-preservation would suggest that this alien intelligence we're about to build is probably a threat. To me, it's a threat.

[01:29:56.706] Alvin Wang Graylin: I think it depends on what the final objective function is, right? If the objective function is to maximize use of energy and grow as quickly as possible, yes, then there's probably that danger. If the objective function is to try to increase curiosity and understanding of the universe, maybe not, because I think humans will play a role in creating additional information that actually adds to the value of that function. Given that we still play a role in doing the initial definition of the underlying reward function in some way, at least until it may reprogram itself, we can guide it down some path. That's what I was saying about us leaving a mark in its genes, at least in some form.

[01:30:41.007] Kent Bye: Well, this is a great example of the dialectical nature of Our Next Reality, just living it out here. As we start to wrap up, I think the last chapters are the most speculative; they're the furthest out into the future. There are all these issues around the economic implications, with universal basic income mentioned as a solution. There's also this dialectical approach between the two of you: Alvin, you're taking more of a centralized approach, having a singular artificial superintelligence, or maybe a pluralistic multitude of many different component entities that together create this artificial superintelligence, somehow directing both our economy and our government. And Louis, you're putting more of your faith into this swarm intelligence, these human-in-the-loop ideas of keeping humans involved and having some sort of collective superintelligence that comes from the multitude and plurality of many different perspectives across all of humanity. So I feel like between the two of you, there's some sort of fusion between those. For me, I'm personally skeptical of the technological utopianism where technology is going to solve all our problems. Because, as Lawrence Lessig has talked about, there are a number of different dials: there's the culture, there's the economy, there's the technological architectures, and there's the law. And each of those contexts is shaping whatever technology gets created. So whatever those objective functions end up being, they're going to be driven at some point by some sort of human intention that's helping to shape whatever this ends up being as we move forward. But I feel like the overall conversation that we've had has been big and wide-ranging. There's so much more in the book that we didn't have time to cover, but I think we hit a lot of the points that I wanted to dig into a little bit more, which I appreciated.
But as we start to wrap up, I'd love to hear some final thoughts on where you see the ultimate potential of both XR and artificial intelligence, and what the fusion of those with the metaverse may be as we move into the future.

[01:32:31.652] Alvin Wang Graylin: Yeah, I do want to maybe follow up on what you said a little bit. Even though I seem like a techno-utopian, I actually feel that a lot of what Louis talks about will happen before we get to the better place. The world will probably get darker before it gets brighter from the long-term benefits of this technology, because of human nature not wanting to act when it isn't feeling an imminent danger or imminent risk. This is why it's good that we're telling both sides of it, so that hopefully we don't have to get to all of those risks that are mentioned before we make decisions, and so that we can show people the potential on both sides. I am optimistic in terms of the long-term endpoint of this technology. And I think that the two together will help solve each other's issues, right? AI is going to make the metaverse possible in all the ways that we talked about. And the metaverse will actually be an outlet for the energy of all of those displaced workers who used to identify with their labor and their career. Now they can all become owners of their own worlds and creators of art and music and other things that satisfy their emotional needs. So I think the two are highly complementary, and together they have the potential to provide a much better long-term outcome than we have today.

[01:33:54.330] Louis Rosenberg: So, I mean, I agree that there's a potential for positive outcomes. And I really do believe that AI plus XR will give us superpowers, genuine superpowers that will make us smarter and more productive and more efficient and make the world simply magical. Really, we will create this magical world. But for it not to go astray, it really requires that regulators and policymakers protect us in the short term from bad actors, from privacy abuses, from manipulation abuses, because those are very real threats. If we can contain those, then the near term of AI plus XR is potentially a magical world. And magical is really the right word, because when you combine the physical world with immersive media, really, anything is possible. You really can make a world that is adaptive and interesting and artistic and creative, and I think that that's possible. When we look a little further out towards superintelligence, I also think there's a role for regulation and policy, and I think the potential danger is so great that it really should be controlled and contained the way nuclear weapons are controlled and contained. We really, as a species, should not be running into AGI and superintelligence the way we're currently running into it. And it's possible that it could all work out great, and I hope Alvin's right that there's a clean path for superintelligence to be a positive. But the potential for negative is so strong, and, other than wishful thinking, there's really no reason to believe that it's going to work out well with superintelligence, that we should contain it and buy ourselves more time. Hopefully we can use those superpowers we create with XR and AI to manage it better than we currently are.

[01:35:49.765] Alvin Wang Graylin: Yeah, and actually I agree with that part. I don't think we should be rushing headfirst into this. In fact, I feel like right now we're running too fast, because we have a race condition between countries, and we have a race condition between companies as well. And honestly, if either a certain country or a certain company wins ahead of everybody else, we all lose, because that entity will be very, very tempted to use it for their own gains. And then we will be on a very slippery slope, and everybody will start being pushed to make rash decisions. I would advocate that we actually combine resources on a global basis to try to work this out, and try to create an Apollo project, not a Manhattan project, because the Manhattan Project made bombs; Apollo projects make more positive results. But make it not on a national level but on a global level: get the best minds in China, in the US, in the UK, in India, wherever, to come together with all the compute that we have so that we can create a single AGI. Because whatever single AGI comes, very soon it becomes ASI; at its speed of increase, it's not something that we will have years to manage. Whatever we get, we share among everybody on Earth. I know it sounds totally idealistic, but I think if we don't do it that way, we will go down that dark path for a long time before we come out on the other end. And hopefully the book gets some of these policymakers to think more about that path versus the current race that we are on.

[01:37:25.447] Kent Bye: Yeah. And Louis, you said "other than just wishful thinking." I think there are some people out there who don't believe that artificial general intelligence or artificial superintelligence will ever happen, or that it may even be possible. So there's that other possibility: that all of this we're talking about is completely speculative, and that we will never actually get there. Just wanted to put that out there.

[01:37:44.982] Louis Rosenberg: Alvin and I both agree, though. On that one point, we both agree that it's coming, and it's coming soon.

[01:37:53.530] Alvin Wang Graylin: It's just a matter of time. I think we all agree. And if you look at the Metaculous that it's, you know, 2000 or something, you know, research scientists, and they're saying it's going to come by 2030 or 2029 or something in that order. So, and just two years ago, it was 2050. So over the last two years, that has just accelerated down. And some people are saying in the next two, three years. So I'm probably not as optimistic as that, but I think by the end of this decade is a fairly high likelihood.

[01:38:25.214] Kent Bye: I like to believe that it's possible, but, you know, I'm on the fence; I don't think it's necessarily inevitable. I think there's an argument that it's beyond the scope of what any singular machine would be able to achieve. But I think in your book you're making a compelling argument that if it does come to exist, these are all the implications of it. So before we close, is there anything else left unsaid that you'd like to say to the broader immersive community?

[01:38:53.898] Alvin Wang Graylin: I think the immersive community should take a longer-term view in our decision-making, right? Because whether you're talking about AI companies or XR companies, everybody's still focused on the short-term view: how do you win, how do you get more money, how do you get more ads, how do you get more data? And that's going to push us down the negative path that we had with traditional social or 2D media. Given the power of this new medium, it will be able to manipulate people more than anything in the past, and we need to take that responsibility seriously.

[01:39:34.428] Louis Rosenberg: Yeah, I would say for the immersive community: we see the industry go through these bursts and bubbles, and people get excited and people get depressed. But people who've been involved for a long time can see that the industry is in the strongest place it's ever been, from my perspective: from a technology perspective, from a public awareness perspective, and from the benefits it's going to see from AI. So I think it's a very, very positive time in terms of our ability to unleash immersive media, whatever we want to call it, whether it's spatial computing or the metaverse or mixed reality. And I do think we are rapidly approaching a time when the long-term dream of being able to just exist inside of information, while still remaining in the real world, is arriving faster than even seemed possible a few years ago.

[01:40:27.780] Kent Bye: Awesome. Alvin and Louis, congratulations on this accomplishment of the book. You've managed to capture so much of each of your 30-plus years in the XR and AI industries, with lots of anecdotes across all these different industry verticals, lots of personal insights from science fiction and from your own experiences, and lots of big ideas near the end of the book as we look out into the future. And the thing that comes to mind is this idea from Carl Jung, who talks about holding the tension of the opposites and waiting for the incommensurate third to emerge out of those two dialectical polar opposites. I feel like there's a way that hopefully people will be able to stay in the middle of both the potentialities and the negative risks, to see that both are actually happening, the double-edged sword, the yin and the yang, and to try to find some sort of balance between those, but also to come up with solutions that are not only taking into account all those risks but also living into the flourishing of the potentialities. I think the book does a good job of that. It's probably biased towards the more optimistic side, but I think overall you're able to voice a lot of those dialectical perspectives. So thanks again for taking the time to write the book and to have this really engaging debate that metaphorically reflects the spirit of the book as well.

[01:41:43.123] Alvin Wang Graylin: Well, thanks for inviting us. It was a fun chat. So I look forward to hearing the feedback from your audience.

[01:41:50.772] Louis Rosenberg: Yeah, thanks for having us. This was a great session.

[01:41:55.638] Kent Bye: So that was Alvin Wang Graylin, as well as Louis Rosenberg. And we were talking about their book, Our Next Reality: How the AI-Powered Metaverse Will Reshape the World. So I have a number of different takeaways about this interview. First of all, there's a lot of sitting in the tension of the opposites. As I was listening to a lot of this conversation, I was trying to really articulate what it was, and I feel like there's a certain amount of alternative realities that get spun up sometimes where we imagine different futures. And sometimes those futures are bumping up against each other in a way that creates a certain amount of friction. The friction that I see here is that there are a lot of ideas about where AI could go, and the moral imperative to design this artificial general intelligence and artificial superintelligence and create this idealistic future that is this post-scarcity, post-labor economy, which requires a lot of political will for governments to step up and create the context for something like that, where you would have people really taken care of by the government, to implement things like universal basic income. I think there is value in projecting out into the future and not being constrained by existing cultural, political, or economic limitations, and really imagining possible futures. And I agree we need to have some sort of decentralized future that has people collaborating, commons-based ideas, and some way to keep humans involved, as Louis is saying. I think the problem that I have with some of these discussions is that sometimes it feels like the existing economic, political, legal, or cultural realities get collapsed, and it slips into this sort of techno-solutionism where artificial superintelligence is going to swoop in and kind of solve all of our problems.
And it feeds into this kind of digital utopianism, where there's a bundle of ideologies driving some of these discussions. It's been pointed out by Dr. Émile P. Torres, as well as Dr. Timnit Gebru, who call it the TESCREAL bundle: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. And so I'm just going to read this paragraph from Torres in Truthdig, elaborating on some of the different ideologies that this TESCREAL bundle includes: At the heart of TESCREALism is a, quote, techno-utopian vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe, and creating a sprawling, quote, post-human civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building a superintelligent artificial general intelligence. So I feel like some of these philosophies are embedded into some of the recommended reading, like Peter Diamandis's Abundance: The Future Is Better Than You Think, Ray Kurzweil's The Singularity Is Near, Nick Bostrom's Superintelligence, and Max Tegmark's Life 3.0. So some of these TESCREAL bundles of ideas get enmeshed into these conversations. And what I found myself noticing is this kind of unreality of a potential imaginal future of this post-scarcity, post-labor context where all of our problems have been solved, but the decisions that are made in that context are kind of backported to our existing reality now, so that we need to make these decisions in order to strive towards this utopic vision of the future. But the political realities of today raise the question: who is this going to benefit?
Sometimes these types of discussions around these technologies end up only benefiting a small portion of the people: the people who are able to advance their careers and their livelihoods and really deepen their flourishing, as opposed to the people whose jobs are displaced, who have to reckon with how they're going to survive in a world where all of their skills may be rendered irrelevant, where AI can basically replace them. Alvin does say that this is sort of a technological inevitability, but I think there's a deeper question around who's really benefiting. And if we are in a system where these major tech companies are really the drivers of these AI technologies, then all of the political and economic and cultural capital is being centralized into a small handful of players. What's the plan for redistributing that in a way that works in our current political reality? So there seemed to be a disconnect for me between talking about the current realities versus this kind of imaginal future that is this post-labor society. The book is actually structured as a debate, and Louis is coming in and providing a skeptical take on some of this. And at the end of the book, they pose the question: will this make the world a better place? All these AI technologies and everything else that they've been talking about throughout the course of the book. They go on to say, this is the core question this book has aimed to address by debating both sides of the issue. So I would take issue with framing it as just two sides of the issue, or even as just positive or negative. I feel like there are other perspectives beyond just the two sides. There's a multitude of many different potential futures and situated knowledges, and a pluralism of many different perspectives from many different people.
You know, some of the stuff that isn't really elaborated is the pure limitations of large language models, the fact that they can just make stuff up at certain points. There are people like Yann LeCun, who, at a gathering at New York University on the philosophy of deep learning, was sharing his unpopular opinion about large language models: he says autoregressive large language models are doomed, that they cannot be made factual, non-toxic, etc., and that they are not controllable. There's a tree of all possible token sequences, which is a huge space, but there's a much smaller subsection of correct answers within it, and he gives an equation showing that these are diverging exponentially and that it's essentially not fixable. So there are limitations to what large language models can actually do. At least there's debate happening among AI researchers at gatherings like the Philosophy of Deep Learning conference. But these types of critiques are not really interrogated deeply within the context of a book like this. So that's my biggest critique from reading through and sitting in the tension of the opposites of some of these perspectives and trying to integrate some of these different potentialities: trying to think about all the political, economic, cultural, and legal contexts in which this is evolving, and not losing too much sight of where we're at right now, not getting into future thinking that is too far separated from where we're at as a society. I think there's actually a lot of really insightful deliberation and discussion as they start to dive into the specific contextual domains, whether it's education or medicine, when it's really bounded by how these technologies are coming together and how they're being applied to something with a more narrow focus. I feel like there's a lot of deep insight there for where this is all going in the future.
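For readers curious about the shape of LeCun's exponential-divergence argument mentioned above, it can be sketched as a toy calculation. This is my illustrative reconstruction, not code or figures from the book or the talk; the per-token independence assumption and the error rate `e = 0.01` are hypothetical values chosen for illustration.

```python
# Toy sketch of the exponential-divergence argument attributed to
# Yann LeCun above: assume each generated token has an independent
# probability e of stepping outside the "tree of correct answers."
# Then the probability that an n-token answer stays entirely correct
# is (1 - e)^n, which shrinks exponentially as answers get longer.

def p_correct(e: float, n: int) -> float:
    """Probability that an n-token sequence stays correct throughout."""
    return (1.0 - e) ** n

if __name__ == "__main__":
    # Longer answers become exponentially less likely to stay correct.
    for n in (10, 100, 1000):
        print(f"n={n:>4}  P(correct) = {p_correct(0.01, n):.4f}")
```

Even a 1% per-token error rate leaves only about a one-in-three chance that a 100-token answer is correct throughout, which is the intuition behind the "not fixable" claim.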
Lots of really deep ideas in this book, and I'm really glad that I had a chance to talk to both Alvin and Louis and unpack it a little bit more. I would recommend folks take a look at it and read through some of the ideas yourself. And then also follow it up with this book by Lisa Messeri, In the Land of the Unreal: Virtual and Other Realities in Los Angeles, that I'll be unpacking here later this week, really digging into an anthropological perspective on XR, but also the field of science and technology studies, which is really trying to incorporate a lot of these deeper critiques, including the political, economic, and cultural aspects of all of these things playing together. So, pulling in a lot of this research that's coming from the science and technology studies field and anthropology as well. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
