I was invited to give an opening keynote talk about AI at a half-day seminar titled “Tech Talks: AI Tools, Tips, & Traps” on April 20, 2023 that was put on by San Jose State University’s King Library Experiential Virtual Reality Lab (KLEVR). I titled it “Some Preliminary Thoughts on Artificial Intelligence” (video version and slides), as it was an opportunity to put down some of my initial thoughts on the history of AI, the different ethical perspectives on AI, as well as some conceptual frames that I use to help make sense of the field of AI based upon my experiences at the International Joint Conference on Artificial Intelligence. Also, a lot of the conversations that I’ve had with XR artists and developers about AI tend to be primarily focused on the immediate utility of the tools rather than the broader ethical and moral implications of the technology. This talk starts to flesh out that aspect a bit more, but I’d also recommend checking out my interview with Access Now’s Daniel Leufer about the European Union’s AI Act from Voices of VR episode #1177.
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing on my series of looking at the intersection between XR and artificial intelligence, today's episode is a lecture that I had a chance to give at San Jose State University, giving some of my preliminary thoughts on artificial intelligence. So I wanted to include this talk that I gave in this series just because it's an opportunity for me to dig into a little bit of the history of AI, but also the larger conceptual frames that I start to use to understand artificial intelligence and how to make sense of it. I think there's this thing where there's all these polarized debates around AI. I mean, you have people like Eliezer Yudkowsky out there essentially saying that AI is going to kill everybody and that we should start to bomb GPU centers if people are training these large language models. I mean, he's calling for a moratorium on the development of AI. And so there's both a lot of doomsday prophecies and hyperbolic fears, but also larger discussions around the different ethical implications of the technology. It's a hot topic, and I think we're still trying to wrap our minds around some conceptual frames to understand what artificial intelligence even is, what the capabilities are, but also how we do it in an ethical way, with responsible innovation and talk of alignment. So lots of different topics that I wanted to just wrap my mind around as I talk to these different artists and creators. Often they're biased towards trying to find the utility in these different tools. And that's, I think, their job as creators and makers: to push the limits of what's possible using the technology. 
But I think this is just an opportunity to take a step back and look at what some of the different AI ethicists are saying and some of the broader conversations that are happening around AI. And this is a talk that is also available on YouTube with lots of different slides. And I'll put a link to those slides as well if you want to take a look at that. But I was invited by John Oakes to give this talk at San Jose State University. There's a whole King Library Experiential Virtual Reality Lab. They had these different tech talks, and this one was talking about AI tools, tips, and traps. And yeah, there's lots of other conversations that are happening. It was like a half-day seminar. So that's what we're covering on today's episode of the Voices of VR podcast. So this talk that I gave happened on Thursday, April 20th, 2023 at San Jose State University in San Jose, California via Zoom. So with that, let's go ahead and dive right in. My name's Kent Bye, and I've normally been doing the Voices of VR podcast, and I'm going to be sharing some of my preliminary thoughts on artificial intelligence. This is one of the first formal presentations that I've given on this, so I had to collate a lot of my thoughts, and so it's kind of hot off the presses, as it were. So I really appreciate the opportunity to present how I start to think about artificial intelligence. Like John said, I've been doing the Voices of VR podcast since 2014. I've recorded over 2,000 interviews and published over 1,200. And interestingly enough, back from 2016 to 2018, I recorded over 122 interviews with AI researchers, going to four different AI conferences: a couple of International Joint Conferences on Artificial Intelligence, the very first O'Reilly AI conference, as well as more of an artistic gathering of AI artists. And that was for me to try to get a sense of what is happening in the space and to talk to the researchers. And so some of my thoughts around AI are kind of formed by a lot of those discussions that I've had. 
And like I said, I've had a lot more publications on VR. It was difficult for me to maintain both of these, but as the latest innovations in AI have been happening, I've been wanting to go back into some of these archival interviews that happened from 2016 to 2018, because I feel like there's a lot of deeper philosophical underpinnings for trying to understand what artificial intelligence is. And I'll try to be sharing some ways that I start to think about it throughout the course of this presentation. So I'm going to be covering, first of all, how VR and AI are sort of sibling technologies; a brief history of AI, which is just trying to set the context of where we've been and where we're at now; a little bit about responsible innovation and AI ethics; and then mapping modes of intelligence, trying to get some conceptual models around understanding the nature of artificial intelligence. So I'll start off with VR and AI as sibling technologies, because I'm really coming at it first and foremost through this lens of virtual reality. So we have this idea of exponential technologies, where there are all these technologies that, as they combine together, are going to start to add on top of each other and have all sorts of new capabilities. Virtual and augmented reality is kind of like this shift from 2D to 3D and spatial computing. We have other things like robots, AI, big data, quantum computing, and synthetic biology that you can see as technologies that have the capability of making radical shifts in our culture. So to kind of elaborate on this idea of VR and AI as sibling technologies: for a long time virtual reality was bounded by a personal computer with a tether, with a wire. But with simultaneous localization and mapping, the SLAM algorithms, you're able to have a self-contained VR headset that is able to do this computer vision and keep track of where you're at. 
And so you're able to move through six degrees of freedom, which was a huge innovation, and for anybody who has used VR, especially without a wire, you can feel the phenomenological impact of what that means. Part of the reason why I shifted back into VR, away from AI, is because at the time, 2016 to 2018, there weren't as many experiential components for getting a direct experience of AI. Now with generative AI and ChatGPT, you have much more of a direct relationship to interfacing with artificial intelligence. A lot of times AI is kind of happening in the background. It's like this invisible layer, and it's kind of embedded into so many different dimensions of our lives, whether it's social media algorithmically driving what we're seeing, whether it's on Netflix or YouTube or Twitter, other social media, TikTok, certainly algorithmically driven, Instagram. So more and more, our lives are being dictated by these invisible fields of AI. But now we're at the point of actually having more of a hands-on, experiential component of AI. So back in 2017 at SIGGRAPH, NVIDIA was showing how they were able to create these virtual worlds in their Project Holodeck and actually train a robot with deep learning, and then they were able to take that same neural network training and put it into a physical robot. So again, these are sibling technologies: you have virtual environments where you're able to train the AI algorithms, and then with the AI algorithms and the deep learning weights, actually deploy that out into the world. So you have this back and forth between the two. Also, going back into the very early days of deep learning, especially deep reinforcement learning, you see a lot of the early research with video games, especially because there's a very empirical number where you can measure performance and have benchmarking on how well these different deep learning algorithms were able to play different video games. 
So again, you can think of these video games as virtual worlds, and the sibling nature between these virtual simulations and the training of AI goes back throughout the whole history of artificial intelligence. So, a brief history of AI. I sat down to do this mostly because things have kind of exploded over the last couple of years. I was tracking it very closely from 2016 to 2018, and then there were different milestones that happened over the last couple of years, and also a lot of debates and polarization around the topic. I figured that just telling the history of AI in itself is going to help set the context for other folks later in the day. So this is at least my first crack at it; it's not going to be comprehensive, it's going to be a brief history, but at least some of the things that I look to in understanding how this has all been developing. So you can go all the way back to 1956 with the Dartmouth Summer Research Project on Artificial Intelligence. This is when the term AI was first coined. And so, just think back: this is before you even have widespread computers, and you have very early researchers coming together to start to think about this. And you could probably predate it with the Alan Turing paper in 1950, but I think the coining of artificial intelligence in 1956 is a nice starting point. And like I said, a lot of computing was still developing. So you can just imagine, in the very early days of computing itself, you have folks thinking about thinking machines and artificial intelligence. I think a big turning point was in 1997, when Deep Blue beat Kasparov. That was a huge shift in terms of us realizing that there are some very narrow domains, like chess, where computers have consistently been better than humans. And these algorithms have been helping people improve their chess, and so the level of chess over time has been increasing. 
So you see this relationship where AI is kind of an assistive technology in some ways, but also, just when it comes to playing chess, the way that these computers think about chess is beyond any human comprehension, and their levels are just so much higher. Then you have NVIDIA, who started with 3D graphics in 1993 and came out with the GPU in 1999. And just the fact that you have these innovations in computing and the parallel processing of data has been a huge innovation and catalyst for all of what's happening with artificial intelligence. What started with gaming, with having better and better graphics, got to the point where NVIDIA saw this opportunity of using these GPUs to train deep neural networks, and that was actually in 2012 with the AlexNet neural network. You can see the rise of computation increasing from 10 flops to 10 billion petaflops, and you have this brief timeline of all these mile markers, but the consistent thread here is that over time we've had this kind of Moore's law increase of compute power, both in CPU and GPU, but especially GPU when it comes to deep learning and the parallel processing that happens there. And you can see AlphaGo, AlexNet, GPT-3: all these are taking huge amounts of data and training these networks on them. Then ImageNet is something that was started in 2009, and that's a big driver for why artificial intelligence has been able to be so effective: folks have curated and labeled a lot of this data to be able to have these benchmarks. ImageNet basically went out and used Amazon Mechanical Turk workers to identify what's actually in these pictures. And anybody who's filled out CAPTCHAs has seen how they've kind of decentralized that now: to prove that you're not a robot, you're doing this training of AI as you label what's in these different images. 
I guess the theme is that a big driver of why there's been so much innovation with artificial intelligence is that so many of these data sets have been made available. And the deep learning revolution starts in 2012. Again, a lot of these algorithms were created earlier, back in the 80s and 90s, but it was the combination of both the GPU power and the data that brought this breakthrough in deep learning that really started in 2012 with this paper. And you've seen just this exponential growth since then, with a lot of sharing of research on arXiv, and it's really been an amazing last decade plus of innovation in the field of artificial intelligence. So, when I started to go do interviews was after AlphaGo beat one of the champions of Go, Lee Sedol. And again, this was an instance where it was a lot more complicated than, say, chess, which was a lot more bounded. There was a lot more open-endedness with Go. And they had to actually combine a bottom-up machine learning approach with a more top-down, hierarchical Monte Carlo tree search. So in other words, you can think of the right brain as sort of understanding different patterns, and machine learning is much like that, and the left brain as more of the planning, being able to have understanding and knowledge representation. And so they kind of had a blend in this of both the top-down and the bottom-up. And that was one of the key innovations for why AlphaGo was able to have this breakthrough, which then led to AlphaZero and other things that are happening in the context of deep learning. And then large language models start up from 2018 to 2020. You see BERT and then GPT-2 and GPT-3. GPT-3 actually won a best paper award at NeurIPS, which is one of the big machine learning conferences. So machine learning is a big driver of these large language models, of both ChatGPT but also generative AI. 
It's essentially scraping the internet and doing this kind of autoregressive prediction: what is the most probable next word? So it has to sequentially come up with the next probable word, and it can't skip ahead. It just has to keep going. There's no planning. There's no real deep understanding. It's just taking the context of words and then saying, okay, here are the other words that are associated. And so it gives us the illusion that it has a lot more understanding than it actually does. You know, a big part of it is what data it's being trained on. Certainly, there are a lot of gaps and biased data; it's not complete. It's going to be beholden to whatever data it's being trained on. So if there's a lot of sexism or racism within the data set, then you have a lot of sexism or racism within ChatGPT. And so then you have these other layers, like reinforcement learning from human feedback, that try to tune and align some of these systems to more of our values. But there are a lot of complaints that large language models are always going to have these hallucinations and are never going to be able to be aligned perfectly to all human values. It's exhibiting a lot of amazing capabilities, but there are a lot of deeper problems, and there have been a lot of debates around how far we can actually go. On the one hand, there's been a lot of surprise at how far it can go, actually demonstrating a lot of amazing capabilities. But at the same time, there are a lot of other aspects of AI, in terms of common sense reasoning and planning and deep understanding, and it doesn't have any sensory experience of the world. So I'm sure other presenters will be going into both the exalted potentials and also the limitations of large language models. But I think that was certainly a big innovation. So this is a famous paper calling these large language models stochastic parrots: essentially, that it's just predicting the most probable next word. 
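To make that one-word-at-a-time loop concrete, here is a minimal sketch of autoregressive sampling over a toy bigram table. The table, vocabulary, and `generate` function are made up purely for illustration; a real large language model replaces the lookup table with a neural network conditioned on the entire preceding context, but the sequential, no-lookahead sampling loop is the same shape.

```python
import random

# Toy "language model": for each current word, a probability distribution
# over the next word. This stands in for the neural network; it is a
# hypothetical example, not how any real model stores its knowledge.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
}

def generate(max_tokens=10, seed=0):
    """Sample one word at a time, left to right, with no planning or lookahead."""
    rng = random.Random(seed)
    tokens, current = [], "<start>"
    for _ in range(max_tokens):
        dist = BIGRAMS[current]
        words, probs = zip(*dist.items())
        # Pick the next word from the conditional distribution only --
        # the model never revises or skips ahead.
        current = rng.choices(words, weights=probs)[0]
        if current == "<end>":
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate())
```

The point of the sketch is that everything the "model" produces is just locally plausible continuation; any apparent understanding lives in the statistics of the table, not in a plan.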
It doesn't really have any deep understanding. People argue whether or not these things are conscious. I don't personally think that they are, although I have a different view about consciousness itself. There are a lot of debates around these things, but I'd just point folks to the fact that there's this active debate between this group of AI ethicists and other folks who are promoting large language models for all their amazing capabilities without fully disclosing all their limitations. So this is a good paper to look into to get a little bit more context on some of those limitations. All right, so that stochastic parrots paper actually led to Google firing some of their AI ethicists. A lot of these big companies were betting a lot of the future of their company on these models. And the fact that there were AI ethicists who were raising the alarm about their limitations, and then they get fired, is not a great look. It also speaks to how ethics and safety have not really been prioritized within a lot of these systems that have been deployed, which I'll be getting into a little bit more as the history unfolds here. But then you have DALL-E as a generative AI model that was announced by OpenAI in 2021. And so you start to see zero-shot text-to-image generation. Basically, it's scraping the internet, getting these images, taking the captions, and associating the two. And through its ability to have these higher-level features, to understand what they call this higher-dimensional latent space, it creates this archetypal idea of what these concepts are. And when you put these words together, it sort of blends together all those archetypal potentials and then spits out what, for us, looks like kind of a brilliant synthesis of a lot of those different ideas and concepts. And so generative AI has taken what's happening in AI and actually launched it into consumer-facing products that are really kind of blowing people's minds. 
So if you look at the Gartner hype cycle from 2021, you can see that generative AI is right at the peak. And I'd say we're certainly near the peak of the possibilities, but after that comes the trough of disillusionment, the slope of enlightenment, and the plateau of productivity. So the trough of disillusionment is where you see a lot of these different limitations of generative AI systems, as well as large language models. But then you have basically the consumer launch of Midjourney, Stable Diffusion, and DALL-E in 2022, around July and August. There was a lot of stuff that was available in private betas. I think it was first for me in March of last year that I heard a lot of my friends playing around with Midjourney. I was able to get onto it and also experimented. And it's definitely one of those technologies that blows your mind in terms of opening up new vistas of creativity that you never even thought were possible. So on the one hand, I listened to the artists and what the artists were doing. But on the other hand, there are a lot of questions around the ethics of how this data was collected: was there copyright involved? And then even with Getty, to what degree did Getty have a right relationship with image release forms and everything else? And so there are a lot of debates around this. Kirby Ferguson has a whole YouTube series called Everything Is a Remix. And so there are arguments saying, well, everything's a remix, and so this is just another remix, and it should just be fair use. But then to what degree are these actually creating more than just a copy, where it's actually plagiarism in certain ways? And so that line between plagiarism versus fair use, I think, is an active debate, as is the question of what's in the latent space. It's a debate that will continue to unfold. I think the other aspect of the ethical side is just the limitations of some of these systems. 
And so you have the bias in AI. With a lot of the generative AI systems, if you just type in CEO, then a lot of the systems would have 100% of their output be a male, oftentimes a white male. This post goes through and does a test, and pretty much all of them, except for DALL-E, which had around 80% men and 20% women, were mostly 100% male. So again, the idea is that these large language models are taking what's on the internet, taking what biases are already there, and amplifying harm. And I'll be getting into the utilitarian arguments around that in a little bit here. There's also a documentary called Coded Bias that I highly recommend folks check out. It goes into how facial recognition wasn't necessarily originally trained on people of color, and a lot of their faces wouldn't be detected, which meant that if you start to use facial detection in the context of policing, then you have a lot of misidentified people, with the consequence that they end up going to jail. There's the utilitarian argument that folks will often make: well, this works 95% of the time. And utilitarianism is all about maximizing what is the most utility. But if that 5% is folks who are already marginalized, you're basically taking something that's already systemic racism, systemic sexism, and amplifying it. And so the folks who are already marginalized are facing even larger dimensions of marginalization. And so instead of taking a utilitarian approach, you really have to take more of a human rights approach, a deontological approach, when it comes to ethics, and really understand what the data sets are and what the biases are. Because every data set is going to have some degree of bias. And part of the problem with ChatGPT and OpenAI is that they're not even disclosing what the data sets are or what the architectures are. 
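An audit like the CEO-prompt test mentioned above boils down to a simple tally over labeled outputs. The numbers below are made up to mirror the roughly 80/20 split the post reported for DALL-E; a real audit would label actual images generated by each model.

```python
from collections import Counter

def label_shares(labels):
    """Return each label's share of a set of generated outputs, as percentages."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: 100 * n / total for label, n in counts.items()}

# Hypothetical gender labels for 100 images returned for the prompt "CEO",
# mirroring the approximate split described above for one model.
labels = ["man"] * 80 + ["woman"] * 20
print(label_shares(labels))  # {'man': 80.0, 'woman': 20.0}
```

The hard part of a real audit isn't the arithmetic; it's deciding who labels the images, with what categories, and across which prompts, which is exactly why undisclosed training data makes these systems so hard to evaluate.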
So we actually have no idea what other types of implicit biases may be embedded in some of these large language models. And, you know, there's a report on the state of AI in 2022 that shows this huge discrepancy between the different types of researchers in the field: there are something like 30 times fewer folks working on AI safety, which is just to say that these types of ethical issues are not valued and are deprioritized within the industry at large, which is super concerning. And then you have the push forward of launching OpenAI's ChatGPT, based on GPT-3.5, as a prototype in November of 2022. And within two months, it had over a hundred million users; it's one of the fastest services ever to reach a hundred million users, within the course of just a couple of months. Then you have a lot of discussions. There's the Bankless episode "We're All Going to Die." This is the type of rhetoric that we get from Eliezer Yudkowsky, where he's saying that AI is going to be so uncontrolled that we're basically on this inevitable path towards death, because if AI is so much smarter than us, then it's going to lie to us and it's going to kill us all. And you get these really ridiculous editorials in Time magazine from Yudkowsky, where he's basically saying we need to regulate GPUs, and if there are people who are not being regulated and are using GPUs to train these models, we need to literally start to bomb those data centers. I mean, this is the type of extreme rhetoric we have, going from zero to AI is on the cusp of destroying us all. Now, I do think there are some safety concerns, but I'm not in agreement that we're inevitably walking into our death because of not being able to align artificial general intelligence. So it's been really surreal to see this explosion, with OpenAI then launching GPT-4, and there are literally no details as to what the architecture is. 
OpenAI used to be open; now they're completely closed, which means that we can't actually evaluate some of their claims. Just as an example: if it's able to do coding examples from a certain test, well, we don't know if the model was trained on that data, because you can't test something on the same data it's been trained on. But if you don't know what it's been trained on, then you can't actually assess what its capabilities are. So it's actually given the AI ethicists a big problem in trying to rein in some of the hyperbolic claims about what the actual capabilities are, because we don't know what the testing data is. We can't say one way or another whether these are actual capabilities, because if it's trained on the data, it may just be coming from memory rather than generalizing to prove all these other capabilities. Okay. So OpenAI should be known as ClosedAI, because, again, they originally thought that they were going to promote all of these different open things, but over time they realized that there are these implications, you know, like ChatGPT being able to tell people how to create nerve gas. Maybe they didn't realize that there was that degree of safety and security issues. And so when you open source something like that, you're potentially creating these weapons that could facilitate all those dimensions of terrorism. So maybe it's a good thing that they're closed; that's open to debate, again. But then, as we go forward, there's this "Sparks of AGI" paper from Microsoft, which was really going into all these different dimensions of the capabilities of these large language models. There are a lot of critiques around that, saying, first of all, that a lot of these definitions of what intelligence is are based upon more eugenicist definitions, using the bell curve and racist dimensions of how they're defining intelligence. 
And so there are a lot of AI ethicists who are critiquing this paper from Microsoft, saying that, you know, Microsoft has this financial interest in OpenAI. So there's a lot of conflict of interest in them doing basically a research paper that's also a lot of marketing, with a lot of these hyperbolic claims that may not be substantiated. And then there's a whole open letter saying we should just pause all development on all these large language models because of their danger. But then all these AI ethicists are saying, well, a lot of the people who are writing that are actually implicitly propagating dimensions of AI hype. And then Gebru and Torres have this whole acronym of TESCREAL, which is transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism, digging into a lot of the philosophical ideologies that are driving some of these discourses and trying to break down some of the more toxic elements of that. So that's something to look out for as well. And then the Center for Humane Technology is saying, okay, the first wave of social media had all these dimensions of AI. And it was saying, okay, we're just going to give everyone a voice, connect with your friends, join like-minded communities. And then at the end of the day, you have all these different dimensions of addiction, disinformation, mental health, censorship, speech, doomscrolling, all these different unintended consequences of social media that has a lot of artificial intelligence on the back end. And so they're saying that this was our first contact with AI, and our second contact is with these large language models. You know, at the surface, it's making us faster, it's boosting creativity, all these things that are really positive manifestations. But on the back end, there's maybe job loss, safety and security issues, reality collapse, exponential blackmail, all the different harms that are coming from this as well. 
All right, so I'm going to go very quickly through some of the responsible innovation and AI ethics. So there was a paper by Sally Applin and Catherine Flick critiquing a lot of the different responsible innovation teams within the context of Meta and Facebook. Basically, their point was that true responsible innovation is supposed to have a big red button: if there are enough ethical concerns, there should be power in the responsible innovation teams to stop something from being delivered. But a lot of times the deadline for when things are shipping is already set. And so there's a lot of just ethics washing when you have these responsible innovation teams, because they don't actually have any power to change anything. They're basically just kind of rubber-stamping stuff where it's already been decided that we're going to ship things, and there's nothing they can do to stop it. And then after that, Meta just kind of fired all their responsible innovation teams. And so there are no separate responsible innovation teams; it's all up to the engineers who are responsible for delivering things. So literally, responsible innovation as a practice within some of these companies has just been dissolving, reflecting some of those other graphics that I showed earlier. You have this idea of anticipate, reflect, engage, and act. Ideally, you're supposed to have this loop of trying to reflect and engage on all these different things. There's a debate that you have to ship some of these AI systems in order to understand what their capabilities are, but then at the same time, as you ship it, you're basically doing undue harm to certain populations. And so should you actually wait and slow things down? 
So this whole idea of responsible innovation is that the engage part means you should have some engagement with the populace, and not just some sort of tokenism or non-participation, but actually have the citizens and people have some say in that. But at this point, these are all driven by companies, where there's no say for anybody from the public as to whether or not these systems are going to be deployed. And then, like I said, there are these stage gates of innovation, where there should be some degree of deciding whether these should be shipped or not, but none of that is actually being implemented. They're just kind of rushing and shipping all of these products out there in this rapid iteration. I would point folks to the AI Act. This is from the European Union. I did an interview with Daniel Leufer from Access Now breaking it down, both the biometric data aspects, but also a lot of stuff about the AI Act in terms of how self-regulation does not work; you have to actually pass regulation. The United States is 5 to 10 to 20 to 30 years behind what the EU is doing. And so the AI Act is actually trying to create these tiered systems of risk. There are certain applications of AI in the European Union that are just going to be completely prohibited as unacceptable risk, especially when it comes to police use of artificial intelligence. There are some that are going to be high risk, where you have some obligations of disclosure of data and reporting to the European Union, and then some other transparency obligations, where you would need to disclose to people that you're using a chatbot, as an example. And some of these, with the deployment of GPT-4, may actually be shifting around a little bit. Okay. Now to the last section of mapping modes of intelligence. I'll try to get through this relatively quickly, just so we can keep moving on with the rest of the program. 
So there are a lot of different standardized tests that people look at to say, here's the performance of large language models. And I just really appreciated this comment: it's amazing how many brilliant scientists will look at a large language model passing a standardized test and think, "Wow, this computer is really smart," rather than concluding that these standardized tests are actually very bad at measuring intelligence. I think the latter is probably more accurate: just because something passes a test doesn't mean it's necessarily smart. So we have the Turing test, and I think most of the chatbots these days could probably pass the very early version of it, where you just have to indicate whether the entity on the other side is a person or an AI. At this point, I think it's going to be very difficult to figure that out with some of these large language models and ChatGPT. But there's an AI Magazine issue from 2016 that actually goes into the next generation of tests beyond the Turing test, and I recommend folks check it out, because it looks at a lot of other ways of making sense of the next Turing test to understand how AI is progressing. And then there's a whole Philosophy of Deep Learning lecture series that happened, with all these philosophers talking about deep learning; I highly recommend doing a deep dive into a lot of these different things. There's a slide from Yann LeCun where he says prediction is the essence of intelligence, defining intelligence as being able to predict what's going to happen in the future. And there's Gardner's theory of multiple intelligences, where you have many different types of intelligences, not just one mental capacity: linguistic, interpersonal, intrapersonal, logical-mathematical, musical, spatial, naturalist, bodily-kinesthetic, and existential intelligences.
And this is from my VR work, where you have different dimensions of presence: active presence, mental and social presence, emotional presence, and embodied and environmental presence. This is helpful for me to understand some of these different dimensions, because VR is combining all these different media. You have the agency aspects of video games, where you're expressing your will. You have language and the ways you communicate with other people, through social media, books, phones, and podcasting. Then you have emotional presence, the building and releasing of tension, with film as a medium that modulates emotions, as well as music. And then you have the dimensions of virtual and augmented reality, with theater, architecture, spatial experiences, and sensory design. So when you think about artificial intelligence in this context, it's sort of taking all of these things in together, with agents that can make choices, take action, and have some degree of emotional immersion and sensory experience. And again, you have this progression of all these different communication media over time, where each subsequent medium integrates the previous media, and machine learning and artificial intelligence is kind of the next step: virtual beings and the different dimensions of what's next with the future of generative AI, where you can say, I want this type of immersive experience, and have the generative AI actually create that for you. And then the quadrivium is something that I use to help map some of these things out: math, music, geometry, and astronomy, the four disciplines that were used. You can think of math as pure number, abstractly just number itself, geometry as number in space, music as number in time, and astronomy as number in space and time.
So you have these abstractions and forms, the objects moving through space, the world body, architecture, unfolding processes. Going back to these different aspects, you have the mental abstractions of the linguistic, the communicative, the relationships and social dimensions, and the logical-mathematical aspects of that air element. You have the earth elements of both the bodily-kinesthetic, connected to agency, as well as the existential and freedom, the spatial, the naturalist, the interpersonal, and the musical. And there are different cognitive architectures, like the one from Yann LeCun, that try to map out all these different dimensions, not just the bottom-up of machine learning, but also planning, perception, and acting. So you have action, planning, configuration, the critic, memory, perception, and world models. And Yann LeCun says that emotions are actually the anticipation of outcomes, which is an important part of understanding how intelligent agents are fusing all these things together. The last thing I'll go through here is that the International Joint Conference on Artificial Intelligence has all these different disciplines and domains. You hear a lot of buzz from the machine learning side, but there are all these other aspects that are also part of the community of artificial intelligence. So you have everything from agent-based and multi-agent systems, as well as most of the stuff that you see, which is computer vision and machine learning, again that bottom-up approach. And the stuff that's top-down includes things like knowledge representation and reasoning, natural language processing, planning and scheduling, constraint satisfaction and optimization, and the multidisciplinary aspects of these topics and applications, humans and AI, search, and uncertainty.
And then, you know, there's AI ethics, trust, and fairness, plus AI for good and the AI arts, which I put under emotional presence, but they can be seen as separate things as well. Anyway, these are just some ways that I have to help understand the landscape of AI. So what I was able to cover today was VR and AI as sibling technologies, a brief history of AI to get a bit more context, some responsible innovation and a little bit about AI ethics, and mapping some of the modes of intelligence to help understand the different dimensions of what AI is and how to think about it in this kind of multimodal way. As you study AI, you learn more about what it means to be human, which is a big takeaway for me. There's this very interesting way in which, as you study AI and try to map all these things out, you're also mapping out different dimensions of human consciousness and different ways that our bodies work as well. So with that, again, I'm Kent Bye. I do the Voices of AI podcast, which I hope to restart here soon, and you can scan this QR code or look on my SlideShare for my slides and more information. So thank you. So that was a talk that I gave called Some Preliminary Thoughts on Artificial Intelligence, and I wanted to get some of my ideas down on AI.
And one of the things that surprised me as I was putting together this half-hour talk was that a good half of it was just going through the history, looking at the evolution of the technology and trying to understand its growth, but also trying to understand, as we move forward, some ways to look at the different debates that have been coming up: whether or not AI is going to kill us all, what the different alignment issues are, and also the different AI ethicists who are out there giving contrary perspectives on the technology, its limitations, what it is and isn't capable of, and some of the different harms as well. The conversation I did with Daniel Leufer back in episode 1177, hearing Access Now talk about the human rights approach to AI and looking at the tiered systems of the AI Act, I think is actually really helpful for folks to look into when it comes to regulation. The European Union has been a real thought leader when it comes to thinking about the technology and the harms and how to actually regulate them. As all these different companies come up with AI ethics statements, at the end of the day those statements are not enforceable and don't really amount to much. It's basically, here are some normative standards that we roughly agree to abide by, but they're not really binding, and it's hard to enforce them. And so at the end of the day, what you ultimately need is to understand where this needs to be regulated. So you have, on the one end, the makers and creatives who are trying to push the limits of what's possible, and on the other, folks who are trying to understand the harms that are happening from these different technologies.
I guess I just wanted to put out this talk as my exploration of some of those different things, as I continue to have these conversations where a lot of the artists and creators are putting those questions off to the side at the moment, because they're trying to focus on how to push the technology forward, and some of these things are out of their hands. In the panel discussion that I had at Augmented World Expo, Alvin Graylin and I get into it a little bit, exploring these different dialectics: what it takes to push the technology forward, and also how to be in right relationship with these technologies in a way where you're not just overlooking the different harms and ethical implications, but also trying to find the best method to address these issues, whether that's internally from the responsible innovation teams within these big companies, or from the broader field of AI ethics and the regulation being put forth by entities like the European Union, and how that starts to ripple out into United States regulation as well. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.