#898 Sundance: Questioning AI of Chomsky Trained with 60 Years of Data in ‘Chomsky vs Chomsky’

Chomsky vs Chomsky: First Encounter is a virtual reality experience where you get to interact with a virtual representation of Noam Chomsky and essentially ask him any question that you want to. The system is trained on a corpus of 60 years’ worth of Chomsky interviews and data covering the wide range of his expertise as a linguist, philosopher, cognitive scientist, social critic, and political activist. As with all natural language processing applications, the detection and comprehension of the input speech can be hit or miss, and then there’s the question as to whether or not your inquiry will be matched up with a contextually relevant response that’s synthesized in real-time.

So it’s still early days on the path toward the dream of artificial general intelligence, and constraining the bounds of possibility within an immersive narrative can help showcase what AI can do successfully. When an inconsistent or incomplete answer came back to one of my inquiries, it was a stark reminder that I was interacting with a primitive machine that had a hard time understanding my deeper meaning. But when there was a contextually-relevant, direct response to my question, and sometimes even a joyfully novel or interesting one, it was a magical experience that increased my sense of social presence and gave me some early glimmers of a feeling that I was interacting with an intelligent entity.

However, this form of plausibility illusion is like a house of cards: it doesn’t take much for my expectation detectors to get triggered, my suspension of disbelief to collapse, and the limitations of AI to come back into view. Perhaps part of the point is to demystify what AI is actually capable of, but it’s still worth iterating on the models and incrementally increasing their capacity, accuracy, and training. This was one of the unique experiences at Sundance this year in that each interaction was helping to train and improve the underlying models.

I had a chance to do an interview at Sundance with lead artist Sandra Rodriguez, interactive developer Cindy Bishop, and visual designer Michael Burk to unpack the evolution of the project and their experiential design process. Combining interactivity and user agency with vignettes of immersive stories can be a challenge when you’re working with a machine learning black box that makes it hard to predict how it’ll react to a given input at any given time (and how even that will change over time). It’s a moving target, and they shared some of the milestones they were able to achieve and what’s still to come in order to have more of a memory and context-preserving functionality in the future.

Like I said, it’s still early days for these types of AI-driven narratives with virtual beings and conversational interfaces, but they’re continuing to learn and get better over time, and so it’s important to keep iterating, experimenting, and trying to find the right constraints and narrative contexts in order to hide some of the current limitations in comprehension and contextually-appropriate response. And I’m glad to see groups like the National Film Board of Canada, Schnellebuntebilder, and EyeSteelFilm continue to experiment and push forward what’s possible with the technology that exists today.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s the trailer for Chomsky vs Chomsky: First Encounter

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series of looking at the experiences from Sundance 2020, specifically the immersive storytelling innovations, the technological innovations, as well as the experiential design process. Today's interview is with the creators of the piece called Chomsky vs. Chomsky. So you go in and you ask this artificially intelligent expression of Noam Chomsky any question you want. There's this visual representation of Chomsky and he's kind of morphing around, beautiful aesthetic, creative coding design, a lot of interesting sound design aspects. And they're trying to create this experience where you can ask pretty much any question you want, trained from this whole corpus of data from Chomsky. He's been a public intellectual for well over 60 years now. So they took all this data and information, put it in there, and tried to balance how to create a narrative experience out of you going in and being able to ask a renowned public intellectual anything that you want. And then they're also trying to weave together different stories. And so they have these little vignettes where Chomsky could say something profound, and that's something that pretty much everyone is going to get. So, yeah, this was the Chomsky vs. Chomsky piece that was premiering there at the Sundance New Frontier. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Sandra Rodriguez, Cindy Bishop, and Michael Burk happened on Monday, January 27th, 2020 at the Sundance Film Festival in Park City, Utah. So with that, let's go ahead and dive right in.

[00:01:38.585] Sandra Rodriguez: So my name is Sandra Rodriguez. I'm a creative director based in Montreal. And I also work part-time teaching XR, so extended reality, at the MIT, Massachusetts Institute of Technology. And that's it. I work in VR, AR, and now AI, exploring new realities and new tools that we can use to explore new realities.

[00:01:59.761] Cindy Bishop: Hi, I'm Cindy Bishop. I am an interactive developer and artist. Currently, I'm at the Media Lab at MIT as a developer.

[00:02:08.592] Michael Burk: My name is Michael Burk. I'm a media designer and artist based in Berlin at the studio Schnelle Bunte Bilder.

[00:02:15.641] Kent Bye: Great, so I'm wondering if each of you could give a bit more context as to your background and your journey into immersive media.

[00:02:22.544] Sandra Rodriguez: Well, so I worked as a documentarian and filmmaker for 16 years. And documentary was a way for me to just explore the way we humans share everything. So from dreams to our lives to everyday tasks and struggles that we have. But there's like this thread in using tools, even cameras are just tools we're using to share our human stories. A couple of years ago, I started working on web docs and interactive docs. And while I was working as a filmmaker, I was also in parallel doing a PhD in sociology of new technology. And while I was doing that, I was working as a UX consultant for companies that did gaming. So the three lives were really three different lives. But I felt a thread of just using these tools to tell stories, and it just felt so natural to combine them. So from web docs I went into interactive media and whenever there's a new toy around I'd like to use it to see how we can share stories with it and then started exploring immersive realities, immersive media and see how we could produce projects this way. Now AI came along and that just seemed like another challenge so I thought that was going to be a good challenge to tackle and why not try to mix it with people who are really skilled at what they do.

[00:03:34.219] Cindy Bishop: Let's see, I got into software development because I'm good at solving problems and puzzles. However, I've always had a strong artistic strength and in fact would have pursued a career in art had it been, what do you call it? Lucrative or pay money. Lucrative or pay money, yeah, thank you. Sorry, long day. I got my grad degree in dynamic media, and at that time, 10 years ago, all of a sudden, guess what? Art is now representable by bits and bytes. As Lev Manovich says, it is all now data. Photography, video, it's all bits and bytes. So, as a result, I took a break from the corporate world and I went out to Burning Man. And I was an OpenDoc Lab fellow at the time. And I was trying to think about what kind of tools I could use to leverage artists. What could I build for artists in terms of describing or creating new XR, VR stories? So, I developed a couple of tools: Neurotopia, which basically gave authors and storytellers a way to create content on the web that was visualized in WebVR. And then I hit on VR Doodler, which unfortunately hit at about the same time as Tilt Brush. So this was a tool where you could draw in 3D on the web and then render it in VR or AR at this point. So then, after that, I said, well, that's not going to work for me for now, so I'm going to hang out here at the Media Lab and see what kind of projects come up. Sandra and I had met at the OpenDoc Lab a couple of years prior. I was a CogSci major in college, and the Chomsky project was a great fit for both of us.

[00:05:15.007] Michael Burk: So, I think I started my career studying interaction design and also visual communication, and my work has always been centered around creative coding. So, my education is in design, but I'm also building my own tools. And I think now, with the work at Schnelle Bunte Bilder and also in the course of my career, I've moved more and more away from screen-based interactions and more into spatial interactions, interactive installations, which can also be VR. And I really like to combine these digital worlds with physical installations also, which can create really nice narratives. And I think that's also something we are going to explore more with our project here.

[00:05:59.833] Kent Bye: Yeah, maybe you could also give a bit more context as to the Chomsky vs. Chomsky project and how that came about.

[00:06:05.179] Sandra Rodriguez: So the genesis comes a little bit from these multiple lives intertwined. Partly, teaching at MIT came like a surprise, but a wonderful gig, one where you feel a bit like an imposter. You get to teach young geeks, very, very smart geeks at the Massachusetts Institute of Technology, how to use something inherently human, and that's their creativity. I'm there trying to get them to do immersive projects and, you know, just use the skills they have at hand and not forget that what makes them unique is something they should use to tell stories with these new technologies. And while I'm there, there's this young researcher who had a great idea but a very provocative one, and he asked me if I was interested in doing a documentary on his objective. And his objective was to find a way to map the way Chomsky thinks. And he thought, he's such a fan of Chomsky, that with all the data he could gather, he was sure he could nail down the way Chomsky thinks. So I asked him, are you trying to replicate Chomsky? And he said, no, of course not. Are you trying to create a Chomsky bot with this material? Of course not. Why not? Because Chomsky would hate it. And I thought, what a provocation. This is amazing. You're telling me all this data is available because it's online, because Noam Chomsky believes in the sharing of knowledge, and he pushes this bottom-up sharing of knowledge. So for 65 years, he's been giving interviews. All of this material is online and he doesn't ask for the rights, because he believes in the sharing of knowledge. And then we have this technology that one could use to try to replicate who he is, maybe against his own will. Now how would he feel about that? And what does that mean for us as humans to really obsess about replicating individuals? And what does it mean? Could we really replicate somebody's mind just by using that data? So this young researcher, Yarden Katz, told me that he had spoken with Chomsky about it and that Chomsky was really not buying it, that you could replicate someone's mind with that data. So I wanted to watch the interview that Yarden Katz did with Noam Chomsky. The more I listened to the interview, the more I loved Noam Chomsky, the more I heard a really grandfatherly voice. Woven into his answers was a message of: you silly kids, don't you realize you're missing the point? And the point is that I've been studying the mind for 65 years and I still don't know how it works. We have been studying bees for far longer than that and we still don't know how their flight patterns work. So you can call AI whatever; it's a tool that's very useful. Why do you obsess about this being you, or better than you, or comparing yourself to it? Maybe you're missing the point. And I kept thinking, wow, in just this small interview, there's so much of a message that we could all learn from.
So because I like to disrupt technology, to use the technology to talk about the technology, and try to get a public to think about the technology by using this technology itself, I thought this is a great way to use Noam Chomsky both as a case study, since all his data is out there and he's one of the most digitized living intellectuals, so his corpus of data is really vast and we can really use that and try to play with it. But at the same time, Chomsky the man himself has been really pushing all his life to let us wonder and be amazed about our own brains and minds, how we think, how little we know about them, and how much we still have to explore, but not in a negative sense of telling us how limited our knowledge is; rather, making us wonder about all the puzzles that we still have to solve. So I thought, we have a great guide, a great kind of guru, and then a great case study, and all of these in the same persona. Maybe we can create Chomsky AI versus Chomsky, or maybe it's the same person. Maybe we have two subliminal messages at the same time. So that's a bit the genesis of the project. Now, Cindy and I, because we share common ground at MIT, started talking about the possibilities of the technology. While I was working on other immersive projects, I had the chance to travel to Germany and I visited the studio Schnelle Bunte Bilder and Kling Klang Klong, who are behind the sound design of the project and the music score. And I just fell in love with the studios, not just because of your philosophy. You also presented yourself as a team that likes to disrupt technology to understand technology. And I thought, high five, we're really on the same ground there. But also the work was so unique in the way that it used simple things of everyday life to make us think about and wonder about bigger things and, you know, bigger aspects of our worlds. Not just loving the technology itself, but kind of using, let's say, sounds from our everyday life to create music with them, or using data from rivers to create AI music, or finding patterns in water to talk about our relationship to water. So there was a symbiosis in the way everybody thought, but at the same time very different backgrounds, which helped us maybe try not to be too biased in the way we perceived the data that we could use or how to use it.

[00:11:02.867] Kent Bye: So yeah, maybe you could describe what this experience actually is, since you're doing a lot of the visual components, and talk a little bit about what you were trying to achieve with what you're actually seeing, which is a lot of these abstract shapes, and yeah, just how do you describe it?

[00:11:15.851] Michael Burk: So as you enter the experience, you are thrown into an unexpected scenario. You wake up to a pretty much natural world, which has a certain amount of realism. But then things also seem to be a bit off and a bit not right. And we really try to go for these visual metaphors of nature and try to steer away from the usual sci-fi look and feel a little bit. And to also pick up on the metaphors that Noam Chomsky uses when he explains things. So he has this way of explaining really complicated things in an easy, understandable way. That's kind of his thing. And very often he also uses metaphors of nature, like Sandra already said, like bee flight patterns. So we thought, why not base this whole experience in this setting? So as you enter this world you kind of meet AI Chomsky and he is still under construction. So he kind of is taking shape out of like a concrete block. He's basically like a sculpture being shaped. So he's really abstract and he has some human features which develop in the course of the experience. But in this prologue he doesn't completely gain his final shape yet. So we still watch him while he is shaping and growing.

[00:12:36.598] Kent Bye: And so the conceit is that you go into this experience and you see this virtual representation of Chomsky who's morphing and changing around with all these mathematical structures, and then you can ask him a question, and then it's repeated back a couple of times, and then if it is able to parse it, then you get like an answer, and you're showing the text there, so you're giving a little bit of feedback. When I was doing it, it was a little bit difficult for me to have the audio being fed back. It was disrupting the way that I was thinking and hard to really focus on the question. But aside from that, you're just asking Chomsky a question and then seeing what that evokes with this larger database.

[00:13:12.805] Michael Burk: Yeah, exactly. So with this sound feedback, we kind of really wanted to drive home the point that you are being recorded, of course, and your data is taken into the equation and Chomsky learns from you. Of course, we might have to explore how it feels if you have longer questions. That's also a good thing to test for us now, which kind of questions do people ask. And it turns out they do actually ask very elaborate questions, which we didn't quite expect in this way. Yeah, we expected more like shorter questions actually. It's really interesting to see how people do interact now.

[00:13:47.079] Kent Bye: Yeah, maybe you could talk a bit about some of the machine learning and artificial intelligence, natural language processing that's all on the back end here.

[00:13:53.345] Cindy Bishop: So when we were first looking at the project, we were scoping out what open source AI projects were out there, but we did have to eventually settle on Microsoft Azure, it being the most robust. It had been trained, it has a great model, and the preloaded data has personalities you can choose: serious, professorial, humorous. So we could preload those Q&A pairs, questions and answers, and then on top of that, start to train our corpus from there. So as Sandra mentioned earlier, we have like 60 years' worth of Chomsky recordings. So that's a great corpus. However, most of the questions are asked by scholars or researchers. So it's not really enough just yet to get to the point where you can have a generative syntax. So right now it's really fairly one-to-one, like we have several questions to one answer that Chomsky would have answered, in the spirit of how Chomsky answers, which is usually with humor, usually with a question, answers a question with a question, and is sometimes a little bit of a curmudgeon.
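To make that retrieval setup a bit more concrete, here is a minimal sketch of the kind of matching Bishop describes, where several phrasings of a question map to one curated answer and the closest phrasing wins. The questions, answers, and threshold below are hypothetical placeholders; the project itself used Microsoft Azure's Q&A tooling rather than this hand-rolled approach.

```python
# Minimal sketch of retrieval-style Q&A matching: several question phrasings
# map to one curated answer, and the closest phrasing wins. Illustration only;
# the knowledge base below is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base: many phrasings -> one answer.
QA_PAIRS = [
    ("what is language", "Language is a window into the mind."),
    ("how does language work", "Language is a window into the mind."),
    ("can machines think", "The question is whether submarines swim."),
    ("will ai ever think", "The question is whether submarines swim."),
]

questions = [q for q, _ in QA_PAIRS]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.2) -> str:
    """Return the curated answer whose question phrasing is closest."""
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "I'm still under construction. Could you rephrase that?"
    return QA_PAIRS[best][1]

print(answer("Do you think machines can think?"))
```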

[00:14:57.113] Sandra Rodriguez: There's also, so one of the points of the project, and to be fair, this project is still, we keep saying it's under construction. The AI itself says it's under construction. The idea is really to use the technology to talk about the technology. So the main goal is to open a conversation about artificial intelligence. When we hear about artificial intelligence today, we really feel like it's thrown at us as the inevitable future. We're all heading this way, and we'd better get used to it. And we hear that a lot. What I think is really interesting is once you get your hands dirty with the technology, you realize how limited it is and how a lot of it is really hype, smoke and mirrors. And you feel a little bit like it's the Wizard of Oz, right? You have this big face of AI that looks mysterious and scary sometimes. But if you open the curtain and you suddenly see the little man pulling the triggers behind the machine, you're not so scared of it. And exactly like the Wizard of Oz, at one point I thought the parallel was so perfect, it tells you, well, you thought you didn't have a heart, but you actually had a heart. So the idea is a little bit to open the black box of AI and try to show the public that it's not what it says it is, and it's still quite limited. Will we get there one day? Maybe, maybe not. The idea is not for us to tell the public where it's heading, but to make sure it gets a chance to understand how the backend functions. So we're getting there. But for the experience, we developed several types of what we call AI. Once we open the Pandora's box of defining AI, we always get an answer like, this is not real AI. What I think was interesting is we met with a lot of AI companies, asking them what is real AI, or what would be a real AI service? And the reality is AI is not one technology. It's multiple sets of different technologies that we today call AI. One of them is chitchat, so one of the AIs that we developed is conversational chitchat, and that's Microsoft Azure. But we didn't want to just be promoting one technology; there are multiple, several types of technology out there. So then we started looking at other tools that, for instance, are analyzing intent. Are you trying to make a joke? Are you trying to ask a philosophical question? Is it talking about people, food, or philosophy? But then on the other hand we also started developing a more complex conversational artificial intelligence tool that really tries to understand the type of question that's being asked right now and from that tries to generate answers, and that's the generative part. Now we're still exploring this. It's constructed, partly constructed. The AI entity says it's under construction because it's true. Right now what we're presenting here at Sundance is an opportunity for us to see how people interact with the AI system and try to have the AI system train, but never with the objective of becoming Siri. We're not trying to create a Chomsky entity that you could ask questions forever and that will always be at your service. The main goal is still storytelling. We still have a point to make with the project, and the point is to have us question our obsession with recreating human beings, with recreating or replicating human entities or human-inspired entities with these new tools. So these several AI systems have been explored and are still being explored, so it's a very iterative project.
Kling Klang Klong, on their end, have started developing AI systems for the music score and the soundscape as well. And I don't know if you guys remember, but at one point it really was quirky and chaotic to hear the music composed by the AI system, and we decided to tone it down, just so the storytelling would take over for now. Maybe in the next iteration the music score composed by an AI system could continue. But our goal was to explore different AI systems, or tools, better said, that are out there today, not just focus on one, like you were saying, Cindy. So it's not just trying to buy into one technology, but kind of showing that it's a vast area of technology.
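As an illustration of the intent-analysis layer Rodriguez describes sitting alongside the Azure chitchat and the future generative model, here is a minimal, hypothetical sketch of that kind of routing. The keywords, intents, and handler names are invented for this example and are not the project's actual rules.

```python
# Minimal sketch of "which AI answers this?" routing: a lightweight
# intent/topic check decides whether a question goes to scripted chitchat
# or a (future) generative model. Keywords and intents are invented.
from dataclasses import dataclass

@dataclass
class RoutedQuestion:
    text: str
    intent: str   # e.g. "joke", "philosophical", "smalltalk"
    handler: str  # which subsystem should respond

TOPIC_KEYWORDS = {
    "philosophical": {"mind", "consciousness", "free", "will", "language"},
    "joke": {"sandwich", "flirting", "favorite"},
    "smalltalk": {"hello", "hi", "how", "are", "you"},
}

def route(question: str) -> RoutedQuestion:
    words = set(question.lower().replace("?", "").split())
    for intent, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            handler = "generative" if intent == "philosophical" else "chitchat"
            return RoutedQuestion(question, intent, handler)
    return RoutedQuestion(question, "unknown", "chitchat")

print(route("Do we have free will?"))          # -> philosophical / generative
print(route("What's your favorite sandwich?")) # -> joke / chitchat
```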

[00:18:56.400] Cindy Bishop: Would your listeners like to hear a little bit more about how the models and algorithms work? Sure. And again, I'm not an AI expert, but as a software developer, I've learned a lot about this. And I think it's helpful to understand how, as Sandra says, AI is not all the things. It's an amalgamation of certain behaviors that people have figured out how to model. So really good examples are facial detection and recognition. Facial detection works because it's been trained on a bunch of human faces. And what has been trained is certain kinds of faces. So if you've trained your recognition model on white faces, it's going to be really good at recognizing people who happen to be white. If you have left out certain people, it will not recognize them as easily. That's really important going forward, thinking about how we are going to represent Chomsky and how we are going to represent the questions that are asked and answered. That said, the point that I wanted to make was, if you put in enough data, you're gonna get a certain response back. So it's like data in, data out. And if you don't put in the right data, or if you put in really biased data, you're gonna get garbage in, garbage out. And the way that we're modeling Chomsky, the way we're thinking about how to model Chomsky, is not just up to us in terms of curating the questions and the answers, but to eventually get to the point where we've trained the system so that it could generate answers. And it doesn't really matter what platform you're using to do the easy stuff, the straight question-answer connection; it's really about how you're going to take that data and throw it at a model like BERT. You can look it up, I can't remember what it stands for. And maybe Michael, actually, we were just talking about how quickly these AI models are getting trained these days, but essentially grammar is difficult to replicate. And so how are we going to do that going forward?
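For readers who want to see what "throwing data at a model like BERT" can look like in code, here is a minimal sketch using the Hugging Face transformers library. The library choice and the example sentence are assumptions made for illustration, not what the team actually used, and real work on the Chomsky corpus would involve fine-tuning rather than just querying a pretrained model.

```python
# Minimal sketch of querying a pretrained BERT model via Hugging Face
# `transformers`. Illustration only -- the interview does not say which
# toolkit the team used.
from transformers import pipeline

# BERT = Bidirectional Encoder Representations from Transformers,
# pretrained on large general-purpose text.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The pretrained model predicts the masked word from context alone;
# domain-specific phrasing only emerges after further fine-tuning.
for prediction in fill_mask("Language is a window into the human [MASK]."):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```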

[00:20:44.075] Kent Bye: And ironically because this is a lot of the thesis of all of Noam Chomsky's work.

[00:20:48.636] Sandra Rodriguez: There's also a little back-end story that I thought was, it was just irony over irony over layers of irony. The more I was digging into Chomsky and how we could explore it, we were digging into natural language processing. And I just asked programmers, is there a link between natural language processing and natural language theory, which is Chomsky's most famous theory? And they said, well, aside from the fact that it was developed in the same building at MIT. So then you realize that in the same building, people were talking about natural language a lot, and in the same years, people were developing natural language processing. So of course, there is an inspiration. There's a clear parallel between natural language processing, a way of considering building blocks that you can move around to create new content, and what Chomsky was explaining with natural language theory, which is that we all have these building blocks: ideas, images, emotions.

[00:21:41.266] Cindy Bishop: Which are represented by the blocks and the squares in the experience, right?

[00:21:45.091] Sandra Rodriguez: So we have these building blocks that we play around with, these finite building blocks that we can play around with to create infinite new answers and new sentences. And so for Chomsky, this is an ode to our own human creativity: with very finite blocks we can create infinity. But for natural language processing it kind of goes backwards. It says you're using finite numbers of blocks to create finite numbers of outcomes, and of course you can add randomness to that and that's creativity. But maybe that would not be something Chomsky agrees with. So I just thought the parallel and the... it became very meta, as you can see now. The more we dug into it, the more we realized Chomsky himself in an interview says, like, a very small sentence, and I thought, gold. He just struck a chord because he was answering a question on singularity. And of course he's been asked questions about everything from his favorite sandwiches, you know, to advice on flirting, to politics, to who should win the Olympics, and, you know, dolphins and their ways of communication. And then he gets this question on singularity. And suddenly in this interview, a video interview, he had a face where he looked simultaneously patient with the simplicity of the question and at the same time kind of very guiding. He was trying to tell us, don't you realize, you know, adding this idea of singularity is just science fiction. We don't really know how our brains work. So what exactly are you trying to replicate? And in this small sentence, he says, I've been doing AI all my life. If you're calling AI creating a theory of the mind, I've been doing AI all my life and nobody seems to mind. And I thought, how interesting. So Noam Chomsky himself considers what he does as understanding an artificial way of defining intelligence. Now, if you apply this to programs or computers, that's one output. But if you're just sticking to theories, it's the other. And so I just thought it's interesting to see that there are clear parallels and it becomes quickly very meta. So we decided to have fun with it and create this kind of unique, nature-driven, weird world where one really can become meta and ask questions and see how the conversation goes.

[00:23:57.585] Kent Bye: Yeah, in the visual aesthetic of it, the way I think about it is that in the predictive coding model of neuroscience, you're making these predictions as to what you expect, and then you observe whatever you're looking at, and then you're able to do a correction between what you actually see and what you predicted it to be. And I feel like there's something about the visual aesthetic that is really playing with that randomness and that curiosity of not really quite knowing what's going to happen. And so to me, it was so far beyond what you see with a lot of traditional 3D objects, and it felt like more of these mathematical equations. And so as you're developing these, how do you speak about this? Is it just an iterative process? Are there ways to talk about the essence of the character of what you're experiencing, and how do you collaborate with other people on that type of creative coding?

[00:24:45.174] Michael Burk: So I think it's all based on and starts with the storyline. So we think about, okay, how is this character going to develop throughout the story? And in finding all these shapes and these movements, they are actually all based on mathematics, so it's all these mathematical functions like fractals and noise functions and things like that, which are also, of course, artifacts that are used in any programming language and field, also in these natural language processing algorithms. So that's naturally a really good fit to just use these. And these creative coding things are also very explorative, if that's the word. So you really play with the numbers, with your algorithms, and see what comes out on the other end. And that's also really similar to what you do with AI, actually. So yeah, it's like a natural harmony in that case to use these procedural kinds of design tools and algorithms.
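To give a flavor of the procedural, noise-driven shaping Burk is describing, here is a tiny, self-contained sketch that perturbs a circle of points with layered sine "octaves" as a cheap stand-in for proper fractal or Perlin-style noise. It is purely illustrative and not the project's actual geometry or shader code.

```python
# Minimal sketch of noise-driven procedural shape: a circle of points whose
# radius is perturbed by layered sine "octaves" (a cheap stand-in for
# fractal/Perlin noise), so the silhouette wobbles organically over time.
import math

def fractal_wobble(angle: float, t: float, octaves: int = 4) -> float:
    """Sum progressively finer sine 'octaves', halving amplitude each time."""
    value, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        value += amplitude * math.sin(frequency * angle + t * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return value

def silhouette(t: float, points: int = 12, base_radius: float = 1.0):
    """Return (x, y) points of a circle whose radius is perturbed by noise."""
    out = []
    for i in range(points):
        angle = 2 * math.pi * i / points
        radius = base_radius + 0.15 * fractal_wobble(angle, t)
        out.append((radius * math.cos(angle), radius * math.sin(angle)))
    return out

# Each frame (increasing t) yields a slightly different organic outline.
print(silhouette(t=0.0)[:3])
print(silhouette(t=1.0)[:3])
```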

[00:25:48.687] Sandra Rodriguez: I have to say that I'm really in awe of my team as well, and I think that's where I really realized we were all so dedicated to really researching everything we did. Of course, I expected this from everyone, but still, you all surprised me. Schnelle Bunte Bilder in Germany, behind the visual design, yourself, Michael, of course, but also Kling Klang Klong, went really deep into researching: where does Chomsky live now? Arizona. If we're talking about nature, shouldn't we have Arizona-inspired nature? I realized that even the smallest iterations of color schemes and textures were all researched. Even in the music, we started thinking about metaphors of birds. It came kind of serendipitously, and I think serendipity is something that humans can appreciate, and sometimes an AI may not. We were discussing and trying different ways of visualizing what the AI could look like, what the inside of the AI could look like. So just looks. And we had a glitch in the computer and suddenly we kept hearing bird sounds. And it made me laugh because I thought, hey, it's funny, it sounds like Chomsky's metaphors on birds. So we thought, could we use bird sounds to create a soundscape that really could inspire an audience and try to create that world that you're talking about, that feels like you know it, but really is different enough that you can't put your finger on what you're hearing? And so, you know, the team, yourself and Kling Klang Klong, went into researching what types of birds exist in Arizona. And I thought, even just, you know, feeding a database of a soundscape created from all these natural elements, at the same time mixing it with the voice of the user... So for somebody hearing us right now, it really must sound very esoteric, because we're mixing all these elements: sound, voice, nature, visual aesthetics, back-end system, a corpus of data. The real challenge was to make it seamless, so that a user trying the experience just feels like they're discovering a new world, but playing with what exactly you just named, trying to avoid the expectation of what the machine should be.

[00:27:54.207] Cindy Bishop: Also, talking about plants and birds and building blocks of language, you know, it really goes back to what kind of intelligence it is that we're trying to model. And we often think, well, we're going to model human intelligence because isn't that just the best? But, you know, slime mold is super intelligent. Why? Because it can find food. It sends its spores out, finds food, and the rest of the spores come and follow it. You know, ants, bees, flight patterns of birds, how do they all move at the same time? Octopi, or octopuses, actually, is the proper term as far as I understand from my research of cephalopods. But you know, it's really fantastic that there are all sorts of intelligences, and we really want to think about what it is that we're modeling. I mean, whose intelligence are we choosing? And, you know, I think we've all talked about this together, but what goes missing when we're modeling these building blocks of what we think is intelligent behavior? We as humans, certainly as a software developer in my life, I try to build modules that are easily extractable and I can build things on top of each other and, you know, I can get pretty good at modeling certain behaviors. However, what is going missing? That's really something that I think Chomsky describes, regardless of the fact that he believes in the building blocks of universal grammar.

[00:29:09.530] Sandra Rodriguez: That is the main goal of the experience. So, you know, the main takeaway, at least, that we'd like the visitors of the experience to have is to think about these new technologies that we're building. Chomsky himself, in that interview, that was the first provocation, the first idea for the project Chomsky vs. Chomsky: First Encounter. That first provocation was this interview in which Chomsky himself answers the young researcher: we really don't know anything about the mind. We really are at a pre-Galilean stage and we don't know what we're looking for any more than Galileo did. And so that got me thinking, if we don't know anything about the mind and we're still at a pre-Galilean stage, what exactly do we choose to replicate? And so I became obsessed with trying to think, if we don't know what we're replicating, that also means there's everything that we're leaving behind, that we're also forgetting. So the project aims to use Chomsky's legacy to point us to really very simple things about how the human mind functions. I'm not calling Chomsky simple; I'm saying that Chomsky himself has a simple message that you keep hearing in all of the interviews, and the simple message is: humans are awesome at being creative, at inquiring, at collaborating, at being in awe and wonder of the world. And I thought, well, funny, isn't that everything that we keep not discussing with AI? Can AI be creative? We hear a lot about that, but can it collaborate? Can it inquire? Does it wonder? All of these questions are a good way to use the technology to open this Pandora's box, or at least hopefully a conversation, to try to think: if we don't know what we're replicating, it's a big question to really stop and think about what we choose to replicate. Because when we don't know what we choose to replicate, we just put a bunch of things that are metaphors in there together, and we quickly forget everything that really makes us human and that suddenly we keep in the background. So the main goal and the question of the project is: if we don't know what human intelligence is, what are we trying to replicate? And in doing so, what are we leaving behind?

[00:31:11.463] Kent Bye: Well, I know that when you look at narrative, there's this spectrum between the authored narrative and the generative narrative, where on one end you are writing a script, essentially, and you're able to then just control everything that's being said. And then on the other extreme is like artificial general intelligence, where you feel like you're talking to Chomsky, but it's a bot. And I think, you know, we're obviously a long, long way away from AGI, but you're trying to mimic this experience of being able to go into this experience to see Chomsky and to ask him any question that you want, and you want to, number one, be understood, and then, number two, have an answer that comes back that's relevant. And I feel like there's a certain amount of social presence that happens when those are in alignment, what you're intending to ask and the answer that's relevant to that, and you get this plausibility that gets built up. However, when it comes back and doesn't match, it kind of breaks that plausibility, almost like a house of cards. So I found myself constantly going back and forth, being frustrated from asking a question that would give me a canned answer, almost prompting me to ask about free will. But I was like, I'm not really interested in free will, but I don't know if it was trying to guide me or... But you have a wide range of people who may have a lot of context about Chomsky and it may be their dream to sit down and ask him all the questions they want. And then there's other people who have no idea who he is and have no idea what to ask. And so in that whole range, how do you create an experience that allows people to have a good experience, that has that authored narrative, that has these answers that are quite profound, but yet at the same time is able to respond to your agency?

[00:32:40.727] Sandra Rodriguez: I think you've just highlighted something that's important. Every time that we do talk to an AI system, we expect it to be at our service, because that's how it's promoted and marketed. For instance, Siri, Alexa, all these tools are digital assistants. Myself included, the first time you do talk to an AI system, you have a tendency to ask it a question and expect an answer. And the first thing we started to workshop and break down for the storytelling was that when it's a conversation, sometimes the person still guides you in whichever direction. So the more interviews I was watching of people interviewing Chomsky, the more I realized sometimes he really doesn't answer the question at all. He just goes into a monologue and just continues what he has to say, which we all do in a conversation sometimes. Like perhaps partly this interview: you've asked me a question and my thread of thought just goes in one direction and I follow this direction. But partly, as users of this technology, we are frustrated when it doesn't answer back, or it doesn't understand what we want, or you see how limited the technology is. So I'm not sure we've solved it fully. That's why it's still iterative and under construction. We're still training the system. But part of that answer would be to show the users and visitors of the experience how the backend works. So the ultimate goal, even of the next iterations of the project, is to get people to see more and more how the technology works. And so when it fails, you kind of understand for yourself where it fails. I think two years ago I was here at Sundance and I was asked to moderate a panel on AI. And I started, just for fun's sake, interviewing Siri. So I started to ask Siri questions, and I think it took me about three evenings with a glass of wine of asking Siri questions to try to see where it would break, and also to see how the type of English I was using would bring me to different types of jokes. Because it kind of categorizes me as young and hip or as traditional and classic based on the way I ask my questions. So the more I would ask it questions, the more I could see how the boxes were made in the back end. And I thought with this project, and it's not completely there yet because we're now just testing how we can converse and how people do converse with the AI system, but the more you can see the back end and where it fails, I think that could take some of that frustration away. Because you understand for yourself, oh, I see now the flaw of the technology. And that's one point, because we're sold this dream of smoke and mirrors, that it's actually really thinking, while what the machine is really doing is finding patterns and trying to find a suitable answer to what it predicts as a pattern.

[00:35:12.636] Cindy Bishop: I could add a little bit about the intent and entity extraction. So we don't have a state machine yet, and that's part of why you were so frustrated, because we don't have any way of tracking what questions you've asked and what answers you've received. So obviously going forward, before we even get to generative text and answers, we would definitely want to implement a state machine that knows what you've asked, remembers your name. Perhaps we're able to calculate how many people have experienced this, how many people are named Kent. You know, we could start to interpret and anticipate the questions, but that is only what we're doing: we are anticipating questions. So, you know, we haven't anticipated someone who wants to know, like, what's five times five, and how do we get beyond that frustration? I guess that's part of what we're trying to figure out, because we don't want to just be a Q&A. We don't want to just have static text. I mean, you certainly could populate our database with 3,000 questions and answers, or 10,000 questions and answers, but that still isn't where we want to go, I don't think. I think we want to do something more than that. We want to be able to really test the bounds of what it is to generate a Chomsky response. And I don't mean a conversational chatbot. I mean, how would Chomsky respond? Maybe he wouldn't answer the question. Maybe he'd take a sip of coffee and walk out the door. Who knows? But I do think that in terms of user interaction and design, we'll have to figure out how to minimize the frustration and still lead you towards the Chomsky, I don't want to say Oracle, but...
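As a sketch of the kind of conversation state machine Bishop says is still missing, here is a minimal, hypothetical example that tracks a visitor's name, repeated questions, and topics across turns. The fields and response hints are invented for illustration and are not the project's planned design.

```python
# Minimal sketch of a conversation state machine: remember the visitor's name,
# which questions were already asked, and which topics came up, so repeated
# questions can be acknowledged instead of re-answered. Hypothetical design.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    user_name: str | None = None
    asked: list[str] = field(default_factory=list)
    topics_seen: set[str] = field(default_factory=set)

    def register(self, question: str, topic: str) -> str:
        """Record a question and return a hint for the response generator."""
        repeat = question.lower() in (q.lower() for q in self.asked)
        revisit = topic in self.topics_seen
        self.asked.append(question)
        self.topics_seen.add(topic)
        if repeat:
            return "acknowledge_repeat"   # e.g. "You asked me that already."
        if revisit:
            return "deepen_topic"         # stay on the thread the visitor opened
        return "fresh_answer"

state = ConversationState(user_name="Kent")
print(state.register("Do we have free will?", topic="free will"))  # fresh_answer
print(state.register("Do we have free will?", topic="free will"))  # acknowledge_repeat
```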

[00:36:44.283] Michael Burk: So, as Cindy already explained, we're really going to improve the system, because it's also key for this experience to be impressive, for the visitors to be impressed a little bit and to explore something new. But at the same time, the key here is really the combination with the storytelling, because we are creating a character here which self-consciously knows that it is an AI and can be honest about that. It's not completely trying to be a replication of a human being, and this character also knows that, so we can really play with that. And I think in the end this will totally lead us to a nice experience that doesn't have any frustration, because we can play with these aspects.

[00:37:28.518] Sandra Rodriguez: I think we've also, ourselves, been watching people try the experience, and we've been learning a lot about how people interact and react. So you were saying you didn't know if it was guiding you towards talking about free will. And that, just for myself now, really surprises me. The first day that we were showing the project, we had a powwow, a team gathering, really trying to figure out what was good in the day and what was bad. And we realized a lot of people were asking a question about free will, and we hadn't really planned on that, meaning in our script it would guide the conversation towards artificial intelligence, what it was made of, how it works, trying to be as transparent as possible about how the technology works versus who the real Chomsky, the real human being, is, and always being honest about what it is: that it's not Noam Chomsky, but an AI entity. And the more we were planning this, the more we realized we had nothing on free will. Now you're saying you felt like it was guiding you towards free will, and I'm really curious to now go back and see the back end and where something in the intent analysis must have changed. Maybe too many people asked a question about free will and now it's trying to answer and guide the conversation there. So there is partly a lot of script going on. We want to be transparent about that too. Somebody asked me today, aren't you afraid that you're kind of demystifying AI? And I'm thinking, no, that's the main goal, to demystify AI. So a lot of the AI systems we have out there are fully scripted. So ours is also really scripted, but then there are little tweaks of what it's trying to understand, which question to match to which type of answer. So we got a lot of questions on themes like democracy or the 2020 elections. I've heard a lot of people ask, who's going to win the elections? And you, Michael, pointed out to me, isn't it funny how they all want to know the future? And we thought, yes, because all the data that feeds AI systems is based on past behaviors. That's the problem with what we call artificial intelligence: it's based on past data and so past behaviors, and we're trying to predict the future based on the past. Which is maybe one way of seeing the future, but it also limits us in our bias. So I thought it was interesting to see so many users ask it about the future. And this entity can't know the future. But you just asking us why it was guiding the question into maybe free will makes me think we should really look into how many questions were asked on free will. And maybe everybody wants to know today where free will is heading.

[00:39:55.150] Kent Bye: Yeah, and I'm wondering if you could talk about some of the either biggest open problems you're trying to solve with AI and VR, these immersive storytelling experiences, or some of the open questions, or some of the open problems that you're trying to solve.

[00:40:08.657] Sandra Rodriguez: A couple of years ago, and I'm trying to keep my answers a little shorter so you can use them better, but a couple of years ago I worked on a beautiful project called Do Not Track. I was the director of an episode on big data. Big data was a big challenge. Everybody had an episode on cookies, an episode on data tracking, just explaining generally what data tracking was. And I was told, well, you're the PhD in Sociology of Technology, we're giving you the assignment of trying to make big data simple for an audience. And it was really hard, because everybody talked about big data, but nobody really wanted to listen or hear about it. We all felt like we knew what big data was, and we all understood how it was tracking us. But there's one thing in teaching and being didactic about how big data works, and another in understanding by doing and by being playful. So the project taught me a lot about how people are sometimes afraid of a technology, but the more you show them how the backend works, the more they can make better decisions with a clearer perspective. So it's neither being for it nor against it, but more understanding and then making your own decisions. The biggest challenge here was that AI is still so hard to define. So, as with big data, you have a similar challenge. It's a complicated subject to talk about. People feel like they know AI, but at the same time, nobody can really define it. So it becomes tiresome for people. They hear about AI, they feel it's this complex theme, but then you tell them they can interact with an AI system and they quickly get tired of talking to a chatbot. So for me, the biggest challenge for that part of the storytelling was: how can we make it simple enough that you may know Chomsky or not know Chomsky at all, you may want to talk to an AI system or not know anything about AI, and you could still enjoy the experience? The biggest challenge was to make it simple, pleasurable, funky enough that you attract different types of audiences, while at the same time leaving them with this desire to understand better. So for me, definitely the biggest challenge is how do you make sure you don't lose that audience into thinking it's too intellectual or too complex, while at the same time making sure that there's something so inherently natural in thinking about why we think. And getting an audience to think about how they think is, again, very meta, but I think there's a beautiful challenge in it. Another challenge is immersive media. A lot of projects that are 360 video or immersive use the technology, and then the biggest question that gets asked is, why is this project made for this technology? And I want to push against this perspective. I don't believe that there's one story for one medium. If that were the case, we wouldn't have movies about Romeo and Juliet, as we have theater plays, as we can read it in a book, or even have a graphic novel about Romeo and Juliet. Romeo and Juliet is a story. And that story can be told in multiple different ways through different media. So whenever the question is, why VR? In our case, it was, well, because it was the output that we could use for this experience. But for the next iterations, we're really exploring different types of technologies to tell a similar story. So we're not limiting ourselves to VR per se.
But the immersiveness is important because we're telling a story about our human minds, and they don't just work in flat perspectives; they work collaboratively, in creative ways, so we want to explore space, the body in space, and multiple users in space.

[00:43:39.869] Michael Burk: Yeah, I totally agree with Sandra here. So we are not completely fixed to VR, but you also brought up just now multiple users and I think that's another challenge, but also a big opportunity to go further in the experience because we really want to bring in other people into the conversation. and have a conversation between people and an AI. And even though we are not fixed to VR, VR can really help in this case to create different spaces for conversation. So you can kind of, if you have people with you in the VR, you can kind of remove them virtually, so you are alone with the AI, so it's a one-on-one. Or you can listen to someone else speak to the AI or you can also swap people around so you don't know which one is the AI, who is answering whom. You can create really interesting situations there and I think that's something we really want to explore.

[00:44:33.958] Cindy Bishop: On the general side, I really champion something I like to refer to as creative civic engagement, which is: I want more people, and in fact it's critical that more people, become involved in our technological society, both in AI and in terms of what data is scraped and what data is used. As the director at the Center for Civic Media says, or at least it's on his door: if you aren't paying for the service, you are the service. So I think it's really important that we're aware of those things and how those tools are enabling us and also potentially hurting us. Specifically for the Chomsky project, I certainly have seen things like extracting entities. Chomsky, you know, is actually a hard word for the speech analysis algorithm to really get. Is it Chom-sky? Is it Chimsky? Is it Tinsky? You know, just some of the basic ways that humans understand things that AI does not quite yet. Key phrases, anticipating questions, and then just going from there and figuring out how we're going to throw enough data at the model so that it comes up with some stuff that we don't have to do ourselves. I think that's going to be super interesting.
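As a small illustration of that speech-to-text cleanup problem, here is a hypothetical sketch that maps mis-heard variants like "Chimsky" back to a known entity with fuzzy string matching. The vocabulary and cutoff are invented for the example and are not the project's actual pipeline.

```python
# Minimal sketch of entity cleanup after speech-to-text: map a possibly
# mis-transcribed word back to the closest known entity before intent
# analysis runs. Vocabulary and cutoff are illustrative assumptions.
import difflib

KNOWN_ENTITIES = ["chomsky", "linguistics", "free will", "singularity"]

def normalize_entity(heard: str, cutoff: float = 0.4) -> str:
    """Return the closest known entity, or the original word if none is close."""
    matches = difflib.get_close_matches(heard.lower(), KNOWN_ENTITIES, n=1, cutoff=cutoff)
    return matches[0] if matches else heard

for heard in ["Chimsky", "Tinsky", "chom sky"]:
    print(f"{heard!r} -> {normalize_entity(heard)!r}")
```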

[00:45:47.723] Kent Bye: Great. And finally, what do you each think is the ultimate potential of virtual reality, artificial intelligence, and immersive storytelling, and what they might be able to enable?

[00:45:59.690] Sandra Rodriguez: Interesting. There's a thing that I started being tired of repeating at one point because I didn't want to sound like a broken record, but I've been telling a lot of students in the XR space that they need to be four years old. For me, immersive media really attracted me because I think I'm a forever four. And what I mean by being forever four is that we have all these capacities in our body. It's not just what we see, it's not just what we hear, it's not just our physical human body in a space and the way we interact with others, but it's also our imagination. All of these things make us, audience members or storytellers, share this common space of storytelling. We have been exploring storytelling through audio, through images, but throughout human history, storytelling has for a long time been using all these parts: audio, visual, body in space, the way we interact with each other. I think virtual reality really enables us to step into a new space where people kind of take a lot of their expectations and leave them at the door, partly. So you can see, yes, of course, in 360, that's been talked about a lot. I think using the body in space is a lot more interesting for me, to try to see how that plays into the way we experience the story. And in this context, artificial intelligence is now helping us really feel more of the magic of being four in a new space. If you are four, you don't just imagine the story on a flat screen. When people are telling us virtual reality allows you to step away from the screen, there has never been a screen. The screen is the output. That's only the thing. If you're telling a four-year-old a story, they will not just imagine a screen. They imagine everything around, and then on top of that, create as they go. So my hope is that while we're creating these virtual reality spaces, the AI can help us adapt and model so it really feels seamless. I'm not sure if we're fully getting there. I think that's the beauty of New Frontier and that's the beauty of these new extended reality technologies and explorations. All the artists in these worlds are really trying to go through trial and error. But the goal seems to be the same thing: to keep us all in the awe and magic of what could happen and what could be, but without feeling the tethering of the technology around. So make it as seamless as possible.

[00:48:24.724] Kent Bye: Beautiful, thank you.

[00:48:26.645] Cindy Bishop: What do I anticipate? How I want to see the world of VR and AI coming together?

[00:48:32.309] Kent Bye: What do you think the ultimate potential of what AI, virtual reality, and immersive storytelling might be able to enable?

[00:48:40.854] Cindy Bishop: Well, I think it ultimately starts with the gesture in your body language and how we as humans want to make marks of existing. I just got back from Egypt and it's really remarkable to see the hieroglyphics that have lasted 4,000 years. Of course, those were really normalized or codified, but just to see those strokes, see the paint strokes, and I'm just like, wow, somebody just hand-painted this. It's all about communication. There's something that words and gestures say that's really very immediate and really connects us all together. So my hope is that we still, with the advance of the Tilt Brush, I still think there's a lot to be done with bodily gestures and movement and creation. I think that's why they've gotten into a lot of the performative virtual reality theater over here. And I really want to see where this can go. And I think, getting back to the creative civic engagement, that we've created an environment where gesture is part and parcel of education. And if you can really strengthen that connection, I think we're really going to have a really cool and meaningful experience.

[00:50:46.065] Michael Burk: So I actually can see different interesting scenarios, especially with AI and VR. You know, some people are saying that once AI does most of our jobs, we'll find purpose in virtual worlds, because we will have to do something all day. I can totally see that. But on the other hand, I think there's really a chance for VR and AR, which kind of blend together in mixed reality, and will blend even more, I think, into one thing. There's a chance to get us away from these rectangles we are staring at all day, and kind of create a more natural, well, take the word natural with caution, but maybe a more meaningful relation to media and storytelling, to kind of bring it into our natural world, to our homes, so we can enjoy it together as we usually do.

[00:50:46.065] Kent Bye: Cool, is there anything else that's left unsaid that you'd like to say to the immersive community?

[00:50:51.365] Sandra Rodriguez: It's been a real awakening these last years. Virtual reality has really been questioned. I remember four years ago, in 2016, being here at Sundance New Frontier with Cindy. By coincidence, we were here together. It was really inspiring to see how artists were taking a technology as it was sold then and disrupting it to lead us elsewhere. I remember some projects here, especially one called Irrational Exuberance, that I kept returning to at home with a virtual reality headset. I would visit it like jazz music. It felt like something that was not really storytelling. It wasn't really 360 immersiveness trying to make you feel like you're really there. It just had feelings, and your body in that space with the music and the visuals brought a next level for me of understanding how these tools could be used. So, trying to put it more succinctly: I think artists in the new media spaces that are VR, AR, XR, and now AI intertwined with these new immersive worlds, have this kind of... need to explore new territories, and I think that is really what is needed to push against why these technologies are built. There's an idea behind building a technology and selling it to us as, let's say, virtual reality will be great for gaming, and that could be totally true. But why wouldn't it be great for museums, for instance? Why wouldn't it be great for education? Could it be great? Stepping outside the discourse of it having a purpose, maybe it's just to make us dream. And maybe that's the last sentence: I think Sundance New Frontier is amazing for this. We see the types of projects and how different they all are. It's really a pleasure to see the field grow and to have people around question it. Has the VR hype burst? Is VR dead? And I'm like, I think VR will forever be dead, just like punk rock. It will just keep reviving and showing that it's never fully dead. But there's always going to be somebody telling us that the technology is dead. And that's great, because we keep exploring how to push it to the next level.

[00:53:07.774] Kent Bye: Yeah, thank you. Anybody else?

[00:53:11.290] Michael Burk: I also think we've been at this point for quite a while now where virtual reality is not interesting anymore just because of the technology. People are not impressed only because it is virtual reality. But that really leads us to more interesting stories, because you really have to use the technology in a meaningful way now. You're not getting away with just doing something in VR anymore. It's the same with any technology, really. So the hype might be slowing down a little bit, but I think the quality is also increasing.

[00:53:44.738] Sandra Rodriguez: You're no longer just swimming with sharks or doing roller coasters in VR, which is great.

[00:53:51.640] Cindy Bishop: I would love to hear from people after they read my Medium post. It's called Minecraft and Mosaics, the digital redux, and it talks about what we leave behind, interstitial mud, and I want to hear from people about what happens if we erase all our traces. Going forward, I think VR has a really interesting capacity to keep those traces around, like a digital echo, where you can track people's movements over time or what they do over time. That could be a visually really interesting experience, and it raises the question: why is the digital world such a clean environment?

[00:54:32.140] Kent Bye: Awesome. Well, I just wanted to thank everybody here for joining me today on the podcast. There are a lot of ways that we're moving towards this dream of artificial general intelligence where we could just have these conversations, and I think as we look at the gaps, we start to learn about what it means to be human. I think it's a really important project to take a bit of a critical eye on all of this and to help educate and demystify what artificial intelligence is. So thank you so much for joining me today on the podcast.

[00:54:58.252] Sandra Rodriguez: Thank you, Kent. Thank you. Wonderful to have this conversation with you.

[00:55:02.435] Kent Bye: Thank you so much. So I have a number of different takeaways about this interview. First of all, I love this concept of being able to go in and talk to what is trying to be a replication of Noam Chomsky and get different answers. One of the frustrating things with an experience like this is whenever you ask a question and it doesn't quite understand what you meant, or even worse, it does understand what you meant but doesn't have anything relevant to what you specifically asked. You have enough of those misses and it ruins the illusion that you're actually talking to an approximation of Chomsky. But when it hits, when you ask something and get an answer that actually addresses your question, or even better, pushes you down a line of inquiry that you had no idea existed, that's where the magic can happen. That's where you get the sense that they're able to take this huge corpus of 60 years of Chomsky answers and data and synthesize it into something that makes sense as a response.

Now, what I'm super curious about is the evolution of Chomsky's ideas, from the Vietnam era through the different phases of his life, and whether there might be variations or differences there. The model is fairly static in that he has certain perspectives, but as humans we consistently grow and evolve. It's a challenging thing to recreate what it means to feel like you're talking to Chomsky from any particular era, even right now. And if Chomsky has answered in one particular way his entire life but then changed his mind on something, how would you represent that in artificial intelligence and update this model of somebody's ideas?

This gets into natural language processing, NLP, within artificial intelligence, which up to this point has mostly done statistical analysis, comparing the relevance of words relative to each other, without necessarily having a way to model the higher level of deeper meaning. They did mention that they were using BERT, the Bidirectional Encoder Representations from Transformers, which has many variations, from RoBERTa and DistilBERT to ALBERT with its sentence-order prediction task. There are lots of variations of these technologies on the back end, but none of them is really human-readable. It's a bunch of statistical numbers ingesting this whole corpus of information and then, almost like a Markov chain, putting things together. That's at least the metaphor I think about for how things get ingested into this AI, trained on, and then pulled back out.

So one challenge they face is that this is a never-ending system. Each time new people go through it, it can learn more about what people are interested in. But then how do you take a step back, pull information out of this generative process, extract the best of it, and shape it into little vignettes? In the middle of the experience, Chomsky goes off on a little explanation about something, and as the storytellers they're able to really focus and craft that moment.
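To make that retrieval picture a bit more concrete, here's a minimal sketch of how a BERT-style encoder can match a visitor's question against a corpus of archived answers by embedding similarity. This is an illustration only, not the project's actual pipeline, which they didn't detail: the model name, the toy corpus lines, and the mean-pooling and cosine-similarity choices are all assumptions made for the example.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Toy stand-in for the archive of Chomsky answers (hypothetical snippets).
CORPUS = [
    "Language is a core part of human cognition.",
    "Concentrated power tends to resist public scrutiny.",
    "Statistical translation works very differently from human understanding.",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool BERT's final hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

question = embed(["What do you think about artificial intelligence?"])
answers = embed(CORPUS)
scores = torch.nn.functional.cosine_similarity(question, answers)
print(CORPUS[int(scores.argmax())])  # closest archived answer by embedding similarity
```

In a real installation the corpus embeddings would be precomputed and indexed, and the kind of real-time synthesis the team describes would sit on top of this sort of matching rather than simply returning a stored line.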
So you have the dream of creating this completely generative type of experience where you go in and ask it whatever you want. But then how do you control the narrative tension across these different vignettes, or whatever it ends up being? If people go in with no idea and no context about who Chomsky is and they're kind of lost as to what to say, how can you guide them down a specific route where they can still have an enjoyable experience, even if they know nothing about Chomsky and his previous work?

I think the visuals in this experience are worth pointing out, because this is a bit of a generative process, and so it makes sense to use a bit of generative creative coding: something that's morphing and shifting, going from a block of discrete little cubes into what looks like a sculpture of Chomsky's face, but one that's melting and morphing and never fully fixed. In some ways it's a metaphor for the fact that this is not a fully baked artificial general intelligence where you can go in and ask anything you want about Chomsky's theories, his political activism, or artificial intelligence itself; it's still in the process of developing and evolving. I think the visuals reveal that, and they had a hypnotic quality that I really appreciated. A lot of experiences use basic 3D objects with fairly fixed geometry, but this is a much more fluid approach that shows there's a very evolving dynamic happening within the experience itself.

Now, one thing about the sound design that I was not a fan of: as you're speaking, in the middle of you speaking, you start to hear an echo of what you're saying, and when you're done speaking, you hear the whole thing read back. I found that extremely distracting. I don't know if you've ever tried to speak while whatever you're saying is repeated back to you as you say it, but it completely disrupts your thinking. They're trying to signal that you're speaking and that it's listening, but I think they could have done that without this highly distracting real-time feedback, so that's something I would definitely change. It was nice, though, to have a visual readout of what I was saying and how well it was being interpreted. I didn't notice it until later in the experience, but once I did, I could check it, because you know what you said, but the speech recognition or the NLP could be misinterpreting it or failing to digest it accurately. They did say there was no state machine, so it wasn't keeping track of what was asked and didn't have a larger context. That's something else I think would be interesting, because as you're talking to somebody, you're building up context and rapport over time, and it would be great to see the AI start to take those aspects of rapport, trust, and the deeper context of the conversation into consideration. Just as you start to refer back to things with another person once you've built up a shared context, you'd want to do the same with the artificial intelligence. Without any kind of state machine or anything like it right now, you're basically starting with a blank slate with each question you ask.
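Since they said there's no state machine yet, here's a tiny, purely hypothetical sketch of what a context-preserving layer might look like: keep a rolling window of recent turns and fold it into each new question before it gets matched or answered. The class and method names are made up for illustration and aren't from the project.

```python
from collections import deque

class ConversationContext:
    """Rolling memory of recent question/answer turns (hypothetical sketch)."""

    def __init__(self, max_turns: int = 5):
        self.turns = deque(maxlen=max_turns)  # oldest turns fall off automatically

    def add_turn(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))

    def contextualize(self, new_question: str) -> str:
        # Fold recent exchanges into the query so a follow-up like
        # "Has your view on it changed?" carries what "it" refers to.
        history = " ".join(f"{q} {a}" for q, a in self.turns)
        return f"{history} {new_question}".strip()

ctx = ConversationContext()
ctx.add_turn("What is universal grammar?", "A hypothesis about innate linguistic structure.")
print(ctx.contextualize("Has your view on it changed?"))
```

Even something this simple changes the blank-slate feeling, because the matching step now sees the thread of the conversation rather than each question in isolation.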
One of the things they said is that this is a project trying to do a form of creative civic engagement: having people go through these different types of immersive experiences with the technology, not only to demystify it in some way, but also to show what it can and cannot do, because there are limitations you start to see as you engage with an experience like this. And this is something that's going to continue to grow and evolve over a long period of time. I can see this type of experience being revisited 50 years from now and still raising the same questions. How do you model someone's personality, their ideas, the evolution of their thoughts? Can you capture the essence of somebody based upon the corpus of their public thoughts and then make you feel like you're having an interaction with them? This is something that's been explored in sci-fi. In Her, for example, there's a whole recreation of Alan Watts, the spiritual teacher who brought together a lot of the Eastern traditions like Taoism. He made all these recordings and teachings, and what would it mean to digitize all of that and have a digital guru of Alan Watts driven by artificial intelligence? Someone like Chomsky, who has put so much work out into the world, makes a good use case for trying to convert all of these different aspects.

That said, there are nuances here: if I were actually talking to Chomsky, I would have had a completely different conversation. So it's still early days, and it's a long road to the point of feeling like you're engaging with anything close to artificial general intelligence. But in terms of experiential design, there's this sense of actually feeling like you're talking to somebody else, that sense of social presence, and maybe a need to create a bit more of a narrative context, because here you're basically treating Chomsky as an oracle: you can ask it anything you want. That's where I think AI starts to be more limited, because it tends to do a little better when there's a narrative context. Having narrative situations that hone down the realm of possibilities is one strategy I've seen work very well for putting stopgaps around the limitations of the AI technology, and for using deeper experiential design to funnel people towards interactions that are going to be satisfying. And I don't think they want to create a fixed set of, say, 10,000 of the best quotes, vignettes, and answers; they want the AI to really be generating this material without pre-populating it with a lot of canned answers.
Because if they were to go through and do that, they would probably have a better experience in the short term, but in the long term it would miss the deeper purpose of developing this kind of generative approach. Still, at this point there felt like a need for some balance, curating some of those questions, because people are going in and talking to Chomsky while presupposing that he's an AI agent. And the virtual Chomsky is very aware that he is an AI agent, and talks about the limitations of what this AI agent can do relative to Chomsky himself. I think they're playing on that a lot, with Chomsky in some ways being a bit of a skeptic while also being aware of the limitations of the technology, because the human mind and how it handles language, thought, and intelligence is something Chomsky has been thinking about his whole professional career. He never necessarily thought of himself as an AI expert, yet his linguistics birthed this whole branch of natural language processing, which is such a huge part of interfacing with AI technologies through our voice and expressing our agency through natural conversation. From an experiential design perspective, that lets you express agency and explore concepts and ideas in a way that gives you a deep sense of presence when it works really well, when it feels like you're engaging with something that understands you and responds in a way that's both relevant to what you're saying and novel, interesting, and engaging. When it worked well, it did exactly that. So I think this is a first iteration, and they'll keep finding ways to train it up and use experiential design to guide and direct people, but there's still a long way to go before you can go into an experience like this and feel like you're having a one-on-one interaction with someone like Noam Chomsky.

So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
