#1269: Three XR & AI Projects: “Sex, Desire, and Data Show,” “Chomsky vs Chomsky,” and “Future Rites”

Sandra Rodriguez is a director and creative director of immersive experiences that use AI to create spaces where you interact with sentient-like entities. I had a chance to catch up with her during Tribeca Immersive 2023 to unpack three of her recent projects that are at the intersection of XR and AI: Chomsky vs Chomsky (see my previous interview in episode #898), Sex, Desire, and Data Show at the Phi Centre, and Future Rites (see my previous interview in episode #1076).

We cover everything from large language models from GPT-2 to GPT-4, to creating abstract art from an AI model trained on millions of porn videos, to creating an AI autotune for embodied dance movements. Rodriguez has so many deep and profound insights about the intersection between XR and AI that it seemed fitting to conclude my 17-episode series with this latest interview with her.

Here is a list of all 17 interviews in this series:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash Voices of VR. So this is episode 17 of 17. It's the last one of a series looking at the intersection between XR and artificial intelligence. And today's episode is with Sandra Rodriguez, who's a director and creative director who's looking at creating different immersive experiences that use AI to create spaces where you interact with sentient-like entities. So I had a couple of conversations previously with Sandra, starting with a project that she had at Sundance 2020 called Chomsky vs Chomsky, using GPT-2 to train a whole model on Chomsky, and you would go in and ask different questions of Chomsky. So it's trained on the whole corpus of Chomsky, and they had a series of different questions and interactions that you could have with Chomsky and have a bit of a conversation. And then at South by Southwest 2022, there was a whole project called Future Rites, which was in collaboration with Alexander Whitley, using dance and trying to create the autotune of dance, using machine learning and artificial intelligence to modulate the way that your body is represented within one of these immersive experiences, where you start to see yourself dancing in a much more elaborate way than you actually do, and noticing how that's actually modulating and shifting the way that you dance. She's also got a project that has just opened at the Phi Centre called Sex, Desire, and Data Show, and so lots of really interesting explorations of looking at the intersection of porn and pornography and our desires, training some models on pornography, and then creating this really kind of abstract art out of that. So yeah, really quite interesting to hear that discussion as well. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Sandra happened on Sunday, June 11th, 2023 during the Tribeca Film Festival in New York City, New York. So with that, let's go ahead and dive right in.

[00:02:04.693] Sandra Rodriguez: So my name is Sandra Rodriguez. I'm a director, a creative director, of immersive experiences. And in the last few years, I've increasingly been using artificial intelligence as a way to create new spaces that are virtual, yet where you interact with sentient-like entities.

[00:02:22.030] Kent Bye: Great. Maybe give a bit more context as to your background and your journey into this intersection.

[00:02:27.349] Sandra Rodriguez: So I'm a documentarian by trade, but I've led a double life for a long time. I did research in technology, and on the other end I would do films, and for a long time it felt like I was leading a double life. In 2015 I was directing a documentary piece called Do Not Track; at the same time I was one of the two, in the end four, directors of the Do Not Track web series. And it just made sense that if you're going to create, well, why not use the knowledge you have about how humans are using technology and interacting with machines, and use this to create stories. So it just became a new realm for me to explore, and it became a new medium. So I like to situate myself in an area of emergent media, because what I really enjoy doing is looking at the shiny new toys that we have, and instead of embracing them or feeling afraid of them, my goal is really to see what they bring to us in the storytelling world and how we debunk new forms of expression with them. In the last six years, I had the amazing opportunity to teach a Hacking XR class at MIT, and it was the first official class at MIT about virtual reality. And it was exactly this. People were wondering if engineering students would be interested in such a class. But actually, yes, it's through really creating with these tools that you see the affordances, the opportunities, and the pitfalls. So I'm a true believer that you need to debunk the toys, look at them, break them up, create with them, and that's how you really see their future.

[00:03:54.709] Kent Bye: Yeah, and I know that you've been looking at this intersection of documentary, immersive interactive technologies, and AI. I think the first time that we met was with Chomsky vs Chomsky at Sundance in 2020. And that was a piece where you were starting to take the corpus of all the things that Noam Chomsky had talked about and created some sort of model to reference as you ask questions. I don't know if it was a large language model or another type of model, but maybe you could take us back to that piece, which we had a previous conversation about, and how it fit into the evolution of you looking at the intersection of these technologies.

[00:04:29.648] Sandra Rodriguez: Oh, you'll see, we can talk about it a lot because it's a never-ending piece. So in 2020, we premiered the first part of the experience, which we called The Encounter. So Chomsky vs Chomsky: The Encounter. And what we created was, well, while I was exploring this new AI realm, it became quite obvious that there was a question that became crucial, and it was always about whether we can replicate someone's mind, could we replicate the way we think, the way we talk, etc. And I had met a young researcher at MIT who was really a Chomsky fan, and he kept insisting that Noam Chomsky has one of the widest and largest digital traces available, because he's been uploaded, downloaded, interviewed, recorded, redacted, transcribed so many times. He has such a big fan base out there, and most of his talks, of course, I think, if not all his talks, are free of rights. So there is a huge database out there that we can use to do exactly this. This young researcher was sure that if we could recreate the way he talks, we could recreate the way he thinks. And I thought, well, this is a deep irony, because Chomsky insists on exactly the opposite: that we will never be able to recreate the mind, because the only thing we have are outputs. And these outputs are words that we're using. And words are a way to tap into how we think, but they're just the surface of a huge iceberg. There's so much we don't know about how we think. So if you're just looking at words, you're not looking at how they're conveying a meaning. You're just looking at how they're strung together, in which order. So you can create an imitation game, that's fine, but it doesn't mean that it teaches you anything about how your mind works. And I thought, well, the more I dug into Chomsky interviews, not only did we have this huge database to recreate the way Chomsky spoke (so yes, we're using large language models, three different types of large language models, from what was then GPT-2 to today's OpenAI ChatGPT), but we're of course not only using this, we're really training the models to only use what Chomsky already said. So it becomes a very meta approach where you're talking to an AI system that insists it's not sentient and never will be, and explains to you why. That's what we presented at Sundance 2020, but since then we wanted the experience to dig further. I'm saying we because it's a co-production with the National Film Board of Canada and Schnelle Bunte Bilder in Germany. And we really wanted the experience to go further. I think there are three things that we keep insisting on about AI and what it cannot do and cannot replicate. And these three things are also at the core of Chomsky's message about our mind. For some of Chomsky's fans, or people who really don't like him, it's important to focus on what he said about which topics, either linguistics or politics or activism. But outside of these messages, I discovered a Chomsky that was really in love with the human being. And he keeps insisting on three elemental things about the way our minds work, which are creativity, our capacity for inquiry, and our capacity to collaborate. And I thought it was yet another meta approach to think, well, those are three things that AI is really struggling with: creativity, collaboration, and the capacity to ask questions and wonder.
So the goal of the experience now is that we have a Chomsky AI guide that invites us into a world where we're multiple users, and we're invited to collaborate, create, ask questions, be inquisitive, but he never tells us what to do. He keeps showing us that by nature we have done all of it already. So the goal is really to have a kind of more gamified, fun, and playful experience in demystifying what AI is, without trying to be very intellectual. Of course, you say Chomsky, AI, collaboration, the mind, and it becomes super intellectual. So the goal was to really try to make it playful and fun, so that people discover for themselves and try to look and dig deeper.

[00:08:20.484] Kent Bye: I have to ask whether or not Noam Chomsky himself had a chance to interact with Chomsky vs. Chomsky, which could be Chomsky vs. Chomsky vs. Chomsky. Has he had a chance to interact with any of the iterations yet?

[00:08:31.860] Sandra Rodriguez: So, this is my biggest dream. In 2020, we were in contact with Chomsky previously, of course, while we were producing the piece. We really were in contact with Anthony Arnault, who has a close relationship with Chomsky, and who came to see the experience to validate and vouch that it was indeed a fair use of his digital traces: that we're not creating a deepfake, we're insisting that it doesn't feel or sound too much like a deepfake, and that the bot always insists that it's not Noam Chomsky, it's just based on traces that are freely available online, and it's not trying to pretend that it is the individual. So, Anthony Arnault came to validate, but we insisted, can we send it to him? He tried, and of course he told us, he has a lot better things to do. Your experience is interesting to me, but I don't think he would come. But that was a couple of years ago. Since then, the New York Times published an article by Noam Chomsky, where he's feeling compelled now more than ever to tell people, especially now that ChatGPT is everywhere, calm down. ChatGPT is doing a parrot act and it's not replicating the way you think. It's just a very useful tool which we can use in very different ways, but in no way is it telling us how your mind works. So he keeps insisting again, but this time by writing an article in the New York Times. So my real hope now is that, now that we have this new iteration that we presented in Berlin in 2022 and we're going to present in North America in 2023, it's a new incentive for him to this time try out what we've done with ChatGPT, insisting on his digital traces but keeping his message alive: that this ChatGPT is useful. It's useful to discover all of his digital traces. That's also what we're trying to do. We're discovering the traces he's left behind, but by no means are we pretending it is sentient, or that it knows and can replace the mind of Noam Chomsky, of course.

[00:10:23.302] Kent Bye: So you started with GPT-2, and then there was GPT-3, and then GPT-3.5, and now we're on to GPT-4. So is the latest iteration using GPT-4 from OpenAI?

[00:10:34.656] Sandra Rodriguez: Yes, the latest iteration is using GPT-4, but it's also still using GPT-2. So we've kept the database we built with GPT-2, because we felt there was something precious in it that we've now moved past. The goal with GPT-4 is really to give you the most accurate answers. But with GPT-2, we had the most Chomskyan answers, the answers that really felt and sounded like him, because it was really based only and solely on data traces that we used from the Chomsky.info archive and other archives that were readily available to us through the MIT library archive. So the traces were real archives that were only Chomsky. Now, with GPT-4, it could be people talking about Chomsky, people commenting on Chomsky online. So even though we're trying to train GPT-4 to really look into the data traces we're giving it, it's still biased by all these other conversations that people are having with GPT-4. So we're using the models in different increments, and whenever the experience is using GPT-4, it tells you and shows you that it actually is; the Chomsky AI is actually transparent about it. So sometimes he says, actually, I'm answering this, but I have no idea if it's true, I just took it from online and from other people's traces and what other people have asked before you. So we're using different models, and of course we're not basing it off GPT-2, which is now no longer available, but we used all of the questions in the database that we created through GPT-2, and we're still using it now through a large language studio from Microsoft.
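
For readers curious what this kind of single-corpus training looks like in practice, here is a minimal sketch of fine-tuning GPT-2 on one author's transcribed talks, using the open-source Hugging Face libraries. The file name, hyperparameters, and overall setup are illustrative assumptions, not the Chomsky vs Chomsky production pipeline.

```python
# Minimal sketch: fine-tune GPT-2 on a single plain-text corpus of one author's talks.
# The corpus file name and hyperparameters are placeholders for illustration.
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One text file containing only the author's transcribed talks and interviews.
corpus = load_dataset("text", data_files={"train": "chomsky_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-single-author",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    # mlm=False gives standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```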

[00:12:07.047] Kent Bye: Yeah, my recollection of the New York Times op-ed that Noam Chomsky wrote was that he was making the essential argument that other AI ethicists have made, the "stochastic parrots" critique of a lot of these large language models: that they're autoregressive, they can't do any planning, it's just a statistical representation of someone's corpus of all their language, of what's the likely next word, but it can't plan for where it's going to end up, and it also doesn't have any higher modes of reasoning. His specific argument, though, I believe, was also that it doesn't really actually understand the fundamentals of language, that it's this more statistical repetition. And so are you hoping that if he would see this representation of his own corpus of work, it would somehow make his mind more open to it? Or, I feel like he's pretty entrenched in his ideas, so I'd be a little surprised if he were to change his mind after watching it.
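
To make the "likely next word" point concrete, here is a tiny illustrative sketch (assuming the Hugging Face transformers library and the public gpt2 checkpoint, not the project's own models) that prints a model's top predictions for the next token of a prompt:

```python
# Sketch: an autoregressive model only outputs a probability distribution
# over the next token given the tokens so far; the prompt is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Language is a window into the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Distribution over the vocabulary for the very next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  {p.item():.3f}")
```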

[00:12:57.033] Sandra Rodriguez: Well, maybe where I think we could change his mind is on how people are using it. Because he is, I think, insisting a lot in conversation with discourses that are very much about AI becoming sentient or conscious, or whether it represents our mind. And of course that is one conversation, and it's a tough one to have. And so his position, I think, is really opposing these views, and sometimes, even in his examples, he becomes limited about what AI can actually do. But I do hope that we can show through the experience that AI is also useful, not just to fill in the gaps of a conversation, but to make individuals think about what they cherish. And it's not because the AI suddenly became the philosopher. The philosophers are the users discussing with these AI systems. But if we train AI systems to maybe be more transparent about the fact that they don't have feelings, that they're not pretending to be humans, but they're still narrative guides, maybe that could change. Maybe you could see the benefit of artists using it in expressive ways. Because whenever we say that AI is a tool, I do feel it's a huge taboo. People are saying, no, it's not a tool, it's more like a child, it needs time, it will evolve. But whenever I speak with fellow artists that are using AI, and/or data scientists, we often use the word tool, and we feel weird about it, because it's not just a tool like a hammer, but it's more like a pencil, or a brush, or a canvas, or acrylics. It becomes more of a medium, and that's a medium you can work with. It has its benefits, it has its own affordances that other media may not have. So it doesn't become just a tool, it becomes something of its own, but by no means do we feel like we can just let it roam. Or if we do just let it roam, it's because we want to see how far it goes, and that's the narrative that we want to bring. But it's a conversation with a medium. And I've heard from other fellow artists the idea that the goal of using this technology is not to suddenly just let it inspire you. You are inspired by everything, including technology, but also your contact with other humans, and that's not lost. So I think in the yay or nay debate about AI, Chomsky positions himself very clearly against the "yes, this will change everything and we should embrace it" side. But I think we should open our minds, and I do hope we could change his mind, that when artists are using it to create, it can also become a very powerful tool to invite us to introspect, to think about what we're leaving as digital traces, at the minimum. And maybe at the maximum, what are we leaving for future generations by using these tools? So it is a useful medium, such as, you know, the radio.

[00:15:49.981] Kent Bye: You mentioned that there's going to be some moment in 2023 where this is going to become available in North America. Do you have any specific details of when and where it will be available?

[00:15:58.909] Sandra Rodriguez: Yes, the premiere is in September, now the date is established, September 12th, at the National Film Board of Canada in Montreal. They have a new exhibit space, and we're going to use this exhibit space, and that's the start of the tour. I'm not sure exactly where it will go from there in North America; it's the start of the North American tour. It will be there for six weeks, and then onwards to other major cities in Canada and the US.

[00:16:23.729] Kent Bye: I know the Phi Centre has some spaces in Montreal. Is that a part of the Phi Centre, or is there something that's a dedicated space for the NFB that's going to be showing some of these immersive works?

[00:16:32.097] Sandra Rodriguez: It's a dedicated space for the National Film Board of Canada, but I do have another project with the Phi Centre that's also premiering this summer. So AI is going to be everywhere, and I'm going to be very much swimming in it for the whole summer.

[00:16:45.029] Kent Bye: Well, maybe you could tell me a bit about what's the other project that you have premiering at the Phi Centre?

[00:16:48.933] Sandra Rodriguez: So I love how one AI project always leads me to the next AI project, because I feel limits, limits in the conversation or in what I could do with it, and I'm always thinking, well, maybe that's not for this project, maybe I keep it for another project. While we were working on Chomsky, the question that arose most of the time was, well, these are your data traces, what could happen with your own digital traces? And in parallel, we were having conversations with another group called Club Sexu. Club Sexu, if you'd like, are a group of young activists who are doing events to promote inclusive and positive sexuality. And their questions were really about dating services and apps that are out there, such as Tinder or OkCupid, that are using tons and tons of data, and how those algorithms are shaping the way algorithms are created elsewhere. So not just for dating apps; they're used elsewhere. And the more we were digging into the research, one thing led to another. I realized from some statistics that between 20 and 40 percent of everything we share online is porn, and it just stopped me in my tracks. I thought, well, okay, for the Chomsky project I'm just looking at Chomsky data, but if I looked at the world's data, between a quarter and a third of it is pornographic material. No wonder these systems are always trying to please us, or are very biased in their representation. If you type into, let's say, Midjourney, "beautiful women," you can really see the definition of beautiful women being of a certain color, a certain age, a certain type, a certain body shape. And we often say this is just a representation of who we are. It's partly true, but it's partly also that it's trained on pornography. So we started thinking about how the interrelation and pervasiveness of data and algorithms is now everywhere, including in our intimacies. So the experience with the Phi Centre is a two-floor exhibit with eight installations that are trying to look beyond the binary codes of our data, in an area where we're trying to avoid binary codes of sexuality and gender. So the experience is called Sex, Desires and Data. And it's an exploration of eroticism in the age of AI.

[00:19:02.160] Kent Bye: And what kind of AI tools were you using in the course of showing this experience then?

[00:19:07.581] Sandra Rodriguez: A lot of them, and I'm laughing while I'm saying this, because we trained a chatbot to be a narrative guide within the experience. And it's a narrative guide that flirts, and flirts a bit too well, because ChatGPT has a tendency to want to flirt. So we had to find a way so that it didn't become aggressively flirty, while at the same time staying a bit erotic, something that leads into your imagination. We have a chatbot that we created, but we also have other installations that are using AI in different ways. One of the installations that I'm working on is called Results, and we use Stable Diffusion and Disco Diffusion to generate images, training it on millions of pornographic videos. And at the beginning it was very poetic and beautiful, because what you could see were shapes. You could discern some skin and you could discern movement, and it became kind of erotic images that you cannot really quite see. A little bit like before we had the internet and you wanted to watch porn but you didn't have the channel: you can kind of understand what's going on, but you don't really see it. But then it quickly became so realistic; these image-generative systems became so realistic that we were stuck with yet another problem. Now we were just recreating porn, and we just had an entire exhibit which was only porn. So we decided to tweak it again. That's why we're using Disco Diffusion, and we're trying to break the system. Instead of asking, for instance, for images that generate content relating to consensual adults, naked, pornographic, let's say outdoors (I'm just naming outdoors because we're using categories that we're taking online from the most searched pornographic results), we're now trying to break it. So we will say outdoors, outdoors, outdoors, outdoors. And we will just repeat the word so many times that it breaks the system. But it creates images that now look like shells, animals. They look like things you've never seen. And they're still described as erotic by the testers we've had until now. But you cannot know exactly what you're looking at. But it feels naughty. What we're loving is that we're telling people, do you really want to train AI to reproduce things that we know are already reproductions of clichés? Or could we train it to look for textures, skin textures, color patterns, movement patterns mixed with colors? And so we're really training it to look at patterns, but not the expected patterns. Patterns such as color, texture, skin, goosebumps, things you're not usually thinking about when you're thinking about pornography, but that are within the millions of downloaded videos that we have.
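
As a rough illustration of the prompt-repetition idea described above, here is a minimal sketch using the open-source diffusers library with a public Stable Diffusion checkpoint. The checkpoint, category word, and repetition count are assumptions for illustration, not the exhibit's actual setup.

```python
# Sketch of the "repeat a category word until the model breaks" idea.
# Requires a GPU; checkpoint and parameters are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

category = "outdoors"                 # e.g. one of the most-searched categories
prompt = " ".join([category] * 30)    # repeating the token pushes generation off-distribution

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("outdoors_broken.png")
```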

[00:21:46.405] Kent Bye: So did you have to go out and actually download and tag each of those videos, or were you using existing models? I'm just curious what the process was of gathering this data, and if you had to add additional metadata that you were trying to have picked up as you were creating this fusion of all these different videos in this model, whether you were taking some stuff off the shelf, tuning it, or training it yourself.

[00:22:09.048] Sandra Rodriguez: We've had to do a lot of this, so I'm working with a developer called Edouard Langteau at the Phi Centre. I was worried at one point about his mental health, because he did have to download not hundreds of thousands but millions of pornographic videos, and they're self-categorized. So online, if you go on the platforms where we access pornographic content, the videos are self-categorized, so we're using these categories, but it doesn't mean that everything we download is okay to us. What I mean by this is that we don't want to impose a morality over the type of content, but there is pornographic imagery out there that is illegal. We needed to have what we call negative prompts for violence, gore, children, non-consensual; we really used trigger words to make sure that we could filter out things that are not only morally questionable but actually illegal, and that was a hard, difficult decision. We worked with sexologists, and the system would tag certain words as problematic or filter certain words out, and we had to debate whether that was being overly moralistic, for instance, or whether it's a practice that could be accepted. So we really had to work with psychologists and sexologists to make sure that the negative prompts we were filtering with were okay. And even then, there are things we had to look at, even if our models end up generating these weird images that look like shells, or birds that are part shell. From time to time you see a face appear, and with these faces we need to make sure that they're not of children, or of individuals that could be recognizable, such as porn stars, for instance. So we needed to take away every famous name, every famous studio, every word that could become problematic. So we had to work and work and work on the models again, and find the sweet spot where it was realistic enough that people could understand how realistic we could get, without giving them the joy of actually seeing the huge realism that was achieved in the last months, because that's not the point of the experience. The point of the experience is really about thinking about all these results that we're sharing online, and how we could train it to see something different.
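
For context, "negative prompts" in this sense are a standard feature of open-source diffusion pipelines: a second text prompt that the sampler steers away from. Here is a minimal sketch, assuming the diffusers library and a public checkpoint; the prompt and negative terms are placeholders, not the exhibit's actual filter lists, which were built with sexologists and psychologists.

```python
# Sketch of generation-time filtering with a negative prompt.
# Requires a GPU; checkpoint, prompt, and terms are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="abstract skin texture, goosebumps, colour and movement patterns",
    negative_prompt="violence, gore, child, non-consensual, recognizable face, celebrity",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("filtered_texture.png")
```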

[00:24:27.158] Kent Bye: Well, I know that with some of the different large language models that are out there, like, say, OpenAI's, they're not always completely transparent about what architecture they're using, or even what data they're using to train things like ChatGPT or DALL-E 2. There's Midjourney, which is also somewhat opaque in terms of what their training set is. Stable Diffusion is an open source project, so they're including a lot more stuff, and I've heard that there's more adult content being used to train Stable Diffusion. Maybe you could expand on some of those existing models, like Midjourney or DALL-E or Stable Diffusion: to what degree is adult content being used to train some of these models that are available?

[00:25:08.244] Sandra Rodriguez: So, surprisingly, even before we started digging too much into it, I kept thinking, you know, we've seen on Midjourney how plasticky some of the skins look. It's a texture that looks like the skin has makeup on or is oiled at the same time. And I never made the correlation. We discussed with data scientists, and they said, well, skin representation is largely based on pornographic adult content shared online, because it's the most widely available. Even though we have these large datasets, we may not need all that large a dataset to create a skin. And what we found problematic, at least we know this for Midjourney, and that's why we stopped using Midjourney, which we started training our images on at the beginning, is that we could ask it to tap into dermatology databases of skin. It doesn't have to look at pornography to represent skin. Yet there's still the idea that the larger the better: the bigger the dataset, the better the image is going to be. But a bigger dataset also means that it's already canned into certain categories of content, and this content may not be representative of the wide variety of skin textures, for instance, not even to mention colors, of course, but even textures. Are they old, are they young, etc. And so we realized, by looking at different types of systems and models, that some of them were really... I couldn't say for sure, and I didn't dig into it scientifically, but we were asking data scientists where they thought they were trained, and they said, well, this is clearly using pornographic content. And Stable Diffusion, we felt, was allowing us to really look more at the prompts that we were using, to navigate around and make sure that we were not just tapping into, let's say, pornographic images, so really just the data that we had downloaded, for instance. And where we were not using the data that we had downloaded, the prompts would be words or names that we really took from search results. If I explain it a little bit better: the images are really trained on visuals from the porn that we have, but we're training it to search for a mix of these images with something else that comes from words like threesome, small tits, I'm not sure if I'm allowed to say it on your podcast, but adult content searches. And what we love is that we love the mistakes in the system, and there were a lot of them. I mentioned small tits for a reason, because it gives us birds. That's my bird example. It gives us thousands of birds, and so we have a room of birds. And when the user enters, they see a search category, and they see how the AI just confused it. I love the beach category, because it decided that no humans were needed. In the beach category, that makes sense, because it's also trained on beach images. We all love to think that we're alone on a beach, so we filter other humans out. And so even sex on the beach becomes something where you filter humans out, and it understood that the perfect beach image has no humans. But that's not true for outdoors, for instance. So outdoors is the opposite. It looks like a nudist park. So it's just interesting to see how it picks up on not just what we're doing with pornography, but the names of these categories, what they convey for us online.

[00:28:37.716] Kent Bye: So the other thing I wanted to bring up is that a lot of the terms of service of, say, Midjourney or DALL-E have restrictions over what type of images you can produce, because they have an additional layer of reinforcement learning with human feedback that is tuning it in a way that creates these restrictions on what type of stuff you can create. But with Stable Diffusion, there are a lot fewer restrictions. And so it sounds like by using Stable Diffusion, you're able to do a project like this, whereas with these other models, you would never be able to enter all this data and run this experiment, because it would violate their terms of service. So I'd love to hear any other additional comments on that.

[00:29:12.830] Sandra Rodriguez: I'm sure you're right, and I'm sure that in the near future we may still have problems. So just to be clear, the experience uses these systems, but it's not live-generated, and we wanted to make sure that was the case because of the problematic images it can create. Especially for an exhibit that's about inclusive and positive sexuality, it's really hard to make sure that we are actually filtering out complex content. Recently, and we have no idea why, there is a particular category where it kept generating the same face over and over. It's not exactly the same face, but it always looks like the same face in the same place, and that was too much for us. It's not that it's impossible for us to search for the mistake and why this face appears, but the face in itself was problematic, because we thought, well, it's a strange category to suddenly just have this person's face. It's probably because there's a porn star that had a lot of content with these tags, and so it's probably closely identifying a real individual. So there are fewer filters, but there may be more in the future. And the decision we had to make collectively was to say, okay, we're using it to generate these images, but once they're generated, it's not generated live with the user's input. What the user accesses, their interaction, is real-life search content, and that search content taps into the closest categories that we've created using these systems. So it helps us: if in the future the filters become too complex and we cannot do this type of project, we can still have an experience. And at the same time, it helps us make sure that we're verifying and counter-verifying and counter-verifying our content all the time. It may still have glitches, but at least we're diminishing the quantity of glitches it could have in the future.

[00:30:58.994] Kent Bye: So it sounds like you've pre-generated a lot of different images and that you're not going to let people just enter in stuff because of all the ways that it would be impossible to really constrain it in a way that would not produce some potentially really problematic images.

[00:31:11.923] Sandra Rodriguez: Exactly.

[00:31:13.075] Kent Bye: But also, I guess, just to reflect that with the amount of pornographic content that's online, not only is there a lot of it, but it's also got quite a robust taxonomy that's developed over the many years that pornography has been around. And so from a machine learning perspective, a lot of the training on this data has been on the captions that are written for the images. But in this case, it's more the taxonomy categories that come with each of these posts. So you're able to download not only the video, but also all the metadata and the taxonomic information that can help robustly train these models. And so as you're creating these huge models, you're able to, in some ways, peer into the mind of how AI is interpreting this more constrained data set, given this one slice of life that has been produced by collective humanity with all these pornographic images.

[00:31:59.272] Sandra Rodriguez: I love the way you put it too, because by using the metadata and the tags and the text, how we categorize the content, we're seeing how the AI can interpret the connections that we're making, but we are also seeing a good reflection of ourselves. There's a lot of bias just in the words that we're using. For instance, certain countries are specifically named, but other times it's regions. I'm sorry to have to say the biases that we're reading, but it's huge clichés, such as exotic always being associated with Asian; ebony is certainly a category in and of itself, but Caucasian will not be. You really have these discriminatory categorizations. The biases become obvious by looking just at the categories that are used. And sometimes we find pearls like hunky Hungarian, and we didn't think that was a popular category, or romance, which became a popular category throughout the confinement. So we're looking at mirrors of how humans are looking for love, adult content that's also reflective of biases, discriminatory ideas about gender, about race and nationalities and cultures. But at the same time, sexy sex is currently very popular, and that makes me laugh, that somebody is just typing in sexy sex as opposed to normal sex or unsexy sex. So I just think we're finding these gems of search results, and I think just that in and of itself is truly inspiring, to think about the way we're interacting with machines to access eros, eroticism. It's very strange, the way we're using words to tap into a machine to get images that we could feel are erotic.

[00:33:50.205] Kent Bye: And so was it a part of the exhibit, then, to start to deconstruct some of these larger patterns that you're starting to find after you've collected all the data, parsed it, and created these different models, and then you're peering in and being able to uncover some of these deeper biases that the AI has?

[00:34:05.727] Sandra Rodriguez: The exhibit in and of itself is a two-floor exhibit, so it's different installations, and it's a bit more guided narratively, because the entire Results installation, for instance, is only one installation. And in this case we really want users to kind of think for themselves, just by looking at these images and wondering: what is happening, what am I watching exactly? It feels like a close-up, but it's actually derived from a category such as threesome, so I'm not sure what I'm looking at. We've just tested it with users, and we laugh a lot when we watch them do it, because they're kind of appalled, then compelled, and at the end they take selfies in front of it and say it's beautiful, I would want a calendar of this, and we're like, but you're not sure what you're looking at. So I think this is one example of an installation that is there kind of to provoke, and to make you think about our relationships with machines, where today desire, data, and sex are closely intertwined, closer than ever. But there are other installations within the experience that are more used to deconstruct, as you were mentioning, or debunk certain myths. For instance, we have an installation called Hello that's about a trans woman and her experience. The invited artist we're working with, Yana Books, is talking about her experience on dating apps, and how difficult it becomes to just see how many people react or ghost her when she explains at one point that she's trans. Some want her to explain it from the beginning, some at the end. So some installations are really about debunking the dangers or disclosing the problematic binaries of these systems, and other installations are really about opening our minds to other alternatives. So, to answer more properly, it's kind of a push and pull. Throughout the experience, some installations are really just there to explore and keep a free, open mind, and some are really debunking myths or trying to open conversations about our relationship to machines, and with machines, to access intimacy.

[00:36:14.972] Kent Bye: You said there are eight different installations and you mentioned one that you dove into in quite detail. Were you involved with all the other seven installations or were you just coming in for one of them and you have other artists that are coming in with different takes on the same topic?

[00:36:28.948] Sandra Rodriguez: A little bit of both. So we're four creative directors: Annabelle Fizet, who is the creative director of the Phi Centre; myself as a creative director; Maude Huismans, who is the director of Club Sexu; and Sam Greff, who comes from production of immersive spaces. And as four creative directors, we split the installations so we could each focus on different topics and subjects that were relevant to our own experience, or to other artists' experience. So for instance, we're working on an installation called Queering the Map with Lucas La Rochelle, an artist who has been working on the queer experience online, mapping the locations of queer experiences, and an experience called Show Me Yours, on webcam girls, where we worked with a webcam girl who shared her experience. For these experiences, we don't want to pretend that we're conducting and directing the installation; it's really about the artist's voice. But as creative directors, we split the types of installations that we were heading. And myself and Annabelle Fizet are the two creative directors of the whole exhibit, so in this role, I do have to oversee all of the installations.

[00:37:41.851] Kent Bye: And I remember also, back at South by Southwest 2022, you had a prototype project there called Sacred Rites, which was using AI to do like an auto-tune of dance. So that's like a whole other thing that we haven't even talked about yet. But I'd love to hear what's happening with that project and the ways that you're looking at more embodied movements: how you can start to use artificial intelligence to either augment someone's virtual embodiment while they're in an immersive virtual reality experience, or allow them to make these subtle movements but be shown something that's much more elaborate, like a professional ballet dancer, with Alexander Whitley. So yeah, I'd love to hear a little bit more context about that piece and what you're doing with the auto-tune of dance with artificial intelligence.

[00:38:23.948] Sandra Rodriguez: So we're now calling the project Future Rites, because we wanted to make sure that people didn't confuse it with the Rite of Spring, but it's an interpretation of the Rite of Spring. And as I mentioned before, it's not something I plan. It's really that, when we're working, AI is developing really, really quickly and in so many domains, and when there's an experience like Chomsky that focuses so much on language and natural language processing systems and large language models, you learn about other systems that you're not using for this project. For instance, motion matching. And so the more I dug into motion matching, the more I kept thinking we need something where we are using our bodies. Our human experience is not just about words, although I'm using a lot of them now, but it's not just about words. It's about sensation. So Sex, Desires and Data is really about eroticism through different elements of our sensory bodies. But it's also through interacting with our motion and our movements and our bodies in space. When I met with Alexander Whitley, the choreographer, he was mentioning his desire to dive into the Rite of Spring. It's like a rite of passage for most choreographers; their interpretation of the Rite of Spring means something. And we were both in the middle of confinement, thinking about the irony of all having to make all of these sacrifices, and they're sometimes hurtful. We cannot see our loved ones, we cannot go to festivals like we used to, but still it forces us to think differently. So we thought, okay, what if we could have an experience that is a dance experience, where AI helps you use your body to connect with somebody else without words, and that person may or may not be present with you. So if that other person is a dancer, are you interacting with a live performer? Or are you interacting with a trace of a live performer? And the more we dug into it, the more we thought, wouldn't it be fun to kind of puppeteer people into dancing? Alex's and my own frustration with dance projects in VR is that often you're asked to either do a simple choreography, where you need to follow the rules, or you're given a beat, where you need to follow the beat. You can do whatever you want, for instance in Beat Saber, not to name it. And dance experiences are widely popular in VR, but at the same time you don't really dance. That's the irony of it. And when you have real performers, people have a tendency to want to watch a show, so they stop dancing. And we thought, well, there's this thing in psychology called the mimesis effect, where if you see an image of yourself that moves in the same manner as yourself, you will have a tendency to try to imitate it. So if it moves a little bit more to the left than you actually did, you will adjust your movement to move a little bit more to the left. If it grows a little bit taller than you actually are, you'll stand a little bit straighter. So we thought, well, let's use this as kind of a puppet show and invite visitors to watch an experience where they dance with a live performer, but we mix up what they're watching, so they no longer know if they're the ones dancing, or the ones guiding the dancers or the avatars they're dancing with, or if they're just watching a show, or if they're guiding the show. And this confusion helps them do this mimesis effect.
The prototype was presented at South by Southwest 2022, and we were ecstatic to see visitors that were barely moving at the beginning, but at the end their arms were long and elongated, their spines were straight, and we thought, were they all taking dance lessons? Because even in their posture, they moved differently. And that was with a very, very, very rudimentary motion matching system. Now we're really working on the real AI system, and motion matching has been used as AI in video games for a while. So I love that we're tapping into things that are already part of our lives that we forget are also AI, tools that we use in video games. And with this motion matching, the goal that we have now, as we're starting a new phase of prototyping, and we're in discussions and collaboration with the Luxembourg Philharmonic Orchestra, is that we would like to include music data as well. So, data from the live performance of dancers, but also musicians. It's becoming bigger and bigger by the minute, but there's no end to this AI exploration, so I'm just excited about the next steps.
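
For readers unfamiliar with motion matching, the core idea is simple: at every tick, take features of the performer's current pose and trajectory, search a database of captured motion for the closest frame, and play on from there. Here is a toy sketch in Python; the feature vector, database, and dimensions are invented for illustration and are not the Future Rites implementation.

```python
# Toy sketch of motion matching: nearest-neighbour search over a motion database.
import numpy as np

# Pretend database: N frames of captured dance, each reduced to a small feature
# vector (e.g. positions and velocities of hands, feet, and hips).
rng = np.random.default_rng(0)
dance_db = rng.normal(size=(10_000, 18))   # N frames x 18 features

def match_frame(user_features: np.ndarray, db: np.ndarray) -> int:
    """Return the index of the database frame closest to the user's pose features."""
    dists = np.linalg.norm(db - user_features, axis=1)
    return int(np.argmin(dists))

# Each tick: read the visitor's tracked pose, find the best-matching dancer frame,
# and render that frame (possibly exaggerated) back onto their avatar.
user_pose = rng.normal(size=18)
idx = match_frame(user_pose, dance_db)
print("play dancer frame", idx)
```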

[00:42:47.538] Kent Bye: Yeah, AI music generation is a whole other rabbit hole that I'm sure you're going to be starting to go down. But we've already talked about at least three projects that you've got going. And without naming any other specific projects you may have, I'm wondering if there's any other types of AI technologies that you're starting to dig into or anything that you're looking at in the ecosystem that you're getting really excited about.

[00:43:05.858] Sandra Rodriguez: You've just named it: music. We keep talking about music, and music means for us something you could listen to and enjoy hearing, but the same tools we're using for music could be applied to any sounds. So it's a working title, but for my next project, I just got excited. In all these AI talks, sometimes you get frustrated and angry about how people talk about creativity or the lack of creativity. And then I saw a talk where somebody was using a system to generate sounds based on traces, and I'm not going to name too much of it, but I thought, what about all these unheard songs that we need to hear? Could we tap into our human history to look at new forms of generating these forgotten sounds?

[00:43:53.913] Kent Bye: Interesting. Well, I know that at the Onyx studio, there was a piece, I think her name was Kat, who was doing this translation of choreography and body movements into various different phonemes. And so given a certain shape of your body, you would get translated into sound, and you'd start to do these movements to create sounds. But I feel like music generation is a whole other dimension, with music theory and all of that: either you look at the spectral waveform and use that to train, or there are other music theory components. There are certainly a lot of generative music projects out there. The thing that comes to mind is back in 2015, when I went to the IEEE VR academic conference in France, I had some conversations there about how right now a lot of sound in these immersive experiences is spatialized based upon waveforms that you're putting in there, so it's stuff that's already been recorded. We have physics engines within the context of immersive experiences, but we don't have audio engines that are doing the same type of mimicry, looking at the different objects interacting with each other and what kind of sounds they would make. And so I feel like that's a new frontier: in the future, having an audio engine that's dynamically creating the soundscapes in a spatialized context, but potentially even using this AI synthesis to generate different sounds based upon different models that are being used as well. So I'm excited to see what that new frontier is going to be, not only for music generation, but also just spatialized sound, when it comes to creating much more rich and robust content within immersive experiences. And part of that may be capturing source data in the ambisonic format, which is something that captures a sound field. So, you know, this is actually an old technology, but in terms of contemporary use cases, with YouTube having integrations for distributing ambisonic audio with 360 videos and with virtual reality, there are new platforms where that type of ambisonic audio capture can have a distribution channel. So whether it's first-order, second-order, or third-order ambisonics, you have all this potential to create data from the real world to be able to train AI models as well. So that's something I'm excited to see, what happens with the future of using spatialized sound to create these really robust spatial sound experiences, which I feel like we're maybe two to five to ten years away from. But I think it's a matter of data, and people getting a hold of hardware, and then creating these huge data sets that are then used to train the AI. So that's the stuff that I'm interested in seeing where it goes as well.
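
As a small aside on the format being discussed: first-order ambisonics encodes a mono source into four channels (W, X, Y, Z) that describe a sound field, which can later be decoded for any speaker layout or for binaural playback. Here is a minimal sketch of the encoding step, using the FuMa convention where W is scaled by 1/sqrt(2); the signal and angles are illustrative.

```python
# Minimal first-order ambisonic (B-format) encoder for a mono source.
import numpy as np

def encode_first_order(mono: np.ndarray, azimuth_deg: float, elevation_deg: float):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))        # omnidirectional component (FuMa scaling)
    x = mono * np.cos(az) * np.cos(el)     # front-back
    y = mono * np.sin(az) * np.cos(el)     # left-right
    z = mono * np.sin(el)                  # up-down
    return np.stack([w, x, y, z])

# Example: a 1 kHz tone placed 45 degrees to the left, slightly above the listener.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
bformat = encode_first_order(tone, azimuth_deg=45, elevation_deg=10)
print(bformat.shape)   # (4, 48000)
```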

[00:46:23.778] Sandra Rodriguez: I agree. You know, you mentioned all these possibilities too. Yesterday there was a panel here at Tribeca called AI Dramaturgy, and Pierre Zandrovic on this panel mentioned that for him it was really like discovering the birth of the Internet. But just as the Internet is now used for everything, in every type of application, not just searching for results on Google or searching for specific information, similarly, all these new frontiers are not so much about what AI can do, but how we can apply it to our needs. And right now we're being a bit sold on the ideas of what these needs could be, but the more we're using these tools every day, the more, I think, just like Pierre mentioned, just like the internet, we're going to decide the uses and applications for it. And there are so many of them. So just with what you've mentioned for sound: there's sound, there's touch, there's vision, of course, and images. And apparently AI is very, very good at predicting proteins. Apparently there's a new taste out there now that never existed before, created with a newly invented protein designed with AI. So the frontiers are endless. I am sometimes excited about the technology, but I'm surprised that it's often when I get frustrated with where the conversation is heading that it helps me think about what else I could do. And so I'm not discarding the frustration. What I mean by frustration is when we keep saying people are afraid of AI and they shouldn't be. I think it's a good thing that they are afraid. I think it's a good thing, not because it's dangerous, but more because they should try to understand what makes them afraid. And sometimes that fear is not really irrational. It's more not being sure that they have control over things that are being decided. And that frustration is useful to think about: what else would I do? If I had the power, what else could I do? All these technologies and applications of these systems are getting so much more democratic that we could think about sound, ambisonic sound as you've mentioned, music, but what else is out there? And I'm really excited, too, about motion. We're starting to see movement, language, images. But sensory elements like smells, tastes, and sound are still very much untapped. And I'm just very curious to see what's coming next.

[00:48:44.384] Kent Bye: Yeah, well, it's worth mentioning that you were the moderator of this AI Dramaturgy panel with the co-directors of In Search of Time, Pierre and Matt. And Shahani was on this panel as well. What were some of the big takeaways from this panel on AI dramaturgy that happened here at Tribeca?

[00:49:00.375] Sandra Rodriguez: My biggest takeaway is the audience. The audience was really into it and had so many questions, and I really felt kind of an energetic moment of realizing people want to know. They're not leaving it to scientists and specialists. They're really curious. They feel like, with their access to AI systems, they could be part of it. And I think it's strange to say that's my biggest takeaway, but I was so curious and excited to see how the audience reacted positively, with questions about the subconscious and how we could use it to tap into creative processes, and where they saw limitations and problems, ethical problems with copyright. And I thought, there's something burgeoning here. Just the excitement of the crowd made it feel like we're at the beginning of a new cusp of interest in this new medium and form of expression. And my second biggest takeaway, from the panelists, was a little bit what I was mentioning in the first part of our conversation. I was expecting them to not go down the tool route and to keep saying, well, it's young, because we could see five years ago that was all the conversations too: it's still young, give it time, it will become creative, it could be more creative than me. There was always at least one panelist out of three, or one out of four, or two out of four, that kept insisting that we were constraining the AI systems to be a certain way, but they could be so much more. And I was surprised to see three artists really looking at it as a material that they could use to converse with other humans. And that felt reassuring. It just felt like artists are not feeling like they're being let down, or that they're being discarded or thrown out. They feel like they have a place in the room, and they're using it to start conversations. And I thought that was really inspiring.

[00:50:55.622] Kent Bye: Well, I had a chance to talk with Pierre and Matt, and I feel like all the different innovations in the AI domain are happening so quickly that it can be a little bit difficult to keep up with everything. And so they have Twitter, Discords, and private WhatsApp chat groups to keep track. You have different subreddits to track information. What's your preferred mode of keeping up to speed with all the different innovations in the AI space?

[00:51:18.685] Sandra Rodriguez: I'm very bad at it. I must say I'm very bad at it. I'm a little bit like Matt, maybe, more than Pierre. So sometimes I was talking with Pierre and he was saying, yeah, I was reading on Reddit how to... well, we often just work on trial and error, and I'm saying we, with Edouard Langtoul, the developer, for instance, on Results. And when we didn't know, we would talk to somebody else, and sometimes in a very old-fashioned way, contacting people who are data scientists or researchers in the field, because I also have that network of researchers and academics, instead of always going on Reddit. So my developer would really be on Reddit, and I would contact academics, and then we'd have a conversation. And it was this trial and error, figuring things out. But our goal was not so much to prove what we could do with it, but to make sure that it could convey our message. So when it felt good enough for us, sometimes that was good enough for us to keep up with it. But as an artist, or as a creator, I must say it's usually, just as I was mentioning earlier, because I'm on a panel, and because I'm on this panel, I have a voice on that panel to talk about how we could use it as humans, and there's a triggering thought that makes me want to know more and keep up to date. So it's often been by looking and having in-depth conversations with people at events who are showcasing their work. That's what I mean when I say I'm so bad at it: I try to keep up to date, I open a lot of tabs, and then I don't have time to read them and I need to close them again. I go on Reddit, but I forget what I was reading and I'm doing something else. But where it really helps me think a little bit more in depth is, a lot of times, conversations with academics, researchers, scientists, where we can have a two-hour-long conversation and understand the basics better. So, not to boast, but sometimes you don't need to know all the new iterations if you really understand how it works.

[00:53:15.873] Kent Bye: That's a good point. And I rely upon a lot of serendipitous collisions at events and conferences myself for how I curate the podcast. But also, if you are holding a problem or intention in your mind and you're at an event, then oftentimes you'll come across somebody who has the answer to it, in sort of this moment of serendipitous synchronicity. So there's a magic to that as well. So being embedded and embodied in physical locations with people who have a shared interest and are also creatively exploring these different topics is also a great place, so a shout-out to all these creative gatherings, because they're a real fertile place to discover new ideas in that way. I think there's something there as well. But yeah, just as we start to wrap up, I'm curious what you think the ultimate potential of this intersection between virtual reality, extended reality, and spatial computing with artificial intelligence might be, and what it might enable for the future of immersive storytelling.

[00:54:03.322] Sandra Rodriguez: There's usually one word that I hate, and it's inevitable. And weirdly, when I heard your question, my first thought was inevitable. It's not that it's inevitable. We always get to have a say in how we develop technologies. But virtual realities are about interacting with virtual worlds. And as users in these virtual worlds, we're exploring, we're interacting, we're discovering, we're being curious. If the world cannot respond to that as it does in the real world, it becomes, you know, not the dream that we're sold. So I do feel like AI really helps the magic of these worlds come to life. And more and more, it becomes so easy to understand the beauty of immersion with something that adapts to you being in that space. So yes, the first word that came to mind was inevitable, but I guess highly recommended would be a better terminology, and highly probable that we're going to see more and more of these combinations.

[00:55:01.521] Kent Bye: I wanted to ask a quick follow-up there, because as you were answering that, it reminded me of some comments that I've seen where you download a model and it's said that this model, which may be five or six gigabytes large, encompasses all of the, I guess, visual knowledge or visual history and cultural heritage of humanity. It's a hyperbolic way of saying that it's able to compress so many of the relational dynamics of all these different artifacts of humanity. And obviously it's incomplete. It's not a complete representation, so there's a lot of stuff that isn't in there, and there's a lot of bias in that data set. However, there is this marvel that a file of five to six gigabytes, sometimes even smaller, sometimes bigger, is able to encompass so much rich information. I'm not sure if you've reflected on what that means as a metaphor, that we're compressing the cultural heritage of all of humanity into these files, or how you start to think about these image generation models or large language models.
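As a rough, hedged back-of-the-envelope sketch of that scale (the 6 GB figure is just the example size mentioned above, and storing weights as 16-bit floats is an assumption), a checkpoint of that size holds on the order of a few billion learned parameters:

```python
# Back-of-the-envelope: how many parameters fit in a ~6 GB checkpoint file,
# assuming (hypothetically) the weights are stored as 16-bit floats (2 bytes each).
checkpoint_bytes = 6 * 1024**3      # ~6 GB, the example size mentioned above
bytes_per_weight = 2                # fp16 assumption; fp32 would halve the count
num_parameters = checkpoint_bytes // bytes_per_weight
print(f"{num_parameters / 1e9:.1f} billion parameters")  # ~3.2 billion
```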

[00:55:59.833] Sandra Rodriguez: I think it's very interesting. I, so far, haven't tapped so much into it. What I do think is that you've mentioned the word rich, and it's so rich in facets of our human experience that there's a capacity to look at all this data and see how it's interconnected and related. What you're describing is having, in a nutshell, condensed memories of our experience, our human experience. I haven't tapped into it specifically, but what I do realize is that there's a new generation for whom most of their experience is online, for sure, and it's surprising how much of what existed before these last couple of decades is hard to find online. And that's where my curiosity really is, in tapping into everything that we're leaving behind. Not that I'm so much of a nostalgic, but more recently I've been experiencing, here at Tribeca for instance, a lot of projects where my notion of time has obviously changed too. Maybe I'm getting wiser and older. My notion of time has changed in the sense that some of the collective memories that are evoked in the projects that we have here at Tribeca felt to me, a couple of years ago, like the past, a difficult past, even a recent past. But today they feel like now. This happened two days ago. And it's not two days ago. Maybe it's 60 years ago. Maybe it's 70 years ago. But in the history of our humanity, it's so short. And in this history of our humanity, digital realms are so short. So what I think is truly inspiring is to look at what these bubbles of experiences tell us about the now and what they're forgetting about the before. That's what's really inspiring to me now. Maybe it's because our climate is changing and we see that it's an aftermath of much more than 20 years or 40 years. And maybe that gets us thinking about what happened before and how come we don't have these traces. We have some of them. We don't have all of them. But in parallel to this, these traces can also say much more than we're letting them. There is research being done currently at MIT with the Transmedia Storytelling Initiative, where they're taking old films from the early 1900s and seeing how we can find data in these films and extrapolate from it to see something else in the movie. And I think this is truly inspiring. So I like both. As you were saying, all these artifacts of our humanity are preserved and we can look at them, but they're also so small that looking at them to discover more, it's kind of, we're in the infinite. We feel so tiny and small and so now, and there's this infinity of things that are actually not online. And I'm very curious about them.

[00:59:01.240] Kent Bye: Yeah, and as you were saying and pointing out those relationships, it reminds me of a couple of things. One is this higher-dimensional latent space, this geometric topology with extra dimensions that is able to connect these things. So it's beyond just space and time in some ways. It's more than just three dimensions; it's these higher dimensions that are able to draw these connections. It also points, for me, to this process-relational approach of seeing how the fundamental nature of reality, from a process-relational metaphysics perspective, is relationships and potentials and processes that are unfolding, and how in some ways these models are creating relational dynamics between words and images, and how those relate to each other in certain contexts. So how things sit in proximity to each other allows these models to recreate contextual scenes and world-building that feels familiar, because they're preserving some of those relational dynamics. And there's a certain dimension of archetypal potentialities connected to the language that expresses the gist of things, a vibe with a poetic nature, where you give very abstract prompts but still get something that's very specific. So there's this deeper practice of what Ivo Heening calls prompt craft, coming up with the magical spells that are able to tap into these archetypal potentialities to create these images and prompts and experiences. And so it feels like in a lot of ways these models embody a paradigm shift from an object-oriented or substance-metaphysics mentality into a more process-relational mode of relating, both because of the way that the data is stored as relationships and because of what it's able to generate from such a compression of all these things. Anyway, those are some additional thoughts as I start to make sense of what these objects actually are. Working with them catalyzes these deeper metaphysical and philosophical paradigm shifts, because we have this direct experience of another mode of being, this sort of alien intelligence that may actually be getting closer to the roots of how reality is constructed within itself. So anyway, just some additional thoughts there.
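A minimal sketch of that proximity-in-latent-space idea, assuming made-up toy vectors rather than anything from the projects discussed here: concepts become vectors, and relatedness shows up as geometric closeness, conventionally measured with cosine similarity.

```python
# Minimal sketch (hypothetical toy data) of "proximity in latent space":
# concepts are represented as vectors, and relatedness shows up as geometric
# closeness. Real models learn embeddings with hundreds or thousands of
# dimensions; the 4-dimensional vectors here are made up for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How aligned two embedding vectors are (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for three concepts.
embeddings = {
    "dancer":       np.array([0.9, 0.1, 0.3, 0.0]),
    "choreography": np.array([0.8, 0.2, 0.4, 0.1]),
    "spreadsheet":  np.array([0.0, 0.9, 0.1, 0.8]),
}

# Related concepts sit close together in the latent space...
print(cosine_similarity(embeddings["dancer"], embeddings["choreography"]))  # ~0.98
# ...while unrelated ones sit far apart.
print(cosine_similarity(embeddings["dancer"], embeddings["spreadsheet"]))   # ~0.10
```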

[01:01:02.159] Sandra Rodriguez: I think it's very inspiring to listen to you. You have a view of how the technology is evolving, but also of how artists are trying to convey meanings and relations and tap into these kinds of multiple layers of relations, as you've mentioned. That's where it's truly inspiring. Yesterday on the panel, we were all a little bit afraid. A member of the audience called us shamans, and we thought, where is this going to go? Is this conversation going to go somewhere very weird? And weirdly, as we were discussing after the panel, we all felt empowered. We all suddenly felt like, well, if you are a shaman, just use your power correctly. And maybe that power is being able to use these kinds of transversal layers and ways of relating to convey something and make sure people are watching or looking at the right place. If there's magic, maybe you point to it and that's all there is. So we all felt kind of a sense of, oh, this is weird to be called a shaman. But at the same time, we thought, well, there is a sense of tapping into subconscious or unconscious relationships and making sure that we hear them, or listen to them. And that kind of calmed our fears down.

[01:02:14.862] Kent Bye: Well, I would still avoid using the shaman language just because it, you know, has the dangers of appropriating from other cultures. But I do think the essence of the shaman as an archetype is that you're walking between worlds. You're walking in the physical realm, but you're also walking in this transcendent realm. And in this transcendent realm, you can think about it, from a quantum metaphor, as the quantum foam, where you have what's possible, all these potentialities, and then you have the collapse of that wave function of possibilities into what's actual. So it's a translation from what's possible to what's actual. The substance metaphysics only describes what's actualized, sort of after the collapse, but there are all these potentialities and relationships and processes in this quantum realm of potentia that some folks like Epperson and Zafiris, as well as Ruth Kastner, argue should be considered real as well, which is very similar to what David Chalmers is arguing. You are going between what is possible and what is actual and knowing how to navigate and use language to pull those potentialities out into what is actual. So that translation between what's possible and what's actual is this kind of walking in between two worlds. In that interpretation, I think it's accurate, but I'd prefer a different word or phrasing for that walking between worlds, because, yeah, I just feel like there are more problems with appropriating language like that.

[01:03:37.276] Sandra Rodriguez: Totally agree. But it's a very, very nice way to see our roles and understand why we're doing it and at least realize that others are feeling that necessity to try to walk between worlds. I'm going to use this phrase.

[01:03:55.278] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[01:03:59.329] Sandra Rodriguez: It's been a while. What I mean by this is that I've noticed the ups and downs of VR, which have been there all the time, and now the ups and downs of XR and of emergent media. A lot of the time the ups and downs came from outside, from the interest of the press, whether the media is interested or not in what we're doing. So it feels like the ups and downs are a pressure that comes from the outside. But since the pandemic, I've realized that we're a small community and we're trying to expand, and there have been some downs recently where people felt less inspired or were wondering if they would keep doing this for a while. So it came from the inside. And I thought, ooh, what is happening? Perhaps what happens is that the technology is changing drastically and there's a bit of unsettlement, and people are finding their spots. There's a little bit of discomfort, and I think they're now trying to figure out where they want to be positioned. So a message to the community: it's been a while, but we're now back together and getting inspired by each other yet again. And I think that's changing everything. We are a community of relationships, and relationships between, as you've so eloquently put it, what could be and what's actual. And when we're not in contact and exchanging and seeing each other's work, I think that's where we start to get a little bit depressed. So I'm just optimistic for the future, because I see people picking up again. But I was afraid for a moment. I thought, what's going on? Everybody feels a little bit more depressed than usual, less inspired. And today it's quite the opposite that I'm feeling.

[01:05:34.778] Kent Bye: Yeah, and with Apple entering into the fray with the Apple Vision Pro, that's certainly bringing an increase in interest. But I do agree. When I've gone to the International Joint Conference on Artificial Intelligence, I'll often hear from academics that there's this slow and steady monotonic growth, and then there are these inflection points where things get ready to be translated into either an enterprise or a consumer context. You have inflection points like we had with ChatGPT and OpenAI, and just the same with Apple entering into the fray. We may have these additional inflection points that are going to bring broader interest, but from my perspective, there has been continual innovation and growth slowly over time, which is part of why I want to do the podcast, to help document that. Anyway, I really appreciate you taking the time to dive into all these things. It's a hot, hot topic, and you find yourself at quite the intersection at the moment with two topics that, kind of like the ups and downs, I feel are both on the upswing. I'm excited to see where you take some of these projects in the future, both Future Rites and the Sex, Desire, and Data Show, and then you also have the next iteration of Chomsky vs Chomsky. So thanks again for taking the time to help unpack it all.

[01:06:35.848] Sandra Rodriguez: Thank you so much. It's always a pleasure.

[01:06:38.347] Kent Bye: So that was Sandra Rodriguez. She's a director and creative director of immersive experiences that use AI to create spaces where you interact with sentient-like entities. And she's got a number of different AI projects, including Chomsky vs. Chomsky, Future Rites, and a piece that just opened up at the Phi Centre in Montreal called Sex, Desire, and Data Show. So I have a number of different takeaways from this interview. First of all, it was super fascinating to catch up with Sandra. She's really at this intersection between immersive storytelling, immersive experiences, and AI, and seeing how you start to blend those things together. We talked a lot about both the embodiment aspect and ChatGPT and conversational interfaces. She's training her own models on a corpus of millions of pornography videos and all the different taxonomic ways that data has been tagged, and then creating some really interesting explorations of what kinds of things can be picked up on by prompting it with words that come from that corpus. So really quite a lot of interesting discussions, and she's one of the four creative directors of that show called Sex, Desire, and Data that was just opening up at the Phi Centre at the beginning of August. The piece that I saw before was the first version of Chomsky vs. Chomsky that showed at Sundance 2020, and it sounds like they've continued to expand it and grow it out. Chomsky is somebody who is a political commentator who speaks and gives a lot of interviews and has written a ton, so you can train a very specific model and see how it evolves over these different versions, with the latest iteration opening in Montreal later in September. And it was very interesting to hear about the latest iterations that she's been doing with that since I talked to her back in 2020, when she was using a GPT-2 model for that very first version. Also Future Rites, which is something that I saw at South by Southwest 2022: it's really interesting to see that embodiment and this autotune of dance, where you're dancing a little bit but it's correcting your dance and making you an even better dancer, and to see how that changes someone's perception of their own body and how they physically move throughout the world. It sounds like she's also digging into music and music generation, which is a whole other frontier to start to dive into as well. So I just wanted to end this series on this conversation, because there are so many different explorations at the intersection of XR and artificial intelligence, and hopefully throughout the course of the series you got a bit of a sampling of what different artists and developers are doing, what they're thinking about, some of the things they're starting to do, some of the different ethical concerns and how different folks are trying to resolve them, or just continuing to push forward what's even possible with these technologies. Yeah, hopefully you enjoyed this series, and I'll have a tag that you can use to go back and look at some of the previous conversations that I've done.
I mean, I could have very well included other episodes that I've recorded over the last four months, including In Search of Time, which showed at Tribeca Immersive. That was in episode 1,242. It's a very poetic use of generative AI, and I had a chance to talk to the creators about that and their whole pipeline there. That could very well have been included in this series as well. And then Mozilla's Liv Erickson did a whole talk around the intersection of AI and spatial computing at the XR Access Symposium back in episode 1,233, where I do a deep dive with Liv digging into some of their thoughts on the intersection between spatial computing and AI, where spatial computing is seen as the front-end interface for the new back end of AI. That's similar to what Matthew Niederhauser mentioned in an interview that I did with him a couple of episodes ago. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
