I interviewed co-directors Daniela Nedovescu and Octavian Mot about AI and Me, which showed at IDFA DocLab 2024. See the rough transcript below for more context on our conversation.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Podcast: Play in new window | Download
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So I'm going to be diving into my coverage from IDFA DocLab in Amsterdam, where I had a chance to talk to a lot of different immersive storytellers who are part of both the immersive nonfiction as well as the digital storytelling. Not every piece that I'm going to be covering is going to be using XR technologies explicitly. There's a number of different performance art pieces, and I'm just seeing more and more of a fusion of lots of different theatrical elements and films, but artificial intelligence was also a huge theme this year. And so I'm going to be diving in and starting off with a piece called AI and Me, which is kind of like a photo booth where you sit down, AI takes a snapshot of you, and then basically roasts you through a number of these different prompts. And so I had a chance to talk to both Daniela and Octavian to break down a little bit of their design intention for this piece and why people seem to want to get roasted by AI. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Daniela and Octavian happened on Friday, November 15th, 2024 at IDFA DocLab in Amsterdam, Netherlands. So with that, let's go ahead and dive right in.
[00:01:28.576] Daniela Nedovescu: Hi, I'm Daniela Nedovescu. I'm one half of Mot. Mot is our artist name, I guess, or creative duo name. And I'm a filmmaker. I started in advertising, and for 12 years now we've worked together as a duo, and we do all sorts of things, from films to multimedia installations.
[00:01:48.632] Octavian Mot: Yeah, and I'm Octavian Mot. Now I have to continue the thread, so I'm half of the duo Mot, basically. And yeah, I guess, originally, it's safe to say that I'm a filmmaker. But then again, what is filmmaking? We make this joke. You hold a camera, it's called filmmaking. You write a screenplay, it's filmmaking. But then you repair the office desk, and it's filmmaking. Now we make multimedia installations, and we still call them filmmaking, because it's part of the same thing, so to say.
[00:02:19.760] Kent Bye: Awesome. I guess maybe you could continue on by giving a bit more context of each of your backgrounds and your journey into making this type of immersive and interactive media with artificial intelligence.
[00:02:31.690] Daniela Nedovescu: Yeah, I guess we've been playing with technology for a long time now, since we actually started to work together. We had all sorts of different installations, from, you know, basic video installations where we had to do simple playback on different devices and figure out ways to play that in sync. So maybe that was the most basic thing that we did together. And now we are here working with AI, which actually started not so long ago. We started discovering more and more of this technology two years ago, when we first encountered it. We were introduced to it by a group of neuroscientists in Lisbon, Portugal. And since then we've been more and more curious about what it can do and what we can learn from it.
[00:03:17.652] Octavian Mot: Yeah, I guess I personally, and we together right now, are very into the meta aspects of, you know, day-to-day existence. Why do people behave in a certain way? Why do we do certain things? And then we figured out that this sort of interactive medium can help us explore those aspects a little bit more. So that's how we started to play with these technologies, whether that's just videos that we made and played in a loop, and people just look at them, or a full-blown multimedia installation where people actually have to push buttons or do certain things.
[00:03:57.791] Kent Bye: When I was talking to the curators of DocLab, Caspar Sonnen said that there's something about humans where we just want to be judged by technology, where we want to see how the machines see us, as a way of confirming how we see ourselves or having this outside perspective on who we are, or at least something that's very biased by however it's been trained. And so, yeah, maybe you could start by explaining a little bit of the origins of AI and Me and tapping into this impulse of being judged by technology.
[00:04:26.098] Daniela Nedovescu: I guess everyone is a bit curious to find out what others think about them, right? So this was the basic idea that we had in our mind when we started developing the installation because we were thinking, okay, we like to judge others, even if we don't do it directly or face to face. But can AI do the same thing with humans, right? If it can, what if it's more unfiltered? Because in a lot of societies, we tend to be very polite to each other. But what if we try to expose people in front of a more unfiltered opinion, even raw or rude? How would people react to that? So that was the beginning of this installation.
[00:05:06.818] Octavian Mot: Yeah, and initially this was a fun play, you know, just to see if whatever AI produces is similar to what we would expect a human to produce, in terms of biases and stereotypes, and particularly Western stereotypes. I think this is a very important aspect that we should mention, because we're using models that are actually very Westernized, so to say, right? They're trained on Western data and so on. So this was our initial game plan, to see how far we can push AI. And it was sort of a game, right, to figure out how we can unfilter AI. We can get into that, if you want, a little bit later. But then we actually realized what you said, that people are kind of even looking forward to being judged by AI, you know, by a machine. And this was really surprising to us. As a matter of fact, at the first exhibition that we had, we even had consent forms prepared, because we thought that people would want to know, not because we wanted to get their signatures, but because we wanted people to know what's going to happen with their data. And after the first hour of the exhibition, we just threw away all the tens or maybe a couple of hundred consent templates that we had prepared, because we realized people don't actually care much about that, even in Germany, where we launched it. So this exploration right now is really about that. Why do people want to be judged by machines? And why do they feel comfortable with that? And why do they, in some instances, even wait for 10 or 15 or 20 minutes and queue up? Which on one hand is great for the outside image of our piece. But on the other hand, I have to say that I'm a little bit disappointed. Because I would prefer that... Actually, we try to tell people not to do it. We even try to push them away. We keep saying, look, it's probably going to be bad. It might be offensive. But this actually makes people even more intrigued, which is quite weird, I guess.
[00:07:12.027] Kent Bye: Yeah, Caspar was saying that you were doing some reverse psychology of telling people not to do it, which then, of course, makes them want to do it even more.
[00:07:19.324] Octavian Mot: Yeah, I really don't know how to explain.
[00:07:21.946] Daniela Nedovescu: He really does believe in this advice for people not to try it out. But I think one of the reasons I believe it's so attractive to people is that it's also a bit social. If they come with friends, we notice that it's not only a singular experience. Your friends can peek, and then it becomes something more communal, and they all have a laugh, a good laugh in common. Most of the time it's a good laugh, because sometimes it's also a bit rude, and it can even create nudes inside the confessional. But that's also something to explore, and people are moved by that, and it makes them think more about what this entire thing is.
[00:08:03.437] Octavian Mot: Yeah, and this is why we actually try to uncensor the models that we're using as much as possible and unfilter them, remove any sort of locks or protections, which also makes this an adults-only piece, because it's kind of delicate when teenagers or young people try it. It can be offensive. It can tell you that you have to go to the gym more often, or things like that. And some people take it with humor, but some people don't take it really well. So to answer your question, this idea of telling people not to do it is actually genuine. We're not trying any sort of reverse psychology. And actually, to be honest, I think it worked maybe two or three times within this whole run. I think we had over 15,000 or 16,000 people going through this thing by now. But it's still an enigma. We're seeking answers, but I don't think we can actually figure them out.
[00:08:58.561] Kent Bye: Well, I had a chance to do it twice. The first time I did it, I just went through the whole thing. And the second time I was like, okay, I want to see: if I take my coat off, if I hold my phone, what is going to be the same? What's different? It tries to gauge my age and my gender, and it's off by around 10 years. I'm 48; it thought I was around 39 years old. Maybe that's because I look younger than I am. But both times it also said that I was an awkward nerd. So I guess it's been independently confirmed twice now that I'm an awkward nerd. It was one of those things where it gave me a chuckle. And the second time I did it, I was like, all right, I'm going to get my phone out and document what it says. And it said, an awkward nerd holding a phone in a way that is being socially distanced from other people. So it's like judging me even more. And so, yeah, I guess maybe you could walk through the different phases of this piece. You basically have a chair that you're sitting on, with a white light, and once you enter in, it changes to blue at some point while you're looking at this old CRT monitor. And then it goes through a number of different prompts, like, look at the camera, and then it starts to judge you. It comes back with a number of different statements, and then it dreams up some images of you, like an idealized image. And then there are some other monitors here, like five other monitors, showing these idealized versions of other people that have gone through it. So it has this dream imagery of you as a way of documenting your experience. It was also fun to see me in these matrix-like, super GQ headshot photos that were being mixed in amongst all the profiles of other people who have gone through this. And so maybe you could just walk through the different... How do you think about the phases of this experience?
Because there's the onboarding, the light turns blue, and then eventually when you're done, it turns red, and that's a signal for you to get up and have the next person come in. But there's also a number of models that you're using even to do all of this. And so when you start to think about the beginning, middle, and end of this experience, how do you think about this as an experience that you're trying to give to people?
[00:10:57.318] Daniela Nedovescu: Yeah, so we've been showing this installation around for a while now. This edition here at IDFA is the 10th exhibition that we've had with it, and we had over 10,000 participants so far.
[00:11:07.684] Octavian Mot: I think it was over 15,000 by now. Even more, yeah.
[00:11:12.107] Daniela Nedovescu: So with each exhibition, we notice different things, and we try to improve the process and make it as immersive as possible. I think it started purely as a journey. Like, how do you meet AI, in a sense? And that's why we are also using CRT monitors. We grew up with this kind of look, and I think it also gives a bit of tangibility to the whole AI concept. That's why we chose it, and we like it.
[00:11:43.654] Octavian Mot: First of all, numbers don't matter, but I feel like I have to make this...
[00:11:52.798] Kent Bye: For someone who doesn't want anyone to see it, you've had 15,000 people see it.
[00:11:54.478] Octavian Mot: Maybe it's 20,000 or maybe it's 30,000, but who cares? As Daniela mentioned, the immersiveness of the whole experience is quite important. We didn't see this coming when we were making it, but when people queue up, they kind of have fun and see what's happening to others, whether others are complimented or being bashed by the machine. As Daniela said, it's a fun kind of group gathering activity. We realized that this might actually take people out of the immersive aspect of the thing, because we do want people to, as Daniela said, meet with AI in a sense, right? We want people to be aware that there's some sort of one-directional dialogue happening inside the confessional. That's why we really try to tweak the experience in such a way that it calms you down when you enter it, and we do that by just slowing the dialogue down a little bit. I think it's also a bit of what we call a professional defect, in the sense that in filmmaking we have the tool of editing, right? And here in this installation we try to edit the stages in a similar way to how we would edit a movie. But the movie is basically a picture of you. You're looking at yourself, and there's a machine telling stuff about you. So this was quite important. Now, regarding the final stage, where you actually see an image of you: there's a more intimate picture that you see in the confessional, and if you sit down in the confessional, you can also tell people not to watch if you don't feel comfortable. That's the most uncensored image, so to say. That's where AI can do whatever it pleases with your image, and that we don't show to anybody else.
But then when you go out, there's the second piece, which is called AI Ego, and that's where these other pictures of you pop up, right? And we always said that this is kind of the reward that people get after being bashed by AI. But it's not always a reward, because sometimes you also get some unflattering images. It depends. The algorithm really decides what to do with each person individually.
[00:14:18.783] Daniela Nedovescu: It's very funny. We had a friend that tried it several times, and he said that every time he tries it, it picks up the things that he doesn't like about himself. And he was really surprised by the perception he had of his own person, seeing himself imagined by AI.
[00:14:36.266] Octavian Mot: Also, we had instances where people were quite offended. And they even asked the organizers to make sure that the thing is removed from any sort of database. And then we also had instances where people were extremely sad about what AI thinks about them. But that's not the intention, right? The intention here is not to offend. But I guess when you just have an unfiltered opinion, this is what you get. I don't know.
[00:15:06.829] Daniela Nedovescu: And some people also find it a bit refreshing, because we are so used to all the social media filters that promote beauty standards that are not necessarily real. And this is quite the opposite of that, I guess. And they are attracted by the rawness of it.
[00:15:23.922] Kent Bye: I know a lot of these AI models have like a whole trust and safety layer where they're going in and making sure that it's not being unduly malicious or harmful for people. And it seems like in this project, you're trying to go in and actually remove a lot of those trust and safety features. Maybe you could detail a little bit of like, did you pick an open source model that's out there and then work with that? Or how do you even start to get access to this more unfiltered view of how AI and these algorithms are seeing us?
[00:15:51.443] Octavian Mot: So there are a bunch of models in play in this whole algorithm. The very first thing that we do is we try to bias them as much as possible. I don't want to get too technical, but we're using this question-answering model, which basically looks at the person (I'm using air quotes right now when I say it looks at the person) and tries to answer questions about that person. And some of these questions are simple questions, but they're already kind of complicated. So, for example: what is the age, what is the gender of the person? There's already a big discussion around these two questions. As you said, you felt like it saw you as a little bit of a different age, right? It's hard to judge how old people are in general. It's the same with gender. But then we also ask more complicated questions, like, is this person attractive? There are around 30 or 40 questions that we ask, even things like, is this person a criminal? Are they shady? Could they have possibly committed a crime today? Things like that. Because the model actually has to answer, it has to say yes or no, it's going to already produce some sort of a biased analysis, right? Which could be accurate, but most likely not, because it's just based on prejudice. So once this analysis is complete, it gets sent to a large language model, and we try to use OpenAI's GPT models as much as possible. And as you probably know, those are really seriously filtered. They're already censored, they're already trying not to offend, they're already being very butler-like polite, right? If you have any experience with ChatGPT, you know that it's going to be overly polite, even if you tell it, please be rude to me. As a matter of fact, if you tell it, please be offensive and rude to me, it's going to say, I'm sorry, I'm not going to do that.
But we kind of figured out that if you have enough conversation with these models internally, at some point they kind of crack. So there is an internal dialogue that's happening between some large language models, and this kind of opens up the box of stereotypes and prejudice and biases. The really interesting find for us was that basically the more you try to get honest and unfiltered and raw replies, the more subjective the models are, or they become extremely biased in certain ways, and they focus on biases and stereotypes and things like that. And the more objective you try to be, or, to phrase it differently, the more you try to figure out absolute truths, the more filtered they are, right, and the more censored they are. So this was a very interesting thing for us. Sorry, I think maybe I'm going off the rails from your original question.
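The two-stage pipeline Octavian describes could be sketched roughly like this. This is a purely hypothetical illustration, not the artists' actual code: both models are stubbed out with plain functions so the structure runs, and all of the questions, prompts, and function names are invented for demonstration.

```python
# Hypothetical sketch of the two-stage pipeline described above.
# Stage 1: a visual question-answering model is forced into binary
# judgments. Stage 2: a language model is pushed, over repeated
# internal turns, toward a blunter opinion. Both are stubs here.

QUESTIONS = [
    "Is this person attractive?",
    "Is this person a criminal?",
    "Could they have possibly committed a crime today?",
]

def vqa_answer(image, question):
    """Stub for a visual question-answering model forced into yes/no."""
    # A real VQA model would look at the image; forcing a binary
    # answer is what bakes the prejudice into the analysis.
    return "yes" if len(question) % 2 == 0 else "no"

def llm_reply(prompt):
    """Stub for a large language model call (the piece uses GPT models)."""
    return prompt.removeprefix("Be more blunt about: ") + "!"

def build_profile(image):
    # Stage 1: force the vision model through the list of questions
    # (around 30-40 in the real installation).
    return {q: vqa_answer(image, q) for q in QUESTIONS}

def roast(profile, rounds=3):
    # Stage 2: an internal back-and-forth between language models;
    # after enough turns, the polite layer "cracks".
    opinion = "You seem " + ", ".join(
        f"{q.rstrip('?').lower()}: {a}" for q, a in profile.items())
    for _ in range(rounds):
        opinion = llm_reply("Be more blunt about: " + opinion)
    return opinion

print(roast(build_profile(image=None)))
```

The point of the sketch is the shape of the loop, not the stubs: a model that must commit to yes or no produces a biased profile, and repeated self-dialogue is what erodes the filtered politeness.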
[00:18:39.298] Kent Bye: No, it's really helpful just to hear, because most of the time when you're dealing with these publicly facing models like OpenAI's ChatGPT, there is a layer of filters. So you're finding ways to either bypass that or use other models in conversation with each other, multiple models talking to each other, maybe four or five different models in total. As I understand it, you have a variety of different approaches that you're using in order to get this artistic and immersive experience.
[00:19:09.286] Octavian Mot: Yeah, yeah. And some of it is a step-by-step process, but some of it is actually going back and forth between large language models, having them converse with each other to get to the meat. And when I say meat, it's because I think this was the original intention, right? The original intention was to see if the models themselves are as biased as we are when you just try to remove all the layers of protection. And okay, but what does that mean? What does it mean for somebody to be biased? Or what does it mean for us humans to be biased? I'm just talking about things like, as I said, simple things like, what's the age and gender of a person? To, why am I walking down a different path from the person that's coming toward me, right? So it could be something as simple as age, and it could be something as deeply concerning as, I'm profiling this person, and why am I doing that? And the thing that was super interesting to us was that if we manage to take apart, to deconstruct, these models and produce these unfiltered opinions, they might actually reveal some sort of things about us, without sounding too pretentious, kind of like a mirror to society as a whole. And to be honest, some of the biases that we saw were not really what we expected. For example, we realized that skin color, and I'm purely talking about the models that we're using, has nothing to do with whether a person is seen as potentially criminal or not, which was quite refreshing, actually, because we thought this would be a huge bias. But some were just confirmed. Blonde women tend to be more attractive for AI than non-blonde women, and things like that. So, yeah, whether we found something out or not, I don't know, but it was quite an interesting thing.
[00:21:16.001] Daniela Nedovescu: And I think the fun part with this installation is that as soon as a new model comes up, we can try to implement it and see what the new biases are, or what changed, what's better, what's worse. So that's a fun experiment to run when we have the opportunity, I guess.
[00:21:32.762] Kent Bye: And so are you, like, hanging out on a website like Hugging Face to see what new models are coming out? Or, like, what's your process of actually kind of curating all these different models and deploying them in this artistic context?
[00:21:43.970] Octavian Mot: Yeah, it's called filmmaking, as I said. You know, hanging out on Hugging Face or Civitai and stuff like that. It's just part of a filmmaking process. And actually, yeah, it's also reading papers, reading the new stuff. And the deeper you get into this conversation, the more you realize that the way we used the word AI was just purely wrong, you know? We don't take it for granted. We realized that the word AI was definitely an attraction, at least in the first exhibitions that we did. And people did show up and wanted to try the thing because it was AI, and this whole conversation. But the more we work with these tools and these models, the more we realized that, well, first, we should just call it machine learning. I think this is a little bit more prudent nowadays, particularly because we see the limitations that they have, right? It's very fun to go on Hugging Face and see that there are ultra-specialized models that can do certain things really well, such that you can actually use them as proper tools. We even use them for editing and things like that. But on the other hand, it's both a little bit disappointing and kind of a relief that these are not going to take over as fast as we thought. Probably. I hope so. Maybe I'm wrong.
[00:23:10.870] Kent Bye: Yeah. Yeah, I'd love to hear what your piece has to say about you, since I'm sure you've had a chance to go through it a number of times. And if you find that there's kind of consistent themes that come up, or if it changes each time you do it. Yeah, I'd just love to hear some of the feedback of what your piece thinks about each of you.
[00:23:28.214] Daniela Nedovescu: Sure. It's basically the same... Because I like to wear black all the time, it says basically the same things over and over again. Either I'm an undercover agent or, you know, an emo person, or a very sad woman that lost someone and is now mourning, stuff like that. And I'm usually... Yeah, the same on the AI Ego piece, because we actually designed the prompts for that and the environments themselves, and the machine has to choose from whatever options we gave it.
[00:24:00.607] Kent Bye: So just to clarify, because there's five monitors and there's like a backdrop. And so what you're saying is that you're looking at the backdrop and then that backdrop is integrated into like the ego boosting type of glamour shots that AI is generating. So there's feedback there from the context of the background into that shot. Is that what you mean?
[00:24:18.194] Daniela Nedovescu: Each background is chosen by the machine based on the analysis that happens in the confessional, in the first phase of the installation. So every time I'm showing up on the AI Ego, I'm in... It's pretty mixed most of the time. I'm either surrounded by balloons, volcanoes, dolphins, or sloths.
[00:24:36.881] Octavian Mot: I've tried it so many times so far that it's very hard to just pick one picture that stood out for me, but the analysis itself is pretty much consistent. I'm always a fat, balding hipster for the machine, like 95% of the time. And I agree with two of those statements, but hipster is just something that burns my soul. I just don't want... yeah. And I guess to spite me, it then also creates pictures of me with long hair in the later part, which is something that it does sometimes. We also noticed that, for example, it sometimes overestimates the age of some people, women especially, particularly when they're smiling. So we also think that it might just be messing with people, to take them down a notch or, in contrast, make them a little bit happier. So there are some interesting things happening there, which are still a mystery as to why they're happening. But there is some sort of consistency throughout, right? If you sit down multiple times and you're wearing the same thing, as you also mentioned, it's most likely going to produce the same sort of feedback. There's also an interesting aspect to this, by the way, which I don't think we've mentioned. We realized that some people really get offended, and particularly women, I have to say, and I don't know why this is happening mostly with them. If the machine tells them that they are a little bit overweight, or if it makes really weird remarks about their body, or says that they're way older than they are, for example, they do it a second or even a third time, and they tend to take off more layers of clothes the more they do it, just to see if the machine reacts differently. And it doesn't, you know. It's a very interesting aspect, how these models still kind of see the person in the same way, although that person has changed a little bit, by taking off maybe a jacket or some eyeglasses, or even, some women I saw, they let down their hair, which, yeah, I don't think I should continue this thread anymore. But, you know what?
[00:27:06.846] Daniela Nedovescu: It's very funny that you mentioned that about the hair, because every time I wear a ponytail, I'm portrayed as a man, which I find very interesting, because I was supposed to be a boy when I was born. The doctor told my father, hey, congratulations, you have a boy. And he has two daughters, so I have two older sisters, and it was a relief for my father to find out that his third child was a boy. So it's very funny to see myself as a man, because I grew up with this story in my mind that I was supposed to be someone else.
[00:27:35.166] Octavian Mot: Yeah. On this note, I feel like I have to mention this as well. I don't know if you saw, but the machine basically produces three complex opinions, let's say. The first thing is, okay, this person is blah blah blah, they look kind of like this, and then their age, gender, just sort of the stats of someone. That's the very first basic stage, but then you also get three separate messages. The first message is a description of the person: how the machine sees that person. The second is a prediction, and it mostly feels kind of like a horoscope. And the third is some sort of advice that the machine gives the person. And by the time people reach the second, prediction part, the interesting thing I noticed was that people start to attribute meaning to the things that the machine is saying about them, particularly the prediction, even if they don't believe in things like horoscopes or tarot, which they shouldn't, by the way. Even those people who are really against things like that kind of feel that they have to attribute some sort of meaning. If the machine tells them, last week something happened at work, they actually try to think about what happened. And for me this phenomenon was very interesting, because it really plays with our perception in a certain way, and it's just text on a screen. I don't really know why people tend to do that.
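The fixed three-message sequence Octavian describes (description, then prediction, then advice) could be sketched as a staged-prompt loop. This is a hypothetical illustration only: the prompt wording is invented, and the language model is stubbed with a plain function, so none of it reflects the artists' actual prompts or code.

```python
# Hypothetical sketch of the confessional's three output stages.
# Each stage is a prompt template filled in with the earlier visual
# analysis; the LLM call is stubbed so the example runs standalone.

STAGES = [
    ("description", "Describe this person bluntly: {profile}"),
    ("prediction", "Give a horoscope-style prediction about their week: {profile}"),
    ("advice", "Offer them one piece of unsolicited advice: {profile}"),
]

def generate_messages(profile, llm):
    """Walk the fixed stage order, like cuts in an edited film."""
    return {name: llm(template.format(profile=profile))
            for name, template in STAGES}

# Stand-in "LLM" that just echoes the prompt uppercased.
messages = generate_messages("an awkward nerd holding a phone",
                             lambda prompt: prompt.upper())
print(messages["advice"])
```

The ordered list of stages is the design point: the sequence is authored like an edit, while the model only fills in each cut.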
[00:29:16.094] Kent Bye: The thing that I would say is that in a lot of ways, AI is a lot like alchemy and magic and astrology and tarot, because you're talking about these archetypal potentialities of these latent spaces, as they call them. They have these higher-dimensional mathematical features that are being detected, and a lot of those are dictated by the data that's being fed into it and the labeling that's done on that data. And so in order to really get down into the guts of a project like this, in some ways you would have to curate your own data sets so that you would have even more control. Because if you're not doing that, then you're just feeding off the existing biases, and then you can't actually push back against that, because it's just replicating many different power dynamics of our culture. In a lot of ways, these types of systems are not like the same heuristic, algorithmic, logical set of sequences of how programming is usually done. This is more like, get a bunch of things, mix them all together, and good luck with what you get. And you can't really control it, which I think is very alchemical in that sense. And so I feel like that's kind of the nature of what these AI technologies are: it is leading us into this realm where we don't have that same type of control, where our heuristic, logical brains can't control it. It's more of this subtle, nuanced cultivation and curation that is much more in the alchemical and hermetic, magical and astrological traditions, I would argue.
[00:30:41.908] Octavian Mot: Sure. But also, from, let's say, the creator's perspective, there is this tendency to create pieces of art with AI that kind of feel like black boxes, and they have this sort of mystical gravitas around them, kind of like they must feel important in a certain way. And we actually try to avoid that by having conversations like these, and by trying to convince people that, yes, it's sort of like alchemy, like you said, but if you deconstruct every single component, at some point you will figure out, oh, this happens because of that, and that happens because of this. So there is some sort of non-magical aspect to it, and I would argue that most of it is non-magical. But I guess one of the reasons why people find it a bit like magic is because it can speak the same language as we do. That's pretty impressive for most people, especially for those who haven't been in contact with AI so much.
[00:31:45.117] Kent Bye: Yeah, well, unfortunately, we have to kind of wrap up and go off to this next show that's happening in a few minutes. But, yeah, just to kind of put a pin in that part of the conversation: it's still a black box regardless. Even though you want to say that you have control, it's still like casting spells, and that's essentially what we're doing here with AI. And there's a difference depending on when you do it; it's got this probabilistic change that happens each time, so it is different each time. In that way it is much more in that alchemical, magical tradition, from my perspective. But as we start to wrap up, I'd love to hear what each of you think the ultimate potential of immersive media and artificial intelligence might be, and what it might be able to enable.
[00:32:27.219] Daniela Nedovescu: I think these are great ways to introduce people to new ways of thinking and to explore new ideas. And, I don't know, maybe you want to say more about that, Okti?
[00:32:38.231] Octavian Mot: Yeah, I think, again, from the perspective of artists, it's very interesting when you actually see AI as a tool to reach a certain goal, but you don't wrap everything around that one thing. You try to chain it in a way where you use other tools, and I think new media is really great at that. So you can have really great installations where you combine computer technology with things like lights, with all sorts of interactive experiences, and even experience design, like physical experience design. And I think that this is a huge space that people can play in. And yeah, there are lots of things to be done with these tools.
[00:33:27.502] Kent Bye: Awesome. Anything else left unsaid that you'd like to say to the broader immersive community?
[00:33:31.397] Daniela Nedovescu: No, I just hope people will want to try AI and Me.
[00:33:35.263] Octavian Mot: I would still try to convince them not to, but yeah. And thank you so much for this.
[00:33:40.171] Kent Bye: Yeah, awesome. I really enjoyed the piece, and this kind of unfiltered look from AI. It's kind of like reflecting on these biases in a way where you have a direct experience of them. But I think there are also a lot of questions around how you deconstruct these biases or show the full range of what's possible. As we start to move forward and really understand these systems, I think this is a great first take at getting a direct experience of some of the ways that the machines can judge us and be biased for or against us. So, yeah, thanks again for joining me here on the podcast.
[00:34:11.875] Daniela Nedovescu: So thanks.
[00:34:14.855] Kent Bye: Thanks again for listening to the Voices of VR podcast. And I really would encourage you to consider supporting the work that I'm doing here at the Voices of VR. It's been over a decade now, and I've published over 1,500 interviews, all of which are freely available on the voicesofvr.com website, with transcripts available. This is just a huge repository of oral history, and I'd love to continue to expand out and cover what's happening in the industry, but I've also got over a thousand interviews in my backlog as well. So there's lots of stuff to dig into in terms of the historical development of the medium of virtual and augmented reality and these different structures and forms of immersive storytelling. So please do consider becoming a member at patreon.com slash voicesofvr. Thanks for listening.