#1585: Debating AI Project and a Curating Taiwanese LBE VR Exhibition at Museum of Moving Image

I spoke with Michaela Ternasky-Holland about her project The Gr(ai)t Debate at the Onassis ONX Summer Showcase 2025, as well as the Portals of Solitude: Virtual Realities from Taiwan show she curated at the Museum of the Moving Image. See more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So in today's episode, I have a conversation with Michaela Ternasky-Holland that spans two distinct topics. One is around XR distribution and the VR exhibits that she helped to set up at the Museum of the Moving Image in collaboration with Taika, which is the Taiwan Creative Content Agency. I've featured them previously on the podcast. They're doing some amazing work within Taiwan to help facilitate the process of creating immersive storytelling within the context of Taiwan. And so there's a retrospective of four Taiwanese pieces that were showing at the Museum of the Moving Image. There was actually a reception that I had a chance to attend and see the exhibition, and see some of the projects that I hadn't seen before. They were showing four projects: The Sick Rose, which came out in 2021; The Man Who Couldn't Leave, which won at Venice in 2022; and then Dora in 2024; and then Hungry, which also came out in 2024. So there's a lot of theming around solitude and the pandemic and isolation, and each of these different pieces was exploring that and using Taiwanese art styles. Michaela also had a completely separate project that was being featured in the Onassis ONX Summer Showcase. And that was a piece called The Gr(ai)t Debate, with "great" spelled G-R-(A-I)-T. So she wanted to have these different AIs debate each other in a way where they were standing in for different political perspectives. And it was a very early prototype in the sense that she was getting a lot of feedback and trying to develop what is the thing that's really interesting around this debate interaction of working with AI. Is it serious? Is it more satirical?
Is it deconstructing? What's the tone? We're essentially reading these debates of AIs debating each other on a screen. And so you end up setting, through a prompt on a slider, whether this politician is on the left or the right. You would give it a prompt to debate a different topic, and then it would kind of repeat a lot of the talking points of left-leaning or right-leaning political discourse in the United States context, which, you know, large language models can capture different patterns within. So you end up having a recreation of the last 20 years of political history. But it wasn't as compelling for me as an experience, and it's also just not really up to date for covering what's happening in the world right now with this kind of democratic backsliding and move towards authoritarianism. When you're looking at historical information, a large language model is not really going to pick up on those nuances unless you have a lot of explicit prompting that is priming discussions around that. So I think this is going to be leading towards more theatrical elements, and they're still very early in terms of a project. But Michaela also recounts her transition from working on a lot of immersive storytelling projects into getting into more and more of these different AI storytelling projects, where she's had a number of different opportunities to collaborate with OpenAI, get early access to Sora, and produce a piece that premiered at Tribeca Film Festival. And so she has one foot in the XR world while also being a generative AI content creator. And a lot of the funding for immersive storytelling has dried up. And so Michaela is kind of an interesting bellwether for seeing some of these different artists who are going to where the money is and exploring these other creative potentials of generative AI. I have my own critiques around AI. You can look at the conversation that I had with Emily M.
Bender and Alex Hanna. They did a whole book called The AI Con. You can go back and look at episode 1563. I highly recommend that as a baseline for the canonical critique around large language models and some of the worst aspects of the AI industry. Michaela and I talk a lot around this in terms of navigating sort of the ethical minefield of generative AI and AI art in general. And so there is this tension right now within the larger industry, where there is this kind of more abolitionist view to just say we're not going to use any of this at all, versus other folks who are kind of embracing it and exploring the creative potentials, while also being aware of some of the different limitations, or not. And I feel like I'm kind of in the middle, listening to see where the artists go, but also seeing that there are just quite a lot of different problems with where AI is at at the moment. And Michaela has a lot of pushback on that: as an independent artist, the environment right now is completely different in terms of where the funding opportunities are and the different things that she's seeing in her own creative practice with artificial intelligence. So I just wanted to air this conversation where we're talking around XR distribution and then getting into all the different nuances of the creative potential of AI, but also all the different thorny ethical problems around that as well, and how Michaela is navigating all of that. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Michaela happened on Sunday, June 8th, 2025, at the end of the Onassis ONX Summer Showcase in New York City, New York. So with that, let's go ahead and dive right in.
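As a rough illustration of the slider-to-persona mechanic described above, here is a minimal sketch of how a left/right slider value might be mapped to a system prompt for one AI debater. The function name, thresholds, and prompt wording are all hypothetical, not taken from the actual project:

```python
def build_debate_prompt(topic: str, leaning: float) -> str:
    """Build a system prompt for one AI debater persona.

    leaning: slider value from -1.0 (left) to +1.0 (right),
    mirroring the left/right slider described in the installation.
    """
    if not -1.0 <= leaning <= 1.0:
        raise ValueError("leaning must be between -1.0 and +1.0")
    # Map the continuous slider into three coarse stances.
    if leaning < -0.33:
        stance = "left-leaning"
    elif leaning > 0.33:
        stance = "right-leaning"
    else:
        stance = "centrist"
    return (
        f"You are a {stance} politician in a formal debate. "
        f"Argue your position on the topic: {topic}. "
        "Respond to your opponent's last point in under 100 words."
    )

# Two opposing personas can then be fed the same topic,
# with each model's reply appended to the other's context.
left_prompt = build_debate_prompt("housing policy", -0.8)
right_prompt = build_debate_prompt("housing policy", 0.8)
```

In a sketch like this, the "debate" is just two language-model sessions alternating turns, each primed with one of these prompts, which is consistent with the talking-point-recycling behavior described above.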

[00:05:13.519] Michaela Ternasky Holland: So my name is Michaela Ternasky-Holland, and my role has really evolved over the last few years. I started mainly as a creator in the journalism space, helping big brands like Time Magazine do their first VR and AR projects. I then expanded into the social impact space, where I was making and creating works with Meta's VR for Good and the United Nations. And then I really jumped into the role of impact producer with Games for Change, and really helped create some good systems and forward thinking and research around how social impact campaigns can be executed for VR projects. And since then I've sort of taken a pause on the impact producer role, and I'm now stepping back into the director-creator role, making VR animated films as well as interactive installations using generative AI and films using generative AI workflows.

Kent Bye: Okay, and so maybe you could give a bit more context as to your background and your journey into this space.

Michaela Ternasky Holland: Sure. So I actually started off when I was in sixth grade: I told my parents I wanted to quit school and actually homeschool myself. And I think that really gives you an insight into the type of person I am, where I don't really see the road that everyone travels on. I try to take the road less traveled. So even in college, I dropped out and pursued a dance and performance career on Disney Cruise Line. And when I finally decided to settle into my studies, quote unquote, I decided to do journalism, but wanted to make it more immersive and interactive. And so post-graduating, I found VR and AR at a very small digital conference in LA, fell in love with the medium, and just really started to become a student and a nerd of the medium until I could start making my own 360 films and started pitching myself to people. And Robert Hernandez, who's an incredible professor at USC who teaches digital journalism, found me, found my work, and really recommended me to his colleagues. And that was how I got my job at Time Magazine.
So from the time I graduated in 2016 in June to the time that I started working at Time Magazine in November of 2016, it was really a fast sprint into the industry. And it's been a pleasure to be a part of it ever since, now almost 10 years.

[00:07:14.304] Kent Bye: And we've had a chance to collide and do a number of different interviews at different projects over the years. And so I want to start with the encounter that I had with you at Tribeca Immersive 2025, which you helped to curate a program there at the Museum of Moving Image. Maybe you could just give a bit more context to that exhibition.

[00:07:32.461] Michaela Ternasky Holland: Yeah, it's a great question. So my relationship with the Museum of the Moving Image has been multi-year long. I started first with Games for Change as an impact producer, and we were looking for a museum partner to install the On the Morning You Awake to the End of the World VR documentary about the false ballistic missile alert. And so I really worked with the Museum of the Moving Image to create that exhibition and execute that exhibition with VR docents. I then re-approached Momii last year to create an exhibit about skateboard filmmaking because my personal history and my father comes from the skateboard filmmaking realm. And at that point, it really became a deeply collaborative process with the whole Momii team working very high level with lead curator Barbara Miller, with exhibition designer Danai. And Taika approached me late last year and said, you know, we're looking for people to come to the Taiwanese Innovation Expo and who would you recommend that would be a really great venue to showcase Taiwanese XR works? And immediately I thought of the Museum of the Moving Image and I connected them to their executive director, Aziz. And so Aziz went to Taiwan, experienced the Taiwanese pavilion, spoke with Taika and came back from that experience and that journey and said, we're going to do an exhibit with Taika and we want you, Michaela, to be the guest curator and the creative producer. So that was really the start of that project. You know, it's kind of been seeds that have been planted over the course of the few years. I've obviously seen Taika throughout many different festivals in the XR world. So I felt very honored to be selected for the role. And then really it was jumping in with Regina, who's the digital curator of Momii, who is giving me a lot of advice and opinions around curating. We watched a lot of VR projects. We shortlisted a lot of VR projects and we ended with four incredible projects. 
Hungry, Dora, The Man Who Couldn't Leave, and The Sick Rose. And one of the things I really loved about the curation was I noticed all of these projects really talk about different degrees of isolation, different degrees of solitude: whether it's political oppression, mental health oppression, whether it's the loss of your health to a pandemic, or if it's the loss of your parental figure due to a divorce, all of those protagonists really fall into that world. But then the art direction of all of these is very different. We have different styles of animation, different styles of game engine, different styles of live action. And I think it really brings the cohesiveness of the exhibit as a whole together. And the exhibit's been open since April 26th. It's going to close July 27th, which to me is very unprecedented, because it's in one of the main gallery floors and it has a full docent staff. It says a lot that the museum is putting four-plus months of guest exhibit time into a VR experience and a VR exhibit.

[00:10:15.950] Kent Bye: How's the exhibition been going so far?

[00:10:17.921] Michaela Ternasky Holland: So everything's actually very positive, you know, outside of some small hiccups with a couple builds that have been crashing for Dora, which we're slowly figuring out together. You know, we have seen over 1,200 people at this point, and it's only been open for about five weekends. Wednesday through Sunday is Momii's working hours. Thursday through Sunday, really. Wednesday's more for field trips. We've seen 80% of those people stay through the whole experience, not getting out a headset before that experience has ended. And that's huge for the Museum of the Moving Image. As a moving image museum, they're constantly wondering what can they do to service their audience. And so to have VR projects that are 12 minutes, 25 minutes, 35 minutes, 45 minutes long, and seeing people really engage with the full breadth of that time, whether they're doing one or two experiences, is huge because these guests are not paying an extra ticket price to see these projects. They're just paying a general admission and then they come into the exhibit and if there's booking time available, then they get to see the projects complimentary. So there's no like paid incentive for them to sit and watch the whole thing. So this is actually really coming from a motivated place. And we're also seeing over 90% of people actually come back for their bookings as well. So we're not just seeing a length of time staying in headset, we're also seeing a return to come to their booking. So we're not actually getting as many walk up empty slots as we thought we might.

[00:11:36.840] Kent Bye: Right. I'm wondering if you could speak a little bit about your experiences at the Kaohsiung Film Festival in Taiwan.

[00:11:41.792] Michaela Ternasky Holland: Yeah, I've only been there one year and that was actually with On the Morning You Wake to the End of the World. But I actually found Kaohsiung just one of my favorite festivals. I mean, actually a huge part of my inspiration for how I designed the exhibit came from a lot of these incredible festivals I've been able to attend, you know, like Venice and South by Southwest and Tribeca all have their different styles. But the one I found was my favorite was Kaohsiung. Every VR project is treated with this exhibit level of care. It's not just a one size fits all. And so Speaking with Momi, there was a lot of difficulties around the idea of how do we curate four very different VR projects into one umbrella exhibit. And some of the design aesthetics that I saw at Shutfield DocFest were inspirations. My favorite parts of Kaohsiung is that it's a naval city, so it's right there on the water. I love being so connected to the Taiwanese food scene, the Taiwanese art scene, which is very different than maybe the business-centric Taipei experience that people might have. But I've always been just so impressed with Kaohsiung's ability to curate and their ability to execute, but also the amount of people that come to the festival. It's like people really buy tickets to see these projects. My final thought around that, too, is around their VR Film Lab, which is a really great institution that's open. And in a way, I think Momii is really setting a precedent again for saying, you know, VR has a place here in these institutions in New York. And it's really cool to see Taika's ferocious ability to want to see their creators get international seats, get international tickets, get international venues. And the fact that Taika, even when we were curating, we're like, we don't care if they were even funded by Taika, as long as they're Taiwanese creators, Taika will support them being curated into the exhibit. 
So I think that also speaks volumes to the Kaohsiung Film Festival circuit creating such great works from Taiwan and Taika's ability as a government agency to really promote and celebrate these works in many different spaces.

[00:13:35.148] Kent Bye: Yeah, I've just been really impressed with Taiwan and Taika funding all these different projects and really cultivating a whole ecosystem of creators and makers who are pushing the edges of immersive storytelling and a lot of great projects that have covered over the years or seen over the years at different festivals. And Taika was also here this year sponsoring an event at Momii to bring the community together. And so, yeah, just wondering if you can reflect on Taika and some reflections on the gathering that they hosted the other night there at Momii.

[00:14:03.823] Michaela Ternasky Holland: Yeah, so one part of my scope of work as the guest curator and the creative producer was actually to build programming around the exhibit. And the minute I realized the exhibit was going to run tangentially to Tribeca, I pitched to Taika that we should do two events. One event should be a virtual panel with all of the directors involved and make it a creator-specific panel where they would zoom in to the museum and the museum guests would be able to see them on the screen. And then the second event I pitched was some sort of in-person event that ran concurrently with Tribeca, knowing that that was a huge week of technology celebration with Tribeca and Demmo and Onastasonic Showcase, knowing that there would be a lot of people internationally and outside of New York in town to really allow Taika to kind of build out that network and celebrate and showcase this museum exhibit they did with Momi during this time. And so speaking with Ray and Jillian over at Taika, we were thinking about what would be the most appropriate panel to be able to talk about. And obviously distribution is very... sometimes tired subject in VR, but I think, you know, you have a really great case study here around what Taika is doing with their groups of creators. And you also have these great producers coming from two very different ends of the spectrum. You have the Dora producer who where Momii is literally the international premiere. And you have The Man Who Couldn't Leave, where Momii is one of many museum exhibits they've been a part of and one of many festivals they've been a part of. So it was a really great range of listening to not just two very different producers and two very different projects, but also sort of the Taika strategy of how they're going about what they're doing and how they're moving forward with what they're doing. 
And in that way, you know, we really wanted to support the Taipei Cultural Center here in New York to bring the Taiwanese audience to the space. And I think we saw them come out and really get to see, you know, not just creativity from Taiwan, but creativity from Taiwan in an institution like the Museum of the Moving Image. And I think the layers of that are really spectacular when we talk about breaking out of the shell of just festivals that VR tends to get caught up in. I think there's something really impactful that you could see in that night. And it was great too because we got to open up the exhibit to the guests of that reception. We got to open up the museum to the guests of that reception. So it really became, I think, a holistic celebration of VR at an institution like Momii, which I can't stress enough. I think especially here in the US, and especially here in New York, that is a huge feather in our cap as XR creators, because so often I think we get pigeonholed into one or more smaller spaces or one or more smaller institutions.

[00:16:39.747] Kent Bye: Okay, well, before we move on into the AI work, is there anything else about your creative producing work or other work around curating or anything else before we sort of move from XR into AI?

[00:16:51.890] Michaela Ternasky Holland: Yeah, I guess I would just kind of say to put a button on it all, you know, a lot of my gripes or complaints sometimes with the VR industry is that it's very insular and it's very specific to nationalism sometimes, right? Like you get to see people in Venice, you get to see people at South by Southwest, you get to see people, but like outside of the co-productions that are being worked on kind of behind the scenes, you don't really get to see like the shared experiences of bringing the power of VR, like bringing that culture, bringing that quote unquote like powerful storytelling machine to spaces where people are not exposed to VR or exposed to those stories, right? And what I love about being able to bring to Momi and the New York audience, which is a very diverse audience in Queens, is these very, very culturally specific stories. You know, The Man Who Couldn't Leave is about the white terror movement that happened just recently in Taiwan and was made in conjunction with the Civil Rights Museum in Taiwan. And, you know, very specific to Taiwan. Most people in America don't know about that event. Hungary is inspired by puppetry that most Taiwanese people grew up watching on their kids' television shows and is performed at local temples in Taiwan. The Sick Rose is inspired by how the pandemic really impacted the Taiwanese audience, and you can see very clearly Taiwanese cultural art direction with the dough puppets that are used in The Sick Rose. It's very specific to a Taiwanese art style for stop motion. And so I just love the fact that not only are we bringing these stories from Taiwan to a space, but we're bringing it also to a space where people have no idea what they're about to get into. So we're not just exposing them to VR for the first time, we're exposing them to Eastern culture stories for the first time in a very Western culture city like New York. 
And for me, that is just so special, growing up, you know, as an Asian-American and growing up as somebody who wants to see more of that kind of peace or collective culture building together.

[00:18:46.377] Kent Bye: Great. Awesome. Well, I'm really happy to hear all that work that you've been doing in that distribution front. And there's a lot of exhibition of XR takes a lot of care, emotional labor, and it's not easy to do. And so I think it's one of the things that kind of holds back If people were able to do it on their own without having docents or help or instruction, then I think it would potentially be as a medium a lot more easy to get out there. But yeah, there's just a lot of considerations to do that. So it's great to see your work there at MoMe and have a chance to actually see some of the other experiences that I haven't had a chance to see yet. OK, so let's do a little bit of a context switch, a vibe shift into your work with artificial intelligence. We've covered over the years your entry into these more immersive spaces. And so when did the artificial intelligence start to come onto your radar in terms of something that you wanted to explore creatively?

[00:19:36.963] Michaela Ternasky Holland: Yeah, so actually my first foray into artificial intelligence was when I worked at Storyfile, which is a really great company that was started by Stephen Smith and Heather Smith, who were at the Shoah Foundation, where they were creating these really amazing interactive interviews where they would interview somebody and then use AI algorithms for people to walk up and ask these holograms or these video screens questions. And then based on the question, they would match it to the answer that they captured during the interview. So it was more almost like, quote unquote, like an analog AI, right, where you're like, Hey, Siri, play that tune. It would be like, you know, to the Holocaust survivor, like, tell me about your childhood. Right. And it would match that answer to the answer from the documentary interview. So it wouldn't generate the answer. It would actually like pull a video file. So that was sort of my first foray into how I could be incorporated into storytelling. And I have been on the front end of watching how AI is being used in other ways. We were doing a Black Panther Party project with sort of the same idea with story file, but instead of them being screens, we were thinking of them being in more of like a 3D shared world environment and be captured volumetrically. So sort of playing with how AI could start incorporating itself into XR storytelling. But the first time I was really exposed to generative AI storytelling, you know, was when I was an Anastas Onyx not quite a member yet, I guess you could call me like an adjacent community member. And I walked into the space and Aaron Santiago was working on some incredible work with his collaborators, Brandon Powers and of course, Matthew Niederhauser. 
And I realized, you know, this technology, I had heard about it, but once I saw it in person with Topomancer and Kinetic Diffusion, I realized there was something really strong to be had with the technology. But at the time, I didn't see myself using the technology, because I couldn't wrap my head around what the story use case would be. You know, oftentimes when I use technology, I think about it from a story-first perspective, not necessarily from a who's-going-to-get-to-it-first perspective. And Aaron and I, you know, last January in 2024, got a really great opportunity to go to ASU and have a space residency there for a few days. And when Aaron and I were talking about being Filipino American, we talked about the cultural erasure that he and I both feel, you know, having kind of this missing imprint of culture, because we don't have the media to represent us, in the sense that we don't see ourselves in America in the 40s and the 50s and the 60s. We also don't see photos of indigenous Filipinos very often. So there's this kind of empty historical structure of visual media, and we were talking about how we could engage with that in a really compelling way. And so of course generative AI technology came forward as a cultural metaphor for the erasure and sort of this cultural limbic space that he and I both feel, where we know those things existed. Like, we know Filipinos existed in the 40s, we know indigenous Filipinos existed before Spanish colonization, but what could that look like and what could that represent? And so we used generative AI to explore some of those, I guess you could call them, speculative histories. So that was really our first foray into COP1. We paired that with social media clips of Filipinos asking ourselves, like, who are we, in a satirical and serious way. And then those social media clips would slowly transform into generative AI imagery. So that was really that first step.
And after that, I was like, you know, this is a really interesting tool, but again, I still don't know how I will continue to use it. And later that year, I got approached by Tribeca and Sora to create one of the first films with the Sora platform that would premiere at a AAA festival like Tribeca. So then again, I was given the opportunity to say, well, what would I do with it? And it just happened that the day that I was onboarded to the Sora platform was the 30th anniversary of my father's death. And I was like, all right, this is kismet. I think I need to tell a story about my father's death and how my mother and I worked on our relationship after he died. And so I fell in love with the idea of using my journals and my diaries and my letters. And that led me down the path to find paper craft animation and start playing with that on Sora. I still brought in a team of score composers and animators, and we created a really beautiful film called Thank You, Mom. And so really at that point, I'm getting asked by Darren Aronofsky and A24 to see my film, because it's a Tribeca film made with generative AI on a platform that, at the time, no one had access to. And, you know, as a filmmaker, I don't necessarily identify as a traditional filmmaker, but as a storyteller, I identify as somebody who's open to using new ways of telling stories. And so I got approached by another entity to start creating animated films using generative platforms. And I've been starting to do that work. So I did a short film called The Christmas Recipe. I'm working on a nine-part animated series called Echoes of Legend. But while all of that is happening in the background, Aaron Santiago and I are still talking about interactive installations. And we've been approached now by Karen Wong, who's the head of WSA and the founder of NEW INC, to create a commission for her inaugural Cha Cha Festival, which is all about the celebration of tea.
And Aaron and I have been tasked as creative technologists to create a project that is immersive-design oriented. So we're thinking about how we could create a ritual, how we could recreate a tea house. And we're like, wow, generative AI could read people's tea leaves, and generative AI could give people these bespoke readings. And so we start falling in love with this idea of creating this astrological sort of nuance in this world, where people may or may not know it's generative AI, but they kind of get the magic of seeking out a question and finding answers, which I think is really fun when you play with the magic of what generative AI does well and doesn't do well. And so we did that also earlier this year, and it ran for a month in the WSA space where Tribeca currently is. And so now I have this filmmaking with generative AI and this interactive installation work with generative AI, and I'm realizing, you know, this stuff is probably here to stay. And as a storyteller, my mind is now percolating onto, okay, well, what will I do now, unprompted, with this technology? And Michael Glass and I, you know, we go way back to, like, pre-pandemic, when he did a project and I saw his work, and he and I have respected each other but have never collaborated. And he's a fine artist in the truest sense of a fine artist. And so we decided to collaborate on an idea together, and we were going back and forth about generational differences, political differences. And I kind of was like, well, isn't it funny that these entities like ChatGPT and Claude and Gemini, they're kind of like our founding fathers, like the founding AIs? Like, they will go down in history, in the sense that Siri and Alexa have these kinds of brands to them. And I was like, wouldn't it be interesting if we asked them to debate for us like a political candidate?
And that really just started me down the road of making what is The Gr(ai)t Debate, which is the project I have here right now at the Summer Showcase. So much longer than you probably asked for. I'm so sorry, Kent. But I think it's really just important to give you all that context, because, for me, generative technology wasn't something that I was jumping into wholeheartedly. It was something I have slowly seen incorporated into my practice over time, with very specific use cases of why it's been incorporated.

[00:26:40.184] Kent Bye: It's really actually quite helpful to hear the many different encounters and permutations and kind of evolution of your creative practice relative to this field. One quick follow on, because I know that Eliza McNitt is premiering an AI film with Darren Aronofsky at Tribeca here like in a few days on the 13th. And you said that you got reached out to by Darren Aronofsky. Were you involved with that project at all or?

[00:27:03.311] Michaela Ternasky Holland: I was not. Basically, it was like Darren Aronofsky reached out to Tribeca and said, I want to see these short films. Tribeca reached out back to me and was like, Darren Aronofsky wants to see your film. Is that OK? Can we send it to him? And I was like, yes, of course. And then I never heard anything else. But again, like Darren Aronofsky, A24, who... Also, many producers in Hollywood reached out to me directly and said, we want to see your film. So it was sort of this moment I was thrust into this limelight that I didn't realize. Again, being the stepchild of XR in film festival circuits, I'm not used to being exposed to Hollywood in such a specific way. And I don't even think I realized that could happen once I said yes to doing this program with Tribeca and Sora.

[00:27:46.798] Kent Bye: Okay. Okay. Interesting. Well, I know that Darren Aronofsky produced the film that is going to be released here.

[00:27:52.907] Michaela Ternasky Holland: With his production company, yeah. And Eliza McNitt is an incredible director in her own right. And very ironically, she's also telling a story about her mother and her birth. And her approach, from what I see in the trailers, is very live-action oriented. That's what we call a hybrid workflow, where she's using actual footage of actors that she's filming and then using the generative AI to augment some of the more abstracted, poetic visuals.

[00:28:18.390] Kent Bye: Yeah, a lot of interstitial poetic imagination of different things, at least from the clips that I saw. But I haven't seen much of anything else, so I'm hesitant to comment too much more. OK, so going back to the AI, my take is that there seem to be use cases that make a lot of sense for using generative AI. At the same time, with all of the AI systems and large language models, there are all sorts of ethical and problematic aspects in terms of stolen data and how they've even been built. And so I feel conflicted around the use of generative AI in different creative ventures. So there's that. And then there are other aspects around the limitations and constraints of large language models when it comes to projects that are dealing with language and interacting with it. And I feel like your projects kind of dip in and out of different types of use of AI. There's a book by Emily Bender and Alex Hanna called The AI Con, where they take a real critical look at the constraints and limitations of AI, but also really dig into the labor practices of how the data were collected, whether the data are stolen or ethically acquired. So I'm just wondering how you navigate some of these larger ethical questions in the context of using these systems that are, in some ways, seizing all this data and remixing it in a way that may or may not be within the context of fair use, as all these lawsuits are currently being played out.

[00:29:39.844] Michaela Ternasky Holland: Yeah, it's a great question. You know, the best thing I can do is continue to educate myself. But whenever I'm speaking on a panel or talking to people about this, there are sort of three things I talk about. First and foremost, you know, it takes artists working with the technology to see its limitations, to see the illusion it creates, and to be able to say, how could this improve for me and for the people that are being exploited? And so I do think artists should responsibly at least get involved with understanding what these things can and can't do, and also understanding how imperfect or perfect the appearances are online. A lot of that goes back to the tech marketing of them needing to sell their product to an audience. And so there is a bit of smoke and mirrors when you see those clips from Sora, or those clips from Google Flow. The second part is, as an artist who has worked with OpenAI, I have, in full transparency, basically had my papercraft animation style co-opted into a button in the preset menu. I'm not the first person to ever do a papercraft animation style, but I was the first person to do it with Sora, and OpenAI looked at me and went, isn't it great? Anyone who downloads Sora once we make it public and pays us money is going to get access to your preset button to create papercraft animation. And it's literally listed there under, like, noir film and documentary style: papercraft. And they're like, you inspired us because your film was so successful. And there's a part of me that's like, great. As a creator, it would be great to see 10 cents every time someone pushed that specific button. It would be great to see a little bit of the sustainability of what Sora and OpenAI are getting as a product come back to me as a creator.
And so that takes me down this pathway of, what are good licensing ethics in this generative AI world? What are good nutritional-label ethics in this generative AI world? There are a lot of great startups building those systems now, and they are happening now, right? Proto AI is working with a lot of journalistic institutions to license their journalism. But every time you use their large language model, it gives you, very similar to FDA food transparency, a literal nutritional label: like 3% sugar, 6% salt, it's 3% Washington Post, 6% New York Post. It's giving you a very clear, systemic way of showing you where all the data is coming from. Then, based on the percentage of data, it sends money back to those journalism institutions. And they're building the structure to say, we can do this with photos, we can do this with video. So that's one model that I am being very active in watching and telling people exists. Because the issue is, when you get some of those bigger platforms on the podium and ask, what about nutritional labels? What about ethical transparency around output? They'll say, well, it's not possible. The reality is it is possible, very similar to the Facebook data policy issues we had with the EU. It would just take them millions of dollars to change the back-end infrastructure. But the technology is there, right? On the other hand, when it comes to the data that's being used to train, there are kind of two different realms of thought. The one realm of thought is that your hundreds or thousands of hours is not enough to turn the heads of these companies. They are looking for hundreds of thousands of hours of training data.
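The attribution-and-payout model described here, per-source percentages on a generated output flowing back as payments to the underlying institutions, can be sketched in a few lines. Everything below is invented for illustration only: the function name, the example sources, and the revenue figure are assumptions, not how Proto AI or any real system actually works.

```python
# Hypothetical sketch of the "nutritional label" idea: given per-source
# attribution fractions for one generated output, split that output's
# revenue back to the attributed sources.

def royalty_split(attribution: dict[str, float], revenue: float) -> dict[str, float]:
    """Split revenue for one generated output across attributed sources.

    attribution maps source name -> fraction of the output traced to it.
    Fractions need not sum to 1; the remainder is simply unattributed.
    """
    if any(share < 0 for share in attribution.values()):
        raise ValueError("attribution shares must be non-negative")
    if sum(attribution.values()) > 1.0 + 1e-9:
        raise ValueError("attribution shares cannot exceed 100%")
    return {source: revenue * share for source, share in attribution.items()}

# Example using the percentages from the conversation (revenue is made up):
payouts = royalty_split(
    {"Washington Post": 0.03, "New York Post": 0.06},
    revenue=10.00,  # dollars earned from this one output
)
```

The interesting design question such systems face is the one raised above: attribution fractions are easy to pay out once you have them, but computing them honestly for a model's output is the hard, unsolved part.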
That being said, there are coalitions being built with archives and creators, specific ones that specialize in very specific things, where licensing packages are being built. Because the AI concern that we have now, in these big tech companies and even as creators, is the synthetic data issue: when the AIs run out of pure human data, they start chomping and eating their own generative data, and that will basically poison the model and poison the quality of what the model can do. And that's the last thing these tech companies want, because they want to create the illusion that these models are impenetrable and impermeable. So there is a huge need for anything that has not yet been, quote unquote, scraped, or anything that has been protected, to somehow get licensed. And so there are a lot of conversations happening right now around, what is that payment structure? How much money are certain things worth? So if you're interested in getting involved in those conversations, you have to seek out the startups that are putting together those licensing packages and ask them. Because the unfortunate part is, if one person gets $1,000 for hundreds of thousands of hours of footage, but on the other hand we're asking for hundreds of thousands of dollars, then we don't have a clear ecosystem of how much these things are valued at. And on that same level, when I talk to these licensing companies who are putting together these very legitimate packages for very big tech companies, they're saying the biggest thing to do right now is save everything. Because as creators, we oftentimes only go for quality and not quantity. Like, with all of the raw footage of your film, you usually erase your hard drive and keep your couple of final edits. But the reality is, if you want to be a player in the licensing space, you want to gather quantity. And these are all the ideas of future-proofing creativity.
The other piece of the puzzle that I think is hard to talk about, but is the reality, is there are incredible institutions that believe in the free market, like Wikipedia, who want their data scraped. They're like, this is what it's out there for. We are a knowledge-sharing community. We love the fact that these tech companies are scraping our data. But what they're concerned about is that these tech companies are not then investing back into the sustainable resources that make Wikipedia tick. Because all of those editors are volunteer-based, and Wikimedia as a foundation does not raise a lot of money every year. So what they're questioning now, from a free-internet perspective, which is one realm of thinking, is: if we're good with things being scraped, then the question isn't how we protect ourselves from the tech companies scraping, but how the tech companies pour back into our sustainability model, whether that is financial. It's got to go deeper than just crediting. So there are a lot of interesting conversations happening in the space. Do I proclaim to know the perfect solution? Absolutely not. But I do think there are really interesting conversations that need to happen now around how we verify human-sourced data, right? You can take a photo on your phone, send it to the insurance company and say, this was my car after my accident. That same person could take the same photo, generate a totally different level of damage, and send that photo to the insurance company. This is just one very specific use case. So what I'm also interested in, when I'm in these conversations and discussions, is: how do you verify human data to be purely human? And how do you track the way that human data could be changed and used throughout the course of its lifespan on the internet, whether it gets scraped, whether it gets modified, right?
You put up a photo of your baby daughter and someone cuts the head off and adds Taylor Swift and says, F Trump. How do you say, that's not my photo? That's not the original intent of my photo. And one of the things that I've been learning as I educate myself is that oftentimes people want to fight AI with AI: this AI model can detect that AI-generated imagery. But then you create an arms race, similar to nuclear weapons. You're literally saying, OK, great, thanks for creating the AI model that can detect my AI thing. I'm going to make it so you can't detect it. Well, then I'll make it so I can detect it. So the issue I'm hearing is you can't fight AI with AI. You have to fight AI with something else, or verify AI with something else. And that's also, I think, where the conversation around blockchain technology really starts to unfold, where you're like, how do you have this impermeable way of creating provenance when the data gets uploaded? So again, all of these things I kind of nerd out about because I want to be well-educated. And these are just some of the high-level discussions that I'm hearing happening in this space. Because instead of saying, well, woe is me and this industry and the ethics, I'm trying to actively say, how can we have conversations where we start to form our game plan to make it more ethical? How do we formulate our game plan to make it more equitable? How do we formulate our game plan to make it more financially sustainable for artists to be artists in the realm of generative AI? And so that's sort of my active approach to learning and educating.
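The provenance question raised here, verifying that a photo is unmodified since the moment it was uploaded, is usually approached with cryptographic hashing rather than with another AI. A minimal sketch, with a plain Python list standing in for whatever append-only ledger (blockchain or otherwise) a real system would use; all of the names and data below are hypothetical:

```python
# Record a SHA-256 hash of content at upload time; any later edit,
# however small, changes the hash and is therefore detectable.
import hashlib

provenance_log: list[dict] = []  # stand-in for an append-only ledger

def register(content: bytes, author: str) -> str:
    """Log the content's hash and author, returning the hash as a receipt."""
    digest = hashlib.sha256(content).hexdigest()
    provenance_log.append({"author": author, "sha256": digest})
    return digest

def is_unmodified(content: bytes, digest: str) -> bool:
    """Check whether content still matches its registered hash."""
    return hashlib.sha256(content).hexdigest() == digest

original = b"family photo pixels"
d = register(original, author="parent")
print(is_unmodified(original, d))                 # True: untouched bytes match
print(is_unmodified(b"doctored photo pixels", d)) # False: any edit changes the hash
```

This only proves the bytes are unchanged since registration; it cannot prove the original was human-made in the first place, which is exactly the harder problem the conversation points at.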

[00:37:44.864] Kent Bye: Yeah, certainly there's way more in all of that than we'll be able to cover here. But I did want to respond to one point, which is the argument that these companies are making that your data is not significant enough to tilt the sway of all of the stolen data. So they're going to steal all the data, and because they stole all the data without consent and then used it to train the models, your small pocket of the data is not significant enough, because they're stealing everybody's data. That's essentially the argument. If you look at Studio Ghibli, in terms of the styles that were included in their films, clearly that is training on something that they didn't have consent to use. And then they turned it into a style. So it just feels like there's a lack of consent that has happened in all of these models. The scraping is like a colonial seizing of the data without permission. And then from that come all these arguments like, oh, well, everything's a remix, or this is insignificant. I appreciate the creativity that's coming out of it. I just feel like there are so many intractable ethical issues with how this has all come about that there's a certain level of immorality to even engaging with it. Even what you created with the papercraft style was similarly acquired in that same colonial mindset, where OpenAI is just taking any of this creative input and seizing it as part of their product. That feels like a part of the problem at the core of everything that's being created. To me, it's a perfect metaphor for what is happening: starting from a baseline of not being in right relationship to the wider world, and not starting from a place of relationality, is a really bad place to start.

[00:39:22.614] Michaela Ternasky Holland: Yeah, it's a great point. I think for me, what I feel, and this is just my hypothesis, is it's the OpenAI strong-arming of all of this technology, right? Because the reality is, I think all of these tech companies were developing versions of generative AI in the background, but they weren't necessarily ready to release them to the public, and they weren't necessarily ready to help educate the public around what these things are and how they work. But OpenAI, in the sense of this arms race, released ChatGPT because they were sort of running on this quote-unquote nonprofit system that they used to run. They were sort of this research company, and they were just like, we've got to release this. And I think that really created this rushed timeline on how we integrate into society. That's not saying the data scraping was good. My hypothesis is that however these technologies were going to be introduced by some of the bigger players got completely truncated because of the way OpenAI strong-armed everybody into saying, well, we're going to make this a part of society now, whether we like your timeline or not. I would hope the rollout of these things, if Google or Meta had been the first player, wouldn't have felt like everyone was just running to make sure their thing was ready to be released, which is what I feel has happened in the last two or three years. That's my first point. Second, it's really hard to pinpoint generative AI, if you think about it from a moral standpoint, as the only unethical piece of this puzzle. We have to go backwards to all of the data collection they've already been doing, like social media and the cloud. Those are all the building blocks for what you have today, right?
We have been giving these tech companies consensual access to our data for a very long time, which has become the building block for their greed to get more data any way they can, because they realize data now runs the money, right? How they can gather their data analytics for advertising, for audience engagement, for user engagement. They know how long you spend on that Instagram video or that TikTok video, and they're going to push 30 more of those to you, and they're going to get you addicted to these systems so that they can get more data out of you. For me, I don't think generative AI is the original sin in this. It's kind of like the biggest, largest, ickiest symptom of it. But I can't point my finger at generative AI and be like, this is terrible, if I don't recognize that I have been an active, or at least passive, participant in the giving away of my data over the however many years since Facebook came online, and since even the Google Drive systems and the cloud systems came online. And so the unfortunate part, as I see it, is people will always select convenience. That is why a lot of these platforms are free to begin with, because they want to make you reliant on the platform. I mean, ChatGPT gave three months of premium license to every college student with a college email. That was a very specific move they decided to take for every college student in the US; I don't know if it was broader than that, but I know it was definitely in the US, to get them hooked on these platforms so that they can slowly get them to pay for these platforms. But on the back end, the reality is, I don't think it's going to look like what we see today in the next ten years, because these things are incredibly expensive to run. So the concern now is, who's going to win the AI race?
I think it's probably going to be Google in all of this, because Google holds the most data, and then maybe Meta second. But I don't think we're going to see players like OpenAI stay around for very long, because in order to sustain this machine, you need platforms like YouTube, platforms like Instagram, where people are uploading everything they have about themselves to feed these data points. And again, I think it creates this idea that these tech companies feel they have ownership of everything. We've given them that feeling, right? So it's sort of that wrestling point for me, the chicken or the egg. The tech boom of go fast and break everything is also what we're seeing in this. Go fast, break it, just grab that data there. You know, like you mentioned, Studio Ghibli doesn't want it. Well, who cares? We're going to grab it and make it a style. They feel this kind of sense of entitlement as companies because they run a majority of the economy. Even movie studios. Back in 2016, it was journalism entities getting bought by tech companies. Now it's movie studios getting bought by tech companies. They are controlling a lot of the outputs. The minute they controlled journalism, they controlled our data. The minute they control movie studios, they're going to control that data, too. The streaming platforms, you know. So it all goes back to an original sin, if we're talking about it from an ethical perspective.

[00:44:09.343] Kent Bye: Yeah, it's the surveillance capitalism from Shoshana Zuboff that goes back through all of that. So great point. Well, I did want to talk a bit about the project that you have here, because there are still questions and concerns I have around large language models and the utility of how we engage with them. The hallucinations and the kind of AI slop is a general critique of the type of experiences that I see when it comes to large language models. There's a certain amount of tuning to make it less like the large language model, and the parts that I often like the least end up being the off-the-shelf large language model aspects. I find myself at film festivals having conversations that end up being just with ChatGPT, as an example, where it's like, what kind of other experiential design are you doing other than just having me talk to ChatGPT? Or, at the end of an interactive media piece, a group facilitation by ChatGPT that just goes off in a completely different direction that isn't even connected to the project. So there have been a number of projects I've seen over the years, and my tolerance for large language model projects gets a little low. And I feel like there can be a sweet spot where it does make sense: revealing the blind spots, the things that these large language models are excluding, or biased in their exclusion of, or not fully including. It's great to point that out. But in terms of your project around the politics, I'd love to hear a little bit more about the desired experience of using this type of AI debating itself, and what the human side of that experience is. What is the user flow of people coming in and interacting with this AI, and what is the desired outcome they might be getting out of that?

[00:45:51.832] Michaela Ternasky Holland: Yeah, so The Gr(ai)t Debate is very much a work in progress. You know, the original idea was to have all these well-known large language models take on roles as political candidates and have them live-debate each other. Because for me, what I found really compelling was the theatricality of the debate, the theatricality of American politics when it comes to debates. And, unfortunately, what it's kind of devolved or deteriorated into is attacks on character. And maybe it was always similar in history, but we've seen it come forward in a really intense way. And honestly, the project right now is an iterative project. I've gotten feedback like, I want more political leanings. I don't want just left and right. I want anarchy. I want fascism. I want communism. I've gotten really great feedback like, what if it was all the presidents? Instead of it being about the LLMs, like Gemini versus Claude versus ChatGPT, what if it was all about national leaders, the leaders of nations, all competing for this world-president kind of spot? So I've gotten a lot of feedback around how people maybe want to see the story unfold. But what I'm struggling with right now is I have this tech demo, basically. It's a screen that you talk to. And I personally don't think it's compelling. I really think it's a starting point for what could be more compelling. And one of the ideas I have is to turn it into a participatory theatrical performance, where the generative AI gets built by the audience, but then actors come out and play as those generative AI candidates and pull from both the generative script and also parameters of improv and parameters of narrative.
So we start with act one: generate candidates. Act two: candidates debate, and you vote for candidates. But along the way, my hope is maybe we educate people around how this technology works, how this technology is encroaching on civic discourse, and also how this technology could potentially be inserted into the fabric of democracy, talking about voter fraud. And then there's an idea I have at the end where, whatever candidate wins, you get a generative news article to see how your candidate impacts the news cycle from the last three or four days. So you get this holistic look at democracy, or the democratic cycle. That is one aspect of this. Then there's the other aspect, where you go up to the installation and you type and you talk to it. I just don't know if that's as compelling for me. And so really, what I did here at the Onassis ONX showcase was just show the tech demo so I could get audience responses in real time to what they felt was working well and what they felt was missing. And I do think, as humans, it's sometimes hard for us to talk to people we don't agree with. And I think that has to do with a lot of things, social media being one of them. What I do find compelling about the mechanics of the project, and again, maybe it's not politics, maybe it's not generative AI, is that it allows you to listen to opinions you might not always agree with, and might even have an emotional reaction towards, but in this kind of separate way, because you know it's being done not to you, but for you, in this participatory theatrical experience. Right now the medium is LLMs, but it could just be actors, you know. So I'm still wrestling with it myself.
I think I'm playing a little bit with fire, in the sense that I can't speak to the political viewpoints of these technology companies from the mouths of their LLMs. But I can't help but feel like maybe, when you start seeing these LLMs as political candidates, you start to understand their limitations in a really visceral way that you don't see when you're just talking to ChatGPT about everything you want it to agree with you on. So these are just some thoughts I'm wrestling with as a storyteller.

[00:49:41.900] Kent Bye: So would the point be to create a satire, like, look how ridiculous these elements are? Or is it to really facilitate genuine, deeper discussion around these topics?

[00:49:51.621] Michaela Ternasky Holland: I think it could go either way. My goal would be, maybe we do a satirical version and let that be very fun and entertaining. And there could be a version where the same tech demo, or the tech base, is used to facilitate conversations around civic discourse in a more serious, kind of authentic... I can't think of the word right now, but, you know, you're coming to the table with everything you have, in a very earnest way. So my hope is that this project could live in multiple iterations.

[00:50:24.480] Kent Bye: Yeah, I feel like there's a certain part where you're asked to make a decision: do you want this person to be how far left, how far right, how far in the middle? And it kind of replicates the existing polarized political discourse we've had in America, which is that there are two political parties and not much beyond that in terms of third parties: libertarian, Green Party, progressive. You know, in other countries they have more than just two political parties, and so you have a little bit more nuance on some of these issues. But here you end up having the last 20 or 30 years of all the political discourse made available on the web and the internet, basically replicating the talking points of the two political parties. That isn't really all that interesting: just a replication of talking points where they're talking past each other, not really engaging. And so I feel like there's a way that large language models can capture the zeitgeist of those already existing polarized perspectives, but we already live in that polarized world. To me, that wasn't moving the conversation forward toward how we get beyond that polarization, or how we actually talk with nuance. The other critique I have around large language models is that they're not actually coherently making arguments or reasoning or making nuanced discussions; per the stochastic parrots critique, they're just repeating what they've been trained on. But what I was concerned about is that right now, one of the biggest political issues in the United States is that one political party doesn't seem all that interested in maintaining democracy and the other is. And so there seems to be this slippage into fascism in different ways.
And so I feel like those are the most pressing political issues of our moment. And the large language models are trained on data that precede this current political moment. So as a political project, it felt like it wasn't really responding to the political moment, as it were, because of the limitations of large language models.

[00:52:16.997] Michaela Ternasky Holland: Yeah, I think it's all valid criticism. I go back to this idea that, for me, the political skin was just a skin. One idea I had was to make it a multigenerational family, right? Where you're talking to somebody who was born and raised in the boomer era, someone born and raised in the millennial era, the Gen Z era, the Gen Alpha era. The political debate sort of became a really fun, theatrical way of showcasing these entities talking to each other in real time, and in a way that I think people maybe want to engage with in a more mischievous way, potentially, or a more serious, thought-provoking way. And I like the fact that there could be that dynamic with the political skin. And I'm not trying to sound flippant at all. I just think, for me, framing it as a very political piece doesn't feel quite right, because I don't think I have even explored it deeply as a political piece. I was exploring it more as, not even a dress rehearsal, but a workshop idea: would people find it compelling for LLMs to talk to each other? Would people find it compelling to create these candidate characters, or RPG characters, and watch them play out in real time? And the reality is, yes, they do. The layer of politics, for me, was that, at least, everyone I talk to doesn't want to listen to somebody who sees things the other way. And what I found the last few days was that groups of people would be laughing or rolling their eyes at both sides, whether or not they identified a certain way. And I kind of think that's at least chipping away at some of the ice of the polarization. Again, polarization in many different formats. Politics just happened to be the overlay. It could have been polarization around multigenerational households, polarization around cultural differences.
It's just the idea of, can these LLMs be so absurd and theatrical that they chip away at the polarization we feel together? And maybe it was not well executed in the political landscape, and I can definitely work on that if I want to keep that as an idea.

[00:54:19.744] Kent Bye: Yeah, and I think things are moving so quickly politically, it's difficult with large language models in terms of what they're trained on. But I do think that one of the strengths could be looking through the sociological lens: what are the trends of the biases, or the language, or the concepts? One benefit of a statistical approach to language is being able to find these pockets of correlations and correspondences. Someone made a comment that I think is kind of reflective: it would respond to a first comment, and then it would start in on its own talking points. And then it was almost like, oh, that's just like politics, where they maybe respond superficially to what you said and then go on on their own. So it kind of found that discussion point. Anyway, I think there are some interesting, provocative things that could be there, but I think the human theatrical and other interpretive aspects, and the framing, you know, is it satirical? Is it serious? What is the end goal? That needs to be focused in on. But yeah, I'm still a little bit more on the critical or skeptical side of large language models. So I'm always a little bit more like, hmm, I don't know, we'll see. So I just wanted to have this talk to unpack it a little bit more.

[00:55:25.939] Michaela Ternasky Holland: Totally. And honestly, some of my favorite moments were when people took it into the world of the absurd. We had somebody come in and do the Kool-Aid Man, the Pringles man and the GEICO gecko. And it was hilarious, because the Kool-Aid Man was saying, oh yeah, like every two sentences, right? The Pringles man was talking about how crispy he was, and the Kool-Aid guy talked about breaking through walls. So I also think some of it was very satirical and people were laughing. But then there were these very clear talking points, like you said. I agree. I think the idea that we could have more political viewpoints is very compelling for me. I want to actually think about, if I continue this process, including libertarians, including fascists, including communism, even including debate styles. Are they more aggressive? Do they use a straw-man strategy, a steel-man strategy? Are they more avoidant? Again, maybe that goes deep into this RPG kind of world. But giving the audience more time to shape their characters seemed to be really fun for the audience.

[00:56:27.020] Kent Bye: Awesome. Well, finally, as we start to wrap up, I'd love to hear what you think the ultimate potential of XR, immersive storytelling, and AI all coming together might be, and what they might be able to enable.

[00:56:39.400] Michaela Ternasky Holland: I think, you know, notwithstanding these ethical concerns and frustrations that people feel, which are valid, I think introducing some of these more generative technologies into the XR space allows there to be something that I think we're all trying to capture when we do XR and immersive work, which is this instantaneous moment of personalization, this instantaneous moment of, like, you are being seen in a way you've never been seen before. You put on a headset and you go through this experience that could potentially be transformative. You turn on an AR experience and you see something in your natural habitat that you could never imagine could happen as a digital asset. This magic that I think we're trying to build and create in the XR industry, purely on the creative art level, maybe not on the business level, I think can be superpowered by what generative technology can afford you, right? You might not be able to have a scriptwriter write 30 instances of tea leaf reading, let alone 2,000 instances of tea leaf reading. Again, with that idea of getting many more people in the door and getting more people seen and getting them to value what we're doing. You know, the tea leaf installation that Aaron and I did is a great example. People were like, "Do I need to pay for this?" We were like, man, that would have been a good idea. Maybe we should have had people pay for it, because people seemed willing to pay. But no, it's free. Go ahead, go on in. And we saw over 2,000 people. Well, we at least read over 2,000 cups. But that doesn't account for all the people that came in and just watched.
So I think there's an interesting idea there: against the struggle of XR to get more mainstream, or the struggle of XR to get seen by more people, that kind of individual participatory moment that generative could give you, to scale an experience to many more people, or to see many more people, or to have individual instances where people feel more compelled to engage, is potentially very exciting. And I think, again, people really count out VR, and I understand why sometimes, because it's been a very up-and-down industry, but I'm very heartened by what I'm seeing at the Museum of the Moving Image right now, you know, from a curatorial perspective. The Sick Rose came out four years ago at this point, and it's being seen again by a museum audience. There can be a life cycle for these projects. It's just going to take a little bit more time to unlock more of that life cycle for many more projects, but at least we're seeing little versions of it here and there.

[00:58:54.733] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the broader immersive or AI communities?

[00:59:01.716] Michaela Ternasky Holland: I appreciate you all. Like, you know, thank you for holding space for the vast majority of genres and ideas and artists that come to the table. I think this is one of the most welcoming and friendliest and really just authentic communities, and I feel very grateful to have fallen into this rabbit hole. I'm grateful for folks like Kent who are such ardent historians and custodians of our legacy and custodians of our industry. It is so important, and it's very key to building this ancestral knowledge that we are all kind of figuring out together.

[00:59:37.970] Kent Bye: Yeah, and thanks so much for joining me. I feel like I'm in this weird space right now with this fusion of XR and AI, and I'm much more into the XR side than the AI side. And I feel this unsettledness, where there's a factionalization: almost an abolitionist side of, we should not use these technologies at all, and then the more creative artist side who are fully embracing them and exploring the potentials. And I find myself somewhere in the middle, where sometimes I want to just send all of AI into the burning sun and just get rid of it, because of certain aspects of our society, where we look at our cell phones and get so dissociated and disconnected. I worry about the corrosive impacts on the cohesive aspects of our society and our culture that may be slowly being eroded and devolved, and just the way that social media and other things can pollute the political discourse. So I feel like we're at the beginning of this new cycle, where we can imagine the next 36 years, where 36 years ago was 1989, when the Internet was starting up. We look to see the impact of the amazing possibilities, but also the ways that the Internet has impacted and harmed us. And so how do we balance both the benefits and the harms of these new technologies in a way that isn't bypassing some of those more intractable issues and pushing forward without really thinking about it, moving fast and breaking things? So I feel I'm in this kind of weird place, and I just appreciate hearing your perspective. But I'm also left with this not knowing of where I'm falling and on which side: listening to the artists, but also feeling torn myself about all these broader issues that keep me up at night.

[01:01:13.658] Michaela Ternasky Holland: Yeah, valid. Honestly, you know, I can speak very plainly and very transparently: finding funding in VR right now is very difficult. And that's just the reality. You know, Meta shut down a lot of their narrative money. The main funder in the U.S. at this point is Agog, and they can only fund so many projects for so many amazing creators that exist in the space. And I'm so lucky to be able to associate myself with the Museum of the Moving Image and this curation and this exhibit. But honestly, that's not going to pay my bills for the next few months. It paid my bills for a couple of months, but it's a short-term contract. So the reality for me as an artist is, if I want to continue making art authentically about what I want to make art about, the funding and the interest is currently coming from generative AI. And so for me, I'm kind of wrestling with the fact that I have the privilege to wrestle with the ethics, and then facing the reality that unless I go get a full-time nine-to-five job, I kind of need to embrace these things and take the opportunities that are coming my way and ask myself, how can I still make compelling, interesting work? Because this is where the opportunity is currently coming in. Do I leave behind creativity? Do I leave behind XR for now? Or do I try and make my way through it? And maybe a couple other folks are in my same shoes. So before the abolitionist folks come toward us in certain ways, I would just caution them to check, potentially, their privilege: what opportunities they're allotted, what full-time jobs they're allotted, what positions they're allotted that allow them to continue to do the work that they do, and to look at maybe some of us in the independent creator circuit who are kind of trying to stay afloat.
Not that you can't have ideology, but the weaponization of ideology can feel very harmful, and it can be more disconnecting to an already very tight-knit industry that deserves and needs a very supportive, "a rising tide lifts all ships" mentality. Yeah.

[01:03:23.846] Kent Bye: Great points, and I appreciate that. And to go back to Shari Frilot, who says: listen to the artists, listen to the artists, listen to the artists. See what they're saying, see what their concerns are. I think that's a great way to wrap up this conversation, to take it back to the economic realities of what's getting funding and what's not getting funding. And yeah, thanks again for joining me here on the podcast to explore this variety of issues. I really appreciate it. Yeah.

[01:03:46.707] Michaela Ternasky Holland: Thank you, Kent.

[01:03:48.117] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
