#1445: “In The Realm Of Ripley” Ambitiously Combines Interactive Film, VR Mystery, and Group Conversations Moderated by AI

I interviewed In The Realm Of Ripley director Soo Eung Chuck Chae at Venice Immersive 2024. See more context in the rough transcript below.

Here’s their artist statement:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my series of looking at different immersive stories from Venice Immersive 2024, today's episode is with a piece called In the Realm of Ripley, which has a lot of different moving parts. It's part interactive film, where 12 different people are watching and making choices at different choice points. And then there's a whole other VR experience, where one person in the audience goes into VR and is trying to pick up different clues to understand who the murderer was of this boy, Ripley, that you're investigating. You're going into the memories of Ripley. And so there's this interplay between the story that's happening, where the audience is watching the story as it's unfolding and making different choices at certain points, kind of these different moral dilemmas that come up, and then the person in VR, who is trying to pick up all these clues and solve a murder. And at the end, they come together and have this whole conversation that's moderated by artificial intelligence. I think the model they're using is Llama. So I actually went through this piece twice, or like one and a half times, to be able to see the VR version, and then I came back to try to see what was happening outside of that. And it's actually quite a different experience, whether or not you're in VR. There were also some interaction design issues that I had, where their locomotion mechanic wasn't really picking up and detecting the gesture.
I think they expected you to kind of get up and walk around and explore and look for clues, but it was in a very small space with a chair, and so it wasn't immediately obvious that I should get up and walk around, so I wasn't able to get all those clues. But at the end, everybody's coming together, and there's this moment where the two audiences, the film audience and the VR audience, have this opportunity to kind of share notes and then potentially solve the murder. But I think the AI just kind of goes off on its own path and starts to ask all these kind of random questions to the audience around different moral dilemmas around memory, or just kind of facilitating a conversation that seemed, in my mind, a little bit off topic from the overall thrust of the experience. So there are a lot of these different moving parts: the interactive film, the VR portion, and then having a theatrical element of the AI to kind of bring it all together. And in my perspective, at least, the AI wasn't able to tie it all together in a way that made it coherent. So in the context of this interview, the director, Chuck, rescheduled at the last second, and then we were supposed to meet around 10 o'clock, and he was like 15 minutes late. And so there are some other elements to this conversation where I wanted to dig in and have more space to really unpack this piece, to fully understand what his intentions were and what some of the gaps were, where it started to break down from my own experience. But we didn't really have the spaciousness to have that more extended conversation. At the same time, I was doing an unreasonable amount of conversations at Venice Immersive, probably over 30 hours of conversations over the course of five days.
And so I had pretty much overbooked myself, and there wasn't much flexibility to move things around. So anyway, I feel like this is a piece that is very ambitious in trying to tie all these things together, and, in my mind, leaning a little bit too heavily upon artificial intelligence to kind of bring it all home. So anyway, we unpack it a little bit more. It seems like this kind of fusion is something interesting that's there, but something that didn't quite cohere in this iteration, at least the one that I saw at Venice Immersive. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Chuck happened on Monday, September 2nd, 2024. So with that, let's go ahead and dive right in.

[00:04:01.641] Soo Eung Chuck Chae: Hi, I'm Chuck Chae. I am the creator of the experience piece called In the Realm of Ripley here in Venice. I come from a film background, with extensive experience, especially in the VFX field, of nearly 20 years. And then after that, in 2014, just like everyone else, with the DK1's magic, I converted to looking into XR as a new format, especially VR, with its cinematic storytelling possibilities.

[00:04:35.994] Kent Bye: Maybe you could give a bit more context as to your background and your journey into working with VR.

[00:04:42.751] Soo Eung Chuck Chae: Okay. I started out my career as a 3D producer. That was the time we were putting together Red Epic cameras for live-action 3D. And also, my job was to create a visible gorilla for a feature film in Korea. After working extensively as a VFX supervisor and VFX producer, I went to China to work on my first feature film debut as a director. But from that point on, I felt like there was something else missing, because I studied film at the University of Central Florida back in the States. And I met a very special professor there named Christopher Stapleton. He introduced me to how to tell a story without a camera, and that was the eye-opening moment. At the digital media department, I was the one film kid who goes there a lot, looking at the theme park technologies, and I was working as a tour guide at Universal Studios and Disney World for Korean tourists. And then I fell in love with how the theme park was able to embrace the audiences as part of their storytelling. They were using a whole bunch of other technologies and devices and were able to deliver the experience. But of course, especially in Korea, it's very hard to build that concrete building, with a lot of fun things happening, that's needed to realize the vision. So I turned to VR. I just realized, just like Michel said before, we learned how to manipulate time through film for the last 120 years. Now we're learning how to manipulate space in XR. And then now I want to add that we're also manipulating our existence using AI. So we have all three of those for this VR piece, trying to expand more storytelling possibilities with those three elements. Film storytelling discipline is the basic fundamental. But then for manipulating space, I had to borrow languages from exhibition and theater play. And I also had to borrow languages from gaming for audience dynamics.
But then using AI, I realized that I'm actually bringing back what we had lost before digital media came about: the theater. A history of more than 2,500 years of good old theater play could come into a digital media scene with the help of live interactions, but not from a person behind a 3D mask. We're talking about a full-on AI that could interact with the audiences, that could remember you, and the storytelling piece could evolve with the audience. So that was a huge leap of possibilities after my feature film days, leaping into VR and then VR with AI. It's definitely expanding.

[00:07:46.040] Kent Bye: Yeah, and I think In the Realm of Ripley is one of those projects that is this real interdisciplinary hybrid fusion of all these different mediums fused together. So I guess, where did this project begin? What was the catalyst for you to want to try to tie all these different media together?

[00:08:04.121] Soo Eung Chuck Chae: Okay. In the Realm of Ripley is also titled The Apartment. The apartment signifies how, in some parts of Asia or European countries, they consider an apartment a very cheap, budget way of staying; you call it home. But in Korea, it's really hard to get your own apartment. It's super expensive, and people go through a lot of troubles. And I've seen so many dramas per house. It's kind of like putting all the eggs in one tray, so you get to see everything. So we have a variety of storytelling and life that is encased in a concrete wall. But that's also a comparison to our film's theme. It's a sci-fi theme where you could store your memories in computer servers, and our computer server room is also similar to the gigantic apartment complex. And then I realized, thinking about ways to tell this story with a feature film, that I didn't want to miss out on this experience in VR, because I could be part of the experience and do something about what's happening in front of me. But at the same time, if you're a lean-in or lean-back audience, just like Hitchcock says, if you have to go to the bathroom, you can't focus on the movie. So there's a feature-film-length duration. Same thing with gaming: when you're running around shooting people, you can't really empathize with everybody unless it's after the fact. So I decided to mix in that different approach. As a film cinema audience, I believe there's a way to diversify this storytelling: how the film approaches the audience, how the VR approaches the audience, and how the audience can embrace better storytelling, a different type of storytelling, in VR.
Yeah, my mentor once said, when I was doing that 3D movie, why do you want to bring 3D and VR stuff into cinema? With cinema, everyone's accustomed to this. It was a new media back then, back in the 40s and 50s, but now it's not; we consider it part of our culture. For VR to grow into that, I believe there has to be a new language for it. In the Realm of Ripley was the experiment where I wanted to capture and show where we are. We talk about VR all the time, but we're still living with flat screens on mobile devices and computers, and we have an imaginary desire. My desire was, oh, I wish I could go into that movie and do something about it. And yeah, I thought about using VR as a different type of story device, because there are so many things still to explore in cinematic language. With that one obscura of single-camera storytelling, there's still a lot to explore. But today, with all this media coming in and taking over (like, for instance, a streaming show from VRChat on Twitch has more viewership than a singular title from Netflix with a gigantic budget), if we consider the internet as a theater, we're losing audiences. Well, I don't want to say losing audiences. We have diverse audiences going everywhere, spending their time. And I just wanted to capture that moment in this one piece.
And though we have a full-on experience in VR, it's a half-assed VR without the cinematic experience, and the movie experience is very vague, and you've got so many questions to answer on the film itself, but the answers could be found in the VR piece. I just wanted to use the audience dynamics in between, to use them as the medium, so they could spare a little bit more of their time and think about what's happening there. And overall, when you're looking at them from behind, with the flat screen and VR, the best experience is, just like your podcast: the best way of storytelling is still the word-of-mouth, face-to-face, good old communication device. Yeah, I just wanted to capture that and have some more sympathy or empathy, to share that feeling with everyone. Because no matter where you are in the world, with the internet, everyone knows everything together; we're kind of syncing on the same page. I just wonder, with cinema and VR, would it change? Especially if I'm listening to the podcast, I believe I still have maybe less than 10 or 20 years of an active working period, because I'm still enthused about this new technology, this new storytelling. Because we're not born as native VR people, unfortunately. I mean, unless, you know, there are people born in the 2000s, or the alpha generation, I don't know. But even for them, their active period of making money and working as part of society is probably less than 30 years. Will VR take off like the mobile device did? That was the part I came to think about. Because the Apple iPhone phenomenon could happen because there was Bell. With the wired phone, everyone knew how to talk on the phone. And then with the computer, everyone knew how to use a computer, and they could listen to music over the internet. And that's the point where the iPhone came about.
But with VR, only a few selected people experienced it back in the day. And with them criticizing the technology now, I still have doubts whether it will be taken as mainstream, I don't know, but we just have to learn how to share our audiences across the different media forms, and I believe the creators have to embrace this and try to reach out to the different types of audiences there are. Yeah, I mean, I'm a huge fan of social VR. To me, it's a very new storytelling. We can mix in everything: game mechanics, film storytelling, live theater play, the monetizing aspect of people giving each other something back in the virtual world, the embodiment experience. Because the younger generation... I mean, now I'm working with my team of about 20 people, and it's my first time working with them. I didn't feel like I got older, you know, looking at them. They don't want to listen to me; they want to feel it themselves, they want to experience it themselves. And embodiment and experience is the best way of telling the story. So for that matter, I believe VR is really the medium that we need for the next generation, so we could have more embodiment in the storytelling. However, people see VRChat or any other social platform as more of a gaming-based platform. But to me, they're doing very advanced storytelling there. I kind of wanted to bring that essence to this piece, where you could be seen, you could do something, and you could be associated with and part of something. This is something that we could still do in cinema, but with a little bit more touch, like adding a VR participant or live actors. Like recently, Francis Coppola, the director of The Godfather, did a new film where the actor actually came out in the middle of the premiere and tried to talk to the audience. So they have been thinking about how to break out of the frame to reach the audience and have an interaction with them.
But in film, even though the artists, the filmmakers, really wanted to do that, they couldn't, because of the survival of the film, or the survival of the storytelling. We had to print it in a hardbound book, fix it, and send it out there. Film, same thing: you send a video file, a DCP, a roll of film; it's locked. But with this real-time-based technology, the story could evolve, and they could interact with the audience. And I think that's the beauty of it. I think that's the whole point of this festival, not just the VR. VR embraces so much overall as immersive. And if we talk about what is immersive, it's not just about putting devices on your head. There could be something more, because people do put out more; they're more than just an audience these days. They could be our co-creators, and there are great experiences here that are utilizing that type of storytelling, and the directors are becoming more like architects, inviting the audience into their world and having them create their own experience out of it. I'm all for that, really all for that. It's something that I couldn't experience from the feature film days, or the so-called cinematic VR. I believe there's something more than just 6DoF looking around or touching things. There's more that, with the help of AI, or let's say very complicated machine learning technology, we could definitely do to embrace audiences. The movie or the experience could be much smarter and evolve together.

[00:17:33.879] Kent Bye: Cool. I'm just going to jump in and give a recap of my experience and my understanding of the piece, because I do want to have some time to break it apart. There are actually a lot of moving parts in this piece, and I don't think we'll have time to unpack all of it, but I did want to set the broader context. There are 12 audience members who are mostly watching a film that is already created, but there are also moments where you're cutting in either VR footage of the person who goes into VR, or an AI character who is interacting with the audience. And so you're looking at the memories, trying to solve this mystery. You have this asymmetry where one person goes into VR, and a lot of the story is actually pretty occluded from the person who's in VR. For them, it becomes more of a point-and-click adventure game where you're trying to get clues to solve the mystery of who killed the son; you're looking into his memories, trying to see who his murderer was. So I was the one who was in VR when I went through it, and there was kind of an unorthodox locomotion method where you're reaching out and grabbing, and it was probably 50% accurate: at least one out of every two times I tried to use it, it wasn't picking up, and so that was frustrating. And then I was in a chair. You're supposed to find these clues, but you actually have to stand up and walk around in order to find some of them, and I didn't know to stand up, because I was told that if I wanted to locomote, I had to reach out my hand and pull. So I assumed that that was the only way I could move around. And because there was a chair in the space, you don't typically walk around a space that has a big giant chair in VR unless explicitly invited or told. And so then I didn't find all the clues. The audience voted to kill me. I got out, and then...
essentially the audience was having a totally different, completely different experience. So I kind of walked away being like, I don't think I really fully understand this piece. And so I tried to come back and see what the perspective was from the audience members, because it's a little bit more of a traditional cinematic story at the heart, but there are some moments where you cut away and show what's being seen by the VR person, and then you also have this AI character who is asking the audience to vote, so you have this kind of participatory voting. Before the audience watches the movie, there's a whole mobile app where they're inputting some sort of information that is presumably somehow getting fed into the narrative. So there are a lot of moving parts in this piece. And at the end, an AI character starts to ask the person who was in VR some questions, but it's kind of a traditional large language model, maybe ChatGPT 4.0 or some variation, where it asks a question, it has some bounded information in a prompt, but it can go down a path of asking endless questions to the audience around the nature of the technology, what it means for our memories, and working with it. And so it ends with this conversation and dialogue with the AI. And there are a number of branches; there are probably lots of different permutations that I didn't get to see in my two times through. So there are a number of interactive choice points of branching narratives that I don't fully understand. So anyway, I just want to set that context and throw it back to you to elaborate a little bit about how you put this together, what you were trying to do with each of these different touch points, and whether I missed anything for a key part of what is happening in this experience.

[00:21:04.838] Soo Eung Chuck Chae: Okay. Yes. Well, UX is always something that I'm most frustrated with. Not coming from a gaming background, the UI and UX of designing and architecting how the user will perform, things like that, is still the tough part. It requires a lot of beta testing. Especially when we were changing the conditions here at the exhibition, with the poor lighting, because we couldn't separate the area like we intended to, things like that. And people could walk around inside the entire complex, but as soon as we found out we only got two by two meters for the VR space, we kind of compromised on just the pulling. The act of pulling was just grabbing and pulling towards you. We tried to frame it as people not wanting to let go of things, like memories. So we were just doing this act of dancing; we kind of wanted to put that in there. I wish we could have put in more variations of dancing movements for the audience, because it's not a standalone experience for an app release; it has to help the audience perform together with the film. So we kind of wanted to add that in there. Part of the frustration is from the design. I do give myself a little bit of harsh feedback that this could be a lot smoother and should be a little nicer to people. But there are people who do figure things out and reach out, and some people do walk out of the cage to reach, and we let them do it, trying to get more and more storytelling clues. If you do see the entire set of clues we embedded in there, in order, there is an answer to who the murderer is and what really happened, things like that. But then it's a matter of how much they are invested in the story in the beginning. So we're just hoping that they could come back for the cinema experience. And you weren't the only one who came back a second time, so I was very grateful.
But at the same time, I don't want to make them suffer for it. The frustration also makes the cinemagoer curious about why you're frustrated in there and what's happening, and it really helps them vote to kill you, because some people are semi-jealous of what you're going through, and they want to know, they want to see if this is really a synced experience technology-wise, what the meaning of it is, things like that. And then they let go of their frustration at the point of making choices. So it's intended as part of the experience as a whole. Technically, we're also bounded by time, with the movie duration and some of the points of interactivity, so I wish we had more dwell time for the audience. But it was our first time matching the cinema and the VR, so it did help a little bit. We were even thinking about adding some timer devices and things like that to help make the audience move and do something more. But yeah, it's intended in a way, but I wish I could have made it smoother, especially in this environment that we just adapted to here. But overall, I believe it did work. If you look at the entirety of the experience, I believe it did work as we intended. Did I get that question right?

[00:24:49.164] Kent Bye: Yeah, well, we'll continue to unpack different elements of it. So it does seem like there are a number of different inputs coming from the audience, or other choice points that are being decided by other information that's coming in. How many different branches or choices are in this piece? I can think of at least one or two that come from the audience questions, but I also don't know how other information might be feeding in to create other alternative endings or other branches. So, yeah, how many choice points and branches are there in this piece?

[00:25:24.062] Soo Eung Chuck Chae: The other experiment here was that in cinema, I feel like, because it's an auteur-driven piece, and some of the cinematic VR is also like that, they're written by a single writer. They don't want to have the audience mess up the whole storytelling. They want to include them in the experience, but they don't want to have them ruining the experience. I believe choosing and having multiple different endings and branching out, in FMV or the interactive movie, is a different genre. I really wanted to find a different interactivity with the AI. There has to be a definite ending for the piece, but finding that ending could be unlimited choices. Here we let them do a thumbs up or down mechanism for the crowd interactions, using an algorithm that catches the majority of the voices to make the choice. But for a one-to-one experience, from your home device or your console, we really want this to be more dynamically engaging, through the dialogue with the AI actor, so you have millions of different ways to go. But during that conversation, the puzzle or the quiz that she gives out will eventually lead into that ending, because you are convinced. That's the experiment we want to do for the one-to-one; the one-to-many interactivity was the difference. So we had to make it more like people yelling in a stadium, where they have only a few choices that they yell to make. But making choices is not entertainment to me, because everyone is making choices every day, and they don't want to pay to make choices here. So I believe the interactive film or FMV should be different, because life is all about choosing, and we don't want to bring that real life in there. Branching out the story and seeing different possibilities is cool, but I believe that language format should stay within gaming. The RPGs, or any other branching or open-world storytelling, that branching should be there.
Even though there are beautiful pieces like Detroit: Become Human, things like that, I believe the possibility should be more open. If you're going to branch out, it should be more open for the audience, not really about choosing thumbs up and down. For the VR piece, we used Llama, but we wiped it all off from the base and trained it from the foundation, so it stays within the worldview, and it's safe and sound on our GCP. I wanted to make the conversation design smoother; that's something we need to work on more. However, it was very interesting to see how the audience shares their stories and how they're baffled by the AI's questions. And that also was part of the experience, even though I'm a very pro-AI person. But I would rather have them do my laundry than create music and movies for me. Using them to create something new, like this dialogue-based interactivity, is something I kind of want to focus on: more thought-provoking, more brain exercise to be done there. We're also using a crowd effect, with the one-to-many, about 13 people in the room. They are all still feeling each other out, and they're preparing how to answer, thinking about how they're going to be seen in there. So that psychology was also added into the experiment, and the AI just kind of adds that in there. But I needed them to feel like they didn't need the AI to talk to each other about the experience. So that effect kind of worked, I guess. But of course, for the storytelling service's sake, we should have been smoother in terms of bringing them in a little bit closer. But within that 10 minutes of conversation time, I think it was properly bringing them together. And also, it was actually training our AI, because all the choices the VR player made have a record in our log, with the X, Y, Z points of which points the user picked out the most, and also how they answered the AI. It's accumulating.
So at Venice, we're still developing, and the experience is still expanding right now. I believe after the festival run, once we get back home to the Korean audience, and then Chinese, Japanese, and possibly other European audiences, the experience could be much smarter and could bring in different possibilities to come. Because I'm a strong believer that AI could, not replace the artist, but help them expand the storytelling like this, doing something new. Perhaps we could be training the Unreal Engine real-time-generated graphics into generative AI and be able to create semi-real-time graphics, and this type of storytelling format could be the basis for when it's time for this generative-AI-expanded interactivity. Instead of going out shooting for eight days to shoot different endings, or creating branches of animations, I believe generative AI could come in to expand that as an option, so the creator could just focus on their main core storytelling and have the AI create the variations for it. But yeah, with this piece I was trying to focus on that: maybe one day it could happen with generative AI creating interactive videos and interactive experiences with the input of the original piece. And still, I believe there's a lot to explore, not only in the cinema. I mean, having them talk to the screen was still a hard thing, you know? They're accustomed to just looking at the movie. But for storytelling like this, embracing the audience, how to embrace them and how to give roles to them and have them do something to affect the narrative without having them ruin it, there's still a lot more. Of course, I still have to explore more of the UX design for the user. Back in 2018, with BuddyVR, we spent a lot of time just to get that UX right. But to me, the core experience was the fictional character remembering the audience's name.
And then we pretty much acknowledged each other's existence by giving back and receiving something from each other in the virtual world. So that part kind of worked in 2018, and also in Venice. After that, I really wanted to find a way to make it more interesting for the audience to interact with the computer. Within that process, the experience is created. My piece is still not completed. It's being completed here as the user, or the audience, or the player, whichever you call them, finalizes it; they are completing my piece now.

[00:33:07.826] Kent Bye: Yeah, I did, I think, So unfortunately, we do have to start to wrap up and there's so much more to this experience that I'd love to dive in at great lengths. But I do want to make two points and then get your reaction and then start to have some wrap up questions. And so I think this is a type of piece that's extremely ambitious, that's trying to integrate so many different types of media that it ends up at the end not having any center of gravity. It's kind of fragmented amongst any number of like it's part film, it's part interaction, it's part VR experience. And then I'm actually more of a skeptic. I think the A.I. is not doing your story any service at all, mostly because when I was in the experience, I was like, I'm stuck. I can't I need help. And I was like, if there was a person there, like you can get up and walk around. But the A.I. wasn't like telling me that. And so if anything, the AI was not actually understanding what my problem was or why I was having like not able to understand my context and not help me to get out of the situation. Whereas if it was a person watching me and doing it, they could have actually helped do that. And at the end, there's a fundamental kind of dynamic of this experience where there's an asymmetry of power. And like the number one question is like. Okay, what did you guys experience? What did I experience? Let's solve this mystery together. And you throw a LLM at it and it's basically like, what do you think about memories? And I don't know if it asked me, like, can you share what your experience was? And I said something really long. It was like a few minutes and it didn't even hardly respond to it. It was like overwhelmed. And then it started just kind of asking me random questions. I was trying to share back to the audience, like, this is what I saw. And I was trying to get, like, what did you see? Let's solve this mystery together. 
And it was like the AI was just completely baffled by that, and it started to kind of monologue at us around these other random questions that didn't really have anything to do with what we had just experienced. So it was kind of frustrating to have the AI at that point, because I felt like if this is going to be a large language model chatbot experience, then you can optimize for that. But we had just gone through this whole cinematic experience and a virtual reality experience, and it felt like the AI was doing it a disservice, because it was not really understanding what anybody had just seen or what the main question was, which is: who is the murderer? It got off topic and wasn't even helping us solve it. So I think if, at the end, you just took away the AI and had a person facilitate a conversation, it would be a lot easier to either facilitate that conversation or bridge the context gaps, because there was a big asymmetry between what the two different groups had just experienced. And by the end of it, both times I saw it, I didn't feel like those two sides had been bridged together in terms of understanding those two points. So I think there are a lot of interesting cinematic aspects, but I think AI is actually not at the point of being able to tie all those things together yet. So anyway, I don't know if you have any thoughts about that.

[00:36:11.641] Soo Eung Chuck Chae: Well, obviously you didn't quite enjoy it. Every experience is different. Every show is actually different. The one that you were leading was very different. But then in some shows, with different audiences, it comes out very differently as well. I think it was very interesting to see how the user changes the entire show and the show's vibe. I don't want to say it depends on the people, but since we're not at GDC, we're not competing on technical innovation. But overall, the experience depending on each audience is all different. I don't know what to say, because the experience is evolving. It's still playing the same mirror. It's reflecting you. We're not doing a media service here. But yeah, I mean, some people enjoy making something out of it. Some people find it hard to take in, because the prerequisites they had in their minds, their expectations, are all different. But yeah, overall, it's very interesting to see. It gave me more opportunity to explore how all humans are different, how each audience from a different country with different backgrounds is different. Audience control is part of the discipline that we need for this type of storytelling, but we're still at the infancy stage of what we're doing and developing. And yeah, the technicality could be better. But overall, I believe the experiment kind of worked in a way, depending on where the person stands, and the show also comes out differently every time. I wish I could show you some of the videos. It was very interesting to see the diverse outcomes of the show's energy, I want to call it.

[00:38:06.823] Kent Bye: How many of the audiences have been able to solve the murder?

[00:38:10.604] Soo Eung Chuck Chae: Out of all the shows, two of them got really close to what we had in mind for our series. By the way, this is also pitching as a streaming show as well. We're getting ready for a 12-episode series for TV, and more of an evolving experience that accompanies the TV series is the VR piece. But for the core of the story, I was really surprised to see some of the audience, I mean, I believe they are really good at gameplay or at psychology. They did get really close to getting the murderer, or to recreating the situational scenario. Yeah, I would say a couple of reinterpretations from the audience were really close to what we intended and what's in our story Bible for the apartment in the Realm of Ripley.

[00:39:09.417] Kent Bye: Yeah, like I said, it is an experience that does have a lot of emergent social dynamics, and both shows that I went to were pretty different as well. I wish I could have seen them. Like you said, it doesn't feel like it's in its final finished form. And I feel like the interaction design is probably the biggest thing that could have used some more user testing, because if you had just had a locomotion mechanic of handing me a controller, I would have had a much smoother experience. But yeah, I guess I'd love to hear where this is all going, any of the ultimate potentialities of the medium.

[00:39:46.507] Soo Eung Chuck Chae: We're still bounded by time, money, and physicality. That money leads to what kind of devices we can play with, and how much time we have in our hands to enjoy all of them. We're in a great time, in a mixture of various different mediums, with a lot of storytelling to be explored. But you know what's funny, though? We think VR is new, and we could call it a white paper or a playground for us to draw on and start to explore things, but there's nothing new under the sun, I feel like, because it's all about collective experiences that have existed before. Even the creators, myself or the others, are learning each different discipline. You know, colleges and universities should have not just a film major, but film, XR, AI, all of that put together as an immersive major or something like that from now on. But we're at the stage where each discipline takes so many years to learn. And as we are using VR as a medium, we're collectively putting together different types of languages and storytelling devices. So I wouldn't say it's new. It's been there. It's finally coming together, so we can perceive it as new. We're just wondering whether we're leading the audience or the audience is leading us. I would say, yeah, I mean, celebrate life, and hopefully we can capture the moment of time that we're in before we are again changed into, you know, into the time. I don't know what to say. Yeah.

[00:41:25.299] Kent Bye: Awesome. Well, thanks so much, Chuck, for joining me today. I feel like In the Realm of Ripley is probably the most ambitious project here, trying to integrate so many different media and so many different moving parts. And I feel like there are affordances to each of those different things, and in the far distant future, we're going to have something that's more seamlessly integrated, where we can start to interplay with collective agency, audience interaction, interactive narratives, and asymmetry of experiences, where one person sees one thing and another person sees something else. So there are a lot of really interesting ideas that you have in this piece. And I think some of my frustration is from the friction I had in my own experience. And honestly, for the AI interactions, I would have preferred to have an immersive theater person guiding the direction, because I don't feel like I was having my questions answered, or getting the facilitation that I wanted, which was us collaboratively solving the murder. It kind of turned into something different at the end, with the AI kind of going off on its own. So narratively and story-wise, I left kind of not knowing it, and so I came back and got more of it, but even then there were still kind of those friction points. So anyway, I just want to thank you again for joining me and helping to unpack a little bit more about what you're doing in In the Realm of Ripley. And I do look forward to seeing where this kind of fusion and integration goes in the future. So thanks again for joining me here on the podcast.

[00:42:51.004] Soo Eung Chuck Chae: Thank you. Thank you for having me. It's great. I'm a fan of your show. Looking forward to more. Thank you.

[00:42:57.530] Kent Bye: Cool. Thanks. Thanks again for listening to these episodes from Venice Immersive 2024. And yeah, I am a crowdfunded independent journalist. And so if you enjoy this coverage and find it valuable, then please do consider joining my Patreon at patreon.com slash voicesofvr. Thanks for listening.
