#1480: Agile Lens 2023: Hands on with Four Seasons VR Demo & A Christmas Carol VR 2023

I’m diving into my backlog to publish three unpublished interviews with Agile Lens with this last conversation from December 2023. I interviewed Alex Coulombe, Lead & CEO of Agile Lens Immersive Design, Kevin Laibson, Experience Director for Agile Lens, and Jackie Roman, Cinematographer at Agile Lens, at FilmGate Interactive 2023 about the Four Seasons Lake Austin VR experience as well as the A Christmas Carol VR experience. See more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast, the podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my little deep dive into my backlog conversations with Alex Coulombe and Agile Lens, today's episode is a conversation I had with Alex, Kevin, and Jackie at the FilmGate Interactive festival that was happening in Miami, Florida. So Alex had a chance to actually show off some of the Christmas Carol experience, some of the volumetric scans of a concert that was happening, as well as this big real estate project that he had been working on to do this photorealistic rendering of these condos that were being built within the context of Austin, Texas. It was basically a project to have a direct embodied experience of something that isn't quite built yet in order to actually sell some of these different condos. And if they got enough sales, then they would actually start to be able to build it. And also we do a bit of a deep dive into this Christmas Carol experience that he was showing off there at FilmGate Interactive. And it's going to be continuing again this year. It's a project that they continue to come back to each and every year just to push forward some of the latest emerging technologies. And I think this year, Alex said that they're going to be just doing it for free for anybody that wants to come and check it out. So definitely check out the Agile Lens website to get a little bit more information for some of the different times for one that's going to be coming up later here in the winter. So that's what we're covering on today's episode of the Voices of VR Podcast. So this interview with Alex, Kevin, and Jackie happened on Sunday, December 3rd, 2023 at FilmGate Interactive in Miami, Florida. So with that, let's go ahead and dive right in.

[00:01:46.383] Alex Coulombe: Hi, I'm Alex Coulombe. I'm a former architect turned XR architect, and I run an XR SWAT team in New York City called Agile Lens Immersive Design, and I've got a couple of my SWAT team members here.

[00:01:56.408] Kevin Laibson: I'm Kevin Laibson. I'm an experience director for Agile Lens. My background is in theater, and largely what I do now for Agile Lens is direct and produce theater in VR and AR.

[00:02:05.255] Jackie Roman: My name is Jackie Roman, and I'm a cinematographer at Agile Lens, and I have been helping build a virtual camera system for the experiences that we're doing in XR.

[00:02:16.344] Kent Bye: Maybe you can each give a bit more context as to your background and your journey into working with immersive media, theater, and storytelling, and XR.

[00:02:23.755] Alex Coulombe: Yeah, so I studied architecture at Syracuse University and always had an interest in how emerging technology could help better communicate design intent. And so as part of that, I started to experiment with augmented reality back in 2009, marker-based AR, as a way to show my crazy theater designs in a more clear context. And coming out of school, I worked for a few architecture firms that were not particularly excited about the potential of emerging technology. But fortunately, in 2013, when the Oculus DK1 came out, I was working for Fisher Dachs Associates Theatre Planning and Design in New York City. And they were totally game to see how this technology could help us design better theaters by looking at the sight lines of all the different spots in the hall. And that led to changing different design options, like different railings, different materials, different things on stage. And then that led to actually pre-visualizing real shows that were going up in New York City at places like the Park Avenue Armory and Lincoln Center. And then eventually we got to the point where we thought, hey, this could be its own company. And thus Agile Lens was formed.

[00:03:22.062] Kevin Laibson: Like I said, my background is in theater. I was a director and producer in New York for many years before I got involved in XR. I was the artistic director of the People's Improv Theater, was the founding artistic director of a space called Magic Future Box in Sunset Park, Brooklyn, was a digital director for a space called Super Secret Arts in Gowanus, and have directed and produced work at The Public and The Flea and numerous other places around the country. I also teach improvisation at Atlantic Acting School for NYU, which is how I actually got into XR. I was in a high school theater program with the very talented Kiira Benzing. And when I posted that I had gotten the job at Atlantic, she said, oh, I'm working on an improv thing in VR. Would some of your students want to help? And I said, well, to hell with my students. This sounds great. I will come and bring my friends. And that's where I met Alex and David Gochfeld. I started doing a little improv work and directing some improv shows for High Fidelity, and never looked back. That's basically it.

[00:04:17.575] Jackie Roman: My background is in film and photography. I was a photojournalist for many years covering entertainment and music, and eventually started working at a soundstage until the pandemic happened. And when our stage got shut down, we installed an LED volume from 4Wall Entertainment, and I realized I would have to learn Unreal Engine. And I took a class at the Tandon School of Engineering and got introduced to the world of Unreal Engine and haven't gone back. I met Alex Coulombe on Twitter and jumped in full-on with Agile Lens's projects about a year ago.

[00:04:50.559] Kent Bye: Yeah, and Alex, we had a chance to catch up at Tribeca, where you were talking about a whole variety of different projects. And we're here at FilmGate Interactive, where you're actually showing demos of pretty much all the stuff that we had talked about back at Tribeca. So it was really great to get a chance to see a lot of these different experiences. So maybe give a brief overview of all the stuff that you were showing here at FilmGate.

[00:05:10.888] Alex Coulombe: Yeah, it's a lot of what we've been doing over the past year. And unfortunately, yes, Kent, when we were talking at Tribeca, I had to be very cagey about like, oh, some of this stuff is still kind of NDA. Much more of it is in the open now. So certainly our biggest project over the last year has been part of the Four Seasons Private Residences on Lake Austin: a 5,000-square-foot showroom in Austin, Texas, meant to sell these properties, which are a $1.2 billion real estate project with $300 million in amenities. And so working with the incredible folks at DBOX, the client Jonathan Kuhn, and then our partners over at PureBlink in Toronto, we all collectively worked together on building out a very robust local multiplayer, super hyper photorealistic, ray traced VR experience over Air Link with the Meta Quest Pro. And that had to solve a billion issues that we did not anticipate, but it was a lot of fun. And for more on that, there's actually a nice Unreal Fest talk now that is on Unreal Engine's YouTube channel. Additionally, in March we went over to Clarksville, Tennessee. And actually, just so I'm not talking so much, Kevin, can you speak briefly about La Passione?

[00:06:12.810] Kevin Laibson: Sure. We went to Clarksville, Tennessee to shoot a performance, or a series of performances, by the Gateway Chamber Orchestra of La Pasión según San Marcos, which is a piece by Osvaldo Golijov that is quite remarkable. Everybody should check it out. We were lucky enough to be brought in to shoot a 360 archival video from several angles, which Jackie was primarily the lead on, certainly the lead cinematographer. And alongside that we did a sister immersive piece where we did a lot of sort of run-and-gun volumetric capture through those rehearsals and performances. And now we have this interactive piece where you can move through the space and see highlights of the piece itself, with volumetric captures that, in their incompleteness, aim to capture the ephemerality of live performance, recognizing that volumetric capture and a Polycam scan of a venue can only do so much. We have the archival piece, so rather than try to capture all of everything in the space, we thought, wouldn't it be more interesting to capture what it was like to kind of be there? So that's La Passione, and I think Jackie can also speak more relevantly to the 360 side of that.

[00:07:19.958] Jackie Roman: Yes, so for La Passione we have incorporated six or so different stereo camera angles, and we were utilizing both 180 and 360 and mapping those into spheres that are around the experience that you can pop your head into. Some of the angles are right in the center, so you can get a sense of what the conductor's hands are doing. Some of them are way back by the choir and allow you to turn around and look over at the piano player. And it's been an experience too, with some of the cameras becoming almost obsolete and being replaced. Now you can really do some of the stereoscopic work with smaller devices, and it's becoming easier and easier. But it's been a very exciting learning experience to work with these experimental formats.

[00:08:04.488] Kent Bye: Yeah, and just speaking on that experience, there's a lot of point cloud representations. And I was thinking about how Gaussian splats were announced at SIGGRAPH in August. And I feel like there's a lot of exciting work in terms of how you could take point cloud data and perhaps do much more of a photorealistic rendering using the kind of multi-step training process behind Gaussian splats, which is similar to neural radiance field training. But I don't know if that's something you've also started to look at, of how to start to integrate things like Gaussian splats, take some of this volumetric data that you may have in an archival sense, but give it another level of fidelity.

[00:08:37.048] Alex Coulombe: Yeah, well, we're huge fans of Polycam and Luma AI, and fortunately, both of those apps have basically made Gaussian splats incredibly simple. In fact, with Polycam, they'll even let you take an old photogrammetry archive, as long as you still have all the photos, and they'll say, hey, do you want to turn that into a Gaussian splat? And a lot of them look really excellent. We haven't played with it too much yet for La Passione XR primarily because we're actually very happy with the current ephemeral sense that the point cloud data gives and the way we're able to make all these points, you know, kind of pulse with the music in different ways and adapt to the different movements. But we're certainly very excited about that technology and want to continue to explore it. We really enjoy being always on the bleeding edge regardless of whether or not it is right for a particular piece. We want to know what tools are available.
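As a rough illustration of the audio-reactive point cloud treatment Alex describes (points that "pulse with the music"), here is a minimal Python sketch of scaling point sizes by the short-term loudness of a music track. The function name, parameters, and sample rate are illustrative assumptions; Agile Lens's actual implementation lives in Unreal Engine and is not shown here.

```python
import numpy as np

def pulse_point_sizes(base_sizes, audio_samples, time_s,
                      sample_rate=48000, window_s=0.05, gain=2.0):
    """Scale point-cloud point sizes by the short-term RMS loudness of the music.

    base_sizes    : per-point base radius, shape (N,)
    audio_samples : mono waveform as floats in [-1, 1]
    time_s        : current playback time in seconds
    """
    center = int(time_s * sample_rate)
    half = int(window_s * sample_rate / 2)
    window = audio_samples[max(0, center - half):center + half]
    # RMS of the window around "now"; silence leaves the points at base size.
    loudness = float(np.sqrt(np.mean(window ** 2))) if len(window) else 0.0
    return base_sizes * (1.0 + gain * loudness)
```

In practice this kind of per-frame scale factor would be fed to whatever renders the point cloud (a Niagara system, a shader parameter, etc.), but the core idea is just loudness mapped to size.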

[00:09:20.143] Kent Bye: And before we dive into some of the other experiences, I'd love to ask you, Kevin, about your experience of being in Austin, working on this real estate project, you know, as someone from a theater background. And maybe you could connect the dots between how you were looking at this real estate pre-visualization in a sales context to try to get people to buy in. But maybe you could just talk about some of your time and what you were doing there in Austin to help pull this project off.

[00:09:42.014] Kevin Laibson: Yeah, I want to be clear at the front of this to say I've talked about this a lot, and I still have not figured out how to talk about this effectively or quickly. So bear with me, and I'll do my best. I can say from a theatrical standpoint, it was puzzling for a while just to figure out. I mean, I'm a theater producer, and so I'm very capable of figuring out what needs to get done and why I am potentially helpful on a project. But ultimately, as we were there longer and longer, it became very clear that my job was not at all on the software side. It was entirely about audience throughput. From the client's end, there had been really no consideration as to what it would be like to move people into this, onboard them quickly in VR, push them through this hour-long tour, and then move them away while they're still being taken care of and feeling welcome, while new clients are coming in for another tour, which is all very basic audience behavior stuff that just wasn't anybody's area on that project. And similarly, some things would happen in this project I think I can talk about. For instance, we would bring out a chair, right? An audience would be able to see a 3D object of a chair. We had the actual physical chair. And on some tours, my job would be to be the guy to swing this chair out and put it in the place for this person to sit down in, in real life, those sort of haptics. And our client was like, there's got to be a way we can figure out a consistency to bringing down this chair. How can we know where it goes every time? Somebody must have figured out a system for this digitally. And we were like, are you talking about spike tape? Man, do you just want us to lay down some gaffer's tape on the floor? And he was like, oh, of course, yes. Over and over again, I feel like my job at Agile Lens is to remind people that most of these problems have solutions that have existed for thousands and thousands of years. And if we can just take a step back to figure out what the audience experience, what the intended optimal audience experience is, we can look outside the toolset of just Unreal Engine and just what tech can we buy to make this easier, and really talk about how do we make an audience feel safe and considered, which is like an artistic director's job in any setting. I feel like what we do so much with these experiential pieces is act as the artistic director, as the audience liaison.

[00:11:48.857] Kent Bye: Yeah, and I had a chance to do a little bit of a brief demo in the space at FilmGate Interactive, which is not the intended space, which is much more of an open space, so there was a little bit of having to navigate the dual realities of making sure I wasn't running into any chairs or other people. But it was really compelling to see these different spaces that you had created, to be able to walk around and get a sense of the vastness of what are essentially these condos for super rich people in Austin. But I'd love to hear any of the other technical innovations that you had to do in order to pull off this experience.

[00:12:22.645] Alex Coulombe: Yeah, a lot of it was just kind of going through every possible combination of software and hardware and figuring out what was going to work. So for example, early on, there was this discussion of, are we going to use Unreal Engine 4.27 or a newer version? And of course, with Lumen and Nanite and innovations like that, we assumed up front, of course, we're going to use the newest version of Unreal Engine. And yet we did find, and I include PureBlink in these discoveries as well because we were all doing R&D in parallel, that staying in 4.27 and sticking with very high quality baked lighting, and then using 90% of the rendering bandwidth on these RTX 4090s on these beautiful Puget workstations entirely for ray traced reflections, was actually what gave us the most consistently stable results. Lumen even now still gives you a bit of a firefly effect with its global illumination system, which is a little bit tricky. So that's on the software side. And then on the hardware side, originally it was going to be a Varjo XR-3 experience, and that was tethered and a little bit too tight. We're still discussing where this could go next. There's certain wireless headsets now, like the Pimax Crystal, that could potentially be another way to have ultra high resolution, human eye resolution, and have a level of fidelity that could make this even more photorealistic than it currently is. But over the past year, where we landed was on the Meta Quest Pro, kind of for its balance of the form factor, feeling very premium, and then using the Air Link system, which frankly, especially when we were hitting a suboptimal frame rate and needed to use reprojection, if we were hitting, for example, 45 frames per second and needed to reproject to 90, the black magic developed by John Carmack and everyone else at Meta to compensate for a suboptimal frame rate was really unparalleled with every other solution we tried. So that was great on the headset side. And then we had to figure out something like, okay, how do we do local multiplayer? We assumed shared spatial anchors would be a good way to approach that. We'd seen the Dead and Buried demos at Oculus Connect a few years ago, and yet we very quickly discovered that shared spatial anchors were nowhere near where they needed to be for an effective local multiplayer experience. So then we looked into using the Vive Focus 3 system, but the Vive Focus 3 was not a headset our client was happy with. Then we looked at using Antilatency, which has all these IR LED lights on a ceiling and kind of creates a fiducial map. This is used in experiences like Felix and Paul's The Infinite, which some of our colleagues saw in San Francisco. But that, of course, is in a totally standalone setting in a permanent building. And the interesting situation with our showroom is it's actually a tent that is kind of pretending to be a building. It looks like a beautiful building from the inside, but because it's a tent on a cliff, it's very windy, and so things like the ceiling would shake all the time. So we very quickly discovered that the Antilatency system, which wants to have these stable lights that never move, was not going to work as a ground truth reference for VR. So then we ended up moving to OptiTrack, which of course is typically used in more robust motion capture setups for moving actors, people who are doing full skeleton tracking.
And then we discovered, well, OptiTrack doesn't really want to be telling a VR headset every single frame exactly where to be, and so we very quickly had to figure out some way to basically say, hey OptiTrack, where should everyone be right now? And get that information for about a frame, but then continue to use the inside-out tracking from the Meta Quest Pro to let people move around. So that's just a little taste of all the different things we were trying to solve. But it was a fun R&D journey, to say nothing of all the 3D printing we had to learn how to do as well.
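To make that tracking approach a bit more concrete, here is a minimal sketch, in Python with NumPy, of the general idea Alex describes: sample the external ground truth (OptiTrack) once, compute an offset for the headset's tracking origin, and then keep applying that offset to the headset's own inside-out poses between corrections. It assumes a z-up coordinate system and a yaw-only correction; it is an illustration of the technique under those assumptions, not Agile Lens's actual Unreal implementation.

```python
import numpy as np

def yaw_matrix(yaw):
    """Rotation about the vertical (z) axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compute_origin_correction(optitrack_pos, optitrack_yaw, headset_pos, headset_yaw):
    """Offset for the VR tracking origin so the inside-out pose matches OptiTrack now.

    Called occasionally (e.g. once at calibration or at a low rate), not every frame.
    """
    yaw_offset = optitrack_yaw - headset_yaw
    pos_offset = np.asarray(optitrack_pos) - yaw_matrix(yaw_offset) @ np.asarray(headset_pos)
    return pos_offset, yaw_offset

def corrected_pose(headset_pos, headset_yaw, pos_offset, yaw_offset):
    """Apply the stored correction to every subsequent inside-out pose."""
    world_pos = yaw_matrix(yaw_offset) @ np.asarray(headset_pos) + pos_offset
    return world_pos, headset_yaw + yaw_offset

# Example: calibrate once, then keep using headset tracking.
pos_off, yaw_off = compute_origin_correction([2.0, 1.0, 1.6], 0.5, [0.1, 0.0, 1.6], 0.0)
print(corrected_pose([0.1, 0.0, 1.6], 0.0, pos_off, yaw_off))  # lines up with OptiTrack
```

The key property is that the headset keeps its own smooth, low-latency inside-out tracking from frame to frame, while the ground-truth system only nudges the shared origin so everyone in the room agrees on where "here" is.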

[00:15:46.089] Kent Bye: Wow, that was a great overview. And people, I'm sure, can go check out the talk that you gave at Unreal Fest to be able to get all sorts of other details and dive deeper. But Jackie, we're standing here on the streets of Miami. You're holding a couple of cameras. You come from a background that's very analog, in the traditions of photojournalism. So maybe you could elaborate a little bit more on your transition from going from the analog, or even the digital photography with SLR cameras, but something that's still very tactile with the physicality of operating a camera, and then moving into the virtual cameras, and what that's been like for you to both do the virtual camera innovations in the context of Unreal Engine and some of these theatrical projects.

[00:16:25.069] Jackie Roman: Certainly. So with our Christmas Carol VR project, there is a lot of language that carries over from cinematography in the real world to a virtual space. And what I've really liked about it so far is the ability to procedurally place cameras within the world and map them to certain keys on your keyboard. So in the web experience, you can control the camera yourself as a visitor to that world. Those streaming camera angles are also being cut by an editor in a video village type setup for a cinematic live stream, which I think is still a really important part of the experience for viewers who are not able to see the entirety of the experience in a headset, or who may not have a powerful enough PC to run the experience in a headset. So being able to see it on a flat screen and still have the agency to frame your own shots is really cool. And something else I noticed is the translation of resources. When on a film set you need more light, the resource comes from the budget to rent more equipment, to hire more people to set up the lights, and additional days to your project add up. And in the virtual space, those resources are now on your graphics card, and when you turn on all the lights in a project, you run the risk of crashing your computer. So these are some challenges that I've noticed are going to take some learning and upskilling for some people to be able to translate traditional production tools and processes into virtual production processes.
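For readers curious what "cameras mapped to keys" looks like in practice, here is a minimal Python sketch of a preset bank where number keys cut the live output to pre-placed virtual cameras. The preset names, poses, and the `set_active_camera` callback are hypothetical stand-ins for whatever the Unreal project actually exposes.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple   # (x, y, z) in scene units
    look_at: tuple    # point the camera frames

# Hypothetical preset bank: number keys bound to pre-placed cameras in the scene.
CAMERA_PRESETS = {
    "1": CameraPose((0.0, -6.0, 1.6), (0.0, 0.0, 1.5)),   # wide shot from the house
    "2": CameraPose((1.5, -1.0, 1.7), (0.0, 0.0, 1.6)),   # close-up on the lead actor
    "3": CameraPose((-3.0, 2.0, 2.2), (0.0, 0.0, 1.4)),   # high angle over a ghost
}

def on_key_press(key, set_active_camera):
    """Cut the live output to the preset bound to this key, if one exists."""
    preset = CAMERA_PRESETS.get(key)
    if preset is not None:
        set_active_camera(preset)

# Usage: on_key_press("2", my_render_pipeline.set_active_camera)
```

A handheld virtual camera (like the iPhone VCam Jackie mentions later) would simply be one more entry whose pose is updated live from the device instead of being fixed.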

[00:18:00.305] Kent Bye: And so speaking of the Christmas Carol VR, maybe you could give a bit more context as to the history and evolution of this, because I understand that this is a bit of a project that you come back to each year, and maybe look at the latest technologies to see how you can do this theatrical production but continually reinvent it each year.

[00:18:16.826] Alex Coulombe: Yeah, so coming out of already designing theaters and trying to make virtual reality very relevant to people who are both on the architecture side of theaters and those using it internally, we found that we really wanted to kind of spread the gospel of ways VR could be useful for this medium. And kind of speaking to Jackie's point, we were looking for ways to make it not so scary and tell people who are working in a theatrical medium, hey, this is spatial. This is about stagecraft and guiding the eye and using light in certain ways that is actually much closer to theater than film in many contexts. And so we would go to conferences like Opera America and the Theatre Communications Group and USITT and try to give talks and give demos that would try to get people more comfortable with getting involved in these experiences. And so Kevin mentioned things like Alive in Plastic Land, which were being done in High Fidelity. We were showing some demos of that among other experiences, like a Magic Leap tabletop theater experience we'd made. This was at TCG in 2019, actually right here in Miami. And here we met Robert Barry Fleming, who's a director from Actors Theatre of Louisville, who kind of looked at our demos and said, this is really exciting, we'd love to find a way to collaborate. And we'd heard that from a lot of theaters, so we didn't necessarily expect anything to come from it. But then in 2021, I got a call from him and Zachary Microbuzzy at Actors Theatre saying, hey, now that the pandemic seems to be winding down a little bit, we'd like to reopen our theater. We'd like to do a production of Charles Dickens' A Christmas Carol as a one-man show. And we're curious if Agile Lens would be interested in developing the ghosts for the experience, and they could be projected onto the scrims inside the theater, and it could be this really compelling way to have the one-man show feel very alive with all these other characters. And so we were perfectly happy to help on that front, but once we started to understand the full ask of the environments and the lighting and all the motion capture that was going to go into making this happen, it seemed like a natural next step to also say, could we also have an entirely virtual version of the show where all we really need to add in is the virtual Scrooge? Because the other characters and the costumes and the set design and a lot of the lighting were already being done in Unreal Engine. And so the original idea was to take Gregory Maupin, who beautifully portrays Scrooge in the physical production, and to put an inertial mocap suit like an Xsens or Rokoko or Perception Neuron under his costume. So it would be this kind of simulcast production where you could be anywhere in the world and see what was actually happening at their theater. And for totally understandable reasons, Gregory Maupin was like, no, I don't think I want to do that. So I ventured into saying, well, hey, there's this actor we've worked with in the past. His name's Ari Tarr. He's phenomenal at understanding how all these technologies work. Also, a seminal moment for him when he was growing up was watching Patrick Stewart do a one-man Charles Dickens Christmas Carol production. And so we asked if it would be okay for Ari to be considered as a virtual casting for Scrooge, and they signed off on it.
And so in 2021, Robert Barry Fleming was directing both the virtual and physical version, and it was a very fun experiment in how we could all work together and translate the vision of their more traditional studio into the work that we wanted to see in virtual reality. And since then, they've been very kind to basically give us some free rein to continue to explore that as a starting point, but continue to evolve the script and the characters and some of the stagecraft of what happens virtually for our own purposes. So back in 2022, we'd learned a bunch of things with our client projects that year and said, oh, it'd be great to incorporate some of that into this year's production. And so, you know, you've already heard about La Passione and Four Seasons and some of our other projects this year. Other ones for Vodafone using Inworld AI, things with Dell playing with some of the MetaHuman Animator features for them. All of this kind of snowballed again, where around July of this year we said, okay, it seems like there are some really exciting new things we can roll into this year's production of Christmas Carol. So we certainly don't make it easy for ourselves, where it's like, hey, let's spin up that old Unreal Engine project and just go to town with it. Every year we're trying to think, what have we learned, and how can we incorporate the lessons and code we've learned into this new iteration? Kevin, would you like to speak briefly about what we have in this year's production of Christmas Carol?

[00:22:13.504] Kevin Laibson: All of the many new features? Well, we moved all of the facial tracking into the Quest Pro. I'm not sure if you mentioned that. OK, great. So now both of the actors can see the audience in VR, whereas last year it had been a Live Link situation, and so they were not able to be immersed in the environment. What else did we add? AI Dickens, right. So the big sort of exciting potential toy that we added this year: a consistent issue that we have, I wouldn't say problem, but a challenge that exists in VR theater, of course, is onboarding. And how we've solved for this in the past with this show is, by virtue of it being a quote unquote Dickensian recital, the audience would load up, they would meet Charles Dickens as played by Ari Tarr, and he would onboard everybody collectively, which is fine, but it was an extra hour that he had to be called and in a mocap suit, and it just felt a little cruel to make him have to do that over and over again. And so this year we took that part off of his hands by training an Inworld AI MetaHuman with his voice and teaching it everything it needed to know about how to navigate the audience into the production itself. It knows that it is a robot that is trained to pretend to be Charles Dickens, and that you're here for a production of A Christmas Carol, and in just a few moments, it's going to hand over the reins to the actual actor, who will be more equipped and more entertaining, but will have a similar voice. And it's been a really interesting experiment. We're finding that audience members really do like having something to play with that is reasonably intelligent in the lobby while they wait, and it has definitely given Ari a lot more grace time pre-show to relax and get prepared and warm up the way that he needs to. And to be clear, for this particular engagement, we're operating under all of the likeness rights agreements that SAG-AFTRA has for any kind of likeness situation or publicity. Which is to say we're going well above and beyond the expected agreement for the AI contract. We have a retainer on for the likeness rights for the voice. He gets paid every time we enlist this project. Of course, Ari himself was super involved and excited and on board. So just to be clear, this is an extremely ethical implementation of tools that we are ourselves pretty wary of the way that they'll be used moving forward. Yeah, I just needed to cover our bases and say that we for sure are in support of the strike. And we think, frankly, the new SAG-AFTRA agreement, though I don't know when this will come out, we think the agreement is not strong enough, and that people should be respecting these tools in ways that the AMPTP does not seem to respect.
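For illustration, a lobby-greeter agent like the AI Dickens Kevin describes could be configured with a persona, a small knowledge base, and guardrails along these lines. This is a hypothetical sketch in plain Python, grounded only in what Kevin says above; it does not use Inworld AI's actual schema or API, and every field name and value here is an assumption.

```python
# Hypothetical configuration for a pre-show lobby greeter agent.
# Field names are illustrative, not Inworld AI's real schema.
LOBBY_DICKENS = {
    "voice": "ari_tarr_dickens_clone",   # placeholder ID for a cloned voice
    "persona": (
        "You are a robot trained to pretend to be Charles Dickens. "
        "Guests are waiting in the lobby for a live VR production of "
        "A Christmas Carol. In a few moments you will hand over the reins "
        "to the actual actor, who has a similar voice."
    ),
    "knowledge": [
        "The show is performed live by actors in motion-capture suits.",
        "Guests should stay in the lobby until the performance begins.",
    ],
    "guardrails": {
        "admit_being_ai_if_asked": True,
        "never_claim_to_be_the_live_actor": True,
    },
}
```

The point of a configuration like this is that the agent can improvise small talk in character while staying inside hard limits: it knows it is an AI, it knows the handoff is coming, and it never pretends to be the human performer.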

[00:24:47.858] Kent Bye: Yeah, I have to say, that was one of the more magical pieces that was new for me, at least, coming on and seeing the piece. Because I've had a chance to play with Inworld AI, with the Meet Wol demo. And it's the type of thing where you try to break the boundaries with the AI and find how you can get it to hallucinate or go off script. And I feel like you were able to create a knowledge base where I would ask it, who's producing this? And I'd ask things anybody within VR would ask, like, what's the nature of reality? And there was a really nice response to that. So I felt like there was a way that you could stress test this AI and have something to play with as we're all waiting to go into the performance. So it was a really nice toy or, I guess, interactive component to help us be onboarded into this piece. So yeah, I really appreciated that. And I think Inworld AI does a great job. And it's the first time that I've seen it being trained on someone's voice. Is that something that is a part of Inworld AI, or is that the ElevenLabs part?

[00:25:39.345] Kevin Laibson: ElevenLabs. They integrate really well with Inworld and I think every platform that we use. Yeah, we're having a really good time using ElevenLabs.

[00:25:47.878] Kent Bye: So you're training a voice on this ElevenLabs model to then mimic the voice?

[00:25:51.640] Alex Coulombe: Yes, that's correct.
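For context, ElevenLabs exposes a REST text-to-speech endpoint that can speak arbitrary text in a cloned voice. The sketch below shows roughly what such a call might look like in Python; the API key, voice ID, model name, and voice settings are all placeholders, the endpoint details may have changed, and this is not a description of Agile Lens's actual pipeline.

```python
import requests

ELEVENLABS_API_KEY = "YOUR_API_KEY"        # placeholder
DICKENS_VOICE_ID = "your-cloned-voice-id"  # placeholder for the cloned voice

def speak_as_dickens(text: str, out_path: str = "dickens_line.mp3") -> None:
    """Request TTS audio in the cloned voice and save it to disk."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{DICKENS_VOICE_ID}"
    response = requests.post(
        url,
        headers={"xi-api-key": ELEVENLABS_API_KEY},
        json={
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model name
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.8},
        },
        timeout=30,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)

# speak_as_dickens("Welcome! The performance will begin in just a few moments.")
```

In a setup like the one described here, the conversational agent generates the reply text and a call like this turns it into audio in the actor's cloned voice.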

[00:25:53.501] Kevin Laibson: And in fact, the sample that we were using for the demo we were showing at the beginning, though not anymore, was an audio sample we took from rehearsal that did not cut out moments where Ari was just addressing us in the production. And so the early versions of this were swapping in and out of his performance accent and then his actual, OK, I'm-just-getting-down-to-business voice. And it was a very jarring thing to have these two different Aris in this one body having a conversation with you.

[00:26:19.512] Alex Coulombe: And really briefly on that note, too, sometimes Ari is also playing other characters that have very high pitched voices or whatever. And so sometimes even the trained Dickens, for no particular reason, would still be speaking in his Dickens voice, but then start talking in a very high pitch. And you'd be like, why did he do that? And it's like, oh, because it's in his training.

[00:26:37.935] Kent Bye: Awesome. Yeah. And so, yeah. And Jackie, I'd love to hear a little bit more about as you have these different performances, like what is your show like as you're moving around these different virtual cameras? You know, when we're here at Filmgate Interactive, you actually have a pole with a camera that's also somehow linked. So you actually have like more of a physical interaction to be able to do some cinematography. So, yeah, I'd love to hear about what you experience when you do something like the Christmas Carol VR show.

[00:27:02.754] Jackie Roman: Sure, so there are number keys on a keyboard that you can punch to get the same framing that the procedural cameras see, and one of them is mapped to the virtual camera system that I'm using on an iPhone, which is just broadcasting my coordinates into the project, and that's giving our video village editor the ability to cut to my view in real time and space. And since Ari and Debbie's blocking is organically changing from one performance to the next, there's a lot of latitude to make more dynamic camera angles from one performance to the next and really study the scene and find different ways of looking at it. And we were able to give some people at the FilmGate Miami festival some hands-on experience with the VCam, and it's pretty exciting.

[00:27:52.978] Kent Bye: And you're also working with another cinematographer, Carlos Austin. How do you work together with him?

[00:27:57.921] Jackie Roman: So Carlos and I are on a back channel throughout the performance, talking about the script and where to cut to next. And he's able to see on a big monitor every single camera all at once. And I've learned a lot from him. He also brought me in to the Alien Rescue experience, and I got to learn a lot about how to control the VR camera that way as well.

[00:28:20.089] Kent Bye: So yeah, I guess one of the distinct features about this Christmas Carol VR is that it sort of felt like originally a one-man show. Maybe now there's two actors, but they're doing this Wizard of Oz technique where they're swapping out between different characters. And I don't know if you've done this in previous years, but there was this moment where I would see Ari's hand reach out, and I'd see him basically click a button, and then it would freeze and it would switch over to the other character. And there's a little uncanniness there, something that's very unique to VR, where they're kind of frozen in time. And so, yeah, I'm wondering if you could elaborate on that, because usually when I see Wizard of Oz, it's like someone goes behind stage and they switch it and they come back out, but he's kind of having a dialogue with himself. Sometimes his voice would slightly change, or sometimes it would sound similar, and basically I'd have to look to see which character is frozen. And I was also just thinking about, oh, is there a way to add an AI component so there's a way where they're still kind of fluidly moving around? Because it was very distinct. Like, OK, clearly I know that the soul of this character is now evaporated because he's kind of acting really weird and uncanny. And so, yeah, I guess the transition still felt a little rough in terms of the plausibility of really buying in that these were two characters rather than one person who was talking to himself. So I'd love to hear some of the iterations of this, transitioning it from a one-man show to a two-person show, and now continuing evolutions of trying to figure out how to have a whole cast of characters with a smaller set of actors.

[00:29:45.256] Alex Coulombe: Yeah, I definitely want to hear Kevin's thoughts on this. But to give a little background on how the live performance aspect has evolved over time, it was very interesting, because back in 2021, we did have the expectation that Ari would be live the entire time, but playing against prerecorded ghosts. In fact, the same prerecorded ghosts that are used in the Actors Theatre of Louisville production. But what we started to find was, with latency and some of the other cognitive load of the tech challenges, it was becoming more of just a technical trick for Ari to hit all the timing correctly to speak against these ghosts than something that he actually felt like he could live and breathe into as an actor. And so we actually decided fairly last minute in 2021 to have all of the back and forth dialogue between Scrooge or Dickens and the ghosts be prerecorded. And the surprising thing, talking to even people who know XR theater very well, like Brayden Roy of Ferryman Collective, is that some people came out of the show and said, oh, that was incredible how that was all live, and there were clearly all these characters. And then we'd actually have to say, no, only some parts were live, and then, you know, Ari would very smoothly transition into something pre-recorded. He could go get a drink of water, he could do a recalibration of his mocap suit, and then come back and keep going. Then last year we decided that it really was a show that could use two performers, so we cast Debbie Dear as all the ghosts except for Marley. Marley did stay pre-recorded, and that was much more live. Still some pre-recorded sections, and we only did three shows last year. So we've done three shows now, we're already past the run of what we had, and we're doing performances all the way through Christmas. But there were some discussions, particularly from our director, David Gochfeld, about different ways that Ari could inhabit Dickens and Scrooge. And before I let Kevin speak a little more about how that puppeteering is done, there definitely are other discussions going on, as we're still in preview performances, about how to make it more clear as we move between the two of them. At one time, we had it so that if, for example, Ari went from Dickens to Scrooge, Dickens would stay in kind of an idle animation. But we did see some value in freezing into certain poses that would particularly have meaning, capturing an emotion at that moment when they switch. But for something like guiding the eye, we've discussed things like having some sense of a spirit that actually does move between the characters and could guide your eye from one to the other.

[00:31:51.616] Kevin Laibson: Yeah, I mean, I can't speak for David Gochfeld, who was the director and who wanted to make these calls. I guess I can, because I'm here and I'm doing it. But so much of what we're interested in with this project and all projects is not just what are the theatrical conventions that travel well into VR, but obviously, what are the new theatrical conventions that can be explored in VR? And so to be totally frank, I don't know that anybody was like, ah, yes, this puppeteering mechanic that you're discussing, this is the solution. But it for sure was an idea that came up that we kept coming back to, because it was only really doable here. And I agree. We all agree it's not perfect. It's not there yet. But from a prototyping standpoint, I think as a director, it's a really exciting thing to see happen. And, you know, I don't want to sound negative at all, because I'm extremely proud of this production. I consider it an ongoing R&D project. It doesn't faze me at all to hear you say, oh, it still seems a little uncanny. It's like, yeah, man, but isn't it cool that we got anywhere near it? Isn't this interesting? So, yeah, I mean, I think artistically it's not 100 percent successful, but I think that it is 100 percent satisfying in answering the question of, how do we feel about this? Is this worth pursuing? I think the answer is, yes, it's worth pursuing. And implementations of it might one day really ring beautifully. And right now, I think it's an exciting time to be making VR theater, because nothing works. All of it is a little clunky and a little awkward and a little sort of human despite itself that way. I don't like getting soapboxy. It's easy for me to get soapboxy. I apologize.

[00:33:24.932] Kent Bye: Yeah. And one of the other experiential parts that I was noticing was just the fact that I was seeing other spheres that were there, and they were like other audience members. And I was thinking about how in something like Sleep No More, you put on a mask so that you're anonymous and you're kind of on the periphery. But in a piece like this, you're kind of flowing around. And because I don't see my own embodiment, I don't actually know what other people may or may not be able to see. And so I noticed that other people were kind of flying right up into the faces of the actors, and so I was like, they're kind of getting in the way of the show. And, you know, there's this sense of, I guess it's kind of cool that I'm seeing this experience with like five or ten or twenty other people at the moment, but I'm just wondering if there's like other ways of representing them, maybe like shadows on the ground where you can see the echoes of them without having them be right in the face of the actors. And I noticed that even Ari was sort of speaking to these apparitions and kind of breaking the fourth wall in some sense of engaging with the audience, and as soon as he would do that, I'm like, oh, he can see us. Because even then I was like, I don't know if he can see us. So there's always this solipsistic sense when you're in a VR experience of not knowing what other people can see versus what you can see. And so when I was watching the piece, I was noticing how I was wanting the other spheres to just go away, because I thought they were just kind of getting in the way of the performances. So there's this trade-off between the social presence of those people, and what it means to be in a shared experience with them and these ghostly apparitions or some sort of embodiment that represents that we're there, versus just being completely disembodied as a ghost and not even knowing how many people are seeing it, and just seeing the live performance that's unfolding. So there's a number of different trade-offs, but as I was seeing it, those are some of the things I was thinking about.

[00:35:00.646] Alex Coulombe: Yeah, and just briefly to speak to that and to echo Kevin's sentiment, this is absolutely always an R&D project for us. So, yes, we've tried all sorts of different ways to represent the audience. The first year, everyone was kind of like these little candle flames. We also played with giving everyone a name tag, and we allowed anyone to turn off the name tags. Last year, we also played with turning off the avatars entirely. That would be an easy enough thing for us to implement this year as well for a more private experience, especially if there are sightline obstructions. Because on the one hand, of course, we're thinking about the fact that, isn't it cool that this can be more of an immersive theater context, versus, say, a proscenium theater where the sight lines will change depending on if you're in the balcony versus the front row, and here we can let people get as close as they want to. But, you know, it could be as simple as making the audience avatars smaller or more transparent, and that's something we'll continue to play with. And in fact, that's why this year, for the first time, we actually went for a little bit more of almost like a Netflix subscription model to this entire season of the production, because we want people to see multiple shows and see how it continues to evolve. It's not the kind of thing where December 1st came and we said, OK, we're done, we're going to run the same show for the whole month. After every single show, we are actually doing new builds, sometimes with slight changes, sometimes with more drastic ones. And I am absolutely sure that if we get to have you back, Kent, before Christmas to see one of the other shows, you will have a very different experience, and you'll probably see some things that you like more and some things where you're like, I don't know, I kind of preferred how it was in the preview experience. And a lot of that is not only for our own R&D, but we very much want everything that we're doing here in Unreal Engine to serve as a template, and ideally eventually a platform, that other theater creators, for example, you know, Ferryman Collective, could use instead of something like, say, VRChat, which is a wonderful platform, but absolutely not tailored to live events the way that we are trying to build something tailored to live events. So we're doing stuff both for our own artistic desire, but then also thinking toward what will be useful as tools for other creators as well.

[00:36:53.398] Kent Bye: Awesome. Great. And as we start to wrap up, I'd love to hear what each of you think the ultimate potential of virtual reality, immersive storytelling, and the intersections with theater and architecture and virtual cinematography might be and what it might be able to enable.

[00:37:08.655] Jackie Roman: So the thing I see happening more and more is that our world is burning and people are being displaced from their traditional means of income by robots. And what I think the ultimate potential of VR is is a democratization of many of the tools that we are used to seeing. And I'm really hoping to see all of the things that Agile Lens is working on be widely available and for everyone to see what magic we're creating in virtual space.

[00:37:39.095] Alex Coulombe: For me, it's a resurgence of the arts. I think a lot about the time in 2008 when I was studying abroad in London, and I was fortunate that whole semester to see all these incredible live shows, theater and music and opera and dance, and to be so close to the performers, to understand that that is such an inherently different way to experience a live show versus being far back in the nosebleed seats, or being in the front row watching a more amateur production. These were all world-class performers. And so starting from then, I really wanted to see if there was any way technology, having no idea what VR or AR could do at that time, could start to democratize this very rarefied experience. So when I think toward the ultimate potential of virtual reality, I imagine anyone in the world having the ability to feel like they are six inches away from a world-class performer and feeling this catharsis and this kind of world-changing sense of what the best live performance can really do. And many people never get to experience that. And, you know, hopefully we can make sure that no one's obstructing anyone's sightlines.

[00:38:35.854] Kevin Laibson: Yeah, I mean, access. Everybody said I think it's access. Everything about what is interesting about this work is making spatial, ostensibly local spatial experiences available to anyone at any time. I think more. Yeah, I was going to say, yeah, you know what? Let me leave it at that. I was going to give you my pessimistic take, but who needs it? We have enough of those. Yeah, access, man.

[00:38:59.443] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:39:04.327] Kevin Laibson: We should mention Witt. We were talking about La Passione before, and we did not mention Witt Sellers, who is an incredible developer and who's done amazing work on it. And I want to make sure that we say his name every time we talk about that project, because he isn't here in Miami with us.

[00:39:17.219] Alex Coulombe: Yeah, and Kevin, do we have something like a promo code we could give folks for Christmas Carol if this comes out before then? And by the way, if you hear this episode and you're like, oh, it's after Christmas, Christmas Carol's over, this promo code that Kevin is about to say, we will also apply it to the 2024 production. So just keep this in your back pocket. And we certainly want to encourage anyone supporting Kent and all the incredible work he's doing and listening to Voices of VR to also come check out what we're doing and give us your thoughts. You know, we want all the feedback in the world because we're trying out a lot of stuff and we love to hear what people think of it.

[00:39:49.286] Kevin Laibson: Yeah, use the code VOICESOFVR for 50% off anything. I'm going to go ahead and say anything in our 2024 season. You email us if that code doesn't work for you, but it'll work for you.

[00:40:00.153] Kent Bye: Anything else left unsaid you'd like to say to the broader immersive community?

[00:40:03.415] Jackie Roman: No, thank you so much for the work that you do, Kent.

[00:40:06.846] Kent Bye: Awesome. Well, really happy to be able to see a lot of the experiences that we talked about previously, and to see that each time I see them, I feel like there's lots more innovations and changes with just how quickly everything is changing. And yeah, it seems like Agile Lens and the rest of your team here are always on the cutting edge of pushing forward what's possible with these really interesting intersections between art and immersive storytelling and theater and XR and architecture, and also the cutting edge aspects of Unreal Engine, really on the bleeding edge of pushing what's possible there as well. Yeah, thanks again for taking the time to help break down what you've been doing in the realm of XR. So thank you.

[00:40:40.894] Kevin Laibson: Thank you so much. Thank you. Thank you, Kent. Thanks.

[00:40:44.892] Kent Bye: Thanks again for listening to the Voices of VR podcast. And I would like to invite you to join me on my Patreon. I've been doing the Voices of VR for over 10 years, and it's always been a little bit more of like a weird art project. I think of myself as like a knowledge artist. So I'm much more of an artist than a business person. But at the end of the day, I need to make this more of a sustainable venture. Just $5 or $10 a month would make a really big difference. I'm trying to reach $2,000 a month or $3,000 a month right now. I'm at $1,000 a month, which means that's my primary income. And I just need to get it to a sustainable level just to even continue this oral history art project that I've been doing for the last decade. And if you find value in it, then please do consider joining me on the Patreon at patreon.com slash voicesofvr. Thanks for listening.
