Dream was a series of 10 live performances over 8 days that used motion-captured actors with virtual embodiments, set within an immersive storyworld of Shakespeare's A Midsummer Night's Dream and powered by the Unreal Engine. The project was a research & development initiative funded by the United Kingdom's Audience of the Future programme, involving the Royal Shakespeare Company, Marshmallow Laser Feast, the Philharmonia Orchestra, and the Manchester International Festival.
They were originally going to produce a site-specific, location-based experience focused on playing with different haptic and sensory experiences for audience members, but they had to make a digital pivot to an online performance in the midst of the pandemic. They set a goal of reaching 100,000 people with a show that had two tiers: a paid interactive experience, and a free livestream of the live performance mediated through the simulated environment and broadcast onto a 2D screen.
I had a chance to break down the evolution and journey of this project with Pippa Hill, Head of the Literary Department at the Royal Shakespeare Company, and with Robin McNicholas, Director at Marshmallow Laser Feast and director of Dream. We talked about adapting the constraints and goals that they were setting out with, which included featuring some of their R&D findings within the context of an experience. There was a lot of work in figuring out how to translate real-time motion capture into the puppeteering of virtual characters, along with some very early experiments in audience participation and limited interactivity, with an underlying goal of making the show accessible to a broad demographic ranging in age from 4 to 104 years old.
We explore some of the existential tradeoffs and design constraints that they had to navigate, but overall Hill said that there wasn't anything left on the cutting room floor in terms of the potential for how these immersive technologies will continue to shape future experiments with live theater in the context of virtual reality, augmented reality, or mixed reality. There are also lots of exciting and difficult narrative challenges in figuring out different ways for the audience to participate and interact with the story.
There are also opportunities to further explore a tiered model of participation with differing levels of interaction, as well as a lot more underlying narrative structures and opportunities to grant either individual or collective agency, and to feed that back into the unfolding of a story or experience.
In the end, there are probably more new questions than firm answers on a lot of these existential questions of interactive and immersive narratives, but the scale and positive response that Dream has received so far help to prove out that there is a potential market for these types of interactive narrative and live performance experiments. There was also a 60-question survey that I filled out afterwards, so I expect there to be even more empirical data and research insights digested and reported on in the future as well.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s some behind-the-scenes video clips sent to me by part of the production team.
DREAM had innovations in live motion captured theater performance in @UnrealEngine in an @audiencefuture collaboration with @TheRSC, @marshmallowlf, @philharmonia, & @manchesterfilm.
I get the full backstory from RSC's Pippa Hill & MLF's @robinmcnicholas: https://t.co/62dstPs8LK pic.twitter.com/vFot8G1xwz
— Kent Bye (Voices of VR) (@kentbye) March 24, 2021
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So in today's episode, I'm going to be breaking down the experience of Dream with a couple of the creators from the Royal Shakespeare Company, as well as Marshmallow Laser Feast. They collaborated with the Philharmonia Orchestra, as well as the Manchester International Festival, to bring together all these immersive technologies and to see how to blend them together with live performance and theater. It's a part of a larger initiative called Audience of the Future. And they were doing this research and development on all these different technologies, and then the pandemic hit, and they had to do a whole digital pivot online. And so then they created what was a little bit of an experiment, I'd say, where they're trying to prototype what it means to bring an experience to, their goal was 100,000 people, using live performance and motion capture and aspects of theater to be able to translate characters from the story world of Shakespeare's A Midsummer Night's Dream into a story, a self-contained story, that has some elements of audience participation, where there's different things the audience can do to be able to impact different aspects of their own experience of this narrative. So I had a chance to talk to Pippa Hill. She's the head of the literary department at the Royal Shakespeare Company, as well as Robin McNicholas, who was the director at Marshmallow Laser Feast, as well as the director of this project. And just to be able to talk about both the evolution of this project, but also some of the lessons that they're taking away in terms of how all these immersive technologies are going to be blended together with these other performing art practices like theater. Yeah, also just as a reflection that there are these different ephemeral performances and experiences that happen.
And because there were so many people that were able to experience this, I think there are some interesting innovations in terms of live motion capture and performance, but also just the distribution aspect of being able to reach the scale that they were aiming for. What are the various trade-offs in order to get to that level? So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Pippa and Robin happened on Wednesday, March 17th, 2021. So with that, let's go ahead and dive right in.
[00:02:16.207] Pippa Hill: My name is Pippa Hill, and I work as the head of the literary department of the Royal Shakespeare Company. And I'm relatively new to VR and immersive media, but I am not new to storytelling. So I'm the Senior Dramaturg of the company.
[00:02:35.502] Robin McNicholas: And my name's Robin. I'm Director at Marshmallow Laser Feast, and I've had the honor and privilege of working with Pippa Hill on the project, Dream.
[00:02:48.569] Kent Bye: Great. So maybe you could give a little bit more context as to your background and your journey into working on this project, Dream.
[00:02:55.052] Pippa Hill: Go on, Robin. You start.
[00:02:56.713] Robin McNicholas: OK. My background: I've been interested, and our whole team have been interested, in the XR space or mixed reality for some years. In fact, that's how we bonded really and got going. And we have found ourselves in these positions where, thanks to organizations like Sundance New Frontier and Tribeca Storyscapes and IDFA with DocLab, we've met various entities on the circuit. And what has transpired over those years is, as the friendships have flourished, we have said, hey, we should work together sometime. And thanks to Sarah Ellis, who is head of digital at the Royal Shakespeare Company: she thought, you know what, let's put an R&D bid in, and she created a consortium, and it's a vast consortium of tech and creative practitioners from a very diverse background, to work together and explore audiences of the future. And what happened in the case of this project is we effectively have done a two-year deep dive into research, and the project Dream is in fact a demonstrator of that research, a small part but an important part of the R&D.
[00:04:30.949] Pippa Hill: And from my perspective at the Royal Shakespeare Company, I commission all of the new experimental work there. So we've got quite a big lab called The Other Place, which has 30 or 40 commissions, new projects, running at any one time. So it's experimental, radical, investigative work that is my main preoccupation at the company. And this project arrived really at my doorstep in 2019 when Sarah Ellis brought me on board to consult with MLF and Philharmonia and Manchester International Festival on storytelling. And she took me to Sundance, to New Frontiers, got me in a swimming pool with a VR headset on and got me fully up to speed on this new and very exciting space. And I got very excited about the possibilities of particularly puppetry. I spend a lot of time doing experimental puppetry workshops, storytelling, different forms of storytelling on stage. And it struck me that working in this space, you have this incredible potential to puppeteer really sophisticated, really responsive avatars with a live acting company and we had already worked on Tempest in 2016 which had some live motion capture on stage. So we knew that it was something that we were really excited by and really interested in and I began working on this project straight after my deep dive at Sundance.
[00:06:13.158] Kent Bye: Yeah. And just to elaborate, the deep dive into the pool was a literal pool where there was a piece where you were in the water last year that I had a chance to try out as well. Not just a metaphor. So it's nice to hear about these festivals that I've also been going to since 2016; the first Marshmallow Laser Feast piece that I had seen was actually at that Sundance in 2016. So I know that there's been this cadence of different innovations that have been happening slowly over that time. And I think what's interesting to me is to see that this project is bringing in a lot of those technologies, but reaching a certain scale of being able to reach 100,000 people. But then a lot of the type of avant-garde experiments that have been seen at these festivals, like Draw Me Close was at Tribeca. And that was like a one-on-one experience where there's almost like an inverse relationship: you can have a one-on-one experience to have the most amount of narrative agency and embodiment and presence versus, you know, when you start to do things at that scale. So we'll get into that in a bit, but I want to maybe sort of trace back into the evolution of this Audience of the Future, because I actually ran into Sarah Ellis at South by Southwest in 2019. I have an unpublished interview that I did with her about the very beginnings of this. It'll be interesting for me to go back and listen to that now that I've seen where it ended up. But I heard that there was a bit of a pivot that had to happen in this project because there was a lot of focus on the site-specific nature of things. But yet with the pandemic, everything had to kind of shift into this online aspect, which ended up becoming Dream.Online, which was the show that's been showing from March 12th to the 20th. So maybe you could talk about that, where you thought it was going to go in the beginning, and then that moment where you had to pivot into going completely virtual and completely online.
[00:07:54.179] Robin McNicholas: Yeah. Well, we had got seriously far into an LBE with regards to this project. It was still drawing inspiration from A Midsummer Night's Dream, but heavily tactile, in a disused shopping center or mall in Stratford-upon-Avon. And we were taking over three floors and very much throwing tactile screen-based technology at this. But at the core, we were already developing with Unreal Engine, and we realized actually during the development and pre-vis of the LBE project that we had something quite exciting that could be ported, that could effectively author a living story world with the potential to live on after the LBE had passed in terms of the showtimes. And effectively what happened was COVID hit. COVID took out that LBE project and we all collectively had to come to terms with the fact that something we were very passionate about wasn't going to happen. So there's a bit of a grieving process that took place there. But I think there was so much passion and interest within the consortium that we wanted to continue. And already some of the legacy elements of that live on; for instance, March 13th last year, Friday the 13th last year, was in fact the last day of everything being open before lockdown, and it happened to be the day that the Philharmonia Orchestra recorded the Ravel piece that we use in our digital pivot. So bits of it live on, and, I mean, crikey, what a year. And effectively for the last six months the entire consortium have just lived and breathed this project.
[00:10:02.932] Pippa Hill: On Zoom! We made a show on Zoom. It was so extraordinary. I've never ever been through anything like it. But it wasn't until January this year that we all were able, or some of us actually, not even all of us, were able to get in a room together. And we had very rigorous COVID testing protocols and masks and social distancing. But actually, making the show during the global pandemic and the lockdown in the UK, I think, has had an impact on the storytelling, a huge impact actually, on how it's developed, how the narrative's developed.
[00:10:47.343] Kent Bye: Yeah. Maybe you could describe in your own words what Dream ended up being, for people who may or may not have had a chance to see it at this point; we're right in the middle of the run actually. But yeah, maybe you could just describe the final product that you ended up with?
[00:11:05.191] Robin McNicholas: Well, one of the key objectives with this digital pivot was to create an experience that was able to showcase some of the R&D that we had been developing as a consortium, with a requirement of hitting 100,000 people and getting a meaningful experience to that amount of visitors. And so what it ended up being was an online production open to everyone for free, but with the option to buy a ticket as well, and the run was for 10 shows over eight days, right? So from the 12th of March until the 20th. And it is fascinating seeing the result of that. We're mid-run at the moment, and we've already discovered that the types of audiences are broad, which we were really hoping for; digital natives, as well as those more traditional, less digitally literate visitors, are also attending. And that's a really important aspect. The inclusivity, in fact, to reach that amount of audience figures was critical. And that definitely has informed the design process. It's also served as a major challenge because, you know, how do you go about writing a story that appeals to millennials as well as Shakespeare aficionados?
[00:12:39.665] Pippa Hill: Good question, Robin.
[00:12:42.667] Kent Bye: Yeah. And Pippa, maybe you could expand a little bit, cause after I watched Dream, I went back and watched all of A Midsummer Night's Dream just to get more context as to the story world. Cause I have an engineering background, not a humanities background, so there are a lot of these core things that I was not aware of. So I was trying to see, like, what is the bridge that you're making there? And it feels like you're taking the story world of A Midsummer Night's Dream and taking a few of the characters, especially the fairies, that were maybe a very small part, but then kind of really blowing them up as characters and giving them a whole virtual embodiment. But Pippa, maybe you could talk about that process from your perspective, in terms of how you look at the corpus of all the Shakespeare texts, and just the process of looking at a section of a story world like A Midsummer Night's Dream and then starting to expand it into this whole immersive experience.
[00:13:32.694] Pippa Hill: Yes, I think the challenge that we had was that we were tasked with making a piece of work that was inspired by A Midsummer Night's Dream. We were specifically asked not to stage A Midsummer Night's Dream. And that, I think, was a really exciting challenge because, of course, it's an amazing play, but what's so incredible about it is it has so many different story strands. And they're so deftly and brilliantly woven together. And the characters are so universal, you can completely connect to them. And the fairies are strange, but also very human-like in the way that they react and respond to each other and to the mortals around them. So we really took an environmental approach, not in terms of global warming necessarily, but the environment. We wanted to grow a forest out of the poetry, and that's essentially where we started. We pulled a few tiny strands of narrative from the play, some very minor characters who have very little agency in the original play, and Puck, who is such a brilliant MC, just so charismatic and mischievous and irreverent as a character. So that was the starting point, and Robin and I got very excited by the idea of showing in a not-real wood what was really beautiful and magical and amazing about a real wood. So there is a narrative about Puck and the sprites that they meet, and a storm, and a kind of collaborative call to action. It's quite a delicate story arc. But through that, we've woven these different perspectives of what a real forest might look like in Unreal. So we've got this journey up into the canopy. We take the audience down into the roots, the mycelium network, which is just incredible. And it's such an amazing thing that nature creates that we don't normally get a chance to look at. So we're very interested in exploring the magical elements of the play, but also some of the imagery in the play that describes what's really beautiful and extraordinary about a British wood.
[00:16:08.860] Kent Bye: Yeah, and Robin, I'd love to hear a little bit more about the world building process here, because I could definitely see an evolution from In the Eyes of the Animal to Treehugger to We Live in an Ocean of Air, and even, you know, aspects of Sweet Dreams in terms of what was originally going to be the haptic and sensorial experience of it, which then had a pivot to maybe not have as much of that within this final experience. But yeah, there seems to be the Marshmallow Laser Feast aesthetic in terms of capturing aspects of the earth at many different scales, which I think is one of the more intriguing aspects of this piece in terms of the nested hierarchy of these different contexts, that you're able to really take us to a place that we couldn't actually go as humans, and we have to have the virtual representation of that. And so, yeah, I'm just curious to hear, as you built that world out, how you're able to transport what would normally be a theatrical production where there'd be a lot of imagination about these fairies and the world that they live in, but you're able to actually take us to these places that go way beyond what a normal theatrical production could do.
[00:17:11.957] Robin McNicholas: I think that's it. The explorations in our past work, even down to, we were involved in the animation of a documentary film called Fantastic Fungi. And effectively, what we're interested in at MLF is exploring sensory perception. And through that, using tech like Unreal Engine, for example, you're right, we can take people into the nooks and crannies that I think a traditional stage show may struggle with, although I imagine there's quite exciting ways that you can address that. It's not to rule out the creative possibilities on a stage, but it's leaning into what Unreal Engine can provide for us. For example, you can create characters that defy the laws of physics; you can create kinetic characters. And it was very important for us to hand over and empower the acting company with those characters. So they can be puppeteered and exist within the nooks and crannies of this forest in ways that have been very difficult to achieve in the past, especially in a live setting. And I think what we have discovered with that process is that there is something quite new, and it introduces new conversations that in fact have only been exposed in recent times because of stuff like real-time mocap as a new input. You know, in the past, mocap has been quite a laborious process that involves multiple takes and lots of cleanup and post-production. Whereas within Dream, we're using live mocap. So it means that we can improvise, and we can explore, and there's more creativity; it's less of a painting-by-numbers approach, and more involvement, creative involvement, from the actors and the movement director. And as a result, it allows us to cultivate ideas in real time, and I've got to say that's one of the most exciting and eye-opening aspects of this, that then extends to, like, hang on a minute: these virtual beings, in the future, audiences can embody these virtual beings.
There's all kinds of new questions that emerge and new narrative structures that emerge from that development. And I think having seen a few shows now and allowed our brains to rest a little bit, Our sites are firmly fixed on like crikey. The discoveries are the important thing here. You know, something extraordinary is on its way and it's interwoven with virtual beings, telepresence, hopefully when cultural spaces reopen, immersive tactile experiences that erode the geographic constraints that are usually entangled with bricks and mortar theaters.
[00:20:28.045] Kent Bye: Yeah. I wanted to dive into some of that motion capture and the puppeteering aspect, and then also after that, dive into more of my experience as well as some of the agency questions that I think come up as you're exploring as well. But one of the more interesting things that I saw out of the Dream experience was, you know, you start with, hey, this is live, and there's a live shot. And at this point, I actually don't know if it's live or not. It's like trying to interrogate the liveness of the live, I think, is another deep question that I come up with all the time when I'm watching these performances, because it's like the Wizard of Oz; the person behind the curtain could sort of be faking all of it. And so it's hard to know what is it about this that is making it live. But as you go into, like, hey, we're about to jump into this, here's what we're doing, you're in some sense breaking that fourth wall. You're saying this is how we're doing it. And then you see it. And then afterwards you have even more of the breaking down of how these characters were being puppeteered. Because as I was watching it the first time, I wasn't necessarily thinking about, oh yeah, there's a person behind this right now. Because again, I don't know what's pre-recorded and what's not. And you're doing a combination of synthetic simulation on top of these live performances. And so I'd be curious to hear about that evolution of this puppeteering and having all the live mocap and working with these actors. And they're learning how to work with these virtual embodiments and find new ways of combining choreography with dance, with puppeteering and acting, and all of this that you're kind of fusing together, which I think is, for me, one of the really interesting innovations out of this piece that I can see is going to have continued experimentation: this specific aspect of the live performance and these different virtual embodiments.
But I'm curious to hear about that process of, you know, if that was always there from the beginning before the pivot, and then as you get into this kind of live performative aspect, you know, some of your takeaways now that you're halfway through, what are the major breakthrough insights that you have in terms of this type of live performance in this context?
[00:22:27.624] Pippa Hill: I think we felt quite strongly that there is a necessity for the audience to be held at the beginning of this type of immersive experience. In VR, it's called onboarding. In the theater, I guess, it's what you get when you enter the building and people are really nice to you at the box office and they tell you where the bar is and where the playroom is. And there's a welcome where you feel as if you're being looked after and gently eased into the first part of the storytelling. And very early on, Robin said, well, I think it would be amazing if we started the show in that way, where we have an actor who's not in character yet, who greets the audience and takes them on the journey. And I thought that was a completely genius idea. And we had something similar with the show that we had conceived before, but it was more smoke and mirrors. It was more about tricking the audience into feeling like they were going into a real shop, and then suddenly the real shop turned into something completely different. Whereas this was a more honest approach, I think, in terms of welcoming the audience, setting their anxieties about the format and the technology at ease, and holding their hand and taking them in. And that felt very important. It is all live each night, and we really felt that backstage before a gig or a show or a TV interview is quite a familiar format for most audiences. Most people have seen something along those lines, whether that's TV newsrooms or gigs, so we felt it was something that was familiar and something that people would recognise and feel comfortable with. So that felt really important to us. And the honesty of showing the studio, we do that three times in the show: the beginning, the middle and the end. And that felt very important, I think, on a really simple level, because of the play within a play, which is a very Shakespearean thing to do.
But the play within the play, in Shakespeare's plays, is where you see the truth. you see the real honest truth and it informs the rest of the storytelling. And so we were really inspired by that. And we felt it was a very important element to allow the audience into the truth. And of course, you know, MLF have a brilliant history of playing with perception. And we were really fascinated by perception. There's a line in the play, which is, how easy does a bush seem to be a bear? And we were very inspired by that, the idea of showing something that isn't real, something that is real, and asking the question of the audience, you know, which world are you more excited by? Which world do you want to see more of? So that was something that we were very inspired by.
[00:25:41.482] Kent Bye: Yeah, I mean, I don't know if you want to expand on the acting part, cause that was a big part of at least the version that I saw, with the Cobweb actor going through how they were puppeteering Cobweb there. And so, yeah, just the ways in which you're translating the motion capture data into these virtual embodiments, and how puppeteering is maybe a different aspect of acting that is distinctly different from theatrical performance on its own, but you're fusing those things together. And I'm just curious to hear how the fusing of the technology is actually introducing new aspects of how these actors can act, which we've seen in other ways, like The Under Presents: Tempest presentation of actors being able to be at home and use these virtual technologies. But here, they're in a motion capture studio, using the highest-end mocap that you can get, and what kind of things you can do when you have that level of detail of being able to track an embodiment and translate that into a virtual character.
[00:26:31.401] Pippa Hill: So what was really amazing about this process, from my perspective, was I'm very used to working with actors in a space exploring character, exploring puppetry. But what was really amazing about this was that the development of the characters was happening at the same time as the rehearsals. So Cobweb is a really good example, because for Maggie Bain, who's playing Cobweb, there was an element of the first version that was quite counterintuitive to puppeteer. So some of the actors are in full embodiments of their avatars, but Maggie is puppeteering an eye. So the eyelashes open when she moves her hands apart. And the pupil opens when she steps forward or steps back. But in the original iteration of that, the tracking was different and more complex. And she worked with the technical team to develop a type of tracking that she felt she could puppeteer in a much more natural way and play the truth of the scene without being incredibly distracted by trying to stand on one leg or do something that didn't help her performance. And that to me was really fascinating, because that's really properly developing the technology alongside developing the performance and the character development. And that felt genuinely extraordinary and really exciting.
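The kind of mapping Pippa describes can be sketched in a few lines of code. This is purely an illustration, not the production rig: the function names, ranges, and units are all assumptions. Hand separation drives the eyelash opening, and stepping forward or back drives the pupil.

```python
# Illustrative sketch of the Cobweb eye-puppet mapping (assumed names
# and ranges, not the actual production code): two tracked mocap
# signals become two normalized puppet parameters.

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def cobweb_eye_rig(hand_distance_m: float, body_depth_m: float,
                   hands_range=(0.2, 1.4), depth_range=(-0.5, 0.5)):
    """Hand separation -> eyelash opening; forward/back position
    -> pupil dilation, both normalized to 0..1."""
    lo, hi = hands_range
    eyelash_open = clamp01((hand_distance_m - lo) / (hi - lo))
    lo, hi = depth_range
    pupil_open = clamp01((body_depth_m - lo) / (hi - lo))
    return {"eyelash_open": eyelash_open, "pupil_open": pupil_open}
```

Tuning ranges like `hands_range` and `depth_range` is roughly the kind of adjustment that the iterating-with-the-actor process Pippa describes would involve: making the mapping feel natural rather than physically contorted.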
[00:28:07.612] Robin McNicholas: I think, to add to what Pippa said there, one thing I discovered was, of course, within this project, like many projects, it all comes together at the very end. And in terms of the characters, say, for instance, Moth, we had these elaborate plans for Moth to be this kinetic entity made up of lots of tiny moths that manifests at moments as a big moth. And we had to reassure Jurassic, who plays Moth, that this was on its way, whilst Will Young, the lead developer, was doing extraordinary things with flocking algorithms to make sure that Jurassic could in fact control those algorithms himself. So they're not sequenced. And that was an important aspect: like, come on, let's empower the performer. And so when Jurassic raises an arm, that's what triggers the flocking. And when he lowers it, the moths then reform as a giant moth. And with all of that considered, we had interactive audio as well, with this tool called Gesturement, where we were mapping the acting company's movement to the music as well, which was another aspect. And of course, the flocking algorithm, Jurassic's performance, and the music all had to gel. And we're really pleased with where it landed, but more excited about realizing, wow, this is just the beginning. This is fully integrated. And we're getting a real sense now of how more advanced dynamic systems can be offered up to actors to convey their versions of these characters. And for me, lifting the lid on that was a major exciting aspect of this production.
[00:30:14.462] Kent Bye: Yeah. And I wanted to dive into the agency questions here, because I feel like this is one of the biggest existential challenges for how to blend together feeling like you have a sense of embodied agency or narrative agency with this sense of being immersed into this world. I see it as this contrast between having authorship of a fixed narrative, like say a film, where the script and the actors have the most control in terms of building and releasing the narrative tension. Whereas on the other extreme of the generative narrative, you start to get into an individual's expression of their own will to do and make whatever they want, and it becomes more of an open-world video game at that point. And so there's a spectrum of where you land. And with Dream, what I see at least is that because you were going for a goal of reaching 100,000 people, you can't have 100,000 cooks in the kitchen in terms of having each of those people make meaningful decisions that are going to change the direction of the narrative. And so you have to find other ways for them to feel included. And so I think this is a challenge; I think actually the goal of having narrative agency versus the scale of the number of people that you're reaching is a tricky thing to handle. But I think where you landed, with this tiered system of having people who do have a £10 ticket and have ways to interact, but also people who can passively consume, this type of model could potentially be used in other immersive theater projects that have had trouble scaling up to that scale or reaching a certain level of accessibility. So there are these accessibility trade-offs in terms of making it available for people, versus, on the other end, maximizing the level of agency that you have in the experience; they seem like a dialectic that is mutually exclusive on certain aspects of those things.
So I'm very curious to hear about your own exploration of that and perhaps even going back into where you started with the LBE of how you were addressing that issue and then how that evolved into what it ended up with Dream, but how you were kind of wrestling with this tension between the narrative design versus the agency.
[00:32:21.737] Robin McNicholas: Well, I think you're dead on in terms of the difficulty that we're faced with in terms of creating agency. Our approach was to aggregate user data. So we had a heat map, effectively. And in a very basic way, users interacted with a firefly. And the firefly played the role of part lighting department, part costume, and part choreographer, in a way, allowing Puck, the lead character, to navigate the wood. And effectively, our heat map would present us with the location on the map that had the most votes, and a hero firefly would be placed at that point that would draw Puck in. And actually, that's where Nick Cave's voice came in. Nick Cave played the voice of the wood. I would say that along the way, that process exposed so many difficulties. We got so close to a full-screen, non-invasive GUI, as it were, which we painfully and reluctantly had to shelve, partly down to time, but also down to just the design process and being explicit about the moments within the experience when audiences should interact. And you've got to remember, we're dealing with audiences from four years old to 104. That was the goal, you know, and so we had to be inclusive. So that was a design challenge: to make something that people didn't require prior knowledge to interact with. We knew that if we were to hand a PlayStation 5 controller to a gamer, they'd be thrilled and take to it like a duck to water; but hand that same controller to someone whose digital literacy is only just improving because of the pandemic, who is just getting to grips with Zoom, and that's a very different kettle of fish. And so the design process there was a big influence on how we made our decisions to bring interactivity in. I would say that we scratched the surface there. And for me, being so passionate about that area, what I look to is just the realization that, yeah, this two-way mass audience interaction can be done. It's a challenge.
And to some extent, the open world with AIs that help that process is, as you mentioned, a completely different silo, and something that I hope this project presents to other makers and to respected teams within the consortium, to realize, oh, there's something there. We haven't quite cracked it, but let's lean into it and explore it further. And that overlaps in some way with other emerging cultural experiences, such as Bandersnatch: the non-linear storytelling and branching narratives that are emerging in popular culture. In the case of Dream, we entertained the idea of branching narratives, but very quickly came to a decision to have a solid narrative with a robust arc that held people, for cohesion and for many other reasons. That process exposed the possibilities, and, yeah, we've learnt now that this is a challenge we'd be very willing to take on in the future.
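The heat-map mechanic Robin describes, where thousands of audience firefly drops are aggregated and a single "hero" firefly is placed at the most-voted spot to draw Puck onward, amounts to a simple vote-binning step. Here is a minimal sketch, assuming normalized screen coordinates and a hypothetical 8x8 grid; the function name and tie-breaking behavior are guesses at the approach, not the production logic.

```python
from collections import Counter

def hero_firefly(drops, grid=8):
    """Aggregate audience firefly drops into a heat map and return the
    centre of the most-voted cell, where the 'hero' firefly that draws
    Puck onward would be placed. Coordinates are normalized to [0, 1);
    the grid size is an illustrative assumption."""
    votes = Counter()
    for x, y in drops:
        # Bucket each drop into a grid cell: this is the heat map.
        votes[(int(x * grid), int(y * grid))] += 1
    (cx, cy), _ = votes.most_common(1)[0]
    # Return the cell centre as the hero firefly's position.
    return ((cx + 0.5) / grid, (cy + 0.5) / grid)

# Simulated audience: three clusters of drops of different sizes.
drops = [(0.10, 0.10)] * 500 + [(0.80, 0.85)] * 1200 + [(0.40, 0.55)] * 300
print(hero_firefly(drops))  # (0.8125, 0.8125): the crowd's favourite area wins
```

Note how this also illustrates the agency problem discussed later in the interview: any single drop among thousands shifts the winning cell only marginally, so an individual audience member cannot trace their own contribution in the outcome.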
[00:36:11.435] Pippa Hill: It was interesting from a narrative perspective, because the branching narrative idea, whereby different members of the audience could choose a different narrative strand to travel down, we explored that in some depth. And what we found was that the narrative became more pre-baked that way. You could send people off on different paths and let them choose their way, but the elements of predetermined narrative became much firmer that way. One of the other reasons why we lit upon this idea of the audience being able to send in their fireflies was that it gave them the opportunity to explore completely freely. They weren't set on a rail and sent off down one path, which I think would narratively have been the experience if we'd given five different options, because ultimately you then have to be more firm about where people land in order to hold a narrative together. So we did explore it in some depth, and this way, we felt, there was more freedom, but the interactivity had a lighter touch, I think. But the other thing that we really wanted to experiment with was the idea of an interactivity that gained momentum in terms of narrative meaning. So the first time you interact in Dream, it's really good fun: you're pinging fireflies, they're lighting up the forest, it's beautiful, and, you know, Puck's having a great time collecting all these fireflies, and it's enjoyable. But the second time it happens, you're faced with a character who is completely devastated, and your job is to help re-energise and rejuvenate them. There's more weight and more responsibility for the audience to take on in that interaction, I think. And then the final one, I think, is an even weightier responsibility: you've just seen this landscape completely devastated. Can we all actually make an effort to do something about it and help to rejuvenate it?
So this sense of gaining momentum in the meaning of the interaction was something that we were really intrigued by as well.
[00:38:36.145] Kent Bye: Yeah, I wanted to share some of my own direct phenomenological experience of that agency, because as I listened to some of the marketing, it's like, oh, you're going to be able to interact with this story. And I'm thinking that covers a whole range, like, oh, I get to be a firefly that can fly around and see this experience, which would be amazing to be able to do. But at the same time, it's in a 2D interface, and we already have a lot of challenges in VR in terms of attention and what you're paying attention to. So if I'm able to fly around and look at everything, then obviously I'm not having the focused attention on what's unfolding. So that's a trade-off there of having that embodied presence. As for the narrative agency, I wasn't necessarily expecting to be able to have a branching path; I was in my mind expecting something that was a little bit more on rails. But I guess the challenge that I had with the agency was that I had this whole separate context that popped up, rather than being fully immersed in this world, even on the screen, and directly interacting with what would be more of, let's say, an augmented reality layer on top of this world. Because the challenge that I found was that, okay, I'm dropping fireflies into this scene, and I want to try to trace my agency. I want to drop here and see what happens. And it's almost like when you're turning on a faucet or trying to get the right temperature for a shower: you turn the knob, and there's enough of a delay that the longer that delay, the less I can trace my agency. It just feels at that point kind of random. And then, you know, I try to say, okay, I'm going to direct it, but it's also an aggregate direction. There are thousands of other people also dropping fireflies, so whatever I do is also contrasted against what other people are doing as well.
And so that trace of agency then becomes some small part of a statistical influence, but again, it's like a butterfly flapping its wings, and it's hard to have a trace of my narrative agency, or any sort of agency, within that. So that was sort of my experience of it. And I almost thought, is this taking me out of being immersed in this magical world? Would I have had a more immersive experience by not having to stress about this interactivity and expressing my will, versus being fully receptive and immersed in the world? So I think in some ways, the second context started to break my virtual embodied presence within this world, making me look at this other screen and then try to correlate the two when they were really quite separate. So that was at least my experience of it. And again, the other thing I just want to say is that I don't know, if everybody had collectively decided not to participate, how that would have changed how the story unfolded. Like, would Puck have gotten lost, or would he have said, hey, I need some light? Would there have been some sense of, okay, now we're going to participate because this character actually really needs us? So that again was another aspect: it felt like it was going to move on anyway, and I couldn't really tell how I was shifting things. So that was just some of my own experience of the agency in Dream.
[00:41:18.587] Robin McNicholas: I think that I identify with your experience, and I think it's one of the key learnings. And certainly, I think, given more time on the ground or more iteration, we would probably have refined that further through user tests. However, saying that, what was interesting was that we were surprised by the young audience, for example: kids under 12 engaging with this, and their ability to comprehend and access it. Just that basic level of interaction was an aspect that we were quite encouraged by. And I'm not saying that we designed this for under-12s, but similarly, through user tests, for those coming in who are not digitally literate, there was a sense of getting their heads around this kind of first rung on the ladder. But with all of that said, there's a kind of needle-in-the-haystack aspect to mass interaction, where I think one of the largest learnings is, like, wow, I think it's in the handholding process: allowing an active audience member to correlate and make sense of their agency is critical. And it's certainly something that we would like to develop and share our experience of further, to say, hey, we've not cracked this by any means, but we're recognizing it, and we're really keen to see what other people can riff off the back of it, and to apply that experience to our future projects.
[00:43:11.550] Kent Bye: Yeah. Pippa, I don't know if you had anything to add there.
[00:43:15.578] Pippa Hill: Well, I mean, I think Robin's probably summed it up really well. I found the challenges of the interactivity narratively really, really exciting. And I agree that as soon as you have a large number of people interacting, they need to have agency. They need to feel that they're having an impact. And I think we definitely got part of the way there. And we were very excited and very intrigued by the possibilities. But yeah, you know, it's difficult. It's a really difficult thing to pull off. And I loved the interactivity, but I'm a complete novice when it comes to that kind of thing. So I think I probably sit in the under 12s in terms of my enjoyment of that particular mechanism. I just, I really love it. But yeah, I can see where we've been and where we could go.
[00:44:10.352] Kent Bye: Yeah. And just in terms of the feedback that you've been getting, because I know you're doing a survey, which, by the way, was a whole other experience: filling out a 60-question survey, which is almost an experience in itself, a very interactive part of the experience. I think there's an element of, you've just experienced this beautiful thing, and then you have to quantify it in all these numbers, which is a whole experience to go through. But I'm happy that there'll be data produced from it. I'm just curious, both in terms of the data that you're seeing and just hearing the feedback from people, what's been the reaction so far?
[00:44:43.934] Robin McNicholas: The reaction from our community and friends within the XR scene has been really encouraging. I'm relieved and very proud of what we've created. And we're determined to share the knowledge, and to some extent, it's just very important to share the obstacles and challenges and difficulties that we've been faced with as well. But on the whole, there has been a really positive response from a varied audience outside of the XR scene. The traditional theatre-goers, for example, were my biggest concern, if I'm perfectly honest, in that, well, technology and the use of tech can be a barrier between the maker and the audience. The live theatre experience is all about the visceral tension in the room, the atmosphere in those live spaces, and this production is by no means suggesting that it's a replacement for those live events. We cannot wait to get back into theatres and other cultural spaces. And what's encouraging is that, with all of the tech and all of the Unreal aesthetics and new aspects that traditional theatre-goers have been faced with, there's still a sense that it is lifting the lid on something that we might see much more of in the future. And from my perspective, the prospect of embodying these characters, the prospect of eroding that line between the performer and the audience, is where I think there's lots of creativity and exploration to be had.
[00:46:37.126] Pippa Hill: I think from my perspective, the most exciting thing is the number of people who've come and watched it, which honestly really blew my mind. There was one performance where we had 7,000 people watching, and that's a whole week's run in Stratford-upon-Avon in 50 minutes. So the accessibility and the democratisation of access through this type of technology, I think, is really thrilling, really thrilling. And in the theatre industry, there's a genuine curiosity about how the toolkit that we've developed for this show could be applied in live performance, in a space with an audience as well, and about what delving into that technology can bring, can add to the toolkit. I think it's genuinely really, really exciting.
[00:47:40.706] Robin McNicholas: I'd like to add to that, in that one of the important aspects for the XR scene, especially with the likes of Marshmallow Laser Feast and similar entities who are making immersive works for live spaces, is that we've been starved of that during the lockdown. And this is a really hopeful ray of light that has made us realize, having knuckled down and focused on generating experiences for online audiences, that there's real opportunity to keep the scene going and keep active. To put things into context, We Live in an Ocean of Air sold 30,000 tickets over five months, and we did that for Dream in four nights. Not all of those tickets were paid, and that's part of it: we had lots of audience members engaging with Dream for free, but of course, there's advertising and other ways of drawing revenue from that. And I think large mass-audience figures are an important aspect of allowing the XR scene to thrive and to be taken seriously. Therefore, I hope that this is encouraging for others to effectively nurture an artistic work that is, in essence, celebrating the performing arts, allowing multiple disciplines to merge, you know, to cherry-pick from gaming, from theatre, from film, and offer up this new area. And yeah, as a result, I feel really encouraged, and I just can't wait to get obsessed with a new project.
[00:49:32.212] Kent Bye: Yeah, I wanted to share, just in terms of the innovation of that scale and why I think it's so important, because you have Sleep No More experiences for a few hundred people, but then you have experiences like Draw Me Close, which is a one-on-one piece that was tried back a number of years ago. And then you have a piece like the Meta Movie Project, where you're cast as a protagonist with other actors and become a character within the scene, but they also have fly bots flying around who have embodied agency: they can fly around and see what you're seeing. But I think you're at that additional layer of allowing people to reduce this down to a cut scene that you could then show, and maybe eventually that'll be another ticket tier, if those ticket tier prices are able to make the whole production sustainable. Because to have that real narrative agency, it needs that one-on-one type of thing, or at least a smaller scale; but to make it economically viable, you need to have a number of people seeing it. So I think this kind of tiered system works, where you have different levels, and maybe some people are perfectly fine not having any interactivity; they just want to enjoy the spectacle of being taken on this journey and have that live performance element. I think this project is already starting to prove out that there is a demand for that, and that people find it satisfying. And as you get back all this data, you're able to potentially prove that out to the larger industry and carry this tiered model forward.
[00:50:56.140] Robin McNicholas: I completely agree with your suggestion of a tiered model. You cannot compare the sense of immersion for a project like Draw Me Close. Incidentally, Ollie from the All Seeing Eye team was a big part of the Dream project, and I would say that Draw Me Close is one of the most significant immersive works I've had the privilege of experiencing. And there's a kind of light, regular, and full-fat: the light, effectively, is a stream that large audiences can enjoy, but when you get to the full-fat, all-singing, all-dancing, tactile, one-on-one experience, that's a very different, very expensive aspect. But if you treat the project and story world as a living ecosystem that can syndicate, to reach audiences that are not only from different demographics but from different areas of interest, it allows us to begin to understand what your offer is to audiences, what windows you want to open onto a story world, and how you can provide and author a story world with those windows considered, to serve people at home with the highest possible production values and keep their interest.
[00:52:29.242] Kent Bye: Great. And finally, I'm just curious to hear, as we look into the future, what you see the ultimate potential of this type of immersive theatre, immersive technology fusion together, what the ultimate potential of that might be and what it might be able to enable.
[00:52:46.026] Pippa Hill: I have no idea. And that's what's so exciting about it. I'm looking at what we've done over the past year and a half. And I don't think there's anything on the cutting room floor. I don't think there's anything that we've tried that we think, nah, that's rubbish. We can't use that in live performance. There isn't anything. So the potential of everything we've experimented with is still really hot, still really live. And that is the most exciting place to be in a research and development project in my view.
[00:53:18.923] Robin McNicholas: From my perspective, having made this project in Unreal Engine, we know that we can port it to immersive VR settings when the doors open in the cultural venues and different sites where we can showcase this work. And we know, again, because of the way in which it's been authored, that it's easy to adapt this material for a live stage show; you know, video projections are commonplace within theatres. And there's no reason why global audiences can't remote in and engage with those live theatrical works in venues. What's exciting is that this story world, at its core, can effectively be carried across all of those settings, if made in the right way, thanks to Unreal Engine and companies like them, whose focus on integrating DMX lighting and theatrical tech enables practitioners like ourselves to get the most out of a project and all the energy that's gone into it. And I think and hope that when the pandemic passes, there will be more opportunities to really explore these ventures further.
[00:54:49.161] Kent Bye: Right. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?
[00:54:55.802] Robin McNicholas: I would say that when we started this project, I caught wind of the size of the volume that we were going to use as the stage space. In fact, it was seven by seven meters, and it's the exact seven-by-seven-meter footprint that is in the MLF HQ; we have a small Vicon Origin setup there. And I realized, hmm, that's not very big at all. And we had to create a forest and take people into all the different nooks and crannies of this forest. Converting a seven-by-seven-meter volume into a seven-kilometer-square space was a major challenge. And what I am thrilled about is that the suspension of disbelief wasn't tripped up by the fact that the acting company were effectively pacing up and down while we were carrying out these polar shifts. When Puck would walk to the edge of the volume, we had to, with clever camera trickery and things like that, flip the volume around so they could spin themselves around and carry on walking as though in a straight line. And I cannot tell you how relieved I was, in probably week three, with the acting company having rehearsed, at the difference in their facial expressions, because when we first suggested these polar shifts, it just seemed so complicated and impossible. But they took to it with so much willingness that that in itself, from our side, is a major win. And they managed to pull it off discreetly; I hope, anyway, that to audiences the polar shift is unnoticeable. If you watch it back through the stream, you'll see the camera tricks and the moments where we do the switcheroos. And all of the theory about using staging boxes, to at one moment represent a rock and at another moment represent a branch in the tree, we pulled that off as well. All props go to the acting company. They threw themselves at the challenge, and I'm so proud of their capacity to take on this fluid process willingly and with so much enthusiasm.
And I think that that is a testament to the human spirit within this organization and consortium. We were so determined, and the atmosphere was always incredibly positive, despite COVID tests at the beginning and end of every week and all the difficulties relating to personal lives and all the rest of it. So it's been quite the journey. But I will always miss the passion that it has evoked.
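The "polar shift" trick Robin describes, flipping the virtual world around the performer each time they reach the edge of the seven-meter volume so that pacing back and forth reads as one continuous walk, can be modeled in one dimension. This is a minimal sketch of the redirection idea only; the class and variable names are hypothetical, and the production version involved camera trickery and performer choreography that a position tracker obviously cannot capture.

```python
# Minimal one-axis sketch of the "polar shift" described above: the performer
# paces a small capture volume, and whenever they reach an edge the virtual
# world is flipped 180 degrees about them, so turning around and walking
# back reads as continuing forward through the virtual wood.

VOLUME = 7.0  # 7 m x 7 m capture volume, as mentioned in the interview

class PolarShiftTracker:
    def __init__(self):
        self.physical = 0.0   # position along one axis of the volume (m)
        self.direction = 1    # +1 walking "up" the volume, -1 walking back
        self.virtual = 0.0    # accumulated distance in the virtual wood (m)

    def walk(self, meters):
        """Advance the performer; flip the world at each edge of the volume."""
        remaining = meters
        while remaining > 0:
            # Distance left before the performer hits the volume boundary.
            edge = VOLUME - self.physical if self.direction > 0 else self.physical
            step = min(remaining, edge)
            self.physical += self.direction * step
            self.virtual += step      # virtual travel always accumulates forward
            remaining -= step
            if remaining > 0:
                self.direction *= -1  # polar shift: world flips 180 degrees

t = PolarShiftTracker()
t.walk(100.0)  # pace roughly 14 lengths of the 7 m volume
print(t.virtual)                    # 100.0 m covered in the virtual wood
print(0.0 <= t.physical <= VOLUME)  # True: still inside the 7 m volume
```

The essential property is in the two print statements: virtual distance grows without bound while the physical position never leaves the capture volume, which is exactly what let a seven-meter stage stand in for a forest.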
[00:58:08.874] Kent Bye: Yeah, that was the question I submitted during my session that didn't get asked: whether you had deployed any redirected walking techniques. It sounds like you were doing some modified things there in order to really get the best use of that space. But it should also be mentioned that M, the lead actor playing Puck, was wearing a VR headset, which made me wonder, what is that? Was that passthrough? Were they in VR the whole time?
[00:58:33.480] Robin McNicholas: They were not in VR the whole time. Effectively, I found it hugely important to sign to the audiences at home that this is a virtual world that Puck or M is engaging in. And there were movements throughout the experience that were absolutely impossible to do with a VR headset on.
[00:59:00.640] Kent Bye: That's what I was thinking when I was watching it. How is this happening?
[00:59:06.304] Robin McNicholas: Yeah, well, that's the magic of theatre.
[00:59:09.626] Pippa Hill: It should be noted that in the section where you see the actors walking around the space, M does have a VR headset on, and it is switched off. They are effectively blind for the whole of that two-minute section, when they're being flown about by the other actors in the company. And that took an immense amount of trust for them. Also, I think it should be noted that when we started rehearsing that scene, some of the actors had not touched another person for nine months. So it was an immensely moving moment for the company to actually be touching one another, which of course we had to do very carefully, in very short periods of time, to adhere to the COVID legislation. But it became a very pivotal scene in the show because of what we'd all been through. So for M not to be able to see anything for that whole period of time, and to be lifted and flown around the space, puppeteered by the other actors, was a major piece of choreography by Sarah Perry, the movement director, and a huge exercise in trust for the performer.
[01:00:29.262] Robin McNicholas: I'd like to add, just finally, that my feeling is that the use of VR within live virtual productions, and in the authoring of these kinds of works, is just going to get so much more important as we move on. Just from a development side of things, being able to rehearse at home, and to engage with the complexities around set changes and mixed reality, exposes exciting opportunities to use augmented reality as well.
[01:01:12.018] Kent Bye: Yeah, well, as we said at the very beginning, this was a research and development project, and there's certainly been a lot of R&D and a lot of innovations already at this point. But not only that, there are also the very immersive aspects of the world-building and the story that you've been able to create, and as people have experienced it and given their feedback, I think that's testament to that. What I'm excited about as well is all the other insights that you've already started to discover, and hearing that there's so much more potential, that there's very little left on the cutting room floor, that there's lots to be mined and expanded on, and lots of new worlds to be created and explored through these story worlds and immersive entertainment. Yeah, the future is bright, it sounds like. So, Robin and Pippa, I just wanted to thank you, not only for working on this project, but for taking time out of this very busy week of premiering, and this marathon that you've been on to produce this show, to share some of these insights back with the community. So, thank you so much for joining me here on the podcast and for sharing your stories.
[01:02:09.326] Robin McNicholas: It's great to chat, Kent. Thank you.
[01:02:12.670] Kent Bye: So that was Pippa Hill, the head of the literary department at the Royal Shakespeare Company, as well as Robin McNicholas, a director at Marshmallow Laser Feast. So I have a number of different takeaways about this interview. First of all, I had a couple of chances to actually experience this piece: once with the interactive elements, and once without the interactions. And after watching it without the interactions, I started to appreciate some of the interactions more than I did the first time. I think the expectation of being able to step into a story world, have some level of embodied presence, and have a certain degree of interactivity is the expectation I formed from hearing about it. But then, as I actually went into it, the interactions that were happening were very abstracted. They were happening on a separate screen, and I didn't see much of a trace of my own agency. You know, the actions I was taking, I couldn't see how they were impacting any dimension of the experience that I was watching.
I think that's partly because there was a bit of a delay between when you were taking these actions and when you were actually seeing the result, but also because they had made the decision to separate the context of the interactions into a completely different window. As you were watching the experience, you would have a full-screen cinematic where you're watching this beautiful immersive world, and then they would do a split screen and bring up this graphical user interface that you would interact with on the side. Based upon your little drag-and-drop slingshotting of these little fireflies, you had to orient yourself within this abstraction by looking at their representation of the actor and what direction they were facing, then say, okay, I want to put a firefly there, launch it in, and then switch your attention back over to the main screen to see, okay, what happened. But because there's a bit of a delay, it's sometimes hard to even know what you did. So this level of interactivity is what Catherine Yu calls audience participation rather than interactivity, because the ways in which it interacts with what's unfolding are very hard to trace back. I know they were saying they were trying to do this kind of aggregate voting, but at the end of the day, the experience as an audience member is that I can't tell whether anything I did made any difference at all. And that gets me thinking about, okay, what is it about these interactions that you want? What makes them satisfying? In something like Sleep No More, when you have a sense of embodied presence as you're walking around this warehouse, you're able to pick and choose which narratives you want to experience. And I know that Pippa said that once you allow the audience to have these branching paths, then, you know, you have to really lock into that narrative.
But I kind of already felt like the narrative was pretty locked in here. So I think those different types of experiments would, for me, as somebody who's attended a lot of these things, be a little bit more interesting. The thing about the Sleep No More aspect is that you're walking around and you have these serendipitous collisions, and it can be quite magical when you happen to run into something that is happening and emerging in that moment, and it just hits you in a very specific way. It's that alignment where you're able to have those collisions as you're expressing your agency, having experiences that could only come from you having the choice of being able to move around and be at that place, in that moment, at that time, and then having different moments emerge from that. To do that at scale is something that's quite difficult. But I will say that before I went into this experience, there was a little bit of anxiety and stress, like, oh my gosh, am I going to be able to figure out the technology to interact with it? And there's the whole wide range they mentioned, from four years old to 104 years old; that's such a broad demographic of accessibility that you really have to become quite limited in the types of interactions you offer, because you want to make sure that the action is clear for everybody. But then there's this trade-off with narrative agency, or embodied agency, or just the degree of feeling like you're really immersed and involved within the story in a participatory way. There's a little bit of an inverse relationship there: with such a broad demographic, and the broad scale of how many people they wanted to experience it, by necessity you end up with what essentially feels like collective voting, where there's no back channel for you to coordinate in any way.
It reminds me of the collaborative project called Place on Reddit, where people were able to control one pixel every so often. In order to draw anything, you had to create community and have all these back channels to coordinate amongst lots of different people, to agree on what the different pixels and colors were going to be. And it was from that collaborative effort that emergent behaviors came out of the chaos of everybody participating. So I definitely see that there's this dialectic between order and chaos, where you want to feel that whatever agency and intention you're putting into the experience, you see some degree of manifestation of it. But this was on the chaos end of the spectrum, where it was not really easy to make that connection, so it felt a little bit unsatisfying in terms of the interactivity. But as they move forward, they're going to continue to experiment in different ways. It made me reflect upon what the different aspects of interactivity are that make for a satisfying interaction.
Probably the highest level that I've seen is an experience like The Collider, Draw Me Close, or the MetaMovie project, where each of these projects is very small scale: you're interacting with other individuals, and you have a little more opportunity to do improv and bring a certain amount of your own personality and flair. One of the themes that Noah Nelson has talked about with Third Rail Projects is being able to be witnessed, that glance back from the immersive theater actor that recognizes you as a person who's present, who's participating, and who is there to be a part of whatever is unfolding in that moment. And that's really hard to do at the scale of thousands or tens of thousands of people watching the same experience, to really uniquely identify the gifts of each of those individuals who are there. But I think in some sense that's the goal: you want to be able to add a little of your own uniqueness and your own perspective into these experiences as they unfold, while at the same time still having a narrative structure that is building and releasing tension and not getting completely derailed by a whole bunch of people wanting attention paid to them. So this is, I think, one of the existential tensions and dialectics of interactive storytelling, and we'll continue to find ways to really mash these different realms together. I see a lot of experimentation happening in the avant-garde of immersive theater, where they're really experimenting with the form and playing with gameplay mechanics. Think about game design and the gameplay loop, where you go through the whole cycle of taking an action, seeing what happens, and then adapting and taking the next iterative step; you have these opportunities to quickly go through that loop.
But like I said, there's no feedback mechanism for the degree to which you feel like you're actually getting those elements of gamification, or what I would call meaningful interactions. Although I will say that after I watched it again without the interactive parts, there were a couple of moments where, people were saying, these small interactions were meant to gain more momentum of meaning over time, so that at first you're just slingshotting a lot of different fireflies. As Robin says, you're serving a couple of different functions. The fireflies are coming down and lighting the scene, so they're part lighting department, part costume, and part choreographer. All the different places you voted on created a voting mechanism and a heat map, and at the end that produced a hero firefly that would appear and set the location where Puck would go. So there are ways in which you're changing both the environmental appearance and the appearance of the actor, as more light falls on their avatar representation. But it's not giving you any narrative agency, and it doesn't feel like you as an individual have a true sense of embodiment where you can express your unique character within the context of this experience. So I think by far the biggest takeaways, in terms of what's going to be carried forward in the future, are some of these innovations in live motion capture and translating those movements into different virtual character and virtual being embodiments: the flocking behaviors controlled by moving your arms, controlling the cobweb with Maggie Bean, the facial capture, and also the movement as it's correlated to the audio.
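The voting-to-heat-map-to-hero-firefly pipeline described there can be sketched in a few lines. This is a hypothetical reconstruction of the mechanic, not the production's actual code; the grid cell size and tie-breaking behavior are assumptions.

```python
from collections import Counter

CELL = 10.0  # assumed size of one heat-map cell, in world units

def to_cell(x, y):
    """Map a firefly vote's world position to a heat-map grid cell."""
    return (int(x // CELL), int(y // CELL))

def hero_firefly_cell(votes):
    """Aggregate (x, y) votes into a heat map and return the hottest cell,
    i.e. where the hero firefly would appear and Puck would go."""
    heat = Counter(to_cell(x, y) for x, y in votes)
    cell, _count = heat.most_common(1)[0]  # ties broken arbitrarily (assumption)
    return cell

votes = [(12.0, 3.0), (14.5, 7.2), (31.0, 3.3), (11.1, 8.8)]
print(hero_firefly_cell(votes))  # prints (1, 0): three of the four votes land there
```

The point of the aggregation is exactly the trade-off discussed above: any individual vote is just one increment in a counter, so the audience collectively steers the scene, but no single participant can trace their own input to the outcome.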
This was something that was happening, but it was so subtle that it was hard to even tell it was happening. In the future, I think it might be interesting to experiment with very clearly showing the movements and how those movements are tied to what's happening in the audio soundscape. The audio can sometimes just fall back into your subconscious, and you're not really even paying attention to it. As for correlating the movements with the audio: as I was watching, I just assumed that whatever music was playing was pre-scripted. But if there are ways to convince the audience that the movements are tied to the audio, and to make it not so chaotic, so that you can clearly see that when they move in this direction they produce this audio signature, that, I think, is part of the challenge: making these symbolic connections between body movement and how it gets translated into different synesthetic experiences. In this case it's sound, but that's where they're starting, at least. And the liveness of the live is something that comes back to me again and again. As I watched this experience, it's all in Unreal Engine, all digitally mediated, and for me it's sometimes hard to tell that this is a live performance as opposed to something that isn't happening live. The only cue I got was the jitteriness of the camera; if it were all pre-baked, I would expect a lot more steadiness in everything. There wasn't even much glitching out of the characters: they were able to clean up the motion capture to the point where it didn't have any glitchiness that took me out of the experience.
And yet at the same time, what is the messiness of a live embodied performance within a virtual environment that absolutely convinces you it's live? It's sort of a paradox: in any context you experience inside a simulation, everything could be pre-recorded. There's really no way for you to interrogate that unless you yourself have an embodied experience, where you're there interacting with something and it's mirroring you. That, for me, is one of the things that makes you really feel present: your embodied actions are actually impacting the embodied reactions of the different entities around you. And that's something that has happened in a variety of different experiences at Sundance over the years, where there's this kind of mirroring effect within these immersive spaces. But I didn't see that here, because when you're just interacting with a mouse and a 2D interface, it's a lot harder to interrogate those live moments. So I think a lot of these innovations in trying to create kinetic characters and live mocap came out of an iterative process: figuring out how to abstract your different movements, with the character being developed as the actor learns how to modulate the technology. That in itself develops how the character is able to express themselves within the context of the story world. So it was interesting to hear how much of these characters and their character development came out of this collaborative process of the actor interacting with these immersive technologies.
So it does seem like COVID and the pandemic really set this project on a different path. The original plan was a location-based experience, an LBE, that would have been much more sensory and at a much smaller scale, and they did a whole pivot away from that. Part of that digital pivot was to say, okay, now we're going to try to reach a hundred thousand people. Coming back to that design goal, it flows down into all the other decisions you're making: if that becomes one of the underlying principles you're trying to achieve, it constrains the innovations you can do in the more interactive, participatory aspects of the experience. But the thing I came back to, both in talking to them and in reflecting on the larger immersive industry and immersive theater, is that you end up having to find a way to take these live 3D experiences and translate them into a 2D context. The live streamers on VRChat, NeosVR, and other platforms, and VR streamers in general, already have to deal with this issue: they're immersed in a 3D, real-time graphics world, and they have to figure out how to operate the virtual camera to translate the essence of that experience into a broadcast on YouTube or Twitch. And when I talked to Kiira Benzing and the rest of the team on Loveseat, they were doing a live theater performance, but they were also capturing a virtual performance of it and broadcasting that out into the virtual realm. They did further experiments after that, where they actually did live streams, and Benzing said it's very difficult to keep the audience from occluding different aspects of what would otherwise be a very cinematic capture of the performance.
And so you end up having these weird camera angles, or you see the camera flying around. There are these inherent trade-offs between whatever you would do to optimize the best 2D experience of a theatrical performance versus what's going to be best for the people who are actually there, immersed in that environment. And what Kiira Benzing said with Loveseat is that whenever you try to do that, you end up producing two completely different shows at the same time, with different people in charge of each, because there are competing trade-offs and the same person can't serve those different goals simultaneously. So you end up with separate teams, each focused on its own specific context, which means you end up roughly doubling your production load. But if it means being able to scale up to dozens or hundreds or thousands or tens of thousands of people watching these shows, and they're willing to pay for it, then that scale can make it worth doing. I started to see that with the MetaMovie project, where you were an immersed character, and they had these embodied robots, drones that were flying around; they called them flybots. The flybots could decide wherever they wanted to go, there were different things they could interact with, and they could make more choices in terms of which narratives they wanted to pay attention to. There was myself as well as another character, and also two other characters who went off, and I had no idea what they were saying. But as a flybot, you could break off into those different conversations to maybe get new insights and nuances into what's happening.
One of the conceits I've seen that works particularly well in this format is the murder mystery, where you're trying to get a sense of each individual's motivations and behaviors, looking for things that are suspicious, and seeing if you can find evidence of who committed some sort of crime or murder. Those types of whodunit murder mysteries work really well for these interactive experiences, where you're trying to gather a lot of different clues and it'd be impossible for you to see everything, because it's all happening simultaneously, though in the virtual versions you have different ways of rewinding and whatnot. That's one of the genres I think works particularly well. But this is Shakespeare, and they're trying to translate different aspects of a Shakespearean story world. I would say there's quite a deviation from Puck's normal character in A Midsummer Night's Dream, and they're also focusing on characters that didn't have a lot of development or agency in the original play. So they had a lot of latitude to experiment with what types of scenarios and interactions to have with these five characters. But at the same time, there wasn't anything about the story that really stuck with me in terms of committing to the characters and feeling like I was taken on a whole journey. It felt more like a guided tour through this world, showing the majesty of the world building and the set design they were able to do, but the actual story didn't land with me as much.
So I think that's in part why I'm focusing on a lot of the other technological innovations here, because that's a lot of what's going to move forward: the performance, the motion capture, the world building, and the integration of the Unreal Engine into this type of performance. And what Robin said is that now that they have it in Unreal, they can take whatever happened here and put it into an immersive VR context if they wanted to. So I guess at the end of it, I'm left with the question of interactivity. Why do we want to interact? Why do we want to participate in these experiences? I think in some ways we want to feel what it feels like to be alive, where life is all about making choices, and sometimes the choices we make reveal parts of our own character. So can we create scenarios where we feel like we're making meaningful choices that, at the end of the day, allow us to reflect upon ourselves? Is there some moral dilemma we're faced with, where we have to make a choice, individually but also as part of a team, that shapes the outcome of the story world? And even though that context is ephemeral and in some ways contrived, can you simulate a context that allows you to discover new and different aspects of your own character? Your character is revealed by the choices you make through these different experiences, and the more pressure and stakes within those experiences, the more meaningful those choices become.
Of the different types of experiences I've had, the one that comes to mind in particular is The Collider, where you're interacting with another person and you have the choice to potentially really troll somebody who's immersed in a VR headset, or, if you're the one immersed, someone could be messing with you in different ways. What are the power dynamics, and what are your own boundaries? I felt that was a very interesting immersive experience playing with this archetype of power, boundaries, and control, and how you play with those dimensions within the context of the experience, but perhaps also how whatever you do there reflects how you deal with power and boundaries in the rest of your life. So I think that's, for me at least, some of the heart of why this is so interesting: you can start to step into these story worlds, which are completely different contexts, and maybe the choices you're making are completely tied to that story world and context. Other times, maybe there are choices that transcend the story worlds you traverse across all these experiences, and over time the choices we make reveal parts of our own character. And then there's this whole dimension of imagining what you would do in a certain scenario or situation, and in VR you can start to simulate enough of those aspects to get a taste or a glimmer of the qualia, of what it might actually feel like to be put into that situation. So with VR being a little bit of an experience machine, that's the potential, and there's so much unbounded potential for what you could do from a storytelling perspective and how to start to play with that, especially when you have live theater actors.
And one of the things Robin was saying is that you're potentially going to have ways for these actors to interact with people remotely. What would it mean to be in this immersive VR environment, with other people in VR interacting with you, but also with people there as a live audience, and different ways you could start to play with what collective actions could perhaps impact what's happening on stage? During the week of Dream, by the way, South by Southwest was also happening, which took up a majority of my attention last week: going to all the different experiences there, all the meetups, the virtual films, and the conference sessions. It was the closest I've felt in some ways to recreating what it feels like to be at a conference, where you're making choices and having experiences that could only happen based upon those choices. But one of the things that also happened last week was a little run of experimental one-act plays called Onboard XR (hashtag Onboard XR), a number of different narrative experiments. One of them was called Strings, at String.dance, where the audience brought up their phones. So I was in VR, having to pull up my phone, type in the URL, and get to this website with four quadrants that I would tap, and on screen I saw a live stream of an actor who was dancing. We were trying to puppeteer this dancer to do specific actions to escape this place. So it was a bit of spamming these different quadrants, finding some way, without communicating or coordinating with a bunch of people in any specific way other than through their actions on this app, to try to cause different movements from this actor.
So, that's yet another way of trying to translate a bunch of people's individual actions into a sort of voting that gets translated into part of the narrative. But like I said, I feel like there's something about agency: the more individualistic it is, the more you're in complete control of the actions you're taking and can see a direct effect of your narrative agency. Collective agency is an interesting problem, but it just ends up, I don't know, feeling like you're voting in an election, where sometimes you win and sometimes you don't, but at some level you know you're able to have some influence. Other times it feels like, why vote at all? It doesn't feel like it's going to make any difference; it's going to be a roll of the dice anyway. So trying to get away from that and toward these meaningful interactions, while trying to solve all these other design constraints: I think that's the heart of what makes this such an interesting and intriguing area, one that's filled with lots of opportunities for further research and further experimentation moving forward. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.