#396: ‘Sequenced’ & the Challenge of Interactive VR Narratives

The first time that I experienced Sequenced, I had no idea that my gaze direction might have been triggering different branches or small events within the story. Experiencing an interactive drama without knowing that it's one is a design challenge that Valve's Chet Faliszek has previously described to me. But there are many other challenges in balancing interactivity and narrative, which Apelab CEO & Sequenced producer Emilie Joly explained to me at the Silicon Valley Virtual Reality conference in April.


Apelab has developed a Spatial Stories platform on top of Unity in order to handle this type of gaze-triggered, interactive narrative. Emilie says that some Unity scenes contain over 150 different triggers, spanning both subtle local-agency flavorings of control and decisions that send you off into entirely different scenes.
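To make the mechanic concrete, here's a minimal sketch of how a gaze-dwell trigger could work, assuming the host engine supplies a per-frame callback and the ID of whatever object the head-gaze raycast currently hits. All of the names here (GazeTrigger, on_frame, the target IDs) are hypothetical illustrations, not Apelab's actual Spatial Stories API, which is built in Unity and is surely far more sophisticated.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GazeTrigger:
    target_id: str                 # scene object the viewer must look at
    dwell_seconds: float           # how long the gaze must rest on it
    on_fire: Callable[[], None]    # branch or event launched when it fires
    elapsed: float = 0.0
    fired: bool = False

    def update(self, gazed_id: Optional[str], dt: float) -> None:
        """Advance the dwell timer while the viewer looks at the target."""
        if self.fired:
            return
        if gazed_id == self.target_id:
            self.elapsed += dt
            if self.elapsed >= self.dwell_seconds:
                self.fired = True
                self.on_fire()
        else:
            self.elapsed = 0.0     # gaze wandered off, so reset the timer

# Hypothetical triggers, echoing the door example from the interview below:
# staring at the door briefly unlocks a subplot on a later playthrough.
triggers = [
    GazeTrigger("door", 1.5, lambda: print("unlock: little boy subplot")),
    GazeTrigger("monster", 0.5, lambda: print("play: monster reveal")),
]

def on_frame(gazed_object_id: Optional[str], dt: float) -> None:
    # Called once per rendered frame with the ID of the object under the
    # viewer's gaze (None if the gaze ray hits nothing).
    for trigger in triggers:
        trigger.update(gazed_object_id, dt)
```

Scaling this pattern to 150 triggers per scene quickly becomes a data-authoring problem, which is presumably why Apelab invested in tooling that lets a scriptwriter adjust scene timing without touching Unity at all.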

Emilie's goal was to create a highly replayable experience, one that could be watched again and again with different results each time. But if the player doesn't know that their behavior is impacting the experience, then how will they know that they should watch it again? That's the crux of the design decisions that have to be made in order to let the viewer know that they can dynamically interact with the story through their gaze.

Emilie says that they experimented with making the interactivity more explicit, but found that it was a better experience to keep it somewhat hidden. And so she's okay with people watching the experience without ever knowing the extent to which their behavior impacted the version of the story that they experienced.

Another idea that has come up in previous interviews is that perhaps the audience is going to have to collectively learn the best way to watch different types of interactive narrative VR experiences. Sequenced is a good example of an experience that is pushing the boundaries of what's possible with VR storytelling, and so there's a good chance that it might be ahead of what the VR audience is ready for. Too much innovation in this space could lead to frustrated reactions from an audience that doesn't get it. And it may be that only as audiences start to go through experiences like this will they learn the best practices for how to watch them. So this can be a bit of a Catch-22 for cutting-edge projects.

For story-based VR narratives, I've anecdotally noticed that non-gamers tend to sit back, keep their heads still, and not really interact with the VR scene. These are the type of first-time VR users whom you have to coach to look around. To watch a movie or TV show, we've trained our bodies to sit completely still and pay full attention to whatever is happening directly in front of us. VR can try to force us to break out of these patterns, but the audience is still learning the language of interaction that gamers have been cultivating for a long time.

I’ve found that gamers are much more likely to natively know how to explore and watch an interactive experience. They also tend to push the limits of the experience by exploring the bounds of interaction, which can also feel like they’re quality assurance testers trying to break the experience or find edge case bugs.

In hindsight, I think that I might've enjoyed and appreciated Sequenced more had I known that it had hidden gaze-based triggers. Road to VR's Scott Hayden used the phrase "reactive storytelling" to describe these types of hidden triggers.

Perhaps it’s worth having a tutorial for people to show them the extent of how “reactive” of a VR experience it is going to be. Or perhaps just knowing that an experience will be responsive to your behaviors and passive gaze interactions will be enough information for some people. Or perhaps some people will prefer not to know anything about the level of engagement available, and let good design of the experience make that explicitly clear. In the end, I hope we can just rely upon good VR design, but we’re in this awkward transitional period where the audience is still learning how to engage with immersive media while the boundaries of the VR medium are being pushed by companies like Apelab.

Another takeaway is that game engines like Unity and Unreal Engine, which are used for creating these types of interactive narratives, still need a lot of work before everyone will be able to easily create their own experiences that mimic Sequenced's sophisticated triggering system.

I expect that eventually a lot of these branching story triggers won't have to be so hard-coded, and that artificial intelligence agents will be able to react more dynamically within certain bounds. Mark Riedl and Vadim Bulitko wrote a really great paper titled "Interactive Narrative: An Intelligent Systems Approach" that summarizes over 20 years of research into interactive fiction. For more information on interactive narrative and AI, also be sure to check out my interview about Playabl.AI with Façade's Andrew Stern.
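As a rough illustration of what "reactive within certain bounds" might look like, here's a minimal sketch of a precondition-based drama manager in the spirit of the systems that Riedl and Bulitko survey: instead of hard-coding every trigger, the author writes story beats with preconditions, and a manager fires whichever beat currently fits the world state. The beat names and state keys are invented for illustration; real research systems layer planning, re-planning, and player modeling on top of this.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Set

WorldState = Dict[str, bool]

@dataclass
class Beat:
    name: str
    precondition: Callable[[WorldState], bool]  # when this beat may fire
    effect: Callable[[WorldState], None]        # how it changes the world

# Hypothetical beats, loosely modeled on the door example in the interview.
beats: List[Beat] = [
    Beat("boy_subplot",
         lambda s: s.get("looked_at_door", False),
         lambda s: s.update(boy_introduced=True)),
    Beat("monster_reveal",
         lambda s: s.get("explored_room", False),
         lambda s: s.update(monster_seen=True)),
]

def next_beat(state: WorldState, consumed: Set[str]) -> Optional[Beat]:
    # A real drama manager would score candidate beats against authorial
    # goals; here we simply take the first unconsumed beat that applies.
    for beat in beats:
        if beat.name not in consumed and beat.precondition(state):
            return beat
    return None

# Example: the viewer's gaze set a flag, and the manager reacts to it.
state: WorldState = {"looked_at_door": True}
consumed: Set[str] = set()
while (beat := next_beat(state, consumed)) is not None:
    beat.effect(state)
    consumed.add(beat.name)
print(state)  # {'looked_at_door': True, 'boy_introduced': True}
```

The appeal of this style is that authors specify bounds (preconditions and effects) rather than exhaustive trigger wiring, and the system improvises within them.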

Sequenced is an ambitious effort, and doing some important work in pushing the boundaries of interactive narrative. Road to VR previously reported that their 10-episode season is due to “arrive on HTC Vive, Oculus Rift, Samsung Gear VR and Google Cardboard starting in early Q4 2016.”

Here's a trailer for Sequenced:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. So the balance between story and interactivity is something that I've talked a lot about here on the Voices of VR podcast. And on today's episode, I'm featuring quite a unique blend between story and interactivity. So I'm featuring Emilie Joly. She's the CEO of Apelab. And she did a project called Sequenced, which premiered at Sundance this year. So the thing about Sequenced is that there's a scene that may have up to 150 different triggers depending on where you're looking and your focus and your attention. And so there are actually these subtle little branches that happen within this experience. And so we'll talk about the challenge of balancing this interactivity and storytelling within this aesthetic genre of anime. So that's what we'll be talking about on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Unity. Unity is a great way to get involved in virtual reality development, even if you don't want to become an expert on every dimension of creating a VR experience. The Unity Asset Store has a lot of different 3D models and scripts to get you started. For example, Technolust's Blair Renaud has won artistic achievement awards using a lot of the assets from the Unity Store: "I'm not actually doing a lot of modeling and art for the game. It's a lot of kit bashing, taking Unity assets, tearing them apart and putting them back together." Get started in making your VR dreams come true with Unity and the Unity Asset Store. So this interview with Emilie Joly happened at the Silicon Valley Virtual Reality Conference at the San Jose Convention Center at the end of April. So with that, let's go ahead and dive right in.

[00:01:53.897] Emilie Joly: So my name's Emilie and I'm the CEO of Apelab. So Apelab is a production company that started in Geneva, Switzerland, and now we've moved to LA. I've been working mostly in gaming and storytelling, so trying to mix these two mediums together, and working also a lot on how to create stories for new technologies in general. So I've been doing only that for the past 10 years, focusing on different types of technologies and how you build new user experiences and new stories for those mediums. You know, what kind of language can you create with those that makes sense?

[00:02:27.282] Kent Bye: So I've seen the Sequenced experience at Sundance and you're here showing it on the Vive at SVVR. So my impression was that you're doing a lot of stuff with kind of a 2D art style in a 3D environment. So it feels like you're able to do some quick prototyping of putting these characters in different places, doing some animations with a 2D toolset like Photoshop or whatever, but then be able to explore what works in an immersive environment. So maybe you could talk a bit about what you've learned so far in doing that?

[00:02:57.138] Emilie Joly: Well, Sequenced has been a very specific project because it's an anime in VR, which is kind of a strange thing to do, specifically mixing 2D in a 3D environment. But the core idea behind Sequenced was really the interactivity, but the seamless interactivity that you don't really notice when you watch it once. So basically there are like 150 triggers in each scene, and the idea is that the scene can go six or seven different ways depending on what you're looking at. And that's the core thing that's interesting for us in Sequenced. More than just the 2D aspect, it's more of an art style that we chose. We really wanted to do a hand-drawn animated project in VR, so this is a shot at that. Trying to make that work, and it's kind of fun to do. It's also very quirky sometimes because it's drawings, but we like the art style, and we like working with illustrators in VR. It's kind of fun as well.

[00:03:44.430] Kent Bye: So I had no idea that I was triggering anything when I first experienced it, because when I saw it I just went through it and, you know, I just thought that was it. And so this is a challenge when you're doing interactive storytelling: you are perhaps giving the viewer some sort of agency, but they may not necessarily even know that they're changing anything. They may just think that's what they get.

[00:04:04.532] Emilie Joly: Yeah, we've had a hundred thousand debates about whether we would show the triggers or not. We've done some user testing where we actually show the triggers, and you notice that you're triggering stuff, but you're actually unlocking a whole lot of stuff inside the story. For example, if you look at the door and the door opens and there's a little boy that comes out, you're going to be unlocking the story of that little boy later on. So if you go back into the app, you're going to have some new scenes or new characters or things that you didn't have the first time. And what we're doing with the Vive and room scale is that you're unlocking things by walking around as well. So we mix proximity and gaze-based interaction to create the storyline. What's difficult then is getting a coherent story, so something that works from A to Z. So basically if you manage to get through the story without thinking there were triggers, it actually means we've done a good job, because you didn't notice. But then the goal is: how do you get people to know that they're interacting? So maybe at the end you can get achievements, or more of an unlocking gameplay mechanism, but you don't want to break the fourth wall either. Because when you put triggers all over the place, people are like, it's less fun because... well, I don't know. There's something. We're not sure yet how to do that, but it's interesting to figure it out.

[00:05:15.468] Kent Bye: Yeah, if I were to experience it again, I would have no idea what to do differently to trigger different things. There's a little bit of cause and effect that happens within a VR experience where you're able to see some sort of immediate feedback loop. Like, say, if you're paying attention to somebody and they're giving you eye contact, then you can know that maybe you can interact with them. But if you have no sort of fast feedback loop for checking to see what's interactive and what isn't, then I just kind of felt like I was watching a 2D film; that was the experience that I had.

[00:05:44.983] Emilie Joly: Yeah, not completely, and it's been interesting to figure out how to understand the feedback. But at the same time, maybe what's magical, what works well, is having like four people doing it at the same time and then saying, oh, did you see the monster? Oh, I didn't see him, weird, how come? And then trying to get back to it. It's really the idea of something a little bit more smooth and kind of magical. Things adapt naturally to what you're looking at without it really being a goal. So yeah, it's half and half, because it's mixing game mechanisms with story mechanisms. So you need to get these two together. We also didn't want to do a game, not really. And when you put triggers and people see them, it becomes kind of a game, like, am I looking at this or should I look here? It becomes more of a Where's Waldo kind of thing. So yeah, we're struggling with that aspect and figuring out how the smooth interactivity can work. Things we've noticed is that when there are characters looking back at you, or animals that react to your presence, or, I don't know, maybe you're going a little bit towards the door and then a character sees you and he turns around and he starts reacting to what you've been doing. That's something interesting, but it doesn't break the story, it just adds to it. It's a bit like you're there, but you're not really there.

[00:07:00.000] Kent Bye: Well, I think that's the challenge of balancing agency. There's kind of two flavors of agency. One is whether you're able to have kind of small control over a scene, but it's not actually changing anything of significance; you're maybe giving a little variation. Or you could have it where you're actually changing the entire outcome. There could be like three or four completely different outcomes based upon what you're doing. And so, as you're designing this, are there things that are just kind of like little vignettes that add to the story, or are there actually completely different endings to a singular narrative plotline?

[00:07:32.559] Emilie Joly: Yeah, so we had two approaches for Sequenced. The first was doing, well, Walking Dead-style gameplay, Telltale-style, so having different endings, having a whole bunch of scenes. But that makes it also very hard for a VR production with, you know, four people doing a big project. You would have to build, you know, 30 scenes to make it relevant. Every time you trigger something, that means you have to build a new scene, because they could go that way or they could go there. That would be like the dream project, to do something that's almost like an open story world where you could explore everywhere. That would be great. That would be fun. In Sequenced right now, we've decided to build what we call organic scenes. So the scenes themselves go different ways, but the story does not change. You just have different points of view on the same story. So say that you're looking at a specific character at a specific moment; all the other characters are going to talk about that, or about what you've been looking at. So it changes the way the scene goes, but it doesn't change the way the story goes.

[00:08:30.210] Kent Bye: I see, yeah. So what's been the biggest challenge for you then in doing this type of interactive narrative in VR?

[00:08:36.598] Emilie Joly: Well, when we opened Unity there was like nothing, so the first thing was building the whole tool to create that, and now we're able to work on this without needing our CTO to be here every five seconds. So it was really about building the interactivity and the language and the tools that go behind it, so that it's easier and faster to produce. That was really a challenge too: to set all of this together and make sure things flow, and that the production can, you know, move forward without having to have the coder next to us every five seconds doing all of the triggers. And I think it's also about having a top-down view on what you're doing. It's very messy in VR. There are things all over the place. There's interactivity, and it's hard to plan and know where people are looking or where they're going. So it becomes a big mess, you know. When you have 30 scenes and 150 triggers, figuring out how that works is hard, so we're also building tools for us so that the scriptwriter can, in real time, change the timing of the scene without having to go into Unity at all. Things like that have been a challenge of scripting. And really mixing game development with story development is something that's interesting to do, and I think VR is the first medium that actually could do that very well. There's been a lot of debate between film and games for like 20 years now. And I think VR has potential to be something else. You know, I'm really thinking of that medium as a great mix between games and storytelling and experiences. We call them VR experiences. I think that's what native VR is, but we don't really know what they are. They're not films and they're not really games. They might be something a little bit different, I guess.

[00:10:15.975] Kent Bye: So are these tools something that you are planning on potentially releasing as a Unity package for other people to tell these types of stories, or is this something that you're just keeping in-house?

[00:10:24.848] Emilie Joly: Currently, it's something that we're building in-house as we work on different projects. So we have Sequenced, but we're also working now on Break a Leg, which is more of a gameplay experience that uses the controller and gaze-based interactions. So we're building the content and the tool, and it goes back and forth. Every time we have a new feature, we add it to the tool, and that's great. And the goal is to create a platform called Spatial Stories. And what we want to do is have other developers who are interested in working in that space and exploring what we can do in terms of interactive storytelling in VR help us build the tools, and have them access those as well in the long run. So yeah, it's definitely the goal to create a community around that, and hopefully these tools are helpful for others.

[00:11:09.098] Kent Bye: Yeah, and just to kind of clarify the terminology, because scenes have a different meaning in the film world and in the Unity world. In the film world, a scene is very discrete: you're changing the whole environment. So my recollection of Sequenced was that there were just kind of three scenes, three different environments that I went into. From a Unity perspective, there could have been, like you said, 150 different scenes that were pieced together, but only three big environment shifts. And so in that sequence of Sequenced that I saw, how many different scenes were in those, like, what I remember as three scenes?

[00:11:43.809] Emilie Joly: Yeah, there are actually four scenes in the little prototype that you saw. But those scenes go different ways, each of them. So you trigger different animations inside the scenes. That's how it works right now.

[00:11:54.036] Kent Bye: So when you say 150 scenes, is that... Oh, okay, so 150 different triggers within the scenes. Exactly, that's it. Okay, so that makes sense. So, one of the things that I remember and recall about Sequenced is that I kind of felt like at some point I was watching like a 2D experience. You know, like in the second or third scene, there's a crowd, and in the first scene you have a little bit of depth and a little bit of a dynamic feeling, like you're in a 3D environment, but it's 2.5D maybe. But for some of the scenes, it felt like it almost would have been better to watch it on a 2D screen. You know, I kind of imagined, oh, this would have been better cut as a film rather than me doing it. And so how do you really start to use the strengths of the affordances that VR has to provide?

[00:12:38.633] Emilie Joly: Well, I think it's really in the way the narrative is set, not necessarily in the art style. Well, you could not really experience Sequenced without a headset right now. It wouldn't work in terms of how the language is set in 360. So, yeah, that's basically what we're trying to focus on now. We also use space a lot. Of course, you'll be walking around 2D sprites, but, I don't know, I think it's kind of fun to do. And it's also pretty new. I don't see other projects really doing that that well. And I think it's really about getting feedback in 360 all around the user, so that they have some agency and it's really an organic kind of experience, and not just a static film that would actually work better as a TV show. And I think the fact that it's highly replayable and that the story can go a different way works also very well in VR, because if you're really looking around and exploring... It's been built for VR, so we haven't done this in any other medium. We're really thinking of each scene in terms of how to set up the camera, where the user is going to be. We're going to try and do a lot of different types of camera setups as well, and how do you trigger things, and in which order, and thinking of where the user is looking. How can the whole environment react to that, so that you feel like you're really there, and it's meaningful to be in that 360 environment rather than in front of a screen? That's the important thing, and giving more agency. The fact that you didn't really notice the triggers is something that in a way is okay, but in another way it's not that good, because we need the user to understand: wow, this is actually reacting to me. I mean, that is what's cool with this, I think. I'm there and I'm doing stuff and the characters are reacting to me. That's cool. I think that's cool. It's a brand new language.

[00:14:26.940] Kent Bye: Yeah, having some sort of way, very early in the experience, of showing people that there's an interactive component, or allowing them to discover it, may cue them into the fact that they're interacting rather than just passively experiencing it. And, you know, with this 2D aesthetic, is this just a convenience thing, in terms of it being faster to rapidly prototype with 2D than to do 3D models? Or what were some of the decisions behind doing this 2.5D aesthetic?

[00:14:53.816] Emilie Joly: It's a very good question. I think it's because the art director is a fan of Miyazaki and comic books. It started as a comic book project, like a comic-in-360 project. So I think it's really an artistic thing. And obviously for a small team, working in 2D drawings is kind of... It's probably easier than 3D, it might be, but actually I don't think so. Maybe 3D models would be easier, because you can actually change the point of view. Here what happens is that all the artists have to build the scenes. So we build the scenes in 3D first, and then the artists have to draw the scenes in perspective. So they have to bend their drawings so that they match the 360 point of view. So it's actually more painful than helpful to do the 2D part. So it's definitely an artistic choice to do that.

[00:15:36.488] Kent Bye: I think that has potential, but the fact that it's actually more difficult to do makes me question whether or not... Like, the benefit that you get from the slightly lower fidelity of the whole scene may put you into a different state of being in this fantasy world. I can definitely see that. But for you, what's next? What are some of the next steps?

[00:15:54.319] Emilie Joly: Well, Sequenced right now, we're building a short pilot. The series is quite long; there are nine episodes and three seasons that are planned. So there's a huge story which has been built for VR. So that's already there and we're building episodes one after the other. So we'll continue building Sequenced. We're working on the platform a lot, working on the tool right now to have a first beta version that other people could use as well, and working on the new Break a Leg room-scale experience that's going to be kind of fun, I think. It's a piece where you are backstage at a theater and you have a whole bunch of stuff in that theater; you're not really sure what you're going to do with it, and a narrator tells you that you're going to have 10 minutes to do a performance. You're not really sure what's happening there. There's a little platform; you have to go on top of the platform and it tells you, okay, well, look up when you're ready. Okay, you can choose the way you want to do your entrance, if you want to have like fumes or music or whatever. And then when you look up, you go up on the little platform, and then you end up in that huge theater with a lot of people looking at you. And they all go, and you basically have to figure out what to do and break a leg.

[00:16:57.522] Kent Bye: That sounds fun. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:17:07.107] Emilie Joly: I think virtual reality is going to change the way everybody experiences media, storytelling. It's entertainment, definitely. I mean, there's a lot of work to do yet to figure out exactly. I mean, there's a lot of testing now. Everybody's testing different things. A lot of things work. Some don't work. We're not sure yet what's going to work, but it's a really exciting time. It's going to be changing everything soon. I mean, how many times do you get in a lifetime the chance to build a new medium? Not that many, I think, so it's interesting.

[00:17:37.563] Kent Bye: Awesome. Well, thank you so much.

[00:17:39.004] Emilie Joly: Thank you.

[00:17:40.205] Kent Bye: So that was Emilie Joly. She's the CEO of Apelab, and she was talking about the interactive narrative experience called Sequenced. So I have a number of different takeaways from this interview. First of all, the first time that I went through this experience, I had no idea that there were any interactive components to it. And so I just thought that I saw it, and that was it. And so this is a challenge within interactive narratives: having some way of letting the user know that their behavior, their attention, whatever they're doing within the environment, is having some sort of interactive effect. So in this experience, I don't know if it would have been helpful to have a little bit of a tutorial at the beginning, or maybe that's just what they're trying to do, is just throw people into the experience and hope that they figure out that whatever they're doing has some sort of impact. I wasn't informed of that ahead of time. Perhaps if I was, maybe I would have been on the lookout for it a little bit more, trying to see what kind of interactivity I was able to engage in within the experience. But for the most part, I was just kind of passively observing it. I had a much deeper appreciation for the project after learning that, because honestly, when I first saw it, I just thought, oh, well, this could have been just a 2D film, and it would have been perhaps better than the experience that I had. So I really like the fact that they're also putting lots of different triggers within the environment so that when you are actually walking around within a room-scale space, you could actually trigger different experiences. So all of this said, the toolsets that are available are pretty bare and minimal. And a company like Apelab is going to have to create their Spatial Stories platform from the ground up in terms of coding and integrating this within the game engine. These are game engines primarily, rather than storytelling engines. And just a side note, I know that the Unreal Engine has quite a lot of momentum within other types of narrative storytelling. I know that Oculus Story Studio, for example, primarily uses the Unreal Engine. And so some of the timeline tools within Unity may be lagging a little bit behind the Unreal Engine, which tends to have a little bit more support for cinematic sequences. So I'd expect that over the next two or three years, a lot of this won't be a big issue. A lot of this is just going to be baked into the core engines, or there are going to be a lot more plugins available to add a lot of this additional functionality. It sounds like at this point, Apelab is going to be taking the proprietary route in terms of just using their own IP and producing their own experiences with that. But this type of balance between interactivity and narrative is an ongoing challenge within the video game industry as well as within VR. I was just watching some videos the other day from the Future of StoryTelling. And within them, they had this interview with Corey May, who is a director of screenwriting at Ubisoft slash Alice. And he kind of framed it in terms of how they try to balance the player story and the protagonist story. So you as a player have your own story that's evolving and growing, and you have different things that you can do that change and evolve your character.
But there's also a main protagonist story or a separate narrative that is happening that you are either kind of weaving in and out of or directly participating in. But they seem to be a little bit different, like the overall story of the game versus your own personal experience. And so I kind of see it as your agency and interactivity versus the overall narrative of the experience. So another resource that I came across within the last couple of weeks that I just wanted to point out was this really amazing paper in AI Magazine called Interactive Narrative and Intelligent Systems Approach. It's by Mark Riedel and Vadim Balitko. And it's a really great overview of the landscape of interactive narrative research over the last 20 years. And they have a lot of really insightful maps and models that they describe. So I'll include a link within the show notes here, and you can check that out. It's very much worth a read. So overall, I'm really excited for the future of these types of interactive narratives and to see where the genre goes. And for the sequenced experience, especially for anybody that's really into the anime aesthetic and style and movies, then I think it'll be pretty exciting for people to be able to step into an experience where you have a little bit of a 2.5D experience of it, where you still see the 2D dimensionality of it, but you're kind of immersed within a 3D world. It kind of takes a little bit of insight to know when that really works and how to best use the 3D space within virtual reality because you can often just kind of recreate a 2D scene where the VR doesn't really add anything. And another challenge is to have a narrative where you end up kind of having the person having to edit the sequence where, in other words, they have to turn up to 180 degrees at the time to be able to kind of cut between the two different scenes. At that point, I was just kind of like, wow, this is a really bad position to observe how this is unfolding. But yeah, it could be that some of what I was looking at was triggering different types of the experience. And so perhaps it would have had a different outcome. So I only had a chance to run through it once. And I wasn't able to go through it again to be able to discover some of the agency or interactive triggers. Like I said, the triggers are pretty well hidden in this experience, and so once it comes out as an experience released for everybody, then you might be able to play it again and again and kind of discover how replayable it actually is. So that's all that I have for today. I wanted to just thank you for listening to the podcast, and if you enjoyed the show, then please do help spread the word, tell your friends, and really do consider becoming a contributor to my Patreon at patreon.com slash Voices of VR.
