Emilie Joly says that the rules of interaction are the stories within interactive narratives, and Apelab is creating the Spatial Stories plug-in and toolset in order to make it easier to define those rules. Unity uses an object-oriented approach for creating interactive environments, but it's optimized for creating interactive video games. The Spatial Stories toolset aims to provide a point-and-click interface geared toward immersive storytellers who want to create interactive experiences, whether in VR, AR, or mixed reality. Joly sees immersive storytelling as a combination of world building and writing, and Apelab wants to optimize the workflow for screenwriters and storytelling creatives so that they can more rapidly iterate on ideas and experiences within both the Unity and Unreal engines. They're also building integrations with AI APIs in order to better implement conversational interfaces.
I had a chance to talk with Joly at Kaleidoscope VR's FIRST LOOK market, and to walk through some of the features of the Spatial Stories toolset, its integrations with existing screenwriting tools like Final Draft, and the metaphors that she uses to understand the unique affordances of interactive stories.
LISTEN TO THE VOICES OF VR PODCAST
This is a listener-supported podcast, so please consider making a donation to the Voices of VR Podcast Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So virtual reality is a new immersive and interactive medium, which means that when it comes to storytelling, we're going to have to find new workflows and methodologies and metaphors to be able to understand what it means to create interactive narratives. And that's what Apelab has been doing with their Spatial Stories plugin. So both Unreal Engine and Unity started as game engines, and so the things you could easily create within these game engines were video games. But in storytelling, there are slightly different things that you actually want. And so with Unity, they actually have this whole plugin system so that if Unity hasn't implemented the features that you want, you can go ahead and build a plugin that can help solve the problems that you're trying to solve. And that's exactly what Apelab has done with their Spatial Stories plugin, which is going to allow creators to use a little bit more of a drag-and-drop interface and to create all sorts of different conditional behaviors within an immersive 3D environment so that you can create an interactive story. So that was the genesis of that. And so I had a chance to talk to Emilie Joly. She's the CEO of Apelab, and I spoke with her at Kaleidoscope VR's First Look Market on Thursday, September 22, 2017 in Los Angeles, California. So with that, let's go ahead and dive right in.
[00:01:34.536] Emilie Joly: So I'm Emilie, I'm the CEO of Apelab, and I'm also an interaction designer. We're currently working on Spatial Stories, which is a tool for creatives, designers, storytellers, and filmmakers who don't know how to code.
[00:01:48.501] Kent Bye: Great. So maybe let's take a step back. Tell me a bit more about your journey into creating this really elaborate Unity plugin for interactive stories and narratives: what was the genesis of wanting to tell these types of interactive stories, and how did that evolve into developing this Spatial Stories plugin?
[00:02:07.815] Emilie Joly: Yeah, well, it actually started at university with a bachelor's degree project that my husband was doing. He doesn't know how to code, and he wanted to do an interactive comic book in 360, back in, I think, 2011. And so we started to think about how we could do something reusable, easy to do, and easy to use for him to create real-time branching narrative in Unity, using gaze-based interactions at first. So it started with that, and I started building that framework for him so that he could meet his deadline.
[00:02:41.892] Kent Bye: And so yeah, after that, I know that you had a piece at Sundance called Sequenced that I had a chance to see. And my original experience of Sequenced was that I didn't actually know it was an interactive narrative. I think in our previous interview we talked about that, in terms of how there are different passive, gaze-activated triggers that would trigger different branches of the story, but it was so seamless that I didn't know I was making a decision. And so I think that this Spatial Stories plugin that you've created can do that, like, low-level amount of gaze interaction. But the trajectory that I've seen it going in is that there are much more explicit interactions and triggers that are created. Unity as a whole is a game engine that's designed for a certain amount of interaction design, but the types of stories that you wanted to experience weren't necessarily baked into the Unity engine. So it sounds like you've been developing Spatial Stories in order to tell these types of interactive stories that are a little bit more engaged based upon your actions and activity, or kind of creating these different branches.
[00:03:46.062] Emilie Joly: Yeah, well, definitely. When we opened Unity for the first time, there was mostly nothing there when you start from scratch. There's a camera and then, you know, you can turn it around, but there's not much that you can do without any, like, C# knowledge or things like that. And the tools completely evolved with what we were working on at the studio. So our next project is called Break a Leg, and it's fully interactive, room-scale, with controllers. So that also made Spatial Stories evolve from, like, simple gaze-based interactions to basically everything. At Apelab, we're a pretty small team, and we have engineers on one side and designers on the other, and the goal for us is always to have the designers build the content. So the engineers are never building the apps; it's the designers. And it's kind of a test run for us on the plugin, as we try to make sure that a designer will be able to build a fully interactive VR app without having to code anything. So we kind of work like that in the team as well.
[00:04:40.658] Kent Bye: And could you talk a bit about your own personal design process when you're making an interactive story like this? Because I think with your toolset, you have within Unity the ability to rapidly iterate and click a lot of buttons. But yet, having a direct experience of it and iterating is different than trying to plan it all out and have all those interactions in your mind from the beginning. I'm just trying to imagine: is this something where you say, OK, I want to have this type of story, but the nuances of the interaction, sometimes you don't know what's going to happen until you actually code it and see what it feels like?
[00:05:14.368] Emilie Joly: Yeah, no, definitely. So we usually start with that. We do world building a lot. So we kind of know, you know, what the world is going to look like, what the scenes are going to have inside. And we know a little bit that there's a narrative, there are dialogues, what kind of assets you have. But, you know, at first we had to do that by mapping out all the interactions with, like, post-its and trying to figure them out. But with the plugin, what's cool is I can just go in there, do it myself, and test it out. So there's not that much planning now, because we can just do it right away from scratch. You just import your assets and then try things, and that's kind of the best way to do it with VR, because you kind of need to experiment with whatever you're doing. It used to take so long for me as a designer to actually get to testing my ideas or my project, and now it takes like an hour, so that's pretty useful in my book.
[00:06:03.340] Kent Bye: Yeah, and so I had a chance to look at the user interface, and it seems like you have a couple of different interaction objects. So let's maybe start there. You have an interactable camera and an interactable object. Maybe you could talk about the interactable camera first, in terms of how it's different from the main camera that you get out of Unity. What does your Spatial Stories camera give you?
[00:06:25.398] Emilie Joly: So the way it works is you turn your standard Unity camera into an interactive camera, and then you get access to a rig that's already pre-made, and it works for all the platforms. So whether you're doing ARKit or HoloLens or HTC Vive or Oculus, it doesn't matter, it's going to be the same camera, and it'll just automatically know what inputs you have, right? So HoloLens has voice recognition and it has gestures, Oculus has controllers, etc. It'll know exactly what's available, and the interface will change. So you turn it into an interactive camera, and then it'll know automatically what platform you're on, whether you're doing Vive, Daydream, or HoloLens, and it'll change the inputs that are accessible to you. And you just basically have a camera that is your user and represents your user.
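Apelab hasn't published the internals of this rig, but the pattern Joly describes, a single camera component that detects the runtime platform and switches input schemes, can be sketched in plain Unity C#. The class name, enum, and device-name checks below are illustrative assumptions, not the Spatial Stories API.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Hypothetical sketch of a platform-aware camera rig, not Apelab's actual API.
// It inspects the loaded XR device at startup and enables the matching input scheme.
public class InteractiveCameraRig : MonoBehaviour
{
    public enum InputScheme { GazeOnly, TrackedControllers, TouchScreen }

    public InputScheme ActiveScheme { get; private set; }

    void Awake()
    {
        string device = XRSettings.isDeviceActive ? XRSettings.loadedDeviceName : "";

        if (device.Contains("OpenVR") || device.Contains("Oculus"))
            ActiveScheme = InputScheme.TrackedControllers;   // Vive / Rift style controllers
        else if (device.Contains("daydream") || device.Contains("cardboard"))
            ActiveScheme = InputScheme.GazeOnly;              // mobile VR, gaze plus a single button
        else
            ActiveScheme = InputScheme.TouchScreen;           // phone-based AR (ARKit / ARCore)

        Debug.Log("Interactive camera initialized with input scheme: " + ActiveScheme);
    }
}
```

Downstream components could then ask the rig which scheme is active instead of hard-coding per-platform behavior, which is the gist of what she describes the interface doing automatically.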
[00:07:12.058] Kent Bye: Yeah, and in the presentation that you gave here, it seems like you're also looking to have integrations with both ARKit for iOS as well as ARCore for Google. And so you've been doing a lot of virtual reality spatial storytelling, but it seems like with augmented reality, you have the ability to almost turn your body into a controller in a space, where you're able to actually physically walk around and have a spatial story, but through the window of a phone. So I'm just curious to hear about your expansion into AR, building upon what you've been able to do in VR, and how you see storytelling in augmented reality being similar and different.
[00:07:53.125] Emilie Joly: Yeah, well, they're two different things, but the inputs are still similar. You have a user, and that user can either walk around, look at things, grab things, and use different inputs. Either you tap on the screen or you have a controller and you press on a button, but the sort of core framework is the same. It's just that the way you're going to design an experience is going to be very different in AR. We worked with AR quite a long time ago with Tango. In the very early days of Tango, we were doing R&D with the team, working on new content and figuring out how we could, you know, build interactive stories for the Tango using Spatial Stories. And there are a lot of challenges in AR that are very different from VR. There's the mapping of the environment, there are a lot of technicalities that users need to learn, which are kind of difficult to grasp at first. And if your character cannot walk on the floor because ARKit didn't detect the floor, they don't know and they don't care. They just see that it doesn't work. So there are a lot of things that you need to design around there. But what's really, I think, great about AR storytelling is that it's easy. It's immediate. People can just pick up their phones and they can see the magic very quickly in an easy and simple way. They don't necessarily need to put a headset on, so it's a bit less clunky. They can be together, too, so you can see, like, three, four people experiencing the story. But the mechanics we use are kind of similar, actually, to VR. It's just that the story that we're going to tell is very different.
[00:09:16.699] Kent Bye: Yeah, and the other new feature that Spatial Stories has within the Unity interface is this interactable object. And I think this is where the heart of all of the actual configuration of triggering different things happens, when you have these objects where you're able to set up all sorts of different triggers by location or sound. To me, there are so many things in there, I'm having a little trouble even wrapping my brain around all the different functions and capabilities you have. So how do you describe and think about these interactable objects as a model, as the heart of interactive narrative? And then what are the different things that are being triggered? Do you think about it in terms of verbs or nouns, or, you know, how do you have a conceptual framework to understand how to even think about and configure these interactable objects?
[00:10:03.747] Emilie Joly: So, well, any 3D asset that you have in your scene, you can turn into an interactive object, and that's how Spatial Stories has been constructed. It's built around those interactive objects, and in VR and AR, anything that you want to react to your user is going to be an interactive object. And there are a lot of functionalities to them, but they're all needed. They're all the groundwork that you would need to build anything interactive. So let's say I'm turning a character into an interactive object. Then maybe I want it to react to the gaze, or I want it to react to the proximity to that object. Maybe I want him to do something when I grab him, when I release him, things like that. So all of that is embedded inside Spatial Stories, and you can basically configure those objects to react to your user. So instead of building a story like you would for a film, for a spectator or someone more static, you're really writing for the user, and that's what interactive objects are for. They know what the user is doing, you know, and then you can decide what's going to happen depending on that.
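The interactable-object model she's describing maps naturally onto a Unity component where conditions (gaze, proximity, grab) are checked against the user and designer-configured events fire as actions. Here is a minimal sketch of that pattern under invented names; it is not the actual Spatial Stories component, which exposes far more.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Minimal sketch of the "interactable object" idea: conditions (gaze, proximity)
// are checked every frame against the user's camera, and designer-configured
// UnityEvents fire as actions. Not the actual Spatial Stories component.
public class InteractableObject : MonoBehaviour
{
    public Transform user;                 // usually the main camera / rig
    public float proximityRadius = 1.5f;   // metres
    public float gazeAngle = 10f;          // degrees from the centre of view

    public UnityEvent onGazeEnter;         // e.g. play a look-at animation
    public UnityEvent onProximityEnter;    // e.g. start a dialogue line

    bool gazed, near;

    void Update()
    {
        if (user == null) return;

        // Proximity condition: the user walked within the radius.
        bool nowNear = Vector3.Distance(user.position, transform.position) < proximityRadius;
        if (nowNear && !near) onProximityEnter.Invoke();
        near = nowNear;

        // Gaze condition: the object sits close to the centre of the user's view.
        Vector3 toObject = (transform.position - user.position).normalized;
        bool nowGazed = Vector3.Angle(user.forward, toObject) < gazeAngle;
        if (nowGazed && !gazed) onGazeEnter.Invoke();
        gazed = nowGazed;
    }
}
```

In the editor, a designer could then wire onGazeEnter to an animation or audio cue without writing any code, which is the point-and-click workflow Joly describes.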
[00:11:06.718] Kent Bye: Yeah, it seems like stories usually have something linear; like if you're going to write a story, you might write your script. And to me, it feels like to do an interactive experience, it's almost like you have to go into, like, a sandbox environment and just start playing with what's even possible with a tool like Spatial Stories. And then once you know what's possible, then you can start to even formulate how to construct a story around that. And so I guess if I were to try to learn Spatial Stories, that's kind of the approach I would take: to go in there and do these little prototype experiments just to be able to learn the tool. And then once I knew the tool, then I think the story possibilities start to open up. Is that what you recommend, or what you've found, for people to become really familiar with what the capabilities are and then let that be the thing that's really driving these interactions?
[00:11:57.357] Emilie Joly: Yeah, I think that's definitely a very good way to approach it. Just mess around with it and have fun and try things. It's super easy, and you can try many different things. There's also voice recognition, so you can just put in text. Let's say you want a character to jump when you say, come here. You can just put the text in there. There's also geolocalization, so you could put a coordinate in the world and then have that object appear at that place. There are a lot of things that are usually complex to put together that are just there in a very simple way. So you can just, you know, mess around, do a lot of different experiments. And then it'll also help you maybe find new things that you hadn't thought of. Because sometimes when you design a VR or AR project, maybe you think of the constraints, or technical constraints, and you don't really open up. And if you start by actually messing around first, that might help you get to something completely off of what you had thought at first, I guess.
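For the geolocalization condition she mentions, one plausible way to approximate it in stock Unity is to poll the device's location service and compare the reading against a target coordinate with a haversine distance check. The component below is an illustrative assumption, not Apelab's implementation.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of a geolocation condition: activate a target object once
// the device's GPS position is within a radius of a chosen lat/lon coordinate.
public class GeolocationTrigger : MonoBehaviour
{
    public double targetLatitude;
    public double targetLongitude;
    public float radiusMeters = 25f;
    public GameObject target;      // object to reveal at that place

    IEnumerator Start()
    {
        target.SetActive(false);
        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);

        while (Input.location.status == LocationServiceStatus.Running)
        {
            var d = Input.location.lastData;
            if (HaversineMeters(d.latitude, d.longitude, targetLatitude, targetLongitude) < radiusMeters)
            {
                target.SetActive(true);   // the condition is met: reveal the object
                yield break;
            }
            yield return new WaitForSeconds(2f);
        }
    }

    // Great-circle distance between two lat/lon points, in metres.
    static double HaversineMeters(double lat1, double lon1, double lat2, double lon2)
    {
        double R = 6371000.0, toRad = Mathf.Deg2Rad;
        double dLat = (lat2 - lat1) * toRad, dLon = (lon2 - lon1) * toRad;
        double a = System.Math.Sin(dLat / 2) * System.Math.Sin(dLat / 2) +
                   System.Math.Cos(lat1 * toRad) * System.Math.Cos(lat2 * toRad) *
                   System.Math.Sin(dLon / 2) * System.Math.Sin(dLon / 2);
        return R * 2 * System.Math.Atan2(System.Math.Sqrt(a), System.Math.Sqrt(1 - a));
    }
}
```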
[00:12:51.505] Kent Bye: So how do you write a story or plan a story out? I mean, it seems like there still would need to be some level of world building or writing. What does that look like for you, in terms of the process of constructing a story with this Spatial Stories plugin that's really driving these different interactions?
[00:13:09.565] Emilie Joly: I actually do still write, but we do a lot of world building, so we have our director doing a lot of concept art, we do a lot of 3D modeling and testing, so there's world building and there's a lot of writing too. And I actually write in interactions, and I'm not the only one. We have other studios who are working in VR, and I've seen their scripts, and they really look like an interactive story. It goes: the user goes into the room and opens the door. If he grabs the cup, then this happens, then that happens. And that's what the script looks like. So that helps me then put together all of the interactions inside Unity, because I have that sort of basic script. But that's when we actually know what we're doing. In experimental mode, when you're in R&D on a project, you can just mess around with things and explore. But then you can start actually writing your story or your adventure game. It could be anything; it could be, like, a teaching project, could be something for building a car. It doesn't matter, you can just write what the experience is going to look like for your user. And what we're also working on now is actually translating that script into interaction right away. So we have a prototype where you can go and start typing your story and say, oh, there's a chair here and there's going to be a door, you can open the door, and all of that by text, and it translates into interactive objects automatically. You can see your story mapped out like that in a very simple way.
[00:14:31.432] Kent Bye: So it sounds like you'd be able to then upload that sort of metacode into Unity and then like somehow it would be able to assign that to the right objects and you would basically kind of write a script and then import it directly? Is that the eventual goal or what you're specifically talking about?
[00:14:46.940] Emilie Joly: Yeah, well, right now it's actually live, so I'm writing as I go, and I say, oh, there is a chair. It's a little bit like the smart writing that's already in things like Final Draft. You can put tabs and things like that, and it knows it's an interactive object. So I would say, oh, there's a chair, tab, that's an interactive object, and the user comes in, and the sun comes up, and the sun is an interactive object too. So I kind of wrote a grammar, a vocabulary, that works completely seamlessly with the system, which is kind of interesting. And we're exploring that. And it actually makes it super fun to create stuff like that, because you actually see it live in VR. So you have your headset on, and you can write, and then you can actually see that happening right away.
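Apelab's actual authoring grammar isn't public, so as a rough illustration of the "a tab marks an interactive object" idea, here is a toy parser that scans script text for a hypothetical [object] tag and spawns a placeholder carrying the InteractableObject component sketched earlier. The grammar and all names are invented for illustration.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of "script to scene" parsing: each script line that tags a
// noun with [object] gets a placeholder GameObject carrying an InteractableObject.
// The grammar here is invented for illustration, not Apelab's actual format.
public class ScriptImporter : MonoBehaviour
{
    [TextArea(5, 20)]
    public string script =
        "The user comes into the room.\n" +
        "There is a chair [object] near the window.\n" +
        "The sun [object] comes up when the user opens the door [object].";

    public List<GameObject> Import()
    {
        var created = new List<GameObject>();
        foreach (var line in script.Split('\n'))
        {
            var words = line.Split(' ');
            for (int i = 1; i < words.Length; i++)
            {
                if (words[i].StartsWith("[object]"))
                {
                    // The word just before the tag becomes a named interactive placeholder.
                    var go = GameObject.CreatePrimitive(PrimitiveType.Cube);
                    go.name = words[i - 1].Trim(',', '.');
                    go.AddComponent<InteractableObject>();
                    created.Add(go);
                }
            }
        }
        return created;
    }
}
```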
[00:15:26.061] Kent Bye: Oh, wow. That sounds amazing. Well, it also sounds like, as you're creating these interactive stories, there's going to be a challenge of making sure that what you've actually implemented works. So you have to kind of iterate through all the different branches or paths and make sure that there are no dead ends or anything wrong at the extremes of any of those branches. What does that process look like? I mean, do you map it out, like, here's what the quality assurance person has to go through, and they just have to know the script so well and then iterate through it from a first-person perspective, just go through and make sure everything's working?
[00:16:01.693] Emilie Joly: Yeah, it's tedious, especially when you start building very complex interactive adventures. We have an hour-and-a-half-long game coming out, and there are probably 3,000 triggers in the whole thing, so you need to know the script, for sure. And we're also setting up ways to debug it more quickly, so there's a big map where you can start to validate all of your interactions without actually having to play it all through, because if you have to play an hour-and-a-half game every time you want to test, it just gets impossible. So there's a way to shortcut, where you can just start at different times, a little bit like a timeline you would have in a video where you could start at 20 minutes or 30 minutes. You can decide to start your experience at this interaction or that interaction and test it that way too, and try to figure out where the dead ends are. But a lot of user testing helps.
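The "start at minute 20" shortcut she describes could be modeled as an ordered trigger timeline with a debug entry point that fast-forwards everything before a chosen trigger. The structure below is a guess at how such a facility might look, not how Apelab built theirs.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a debug "start from here" facility for a long interactive piece:
// triggers are kept in authored order, and jumping to one marks all earlier
// triggers as already completed so their end-state can be applied.
public class TriggerTimeline : MonoBehaviour
{
    [System.Serializable]
    public class Trigger
    {
        public string id;
        public bool completed;
        public UnityEngine.Events.UnityEvent applyEndState; // fast-forward effect
    }

    public List<Trigger> triggersInOrder = new List<Trigger>();

    // Called from a debug menu: skip everything up to (but not including) startId.
    public void StartFrom(string startId)
    {
        foreach (var t in triggersInOrder)
        {
            if (t.id == startId) return;        // resume normal play from here
            t.completed = true;
            t.applyEndState.Invoke();           // e.g. open the door, move the prop
        }
        Debug.LogWarning("Trigger '" + startId + "' not found in timeline.");
    }
}
```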
[00:16:53.790] Kent Bye: Yeah, and it also seems like a lot of what you're doing is happening in the user interface, but in order to actually visualize it, you've got some prototypes for seeing what that looks like, so you can maybe take a step back and see if it looks OK. I mean, is that something that you plan on building into this tool? Is there another way to kind of take a step back and do almost like a visual blueprint of the different triggers and actions, and see the nodal connections between these different objects and the actions between them, just to make sure that there's a way to verify it visually, or then even perhaps do editing directly with that?
[00:17:31.393] Emilie Joly: Yeah, no, definitely. So we're working on maps, and the system is also smart now, so it knows if there's a dead end, or if you missed something and didn't put a dependency between things and it kind of breaks. It'll know, and it'll give you an alert that there's something wrong there. So we're working on those things, for sure, and we're also trying to think about a timeline. So in Unity, there is now a new timeline, and we're also working on a way to visualize things in time, but it's not really time as it would be in a video, it's time in terms of interaction. So instead of having to see the whole map of 3,000 triggers, which could be a bit overwhelming, there's a way where you would see the trigger before and the trigger after, which will help you sort of see what's going on. So we're working on sort of an interaction timeline, let's say.
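The dead-end detection she mentions is essentially a reachability check over the trigger dependency graph: start from triggers with no dependencies, keep marking triggers whose dependencies are all reachable, and whatever is left over can never fire. A minimal sketch, with invented data structures:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of a dependency check over a trigger graph: a trigger is reachable if
// all of its dependencies are reachable; anything left over is a likely dead end.
public static class TriggerGraphValidator
{
    // dependencies[triggerId] = list of trigger ids that must fire before it can fire.
    public static List<string> FindUnreachable(Dictionary<string, List<string>> dependencies)
    {
        // Triggers with no dependencies are the entry points.
        var reachable = new HashSet<string>(
            dependencies.Where(kv => kv.Value.Count == 0).Select(kv => kv.Key));

        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var kv in dependencies)
            {
                if (!reachable.Contains(kv.Key) && kv.Value.All(reachable.Contains))
                {
                    reachable.Add(kv.Key);
                    changed = true;
                }
            }
        }
        // Whatever never became reachable should raise an editor alert.
        return dependencies.Keys.Where(id => !reachable.Contains(id)).ToList();
    }
}
```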
[00:18:20.122] Kent Bye: Yeah, and I think that when I look at storytelling and interactive storytelling and gaming, there's this spectrum between, like, a completely authored story, and then you get into maybe single branches that converge again, so there are maybe different flavors of choices that you make that aren't making an overall change in the outcome of the story. And then you get into, like, choose your own adventure, which becomes more explicit, and it's branching and it's forking off, and it becomes more a matter of riding all those different branches. And then you have, like, this completely open sandbox world, where it's more about exploring the world than it is about, you know, telling a time-based story around that. And then finally, you have, like, a generative story, where there's maybe more of a designing of an AI that has, like, a conversational interface, and then it's more about you having a high-agency interaction rather than dictating any sort of specific narrative at all. It's more about designing a probability space of things that might be able to happen. So for you, I'm just trying to figure out, you know, how you think of that spectrum, from the narrative where you're receiving a story, versus, at the far extreme, what starts to maybe feel more like a game, where it's more about the expression of agency and interaction. And then how do you balance those two things, between that expression of agency and interactivity that engages the user, while also having them be able to listen to the deeper story that you're trying to tell?
[00:19:50.664] Emilie Joly: I think it's also a design choice in general. At least with Spatial Stories as it is, you can either do something pretty linear where you have a couple of triggers and branches, or you can decide to build an AI. For our latest project, which is an augmented reality piece, we actually built an AI into the characters inside the tool, and then things will happen or not in a more open way. I think storytelling, even, like, generative algorithms and projects and AI, is great. And that's also storytelling in a way. You're just going to be framing your world a bit differently. I also love open worlds, you know, open worlds and adventure games where you're going to be just exploring a universe and then something sometimes happens or does not. That's also storytelling, and we're trying to put that into the framework as well, because we believe that creators should really think about it that way too, and look more at what games are already doing and look at what storytelling is doing and sort of mash them all together. It's more high-level storytelling that's not necessarily linear. It doesn't have to be like a linear video. It could be completely AI-driven if you'd want it to be.
[00:21:02.466] Kent Bye: And it seems like, using that as a metaphor, with Spatial Stories you're talking about an environment that you're able to explore, and then from there you have these triggers and objects that are revealing the story to you as you interact with them in some way. And I think that's a trope that I see a lot in VR games: sometimes you're collecting objects and then taking them to other places in space and having them interact, and having different stories kind of unfold. Because you're using that name of Spatial Stories, I'm just curious how you think about this process of environmental storytelling, or using your space to be able to tell the story.
[00:21:38.603] Emilie Joly: Yeah, well, for me, myself, world building is really my thing, and I believe that when you create worlds, you create rules. So stories are really large in that sense. The rules that you're creating are the story, right? So you're mostly creating a world, and you're creating the rules inside that world, and then your user is going to do things. He might not do them, he might do other stuff, but what you've created is a world where things can happen, and those things come from you as a creator, right? So you're already building that story, because you're building that amazing world for them to mess around with, I guess.
[00:22:12.471] Kent Bye: Yeah, it's an interesting way to frame it, that the rules are the story, in a lot of ways determining the bounds of the interactions. You had mentioned that you're starting to create AI interactions. And so what does that look like? Are you starting to plug into something like IBM Watson, or things like Microsoft Cognitive Services or Google's different APIs? There are a number of different services that are out there. And so how are you interfacing, in terms of being able to set up the boundaries and the rules of these interactions with these AI characters?
[00:22:46.423] Emilie Joly: Yeah, so there are things that our engineers have set up as pre-made behaviors that you can choose from for the AI, and then you can build your own code inside the framework if you want to. We're also working now on implementing Watson inside Spatial Stories, so it will allow you to just use Watson as a condition or an action for any interaction with a character, so you'd be able to just mess around with whatever Watson knows how to do and use that inside there. And that's really, really cool. And that's key, I think, for VR and AR. AI is super interesting, and there's a lot to do with that. But it's still pretty inaccessible for creators, so we're trying to see how we can simplify that process and give them those tools in a very simple and easy way. Also voice recognition, things like that, and geolocalization, speech-to-text recognition, and stuff.
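The Watson integration she describes isn't public, so the sketch below only shows the general shape of using an intent-recognition service as a story condition. The IIntentProvider interface and its method are placeholder names of my own, not IBM's SDK or Apelab's wrapper.

```csharp
using System;
using UnityEngine;
using UnityEngine.Events;

// Placeholder sketch: an "AI condition" asks an external service (Watson,
// Dialogflow, etc.) what the user meant, then the story reacts to the intent.
// IIntentProvider and its method are invented names, not any real SDK surface.
public interface IIntentProvider
{
    void GetIntent(string userUtterance, Action<string> onIntent);
}

public class AiConditionedCharacter : MonoBehaviour
{
    public UnityEvent onGreetingIntent;     // designer-configured action, e.g. wave back

    public IIntentProvider intentProvider;  // wraps whichever cloud service is plugged in

    // Called with the result of speech-to-text when the user says something.
    public void OnUserSpoke(string utterance)
    {
        intentProvider.GetIntent(utterance, intent =>
        {
            if (intent == "greeting")
                onGreetingIntent.Invoke();
        });
    }
}
```

The point is that the cloud service sits behind a tiny interface, so a designer only ever sees "condition: user intent equals greeting" in the editor rather than any API plumbing.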
[00:23:37.035] Kent Bye: And so for you, what are some of the open questions that are really driving your work forward and things that you're trying to explore?
[00:23:44.998] Emilie Joly: Wow, there are too many things that we're trying to explore, I think. I think we're really trying to think about how we can create the perfect workflow for VR and AR. And I also, and I think it's maybe unique to Apelab or to how we believe, but we really believe that design and engineering work together, and in giving the power to the creatives to actually build those things themselves. I think it makes the projects really great and different. Instead of having two separate teams, which is how it is most of the time, where you have the engineers on one side working by themselves and then you have the designers working on the other side, we're mixing everyone together and making sure that the creatives can also build their own interactivity. And it drives me, you know, giving the power to creatives in that way. And we're also just really excited about how the technology is evolving, and, you know, there's going to be eye tracking and maybe also hand tracking coming soon. Brainwaves and all of that, for us, is just like food for a new tool, giving us the opportunity to build new crazy things. And we want to make sure that, you know, everyone can actually do that too.
[00:24:55.855] Kent Bye: Yeah, there's all sorts of, you know, galvanic skin response and heart rate variability and brain control interfaces and eye gaze. Have you started to play with any of these technologies in terms of the storytelling implications of what those will mean?
[00:25:09.209] Emilie Joly: So with the brainwave side of things, we're probably going to be able to start once we have the hardware. I used to work with those; there are funny, like, games that you can already play where they have a headset, and you can start to, supposedly, think and then make things move around. So I hacked one of those, like, three or four years ago for fun, but there are definitely things there... All of that is going to be an amazing set of tools for storytellers and for creatives in general. Haptics is super interesting. We tested a bit of eye tracking with the FOVE as well. It's going to get there. What else is there? There are so many things.
[00:25:47.375] Kent Bye: Just trying to get to what that's triggering or what that gives you in terms of the fidelity of knowing about your audience, the person who's watching it, and then how you're able to dynamically change an experience based upon what's happening in their physiology.
[00:26:01.492] Emilie Joly: Yeah, we're going to do a brain action inside Spatial Stories, basically, where you could say... I don't know exactly how you'd think about it; it will depend on what the input is for brainwaves. Like, if the user is thinking about a positive thing, then make the sky blue, or if he's sad today, make, I don't know, the chair fall down or whatever. Anything could happen, I think. Yeah, brainwaves are definitely interesting, and facial recognition too is something that I'm really interested in. I used to work with the Faceshift team, which is now the guys doing all the Apple facial recognition stuff. We had interactive paintings, and we were already implementing facial recognition in there. That's a super fun tool for creators: you can really define your own set of expressions and then decide what your character or your environment is going to do based on those expressions. I had experimented with paintings, so you had one painting where it was just a weird baby from the 14th century in a little crib, and he would just mimic your facial expressions in a very weird way, so that was kind of funny. And then we had another painting where you were changing the environment with your face, so depending on your facial expressions, the character would walk down the mountain, or the moon would come up when you closed your eyes, so you would actually never see what was happening. You could close your eyes and then the moon would rise and it would be very beautiful, but only the other people looking at the painting could see that. So facial recognition also has a big, big potential for immersive stories and games, I think.
[00:27:34.495] Kent Bye: Awesome. And finally, what do you think is kind of the ultimate potential of augmented and virtual reality and what it might be able to enable?
[00:27:44.011] Emilie Joly: potential for VR and AR. There's a lot of potential for both in so many directions. I think it's hard to... I think games are definitely not the only, you know, the only aspect that's interesting. There's, I believe, a lot in education and learning. I think it's gonna, you know, it's gonna spread in all verticals and everywhere. Healthcare also is extremely important and I think it's going to change the way we interact with the virtual world in general. If our screens disappear and if our phones disappear and we end up with just a pair of glasses on our heads, then it's going to change the way we interact with data. It's going to change the way we interact with others as well. It's also our role here, now that we're building those things, is figuring out what that future is going to look like for everyone in five to ten years and make sure that it's a good world that we're building, I guess.
[00:28:36.195] Kent Bye: Yeah, just as you mentioned the screens going away, I think Apple just had their recent press conference where they announced the Apple Watch being able to turn into a phone, and they have the earbuds that you put into your ear. So I think we are moving towards the foundations of these conversational interfaces and moving away from screens, and we're starting with phone-based AR. But yeah, I do see that we're on this trajectory of these immersive technologies. To me, I think there's an important role for storytelling in this whole medium, because it's the new storytelling capabilities that are going to be unlocked, rather than something like a film where you can't engage and participate. But to me it's like, what is the potential of people being able to actually participate in the stories in different ways? And that seems like what you're focusing on. I don't know if you have any other thoughts on the interactive story dimension of what it means for the medium for people to actually be engaged within the context of a story.
[00:29:31.792] Emilie Joly: Well, story is everything, right? I mean, everything has a story, whether you're building for brands or for games or for film, there's a story there. It kind of depends what you mean, what the story is to you, but people are going to be able to interact with the world, and that's going to change everything. You're going to be able to be together outside, inside, and instead of looking down at your phone, you'll just be looking at each other and exchanging, like, assets, exchanging messages, you know, experiencing stories. It's going to change everything, I think. It's going to change the way we experience the world in general. And hopefully the stories we're going to tell are going to be really impressive and great and, you know, change the way we think of the world.
[00:30:17.753] Kent Bye: Awesome. Well, thank you so much.
[00:30:19.115] Emilie Joly: Thank you. Thanks a lot. Nice to see you again.
[00:30:22.963] Kent Bye: So that was Emilie Joly. She's the CEO of Apelab, an interaction designer, and the creator of the Spatial Stories plugin. So I have a number of different takeaways from this interview. First of all, I think that the more you're able to learn about game engines, the more you'll understand the metaphors of creating spatial stories. So in Unity, it's a 3D space, and you have different objects that are in that space, and you're able to attach different behaviors and reactions to those objects, which is essentially what the process of creating an interactive story is. For Emilie, she says that the rules of this interactive world kind of become the story. So whatever the interactions are start to become the affordances that are possible in this possibility space that you're creating. So, you know, when you think about a linear screenplay, you just sit down and you write the script, but with this, it's a little bit more of a conditional statement. So it becomes a little bit more like a regular scripting language: if you do this, then that. But it goes even a step further; it's like an object-oriented programming language, because you're attaching different behaviors to these different objects and kind of defining how these objects are relating to you, but also how they're relating to each other. And this is something that Unity does on its own, but in order to really do all these things to create these spatial stories, this is what the Spatial Stories plugin enables you to do, just to make that easier. So they showed a demo of an augmented reality experience where you're able to actually walk around a physical space. You can kind of just think about it as: you're walking around a space, and as you come into near proximity to different virtual objects within this augmented reality experience, it's actually triggering these different animations and further aspects of the story unfolding. So the other thing that was interesting about what Emilie was saying is that there's much more of an ethic for the designers and engineers to work much closer together. Ideally the designers will be able to get in there into Unity and actually build by themselves, just by pointing and clicking. And, you know, the more that you're able to do that and get closer to the metal, so to speak, of the technology, then the more that you're going to understand the basic affordances of the medium and be able to actually go in there and kind of do things yourself. Now, as you scale up into bigger and bigger teams, this becomes less and less viable because, you know, the expertise gets much more defined. But, you know, in the early days of VR, the people who are in virtual reality kind of have to just learn and do it themselves. So a lot of people who are storytellers and are using Unity are in there with their hands on the code and actually helping set things up. And as we move forward into the future, and as there are more and more virtual reality tools to be able to create VR within VR, then I imagine that some of these creators will be able to take it even to the next level of being able to actually create the spatial relationships of where everything should actually be in a scene.
And so Emilie really sees her job as this combination of world building and writing, and they've been working on these different interactive scripts with other software, you know, like Final Draft, such that you're able to perhaps add some tabs and other sorts of language that can automatically trigger behavior that's happening within the virtual world. You can imagine a future where you're sitting within virtual reality. Maybe you're at a keyboard, or maybe there are these different voice interfaces so you're able to use conversational interfaces, and you're able to just kind of speak what you want and then use your hand-tracked controllers to be able to push buttons and orient things. But I imagine a time when the authoring of these immersive experiences is going to happen completely within virtual reality. We're pretty far away from doing that now, just because the screen resolution is so low; if you're actually looking at text on the screen, it's not a great experience yet, and all these conversational interfaces aren't there. But I can see that Apelab is starting to prototype and work with this integration between being able to type out a script and having it just happen within an immersive virtual reality experience. So I also have a background in open source content management systems like Drupal, and there was quite a lot of pointing and clicking that was happening within Drupal, and there was a big effort within the community to find ways to export all of that metadata into configuration management files. And so I imagine that eventually we may get to the point of creating these declarative language commands, such that you're able to, you know, have a high-level kind of meta-programming language where you're able to define the structures, where you're just declaring what you want to happen and then letting the technology kind of take it from there, without you having to get down to the lower level of actually using all the proper C# syntax and everything else with the coding. It's just moving on from the pointing and clicking, which can be great for users, but once you get to a certain point of wanting to automate things, then you kind of want to have a way to make it a lot easier for you to work at the scale that you want to. So thinking about the automation and the testing and the quality assurance of that whole dimension is a whole other thing as well. And finally, it just seems like this type of framework is the same type of framework that you would need in order to drop in some sort of conversational interface engine, whether it's IBM Watson or Google or Microsoft Cognitive Services, all of these different conversational interfaces driven by AI on the back end and in the cloud. This is going to be something that's going to be integrated into this system as well at some point. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a donor. This is a listener-supported podcast, and I do rely upon your donations to continue to do this coverage. So if you enjoy it and want to see more, then please do become a member at patreon.com slash Voices of VR. Thanks for listening.
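For readers curious what the declarative, data-driven authoring described in the takeaways above might look like in practice, here is a minimal and purely hypothetical sketch: the story is written as JSON data, and a small loader turns it into scene objects carrying the InteractableObject component sketched earlier. None of this is Apelab's format or tooling.

```csharp
using UnityEngine;

// Hypothetical sketch of declarative authoring: the story is data, and a loader
// turns it into scene objects. The JSON shape here is invented for illustration.
public class DeclarativeStoryLoader : MonoBehaviour
{
    [System.Serializable]
    public class ObjectSpec { public string name; public Vector3 position; }

    [System.Serializable]
    public class StorySpec { public ObjectSpec[] objects; }

    // e.g. {"objects":[{"name":"chair","position":{"x":0,"y":0,"z":2}}]}
    public TextAsset storyJson;

    void Start()
    {
        var spec = JsonUtility.FromJson<StorySpec>(storyJson.text);
        foreach (var obj in spec.objects)
        {
            var go = GameObject.CreatePrimitive(PrimitiveType.Cube);
            go.name = obj.name;
            go.transform.position = obj.position;
            go.AddComponent<InteractableObject>();   // from the earlier sketch
        }
    }
}
```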