The creators of Agence call it a “dynamic film,” but it’s part immersive story, with game-like interactions and puzzles where you engage with either traditional heuristic game AI bots or bots trained through reinforcement learning. Automatic cinematic cutting leverages the language of film, inviting you to project your human dramas and stories onto the five bots as they interact with each other. After an average of 8-10 minutes of interaction in this simulation (or as long as 30-45 minutes, depending on your actions), the simulation arrives at one of many different endings, rolls the credits, and immediately starts you up again.
The National Film Board of Canada helped to produce this piece by Pietro Gagliano and his Toronto-based studio Transitional Forms, and it contains quite a lot of interesting experiments in sharing authorship between the creators, the audience, and AI entities with different “brains,” ranging from heuristic game AI to reinforcement learning.
I had a chance to talk with Gagliano and the NFB’s David Oppenheim on September 8th, after I saw the world premiere at Venice VR Expanded, and we unpacked the design process as well as the implications of training AI within the context of these story worlds. They’re also releasing a set of tools for AI researchers to train their own brains, so Agence will serve as a publishing platform for experiments in AI architectures within this story world, where audiences can interact with them in virtual reality (Steam, Viveport, & Oculus), as a 2D PC game (Steam), or on mobile phones (iOS & Android).
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So today we're going to be doing a deep dive into the intersection between artificial intelligence and immersive storytelling. So at the Venice Film Festival this year, there's a piece called Agence, that's A-G-E-N-C-E. It's a little bit of a play on words, because it sounds like "agents," and you are interacting with these artificially intelligent agents throughout the course of this piece. So this piece is a little bit hard to describe, because it's a little bit of an immersive experience. There's cinematic storytelling elements to it, but it's also like an open-world simulation that you're interacting with. So it's got these puzzle-like game elements, but it's not quite a game. It's not quite a film. It's somewhere in the middle. They like to refer to it as a dynamic film. I think it's really quite fascinating to see the different types of experiments that they're doing here, because they're trying to go beyond the normal narrative structures that we typically think of and to have a variety of different endings. But it's like a complex non-linear system where they're forgoing some of their authorship agency as creators: they're giving one part to the audience and the other part to these artificially intelligent bots that are constrained by a certain number of behaviors they can do, and they're training them within that context. They're releasing them into a world that the audience can interact with, and the bots have their own autonomous agency to some degree. Although it's limited, they've still got this decision-making where they're looking at the world and environment around them and seeing how they operate within that world. And how do you tell a story about that world? So it's not only exploring immersive storytelling, but it's also looking at this story world that they've created to be able to train artificial intelligence. And they've actually released a tool as well to be able to explore that. So lots to dive into with this conversation about Agence, which happens to be released today, on Monday, September 28th, 2020. There'll be links in the description so you can go download and play it. Highly, highly recommend that you go check it out. You can listen to a little bit more before we actually get into some areas that would be spoilers. I highly recommend checking it out and then diving into this unpacking and deep dive into the intersection between AI and immersive storytelling. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with David and Pietro happened on Tuesday, September 8th, 2020, as part of my coverage of the Venice Film Festival's Venice VR Expanded. So with that, let's go ahead and dive right in.
[00:02:32.675] David Oppenheim: Well, my name is David Oppenheim. I'm a producer at the National Film Board of Canada out of the Toronto studio. And I have been working both before I joined the film board and since I've been there in various forms of storytelling, both linear film, but mostly interactive forms of storytelling. And for the last four years or so have been working as both a creative producer and producer at large on mostly VR pieces.
[00:03:03.236] Pietro Gagliano: And my name is Pietro. I've done a number of VR pieces as well. I was a co-founder of a company called Secret Location. And for a decade there, we did a lot of immersive entertainment, nonlinear storytelling pieces. And I was the creative director there for a long time, and just recently started a new company called Transitional Forms. And we do immersive entertainment, but our focus is on the power of AI in storytelling.
[00:03:32.319] Kent Bye: Great. So yeah, maybe you could just tell a little bit of the backstory of how this project, Agence, came about.
[00:03:39.514] Pietro Gagliano: Yeah, like I said, I've been doing immersive entertainment or non-linear entertainment for quite some time, and I've always wanted to create a piece that went beyond the human writing room, where you're deciding path A or path B or path C, and you're trying to architect a story that unfolds in various ways. And I thought, you know what, in the future, a machine will be able to do this way better than humans could. So the concept came as a result of assuming that AI characters could play out a story and create nonlinear results. And I'm very passionate about AI. I'm not an engineer, but as a creative person, I think it's the new tool for storytelling. And I think that stories that shape and frame AI in a positive light, or in a way that can help us understand machine intelligence better, are only going to be positive for humanity right now.
[00:04:35.610] David Oppenheim: And then, you know, I think that vision met the work that I do. Specifically, Pietro and I had lunch about two years ago. We had worked together before on a project called High-Rise Universe Within. Pietro was the creative director for that interactive documentary. So Pietro called me up and said, hey, I've got a few ideas and I'd like to work on some of my own art. And we talked about them, and they were all super compelling in their own way. But I jumped at what was then the seed of what became Agence. And I think for me, it was really two things. One is the creative use of artificial intelligence as a tool for storytelling. Looking for tools that can be used by artists in the service of storytelling is very much my focus and very much the film board's focus. And the second was that Pietro had a clear point of view as an artist. He wanted to create a piece of art, an experience that had something to say about the world, in particular, the world of AI and human beings. And so I think that led to us developing the project together.
[00:05:42.853] Kent Bye: Right. And so this is premiering here at the Venice VR Expanded film festival that's happening online. And I got a chance to play through it like eight different times, trying to see different stories that I could have with my agency, or lack of agency. And so before we dive in, I just want to know if it's going to be available for people to play, because I'd love for people to be able to try it out before we spoil everything.
[00:06:07.930] David Oppenheim: Yeah, so right now the world premiere of Agence is at the Venice Film Festival, as you mentioned, which, just to say briefly, is for us the absolutely perfect place to be premiering it. I mean, the oldest film festival in the world, which, if my memory is correct, had its first year as a film festival in 1932, at the tail end of the silent film era. And to be here now in 2020 premiering a film that is powered by artificial intelligence is pretty amazing, and maybe we'll get into it. But I think for us, this is kind of like, you know, we're in the silent film era of dynamic filmmaking, which is how we refer to Agence. So our world premiere is here. And then later in September, we will be launching across VR, PC and mobile platforms to consumers outside of the festival setting. So it will be available soon.
[00:06:59.918] Kent Bye: Yeah. And so I watched a little behind-the-scenes video, and there was one little moment that was really quite striking, where you were saying you split up your authorship into three parts: you have the filmmakers and what you've been able to do in encoding these simple rules that are unfolding; you have the audience that is participating in this experience; and then you have the AI agents who are learning at the same time. You have heuristic-driven bots that are not driven by a neural net, but you also have the option to use bots that are dynamically learning and have different brains, and you can swap out brains. Maybe just talk about splitting your authorship into those three different areas, and how you approached that to put together what is a coherent-feeling narrative that is somewhat of a game and somewhat of a story. How do you see those things all colliding together here in this project?
[00:07:53.834] Pietro Gagliano: Yeah, for sure. As you mentioned, Kent, there's a three-way authorship. The first is filmmaker authorship: myself and my team, who have outlined a certain story structure that can branch and unfold as the user interacts. And the second is the user's interaction, which of course we don't control. And we wanted to create a system where the user could interact at any point. So you're not given a moment to say, you know, choose door A or choose door B. You can interfere with this world at any point. And then the third tier, also out of our control, is the machine intelligence. So there's two types of AI that drive the character behavior in the film. One is heuristic AI, game AI functions that are basically if-this-then-that scenarios. And those creatures are very cool and fun to play with. And then there's the other side, reinforcement learning AI, where these characters are trained by human engineers over, you know, millions of iterations of learning through incentives, positive and negative rewards, on how to do certain things on the surface of the planet. And these creatures actually can think for themselves. That's what's cool about them. And so they're really outside of our control, because they actually have their own agency. But as a director, this is the scary part and the fun part at the same time. Two thirds of those scenarios are outside of my control. I can't control what the user does. I can't really control what the agents do. And yet there is somewhat of a cohesive story each time, no matter what happens. As long as the user is willing to engage with the emergent narrative that comes out of each experience, there are stories to be had with each playthrough.
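To make the two kinds of "brains" concrete, here is a minimal Python sketch of the difference. Everything in it, from the world interface to the action names, is an illustrative assumption rather than the project's actual code:

```python
import random

def heuristic_brain(agent, world):
    """Game AI: hand-authored 'if this, then that' rules."""
    if world.nearest_flower(agent) is not None:   # hypothetical world API
        return "walk_toward_flower"
    if abs(world.planet_tilt()) > 0.3:
        return "move_to_counterbalance"           # rule fires whenever tilt is steep
    return random.choice(["idle", "wander"])

def learned_brain(agent, world, policy):
    """Reinforcement learning: a trained policy maps observations to actions."""
    observation = world.observe(agent)   # e.g. tilt, flower positions, neighbors
    return policy.act(observation)       # behavior shaped by rewards, not rules
```

The heuristic brain is fully legible, since you can read every rule; the learned brain's behavior lives in the trained policy's weights, which is exactly the black-box quality discussed later in the conversation.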
[00:09:34.685] David Oppenheim: Yeah, I mean, I think Pietro and the team, yes, as he says, they're kind of one third of that triumvirate. But I would say, having been part of the process over the last year of production, and previous to that about a year of development and prototyping, that, to state the obvious, the entire system within which these three authors converge is also designed by the team, and the creative vision is Pietro's as a human director. So, one example: as Pietro refers to, there's a story structure with a beginning, middle, and end. We very much wanted to embrace the language of filmmaking within dynamic filmmaking. So there is a beginning, middle, and end. We wanted people to leave both on an emotional level, having gone through something, and then at a cerebral level, with things to think about when they left. Pietro and I have been making interactive stories in our own careers for over a decade, and we both have come to believe in the power of linear or authorial storytelling within the world of interactivity. So this was sort of adding dimensions, but also believing in the power of at least a loose linear structure. And Pietro was pretty masterful guiding the team through figuring out the balance, because it's a tough balance to get right. And then on top of that, you have AI creatures involved. So hopefully we got the balance somewhat right in this piece, but that's something we want to continue pushing on creatively in the next piece as well.
[00:11:18.325] Kent Bye: Yeah, I tend to see that there's these different spectrums of dialectics. One of those dialectics is order and chaos. For someone who's watching this, you want to see some order, but you don't want to have so much order that it's completely predictable and boring. At the other extreme, it's complete chaos and you have no idea; it might as well be totally random each and every time. So you're finding somewhere in the middle, that happy medium between order and chaos where your brain is trying to predict what is happening and why. And that's part of the imagination of the viewer, to be able to piece together what that narrative is and what is happening in the group dynamics as well as the individual dynamics. And there is a moment, you know, where agents are falling off, they're dying. And at the end, you have this little thing that says, we'd like to thank all the AI agents who have died in the making of this piece. I thought that was very provocative. And then at the very end of the credits, you just start over. And then it's like, okay, well, I'll just play it again. I think I played it like three times the first time, trying to see the different variations. But it was like, okay, well, is it a bad thing they're dying? Should I prevent them from dying? And it's sort of like all these little mini-games of, okay, what if I can get this to happen? Okay, what if this happens? And then I see it happen, and then there's something that's different than what I expect. And I think that level of novelty is something that creates this game loop. If it was just completely predictable, then it wouldn't be interesting to watch. But because they're exhibiting some of this unpredictability, that in some ways is a quality of some level of intelligence, that they have some agency in their learning. And yeah, that spectrum between order and chaos, it was interesting to see how you were playing with that. So I just didn't know if you had any thoughts on that.
[00:12:58.284] Pietro Gagliano: Yeah, for sure. Like you said, there's a lot of different spectrums at play, and order and chaos is for sure one of them. The other is the idea that this is a filmic experience or a game experience, and we're really trying to ride the line between those two extremes as well. You can be an observer within this universe. You can interfere to a certain extent and then sit back and watch it unfold. There are dynamic camera systems and dynamic cutting that make the simulation as cinematic as we could get it. And then the other side of the spectrum is where it's maybe an interactive game. We have intentionally not created levels or points or goals within our simulation, but there are different endings that you can unlock. There are different storylines that you can pursue. And so it really is up to the user how far they lean into that interactivity, or how far they lean into being an observer in this world.
[00:13:55.199] David Oppenheim: Pietro, can you tell the story about how your mom chose to go through Agence?
[00:13:59.859] Pietro Gagliano: Yeah, for sure. I showed the experience to my mom, and it was obviously a very important moment for me. She discovered a storyline that I had not anticipated. The main interaction, the main cause for change in this universe, is planting these magical flowers on the surface of this planet. And my mom didn't plant the flowers. She just picked up the agents, and, you know, the agents started to kind of battle each other, fight, get a little feisty, and she just kept picking them up: "Oh no, you behave now. You be nice to the other one." And then she'd pick another one up and look at it and put it back down. And she did that for maybe 45 minutes. And yeah, that was her experience. And she said she loved it. I just was not expecting that infinite storyline of someone engaging with the piece in that way.
[00:14:47.459] David Oppenheim: I mean, how mom is that, just to separate the five kids? Exactly. We've watched so many people through playtesting, and that sort of range is something we were hoping for. As with any piece of art, you bring so much of yourself to it and so much of your previous experiences. Especially within the VR experience of Agence, we wanted to take advantage of that affordance of feeling like you are on the planet with these artificially intelligent creatures, so that you're asking yourself, if not at the top of your mind, what would you do with your own agency, or the responsibility of your own agency? You're feeling that, and you're making choices in how you approach it, pushing the different limits of that. And people go through it in different ways. Some try to solve a puzzle, some watch a film, and some, like Pietro's mom, scold the kids and pick them up and relate to those little creatures in different ways.
[00:15:59.371] Pietro Gagliano: She really nurtured them, which was nice to see. That was a word we used a lot in development: how do we nurture these little creatures? How do we empathize with them? How do we create a sense of responsibility and care for these creatures that didn't ask to be alive in the first place? I thought that was great. That's what my mom did in her experience: she really tried to nurture them.
[00:16:30.760] Kent Bye: Yeah. And that other spectrum is between, is this a game or is this a narrative experience? You know, when the instructions first came up, I was like, oh God, I don't want to have to learn a bunch of stuff. I didn't read through it comprehensively at first. I just kind of went through it, and then, you know, when it started again, I started to say, okay. I mean, I was interacting with it the first time, but it wasn't until like the third or fourth time that I started to change who was a learner and who wasn't a learner, and started to play around and experiment with that. But there was this option for cinematic VR, where you have cuts where you're zooming in, and by default you have that turned on. So the language of cinematic storytelling is embedded into this experience to some extent, where if you're just going through it the first time, you have these different close-up shots. You have the language of film embedded into a narrative that otherwise has no spoken English; it's using the language of film as well as the language of gaming. And I also did it without that cinematic cut, just to see it from that omniscient God perspective, where you're simply looking at the agents as they're moving around. Maybe you could talk about developing this cinematic VR view, because you are taking the affordances of storytelling from film and putting them on top of this simulation-game-like experience, and how that helps form the narratives in people's minds as they're watching this.
[00:17:59.318] Pietro Gagliano: Yeah, when we first started this project, I really wanted to teach AI about storytelling and storytelling structure and that type of thing. We ended up pivoting that approach, because with almost any AI endeavor, you need a certain amount of data for it to start to work and stick together, except for reinforcement learning and probably other technologies that I'm not as familiar with. But with reinforcement learning, you can train AI through simulation. So you don't need that huge database; you're creating data as you go. So I like to say we started by trying to teach AI storytelling, but we ended up using storytelling to teach AI, and to teach us about AI. And what I find is that being there with the agents on the planet, as David mentioned, you start to relate to them. And layering in a cinematic language on top of that, where, you know, there might be a moment of loss or of conflict or something like that, and having this dynamic system choose the best shot, or what it understands to be the best shot, and bring you there and show you that moment of conflict, it really makes you relate to what the agents are doing or thinking. And it adds a whole other level for observing intelligent machines.
[00:19:13.265] David Oppenheim: And I'd say that the references for this project were a mix of filmic and interactive. And I think if we were to look back at the original vision video that was done for the initial R&D phase for Agence, the end result has really stayed true to that, in the sense that it's a piece that borrows from both cinema and simulation and game mechanics. So I don't think we necessarily put the cinematic language on top; we integrated it from day one. And there were certainly challenges, specifically with the dynamic cameras: how do we allow for movement or agency, but also use that language of cinema, of framing a particular cut? And so we spent a lot of time working on that kind of balance, I think.
[00:20:03.192] Pietro Gagliano: And with that, too, we're still in the Wild West, especially with VR, on what is possible. You know, when VR first came out, as you well remember, Kent, there was a lot of like, oh, don't move the camera in this way, or you've got to frame it like this. And I believe that there are conventions now that we can lean on, thanks to a lot of trial and error from content creators around the world. But what we wanted to really play with are things like dynamic cutting, sense of scale, change in position and perspective, point of view. So we did experiment with a lot of that in Agence, and I think it turned out quite well.
[00:20:41.137] Kent Bye: Yeah, and knowing a little bit about reinforcement learning, I know that a lot of AI has a relationship to video games, you know, training AI to play a lot of the old Atari games. The development of AI algorithms being so connected to the Atari games from the 80s is pretty remarkable, to see how the genesis of that then led to this explosion of different AI approaches. But there are very specific scores and numbers that provide an input to that reinforcement learning. And in this, it's more of an open world, and it's up to you to figure out what the scoring is for what type of behaviors you're going to reinforce or not, whether they're fighting or whether they're looking into these flowers that are blooming. So maybe you could talk about how you train these neural nets in an open-world environment where you want to have some sort of emergent behaviors, but at the same time, in order to do that, you have to have some objectified scores based upon different actions that are taken, so that you can start to drive that behavior.
[00:21:50.827] Pietro Gagliano: Yeah, so just to be clear, the agents aren't actually learning within the film. They've already been trained; the reinforcement learning agents, that is. They've been trained and put into the film, and that's something that we're excited to continue to do beyond release: train more brains. We have a very small team, but we're going to continually train more brains and put them in the film and see what kind of different behavior we're going to get. And the way that process happens, the very non-technical way of explaining it, is that we have a version of the environment that exists within the film, a stripped-down version with the same physics and rules. And we have the agents train on that environment millions and millions of times over, through giving them positive or negative rewards. So if we want to train them to not fall off the planet, for example, we give them a negative reward when they fall off the planet. And a million lives later, they'll start to, you know, understand that, and they'll develop their little neurons to shape that behavior. So yeah, just wanted to clear that up: they're not actually learning through each play of the film. That's something that we wanted to experiment with, but again, it's a passing-data situation. But yeah, they're pre-trained and then put into the film. And one thing that we hope to do is publish that training environment to let other people train characters for this film. And if an interesting character emerges, they'll send it to us and we'll put it in there.
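As a rough illustration of the training loop described above, here is a toy Python sketch: a stripped-down environment, a negative reward for falling off the planet, and many iterations of trial and error. The environment, the trivial learner, and all the numbers are stand-ins invented for illustration; the actual project trains neural-network policies in a far richer simulation.

```python
import random

class PlanetEnv:
    """Toy stand-in for the stripped-down training environment."""
    def __init__(self):
        self.position = 0.0                # distance from the starting spot

    def reset(self):
        self.position = 0.0
        return self.position

    def step(self, action):
        self.position += action            # action is -1, 0, or +1
        fell_off = abs(self.position) > 3  # wandered past the edge
        reward = -1.0 if fell_off else 0.0 # negative reward for falling off
        return self.position, reward, fell_off

class TabularPolicy:
    """Trivial learner: tracks the average reward of each action."""
    def __init__(self, actions=(-1, 0, 1)):
        self.values = {a: 0.0 for a in actions}

    def act(self, obs):
        # obs is unused by this toy learner; a real policy conditions on it
        if random.random() < 0.1:          # explore occasionally
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, action, reward):
        self.values[action] += 0.01 * (reward - self.values[action])

env, policy = PlanetEnv(), TabularPolicy()
obs = env.reset()
for _ in range(100_000):                   # "a million lives later..."
    action = policy.act(obs)
    obs, reward, done = env.step(action)
    policy.update(action, reward)
    if done:
        obs = env.reset()
```

After enough episodes, actions that ever lead off the edge accumulate negative value and the policy settles on staying put, which is the toy version of "they'll develop their little neurons to shape that behavior."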
[00:23:20.210] David Oppenheim: And so there's that kind of indirect loop. When you're in Agence, playing through and moving through the dynamic film, your actions obviously fully upend the agents' world through your choices and how you wield your power, but the agents themselves were pre-trained before being brought into the film. You can, of course, and I think you saw this when you dug into the pause menu, choose as a viewer, as a user, which agents use reinforcement learning and which don't. So that's where the mix of game AI and RL, reinforcement learning, happens. But we do hope that through this indirect feedback method, once we launch, there is the ability, if you're really into the project, to follow the training of the brains that is happening outside by various engineers in the community, and then follow the trajectory. Because it is possible that agents basically pass on their DNA, in the sense that you can take an agent that's been trained 50 million steps with a certain objective, and you can then take that agent and train it some more. So there is this notion of longevity, or an arc to their learning, I guess.
[00:24:38.493] Pietro Gagliano: In Agence, there is an open world as far as the experience goes, but there are only a certain number of things that are possible on the planet. We've kept that very stripped down and elegant. The agents can balance the planet, and they have to cooperate in order to survive. So there's a survival mechanism that they need to learn. There is a consumption mechanism that they need to learn: there are these magical flowers that they can consume, and they fall in love with these magical objects. And then there's a competition aspect, where if one agent is consuming too much, another one might get jealous, or they might get more reward for consuming it ahead of another agent. So between those three or four mechanisms, we've figured out that there are a lot of possibilities in terms of the social and physical dilemma that these little creatures are in. So even in a very basic environment like that, there are many possibilities for different storylines.
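Here is a hedged sketch of how those three or four incentives, survival and balance, consumption, and competition, might be combined into a single reward signal. The weights and field names are guesses for illustration, not the project's published reward design:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    fell_off: bool = False
    consumed_flower: bool = False
    consumed_first: bool = False

def reward(agent: AgentState, planet_tilt: float) -> float:
    r = 0.0
    r -= 0.1 * abs(planet_tilt)    # survival: cooperate to keep the planet level
    if agent.fell_off:
        r -= 1.0                   # gravity only goes one way
    if agent.consumed_flower:
        r += 0.5                   # consumption: the flowers are desirable
        if agent.consumed_first:
            r += 0.2               # competition: beat the others to it
    return r

# An agent that ate first on a slightly tilted planet:
print(reward(AgentState(consumed_flower=True, consumed_first=True), 0.2))  # 0.68
```

Even a reward this small sets up the social and physical dilemma he describes: keeping the planet balanced is a shared goal, while racing for flowers pits the agents against each other.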
[00:25:35.582] Kent Bye: Okay. Yeah. Both of those points are helpful for me to understand, because when I see that these are neural-network-enabled agents that have reinforcement learning, my first thought is that I'm going to give them some sort of feedback. And after playing it like eight times, I couldn't see any traces of my agency being fed into these bots. Now, of course, when you train these neural nets, it takes thousands and thousands of iterations, so even if you did have that, I don't know if I would have been able to see it. But at least in the course of my playing through it, I wasn't able to find those traces of my agency, to see, like, I'm going to train these bots to do something, because they're already pre-trained. In the video that you made, you also mentioned that there's an ability to visualize the behavior that has been trained, and there's this dynamic within AI of whether or not the AI is explainable. It's essentially like a black box. And that's a challenge with neural nets, because it's in some ways difficult to turn that into some sort of mathematical formalization that's applicable to every context. Depending on the context and the situation, there may be different things that are being triggered, and it'd be hard to have much transparency into that black box normally within those neural nets. But I think one of the interesting points is that if you have an embodiment of that AI as a character within a scene that you're able to watch, then, as humans, we are great pattern detectors, and we may be able to discern some of those patterns and have a deeper intuition as to what that learning has done. Which, I think, at this point is a little difficult to know: okay, you've trained it, what's different? And then as you're doing QA, it's like you almost have to develop a sixth sense on top of a sixth sense, to discern, as you watch it across all these different contexts, what even changed after you deploy a new shift. So I'm just wondering if you could elaborate a little bit on that process of explainable AI, the black-box nature of it, and how you're using the simulation to get deeper intuitions into these behaviors.
[00:27:45.412] Pietro Gagliano: For sure. Again, I'm not an AI engineer, so this is feedback that I've gotten from the small community that we've been dealing with. These AI experts look at our film and say, you know what, this is awesome, because we look at graphs and charts and we're trying to optimize policies all day long, and then we get to see our trained agents take their place and behave in a story world. And to your point, Kent, the intuition of humans layers onto the actual training results. So we're able to put it into the context of a story, watch that story play out, or the possibilities of story, and all of a sudden there's this other layer of meaning that is part of the AI training process. And in the future, at Transitional Forms and the NFB, we want to lean into that more. We've got some initiatives in place that we'd like to start working on in the fall that really lean into the power of storytelling as it relates to artificial intelligence.
[00:28:45.936] David Oppenheim: And then, you know, I think the other side of that coin really is those conversations we've been having through production with the small group of AI engineers who've been working with us and providing their expertise and guidance. Pietro talked about the benefit to them as AI engineers, but one thing that really struck me is that they also were really excited about the fact that here was this dynamic film that would incorporate AI, but be a story that could really be put into the public sphere. As I think they put it: we're expert at training AI and making AI, but we don't know anything about production values or telling stories or creating VR. And so I think they're also super excited, not only for the implications of Agence and things like this on their work, but also for the public, in terms of the public being able to, within a story world where they have some agency, come face to face with these AI creatures and perhaps begin to develop a bit of an understanding of what AI is, what it can do, and perhaps the role of human beings in creating AI.
[00:29:54.131] Kent Bye: Well, I'm really struck by the fact that these bots are the agents that you have within Agence. They're really a representation of your agency as creators, because you've been watching these agents die over millions of iterations, and you've had to go and say, you know, I don't like that behavior, let's shift it a little bit, let's tweak it, let's change the training a little bit. And so what I'm seeing is the result of your agency being expressed through these bots.
[00:30:24.813] David Oppenheim: Yeah, I mean, it goes back to your original question around three-way authorship. Even though Pietro and the team creatively were one plank of that three-way authorship, we, Pietro and his AI engineers, are of course framing the way that these AI creatures are being trained: not always able to predict their behavior, but certainly setting the parameters of their rewards and disincentives. And I think that is, metaphorically, kind of the piece at large, in terms of what we hope people will take away from it about the role of human beings in AI in general.
[00:31:02.815] Pietro Gagliano: Yeah, there's a social and physical dilemma in the actual piece, and that's what a lot of the storytelling revolves around. But I'm excited, as an artist and director, to see the artificial intelligence break that and come up with stories that I wasn't anticipating, or behavior that I wasn't anticipating. That's another balance in this project: there are behaviors that we wanted to see from a filmmaking perspective, and they're not there yet. We will continue to train different brains and behaviors around that notion. And then there are a whole bunch of unknowns, and I'm excited to have agents in the film that break the narrative or that give us surprises. That's certainly the exciting piece about this.
[00:31:48.258] Kent Bye: There's a part of this experience where it's a bit of a puzzle game, where you're like, what's the algorithm that is driving these behaviors? What's the most simple way of reducing down the cause and effect of, if I do this, then I can reasonably expect that this type of reaction is going to happen? And after going through this like eight times, I felt like I was able to explore that enough, but still have a lot of mystery and not quite know. And you mentioned that there's this three-act structure. Sometimes I would go through and just have a minimum viable path, where I planted one flower and then it was very quickly over. And another time I probably planted anywhere from 40 to 50 flowers, and it took a lot longer to get to that story. So maybe you could, for anybody who has already played it and wants to have the magic demystified a bit: what is this three-act structure that you've set up? What are the different turning points that you have within this piece?
[00:32:44.429] Pietro Gagliano: Yeah, so the three-act structure is, it's very loose. So there are different possible endings that we've outlined. There are different possible middles, the second act. There's only one first act. You arrive at this planet and the agents are cooperating and living peacefully. And from there, you kind of enter act two, where it's kind of up to you to change their world. Yeah, let me think of how to elaborate on that.
[00:33:11.164] David Oppenheim: I mean, one thing that we looked at a lot were several film references. One was the NFB film Neighbours by Norman McLaren, a brilliant anti-war film, in many ways, an Oscar-winning film that has this social dilemma at its core. For those who haven't seen it: yeah, brilliant film, hop onto nfb.ca and watch it. But effectively, it has a similar social dilemma to Agence. Two neighbors side by side, and along their property line appears this flower. And needless to say, it sets off this chaos, because they both desire this object, and they fight to the death over it, basically. Bit of a spoiler there, but basically there's that social dilemma. And I think Pietro looked at that; he looked at another film called Balance, and then 2001, and really spent a lot of time communicating to the team a way to understand a simple structure. We ended up choosing to communicate to the team using Pixar's Story Spine. And maybe that's where, Pietro, you can jump in and talk about it in that sense.
[00:34:19.750] Pietro Gagliano: Yeah, there were a couple of structures that we leaned on. First of all, the classic hero's journey, Joseph Campbell: having a call to action, pursuing that call, it not going the way that you anticipated, and then coming around to the other end of the journey with some new perspective. So there's that. Margaret Atwood was really inspirational to us, Robert McKee, Pixar's Story Spine, Dan Harmon's Story Circle, which is a simplification of Joseph Campbell's hero's journey. We kind of mapped all of these structures onto each other and thought, what are the mechanics within this game world, within the simulation, that we can map to storytelling possibilities? That way, no matter what happens, in theory there's an arc to it. And so we spent a lot of time theorizing in that way, and then mapping the behavior accordingly. But truth be told, it's like a double pendulum that you can't predict. I don't know if there's any secret sauce, Kent, that I can share with your listeners to say, if you do this, then this will happen. It's chaotic, but just enough that you can maintain some sense of relationship to that chaos.
[00:35:30.372] David Oppenheim: I mean, I think those structures basically mapped onto this idea of what would happen if you were to play God to AI, which was kind of the controlling idea of Agence. And Pietro laid out the first act. Essentially, the second act picks up when your controller buzzes and, if you are curious, you realize that you can do a few things, and one of them is to plant this magical flower on this otherwise barren planet. That's kind of the inciting incident. We actually took to calling the flower the MacGuffin, a term Hitchcock popularized: the object that motivates the plot forward. And the second act is effectively watching what happens when this new object is introduced into the story. You played through eight times, so I'm sure you saw different endings. Act three can go different ways, but gravity only goes one way in this world, so needless to say, not all the agents survive. But they're always after that object of desire, this magical flower that grows and that they're attracted to. There are different endings, you know, some that we started to call the good ending, and some, I think, the great ending.
[00:36:46.275] Pietro Gagliano: Sad ending, rad ending. Yeah.
[00:36:50.233] Kent Bye: One playthrough, I tried the best I could to always save all of the agents, so that no one would ever die and there would never be any conflict. And it just kept going and going and going. And then I realized, you know, sometimes a story has some conflict that's at its core. And if I prevent all conflict from happening and I save everybody and no one ever dies, then, as a story, it gets kind of boring. There's not much place that you can go there. But there was another time when somehow I got a tree, and some of the learning-enabled agents saw it, along with one that wasn't a learning-enabled agent. And they started flying around, and then, through the credits scene, they sort of crash into the world and start a new flower. I was like, wow. That felt like, okay, I'm not going to be able to top that ending. If I were to achieve an ending, that was the top ending that I could imagine. Because after going through it so many times and then seeing that, I was like, wow, okay. I don't know what that ending was called or what was even happening there. I don't even know if I could do it again, but it was magical to experience it.
[00:37:57.602] Pietro Gagliano: We architected that ending, because there is a story that is told of encountering this world, thinking that you're helping it, and it not going the way that you planned in the end. You know, that is a story. But after a certain number of playthroughs, if you keep getting that story arc, to your point, Kent, it gets a little repetitive, or you're like, yeah, I learned that lesson already, show me something else. So there are endings like that which are more satisfying, where you let these agents, you know, live to the end and ascend and become a flower for the next planet, and the cycle will continue from there.
[00:38:34.393] David Oppenheim: And we wanted to hint at it a little bit, even in the early playthroughs of the film, so that you would get a sense that, okay, other than causing chaos here, maybe there is a role for me. And watching people playtest, some people really caught on to that idea. And that's where it becomes a bit of a puzzle. If you approach it like a puzzle, there are some puzzle aspects for you to figure out, in terms of witnessing different storylines, witnessing different agent behavior. So yeah, I'm glad you got to see what we call, quote unquote, the good ending. There is a great ending, Kent, so you might need to go back.
[00:39:12.712] Pietro Gagliano: We're still not sure if our game balancing made that mathematically impossible, or if it is possible, so if you see it, let us know. And also for any people that are playing the game, we want to hear feedback on strategies and that type of thing, because it really is something that is unfolding as you play it, and it's unfolding for us as well.
[00:39:32.488] David Oppenheim: But you noticed, Kent, how the moment you took us down the track of talking about it like a game, Pietro referred to it as, you know, for those playing the game. Because there is that sort of mix to what it is, and it's something we wrestled with a lot during production. Okay, well, what is it? It's part simulation; it's part... you know, we definitely wanted to tell a story and be a film. There are some game mechanics. Well, what is it? And so at the end of the day, we refer to it as a dynamic film, and that's what we started talking about at the beginning of this conversation. But certainly, if you get into it, and certain people will, you can approach it like a bit of a story puzzle, or a bit of a puzzle in that sense.
[00:40:13.264] Kent Bye: Well, if people are completionists and they want to see all the different permutations, then it's nice to know that the creators have put in a certain number of things, like achievements in some ways: you reach some sort of achievement, and for that achievement you get some sort of narrative reward, because there's a default ending that you see, and then it kind of repeats. Our brains are really prediction machines. When we predict something, it's not as interesting, but when we see something that we don't predict, it has that sense of novelty. And you can get randomly rewarded; it's like gambling in that way, where people can play through, and every once in a while you'll just have this transcendent iteration. But because I don't always know how to see the traces of my agency in the experience, I would have no idea whether I'd be able to replicate it, or whether the initial conditions even made certain outcomes possible on any given iteration. And so on that spectrum between order and chaos, seeing the trace of your agency tends to lean a little bit more towards chaos, because it's hard for me to say, oh yeah, I'm going to have this ending, I'm going to go in there and do it. With the authorship split into three, it's like, no, no, no: if either the filmmaker who's created it or the agents aren't going to cooperate, then you can use your agency as much as you want, but you're not going to be the majority winner in trying to have that ending.
[00:41:39.318] Pietro Gagliano: That is, until we build a bot to play the dynamic film and figure out what the strategy is. We'll do that next. I'm just kidding, for the engineering team that's listening. Well, maybe we'll get there someday. But yeah.
[00:41:54.423] David Oppenheim: Kent, I was curious, did you dig in to sort of replace some of the game AI brains with reinforcement learning brains?
[00:42:01.075] Kent Bye: Yeah, I did. I did one run with all of them. That was the next question I was going to ask, in terms of what you noticed as the difference between the two. What I noticed was that the reinforcement-learned bots or agents seemed to be resistant to looking into the flower. The ones who seemed to be looking into the flower were the ones that were just heuristic; they seemed more likely to. But beyond that, having different combinations: three and two was, you know, how many I had with the good ending. And again, I don't know what I did to catalyze that, but the mix that I had was three learned AI bots and two of the normal bots. So yeah, what have you noticed in terms of the quality or character of their behaviors, and how would you even start to describe the difference between those two and how they behave in this piece?
[00:42:49.887] Pietro Gagliano: We've taken a lot of different strategies to how we wanted to train the agents, whether based on human qualities or storytelling tropes, and all of that became a little bit too complex for training purposes. And so the agents that you've seen in there right now, Kent, are focused mostly on balancing. Once in a while they'll consume from the flower, but I've noticed that two of them are very dependent on one another, so I pretend those two are in love. And sometimes they get too close to each other and they fight, so it's even better as a story. But what you'll notice in the brains that are in there right now is that they're very focused on balancing in a particular pattern. So you can move them around and they'll kind of find their way back to their rhythm; they've optimized that goal in that way. By the time we release the dynamic film, we may have different brains in there with a different pattern that we'll have to play with and decipher. But currently they're very focused on balancing, and we'll be training more that might be more focused on consuming or competing. Then it'll be really interesting when we get combinations of them in there.
[00:43:59.877] Kent Bye: What's next in terms of what you see as the next iteration for experimenting with this? You've started to put AI bots in there, with neural-net, reinforcement-learning-trained agents in this cooperative environment. What do you see as the next step towards reaching artificial general intelligence, one step at a time, through these types of simulation environments?
[00:44:22.072] Pietro Gagliano: If we put a trajectory out into the future, and artificial general intelligence is right at the end of that trajectory, and that's the technological singularity, and we have no idea what happens after that, then probably about halfway along, I imagine that entertainment will have AI incorporated into it, where there will be intelligent agents that are characters in stories, that have their own ambition and their own reward system, their own agenda, and the user, the viewer, the audience, whatever we call them at that point, will come in and either keep them on track or set them off course and see what happens. And it's kind of weird to think that these intelligent beings will just be a form of entertainment for us. And that's kind of why I want to start creating this art now, because they're more than that. These intelligent beings have the potential to be generally intelligent, to be more intelligent than humans. So I'm hoping that the use of storytelling can help us understand what it means to be machines, and help machines understand what it means to be human. That's the important link between artificial intelligence and storytelling, in my perspective.
[00:45:30.625] David Oppenheim: And I guess for me, I'm not as well versed in the theories of the singularity, et cetera, and I tend to be really just looking down at what I'm working on at the moment versus future forecasting. But I think what Pietro and I share is that, regardless of where it goes, I see reinforcement learning in particular, because that's what we've used here, as a tool for creation, as a tool for artists, as a tool for production studios in storytelling. And I kind of prefer not knowing, or certainly just accepting that I don't know where it will lead, but knowing enough that it's a really important creative tool to put into the hands of artists and creative teams, and for us to work with, in particular at the NFB, to see what AI can bring to storytelling. That's something Transitional Forms as an independent studio is devoted to and will continue, but it's certainly something that I'm interested in as a producer and that our studio is interested in. Perhaps towards the end of our conversation, as we're wrapping up, we can share a little bit about something we'd like to release tied to Agence around the launch date, which might partly answer your question about what's next. It's something we're working on feverishly leading up to the consumer launch late in September, and it does point towards what we hope will take root with Agence when it's released into the world. But yeah, as far as those larger questions of what's next, I really like not knowing, but knowing enough to keep creating.
[00:47:02.058] Kent Bye: Yeah. Did you want to talk about those? Uh, it sounds like you're going to be working on some tools and other things that you're releasing as well, but maybe you could go ahead and dive into what that is. Yeah.
[00:47:09.883] Pietro Gagliano: And, Kent, did you mean what's next in the immediate future or the long-term future? Because I went pretty far.
[00:47:15.648] Kent Bye: No, well, you went far, now you can go short term.
[00:47:19.109] Pietro Gagliano: That's his role. I went really far, right as far as we could see, right to the singularity. And I don't know what's beyond that. Who does? In terms of the immediate future, what we'd like to do is take what we've learned on Agence and combine it with an even greater number of dynamic systems. I am a real believer in dynamic content; the content of the future will be, you know, living and breathing experiences that unfold for you or for a group of users. So I'm excited to take what we've learned on Agence and add more dynamic systems to it, like dialogue, script writing, environment generation, music, that type of thing. So that'll be really exciting, to start combining efforts. And in the immediate future, we're working on an initiative that uses a storytelling framework to train these creatures using reinforcement learning. It's still very experimental, early days, but we're hoping that our efforts on Agence will educate us in how to build a tool that lowers the barrier for reinforcement learning engineers and for people like myself, a creative person, a director, someone who's just interested in creating simulations. We're hoping that we can create a tool that allows anyone to train reinforcement learning agents through the context of storytelling.
[00:48:40.816] Kent Bye: You mentioned dialogue, and I wanted to ask, because in the experience I noticed some sort of primitive language that they were speaking to each other. They were communicating, and sometimes you saw a little tilde on top, or a question mark, as they were interacting. Is their process of communicating with each other part of the learning, where they're able to reach a certain state and then communicate it to the other entities that are around them?
[00:49:05.483] Pietro Gagliano: Our intention with that was to leave it magical. And so there is no communication between them. There's an emotion system and an attention system that we've built to say, if this is happening on the planet, likely they will be in this emotional state. So what you're seeing is them vocalizing the emotions that they have; it's just another dynamic system that's in there. If they were able to communicate like that, that would certainly be magical. And maybe we'll do that in another version.
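In code terms, a system like the one described here could be as simple as a lookup from world events to emotional states and their on-screen glyphs. This Python sketch is purely illustrative; the events, emotions, and glyphs are guesses, not the project's actual tables:

```python
# Hypothetical event-to-emotion rules; none of these names come from the project.
EMOTION_RULES = {
    "flower_blooms":   ("excited", "!"),
    "agent_falls_off": ("sad",     "~"),
    "agents_fight":    ("angry",   "!"),
    "user_noticed":    ("curious", "?"),   # eventually they notice you
}

def emote(event: str) -> str:
    emotion, glyph = EMOTION_RULES.get(event, ("calm", ""))
    return f"{emotion} {glyph}".strip()

print(emote("user_noticed"))   # curious ?
```

The point is that the glyphs are authored, deterministic output layered on top of the agents, not something the agents learned to say.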
[00:49:35.622] Kent Bye: Okay. Yeah. I saw that you have different emotional states, so it seems like the glyphs on top of them are representing that. One time, I had one of the agents, and I wasn't doing anything, and it just looked straight at me with a question mark. Like, are you going to do anything?
[00:49:49.611] Pietro Gagliano: Yeah. And that's a storyline that you can pursue: not interacting with them. And eventually, spoiler alert, they will notice that you're there, and that's how you've interrupted their existence. So you were probably on the cusp of that path.
[00:50:04.726] David Oppenheim: Yeah, we felt it was key to be able to outwardly illustrate some of the code that is driving them but isn't obviously visible to an audience. So for us, that was a way of visualizing that: taking what was driving them and turning it into, in quotes, "emotions," a display of that code. So that's what you were seeing, essentially: those vocalizations and those emoji that would underscore, again with quotes, their "frame of mind."
[00:50:35.726] Kent Bye: Yeah, I'm sure that helped with debugging as you were watching it. And the QA process, I would imagine, would be a bit of a nightmare, because you'd have to just watch it over and over again. And I don't know if you developed a system to look at an abstraction of it, to do a run-through without having to watch it in real time, or if you actually had to watch it in real time in order to really understand what was happening.
[00:50:56.398] Pietro Gagliano: That is one of the most challenging parts of a project like this, because it is all these dynamic systems that are interlinked and interdependent, not quite knowing what is being triggered at what point or what is reliant on another aspect. It has been definitely difficult for both storytelling and debugging and that type of thing. But we did have a console that we built internally to change the speed and adjust different parts of the simulation to make it easier to play and test on repeat.
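The internal console he mentions might look something like this minimal sketch: a couple of text commands that scale simulation speed or reset the run, so testers don't have to sit through every playthrough in real time. The command names and the Sim stand-in are assumptions for illustration; the real tool lives inside their engine.

```python
class Sim:
    """Stand-in for the running simulation."""
    def __init__(self):
        self.time_scale = 1.0

    def reset(self):
        self.time_scale = 1.0

class DebugConsole:
    def __init__(self, sim):
        self.sim = sim

    def command(self, line: str):
        cmd, _, arg = line.partition(" ")
        if cmd == "speed":
            self.sim.time_scale = float(arg)   # "speed 10" = run at 10x real time
        elif cmd == "reset":
            self.sim.reset()                   # restart the playthrough

console = DebugConsole(Sim())
console.command("speed 10")                    # fast-forward a run for QA
```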
[00:51:26.866] David Oppenheim: I mean, that is a tool of necessity that our amazing technical director, Dante, and Alex, our machine learning engineer, and Nick, our art director, built. Those three spent the most time inside Agence, replaying and replaying. So yeah, a tool of necessity. They figured they were going to go crazy if they had to go through the film so many times, so they figured out a way to short-circuit that.
[00:51:52.045] Kent Bye: It's like their portal, like they're looking down and seeing the matrix of what's controlling it.
[00:51:57.707] David Oppenheim: So yeah, they know the agents the best out of anyone.
[00:52:02.348] Kent Bye: Great. Well, just to kind of wrap things up here, what do you each think is the ultimate potential of immersive storytelling, VR and AI and what it might be able to enable?
[00:52:13.739] Pietro Gagliano: David, you have to go first.
[00:52:16.743] David Oppenheim: Nice one. It's a bit of a non-answer, so sorry, Kent, but I'm fond of saying that it has the potential of art. To me, it's a medium: a medium in its third wave, but in its infancy overall. So I think the ultimate potential is as a mature art form. Filmmaking has had over a century of creative language development, and I think the potential is there for VR to do all of the wonderful things that film does, that any art does, in terms of holding up a mirror to the social condition. There's a lot of potential, but for me it's open-ended. I like not knowing, not even trying to predict, but just knowing that it's powerful enough and has its own unique affordances, and we'll see what it brings.
[00:53:14.782] Pietro Gagliano: Yeah, I agree. I love the fact that AI is a tool that can be used for kinds of creativity that are right now unknown. And I enjoy the idea that it can be combined with storytelling, because AI is poised to be humanity's last invention, or the greatest tool we've ever made; and storytelling is, in fact, the greatest tool we've ever made. Those two things belong together, I think. AI can learn from storytelling, and storytelling can learn from AI. And in terms of how it relates to VR, I really believe that these three-dimensional liminal spaces we're creating allow us to connect with machine intelligence in ways that other mediums simply can't. That's a really exciting part of the medium that's coming as well.
[00:54:03.200] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?
[00:54:09.292] David Oppenheim: One thing I just wanted to add, something we'd like to share with you about our launch plans for Agence. We're going to be launching towards the end of September, and it goes back to a couple of your questions about the audience's involvement and the way that the agents are trained. Right now, what we've been talking about mostly is the dynamic film Agence that you play through, and we've talked about the fact that, as you experienced, you can place pre-trained brains into the film. One of the things we're going to release to a small group of engineers, and then see where it goes, is our training environment: we'll open up training to engineers who will work with our system and tools to train agents to their own liking. Then we'll review those brains every couple of weeks and put them back into the film. It will probably be a small group of people interested in agents and training, but we wanted to open up training to other engineers and put their brains back into the film. We thought you might be interested in that.
[00:55:21.472] Pietro Gagliano: For those who engage with this dynamic film, I would like to, A, know what you think. It's a different type of project; it's been hard for us to come up with a label for it. It's not quite a game, it's not quite a film, it's an experience. So I'd love to know what you think, and I'd love to know how it made you feel about artificial intelligence, whether it shaped your worldview in some way and how we can relate to these intelligent creatures.
[00:55:48.891] David Oppenheim: And building off of that, I feel like, probably more than any production I've worked on, this one is living and breathing in a way that's hard to describe. It would be wonderful to talk to the community, both people playing through Agence who don't work in the field or are just interested in VR and artificial intelligence, and especially artists and people in the field. If you're interested in Agence, we'd love to hear what you think, and we'd love to connect with other people working in the medium. There's been some amazing work with AI and storytelling, and we just want to have those conversations with the community, so I'd encourage people to get in touch once they experience Agence.
[00:56:35.915] Kent Bye: Awesome. Well, David and Pietro, I just wanted to thank you for joining me on the podcast. It's a fascinating collaboration and collision between AI, storytelling, and VR. I look forward to seeing where this goes, and whether it continues to evolve and grow and inspire many different people out there to start to experiment with what's possible, to get some deeper intuitions as to what all this means, how to work with these AI agents, and how they can be a part of these stories and story worlds that you're creating. So thank you.
[00:57:04.201] David Oppenheim: Thanks for having us, Kent. Thanks very much, Kent.
[00:57:07.163] Kent Bye: So that was David Oppenheim. He's a producer at the National Film Board of Canada, where he has served as creative producer and producer at large on VR pieces. And Pietro Gagliano, who was a co-founder of Secret Location, which worked on VR and location-based experiences, and who went on to start Transitional Forms, a studio looking at immersive entertainment and the intersection of AI and storytelling.

So I have a number of different takeaways from this interview. First of all, how they're thinking about splitting up authorship is very helpful, because they're trying to give some of that agency over to the player, which is what you typically see in games. But they're giving even more agency over to these AI entities, whose behavior they don't exactly know in all the different situations. They're trying to cultivate very specific behaviors, but at the end of the day it's not an if-then type of situation, because the agents are not only looking at the environment around them, that is, the world and how they're balancing on it, but also at each other, acting in relationship to each other. They're also in relationship to the different actions that you, as the player interacting with this experience, are taking. So you have what is essentially a complex nonlinear system: if you try to map out in your mind how you would explore all the different narrative arcs and potentials of this environment, with all the different states each of these AI bots can be in, it's really difficult to know exactly how things are going to play out.

So generally, this is a story world that they've created. They have a training environment that is a little bit abstracted from everything you see within the full experience of Agence, but they're able to cultivate these very specific behaviors given a certain set of inputs. Pietro listed three different dynamics. First, there's a balancing dynamic: the agents are on this world, and if they get too far off the edge, they're going to fall off. The world itself is also rotating, so they have to pay attention not only to where they are on the world but also to each other, because they're collectively balancing the world based on how they move. The reinforcement learning here happens over many iterations, over many lifetimes, so there's some level of memory in the training: if an agent dies, that becomes a negative reinforcement that gets fed into the overall reinforcement learning of these bots. And they have two different types of AI, reinforcement learning and heuristic AI, with the reinforcement learning being the thing they're actively trying to cultivate and train. Second, there's a consumption element: there are these flowers that you as the user can plant, and when an agent eventually sees the flower, it gets rewarded. And third, there's a fighting or competition component: depending on how they've decided to reward fighting, presumably some of the agents will just start fighting on their own if you let them be. (A sketch of what a reward function over these three dynamics might look like follows below.)
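To make that concrete, here is a minimal sketch of what a per-step reward over those three dynamics might look like. This is my own illustrative guess, not the actual Agence training code; every name and coefficient is an assumption:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    is_consuming_flower: bool = False
    won_fight: bool = False
    lost_fight: bool = False
    fell_off: bool = False

def step_reward(agent: AgentState, tilt_magnitude: float) -> float:
    """Per-step reward over the three dynamics described above:
    balancing, consumption, and competition."""
    r = -0.1 * tilt_magnitude            # balancing: keep the shared planet level
    if agent.is_consuming_flower:
        r += 1.0                         # consumption: reaching a grown flower
    if agent.won_fight:
        r += 0.5                         # competition over the flowers
    if agent.lost_fight:
        r -= 0.5
    if agent.fell_off:
        r -= 10.0                        # dying is the strongest negative signal
    return r

print(step_reward(AgentState(is_consuming_flower=True), tilt_magnitude=0.0))  # 1.0
```

The key property is that the reward never references a scripted plot; the social dynamics emerge from five agents each chasing a signal like this on one shared, tippable world.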
They also have different phases, broken into a three-act structure. In the first act, as you go in, there's nothing you can do; it's basically the same every time. Then the inciting action of the second act is when you actually plant the flower. I've done runs where I didn't plant anything, and runs where I planted fifty or a hundred flowers, and it still kind of keeps going. So you plant a flower, and it grows to the point where some of the entities can actually look at it. Then they start fighting, and some of them start dying, falling off the world. And then there's a variety of different endings that can get catalyzed. There's the baseline ending, where it just kind of ends and tells you how many have died. There's the good ending, where they look at the flower and transcend: they start flying around, and then they fly down and plant a flower themselves right off the bat, starting with that inciting action in the next iteration. And then there are other endings. There's one I started to go down but didn't fully explore, where you don't do anything, and they start to notice you and interact with you in that way. And there's apparently a great ending that they themselves don't even know is mathematically possible.

So they've created a whole layer of tools to analyze the different states. One of the things I would like to see, as someone who has gone through it already, and something that would actually draw me back in, would be some window to look at the metadata behind each of these entities, to be a little more transparent about the results of my agency. That was the hardest thing to determine as I was interacting with this experience: I didn't have a lot of transparency into what was happening within these bots. They are radiating different emotions, but correlating that into a prediction of how their behavior will unfold became really difficult; it was hard to understand what they're doing and why. It ended up being a bit of a random experience, where I was clicking and picking up entities and moving them around, and I wasn't able to say, okay, now that I do this action, this is what is going to happen. In one sense, that's good: if it were completely predictable, it would just be kind of boring. The fact that it's unpredictable gives it some level of interest, because there's novelty that continues to play out, and there's enough ambiguity that I'm projecting stories onto what is unfolding. But at the same time, because there's very little transparency for me to trace my agency over time, it ends up feeling a little random and chaotic, where just about any action I take could have different results. Even when I got the good ending, I literally have no idea what I did to evoke it.

So there's a shared authorship as you go through this. I'm working within the context of the existing simulation, where the creators have defined the bounds of what my interactions can even be. The AI agents have the bounds of what they've been trained on, in terms of the inputs that reward and drive them. But then there are still relational contexts that emerge: what's going to happen when they're in a certain spatial configuration, given the states they have? (Something as simple as the per-tick telemetry sketched below would go a long way toward that transparency window.)
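A hypothetical sketch of that kind of telemetry; the field names are invented, and whatever Agence tracks internally is surely different:

```python
import json

def snapshot(agents, t):
    """Record each agent's internal state once per tick, so a player or
    researcher can later trace cause and effect across a playthrough."""
    return {"t": t,
            "agents": [{"id": a["id"],
                        "emotion": a["emotion"],
                        "aggression": a["aggression"],
                        "hunger": a["hunger"]} for a in agents]}

agents = [{"id": i, "emotion": "~", "aggression": 0.2, "hunger": 0.5}
          for i in range(5)]
log = [snapshot(agents, t) for t in range(3)]
print(json.dumps(log[0], indent=2))
```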
Without any transparency into what may be on the back end, if there are thresholds that trigger, say, a fight behavior, it's hard to trace the impact of anything I'm doing. I think it would be interesting for people to play around with that AI just to understand it a little more: to get some of the tools the creators themselves used to train the AI, and to start to reveal some of the back end and the guts of this experience. Because as a story, it's already kind of ambiguous; you're really projecting a lot onto it. Having more access to that information could help elucidate what's actually happening with these different AI brains, especially as they move forward.

I also thought it was quite interesting to hear how the way the AI interacts within these environments could be a way for AI researchers to train reinforcement learning policies. To be precise, the reward function defines the conditions under which you give positive or negative reinforcement, and the policy is the behavior the agent learns under that reward. So you set those reward conditions, train the agents, and then see how they act, and here you get a more storified way of watching that behavior. But again, one playthrough is very anecdotal, and at a certain point you'd almost need a statistical run-through, running the simulation many times, to really know. There are a lot of tools on the back end that they're using, and over time it would be interesting to see whether they could integrate some of those tools into the experience itself, maybe as a setting, though not a default setting. The way the experience is now is great for people to go in with that mystery of not really knowing, and to really interact and play around with it; if you showed the whole back end right away, it would remove a lot of that mystery and intrigue and that sense of play and exploration. But I found myself reaching a point where I wanted a little more context and information.

I think it would actually be interesting to see how this type of data visualization could, over time, inform the cultivation of future AI algorithms, or how to evaluate and test AI on a more experiential level. Because, as Pietro said, they're essentially trying to cultivate very specific behaviors. They have these different dynamics and interactions, and they have to go back to the drawing board to retrain the AI and see whether they can cultivate the interactions they want. They already have the behaviors of balancing, consuming, and fighting, so the agents are very primitive in terms of just those three things. And yet look at the permutations you get from five AI entities moving around, each with different levels at which they're triggered. I metaphorically think of it as different thresholds for whether they're going to be triggered into wanting to consume, wanting to balance, or wanting to fight with each other. Just with those three behaviors, you start to get all sorts of different social dilemmas and social dynamics out of those individuals.
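For the heuristic brains, that threshold metaphor is close to literal. A hedged sketch of what threshold-triggered behavior selection might look like, with thresholds invented for illustration:

```python
def choose_behavior(hunger: float, aggression: float, planet_tilt: float,
                    t_consume: float = 0.6, t_fight: float = 0.7) -> str:
    """Heuristic brain: fixed if-then thresholds pick the behavior.
    A learned brain replaces this whole function with a trained policy
    network whose 'thresholds' are implicit in its weights."""
    if planet_tilt > 0.5:
        return "balance"       # survival first: the world is tipping
    if aggression > t_fight:
        return "fight"
    if hunger > t_consume:
        return "consume"
    return "wander"

print(choose_behavior(hunger=0.8, aggression=0.3, planet_tilt=0.1))  # "consume"
```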
So that was another aspect here: those behaviors operate at the individual level, but there's also a group dynamic. There's a relational component between the different entities that you're trying to pay attention to, to discern what the patterns are. That's another level where it would be great to do some data visualization: what would be the augmented reality way of mapping out the relational dynamics of this community of five AI agents? Some of them will fall off the edge at some point, and maybe that's deliberate, a way in which they've not fully optimized for balancing because they want some agents to die off, and that just becomes part of the tension, or part of how the scoring changes over time, or of the algorithmic shifts you make to bring about a specific behavioral interaction from AI entities you don't have full agency over. So it's a really interesting, profound experiment they're doing here. And the fact that they're releasing the training tools for other people to train the AI brains, and to see what further relational dynamics start to play in, I think that's really interesting as well.

One other point: a recurrent neural network is essentially what would give these agents memory, and there's no such memory here. As I was going through it, it wasn't immediately obvious whether anything I was doing was actually training the AI. Calling them learning bots implies that, over the course of playing, they're learning from your behaviors, but they're already pre-trained, so in my mind it's a little misleading to call them learning or learned bots. The type of AI driving them definitely merits the name; it's just that, in the experience, it would be nice in the future to see what happens if you put in different neural network architectures so the agents can actually learn dynamically.

This reminds me of the experience called Facade. In Facade, you're at a cocktail party with a husband and wife who are fighting, and you use natural language input to interact with these two characters. Part of that experience is a narrative engine that controls how the narrative arc unfolds based on the different contexts, and that reminds me of what they've created here: you reach different states, and based on that state, the experience branches off into different narratives depending on your interactions up to that point. I think it's important to find ways to maintain the narrative tension of a narrative arc while having this open-world exploration you're able to interact with, so you still have the different dynamics unfolding. Having something like a recurrent neural network for memory would imply dynamics where even more agency is given to you as the user and to the entity, so you can have more of an interaction. (A narrative state machine over simulation states, like the sketch below, is the simplest version of that branching structure.)
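A minimal sketch of the pattern; the states, conditions, and ending names here are my own invented stand-ins, not the actual Agence narrative engine:

```python
# A supervisor watches world state and advances acts; the simulation
# itself runs freely underneath.
TRANSITIONS = {
    "act1_equilibrium": lambda w: "act2_growth" if w["flowers"] > 0 else None,
    "act2_growth":      lambda w: ("ending_transcend" if w["gazed_at_flower"]
                                   else "act3_conflict" if w["fights"] > 0
                                   else None),
    "act3_conflict":    lambda w: "ending_deaths" if w["deaths"] >= 4 else None,
}

def advance(state: str, world: dict) -> str:
    nxt = TRANSITIONS.get(state, lambda w: None)(world)
    return nxt or state   # stay in the current act until a condition fires

world = {"flowers": 1, "gazed_at_flower": False, "fights": 0, "deaths": 0}
print(advance("act1_equilibrium", world))  # "act2_growth"
```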
One of the things Andrew Stern told me when I interviewed him about Facade (he's one of its co-creators, with Michael Mateas) is that they wanted to make it around 15 minutes, so it's very short, and they wanted it to have a lot of replay value. I think an experience like this is very similar. It's very short, you can play through it, and the idea is that each time you go through, you're testing out different things. It's through that testing that you learn more about this complex nonlinear system: given this input of your agency, how does that impact how things unfold? So it's really training us to understand, in a more metaphoric way, the nature of these complex nonlinear systems. Given that, it's probably better to stick to the short form, like they've done here, and to run more experiments on how you can throw in different AI to see the different group dynamics you get.

One of the things I'm excited to see in the future is building up trust and rapport with these different entities: interactions where, if you try to do something right away, you can't, but where simple rules, like a trust behavior or skepticism, combine as modules. This is something Andrew Stern, Michael Mateas, and Larry LeBron did on a DARPA experience called Immerse. I talk about it back in episode 293, when I had a chance to actually try it at the Portland VR meetup. It was intended to train soldiers going into areas where people don't speak English, using gestural interactions to build up trust and rapport in order to get information about the whereabouts of somebody. You couldn't just ask right away; you had to do different things to build up trust and rapport, and once you did, you unlocked other behaviors. It would be interesting to explore what it means to build trust, these conditional types of behaviors layered on top of each other, and to explore those different types of architectures as well (see the trust-gating sketch below).

But I'm already excited to see where this goes with the architecture they've created: being able to train new brains and to have this storified environment. I would like to see a little more quantified information about what's happening on the back end. This is, I guess, another aspect of explainable AI: how do you explain what AI does within these different contexts? If it's a neural network, it's not an if-then heuristic, and it's hard even to describe: given this context and this situation, this is the type of behavior you expect. Once you have a number of different states, behaviors, and contexts to map out, it becomes pretty complicated pretty quickly, even with a small number of behaviors and contexts, let alone the complexity of trying to do that for a human being. But this is a good start at keeping it simple. These AI are kind of like pets, virtual pets, and so you can start to flesh out other tools and other ways to interact with these fairly simple and rudimentary AI entities within the context of these experiences.
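The trust-gating idea mentioned above could sit on top of the same machinery: interactions accumulate a rapport score that gates which behaviors are even available to the agent. A hypothetical sketch in the spirit of that Immerse-style design; all names and point values are invented:

```python
class Rapport:
    """Trust accrues from small interactions and unlocks behaviors."""
    POINTS = {"wave": 1, "gift_flower": 3, "grab": -2}

    def __init__(self):
        self.trust = 0

    def observe(self, interaction: str):
        self.trust = max(0, min(10, self.trust + self.POINTS.get(interaction, 0)))

    def available_behaviors(self):
        behaviors = ["watch"]
        if self.trust >= 3:
            behaviors.append("approach")
        if self.trust >= 7:
            behaviors.append("follow_player")   # unlocked only after rapport is built
        return behaviors

r = Rapport()
for act in ["wave", "gift_flower", "gift_flower"]:
    r.observe(act)
print(r.trust, r.available_behaviors())  # 7 ['watch', 'approach', 'follow_player']
```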
The narrative structures are interesting too, just to touch on that for a second. There's the hero's journey: the call to action, pursuing something that doesn't go your way, and then a return. That basic structure was reimagined by Dan Harmon in his story circle, and then there's the Pixar story spine, and Margaret Atwood, Robert McKee: all different narrative theorists who put forth different structures for understanding the basic shape of a story. Essentially, you have a conflict and then it gets resolved, and there are many ways of breaking that into stages. The challenge here is, given this complex nonlinear system and its states, how to be inspired by those stories and see whether you can cultivate behavioral dynamics that people can recognize and project a little bit of themselves into. They talk about a meta layer of meaning that you're able to project onto it through that story lens, seeing the behaviors they've cultivated in the context of the story. It's a little bit of transcending what happens at the individual agent level to see the group dynamics, but also the context of the world. And the context of the world helps set a narrative frame onto which you can start to project what some of those meanings may be, whether it's a love story or a conflict or a battle or whatever the human drama may be.

Right now, there's a very simple communication of emotions: emotional states like pleasure, anger, or confusion. But think about the future, about a more complex dynamic range of emotions that can be evoked, about whether the agents could communicate their emotions to each other, and about what it would mean to have an overall zeitgeist, an emotional field, of these AI agents, where you're trying to invoke anger or happiness or joy or sadness or grief or whatever it ends up being. The emotional states of the AI agents could be an indicating factor in how the experience continues to evolve and create different social dynamics and different stories as well.

One of the things Andrew Stern told me is that in the context of these story worlds, you can start to invoke behaviors where the AI seems more intelligent than it actually is. AI is very limited right now, and within those constraints you can create a larger context where you become a little more forgiving, while also having the ambiguity of a game or of these pets: they're entities that don't feel quite human yet, so you can play around with them and interact. There's the predictive coding theory of neuroscience, which says your brain is constantly making predictions about what it expects; if something does exactly what you expect every time, it gets pretty boring, like you've figured out the algorithm. I think what makes an experience like this so compelling is that you don't actually know what's going to happen, and even the creators don't know. They don't know the strategies for invoking the different endings and whatnot; they just haven't had enough time to play through it all. So there's a certain amount of the creators themselves needing feedback to arrive at a higher-order heuristic: if you do this and that, or try to cultivate this type of behavior, or use this combination of learned agents and heuristic agents, what happens?
I mean, there are so many different variables here that it's really difficult to understand the full potential of this. But I think that's part of what these types of experiences train us to do: to model these different types of complex nonlinear systems. And as we do more iterations ourselves, we start to learn how to interact with them. At one point, Pietro joked that they need to create an AI that's actually playing this experience, so they can learn the best strategies for exploring all the different possibilities. Instead of leaving the agency up to the audience member, a human with the free will and consciousness to explore and do these different things, is there a way to create an AI that steps into that role, so they can run even more simulations and see what dynamics come up? I think they'd have to reach that point to arrive at a more statistical representation, to say, okay, across this many Monte Carlo simulations, this ending happens 5% of the time and that one happens 30% of the time. You start to get more statistical insight into the behaviors, something you can quantify a little more; a sketch of that kind of run-through follows below.

But I think the whole point of an experience like this is not to be trapped within the context of numbers, because AI research already has that, with the graphs researchers already use to optimize. What's unique here is that you're starting to see a more experiential embodiment of these AI entities. You see the behavior, and from that behavior you may be able to discern patterns you wouldn't have been able to see in a graph. That's the interesting part, especially when you start to think about the many dimensions of things interacting with each other, because AI is already working in a higher-dimensional way, mapping out a topology that you can't even hold in your mind because it's a higher-dimensional entity. So for all the relationships happening between these AI entities, you can maybe start to get a more intuitive understanding as you see it play out in the story world.

Now again, it goes back to the challenge of the difference between an anecdote, one anecdotal playthrough, and a representational way of understanding those behaviors beyond a sample size of one. That's the challenge of all AI: trying to see what the behaviors are across a variety of inputs. But I think constrained simulated worlds like this are going to play a very important role in developing AI algorithms so that they can deal with more and more complexity within those environments. The real world is super complex, and it's going to take a long time even to understand how to create neural network architectures that can deal with and model the complexity of the world. By constraining things to these simulated environments, you create repeatable contexts and environments in which to develop the AI.
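That statistical run-through is straightforward in outline: script a player policy (even a random one), run thousands of headless episodes, and tally which ending each one reaches. A minimal sketch with a stand-in simulation; real code would step the full Agence world instead of this fake draw, and the ending names and probabilities here are invented:

```python
import random
from collections import Counter

def simulate_episode(rng: random.Random) -> str:
    """Stand-in for one headless playthrough ending in a labeled outcome."""
    deaths = sum(rng.random() < 0.4 for _ in range(5))  # fake death process
    if deaths == 0 and rng.random() < 0.1:
        return "transcendence"
    if deaths >= 4:
        return "collapse"
    return "baseline"

rng = random.Random(0)
counts = Counter(simulate_episode(rng) for _ in range(10_000))
for ending, n in counts.most_common():
    print(f"{ending}: {n / 10_000:.1%}")
```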
And that's what we've seen with reinforcement learning already with Atari games, where AI algorithms have seen a huge level of innovation and explosion by being trained in constrained simulations, i.e. video games. In this instance, you also have human agency feeding into it, so how did that play in? I think Pietro is right that they'll likely have to start simulating the different interactions a player can take, and from that get more insight into a statistical representation, or maybe even new ways of visualizing these different AI potentials. You can think of the metaphor of quantum mechanics, where there's a potential over all the different possibilities at any given state. How would you actually visualize that as you're playing through? There are different branches you could go down; could you visualize those as well? I think there's actually a lot of room to do all sorts of meta-layer data visualizations on top of an experience like this, and I hope they start to go down that path, because it's going to be interesting, maybe not so much from a storytelling perspective, but certainly for the creators and the people who are trying to train these brains, to get a more intuitive understanding of the different conditions these entities go through as they move through the simulation.

So I'm really excited to see where this goes, because there are a lot of really innovative areas here, not only for virtual reality and interactive storytelling and the role of AI, which is a whole interesting branch in itself, but also for how these VR environments are going to start to help cultivate and train AI in ways that even researchers will find compelling, and will probably be inspired to start playing around with. Maybe they'll tinker around and discover new approaches or new paradigms within this story world environment, and maybe it starts a whole other branch beyond Minecraft and the other ways people are already doing AI research with agents; in this case, to see whether any algorithmic innovation comes out of it.

But the storytelling is the other part. When we think about the authorial control of a linear story, we can really understand that structure. But as we start to introduce new levels of agency, both from the audience and from these agents, we come up against complex nonlinear systems that don't fit any linear arc. So how can you meld what we understand and want from a story, that narrative tension, with an open-world, game-like environment that moves away from linearity? It becomes more about the states of those entities. And it goes beyond object-oriented design, where you have objects spatially laid out in a world, because the behaviors in this experience are so relational: the entities are in relationship to each other and to the world. There are these different conditions they trigger within each other that make it more than a static object-oriented metaphor; it's more of a dynamic tree that's growing and evolving, living and breathing, and the question is how to really make sense of that. And I think that's the world we're living into.
And I think eventually we'll call that intelligence: the way an intelligent entity is able to react dynamically to the conditions and relationships within an environment. Maybe that's the definition of intelligence, and we're starting to simulate it here with an experience like Agence. So I highly recommend people check this out. I think there are actually quite a lot of deeply profound implications here, not only for AI and immersive storytelling, but for the future of this intersection and the future of AI development itself. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.