#743: Storytelling in AR Panel at Sundance New Frontier 2019

I moderated this 90-minute Sundance New Frontier panel discussion about immersive storytelling in AR and the role of virtual characters and conversational interfaces. It included Alice Wroe, who is the creative director of Magic Leap's digital human named Mica; Peter Flaherty, who is the director of The Dial; Ryan Horrigan, Artie's co-founder and CEO; and Ted Schilowitz, who is the Paramount Pictures Futurist in Residence.

We had a wide-ranging discussion about spatial storytelling, the importance of moving your body through space as a fully-engaged participant, the differences between VR and AR in storytelling, the differences between phone-based AR and head-mounted AR, the ethics of AR storytelling, the future trends and challenges for AR as a utility vs AR as a storytelling medium, the role of context for story in AR, the role of emotional context and body language in AR, and finally how virtual characters and conversational interfaces will be a part of the future of interactive storytelling.


Photo courtesy of Anastasia.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So there was a panel that I moderated at Sundance New Frontier called The Second Coming of AR, though I would have probably called it something like storytelling within augmented reality, since a big part of our conversation covered conversational interfaces and the role of AI characters. So Bruce Wooden, also known as Somatic Bruce within the VR community, he's somebody who helped start the Silicon Valley Virtual Reality meetup, helped do the first conference, and helped found AltspaceVR, which eventually went out of business and then was passed over to Microsoft, which resuscitated it, and it still carries on. But now he's at 60AI and he's helping to do developer relations there, but he was helping to curate all these different panels at Sundance. And so he asked me to help moderate this panel, which featured Alice Wroe. She's the creative director of Mica at Magic Leap. Peter Flaherty, he did The Dial, which was being featured at Sundance New Frontier. Ryan Horrigan, he used to work at Felix and Paul, now he's co-founder and CEO of Artie, which is really focusing on avatars and conversational interfaces within augmented reality. And then Ted Schilowitz, who is now the futurist at Paramount, who's really looking at the future of immersive technologies in the long term. So we cover all of the different dimensions of storytelling in AR, as well as conversational interfaces, on today's episode of the Voices of VR podcast. So this panel discussion with myself and Alice, Peter, Ryan, and Ted happened on Monday, January 28th, 2019 at the Sundance Film Festival in Park City, Utah. So with that, let's go ahead and dive right in. All right, well, welcome, everybody. This panel discussion is The Second Coming of AR. I do the Voices of VR podcast. So I've focused a lot on VR for the last five years.
I've done over 1,000 interviews in this space, looking at virtual reality as well as augmented reality. I've got a lot of interviews that I'll be releasing as well. And so I'm excited today to dive into some of what is happening in terms of storytelling and augmented reality, as well as what's happening with conversational interfaces and AI characters. So we'll be covering all of that. But first, I want to give an opportunity for each of my panelists to introduce themselves and a little bit about what they're working on in the realm of AR.

[00:02:29.506] Alice Wroe: Hello, my name is Alice Wroe and I'm so happy to be here. Thank you so much. I am the creative director of Mica's personality. Mica is Magic Leap's digital human. And I feel like I should tell you that even a year and a half ago, I had never worn a Magic Leap device. I'd never worn any AR or VR. I had very little interest in technology, actually. And I was an assistant curator at Tate Modern. I ran a project called Her Story, which uses feminist art to engage people with women's history. And those were the things that made me feel tall, and like I had some momentum, and like I was physically present. Those are the reasons why I can speak to you with some level of confidence: activism and feminism, art history. And so, when I got the opportunity to try the Magic Leap device through my feminist and art work, I was kind of not really fussed. I was happy to be asked, but I wasn't particularly excited. And then when I put the device on, it was genuinely like I'd been kicked in the stomach in a really good way. It was as if all of these strands that made me feel present, and made me feel like my life had meaning, and made me move forward with urgency, had come together in this medium that felt totally new, like it made sense, and like we could step into the future that I wanted to live in conceptually, creatively, social justice-y. So when I got the opportunity to work on Mica's personality, the digital human, I was utterly honored. She is meant to be empowered. She's meant to be empowering. Some of you may have read that she's a virtual assistant. She totally is not. Mica will not give you answers, but she will help you think of better questions. So I am so happy to speak about Mica with you all today, and I'll talk about storytelling and creativity. I won't talk lots about technology, and I hope you will understand that. But yeah, that is me.

[00:04:17.597] Kent Bye: Thank you.

[00:04:18.259] Peter Flaherty: Yeah. Hi, my name's Peter Flaherty, and I come from a background of art making and large-scale integrated media performance, and in the last six years or so, VR and now AR. So my background has always focused on storytelling. A lot of the work has been presented in galleries, museums, or on stage, so there's always been a component of liveness to it, which is how I connect those different threads and join them up with AR and VR. That liveness produces a natural level of interactivity or immersion in the work. I've shown work in venues ranging from international arts festivals to Broadway to the Metropolitan Opera in New York, and now in the VR and AR world, where I really actually feel most at home. And I'm working on a project here at the festival that's called The Dial, which is next door in the other New Frontier space, New Frontier Central, just next to Mica. We're neighbors. And The Dial is a project that combines augmented reality with projection mapping. It is housed inside of a glowing cube, and it's an interactive drama where you, as the participant, are able to control time in the story. So by moving your body, you can move time forward, or you can move time backwards. So you're essentially able to edit your own version of how the events play out. And I'm very excited to be here. Awesome. Thank you.

[00:05:44.007] Ryan Horrigan: My name is Ryan Horrigan. I'm co-founder and CEO of a startup called Artie. We're an artificial intelligence company focused on making AI characters. You could call them virtual beings, virtual humans, AI avatars. There's all kinds of names for these things, but they're quite interesting, and we're using machine learning, computer vision, sentiment analysis, and natural language processing, and we're bringing that into real-time rendered experiences for AR and VR. My background was in VR prior to this. I was the Chief Content Officer of Felix and Paul Studios. One of the pieces I worked on in my prior job is here at the festival, Traveling While Black, and then also actually a second piece called Marshall from Detroit that features Eminem. Prior to that, I was a studio executive in Hollywood for over a decade at Paramount, Fox, and New Regency, and got my start at CAA. I've always sort of been looking for the convergence of technology and storytelling, and that's sort of what led me to leave Hollywood and the traditional film space. And every year here at New Frontier, the progress that's being made is kind of just miraculous, and what we're seeing this year compared to what I saw maybe four or five years ago is just incredible. I'm glad that you're all here to share that with us.

[00:06:51.073] Ted Schilowitz: Hi, everyone. I'm Ted. I have an unusual job and an unusual pursuit. The way I like to describe it is I work in the land of pretend. So I work for a big movie studio, Paramount. And the job of a movie studio is to explore the art and commerce of pretending, which is entertainment. And I have effectively a pretend job inside a pretend industry, which is that I'm a futurist for the movie studio. So I get to explore and learn as a full-time pursuit. And I look out for things that are not just one or two turns away, but many turns away, looking at the evolution of the technology that we use as vehicles for creativity and all the things that relate to that: social impact, social justice, storytelling in all forms. My sort of goal is not to be restricted by what is available today or even what's coming tomorrow, but really studying much deeper out. So we actually study the human equation. What will humans adapt to when they adapt to the next technology? What will get thrown away? What will be the kind of, like, also-ran, didn't mean anything? And what will become so meaningful that it'll replace almost everything we do today? So we kind of study backward to look forward. We look at things that used to be highly relevant and highly important in our lives and society, and what has now moved to the museum or the trash heap, what is current, and where it is going. So that's kind of what I get to do. Kent and I spend a lot of time together talking about this stuff, because we both do this for a living.

[00:08:35.819] Kent Bye: Cool, yeah, so that was a good overview, in terms of there's a couple people that are working on virtual assistants and AI characters, but also with the AR technologies. And I think it'd probably be good to start with maybe setting a baseline in terms of what's happening in the ecosystem right now in terms of the devices that are out there, because I think that the technology is driving what is even possible with this AR, spatial computing, or narrative storytelling. So the way I think of it, at least, is that in the video game industry, you have mobile, you have console, and you have PC, where you have something that's tethered, you have something that's maybe standalone, and then you have something that's completely mobile. And I think AR is similar, in the sense that you have phone-based AR, where people have ARKit- and ARCore-enabled devices in their pockets that let them look at the world as through a window. And then Meta and other tethered solutions are sort of falling by the wayside, so we're kind of left with both the HoloLens and Magic Leap, which are head-mounted, but there are going to be potentially other ones in the future: something that's hands-free that allows you to have a contextual awareness of the environment as you're walking around. So maybe you could each talk about where your center of gravity is in terms of which devices you're focusing on right now, just to get a better sense of the ecosystem that's out there.

[00:09:47.159] Alice Wroe: OK, all right. So yeah, as I said, I work for Magic Leap. And I told you about that kicked-in-the-stomach moment that I had. And for me, the reason why that platform is totally game-changing is because when I was younger, stories happened to me, right? So I couldn't be confident in my own life if it hadn't been for Anne of Green Gables, or if it hadn't been for Matilda by Roald Dahl. Stories were so important to my formation and who I am. But they happened to me. I kind of just put my head in there, let their story wash over me, and then there was this separate thing that was my life. Whereas with this sort of device, stories are something that can happen with us and happen because of us. So with Magic Leap, I'll give you an example: there was a piece made by the band Sigur Rós in collaboration with Magic Leap, with Steve Mangia and Mike Tucker. And when I walked into the space, there were all of these kind of organic forms that were rippling up around me. And they would be sitting on Kent's knee, or they could be over there in the gangway. And as I would approach them, they would respond to my touch. And these noises, these sounds would come out. And it was generating music because I was touching them and getting close to them, in potentially my home or on this stage. And so my body became important. And I was being a cultural producer with the music. And so what I love about the device and that platform, in terms of collaborative culture, is that we're not saying stories are important, they're over here, my life is over here, and this is here. We're putting them together in a way that empowers me as the user and enables me to be in touch with the creation. And so from a creative point of view, it's mind-blowing, because I suddenly get to be generating culture and be part of it. And that's really exciting.

[00:11:33.375] Peter Flaherty: Well, I'm an independent creator, so I'm quite promiscuous with the various devices. I think, from my point of view, there's no one clear leader or winner in the device game. The way I think of it is as application to a desired outcome. So for me, that's narrative. And so based on the story I'm trying to tell, mobile AR may be more appropriate, or a head-mounted display may be more appropriate. And the way I'm thinking about it in the current piece I made, The Dial, which is projection mapping and AR in service of a cinematic, dramatic narrative, it made sense to me that I wanted a very, very wide field of view for my participants, because I want them to be able to take in both the augmented frame and also the actual experience, the projection-mapped table and room that you're inside, for those of you who have seen it. It's inside of a large glowing cube, and there's a table with a house on it, which are projection-mapped, and the story happens around that. So for me it was very important that you be able to really take that in and not always have to see it through the digital pane. So the way we did that was with mobile AR for this particular piece, and I think the frame of the phone interacting with the interactive space you're actually physically in was a really interesting way to reference cinema. And so we see people engaging with it as though they themselves are cinematographers. In fact, when Kent did it, I saw some exquisite cinematography taking place, with great attention to detail of the foreground and background, which is very exciting for me because in AR, of course, you don't get to choose the frame as a creator. So what we tried to create was opportunity. And the opportunity is what you leave out there like a trail of breadcrumbs, in fact many trails of breadcrumbs, for your viewer to find. And they start to find it as they move through the piece, and they find it in different ways.
And I love being surprised when they do find different ways to move through it. But then I also had many of Alice's colleagues from Magic Leap go through it and say, well, what would this be like in a Magic Leap? It would be so interesting. I totally agree. I would love to see what it looks like from the lens of a Magic Leap, and I feel like it would change the experience in a really interesting way. So for me, I think as I move forward, the next piece I'll do, I think I probably will do a large-scale piece with a head-mounted display to start to unpack what are the advantages and limitations. Because for me, with new technology, that is the point of leverage for you as a creator: figuring out what can this do, what can't it do, and how do I make not only its advantages work to my benefit, but how do I make its disadvantages work to my benefit as well.

[00:14:23.167] Kent Bye: I just wanted to jump in and add one piece. As I went to LeapCon and was covering it, I was talking to a number of creators, and what I heard from creators was that you may have one installation that's like a destination, where you have to actually go into the store. But in order to get people there, you may have a phone-based AR experience. And the advantage of something like both phone-based AR and Magic Leap is that you can have an experience that gracefully degrades according to however you're experiencing it. So you could create a volumetric capture experience, be able to see a part of it within phone-based AR, but then eventually, because not everybody's going to have access to the Magic Leaps, it may be sort of a launch of some sort of marketing campaign, or a destination where you have to go see it. And then once you're there, you have the full hands-free spatial computing experience of that. But because phone-based AR is so widely distributed, there are so many phones out there that are enabled, it's allowing the larger immersive community to sort of bootstrap itself into spatial computing by giving these different experiences on the phone, but eventually being able to do something much more high-fidelity on the Magic Leap. So I just wanted to add that as well.

[00:15:29.013] Ryan Horrigan: So at Artie, we're agnostic relative to head-mounted displays, AR, VR. I will say we're focusing more on mobile AR ourselves with some of the conversations we're having. And the reason for that is I think there's lots of smart people, and I would say the majority of smart people in this space are focusing on the head-mounted display space and sort of the future potential of that, which is ultimately where we're going to arrive. But I kind of felt like, personally, I didn't see enough people outside of the large tech companies focused on mobile AR and sort of making that more accessible and creating original content there that wasn't just for selling you something, you know. So I agree with Kent that it's a nice gateway to a head-mounted display experience if it's the thing that kind of gets you to show up to the location-based place where you're going to put on a Magic Leap or some other device. But I also noticed about AR that there's a whole new thought process that has to go into what storytelling is, and it's quite different from VR. Obviously, in VR you can design an entire world, but in AR, you're kind of stuck with all these various disparate locations that all the users have, and you can't usually account for them unless you do have that one specific location-based space. So when it came to mobile AR, we became more interested in characters without the proscenium and without sort of the stage, if you will. And because of that, we decided to focus on characters you could talk to and interact with in a human-like way, at least in these early stages. And for me, that's what's been most interesting about mobile AR: being able to communicate with characters and seeing the sort of dawn of these different disciplines or sub-disciplines of AI informing how you can do that.
And the other thing that I think is really important, and I'm getting a little bit more into distribution, is that most entertainment-driven AR experiences are stuck in the app ecosystem of the App Store or Google Play, and unless they're huge IP, they struggle to gain traction. So a lot of the AR experiences or things that have been gaining traction there are more so utilities, like I want to envision the furniture from whichever store in my living room. Those are probably more widely downloaded than some of the original entertainment content. And one of the things that we're trying to solve for is to make AR content on mobile shareable in the same way that you share photos and videos and news articles and even your own thoughts, which is on social media. We believe that most of the entertainment content you experience today, outside of, say, SVOD apps, is being shared and discovered through the channels that you are already on. So we're really bullish on that being a solve for getting wider distribution on mobile.

[00:17:47.283] Ted Schilowitz: So I guess my perspective on this, and you and I have talked about this at length, is, before I start telling you this, this shouldn't be taken as a criticism, even though it will sound like one. It should be taken as a very, very positive sort of evolution of how we get to what we would consider the next end game. So the sort of area or pursuit of study that I spend a lot of cycles in is something I call the curve from wrong to right. And in my opinion, we are still very firmly on the wrong side of that curve when it comes to the kind of technology we're using today, its abilities, its strengths and weaknesses. There are more weaknesses than strengths. That doesn't mean that it's going to end here. It just means that this is an interesting sort of place in time, the 2020 time frame is a nice sort of round number to talk about, where, if you look at this pendulum, we're trying to get to a spot where there's an absolute massive recognition that this is the thing I want to use more than the last things I used. We're still way over here. They're heavy, they're clunky, they're fairly low resolution, they're problematic in a lot of ways. But the essence of all these devices is absolutely on the right track. And they are driving us to a really interesting place where we can start to see that entertainment in all of its forms can evolve accordingly. But it's still mostly wrong. And part of the challenge, a good example of what's wrong about it, is that it's highly throughput-constrained. So an example at this festival is that there are very few people, by the numbers, that can get into these experiences compared to the mass audience of a motion picture, because we have to build these very specialized devices and put people through them one or two at a time. So how do you solve for that, right?
And a lot of the stuff we think about is, imagine a day when, just like you all likely have a smartphone, you all have your own dedicated device, and you will walk into a space like a film festival, and it will just be a giant empty warehouse, gigantic, like stadium size. You could put 5,000 people in there, all with their own device, and you could pick from anything, there it is, floating in space, of the entertainment experience you want, and you will not have to wait in any kind of line to get into the experience, which is what we have today. So we're kind of thinking about that, and Magic Leap is a good example of the beginnings of how to solve for that. I would say their device is really intriguing and impressive in lots of ways. And it gets mischaracterized and misrepresented because people have such a small amount of touch time with it that they actually can't understand what makes it so magical. You actually need many, many hours in the device to start to crack your brain into why this is so special. So at any point, if you have access to one of these devices, there's lots of things to try that are really valuable, but the one thing you should try is a game that Weta built called Dr. Grordbort's Invaders. It's a multi, multi-hour experience where literally creatures will be popping out from behind your couch, in through the walls, literally around you, and start to break down that barrier of what's real and what's not real. And until you can do it for at least three hours, you won't get it. That's what's so interesting about it. The short form stuff just kind of sprinkles your ideas about, look how cool it is. It's like this transparent world. But until you live it for a while, you won't know what I'm talking about. You really will get it after you live it for a while.

[00:21:33.596] Alice Wroe: Ted, I like you so much because you're being so wonderful, but I do disagree with you a little bit. Oh, good. That's why it's a panel. Yeah, I think you're totally right about people doing it. If you get a chance to do the device, blimmin' do it. Because I told you about my experience. My experience was very instantaneous. And there's a thing called Create that's on the Magic Leap. I don't know if you've done it. And in your home, this is the special thing about it. When you're talking about the device as being the way to get through to that kind of Magic Leap moment, or whatever you want to call it, for me, it's about letting the device be, as you say, trying it and using it in your home. So that it's not this revered, big cultural moment, but it slips into your life. And so with Create, I love doing this in a domestic setting, right? So you take the clicker, and you can create these big swoops of color in space. So I was doing it a little bit, and then I realized, oh, I could make a cocoon cave, right? And I could get my whole body cocooned in this color that was three-dimensional. And then I was like, ah, if I walk over there, it doesn't follow me. I can look at it. And then I was calling my colleagues, like, come see what I made, come see what I made. And it was physical in space. And so for me, it was instant. It was like, OK, the thing about this is it knows where I am. It relates to me. And I can make it. And it will stand alone. And it can twinkle on my mantelpiece or alongside my door. So it was quick for me, but I think you're totally right. If people don't try it, it's quite an intimidating thing. As someone that didn't come from a tech background, it is intimidating. But I think when artists put it on, they get it, generally. But I think that people should do Dr. Grordbort's too.

[00:23:06.428] Ted Schilowitz: And the way you're talking about it is, you have a much more intimate understanding of it, because you're with it all the time, right?

[00:23:11.491] Alice Wroe: But I didn't then, at the beginning. I was a novice to all this, with no tech background.

[00:23:15.873] Ted Schilowitz: Right, but the idea is that the longer you spend with it, there's this kind of intimacy that is created, like when you build that world, and it's not just there for a second and gone, it's there. And you can, like, have it. And that time that you spend with your brain allowing this kind of mixed reality world to happen, it takes longer than people, I think, realize to understand the power of it. That's kind of what I'm saying.

[00:23:38.185] Kent Bye: Yeah, and I do think that both virtual reality and augmented reality represent this spatial computing paradigm shift. That's what makes it so interesting and compelling to me, because I do think there's so many different philosophical shifts that are bringing this in, and just having computing that's much more natural and intuitive. One of the things that I see in the difference between VR and AR is that in VR, you're able to completely switch your context into a completely new context. But in AR, you're surrounded by your existing context. And then you're overlaying either dimensions of new context or trying to switch that context. But you're still in the center of gravity of being in the real world, in that context that you're in. And so AR in particular, I think, is going to have to be contextually aware. And as we move forward, it's going to be trying to establish what context you're in, or looking at the environment that's around you, and then trying to customize the experience based upon your environment and your context, which I think is a terribly challenging and difficult problem that we still have to go through many iterations on to see what exactly that looks like. But I'm just curious to hear some of your initial thoughts on this challenge of both detecting the context but also overlaying information on top of the location and environment that you're at, which makes it different than just having a virtual reality experience, because you're in that context. I'm just curious to hear some of your explorations in that.

[00:24:56.065] Ted Schilowitz: Well, it'll be interesting to see if you guys agree with this thesis or not. Think about screen behavior, the way we use screens, and look at the VR paradigm discussion and the AR mixed reality sort of discussion. The closest kind of parallel, in terms of wide mainstream distribution, to VR would be going to a dedicated space like a motion picture theater, where you're trying to leave everything behind. The whole real world is trying to be left behind, right? It's dark, it's dedicated, you're there to escape. And what's interesting when you study it, now, you would not be a typical audience, right? You're gonna want that behavior and do that behavior more than the normal American. But in America, if you just look at the statistics, in today's world, people go on average less than five times a year to a motion picture theater now. So you're talking about, at best, 15 hours a year of that kind of experience. Whereas if you look at what we would commonly refer to as today's version of mixed reality, screen behavior, what you're doing here, what you're doing here, like some of you are doing it right now, that's statistically 10 to 12 hours a day. So: 15 hours a year versus 10-plus hours a day. That's kind of where I land on this thesis, that virtual reality will probably be the sideshow. It's still going to be a multi-billion dollar business. It's going to be very important for lots of different verticals, but it's not the main attraction.
Augmented or mixed reality will more than likely be the main attraction, simply because of what we know about us as humans: that in today's modern world, we are touching a mixed reality universe, where we're holding a screen with us or around us but still have an understanding of what's going on in the real world, 10-plus hours a day now. So when we get to start to wear it, when it gets nimble enough and smart enough that we don't have to hold it or laptop it anymore, it kind of makes sense that that will be the main attraction. Which means, from an economic standpoint, that will likely be a multi-trillion dollar business worldwide. Whereas the VR cover-your-face world, I use the term box-on-face, will likely only be in the billions of dollars worldwide. So that's my sort of overall. I don't know if you guys agree or disagree with that.

[00:27:13.981] Kent Bye: I'll just pipe in then to say quickly that that may not be true. Good, now here we go. It could be that you're able to do way more amazing things in VR, and you're able to have way more compelling social interactions than you could ever have in the real world. Worried about your podcast? No, no, no. I've been thinking about renaming my podcast Voices of XR, but I've been very resistant. But I'm just saying, I would say that that's the common, like a lot of people say that. But it may not be true. It may be that there's way more compelling things in VR that we haven't even discovered. But because the panel is the second coming of VR, or the second coming of AR. You'll get it right eventually. Very confusing. I want to focus on the AR and the contextual aspects, but I would just say it's open to debate.

[00:27:56.718] Ted Schilowitz: But if the device itself gives you both, right? So if you look at a mixed reality device like a Magic Leap or a HoloLens and sort of extrapolate X a many years out, knowing that it can do mixed and it can do fully cover your world, that's the more powerful device than just the box with a screen which can only give you the

[00:28:16.202] Ryan Horrigan: Yeah, I agree, but you're also talking about a difference in display type. Today, VR is screen-based. It's an LED or OLED screen, and what you're talking about, what we're trying to get to, is more like projecting light into your retina, bouncing it off of glass. So I would agree that that's probably the display technique that will win the day. And I do believe that you're right, that it will do both. It will do VR and AR, and it's just a spectrum of opacity. It's a spectrum of pixels. You're either at 100% or you're at less than that in your AR. But I do kind of agree. Logically, what you're saying makes sense. But I do agree it's too early to tell. And I think if it becomes more VR than AR, it's because we're in Ready Player One, and we're all sitting at home. And the virtual world is just much more interesting than our real world. And maybe we all don't go to an office anymore, and we work remotely. Now, I kind of don't love that future, to be honest with you. I don't either, by the way. Neither. There's a terrible Bruce Willis movie called Replicants, and if you guys have seen it, I don't want to live in that world necessarily, or the Ready Player One baseline reality world either, or certainly not the Matrix with the thing. I don't want to do that. But the thing that's interesting about AR, and there's plenty of great stuff about VR, it's so much more transportive, but what I like about AR is you can bring people into physical spaces socially. Now, VR can be social remotely, like we can all connect together over the internet and have a social experience with our avatars, and we can be different characters, and that's powerful. But I also think AR is interesting even today with phones. We can all collide.
Like, for example, a movie could put one of its characters in Times Square. Marvel could put a character in Times Square on the Saturday the movie comes out, and then all the super fans will hear about it on social media, and they'll go to Times Square to see that character in AR and see what he's going to do. And it's like a flash mob shows up. That hasn't fully happened yet. Maybe a little bit with Pokemon Go, that kind of stuff happens. I think that's really exciting with AR: to bring people out of their homes into physical spaces and tell stories on top of reality. The only challenge is, can you contextualize reality? And sometimes you may want to. Like, hey, I'm going to give this physical space context within the story I'm telling. And other times, it doesn't matter. It's mood. You don't need context. Like The Dial, right? You tell that story on a white table, and you've created a house. You create the context with the projection mapping and the phone, but the space itself doesn't need context, so I don't need to know why I'm in that room, you know? So it really just depends on what you're trying to achieve. It's gonna be interesting, that's all I can say.

[00:30:34.918] Peter Flaherty: Yeah, I mean, I think in terms of the growth curve, I'm inclined to agree with Ted. Because we have devices in our pocket that not only are AR capable, but deliver on the full promise of the medium, full six-degrees-of-freedom AR, I think that curve will probably leapfrog VR, but then VR will tail back up and take its place in the entertainment spectrum. That's my hunch. We live to be proven wrong in this world, but that's my hunch. But when it comes to the idea of context, I feel like Kent described the physical context question extremely well. But then when you go to the place that Ryan was just talking about, where you're trying to, for example, tell a story in public, then you suddenly have emotional context, which for me becomes an even bigger thing that you have to grapple with. So first you have to deal with people coming and going, and objects and buildings and things like that. Where am I? What's going on around me? But more so for me: how do I feel today, Monday, in Park City? And how do I feel about all these people that are around me, and how we're interacting, whether it's meaningful or casual? So trying to figure out how to place a story inside of those worlds, I think, is a really interesting, challenging problem, and one where, and I think this is a lesson that I learned really early in VR, you can't control the participant. It is foolish to try to control the participant. We can come up with all kinds of tricks to try to do it. And in the early days, when you saw film directors migrating into this world, it was always like, well, we'll make a sound come from behind them, and that will make them turn around. No. They're not even going to be focused on the thing you thought they were focused on, the thing you wanted the sound to distract them from.
They're watching something else that you didn't even predict they'd be interested in, or just sort of looking up into space at some other thing you created, which is not a problem. It's valid. That's the whole point. You have to remove the frame. And it sort of opens the question of who is creating these experiences, and how. What is the right knowledge base for building these contexts? Which is, for me, fascinating, because in talking to Alice over the course of the week, she comes from a very different world. And so how she's thinking about constructing every detail of Micah, of this character, of the world that she lives in, is informed by a background that has nothing to do with filmmaking, which I'm still convinced is only one small slice of the pie in terms of who really has the best input in this space. There are people from literature, for example. I worked on a great VR piece where the writer was a novelist, and it was phenomenal, because novelists are used to building worlds that are full of details. Screenwriters are used to writing scenes that are, you know, a hundred words long. They don't build out the details, because no one wants to waste the time reading it, you know? So it's sort of interesting. What is an immersive theater artist making in this space? What is a philosopher making in this space? And then you kind of bring the context question full circle. So it goes: physical context is the core of the onion, maybe, because that is the baseline reality you have to navigate. But the bigger question is all this other stuff that's going on, and how it makes you feel, and how it makes you engage, and, if it does get pervasive, where you are spending that amount of time in it, how do you cope with that emotional component over that long period of time?

[00:34:04.231] Alice Wroe: I think that's so interesting. And I obviously can't speak to the trends of entertainment, and wouldn't try to, and don't necessarily feel compelled to. But what I know about is context and feeling and the emotional connection. And with Micah, I would love for you, if you have a chance, to go and meet her, encounter her in the other space. When you walk into the room, I sometimes get to sit in the corner and watch people, and you can have the most stony-faced tech journalist that wants to hate it, and they may still, they may come out saying they don't like it, but when she smiles at them for the first time, they all go... and they can't help it, because she is sat there with you and there is that emotional connection. And that is what Micah is all about: being present with you. And at the very beginning, you have this reciprocal moment of looking, and she is as interested in you, the participant, as you are in her. She's not going to let you visually consume her and look away, like art history has had us do for centuries. She's going to look right back at you and be as present with you as you are with her. And so what the context allows is for that emotional connection. And that's what this platform allows for. And it just fills me with utter awe and excitement for the future. The future of our storytelling is that we can emotionally connect and feel seen. And that's so special to me.

[00:35:19.382] Ted Schilowitz: I remember the first time, when I was down in Florida, seeing it in its early prototype stages. And it wasn't the smile moment that got me. There was a moment where she had like scorn. I got a little too close to her, and she kind of went, what are you doing? And it was actually quite shocking. I stepped back and I'm like, oh, sorry, I didn't realize that. And you kind of forget she's a digital character, which actually brings up this interesting ethical discussion about what are we actually doing to ourselves here? We're building these things that can act and react like actual humans, which is kind of what you were touching on as well. And there's all kinds of amazing, positive, wonderful things. But there's also this really interesting ethical kernel of, are we putting enough cycles into the ethics to know that this is actually okay to do this kind of stuff? It's really interesting.

[00:36:07.078] Alice Wroe: And that's why, in my opinion, if we are going to humanize AI systems, if we're gonna do that, we have to treat them with respect. Because children are going to grow up hearing me be like, "Shut off the music, Alexa," and what is my kid gonna think? Oh, you just speak to women like that? If we gender these assistants, of which Micah is not, I can't say that enough, we gotta think about how we relate to them, because we are teaching each other how we're gonna relate to each other. So I think that's vital.

[00:36:34.021] Ted Schilowitz: Yeah, when I was at CES a couple weeks ago, I had this slight little odd epiphany as I was walking through the halls. And I noticed that they relegated all the AI and robotics to the very back of the South Hall. And I was thinking to myself, I wonder how the robots feel. And there are thousands of them, and all these different companies building different robots. And they're all kind of, like, literally pushed all the way to the back. Like, yeah, I saw a movie like this once. Like, they're not going to be happy that they're in the back. And next year, they're going to, like, take over and come take it from us. And it was a very odd sort of thing, you know? Because, like, that's real. Like, it could happen, you know?

[00:37:09.103] Kent Bye: Well, I want to slip into the ethical thread here, because I think there's another component, because you're being invited into people's homes, and you're literally rewiring their memories of their homes, and so what are the ethical responsibilities that you have for having a murder scene in your living room, which now is going to be a memory that you may always have?

[00:37:29.003] Ted Schilowitz: And if you want to try it, on the HoloLens there's a piece called Fragments, which I don't know if you've played. It's a multi-hour crime narrative that literally takes place in your home. You're the detective, and you have to figure out the crime. But the characters, over time, start to kind of embody the fact that they're really in your house. You're definitely doing entertainment in a different way at that point.

[00:37:49.636] Alice Wroe: Do you mean the ethics you have to yourself because you've got to live in that space?

[00:37:52.152] Kent Bye: Well, as creators, mostly as creators. So if you're going to have super traumatic types of horror experiences, then there's a bit of like, do people know what they're getting into before they see this? And now all of a sudden, they're going to perhaps have a permanent memory located in their house, because there's a connection between location and memory. So you're being invited into these locations, and so there's a bit of ethical open questions in terms of, as creators, what are your responsibilities to be mindful of disclosure, but also making sure that people are consenting to having these types of experiences within the context of their home?

[00:38:26.178] Ryan Horrigan: I think while this medium or these media are young, we probably need some sort of a, I don't know if it's a rating system, but we need to warn people, like, hey, this is what you're getting into, because they don't know, and if they haven't tried a device before, they don't really know what sort of impression or emotion they're going to be left with unless they're using it all the time. Like maybe this group here on the stage we kind of know what we're in for already, but for the casual user they don't yet. So I think we should be mindful of that and right now there's no rating system that I've seen or there's no sort of preamble that tells you kind of what you're about to see is graphic or... sexual in nature or anything like that, but yet the content is being made, and I think also as kids start to play with these devices, that's probably important. I have a one-year-old kid, and I'm sure he'll be doing AR stuff, I'm guessing, right? So I want to make sure that he's not scared of his own home.

[00:39:13.125] Alice Wroe: My question to Ted, though: in film... I was going to say "in the early days of film," but that's not me presuming you were around in the early days of film.

[00:39:21.209] Ted Schilowitz: I was definitely around, trust me.

[00:39:23.185] Alice Wroe: OK, good. Did people ask these same questions in film, though? Because you were exposed to them in your house.

[00:39:27.488] Ted Schilowitz: Absolutely. Yeah. But as the screens get more powerful, it's that level of responsibility, right? So you could ask a question of: OK, you're going to watch a very intense horror movie, Rosemary's Baby back in my day, right, in your living room, not at a movie theater. Are you OK with that? OK. But now, let's say some of you had an eight-year-old son or daughter. You're going to ask them a very legitimate question: are you OK watching a murder scene play out on the living room couch? And at some point, with what the power of these screens can do, your brain as an eight-year-old, probably also as an adult, may not be able to fully know: did it really happen or not? That's an interesting question, right? Did I do this? And should we do it? And we're kind of leading ourselves to water anyway, because if you watch what kids are doing with their modern video game experiences, they have very, very large screens in their homes, they're sitting fairly close to them, and they're engaging in hyper-violent entertainment for many, many hours and building a whole life around that kind of entertainment. But there's still one degree of psychological separation from what we call full simulation. It's still a screen. You kind of still know it's a screen. Part of your brain knows, even if you're super absorbed in it, that it's not actually real. But when I played Dr. G, when I played Fragments, I could start to get a sense, after a long period of time, and that's why I said time matters, that your brain kind of forgets: is it real or not real? Now, the foibles of the devices today still tell most of my brain I'm still wearing this damn thing on my face. But when it gets to the point where the resolution is so high and the device is so light that you literally forget you're wearing it, that's where the ethics start to come into play. Because you're really building reality.
You're building a digital reality in an office space, in a living room, in any kind of space, that creates this kind of simulation behavior. And it kind of gets back to that interesting question: where do we go? Where's the bigger play? Humans like full simulation behavior. We like to go to theme parks sometimes. We like to go on those giant simulators, which are basically VR at scale. But we only do it sometimes. We like mixed reality, grounded: here's a screen, but I still know I'm in the real world, and my feet are here, and my butt is here, and everything is here, literally almost all the time now. So when we put that into the context of mixed reality entertainment, it's amazing, because it's going to be amazing, like Micah's amazing. But it's also terrifying if you don't know what you're getting into. So it's just interesting to think about.

[00:42:13.731] Kent Bye: Well, I wanted to cover storytelling and conversational AI and then open it up to the audience for the last 20 or 30 minutes. So get your questions ready in your mind, and I'll take some questions here at the end. So, storytelling in augmented reality. The way I think about it is that the thing about AR is that your body's moving through space. I've heard people talk about VR experiences where they notice that there's often a teleportation mechanic and people stand pretty still. Even though they're locomoting in virtual space, they're not moving their physical bodies around as much. There are some experiences that do a great job of amplifying that, but more often, you learn to avoid bumping into the Chaperone bounds, and so you end up just not moving at all. In augmented reality, some of the experiences I've had, especially on Magic Leap, are really encouraging you to move your body through space, because you know how to deal with walls. You see the wall, and you've spent your entire life not running into walls, so you're OK with moving your body through space. And so I'm seeing how immersive theater, as a form of spatial storytelling, of actually moving your body through a space, is starting to prototype some of the best practices for what it means to move your body through space and tell a story. And so I'm curious to hear from each of you how you think about bodies moving through space and how that relates to storytelling within AR.

[00:43:33.676] Alice Wroe: Yeah, I was so happy when you did the experience at LeapCon and you described it as immersive theatre. And when you meet Micah, it is really special because it's the minute that I told you about at the beginning where she really returns your gaze and you've got that emotional connection and that's powerful and that moment of reciprocal looking. But there's another bit in the experience, and I don't want to give it away, but I'm going to nod to it, where she gestures for you to... Okay, I'm actually going to tell you. She gestures for you to hang a real physical frame on a real physical hook. But obviously, she is not physically there. She's digitally there. So you, the user, pick up the frame, and you hang it on the wall. And so in terms of the kind of playfulness with Siri and Alexa, you know, whereas they might switch on your lights, you are changing Micah's physical space. And at first I thought it was just really cool because then she reveals an artwork, a digital artwork in that frame, and I really like the interplay of physical and digital. But then I realized what was powerful about it was that as Micah stands up to show you this artwork, you the user have also stood up. to hang the frame. And there's this thing about sharing space with Micah and moving through space with Micah that I had not experienced in any other way. And I think that is what's special, is that you are both, her and me as the participants, responding to physical space. So we both are responding to that frame going up. We're both looking at that artwork together. To look at an artwork with Micah is just So special, because you're engaging with culture together. And so I think it is really exciting to negotiate space, real physical and also virtual space, with a character. And the emotional connection that is possible because of that shared space, I think is so exciting.

[00:45:13.291] Peter Flaherty: Yeah, I have a lot to say about this one. So in the projects I've worked on, kinematics, motion through space, in an emotional narrative context, are always really critical to me. So The Dial is a drama where I want you to actually have a certain level of empathy for the characters, and moving your body through space and choosing the path through the story, one that's not simply binary, not just left, right, one, two, in the choose-your-own-adventure style, was critical for me. And I found that it activates a lot of things in a participant. And we had a little conversation before the panel started about this terminology around what is this person who is taking part in the experience you've created. And viewer doesn't seem to do it justice, user is too cold, player is wrong for many of us because we're not gamifying so much, and really the right word is participant. It happens to be too many letters, but it's the right word, I think. And the reason it's right is because it describes all the levels of engagement you need to have in order to have the experience in AR that we all are looking for, which is emotional engagement, physical engagement, possibly verbal engagement, and many others that you would have in normal behavior in life. And so for me, with storytelling in AR, it's incumbent upon you as a creator to incorporate a physical experience for your participant into the storytelling. If your body moves through space and it doesn't do anything to the characters or to the narrative, I think you may have missed a really big opportunity, because suddenly the question of why am I here remains unanswered. And I think some of the ways that we tried to answer that question in VR were necessitated by the fact that it was a wholly constructed reality. So POV was one of the most common ways to figure that out; a third-person camera was not so acceptable.
However, when you're in a world with other people that you can see optically with your own eyes, we are having that experience because we are a bunch of cameras sitting on top of bodies moving through space. That's the human experience. So suddenly, that physical experience can be connected to a third-party experience, if you will, where you're not necessarily incorporated into the drama. But if I look at filmmaking as an example, the way that we empathize with characters is by empathizing with their point of view, and essentially taking that on. And the way we watch it is like this. That's the opposite of how we experience AR. And I think the way that we need to empathize with stories is therefore different as well. And for me, once you get that sense of kinetic motion in the body, you open up all the avenues of perception and feeling that your body contains. So suddenly, your neuromuscular system is charged. You have a greater sense of your body awareness, which gives you a greater sense of your connection to all sorts of things: your emotions, your feelings, your fear, your amygdala, all your deep fight-or-flight impulses, your love impulses, your sexuality. All of those things are enlivened suddenly, because you start moving your body around. And it doesn't require third-party empathy like a film does. In film, we immerse ourselves in a character, essentially taking on their emotional container through our optic nerve. But in this case, we're trying to take on a lot of that emotional life and those story qualities in a different way. It's through participating. And for me, then, it also is incumbent upon us to try to figure out what is the social component of that story. So in a movie theater, you have the awareness that you're sitting next to whoever's next to you, whether that's someone you came with, or a stranger, or an empty seat, or someone down the row. You're aware of where those people are.
And you can take a lot of learning from the various theories around theater in particular, like Peggy Phelan's campfire idea of what is the point of the gathering? Is it necessary? Is a movie different than going to the theater with people? How do you have that interpersonal experience? And I think with AR, for me, experiencing a story with other people and noticing their body language, hearing their small reactions, possibly even cuing other participants to something that's going on that they're not noticing, this is what makes me optimistic about the world of stories in AR, is that it is actually reengaging us with other people and with ourselves in a new way. So we're not glassy-eyed. We're not stuck to screens at a close range. We are using this digital technology to engage, but we're using it to engage with other humans in a way that feels more kinesthetically and emotionally connected to our real IRL experiences as we know them now, which I think most of us can say we want to try to save, maintain, protect, because they need protection, because these technologies have that amount of power.

[00:50:12.748] Ryan Horrigan: One thing I really like about The Dial, and I think it's a strength of AR and VR, is that you can contextualize the role of the viewer. If you're watching cinema or theater, or you're listening to radio, there's this abstraction. You're not part of the story. You have no skin in the game, right? You're just this person who sits outside of that story. You have no connection to it except for the emotions that you're feeling. But in The Dial, you move around the table, and you control time with your movement. And you're basically the god or the author of time for this story. And thus, you're the god of these characters, in a way, and their existence. And I think that even something that's more poetic or abstract like that is really beautiful. And what I found in VR is similar. In VR, it seems like you'll see more first-person content. You do see some third-person content, like tabletop stories, but in AR I think you'll see a bit more third-person. And I think it kind of frees you a little bit to be able to do third-person, because first-person is very hard. When you have to contextualize yourself as a character, you then have to have the rules of being in that scene and the laws of that world. So one thing that I know was a big challenge when I was working in 360 video, before working in real-time content, is that in 360 video, you're there in the scene. You could be conceptualized as a character, but you can't move around, you can't talk to people. You're a ghost. So often we'd be creating these narrative conceits like, I'm in someone's memory, so then it justifies why I'm not expected to talk or move, right? Or anything that was like a hallucination, or anything that was not baseline reality, works really well in 360 video if you're going to try to create context for yourself. But the minute it's in a game engine and you can walk around and move, then you have more freedom.
And then I think the thing that I was inspired with 3rdi was that you could walk around with controllers, you could pick things up, you could do things, but you still couldn't communicate like a human to other characters and be understood as a human. And I think a lot of that is verbal, a lot of that's body language and facial expression, and now we're starting to see those tools and those sorts of things creep into the immersive space, and it's really exciting.

[00:52:04.832] Ted Schilowitz: I think, Peter, what you're talking about is interesting, because you're talking about what actually makes up the tenets of virtual reality. So there's this perception, which is actually incorrect, that virtual reality is about putting a screen in front of your face. Virtual reality is about fooling reality. It's about taking all of the things that your body, your physical body, your limbic system, relates to as what we think is real. If I touch Ryan, he's there. If I see you all, I have a spatial sense; I could get up and walk to Kent. That's all virtual reality, right, if you can create it artificially. It's a magic trick. It's the next magic trick. And for 100-plus years, cinema has been some degree of trying to fool you, trying to make you believe that that thing on the screen is actually real. Even though you know it's an actor, and you've seen him in six other parts, he's playing this part now, and part of your body wants to believe he is that character, so therefore you believe it. The level of sophistication we can achieve now with these tools is so insanely powerful that we can create this thing we call virtual reality, and it's artificial, which is part of the power of it. And when you were sort of talking about the kind of journey, right, how do you get to all these places? I guess from my standpoint, it started very early in life with this understanding of why a lot of us refer to this form of entertainment as spatial entertainment. Because you can move around inside the vehicle, inside the entertainment. It's not just a fixed screen that you are asked to sit in a place and put your eyes on to watch. You're actually asked to move around inside it and learn about what the entertainment is with your physical body. So it goes, for me, all the way back to the first act of my life, because I grew up in central Florida. And I grew up inside the world of theme parks, the biggest theme park in the world.
So I had this very early understanding of what it means to move around inside entertainment. It's not restricted to a flat screen. You're going to go into something called a ride vehicle, and they're going to physically move your body around. And then that, over time, became simulators, where you're going to physically feel something. Now we can take all that theme park stuff and blow it out to almost any environment you want because of these head-mounted devices. So in the third or fourth act of my study of this spatial kind of stuff, you just understand that a lot of these things go back to the tenets of theme park entertainment: breaking out of that rectangular, limited space into a space where you physically move your body, and it starts to feel real to you. That's what's interesting to me.

[00:54:49.340] Alice Wroe: And I think that's so cool. And I totally get it, and that's actually helped me think about it. And entertainment is exciting. But imagine you could take the things that you've just said about entertainment and then think about, like, education. bell hooks has this great way of describing it. She talks about radical education. And she says it's so much more than just memorizing something and regurgitating it. That could be called schooling. It could be called education. But radical education is where it's at. And that's where you start to think in bigger, more flexible ways. You are educated to have an opinion for yourself, and to be able to be critical, and to navigate the world with your own sense of who you are and what you want the world to be. And if you can take those exciting moments from roller coasters, from entertainment, which is really cool and brilliant and will make us all sparkle with joy, and then put that in education, it would be exceptional. And I think that's what we're on the cusp of being able to do.

[00:55:41.759] Ted Schilowitz: The fact that you're all in a space now and can reflect back on this physical space means you will remember this more than if you watched a video version of this. But imagine, today and continuing into the future, this kind of spatial thing where you don't have to physically come to this space to get the spatial version. That's what Micah does. It's spatial; the character is way more than just a projection. You can move around it. It feels real to you. Her, she. It... her.

[00:56:12.634] Kent Bye: What's Micah's preferred gender pronoun? At the moment, she's definitely a female, right? I think she's a female. So, I wanted to wrap up and then open up to questions, but I wanted to focus on conversational interfaces and AI, because I think one of the big themes I'm seeing, both in VR and AR, is that you're moving away from the abstractions of looking at screens or pushing buttons, and we're trying to move into more natural, intuitive movements for how we express our agency within a scene. And I think one big way that we express agency with each other is that we can speak to each other. And we can use our bodies to do body language as well and subtly communicate. And I think that's one of the powerful things about Micah: you're starting to bring the body language component into these virtual human characters that we have. And so I'm just curious if each of you could talk a bit about how you're looking at how conversational interfaces or embodied computing are changing the context of what it means to do storytelling within augmented reality.

[00:57:08.167] Alice Wroe: OK, say the question again.

[00:57:09.247] Kent Bye: So one of the experiences here is A Jester's Tale. You're actually speaking, and it's asking you questions. And so it's trying to drop you into this conversational interface. But in order to do that conversational interface, you have to have some degree of AI. It's just the idea that part of the mechanism of storytelling in AR is going to be these natural conversational interfaces, AI, but also just these natural interactions.

[00:57:33.172] Alice Wroe: So, okay, thank you for that, that was helpful. With Micah, in this experience, what we're not going to do with her personality is tell you exactly who she is, what she's gonna do, all neatly and give it to you there. You will only be able to understand who Micah is, her ethos, what she's about and what she'll do in the future through collaboration and through doing these experiences, through building up a relationship with her in the experiences. And that is through gesture. This experience is silent, Micah doesn't have a voice yet. But I hope that people that do the experience will see that she really, really believes in the power of voices. She wants you to all lend her your voice. And she wants you to go and share stories that have been buried by the patriarchy and buried throughout history. And she wants you to share them with other people here at Sundance about women filmmakers. And so Micah can't speak yet, but she wants to use your voice, and she wants you to feel empowered to use your voice in exciting, socially just ways. And so I think through this experience, we're able to, through gesture, through that deep emotional connection I keep mentioning, to build up a sense of who she is, but more importantly, to build up a sense of who you are when you're with her. And I think that that is the most important thing about this dynamic between you and Micah, is that you feel more present in your life, hopefully in time, and that you feel empowered to create change beyond that room. And that's why I hope it sparkles into physical true actions that go beyond the demo room.

[00:58:58.670] Peter Flaherty: Yeah, I mean, one way I think about AR storytelling, since that's the frame we're sort of approaching it from, is to think about the actual bigger frame around what AR is. Which is to say, and this is an overly black-and-white statement and intentionally contentious, AR is really designed to be a utility and a toolkit. It's not actually optimally a storytelling tool. The reason I say that is because I think it's optimally a lot of other things first. And I think they're more intuitive as things you would do with AR. And they're more useful, frankly, and they're more important. So education, remote surgery, training, I mean, the list goes on and on. Heads-up displays. Stop looking at your phone and the map and your text messages while you're driving. Don't kill yourself. I mean, there are a lot of ways you can apply digital layers, which I think all count as part of the AR sphere. And then when we arrive at entertainment, I feel like we have to take all of the positive and negative baggage that comes with that understanding and apply it. The positive baggage, which is what we maybe want to focus on, is that inherently we need to be able to interact with the system for all of those utilities and tools to work. So the advantage we get as storytellers is that it is incumbent upon us to use those input methodologies. So suddenly moving your body and using your voice and some understanding of basic body language are really important. But when you look at those, let's just take those three things, moving your body as a form of locomotion, moving your body as a form of body language and communication, and then verbal communication, those are hierarchically organized from least complex to most complex. So talking to a machine is obviously, in an AI context, the most complex operation of the three. We don't even really have a capability for understanding body language yet. We can track your position.
And we are beginning to be able to respond with an artificial intelligence to some of your language. But there are always going to be big mistakes. So the point where it gets conversant is where it actually moves into the realm of story, because I would argue that for story to really fully evolve in language, it needs to be a conversant technology. For me to be able to get my map to take me to the right place, it doesn't actually need to be a conversant technology. It needs to be a technology that can understand the bare bones of what I'm saying with a very small vocabulary. Similarly, with body language, that's a very small vocabulary. How many things can we really articulate with body language? So for me, it's sort of starting with some of that low-hanging fruit, and then as AI emerges, again, finding these points of leverage where we can maximize what it's really capable of, acknowledge what it's not capable of, and somehow draw the boundaries of the sandbox around what we can actually work with in storytelling.

[01:02:04.528] Ryan Horrigan: I completely agree. And we've been working in voice for almost the last six months or more, as well as body language and sort of object recognition and facial tracking. If you think about the expressions on your face, there might be like seven primary emotions that are standard: happiness, sadness, boredom, excitement, and a few others. And then maybe there's variation, so there might be 30 emotions on your face that we might need to be able to understand. So there's a finite, small pool, and that's not incredibly difficult to do already today with today's computer vision technology. And body language, I think, can in some ways have more simplicity but also sometimes more nuance, because it can be contextualized with other things and then mean different things. So it's sort of about the context. But voice is really challenging, and I think what we found for voice is there are sort of two approaches. One approach is to allow an AI system to be kind of completely unsupervised and autonomous and scrape the internet for all possible things. And then all of a sudden you have a character who feels more like a robot and a generalist, who doesn't have a personality and isn't confined to their own humanity. And what I like about Mica, and sort of what we're trying to do, is that we're trying to create human-like AIs that have limitations, that aren't infinite, that aren't able to speak about anything. And truth be told, that's not even available to us today. There's no artificial general intelligence. Google hasn't even made it yet, and it's going to be quite a while before you can have a true kind of sentient being. What I like to say is, think about it like the inverse, or at least what we're doing is the inverse, of an Alexa or a Google Home. Most people who have one speak to them less than once or twice a day, and the session times are really short.
And the reason for that is a couple of things. One is they're very wide in their scope. You can ask them anything, but they're very shallow in how deep they can go. They need to be able to basically do an internet search and set a timer and play your playlist. But if you want to say, hey, you know, talk to me about my day, you can't do that. And also you, the user, drive all the engagement. You tell the story. And oftentimes you don't know what story to tell. But if you allow the author of the content, or the character, to tell the story and lead your engagement, and you, the user, or the participant, rather, follow, and it's a much more narrow but highly contextual experience, it can be much deeper. But it can only do maybe a handful of things. So for example, say you're working with a movie character, like a Marvel character, and it's Iron Man. Iron Man really only needs to know about what happens to Iron Man in his movies, or maybe the comic books, and then whatever conversation the creator decided that this experience is about. So maybe Iron Man wants to talk about video games, so he's going to open the conversation and say, hey, what's your favorite video game? And then he's going to lead me, and we're going to go different ways, and there are going to be branches, and all kinds of things happening, and all kinds of possible outcomes. But ultimately, they're going to be finite, maybe largely finite, but not infinite. And I think if you look at some of the art that AI has created when it's completely unsupervised, it's abstract, it's meaningless, it's random, it's weird. And I think it's going to stay that way for quite a long time. So I do think that a human will be and needs to be in the loop when creating character-based AIs, and ultimately supervised learning is important.
But we'll slowly see that, like, okay, if we're here today, and this end is sort of filmmaking with a sole human author, and this end is completely autonomous, unsupervised learning, we're maybe able to get 10% of the way, but the sweet spot might be right in the middle. Maybe we don't want to be all the way to the right, where we just have AIs creating stories, because I think they're going to lack context, and when you have all the world's information, how do you reconcile that into something that's specific and meaningful? So we've been taking an approach of trying to create guardrails or parameters, or almost like a sandbox, where this character has certain qualities, it's specific, and he or she or it is here to talk about this one thing. And that is achievable today, and that's really interesting, I think.
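The author-led, finite conversation described here can be sketched as a small branching dialogue tree: the character opens and leads each turn, the participant follows, and anything outside the sandbox falls back to a redirect node. This is a minimal illustrative sketch, not Artie's actual system; all node names, keywords, and lines of dialogue are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    prompt: str                                    # what the character says to lead this turn
    branches: dict = field(default_factory=dict)   # keyword in the reply -> next node id
    fallback: Optional[str] = None                 # where to go when the reply isn't understood

# A tiny "talk about video games" conversation with guardrails:
# the character only knows about this one topic.
DIALOGUE = {
    "open": Node("Hey, what's your favorite video game?",
                 {"shooter": "shooter", "puzzle": "puzzle"}, fallback="redirect"),
    "shooter": Node("Fast reflexes! Do you play solo or with friends?"),
    "puzzle": Node("Nice, I like a good brain-teaser too."),
    "redirect": Node("I mostly know about games. Tell me about one you've played?",
                     {"shooter": "shooter", "puzzle": "puzzle"}),
}

def step(node_id: str, user_utterance: str) -> str:
    """Advance the finite conversation: match a keyword, else use the fallback."""
    node = DIALOGUE[node_id]
    text = user_utterance.lower()
    for keyword, nxt in node.branches.items():
        if keyword in text:
            return nxt
    # Stay inside the sandbox when nothing matches.
    return node.fallback or node_id
```

The point of the structure is that every path is bounded and authored: the character's scope is narrow but deep, and off-topic input gets steered back rather than answered from an open-ended model.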

[01:05:59.620] Ted Schilowitz: To your question about voice, it's probably the most natural, most efficient input method, right? We all use voice all day long. We communicate with each other with voice. We modified our voice dynamics, because compute couldn't do voice, into this thing called typing, right? So we would use our fingers to input as fast as we could, and we got pretty good at input via that, and computers could do that. But now we're at an age where it's actually very viable for computers to use voice as an input method. So it's become quite ubiquitous. I use voice on my smartphone constantly. I almost never type on my smartphone anymore, because it's just way more efficient, just faster and more functional, which I kind of extrapolate to getting it up here, when I don't think I'll ever type anymore once it's up here. I'll just use my voice and my eyes to move things around. So I think we're at this interesting age of, like, the Alexa-izing of an entire population. And it's become sort of a word like Kleenex now. It's way more than just what the device does. It's a definition of the fact that we can use our voice, and the binary compute systems are fast enough today that they can actually drive relevance out of using our voice. It's really powerful. It's going to lock into the next level of this spatial visual compute system, because you need an input system that's natural. And this is no longer really that natural anymore. It's actually very slow and very clunky, whereas look at the amount of material we can pump out with our voice, as long as the computer can actually understand it and give us the relevant info back. Typing is pretty direct: you put it in, it understands it, it sends it back. Voice has to do this translation thing, but it's pretty damn accurate these days for most things you ask it.
A lot of people now, like myself and a lot of the kids I watch, are constantly dictating into their phones and expecting that voice translation to be accurate. And I would say it's like 90-ish percent accurate if you have a very mainstream United States accent. But you have some challenges, because it won't do as well otherwise. It's a very Westernized thing, what the voice recognition has been built to do. And I watch people from Australia and the UK, and they're like, I can't use that, it doesn't work. Scottish? Forget it, it doesn't work. But they're getting better at it. But it's definitely come from the US. It's come from that Silicon Valley sort of build-out, and it's skewed toward that kind of voice. So it works great for me, but for some of my friends, not so much.

[01:08:33.580] Kent Bye: Cool. So I'm going to open it up for questions. We have about 20 minutes, and we have a question here. And I invite the panelists to answer if you feel really compelled; we don't have to have everybody answer everything, we can just kind of do it popcorn style. So here you go.

[01:08:47.550] Questioner 1: Hi, I'm an anthropologist from Las Vegas at UNLV, so this is all kind of new to me, but I'm working on a project that is using AR to help bridge our anthropological research to the public through schools and museums, and also to help, you know, represent underrepresented social groups that we have throughout the world. And the reason why I mention that is, when I teach my students about anthropology, I always tell them that the best way they can learn about other cultures and people is to go over there and immerse themselves with them, you know, physically. Each of you has been touching on this, and this is just a really great panel. So I'm curious, how close are we to really making almost like a one-to-one immersive experience through AR to help people empathize and learn about other cultures, and go away feeling like, this isn't the other? They're not different from me. They're not the other anymore. I can relate to and empathize with them.

[01:09:50.677] Ted Schilowitz: It totally exists today. You can do it absolutely today. You can do it in Magic Leap; they have collaboration. You can do it in almost every VR system; there are collaborations. One of the things we're trying to solve for is that you've got to put this thing on your face, right? So you lose the face. So there's a lot of work and study around how we create the photorealistic or semi-photorealistic avatar that you can essentially puppet from behind the scenes, one that will be fairly accurate to the kinds of nuances of what your face does if you're going to have to wear something that covers it up, to make that human connection like we're doing right now. I see your face, you see my face. If we were doing this in an AR system or a VR system, which we can totally do, and I've done it many, many times, I'd have a thing on my face, right? So it's kind of the trade-off of, do I want it to be in that flat-screen world where it's just taking a camera version of me? And like the Facebook Portal thing is interesting. It hasn't really caught on because they didn't really communicate well what that device is actually doing. It's different than video chat. The camera is actually tracking you and paying attention to where you are in the world automatically, so it's much more natural. They just haven't communicated it well. So people are like, well, I already got video chat. I'm like, yeah, but with that Facebook video chat, when the grandparents want to watch the kids run around, the kids don't have to hold the phone up. It's just going to find them. So it's more natural. So it's definitely a real thing. It's definitely happening.

[01:11:11.271] Kent Bye: And I would just add that there's a big thing you get from actually being in the full context. So it's much more limited than actually being immersed there. It's still got a long way to go, I would say.

[01:11:21.347] Questioner 2: Hi, hello, I'm an anthropologist too. I'm actually a storytelling anthropologist, that's my specialization, and I teach at a university in Italy. And what I wanted to ask is this: historically, every artistic language has had to face the problem of teaching its audience the grammar of the language they are experiencing. Like cinema: in the beginning it was just an imitation of theater. And then they understood the grammar, and they could break the rules, so they could challenge the audience. I think that the moment you can break the rules of the language and engage the audience, there comes the art, and the education, too. Because usually, for example, in storytelling, when a character has flaws, those flaws make them human. So when you can use a language and break its rules while still being understood, you have a fully formed language. So what I'm trying to ask is, where are we? Are you trying to build the rules, or are you already breaking them?

[01:12:30.368] Ryan Horrigan: In VR and AR, I think the rules have been broken or have been getting broken for a couple or a few years already. I think the first couple of years, I would maybe go back to 2014, 15. Everyone was treating it like cinema, especially when it was mostly 360 video. And I think the rules started to get broken a little bit in 16, but really not until 17 and 18. And now I think it's fully broken and a lot's changed.

[01:12:54.406] Alice Wroe: I'll tell you something that's not broken. One of the biggest problems I have when we're creating Mica is that we've got this new technology that is fresh and interesting, and there are collaborative opportunities there which are so empowering in the way we approach it. And I feel like sometimes developers and also users are lugging with them all of their old experiences, and they go and sling them onto Mica, and they project those experiences onto the way they relate to her. The amount of times that people come up to me and they're like, oh, you're making Magic Leap's digital assistant. It's like, no. Because she is a woman, people relate to her as a digital assistant. And that is because our histories, sadly, have made us relate to women in that way. The voices in elevators are always women. The talking clock, when you ring it up, is always a woman. And it's because, thanks to the patriarchy, it's easier to reduce women's voices to the information they convey. And so I hope when we walk into this new era of technology, we can take a pause and think, okay, what do we want to create? What do we want to leave behind? And that's really exciting. So I don't know if the rules have been broken or not, but we should really think about what the rules are, because we've got an opportunity to create new ones at this point.

[01:14:15.208] Kent Bye: And I would say that there are best practices in terms of how you don't make people sick, and emerging ways of telling stories. I think there's a language that's emerging and forming, and a lot of the people I talk to are resistant to putting too strict a set of rules in place because it's still so early and new, but I do think they are forming. And the way I think about it is that there are four pegs to what's happening right now. There's the technological innovation that is creating new possibilities. Then the creators come in and push the limits of what the technology can do. Then the audience has to learn how to watch that content. And then there are the distribution platforms that create a larger economic context under which people can survive as artists. We're kind of still iterating. The technology is, in the VR space, settling down pretty well. AR, I think, is just getting started, to where VR was maybe four or five years ago. And so there's going to be a lot of that iteration in terms of the new technology that's being made available with things like Magic Leap, the next iteration of HoloLens, and the new SDKs, which are going to be adding more and more capability. But as creators, it's your role and responsibility to see what the limits of those SDKs and that technology are, and then to tell stories to the limits that you can. And then there's another question as to how much an audience can receive that. And that's why playtesting is so important: you just have to iterate and create something. And you have to remember that as you immerse yourself in the technology, you are cultivating those skills, and then you forget what it's like to be a first-time user, which means that you have to constantly playtest it and show it to people all the time, just to make sure that you're onboarding people to the place where they can actually start to learn the language of some of these experiences.
So that's at least how I think about it.

[01:15:54.363] Peter Flaherty: I also think that in a lot of ways, the short answer to your question is that we're very early in the process, and that writing rules now is almost premature. We started out really early with some rules around what we could do in the nascent form of VR 360, right? And we came up with ideas of, could you move the camera? Couldn't you move the camera? Could you edit the video? Couldn't you edit the video? And the fact of the matter is, with those types of rules, what we're really doing is trying to engage in learnings. We're trying to understand something, and the rule is kind of the byproduct of that learning. And typically we're wrong, you know. But the reason we're wrong is not apparent until we learn some more and build some more. And so the way I see the tale of both of these technologies, VR and AR, intertwining as they move forward, is that we're in a period where the epoch of learning is very small. It will increasingly get larger and larger as we move forward, so we'll learn exponentially faster in smaller periods of time. And then slowly but surely, it will start to gel. And you can look at cinema, particularly regarding narrative, as a way to understand that. We codified a world where, for example, reverse shots and a 180-degree line are part of our understanding of what the contextual positioning of images and sound, side by side by side, needs in order to create meaning. And we think of that as a rule. I would argue that on another planet, in a parallel time frame, we could have created rules where there wasn't a 180-degree line, and there was no such thing as a reverse shot, and we could have created stories that our brains would have understood that way. In a lot of ways, one of the things I'm so excited about in being in this field, or this melting pot, at this point in time is that we're writing the rules.
And it's about instinct and intuition and how we feel like we want to interface with the world and understanding at large. So what Alice is saying about the understanding of gender, that's a huge topic. And we're going to write that rule. There's no question what she's saying is absolutely correct and incumbent upon us to figure out. We're going to write those rules around so many things. It's going to be so important that thoughtful people are at the helm making some of those choices. Because if the people who are making some of those choices and codifying the rules or the learnings are not so thoughtful, if they are more driven by profit or first-person shooter video games, it's going to have a really detrimental societal impact. And it's really incumbent upon us to make good choices in that regard. And I take that with a degree of honor. It's like something we have to fight for, because sometimes the good choice is going to be the hard choice. But it's something that we have to do.

[01:18:43.013] Ryan Horrigan: I just wanted to say one thing. I think we have this safe space right now that keeps the bad actors out, because we're not at this crescendo of a business model. Put it this way: people aren't moving into this space looking at dollar signs quite yet. So I think a lot of the people who have bad intentions haven't arrived, and mostly we're here with artists and people from the education space, and that's a good thing. A lot of people say to themselves, why isn't this happening quicker? Why aren't we seeing quicker adoption? And maybe because we're not supposed to be. Maybe we need this time to get things right.

[01:19:16.273] Questioner 3: Thanks, Kent. Hi. I just wanted to add to that. We might not have bad actors yet, but we do have bad data sets. The fact that Siri finds my voice difficult to understand, and I'm a white cis woman. The fact that the computer vision won't recognize purdah, that I can't actually look at another human being in the face because it's not my religious practice. So I think that we may not have bad actors, but we have to be so intentional about being inclusive with our data sets, so that it's not just the Silicon Valley boys putting themselves into them.

[01:19:47.292] Ryan Horrigan: Agreed.

[01:19:50.433] Questioner 4: Is there a question back here? Oh, sure. Yeah. So we actually create a lot of social impact content. We work with the UN and the ACLU and places like that, and we're developing VR content around these issues. My question for you is: what specifically are you doing, if anything, any projects in terms of very specific humanitarian causes or organizations that you may be partnering with? Because we're kind of living in a polemic world, a re-envisioning of 1968. Maybe each of you, if you have anything specific, could talk about what you're doing, and not just for awareness, not just, oh, people need to know, because I don't think that's that important. I think people need to know and then do something, like Venezuela, the Women's March, Black Lives Matter.

[01:20:35.170] Ted Schilowitz: Just from my perspective, almost all of the pro bono work that I do related to VR/AR relates to the medical field in some fashion. I work with a number of hospitals and universities, and we touch a lot on therapeutic value, training value, removing distress, pre-op calming, using these visual tools to try and take the opioid level down and the visual opioid up. And that's a pretty broad answer, but I'm doing some stuff with hospitals in Santa Monica that are tied to a big chain, and with UCLA. And then, broader, just education in general. I like to work with universities. In fact, Megan, with the strange accent there that Siri doesn't recognize, is from the University of Nebraska, and I'm helping her a lot on envisioning this new school that she's in charge of: building a whole new school around emerging media arts. Not a cinema school, not a TV school, a next-gen media school. So we spent a lot of time thinking about what that would look like. What did we learn from all of these cycles that I get a chance to plug in because of the job that I have, and how can I feed that back into a feeder pool of kids who are going to be the ones actually making and consuming this kind of media, and who will be leaving a lot of traditional television and film behind? That's coming. Their level of how they need their version of entertainment, the interactivity, the kinds of devices they're going to use, the kinds of experiences they're going to have, are very different than this audience's. So I like to spend time in the education world and in medical. That's what I try to bring back to the equation.

[01:22:16.305] Peter Flaherty: I mean, I kind of made a deal with myself three years ago or something that I would try my best, for every narrative project I did, to do something that addressed some of those issues. And I sort of came to that because I did a VR project that had to do with psychiatric disorders, specifically ones brought on by digital technology, but more broadly just psychiatric disorders and how maybe digital technology could be a positive force in that regard, as opposed to a negative force. So the story was about the negative implications, but in researching it, I tried to find ways that it could apply to positive rehabilitation. So in some ways, similar to what Ted's doing, I've got a partnership with someone who's dealing with PTSD in veterans and homeless youth in Los Angeles. And so we're launching a pilot program of an AR piece. I think we're going to target the youth population first, something where the technology could be used as a kind of grappling hook to really compel them to engage with a variety of different methodologies for treatment, ranging from mindfulness education to other types of approaches to calming, neural rewiring, and things like that. I'm trying to figure out how to make that a kind of one-to-one situation, where for every piece that's about entertainment or about some sort of learning that feels compelling to me as an artist to make, I'm trying to make something that feels useful as well, in a bigger-picture, more practical, utilitarian way.

[01:23:42.763] Ryan Horrigan: I just wanted to throw a shout-out to my former job, Felix and Paul Studios. We have a piece here called Traveling While Black that's directed by the brilliant directors Aisha and Roger. It's a piece about some of the harrowing stories of African Americans who were traveling, both during the civil rights movement, and also stories from the Black Lives Matter movement. And it's playing next door, so if you have a chance, check it out. But in that job, we spent a lot of time making virtual reality documentaries. We made one about climate change with President Obama at Yosemite National Park. And I think there's a lot of great content being made in the nonfiction space in VR and AR, and it's exciting. And in our new venture, we're working on one thing in particular that's going to question and hold a mirror up to the masses on social media, hold a mirror up to their own bias, in a really interesting way. And you'll hear more about that later this year, hopefully.

[01:24:32.628] Alice Wroe: I feel like I'm letting Magic Leap down, because I'm so in my Mica DNA bubble that I don't know what the organization as a whole does. So I'm going to do some research into it, because I hope I would have a great answer to give you. But I do take what you said, and I like your question. And I think it's good that you made every single one of us answer, because we should be accountable for this stuff. I do disagree with you slightly when you said it's more than awareness, like awareness is this flippant thing. I really believe in the connection between the two. I think about role models and being able to see yourself. And I think it is important that we share stories and that we educate each other, and that shouldn't be dismissed as not as good as the practical, on-the-ground stuff. It's definitely a relationship between the two, is what I think. But I'm going to do research into the Magic Leap thing, because I think you're right.

[01:25:19.701] Kent Bye: You're welcome. And for my part, what I do at the Voices of VR podcast is I feel like I'm both trying to capture an oral history of the evolution of these mediums of virtual and augmented reality, and also trying to shine a spotlight on what I see as the really cutting-edge experiences that are trying to do some of these things. And I highly recommend Traveling While Black. I just had an amazing interview with Roger Ross Williams. And what is happening in Traveling While Black is that he's literally creating a healing ritual of truth and reconciliation, a context where people are able to share their lived experiences at a level of truth that you may not otherwise be invited into. And I think that by creating these healing rituals, we can start to use virtual reality as a healing modality. And in a couple of interviews that I did with Monika Bielskyte, she's a digital nomad who travels all over the world, and she's attuned to what's happening around the world, she was really emphasizing how important it is not to just have a Western-centric perspective on these spatial computing mediums, and that a lot of the big innovation is likely going to be coming from non-Western cultures. They have these traditions of rituals and community practices that I think are going to translate very well into spatial computing. And so she really brought my attention to these decolonization movements that are happening all over the world, trying to look at the impact of imperialism and colonialism and capitalism, and to speak truth to power in terms of trying to reclaim your own culture. And for my part, I've been trying to go to philosophy conferences, and I did 27 interviews talking to some of the people who are studying the philosophical implications of decolonization.
And I think there's going to be a part of VR and AR as a medium that takes these philosophical concepts and gives people an embodied experience of these ideas, so that they then have a concept to bring about these larger paradigm shifts. And so I really see that VR and AR are at the cusp of these philosophical transformations into being more embodied and more immersed, and that it's going to ripple down into every dimension of society. And so I think that for everybody in this room, as creators, it's going to be our job to figure out how to tell those stories and how to give people those embodied experiences that then become metaphors that allow them a context to understand the full breadth of the human experience. And with that, I think that's about it for our panel. And I just wanted to thank you for joining us. Thank you. So that was a panel discussion that I moderated at the Sundance New Frontier. It was titled The Second Coming of AR. I would have maybe called it Storytelling in AR, or something about conversational interfaces and virtual beings and interactive storytelling. But it featured Alice Wroe, she's the creative director of Mica at Magic Leap; Peter Flaherty, he's the director of The Dial, which was being featured at Sundance New Frontier; Ryan Horrigan, he's the co-founder and CEO of Artie; as well as Ted Schilowitz, he's a futurist at Paramount. So I have a number of different takeaways from this discussion. First of all, I think the aspects of spatial storytelling and moving your body through space are one of the key aspects of augmented reality and what makes it so special. And one of the things that Peter was talking about is, what do we call these people that are going through this? And it's really participants, because, the way he phrased it, it's engaging all these different layers of our engagement, whether that's our emotional engagement, our physical engagement, or our verbal engagement.
For me, I see that there are these aspects of emotional presence; our physical presence with our embodiment, as well as environmental presence; our mental and social presence, the way that we're communicating and sharing space with other people; as well as this active presence, where we're actually participating and behaving and able to express our agency within an experience in some way. And so I do see that all these different dimensions are coming together within this spatial storytelling. And what does it mean to actually move your body through space? One of the things that Peter was saying is that it's actually activating all these different aspects of your body, where you're stimulating your neuromuscular system and your body awareness, all your emotions are engaged, and your amygdala and fight-or-flight responses are activated. All these aspects of your lived presence are being activated while you're moving your body through space. If you're sitting still and passively receiving something, you have a very different experience within your body. One of the advantages of augmented reality, immersive theater, and virtual reality spatial storytelling is that it's inviting you to actually physically move your body through that space. And I think AR in particular, way more than VR, is able to do that, because you're able to draw on your whole life experience of navigating physical spaces without running into things. In virtual environments, you still have, at this point, all these levels of abstraction just to locomote your virtual body through the space, and I think that's significantly different than actually physically moving your body through space.
And so one of the things that Ted Schilowitz imagines in the future of something like Sundance is that there may be these big, wide-open spaces where the audience members bring their own devices, and then maybe there's a way to transmit the experiences directly to their headsets, maybe with 5G. Being able to seamlessly connect the content to these headsets within these open spaces would potentially help solve some of these throughput issues, since pretty much everybody now has their own smartphone. But when you go to the Sundance New Frontier, it's a very small, limited space. There are only anywhere from 50 to 100 people who could experience it. And then once you get inside, only a subset of those people are able to see all the various different experiences. And so it's very resource-constrained in terms of the physical spaces that are required, which means that only a handful of people can actually see some of these experiences at the Sundance New Frontier. And Ted was saying that he really thinks that augmented reality is just going to be orders of magnitude more popular and successful than VR. I really challenged that as an assumption, because what are we doing on our phones? We're sort of distracted and looking at things. One thing that I like to think about in terms of the difference between AR and VR is that I actually think people are going to be doing virtual reality in the privacy of their own homes, and that augmented reality is perhaps going to be more out in the context of interfacing with other people. And I think there's a little bit of an underestimation of the social dynamic that happened with, say, Google Glass, where you have a camera and you have to ask: how much do you want to be interfacing with other people when they have all these devices on their faces?
They could be looking at and paying attention to something completely different, so how do you know that there's this social contract of eye contact and full attention? I am skeptical of us just completely, wholesale adopting augmented reality technologies where we're in these virtual environments 100% of the time. Maybe that will happen, and that's the path we're on, but for me, I like that context switch: when I'm out in the world, I actually prefer not to use much technology at all. In fact, when I was at Sundance this year, I didn't try to coordinate too much through email. I tried to have these embodied interactions and collisions, and then from there I would schedule things, but I would try not to mediate too much via email. So with that, when you're actually interfacing with other people, what is the degree to which we want to have this augmented reality technology as an interface in between these different interactions? And I think that in the future, when we have social interactions in the privacy of our own homes and we're able to be in virtual reality, then maybe there's going to be that bit of a context switch.
But one of the other points that was made by Ryan was that eventually these devices could be able to black out pixels and do passthrough augmented reality, essentially a device that's able to do a full-on virtual reality experience as well as an augmented reality experience. There are a lot of technical limitations to having an equivalent level of experience right now, but maybe if you get to the point where you're just shooting photons into your eyeballs, then it may become a little bit harder to differentiate the boundaries between AR and VR, and it's all going to be this mixed reality. It was also interesting to hear the differences between first-person and third-person perspective narratives, in that you tend to have a little bit more of the third-person, omniscient camera within these augmented reality stories, just because you have a limited field of view, either within the window that you're looking through on your phone or on these AR headsets. And so you actually have to kind of shrink things down in order to see everything in full view, which means that you're kind of this omniscient, third-person camera, rather than having a first-person perspective, which is a narrative conceit that you see a lot more within these virtual reality experiences. So the difference between first-person and third-person, I think, was an interesting differentiation that came out of this discussion.
And I actually really agree with Peter when he says that AR was really designed to be more of a utility and a toolkit than a storytelling medium, and that there are a lot of things you have to overcome in terms of its pragmatic utility being centered in concrete reality. We're going to have to have a lot more complicated ways of detecting your emotional context, your body language, and your physical context, and then, on top of that, be able to customize and mediate these different experiences depending on where you're at physically in the world, whether you're at home or out in public. And so there's lots of work still to be done in terms of becoming contextually aware. But Peter also said that he's convinced that AR needs to be a conversant medium using these AI-mediated conversational interfaces, and that in order to have that sense of interaction and participation, you actually want to have these interactions with AI characters, which, I think, is one of the things that Ryan Horrigan of Artie is really trying to focus on as well. And the Mica character is trying to explore that dynamic, too.
It was super fascinating to hear Alice's perspective, and I really appreciated all the different dimensions from her background in feminism, bringing these different philosophies of how we are training ourselves as we interface with these technologies, and how the human behaviors that we exhibit when we interface with technologies are a reflection of the ways that we treat women, gender stereotypes, assistants, or whomever we may classify in our minds as second-class citizens who are there to serve our needs. There's a certain amount of dehumanization that could happen there. And so she's really looking at the ways in which technology could help build habits that get us into better relationships, ultimately with other people, using the technology to help train us to get there. And I'm totally on board with that. I also actually do agree with Ted where he's bringing up these different ethical considerations, because there was a certain amount of language that Alice was using around Mica where she was really anthropomorphizing and insisting that it's "her." There was one moment where Alice said that when you're looking at Mica, she's going to be just as present with you as you are with her. And I have a problem with that, in that it's actually a digitally mediated technology. It's not a human being, and so I don't think a digitally mediated technology can be just as present and conscious as I am as I'm looking at it. I do think that Mica is in some sense a representation and projection of the consciousness of her creators. So I do think that there's a certain aspect of Alice's consciousness, her intentions, and her deeper motivations for why Mica is important, where she's embedding her own intentions and deeper consciousness as an expression through a piece of art when you're interfacing with Mica.
But I wouldn't say that Mica within herself is a conscious agent being that I'm interfacing with. I didn't actually have a chance to talk to Alice or to Magic Leap; we had an interview scheduled, and then it got delayed. But there were actually a couple of other pieces at Sundance that were very specifically addressing the dangers of anthropomorphizing AI characters. Asad J. Malik's A Jester's Tale, as well as Dirtscraper, explored these dangers of anthropomorphizing AI. And I have a whole interview where I decompress all the different points that Asad is making with his piece. What does it mean for us to be anthropomorphizing AI? And what are we giving up in terms of our own agency when we're projecting a sense of agency onto these digitally mediated characters? Are they going to be able to manipulate us through the constraints of body language that are being represented through these digital mediations? And what are the benefits and risks of that? I think there are certainly lots of benefits in terms of narrative and storytelling, but also a lot of risks of potential manipulation and control, especially when it comes to the deeper intentions of these entities. Now, when it comes to what Mica specifically is trying to do, I'm totally on board with her not fully revealing what the deeper intention is, but instead asking us questions, both getting us to look at our own patterns of behavior and how we're relating to the world and technology, and also empowering us to facilitate the different conversations that are catalyzed and mediated through the Mica character. But I would differentiate between Mica and her creators and the intentions of those creators: she's a piece of artistic expression, a tool of artistic expression, rather than a conscious being within herself. And I'll be unpacking and covering that a lot more in my conversation with Asad.
And I hope at some point to be able to have a more direct dialogue and conversation with Alice and the other creators of Mica, just to get into a little bit more detail about what they're creating and what their deeper intentions are. So I think we actually covered a lot of great ground in this conversation. Just a few more quick highlights: the connection to theme parks and theme park development and its connection to immersive theater, as well as the importance and the role of context. A lot of times at these Sundance Film Festivals you see these art installations where the creator is able to really tightly control the context. Thinking about the future of AR, it's a lot more about using the technology to both detect and adapt to whatever context is unfolding. And discovering the different ways to both detect and classify this context is, I think, actually one of the hardest problems that AR faces. But once that starts to get cracked, then you're going to start to potentially have these different ways of transforming your own physical context and your own meanings, to be able to change the relationships that you have with those locations and to overlay these different layers of story. And I do think there are going to be lots of ethical implications of that, and that people are going to be very choosy in terms of what types of new spatial memories they want to be constructing within the context of their own home. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr.
Thanks for listening.