“Wizard of Oz” VR experiences use improv actors to drive one or more virtual characters. The technique is common in VR training applications, where it’s cheaper to have a single actor puppet multiple virtual characters than to hire multiple actors to create a sense of social presence. The “interactors” driving the content of the experience can use a set of keyboard commands to trigger pre-rendered gestures and animations, or they can do more sophisticated motion capture and virtual embodiment.
I had a chance to talk with Charlie Hughes, who is the co-director of the Synthetic Reality Laboratory at the University of Central Florida. He was also one of the founders of TeachLivE, a training application that prepares middle school teachers for complicated classroom social dynamics and different types of students.
Artificial intelligence is not yet good enough to fully automate virtual characters in many of these training scenarios, so human surrogates are still used to dynamically respond to the user’s actions through what their virtual characters say and do within the experience. I predict that VR narratives are going to start using a similar human-in-the-loop approach, with improv actors driving live immersive virtual theater experiences. And if the winner of the Real-Time Live competition at SIGGRAPH is any indication, the technology to do this type of live theater with cutting-edge special effects is already here within Unreal Engine. What TeachLivE has been able to do with human surrogates and digital puppetry leaves a lot of breadcrumbs for the future of interactive narratives in the live theater genre.
Demo of the TeachLivE Wizard of Oz system:
Demo of Real-Time Cinematography in Unreal Engine 4, which won the Real-Time Live competition at SIGGRAPH 2016
Donate to the Voices of VR Podcast Patreon
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to the Voices of VR Podcast. So continuing on the theme of storytelling in VR this week, I'm going to be focusing on a technology that is used in a training application, but that I think is going to be applicable to doing live interactions within the context of narrative VR. So, Charlie Hughes is the co-director of the Synthetic Reality Lab at the University of Central Florida, and he's created this system called TeachLivE, which essentially has one improv actor behind the scenes, kind of in a Wizard of Oz style, puppeting five different middle school students. It's a system to teach middle school teachers how to interact with a classroom of students, but that classroom's social presence is created by this one actor in the background running the whole show. We'll be looking at how training applications are using this mixed reality approach of driving a classroom of students with just one actor. And this concept, I think, is going to be pretty relevant for the future of narrative in VR. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsors. Today's episode is brought to you by the Virtual Reality Company. VRC is creating a lot of premier storytelling experiences and exploring this cross-section between art, story, and interactivity. They were responsible for creating the Martian VR experience, which was really the hottest ticket at Sundance, and a really smart balance between narrative and interactive. So if you'd like to watch a premier VR experience, then check out thevrcompany.com. Today's episode is also brought to you by The VR Society, which is a new organization made up of major Hollywood studios. The intention is to do consumer research and content production seminars, as well as give awards to VR professionals.
They're going to be hosting a big conference in the fall in Los Angeles to share ideas, experiences, and challenges with other VR professionals. To get more information, check out thevrsociety.com. So this interview with Charlie happened at the IEEE VR academic conference that was happening in Greenville, South Carolina from March 19th to 23rd. So with that, let's go ahead and dive right in.
[00:02:25.060] Charles Hughes: I'm Charlie Hughes. I am a co-director of a lab called the Synthetic Reality Lab. It's at the University of Central Florida, and I've been in the VR business since 1987. My focus these days is on human surrogates, sometimes virtual characters, sometimes robotic characters, that are standing in for humans. My personal interest is focused on complex environments and complex human-to-human interaction, really, and on trying to help people become better at dealing with these complicated situations, like teaching a middle school classroom, or dealing with somebody who's being aggressive when you're a police officer and your job is to de-escalate the situation. And so what we do is give people the opportunity to practice skills and get reflective feedback, so they can learn from whatever they did and get back in. And the beauty is, those virtual middle school kids or high school kids are very forgiving, unlike real kids. So the mistakes you make don't hurt anybody, including yourself. You can learn from them. And we feel the same is true with police officers and others.
[00:03:40.958] Kent Bye: To me, one of the interesting innovations in this setup is that you're trying to get human interaction, but do it through virtual avatars, and yet you have a human actor who's jumping between multiple virtual characters. So maybe talk about that.
[00:03:54.523] Charles Hughes: Yeah, the approach we've taken keeps a human in the loop on our side as well. So there are humans learning, and there are humans... really, the avatars are the middlemen, the mediators between two humans. And what we have is one human, who we call an interactor, and that person controls all of the virtual characters. The way that occurs is they can control the mood of everybody in the class, for instance, or they can control the mood of individuals. And so that's programmed behavior. But then when you, as the participant, start to address one of the kids in the classroom, they jump in and inhabit that particular kid. And the beauty of how these people are trained is they know everything about the backstory of that child, or that adult in the case of police de-escalation. And there is never a time that they don't stay true to that character and its personality. What we've found is, for middle school kids, there's some really good literature in psychology that says there are two primary dimensions that describe a middle school kid. One is dependence or independence, so that's one of the dimensions, and the other is aggressiveness or passiveness. And so we have a virtual class with kids representing all the combinations, but it includes dependency upon somebody else in the class, dependency on the teacher for approval. There are multiple types of dependency, and there are multiple types of aggressive behavior. Typically, though, we can manage with about five kids to give you all of the personality types. And then we've expanded in our high school to include kids with autism and intellectual disability, because teachers are more and more needing to deal with inclusive classrooms. So come back to the technology. Most people presume, if they talk to five different kids who have different voices and different personalities that never change, and whose answers to questions are always based upon their background and their personality, that there must be a separate person behind each one.
They assume we have five interactors. We don't. We typically have one. The software we've developed could have two or three interactors per person. It doesn't care. It can be one-to-many, many-to-one, or many-to-many. The way we do the control is not using motion capture, because the problem with motion capture is, if I want to put the head of a kid on the desk, then I have lost all of my situational awareness if I have to put my head down on the desk as the interactor. I need to be able to see everything that's going on. Plus, a lot of the behaviors you want to do are physically demanding on the interactor. So it's all gesture-based. Everything is gestures, and the gesture that I might use to control a particular behavior of a kid can be different if Kent is trying to control that. You personalize it to what is best for you in terms of physical and cognitive demand. And then the beauty of that is, everything is based on micro gestures. So what we have is, in essence, a vocabulary of micro poses, and we blend them together to most closely match the pose that you're giving, relative to how you trained it. Then when we transmit that over the internet, all we're transmitting is weights associated with poses. We are not transmitting all of the joint angles or anything associated with that. So it's extremely lightweight, and when it gets to the other end, it is then rendered, so you have no lag appearing at the other end, and no perceptible latency that we contribute to. Latency can exist just because routers along the pathway get congested, and there's not a hell of a lot we can do about that.
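To make the pose-weight idea concrete, here is a minimal sketch of blending a live pose from a vocabulary of micro poses and transmitting only the weights. Everything here is my own assumption for illustration (the function names, the least-squares fit, and the toy pose vectors); the actual TeachLivE matching against interactor-specific training data isn't specified in the interview.

```python
# Hypothetical sketch of micro-pose blending: each micro pose is a vector of
# joint angles, a live input pose is approximated as a weighted blend of
# library poses, and only the small weight vector is sent over the network.
import numpy as np

def blend_weights(pose_library: np.ndarray, live_pose: np.ndarray) -> np.ndarray:
    """Fit nonnegative weights so that weights @ pose_library ~= live_pose."""
    w, *_ = np.linalg.lstsq(pose_library.T, live_pose, rcond=None)
    return np.clip(w, 0.0, None)  # negative blends aren't meaningful

def reconstruct(pose_library: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Receiver side: rebuild the full pose from the transmitted weights."""
    return weights @ pose_library

# Three micro poses over four joints (one row per pose).
library = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 1.0]])
live = np.array([0.5, 0.5, 0.0, 0.0])   # halfway between the first two poses

weights = blend_weights(library, live)   # only 3 numbers cross the network
rebuilt = reconstruct(library, weights)  # full 4-joint pose at the far end
```

The payload is one weight per micro pose rather than an angle per joint, which is what keeps the wire format lightweight: a real rig would have dozens of joints per character but still only a handful of active micro poses at any moment.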
[00:08:06.834] Kent Bye: Yeah, so it sounds like, from the teacher's perspective, they may be in a virtual environment where they see a class full of children, but there may be only four or five that they're interacting with. That one person on the back end is using, perhaps, Razer Hydra controllers to jump around and do these micro gestures to control and voice-act, fully aware of the backstory, doing improv acting while they're jumping from body to body, it sounds like.
[00:08:36.452] Charles Hughes: The word improv is perfect, because everybody that we bring in is great at improv. Now, they're not always great at being consistent, and if they're not, they don't make it through our interview process. So we have callbacks and do all that. The beauty of living in Orlando is there are tons of people in the Universal and Disney community there who are very, very talented. So we can get lots in, and we can find the best of that lot to do the training with. But let me go back to what happens in those classroom settings. You were talking about the controllers, and I'll use that as a jumping-off point. There is a Kinect in there, and it is tracking the teacher participant as they walk around, and we change the virtual camera point of view. So you can walk to the kid in the back row. You can walk to the kid if they're at tables. You can walk and position yourself right in front of them. So it feels like you're in the same space they're in, even though you're not actually in it. They're on a big flat screen in front of you. We have done a study, funded by the Gates Foundation, that showed four 10-minute sessions in this simulated environment is enough to change two to three of the measures of effective teaching that were determined in another Gates-funded study at the University of Michigan. So, for instance, if we're trying to encourage teachers to do higher-order questioning, after four times in there, they will actually be much better at it, and they'll carry it back to the classroom. Because if it doesn't carry back to the classroom,
[00:10:18.952] Kent Bye: It's a waste of time. In looking at this type of training, what does success look like and what does failure look like? What is the ideal result of someone going through this? And if they go through it and they don't demonstrate these things, then what does that look like?
[00:10:32.755] Charles Hughes: Well, if they don't demonstrate it, it means that they're really not reflecting on their performance. So what we do is we have a set of tools that help them see how they performed and look back reflectively on parts that are tagged, either automatically or manually, to indicate parts of their performance. Some people just don't want to change. There's not a lot we can do about that. But that is pretty rare, we've found. The vast majority of teachers are in there because they really do care, and they want to become better at their trade. They start off a little uncomfortable in anything that's a technical environment. Some of them do, not the younger teachers obviously, but some of the older teachers. But about 15 seconds in, they connect with the virtual kids, and they lose track of the fact that they're in technology. And so that's part of our success: since these kids have these deep personalities, social presence occurs quite rapidly, and the physical movement, the physical presence, supports the social presence. And that social presence, to us, is a key to the success of the environment. Now, failure? Yeah, we'll get them. You get somebody who thinks they ought to bring a baseball bat into a classroom and they'll be successful. That works about as well as the policeman who goes in and starts yelling back at somebody who's being aggressive. You're not going to accomplish anything in either of those cases. And we will have people like that, but they're rare, very, very rare. So the measure of success to us is that they bring these skills back to the classroom, and that the skills are persistent.
[00:12:16.719] Kent Bye: And so what are those specific skills? What are the competencies that they're learning from this training?
[00:12:20.606] Charles Hughes: Well, one of them is asking higher-order questions. Another one is learning to pay attention to all the kids in the classroom. Invariably, when we ask somebody after their first session, did you pay equal attention to each of the kids in the classroom, the answer they give is yes. And the true, objective answer is hell no. And in particular, they almost always miss the smartest kid in the classroom, because she sits there with her head slightly down. She is a very independent, passive kid, and she is deep as deep can be. If you get to know her, you find out that she knows more about literature than the people who think they want to be in that field, and yet she wants to be an engineer. And she's just sharp in every which way, and so many teachers miss that, and we want them to learn not to judge people by that outside affect. You need to get to know every kid in your classroom. So that's one measure of success that we have. Another measure of success is, if I ask a question, do I spend a little time waiting for the answer? Too often, teachers will ask a question, and then half a second later they're starting to answer their own question. And once you do that, nobody owns knowledge in that classroom except you. So we want to teach them techniques that allow the students to own their own knowledge. I'd say those are a couple of the main things. And also to recognize that if they do have a kid who has some cognitive issues, whether on the autistic spectrum or not, they can get them involved in the class. Or I'll pick something different than that, a physical one: if you've got kids in the class who have a visual impairment, you need to learn techniques to describe objects, not to strictly point at them, because if they've got a visual impairment, pointing is not going to work particularly well. And to recognize those things and have strategies to work with, so you bring all your kids along.
What often happens is, trying to be good to the child, they will isolate them on a project and say, well, you're going to work on your own on this because you've got communication challenges. They'll never say that explicitly, but that's what happens, and they're actually devaluing that person as a team member, unintentionally.
[00:14:56.132] Kent Bye: In terms of teaching de-escalation skills, what type of scenarios do you have for cops to be able to learn how to de-escalate a situation?
[00:15:04.787] Charles Hughes: Techniques that I saw, I'll give him a shout-out: it's Chief Beary. He is the Chief of Police at the University of Central Florida, but he's also been the Chief of Police other places prior to that. And he taught me: I don't know the answers to those questions until I talk to the subject matter experts. And what he did when he got in there is just the perfect example. I'll give you the scenario. The scenario is, you've got somebody in a public place who is espousing a controversial topic, and they're very loud, they're disruptive to other people in there, and there are areas they could go to do that where it's perfectly acceptable, but they've chosen an area where it's not. And they feel it's their right. And you come in there, and if what you do is just tell them to move out of there, you're escalating the situation, because you've never figured out: why are they being antagonistic? What's going on? So this particular scenario has a young man whose mother just died. She died of cancer. It was the cancer that killed her, but she had a hell of a last six months because she wasn't eating. And the reason she wasn't eating is the nausea. When you're on chemotherapy, it's really difficult to hold food down. But one of the ways you can help a person is through medical marijuana. It makes a big difference. And it's actually not really a controversial topic. It's controversial in state legislatures, which assume that medical marijuana is the same as marijuana and that it's a gateway. And what happened in Florida is that a sheriff, with two days to go before an amendment for medical marijuana, declared that we should turn this amendment down because it's the gateway to drugs, and nobody had a chance to respond to him. And so that's why the guy's angry: because he feels betrayed by the system. His mother was betrayed.
And if you can learn that about him, and you can respect his opinion even if you may disagree, because he may want all drugs taken off the books, you don't have to agree with everything. You just have to respect him as a human being. You need to have empathy for the situation that drove him to be so angry. And if you do that, he will willingly walk out with you, because you're the best person he's run into today to express his opinions to. So that's one thing that we do. Now, you've got to handle it a little bit differently if you actually have a mentally unstable person, and we don't yet have a scenario built for that. But we have a scenario built for this: we often have college freshmen who come in who never saw anything but an A-plus in high school. They get into college, they're away from home, it's a new environment, lots of things going on socially, and they start to find themselves slipping. And by the end of the semester, some of them get very distressed and maybe even suicidal. And the police get called in. The problem with the police getting called in is they bring guns in there, and they have to. They're required to, because they are state police in our particular situation, and any police officer in the US seems to have to carry one in. That escalates things automatically. So what they have to learn is tactics: how you stand, where your hand goes. It can't go near there. But then they also have to be very aware of their environment. If a kid drops down and their hands are hidden, a police officer becomes hypervigilant at that point. And so what they have to do is exercise strategies that, in a gentle way, get people not to hide their hands. And what they've found with police officers is, if they de-escalate situations, if they learn those skills, they are healthier, and they have better home lives as a consequence, because otherwise they go home and they're hypervigilant.
They don't make friends with anybody except fellow police officers, because they understand. But if you can get them away from that, then they can blend into society, and it makes it much easier for them to blend into the communities that they're serving.
[00:19:33.258] Kent Bye: So if you're using virtual actors in these different scenarios, then what affordances is virtuality providing? Like, why not just do a live role-playing scenario instead?
[00:19:42.441] Charles Hughes: Well, what happens is, I will have an interactor who might be a 35-year-old woman who can play the school principal, the teacher, five kids in a middle school class, five kids in a high school class. And she looks like a high school kid when she's a high school kid. She looks like a black high school kid when she's a black high school kid. She looks like a lily-white Irishman like me when she's that. She looks like a woman when she's that. Her voice has the quality of whoever she's playing. When you start talking about standardized patients, which is the model for that kind of standardized human role playing, they cannot really look like anybody but themselves. They may be able to do some voice morphing, but not as well as you can do it electronically. They cannot do all of the twists and bends in the body, and the smiles and deformation of the face, that you can put in with really good modeling efforts that make it personal to that individual. So these virtual characters are much more adaptable than a single human being who is physically present. But that single human being controlling the avatars can give you all of that diversity.
[00:21:06.215] Kent Bye: So you mentioned social presence, and I'm curious, like, how do you think about social presence? And like, how do you measure or define it? And it seems like I can get an intuitive sense, but I'm just curious from your research perspective, like, what is social presence?
[00:21:20.495] Charles Hughes: Okay, well, let me just differentiate. Physical presence is the sense of being in that environment. Co-presence is the sense of being physically with others in that environment. Social presence is making connections, and connections can involve a sense of empathy. So, I'll give you a perfect example of social presence. One of the boys in our classroom, Sean, has a dog named Chewy. Teachers get so connected to Sean. Some of them can't stand him, because he is the aggressive-dependent one; he's always wanting your attention. But most of them, after a while, really realize Sean is a great kid. And they really do. One of them was off in a park, this is in the L.A. area, and she's walking around with her dog, and she runs into a woman who's got a nice dog. And she asks the woman, what's the name of your dog? And she said, it's Chewy. She said, oh, I have a friend, I've got to go back and talk to him and tell him I met another dog named Chewy. Then she said, oh, never mind. Sean's not real. And the woman looked at her and walked away. But that is social presence. She's made such a connection with Sean that, to her, he's really part of her group of friends. And teachers, when they get together at conferences, there are education ones that have worked with TeachLivE, actually discuss their students, in the sense of talking about individuals in the class, saying things like, sort of a variant of Maria, but with some of the personality of CJ. Now, CJ is the girl who is the boss of that classroom, and she is. And if you don't recognize that she's dominant within her peer group, then you won't have a chance in that classroom. And so you have to understand all the dynamics of it. And those dynamics are also part of social presence. You have to understand that.
[00:23:25.347] Kent Bye: And you had mentioned that you had up to, like, 20,000 teachers that went through TeachLivE over the last year?
[00:23:30.029] Charles Hughes: Last year it was about 20,000; 12,000 the year before. When we started, we were one university doing it internally. Then we got a second university, Utah State, and then a number of others, up to about eight or nine, and the Gates Foundation came in and said, what we'd like you to do is a little evaluation of how this is going, and bring all of those schools together. What they thought they were going to do, as the next stage, was get us doing studies with the schools, but when the schools came together, they had all already started research studies. And so the Gates Foundation went from the $100,000 we're-interested level to the $1.5 million we're-really-interested level. And so we had metrics, and the metrics were: we had to have 30 universities on board at the end of three years, and we had to have plans towards commercialization. At the end of three years, we had 85 universities. At the end of two and a half years, we actually had a commercial partner. And it's already commercialized, and the commercialization means that it's now being used in other applications. We brought Best Western to the company, because we don't want to do things like that ourselves. They're doing 2,500 Best Western hotels, and the desk staff in those, now. And if that works well, they're going to go to the rest of it, the international part, and then they're going to start working with other people within the hotel chain. So it has taken off.
[00:25:07.902] Kent Bye: Yeah, that's pretty amazing. What is the proof that this is so successful? What kind of numbers?
[00:25:12.504] Charles Hughes: The numbers that I thought I mentioned before, but I'll mention them again: we did a study of 160 teachers across 10 different universities. The universities did the study, but these were teachers already in practice. So we observed them in their classrooms across the various metrics we were looking at, and these were all measures of effective teaching. Then we brought them into the environment four times, for ten-minute sessions, and we observed their improvement during that. We did give them reflective feedback. And then we went back into the classroom several months later and observed that they were retaining the skills. Not just retaining what they'd gotten to at the fourth session; they were better at them. And we think, although we have no evidence of this, that getting back into the context of their own classroom, they were really highly motivated to practice those skills and to see how they worked. And the fact that they were still doing them means it must have worked within the classroom. Now, did we do a lot of studies of the kids and how they did on standardized testing? No, we didn't do that.
[00:26:25.333] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable? Oh, Lord.
[00:26:34.809] Charles Hughes: Those are always the brutal questions, the prognostications. Just, you know, from my perspective, what I want is to get to the point where a person can inhabit another manifestation of themselves, whether it's a robot or what have you, and feel that true sense of being there in every aspect: socially, co-present, all of that. And when we get there, I don't know what the hell the limits are, because they are going to be amazing. But what it takes is a big effort, and we're doing that with Carnegie Mellon, with Louis-Philippe Morency up there. We're doing some work on emotional state. If your system cannot understand the emotional state of the other characters in there, then it's not going to work in terms of you being able to influence them, understand them, have real dialogue with them. And that's a multi-modality problem, and some of those things are really hard. Like, I can observe right now where your eyes are, and if you go back a ways, I still can. But cameras are just not quite there, except for very expensive ones. So everything needs to be done off the shelf to be successful, because the only reason VR was delayed so long is the cost. You know, for a video see-through HMD, the Canon ones were $27,000; the new ones were $110,000. Who the hell is going to pay that type of money? Now you can do a video see-through head-mounted display by putting together an Oculus with the Ovrvision, I can never pronounce that one right, but there are a whole bunch of those camera rigs, so you've got an effective left and right eye, so you get depth sensing and all of that. That's cheap now, and then you can put the Leap Motion on, and you can pick up hands, so you can get gesture control and all of that. That's very recent. And so to predict how fast it'll move now is really hard.
[00:28:44.154] Kent Bye: But the answer is: very. And you still have a human in the loop. Do you foresee a time when you could completely replace that with AI? Or do you see that you just can't replace humans for what you're doing?
[00:28:54.023] Charles Hughes: I see it as a ways off. That's the one thing I see as a ways off. But I think we can learn so much from the humans. And so what we can do, as we pick up how they control characters in these situations... if I'm going to understand how they're reacting to your emotional response, I have to understand your emotional response. And the more I pick up on that, the more of it I can take away from the human interactor and do automatically for them. And that's what our goal is: we're trying to assist the human interactor and make their job easier and easier. And the more we do that, the closer we get to what you're talking about. But in my lifetime, I don't know. Because I'm a person who went through the winter of AI, because I've been in computer science since '62, which is a long time. And I've seen the ups and the downs, and in VR, the Gartner Group hype cycle on VR. Have you ever seen that? Yeah. So it got really high there after SIMNET and things like that. And there was all this great interest in it, and then people started looking at the cost, and it flopped down. And now the cost is low, so it's coming back up. I don't think we're going to see another winter. I hope not. AI didn't. And why? It's because most of the people in the early times were trying to attack AI in a neat way, you know, grammar-based, all of that. And the scruffies were around, but the scruffies didn't have the support of the computational means we have now to deal with large volumes of data. And statistical approaches, which are essentially heuristic, scruffy approaches, won in natural language recognition. I don't know exactly what's going to win in terms of figuring out emotional response. There may be some great heuristics that come up there too.
[00:30:56.662] Kent Bye: Who knows? Great. Is there anything else that's left unsaid that you'd like to say?
[00:31:01.488] Charles Hughes: Yeah, I'll just say one thing. If you're a techie like me, learn to listen to other people. Nothing I do would have any value if I didn't learn to listen to people in education who really do know what the hell they're doing. And your job is to be a great tool maker and have them understand not just the limits, but to listen to them when they're outside of those limits. And sometimes you find a compromise, and you suddenly realize, hey, we really can do that. But don't walk in and just start throwing technology at them. Demo some stuff, and then go into a conference room and just listen. And that works with police officers, that works with teachers, that works with anthropologists, archaeologists, all sorts. Listen. People in hotel management, listen. And don't stop listening. Don't assume that just because you got one thing right, you got it all right. They're your partners for life as long as you're working in those areas. That would be my best parting comment.
[00:32:10.830] Kent Bye: Great. Well, thank you so much. Okay, you're welcome. So that was Charlie Hughes. He's the co-director of the Synthetic Reality Lab at the University of Central Florida. So I have a number of different takeaways from this interview. First of all, I think this technology, and the concept of using one improv actor to drive an entire scene that creates a sense of social presence, is going to have a lot of applications for how some narratives are told within VR. Just like there's live theater, I think this technology creates the possibility of recreating that type of live theater experience, with more dynamic interactions with people than artificially intelligent NPC characters can provide at this point. So I expect to see this type of Wizard of Oz technology adopted and used within different virtual environments. And some of the technology that was just shown during the Real-Time Live demos at SIGGRAPH this week really showed how you can use one actor to embody different characters and add all sorts of special effects. That was used in the context of recording a pre-recorded scene in the live demo at SIGGRAPH. I think in the future, we're going to see this kind of live theater experience where many people are watching a show that's being driven by just a few actors who are embodying all the different characters. And the anecdote that Charlie shared about social presence, and what social presence means, is something that's really striking to me: some of these teachers in training are just looking at a 2D screen with five different students within the classroom.
The camera is kind of zooming in and out, and they're able to have these interactions. The interactor who's puppeting the different characters has a camera on the teacher so that they can see them, but the teacher's only seeing this virtualized environment of these different avatars. That seems to be enough to create this sense of social presence and build these relationships with these imaginary characters, so much so that it gives the impression within people's memories that these imaginary characters are real. Just the moment where the teacher had to say, oh, never mind, my friend isn't real, kind of speaks to the power of virtual environments. I think social presence is one of the key aspects of why VR experiences that have a social dimension are going to give you so much more of a sense of it being real and of you being present there. I've talked before on different episodes about my own personal theory that there are four different types of presence. One of them is social presence. Another is active presence, the presence that's involved with you manipulating and using different tools within VR. Then there's embodied presence, where you actually physically feel like you're there or have the sense of the virtual body ownership illusion. And there's emotional presence, where you have an emotional response that's being evoked by participating in the virtual environment. So I think that this mixed reality experience is really focused on building that social presence, but also the emotional presence, because the teachers are building these emotional connections to what they're encoding within their minds as relationships with real people.
And so in the YouTube video that's embedded in this episode, there's an example of looking behind the scenes and having one of the interactors be motion captured and doing the voice acting and embodiment of the five different characters. Then they also show this other interactor who is playing a keyboard in order to do these shortcuts for gesture-based controls. I think this may be a little bit of an older video, and they may have consolidated this so it's possible for one improv actor to also be doing some of these keyboard interactions. But the idea is that there are these shortcuts that you can use in order to do interactions that are more fully animated. And in terms of a less fatiguing interface, if you're acting and doing this with many different teachers, you may get tired doing the full embodiment of these virtual characters. And so just having a little keyboard and these shortcuts, I think, is an approach that's going to be able to scale out these types of interactions. You can imagine having much more sophisticated types of animations within a narrative context. And it's really interesting to hear from Charlie about the success of this program and the efficacy of doing this type of training, seeing how teachers have not only retained but improved on some of their skills as they check in with them over time. So much so that the Bill and Melinda Gates Foundation has given them a $1.5 million grant and supported this type of program being spread out across the country. So I expect to see more of these types of virtual trainings for different social interactions. I know I've done a few other interviews at IEEE VR talking about some of these virtual humans that are being puppeted with this Wizard of Oz type of scenario.
And there are a lot of different situations with complex social dynamics where it's difficult to get all the different resources together for people to train within these different scenarios. And so we'll be looking at some of these other use cases for how this Wizard of Oz type of approach is being used in training. But in looking at narrative and storytelling in VR, it really feels like this kind of approach would be very well suited to doing these types of live theater productions, where there may be a limited set of actors who are embodying many different characters, even simultaneously, because when you're watching an experience, you're really only able to listen to one or two characters at the same time. So that's all that I have for today. I am back from SIGGRAPH. I did a little over 20 interviews there over this last week. And next week, I'm going to be heading to Los Angeles again for the VRLA conference. If you're in the realm of doing storytelling in VR and would like to catch up with me at VRLA, I tend to keep a pretty open schedule and roam around. So keep an eye out for me and come up and say hello. And if you have something that you'd like to talk about, let me know. I also just wanted to give an additional shout out to one of my sponsors, the VR Society, which is this Hollywood consortium of different companies who are coming together to talk about storytelling and narrative in VR. Look out for their VR on the Lot gathering that's going to be happening sometime in October. I think they're still trying to settle on the exact dates. But check out thevrsociety.com for more information on that. And if you've been enjoying the Voices of VR podcast, then please do tell your friends and consider going to iTunes and leaving a review so you can tell more people about what you're getting out of this podcast.
And if you'd like to send some financial support my way, then please do consider becoming a contributor to my Patreon at patreon.com slash Voices of VR.