“Wizard of Oz” VR experiences use improv actors to drive one or more virtual characters. This technique is commonly used within VR training applications, where it’s cheaper to have a single actor puppet multiple virtual characters than to hire multiple actors in order to create a sense of social presence. The “interactors” driving the content of the experience can use a set of keyboard commands to trigger pre-rendered gestures and animations, or they can use more sophisticated motion capture and virtual embodiment.
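The keyboard-command approach can be sketched as a simple mapping from an interactor’s keypress to a named animation clip on the puppeteered character. This is a minimal illustrative sketch, not TeachLivE’s actual implementation; the key bindings and clip names are hypothetical.

```python
# Hypothetical Wizard-of-Oz puppetry control table: each key an interactor
# presses triggers a pre-rendered gesture or animation on the virtual
# character. Bindings and clip names are invented for illustration.
PRE_RENDERED_CLIPS = {
    "g": "greet_wave",
    "n": "nod_agreement",
    "s": "shrug_confused",
    "a": "arms_crossed_defiant",
}

def trigger_animation(key):
    """Return the animation clip the character should play, or None if
    the key is unbound (the character simply keeps its idle animation)."""
    return PRE_RENDERED_CLIPS.get(key)
```

In a real system the returned clip name would be handed to the game engine’s animation system; the point is that a single interactor can drive expressive character behavior with one keystroke per gesture.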
I had a chance to talk with Charlie Hughes, who is the co-director of the Synthetic Reality Laboratory at the University of Central Florida. He was also one of the founders of TeachLivE, a training application that prepares middle school teachers for complicated classroom social dynamics and different types of students.
Artificial intelligence is not good enough to fully automate these virtual characters within many of these types of training scenarios, and so human surrogates are still being used to dynamically respond to the user’s actions through what their virtual characters say and do within the experience. I predict that narratives in VR are going to start to use a similar human-in-the-loop approach of using improv actors to drive live immersive virtual theater experiences. And if the winner of the Real-Time Live competition at SIGGRAPH is any indication, then the technology to do this type of live theater with cutting-edge special effects is already here within the Unreal Engine. There are a lot of breadcrumbs for the future of interactive narratives in the live theater genre in what TeachLivE has been able to do with human surrogates and digital puppetry.
Demo of the TeachLivE Wizard of Oz system:
Demo of Real-Time Cinematography in Unreal Engine 4, which won the Real-Time Live competition at SIGGRAPH 2016:
Donate to the Voices of VR Podcast Patreon