Pearl is one of the best narrative VR experiences that I’ve seen so far, and people were waiting in line for over an hour to see it at both SIGGRAPH and VRLA. The central character in the story is a car named “Pearl” who gives selfless service to her owners, a musician father and his daughter. The experience traces that service across a wide range of emotional memories, from mundane chores and road trip adventures to key turning points in their lives. It takes inspiration from Shel Silverstein’s The Giving Tree, and it proved to be a powerful, moving, and emotional experience for many people who watched it at SIGGRAPH and VRLA.
Pearl was directed by Patrick Osborne, who won an Academy Award for Disney’s animated short film Feast. It was produced as part of Google’s Spotlight Stories using their custom VR animation software, codenamed Moxie, that’s being developed within Google’s Advanced Technology and Projects group (aka ATAP).
I had a chance to talk with Patrick Osborne at SIGGRAPH, where they premiered the interactive version of Pearl running on the Vive. It originally premiered as a 360 video at the Tribeca Film Festival, and was eventually released on YouTube during Google I/O. We talked about pacing and editing in VR, making a folk music video in VR, and the inspiration for the story.
LISTEN TO THE VOICES OF VR PODCAST
You can watch Pearl on YouTube now, and I’d also recommend keeping an eye out for whenever the more interactive version is released on the Vive.
The YouTube channel for Google’s Spotlight Stories has their previous 360-video experiments, and they’re releasing some introductory videos, including one on how to watch a 360 video on a phone without even using a VR headset.
In fact, I’d say that their latest Buggy Night release serves as a training video designed to help people learn how to watch 360 videos primarily on a mobile phone.
AMD has made a number of different announcements over the past couple of weeks at both SIGGRAPH and VRLA. They announced new Radeon Pro graphics cards designed for professional visual effects creators, an open source VR video stitching tool called Project Loom (to be released on GPUOpen later this summer), and the open sourcing of their ray tracing renderer ProRender, a rebrand of FireRender.
At VRLA, AMD announced that they’re going to be bringing VR demos to the masses in public spaces like malls and movie theaters in partnership with Awesome Rocketship. AMD is helping to support initiatives that make it more accessible for consumers to demo VR, which they see as essential for VR to be successful. AMD also announced the least expensive VR-ready PC that meets the Vive’s and Oculus’ minimum specifications, the CYBERPOWERPC Gamer Xtreme VR, which is now available on Amazon for $720.
I had a chance to catch up with AMD’s Roy Taylor, VP of Alliances, Content, and VR, at VRLA to hear more about AMD’s recent announcements, their open source philosophy, their support for VR storytellers, and the upcoming VR on the Lot event on October 13 & 14th that will be helping to educate the film industry about VR.
LISTEN TO THE VOICES OF VR PODCAST
Radeon Pro is being sold to visual effects professionals and VR storytellers
Here’s a teaser trailer for the Awesome Rocketship VR demo pods that AMD will be helping to bring to malls, movie theaters, and other public areas where people gather.
Digital lightfields are a cutting-edge technology that can render photorealistic VR scenes, and OTOY has been a pioneer of the rendering and compression techniques needed to deal with the massive amounts of data required to create them. Their OctaneRender is a GPU-based, physically correct renderer that has been integrated into 24 special effects industry tools, with support for Unity and Unreal Engine on the way. They’ve been pioneering cloud-based compression techniques that allow them to stream volumetric lightfield video to a mobile headset like the Gear VR, which they were demonstrating at SIGGRAPH 2016 for the first time.
Jules Urbach is the CEO and cofounder of OTOY, and I had a chance to sit down with him at SIGGRAPH in order to understand what new capabilities digital lightfield technologies present, some of the new emerging file formats, the future of volumetric lightfield capture mixed with photogrammetry techniques, capturing an 8D reflectance field, and his thoughts on swapping out realities once we’re able to realistically render out the metaverse.
LISTEN TO THE VOICES OF VR PODCAST
OTOY is building their technology stack on top of open standards so that they can convert lightfields rendered with Octane into an interchange format like glTF, which can then be used in all of the industry-standard graphics processing tools. They also hope to eventually deliver their physically correct renders directly to the web using WebGL.
In the Khronos Group press release about glTF momentum, Jules said, “OTOY believes glTF will become the industry standard for compact and efficient 3D mesh transmission, much as JPEG has been for images. To that end, glTF, in tandem with Open Shader Language, will become core components in the ORBX scene interchange format, and fully supported in over 24 content creation tools and game engines powered by OctaneRender.”
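To make the interchange idea concrete, here’s a minimal sketch of how a glTF asset could be loaded into a WebGL scene on the web using three.js; the file name and the notion of exporting it from an Octane/ORBX pipeline are my own assumptions for illustration, not OTOY’s actual tooling.

```typescript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Hypothetical asset: a scene exported to glTF from an Octane/ORBX pipeline.
loader.load(
  "assets/octane-export.gltf",
  (gltf) => {
    // Add the imported node hierarchy to the WebGL scene.
    scene.add(gltf.scene);
  },
  undefined,
  (error) => console.error("glTF load failed:", error)
);
```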
Jules told me that they’re working on OctaneRender support for Unity and Unreal Engine, so users will be able to start integrating digital lightfields within interactive gaming environments soon. This means that you’ll be able to change the lighting conditions of whatever you shot once you get it into a game engine, which makes it unique among volumetric capture approaches. The challenge is that there aren’t any lightfield cameras commercially available yet, and Lytro’s Immerge lightfield camera is not going to be within the price range of the average consumer.
Last year, OTOY released a demonstration video of the first-ever light field capture for VR:
Jules says that this capture process takes about an hour, which means that it would primarily be for static scenes, but that they’re working on much faster techniques. However, they’re not interested in becoming a hardware manufacturer, and are creating 8D reflectance field capture prototypes with the hope that others will create the hardware needed to utilize their cloud-based OctaneRender pipeline.
Jules says that compressed video is not a viable solution for delivering the pixel density that next-generation screens require, and that their cloud-based lightfield streaming can achieve up to 2000fps. Most 360 photos and videos are also limited to stereo cubemaps, which don’t really account for positional tracking. But lightfield capture cameras like Lytro’s do a volumetric capture that preserves parallax and could create navigable, room-scale experiences.
@OTOY added support for rendering stereo cube maps in the Octane renderer. Their test is the highest quality scene I have seen in an HMD.
Jules expects that the future of volumetric video will be a combination of super high-quality photogrammetry environment capture with a foveated-rendered lightfield video stream. He said that the third-place winner of the Render the Metaverse contest used this type of photogrammetry blending. If Riccardo Minervino’s Fushimi Inari Forest scene were converted into a mesh, it would be over a trillion triangles. He says that the OctaneRender output is much more efficient, so this “volumetric synthetic lightfield” can be rendered within a mobile VR headset.
Overall, OTOY has an impressive suite of digital lightfield technologies that are being integrated with nearly all of the industry-standard tools, with game engine integration on the way. Their holographic rendering yields the most photorealistic results that I’ve seen so far in VR, but the bottleneck to producing live-action volumetric video is the lack of any commercially available lightfield capture technology. Lightfields solve a lot of the open problems around the lack of positional tracking in 360 video, and so they will inevitably become a key component of the future of storytelling in VR. And with the game engine integration of OctaneRender, we’ll be able to move beyond passive narratives toward truly interactive storytelling experiences and the manifestation of the ultimate potential of a photorealistic metaverse that’s indistinguishable from reality.
During the Enlightenment, René Descartes declared that the mind and body were split and that we should think about them as separate dualistic entities. But more and more evidence is pointing to the fact that our bodies are much more involved in cognitive processes than we ever thought before. One of the most interesting theories along these lines is called “embodied cognition,” which asserts that we learn about the world by manipulating and interacting with it through our body and all of our senses and that the context turns out to be an extremely important part of learning as well.
LISTEN TO THE VOICES OF VR PODCAST
I went to an Embodied Learning educational workshop at the IEEE VR academic conference in March where “embodied cognition” was the hottest buzzword throughout the entire day. I had a chance to meet and talk with Clemson education student Nikeetha D’Souza, who worked on an interdisciplinary team that used dance to teach middle school girls concepts of computational thinking.
They used a visual scripting program called “Virtual Environment Interactions,” abbreviated as VEnvI. Students could choreograph a dance routine by using the visual scripting language to sequence moves, add conditionals, and create repeating loops. The script controls a virtual avatar that executes the dance, and the students then go into a virtual environment and dance alongside their digital avatar.
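As a rough textual analogue of that kind of program, here’s a minimal sketch of a choreography script with a sequence, a repeating loop, and a conditional; the move names and the Avatar interface are hypothetical, since VEnvI itself is a visual, block-based language rather than written code.

```typescript
// Hypothetical avatar hook: play the animation for a named dance move.
interface Avatar {
  perform(move: string): void;
}

function choreograph(avatar: Avatar, musicIsFast: boolean): void {
  // Sequence: perform each warm-up move in order.
  const warmUp = ["step-touch", "clap", "spin"];
  for (const move of warmUp) {
    avatar.perform(move);
  }

  // Repeating loop: repeat the chorus moves four times.
  for (let i = 0; i < 4; i++) {
    avatar.perform("arm-wave");
    avatar.perform("slide-left");
  }

  // Conditional: choose an ending based on the music's tempo.
  if (musicIsFast) {
    avatar.perform("jump");
  } else {
    avatar.perform("bow");
  }
}
```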
When the recession hit in 2008, book publisher Charlie Melcher looked to reinvent how Melcher Media told stories using the latest smartphone technologies. They developed an iOS app for Al Gore’s Our Choice, and started having a lot of conversations with media producers from many different disciplines to see how they were using code as a canvas for storytelling. So in 2012, Charlie founded The Future of Storytelling Summit to gather together the most cutting-edge innovators in immersive and interactive storytelling. For the past four years, they’ve been featuring more and more virtual reality technologies at their yearly summit, which is happening again this year on October 5th and 6th.
Charlie cites Orality and Literacy by Walter Ong as a book that explores what cultures lost from their oral traditions when the printed word, rather than collaboratively shared stories, became the authoritative source. He sees virtual and augmented reality bringing us back to that earlier time with “Living Stories” that are personalized, responsive, immersive, and multi-sensory. Rather than continuing to produce uni-directional linear media, these new immersive platforms are enabling us to play a more significant role within stories, where we can exert our agency, express our creativity, and more fully collaborate in making stories.
I had a chance to catch up with Charlie to explore his thoughts on what VR can learn from immersive theater, the transformational potential of becoming a character within a story, the power of living stories, creating more social storytelling experiences, and how immersive technologies may be bringing back some of these pre-literate oral traditions and a greater tolerance for dealing with mystery and enchantment.
LISTEN TO THE VOICES OF VR PODCAST
I recommend checking out some of the speaker videos produced before each of The Future of Storytelling Summits. Here are some of the VR highlights worth checking out:
World Building with Alex McDowell
Glen Keane – Step into the Page with Tilt Brush
Language of Looking with Eyefluence
Saschka Unseld – Uncovering the Grammar of VR
Ubisoft’s Corey May on the Player Story vs. Protagonist Story
The Three Moods of Netflix: Escape, Expand, or Socialize
Adventure Time creator Pendleton Ward has been fascinated by the idea of virtual reality since he first read Snow Crash as a teenager. He backed the Oculus Kickstarter, has been exploring many of the early VR prototypes over the last three years, and has started to create interactive stories. Little Pink Best Buds has some VR components and was made as part of Double Fine’s two-week Amnesia Fortnight game jam in 2014. Pen is currently exploring the bounds of identity in VR through a new adventure game and story that he’s working on, and he’s been taking inspiration from many different early VR prototypes, but especially the out-of-body experiences he had in Robin Arnott’s SoundSelf VR experience.
I had a chance to catch up with Pen at SIGGRAPH, where he talked about SoundSelf, contrasting his ability to express identity in real versus virtual spaces, how VR allows him to get out of his head, recreating a scene at the Black Sun virtual nightclub from Snow Crash, his explorations in social VR, and where he sees the metaverse heading.
LISTEN TO THE VOICES OF VR PODCAST
For more information about identity in VR, check out my interview with Mel Slater, who is one of the leading researchers exploring presence and virtual identity. I’ve also interviewed a couple of Mel’s students, including Domna Banakou and Nonny de la Peña, who have both explored the virtual body ownership illusion.
Also check out this summary of research from Mel’s Event Lab on their findings about Positive Illusions of Self in Virtual Reality
And here’s another video exploring the Time Travel Illusion in VR
Here’s Pen’s pitch video for Little Pink Best Buds, which was the experience selected to be made as part of Double Fine’s Amnesia Fortnight 2014.
And finally, here are a couple of cartoons about VR from Pen, as well as a Tilt Brush drawing he made using a ladder:
"personal bubble" shields seem like an important feature to prioritize in these early days of social-VR pic.twitter.com/Afbp8MxLJq
“Wizard of Oz” VR experiences use improv actors to drive one or more virtual characters. This technique is commonly used within VR training applications, where it’s cheaper to have a single actor puppeting multiple virtual characters than to hire multiple actors in order to create a sense of social presence. The “interactors” driving the content of the experience can use a set of keyboard commands to trigger pre-rendered gestures and animations (sketched in the example further below), or they can do more sophisticated motion capture and virtual embodiment.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to talk with Charlie Hughes, who is the co-director of the Synthetic Reality Laboratory at the University of Central Florida. He was also one of the founders of TeachLivE, a training application that prepares middle school teachers for complicated social dynamics and different types of students.
Artificial intelligence is not yet good enough to fully automate these virtual characters within many of these training scenarios, so human surrogates are still being used to dynamically respond to the user’s actions through what their virtual characters say and do within the experience. I predict that narratives in VR are going to start using a similar human-in-the-loop approach of improv actors driving live immersive virtual theater experiences. And if the winner of the Real-Time Live competition at SIGGRAPH is any indication, the technology to do this type of live theater with cutting-edge special effects is already here within Unreal Engine. There are a lot of breadcrumbs for the future of interactive narratives in the live theater genre in what TeachLivE has been able to do with human surrogates and digital puppetry.
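Here’s a minimal sketch of the keyboard-driven puppetry approach described above, where a single interactor triggers pre-rendered gestures on multiple characters; the character names, gesture names, and playGesture() hook are all hypothetical, not TeachLivE’s actual system.

```typescript
// Hypothetical character hook: trigger a canned animation clip by name.
type Gesture = "wave" | "nod" | "shrug" | "lean-forward";

interface VirtualCharacter {
  name: string;
  playGesture(gesture: Gesture): void;
}

// One interactor can puppet several characters from a single keyboard.
function bindInteractorKeys(characters: Map<string, VirtualCharacter>): void {
  const keymap: Record<string, { character: string; gesture: Gesture }> = {
    q: { character: "student-1", gesture: "wave" },
    w: { character: "student-1", gesture: "nod" },
    a: { character: "student-2", gesture: "shrug" },
    s: { character: "student-2", gesture: "lean-forward" },
  };

  window.addEventListener("keydown", (event) => {
    const binding = keymap[event.key];
    if (!binding) return;
    characters.get(binding.character)?.playGesture(binding.gesture);
  });
}
```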
Demo of the TeachLivE Wizard of Oz system:
Demo of Real-Time Cinematography in Unreal Engine 4, which won the Real-Time Live competition at SIGGRAPH 2016
Catherine Rehwinkel is working on creating a conceptual framework that enables storytellers to storyboard linear stories in VR. She’s a filmmaker who recently finished a master’s degree in computational & systems thinking at NYU. She’s been inspired by Donna Haraway’s “Situated Knowledges” feminist theories, which take into account how location and place impact our perspective on events, and she’s intrigued by VR stories that grow and evolve when you watch them from different locations.
LISTEN TO THE VOICES OF VR PODCAST
One way that I understand the importance of location in VR storytelling is by looking at Rose Troche’s Perspective series, where she explores how a narrative can change if you watch the same events through different characters’ eyes. Rose concluded that the first-person perspective is extremely vulnerable, and that in order to get a more complete picture of an event it’s helpful to take into account many different perspectives from different people.
Similarly, Donna Haraway doesn’t believe that we can have truly objective, passive, or omniscient scientific observations that are independent of our subjectivity. Instead, Haraway’s concept of “situated knowledges” calls for us to think of people as much messier and more complicated creatures who are full of contradictions. Situated knowledges can be described by thinking about subjects who become “complex contraptions made of biological vision and personal will, the scientific gaze is dissolved into a network of contested observations, and objects become Coyote-Frankensteins, produced and yet much more in control than the traditional modest witness would care to admit.”
In order to get a more complete picture of any topic, you have to triangulate between many different complex and contradictory subjective perspectives (incidentally, this is part of the philosophy and intention driving the Voices of VR podcast). Catherine Rehwinkel believes that VR is a particularly well-suited medium for simulating a multitude of different perspectives through the simple mechanism of changing your location as you watch and re-watch a series of linear events take place. If the narrative is constructed well enough, then your understanding of the story could continue to evolve and grow as you watch from many different vantage points. Catherine believes in this vision for storytelling in VR, and is in the process of building conceptual tools for storytellers and VR designers to storyboard, architect, and prototype these types of experiences inspired by the concept of situated knowledges.
Back in 2014, the CLOUDS interactive documentary premiered at Sundance New Frontier, where it debuted a VR interface to navigate over 40 oral history interviews with creative coding pioneers. Movies have typically been pretty linear, but how could a documentary become more interactive? Just as hypertext links enabled web content to interactively link to other related resources with many inbound and outbound links, the CLOUDS creators James George and Jonathan Minard used a similar concept to hand-craft multiple inbound and outbound connections for every segmented interview clip. This created an interconnected web of media that can be navigated within VR, which ends up being more like a tagged media database than a linear film.
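To make that structure a bit more concrete, here’s a minimal sketch of a hand-authored clip graph where each interview segment carries tags plus outbound links to related clips, and navigation prefers a link that shares a tag with the current clip; the field names and sample data are my own illustration, not CLOUDS’ actual schema.

```typescript
// Each interview segment is a node with tags and hand-authored outbound links.
interface Clip {
  id: string;
  speaker: string;
  tags: string[];
  outbound: string[]; // ids of clips this segment can lead into
}

const clips: Map<string, Clip> = new Map([
  ["gen-art-1", { id: "gen-art-1", speaker: "Interviewee A", tags: ["generative art"], outbound: ["craft-2"] }],
  ["craft-2", { id: "craft-2", speaker: "Interviewee B", tags: ["craft", "code"], outbound: ["gen-art-1"] }],
]);

// Follow an outbound link that shares a tag with the current clip, if any;
// otherwise fall back to the first outbound link.
function nextClip(current: Clip): Clip | undefined {
  for (const id of current.outbound) {
    const candidate = clips.get(id);
    if (candidate && candidate.tags.some((t) => current.tags.includes(t))) {
      return candidate;
    }
  }
  return current.outbound.length ? clips.get(current.outbound[0]) : undefined;
}
```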
I had a chance to talk with one of the creators of CLOUDS, James George, who is the co-founder & CEO of Simile Systems and a founding member of the production company Scatter. James is a very innovative thinker about the future of interactive media, and he has many deep thoughts on the topic. He talks about the four-year evolution of CLOUDS, documenting the creative coder movement, how they implemented their interactive documentary, and the future of cracking the narrative code of the VR medium by defining each of its genres.
LISTEN TO THE VOICES OF VR PODCAST
CLOUDS has been out for a few years now, but I think it’s still ahead of its time in terms of what it’s doing with interactive documentary. James is currently in the process of productizing the Depth Kit to transform a Kinect into a computational photography camera, and is in production on a VR narrative experience called Blackout VR with Alexander Porter.
Brett Leonard’s journey into VR all started when he moved to Santa Cruz and started partying and smoking pot with some of the elite visionaries from the Silicon Valley technology scene. He was an aspiring writer and film director who got inspired by Jaron Lanier’s evangelism of virtual reality technologies. Brett got to try out a lot of cutting-edge VR, and then went on to help popularize the term “virtual reality” on a global scale with his 1992 film The Lawnmower Man, a dark cautionary tale that also contains many prophetic predictions. It’s still one of the earliest and most accurate portrayals of the potential of VR as an immersive video game medium, and Palmer Luckey has cited it as an inspiration for being able to step into a video game. It also shows how VR could open up new neural pathways into the mind and serve as one of the most transformational mediums today.
I had a chance to sit down with Virtuosity Entertainment’s Brett Leonard at Casual Connect last week to talk about the history of how he got into VR, and how he’s been thinking about how VR will transform storytelling into the process of building storyworlds. He’s taking inspiration from shamanic and tribal rituals as well as immersive theater productions like Sleep No More, and has come to the conclusion that the primary experience of a story world will be kind of like a “clothes line under which the entire experience is hung.” While a traditional narrative is more like being presented with a singular clothes line that’s fed to you, in VR it’ll be more like discovering many different clothes lines where you get to decide which line to focus on and then “try on the clothes on that line in order to discover where that clothes line leads.”
Brett also thinks of VR as primarily a feminine medium, and he has been actively thinking about the ethics of VR with his Five Laws of VR as well as participating in the Three Laws of Human Augmentation Code led by Dr. Steve Mann.