Roy Sherrill is enabling the time travelers from the future to get a sense of presence at a number of different tech and cultural events around the San Francisco Bay Area. Roy’s been shooting a lot of 360 videos from his wheelchair at a number of different VR events this Spring and Summer, and he thinks of his selfie stick with a 360-degree bubblecam as a “wizard staff” and “portal stone.” When asked to explain why he’s recording all of this footage he says, “I capture slices of reality one moment at a time and save it in a bubble so that I can project others into it in the future so they can get the sense of presence as if they’re there themselves.” He’s enabling future members of the Society for Creative Anachronism to have a more direct experience of history as it unfolds.
LISTEN TO THE VOICES OF VR PODCAST
Roy is doing a sort of virtual tourism that is enabling other people with impaired mobility to get a taste of the different experiences that he’s having. While some of the footage may seem mundane to us now, it’s this type of footage that will be a lot more fascinating to people 20-50 years into the future. One example of this is the documentary Heavy Metal Parking Lot, which was shot before a Judas Priest concert in 1986 and serves as an anthropological study of the metal scene of the mid-80s.
Roy is hoping to inspire others to start sharing 360 videos of adventures that he’s not able to go on himself, and that eventually he’ll be able to live out his sci-fi dreams of becoming a teleastronaut, operating a scientific research vehicle on our moon or a distant planet while immersed in VR, as described by NASA’s Jeff Norris.
Pokémon Go has quickly become the #1 mobile game of all time, and while there’s been some debate as to whether it should be considered Augmented Reality or not, it’s clear that location-based gaming has been taken to the next level.
I had a chance to unpack some of the game design principles to see how it’s optimized to facilitate cooperative social gameplay with Roadhouse Interactive’s VR Director Kayla Kinnunen at Casual Connect this week. Kayla talks about how Pokémon Go has connected her to more strangers in two weeks than in the previous 15 years, how it’s changing her relationship with her wife and encouraging her to walk more, and some of the social contract issues as well as deeper lessons for the future of augmented and virtual reality gaming.
LISTEN TO THE VOICES OF VR PODCAST
Here’s a video of a Pokemon Go stampede that happened in Central Park last week:
And here’s a video of a Squirtle crowd in a downtown Bellevue, WA park from Pluto VR’s Shawn Whiting:
What are the game design considerations for creating a VR experience that will be compelling for not only the primary player, but also the spectators who may be watching it on a livestream with VREAL? This was the big question that I discussed with VREAL’s Director of Developer Relations Tadhg Kelly, who has been collaborating with VR developers to make sure that an experience works for both 3rd person spectators as well as the primary player.
The precise formula for the ingredients that will make a VR game that’s well-suited for streaming is still not known yet, but Tadhg suspects that it should be fun to play over and over again, and inspires competitions that have a near infinite number of outcomes. There’s also a lot of open questions around what the core competencies of a good VR streamer are going to be, and what existing PC and console streamers can teach VR streamers.
Tadhg and I explore all of what is known about VR streaming so far, and we talk about all of the open questions and make some predictions about the future of VR streaming with VREAL.
When people dream about what they want to do in VR, it inevitably involves actually moving around within a virtual environment. But VR locomotion triggers simulator sickness in a lot of people, and solving it is one of the biggest open problems in virtual reality. NextGen Interactions’ Jason Jerald wrote a comprehensive summary of much of the pertinent academic research about VR in The VR Book, and in Chapter 12 he summarizes the five major theories of what may cause simulator sickness.
I had a chance to catch up with Jason after he taught a class about Human-Centered Design for VR at the IEEE VR conference, and he explained each of the five theories of what may cause simulator sickness: the Sensory Conflict Theory, Evolutionary Theory, Postural Instability Theory, Rest Frame Hypothesis, and the Eye Movement Theory.
Jason also talks about ways to mitigate simulator sickness including implementing different viewpoint control patterns, using a cockpit, providing a leading indicator of movements, walking in place, vibrating the floor, reducing texture complexity, and limiting optical flow in the periphery. He also discusses the tradeoffs for varying the Representational Fidelity ranging from stylized to photorealistic, and the Interaction Fidelity ranging from abstracted to literal and natural gestures. While there are ways to mitigate some of these causes of simulator sickness for VR locomotion, some of them remain open problems yet to be solved by the wider VR community.
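One of the mitigations Jason mentions, limiting optical flow in the periphery, is commonly implemented as a speed-dependent vignette that narrows the visible field of view during artificial locomotion. Here’s a minimal Python sketch of that idea; the thresholds and FOV values are illustrative assumptions, not numbers from the interview:

```python
def vignette_strength(linear_speed, angular_speed,
                      max_linear=3.0, max_angular=90.0):
    """Map locomotion speed to a 0..1 vignette strength.

    linear_speed in m/s, angular_speed in deg/s.
    max_linear and max_angular are assumed tuning thresholds at
    which the vignette reaches full strength.
    """
    t = max(linear_speed / max_linear, angular_speed / max_angular)
    return min(1.0, t)

def fov_for_strength(strength, full_fov=110.0, min_fov=60.0):
    """Narrow the visible field of view as vignette strength rises."""
    return full_fov - strength * (full_fov - min_fov)

# Standing still: full 110-degree view; moving at 3 m/s: narrowed to 60.
fov_idle = fov_for_strength(vignette_strength(0.0, 0.0))
fov_moving = fov_for_strength(vignette_strength(3.0, 0.0))
```

In an engine this strength value would typically drive a shader that darkens or blurs the edges of the view each frame, reducing peripheral optical flow only while the camera is moving artificially.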
Dr. Ming Lin was working on realtime physics simulations before physics engines were cool. Ming is now actively researching how to simulate audio in real time. Rather than recording or generating sounds that are then played back within a virtual environment, Dr. Lin and her students are pioneering methods for “coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments.” She’s also using machine learning techniques to extract the material properties of an environment from a sound recording.
I had a chance to catch up with Ming after she presented some of her SynCoPation techniques as a part of her keynote at the IEEE VR academic conference in March. You can read more about her virtual sound simulation work in the “Interactive Sound Rendering” section of her vast body of VR-related research.
Pete Moss is known as the “VR Dude” within Unity, and has long seen the potential of virtual reality. He’s been on the frontlines of this virtual reality revolution over the past 3-4 years, creating a bridge between the hardware manufacturers and content creators. I had a chance to sit down with Pete at the Intel Buzz Workshop a few weeks back to let him reflect on the consumer launch of virtual reality and some of the emerging trends that he sees. VR locomotion is still a big open problem that he’s actively researching, and he encourages people to reach out to him if they have innovative ideas. He also shares his perspective on the future of audio in VR, and the vital role that VR artists will play in expanding our minds.
Two years ago, not very many people were thinking about how important audio was going to be for consumer VR. Jason Riggs was pitching what would eventually become OSSIC by claiming that it was going to be the Oculus of audio headphones. Two years later, his prophecy came true when his OSSIC X headphones raised over $2.7 million on Kickstarter, surpassing Oculus as the largest virtual reality crowdfunding campaign ever.
There’s clearly a lot of demand for high-end, immersive audio for VR, and Jason’s vision for the future of spatialized audio has started to be realized. Beyond the VR applications of the OSSIC X headphones, part of their success was that they could also have an immediate impact on existing 2D games with spatialized sound, as well as recreate the sound of a home theater system.
I had a chance to catch up with OSSIC founder and CEO Jason Riggs at SVVR Conference where we talked about the technology behind OSSIC, dynamic HRTF measurements, how to quantitatively and qualitatively measure the accuracy of their 3D audio solution, and the challenges facing a potential open standard for 3D audio formats that contain audio objects.
LISTEN TO THE VOICES OF VR PODCAST
Here’s the Kickstarter video that inspired over 10k contributors.
Sound is probably the second most important part of creating a compelling immersive VR experience, but it’s also usually put off to the end as an afterthought. The visual system is so dominant that this is not a huge surprise, but I thought that it’d be worth focusing on trends of immersive sound on the next four episodes of the Voices of VR podcast starting with Dolby Atmos’ solution for creating spatialized sound with audio objects.
Just as Unity and Unreal are able to create 3D sound environments for interactive games, Dolby Atmos has a Pro Tools plug-in with a 3D interface that allows you to mix 3D audio objects for narrative, 360-video content. I had a chance to catch up with Dolby Lab’s Director of Virtual and Augmented Reality Joel Susal to talk about their audio spatialization solution, how it’s being used in both Hollywood blockbusters and cutting-edge narrative VR experiences, and why it’s important to have granular control over individual audio object files rather than just relying upon 4-channel, ambisonic sound fields for 360 video productions.
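To make that distinction concrete: a first-order ambisonic sound field bakes every source into four fixed channels (W, X, Y, Z) at mix time, whereas an object-based format keeps each source’s audio and position separate until playback. Here’s a minimal Python sketch of encoding a mono sample into traditional B-format, the 4-channel ambisonic representation mentioned above:

```python
import math

def encode_first_order_ambisonics(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into 4-channel B-format (W, X, Y, Z).

    Uses the traditional Furse-Malham convention: W carries the
    omnidirectional component scaled by 1/sqrt(2); X, Y, and Z carry
    the front/back, left/right, and up/down directional components.
    """
    a = math.radians(azimuth_deg)
    e = math.radians(elevation_deg)
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(a) * math.cos(e)
    y = sample * math.sin(a) * math.cos(e)
    z = sample * math.sin(e)
    return (w, x, y, z)

# A source directly in front (azimuth 0, elevation 0) lands entirely
# on the W and X channels.
front = encode_first_order_ambisonics(1.0, 0.0, 0.0)
```

An object-based renderer would instead carry the raw sample plus its (azimuth, elevation) metadata all the way to the playback device and spatialize there, which is what gives the per-object granular control Joel describes.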
Josh Farkas has given over 6000 VR demos over the last couple of years, and he’s been in the position of having had to try to explain the potential of VR to many skeptical businesses. That’s in part because his Cubicle Ninjas was primarily a web development and creative agency before becoming an early adopter of VR. They’ve released two virtual reality applications so far: Guided Meditation VR and the augmented reality filter app Spectacle. I had a chance to catch up with Josh at SXSW in March where we talked about using the Gear VR to detect heart rate and provide biometric feedback, releasing the first augmented reality application for Gear VR, and some stories from the frontlines of evangelizing virtual reality.
LISTEN TO THE VOICES OF VR PODCAST
@PalmerLuckey After 6k+ demos: -Review controls -Remind to look 360 -Avoid 2D screens -Clean HMD -Ask fit comfort -Get feedback -Have fun!
The first time that I experienced Sequenced, I had no idea that my gaze directions might have been triggering different branches or small events within the story. Experiencing an interactive drama without knowing it is one of the design challenges that Valve’s Chet Faliszek has previously described to me. But there are many other challenges in balancing interactivity and narrative, which Apelab CEO & Sequenced producer Emilie Joly explained to me at the Silicon Valley Virtual Reality conference in April.
LISTEN TO THE VOICES OF VR PODCAST
Apelab has developed a Spatial Story platform on top of Unity in order to handle this type of gaze-triggered, interactive narrative. Emilie says that some Unity scenes have over 150 different triggers, ranging from subtle, local-agency flavorings of control to decisions that send you off into entirely different scenes.
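Apelab hasn’t published how its triggers work, but the common pattern for this kind of interaction is a dwell-based gaze trigger, which can be sketched in a few lines of Python. The class and callback names here are hypothetical; a real Unity implementation would raycast from the head pose each frame rather than take a hit/no-hit flag:

```python
class GazeTrigger:
    """Fire a branch callback once the viewer's gaze dwells on a target."""

    def __init__(self, on_trigger, dwell_seconds=1.0):
        self.on_trigger = on_trigger
        self.dwell_seconds = dwell_seconds
        self.elapsed = 0.0
        self.fired = False

    def update(self, gazing_at_target, dt):
        """Call once per frame with whether the gaze ray hits the target."""
        if self.fired:
            return
        if gazing_at_target:
            self.elapsed += dt
            if self.elapsed >= self.dwell_seconds:
                self.fired = True
                self.on_trigger()
        else:
            self.elapsed = 0.0  # reset: the dwell must be continuous

branches = []
trigger = GazeTrigger(lambda: branches.append("alternate_scene"))
for _ in range(30):  # 30 frames of 0.05 s (1.5 s) gazing at the target
    trigger.update(True, 0.05)
```

A scene with 150 of these would simply hold a list of triggers, update each one per frame, and let some callbacks flavor the current scene while others load a different one.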
Emilie’s goal was to create an experience that is highly replayable, meaning something that can be experienced again and again with different results. But if players don’t know that their behavior is impacting the experience, then how will they know that they should watch it again? That’s the crux of the design decisions that have to be made in order to let viewers know that they can dynamically interact with the story through their gaze.
Emilie says that they experimented with making it more explicit, but found that it was a better experience to keep it somewhat hidden. And so she’s okay with people watching the experience without ever knowing how much their behavior impacted the version of the story that they experienced.
Another idea that has come up in previous interviews is that perhaps the audience is going to have to collectively learn the best way to watch different types of interactive narrative VR experiences. Sequenced is a good example of an experience that is pushing the boundaries of what’s possible with VR storytelling, and so there’s a good chance that it might be ahead of what the VR audience is ready for. Too much innovation in this space could lead to frustrated reactions from an audience that doesn’t get it. And it may be that only as audiences start to go through experiences like this will they learn the best practices for how to watch them. So this can be a bit of a Catch-22 for cutting-edge projects.
For story-based VR narratives, I’ve anecdotally noticed that non-gamers will tend to sit back, keep their head still, and not really interact with the VR scene. These are the type of first-time VR users where you have to coach them to look around. In order to watch a movie or TV show, we’ve trained our bodies to sit completely still and pay full attention to whatever is happening directly in front of us. VR can try to force us to break out of these patterns, but the audience is still learning the language of interaction that gamers have been cultivating for a long time.
I’ve found that gamers are much more likely to natively know how to explore and watch an interactive experience. They also tend to push the limits of the experience by exploring the bounds of interaction, which can also feel like they’re quality assurance testers trying to break the experience or find edge case bugs.
In hindsight, I think that I might’ve enjoyed and appreciated Sequenced more had I known that it had hidden gaze-based triggers. Road to VR’s Scott Hayden used the phrase “reactive storytelling” to describe these types of hidden triggers.
Perhaps it’s worth having a tutorial for people to show them the extent of how “reactive” of a VR experience it is going to be. Or perhaps just knowing that an experience will be responsive to your behaviors and passive gaze interactions will be enough information for some people. Or perhaps some people will prefer not to know anything about the level of engagement available, and let good design of the experience make that explicitly clear. In the end, I hope we can just rely upon good VR design, but we’re in this awkward transitional period where the audience is still learning how to engage with immersive media while the boundaries of the VR medium are being pushed by companies like Apelab.
Another takeaway is that it sounds like the game engines like Unity and Unreal Engine that are used for creating these types of interactive narratives still need a lot of work before everyone will be able to easily create their own experiences that mimic Sequenced’s sophisticated triggering system.
I expect that eventually a lot of these branching story triggers will not have to be so hard-coded, but that artificial intelligence agents will be able to be more reactive within certain bounds. Mark Riedl and Vadim Bulitko wrote a really great paper titled “Interactive Narrative: An Intelligent Systems Approach” that summarizes over 20 years of research into interactive fiction. And for more information on interactive narrative and AI, be sure to check out my interview about Playabl.AI with Façade’s Andrew Stern.
Sequenced is an ambitious effort, and doing some important work in pushing the boundaries of interactive narrative. Road to VR previously reported that their 10-episode season is due to “arrive on HTC Vive, Oculus Rift, Samsung Gear VR and Google Cardboard starting in early Q4 2016.”