Rob Morgan is a game writer, narrative designer, and voice director. He got into writing for VR through his work at Sony London Studio, and has since freelanced with nDreams on their Gear VR game Gunner as well as their conspiracy-theory/moral-dilemma adventure game, The Assembly.

Rob brings a unique perspective on what’s different about writing narratives and telling stories in VR after working on a number of projects of significant scope and budget across Morpheus, Gear VR, and the Oculus DK2. One of his big takeaways is that there’s a whole layer of social and behavioral interactions that we expect to have with other humans, and so you can’t treat NPCs in a VR experience the same way that you might in a 2D experience. For example, there are social cues that you expect a human to react to based upon where you’re looking, whether you seem like you’re paying attention, or whether you’re threatening someone in some way. There’s a whole range of interaction that we demand and expect, and so there’s a lot of interesting nested body language and social cues that, if added to a VR experience, could bring another dimension of immersion.

Rob talks about the importance of having other human-like characters within the narrative experience in order to go beyond an interesting 5-minute tech demo and start to build an engaging narrative. He says that there’s a distinct lack of human characters in VR demos because it’s hard not to fall into the trap of the uncanny valley. But Rob suggests that one way around the current lack of visual fidelity in VR is to add simple interactive social behaviors to NPCs to create a better sense of immersion.

He also talks about how important voice acting is within VR, because the uncanny valley goes beyond just the look and feel of the graphical representation of humans. Humans are really good at detecting fakeness, and Rob says that believable performance is a vital element of immersion that breaks down if the acting is stilted or not completely authentic.

This was one of my favorite interviews from GDC because Rob lists out so many different interesting open problems and challenges with storytelling in VR. He says that the rules haven’t been written yet, and so there’s a large amount of space to experiment with what works and what doesn’t work.

He sees an eventual convergence between VR, AR, and wearable technology in general, and he’s excited about the possibility of creating a fictional layer of reality that people can interact and engage with in a way that’s just as real as the rest of their reality.

Rob presented a talk at GDC 2015 called “Written on your eyeballs: Game narrative in VR,” which can be seen on GDC Vault here if you have a subscription.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rod Haxton is the lead software developer for VisiSonics, which created the RealSpace™ 3D Audio technology that Oculus has licensed to put 3D audio into VR.

My experience is that having 3D audio in a VR experience is a huge component of creating a sense of immersion, especially when you’re able to go beyond panning the audio between the left and right channels as you turn your head. With RealSpace™ 3D Audio, they go beyond panning to simulate elevation and whether a sound is in front of or behind you. They process audio in a way that’s analogous to ray-tracing for the ears, modeling reflections off materials and doing calculations based upon Sabine’s reverberation equation.
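For reference, the textbook form of Sabine’s equation relates a room’s reverberation time to its volume and the total absorption of its surfaces (the interview doesn’t detail exactly how VisiSonics applies it, so this is just the standard formula):

$$ RT_{60} \approx \frac{0.161\,V}{\sum_i S_i \alpha_i} $$

Here RT60 is the time in seconds for a sound to decay by 60 dB, V is the room volume in cubic meters, and each Sᵢ and αᵢ are the area (in square meters) and absorption coefficient of a surface material. More absorptive materials mean a shorter reverb tail, which is the kind of material-dependent behavior described above.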

Our ears filter sound in a way that helps us locate sounds in space. Everyone’s ears are different, and VisiSonics can create a specific profile for your ears in what’s called an HRTF, or head-related transfer function.

They have a database of HRTFs, and use a default profile that works pretty well for about 85% of the population. Rod talks about how VisiSonics has patented a fast-capture process for a personalized HRTF where they put speakers in your ears and have an array of microphones in a room. He envisions a future where you’d go into a studio to capture the HRTF data for your ears so that you could have a more realistic 3D audio experience in VR.
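At render time, an HRTF-based engine filters each mono source with the measured left-ear and right-ear responses for the source’s direction. Here’s a minimal, generic sketch of that step in Python — not VisiSonics’ SDK, and using random placeholder impulse responses rather than real measured HRIR data:

```python
import numpy as np

fs = 48000                          # sample rate in Hz
source = np.random.randn(fs)        # 1 second of mono source audio (placeholder)

# Head-related impulse responses (the time-domain form of the HRTF) for one
# direction. A real system would look these up from measured data for the
# source's azimuth/elevation; random values are used here only as stand-ins.
hrir_left = np.random.randn(256)
hrir_right = np.random.randn(256)

# Convolving the mono source with each ear's impulse response produces the
# binaural signal that gets played back over headphones.
left_ear = np.convolve(source, hrir_left)
right_ear = np.convolve(source, hrir_right)
binaural = np.stack([left_ear, right_ear], axis=-1)
```

In practice the filtering is usually done in the frequency domain, and the HRIRs are interpolated and crossfaded as the listener’s head turns, but the basic per-source, per-ear convolution is the same idea.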

Rod also talks about:

  • Special considerations for spatializing audio & a special tool that they’ve developed to evaluate how well a sound will be spatialized.
  • Oculus’ SDK integration of RealSpace™ 3D Audio technology
  • Unity integration & VisiSonics direct integration
  • Options available for 3D audio that are provided by their SDK
  • Maximum number of objects that you could spatialize & what’s a reasonable number
  • Future features planned for the RealSpace™ 3D Audio SDK
  • Unreal Engine support coming soon
  • Originally funded by the DoD to help develop a way for nearly-blinded soldiers to do wayfinding
  • How Tesla is using their panoramic audio cameras to improve the sound profiles of cars
  • How Rod helped get RealSpace 3D audio into a game engine & how they connected with Oculus at GDC 2014
  • How they’ve developed a panoramic audio camera to be able to visualize how sound propagates
  • Good examples of 3D audio integration can be found in Technolust & demos from Unello Design & Bully! Entertainment
  • How poorly-implemented HRTFs have given the technology a bad name over time

This week, VisiSonics announced that Unity 5 integration is now available in their latest v0.9.10 release.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Tobias Baumann is the director of game development for Perception Neuron, an IMU-based motion capture system. It might eventually be used as a VR input device, but Tobias acknowledges that the current price of the full system is a bit steep for it to be used as anything other than a motion capture system at first.

The 32-sensor full-body suit with finger tracking runs $1,499, and the 18-sensor full-body version without fingers has an academic price of $799. A single-hand version is more affordable at $100, and a two-handed option is available for $200.

The Perception Neuron Kickstarter more than doubled its $250k funding goal, and the team is getting ready to ship backer rewards within the next couple of weeks.

In the interview Tobias talks about:

  • The mechanics of how their wireless IMU-tracking system works and how it’s mapped to an inverse kinematic skeletal model
  • A bit of the history of Perception Neuron & how he first got involved
  • Some of the preliminary prototypes and design issues in grabbing virtual objects
  • Space requirements for roaming freely and cautions about Gear VR locomotion issues
  • The production pipeline for using Perception Neuron for motion capture and the Unity plug-in & Perception Access data capture software
  • Rigging considerations for getting Perception Neuron to work with motion retargeting
  • Hopes that it might eventually be cheap enough to be used as a viable VR input device

Finally, Tobias talks a bit about how he first got into VR and how he got involved with the Perception Neuron project.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Anush Elangovan is the Founder and CEO of Nod, which produces the Nod Gesture-Control Ring. The ring provides motion, gesture, and swipe input along with tactile feedback, enabling interactions ranging from tap and double tap to swipe up and swipe down. Anush says that he started Nod because he saw that touchless interactions and gesture controls would bring about a new computing paradigm.

Nod was showing a demo at GDC that used the Nod ring as an input controller in virtual reality. It was a simple and intuitive way to interact within a VR environment, but it also felt like the latency was pretty high. Anush says that the extra latency was likely due to also broadcasting their demo to a screen so that others could see the VR action at GDC, and that their benchmarked latency is around 7.5ms per packet via Bluetooth.

The current price of a Nod ring is around $149, which seems a bit high. But it’s definitely worth keeping an eye on as the VR input controller ecosystem evolves, especially for mobile VR experiences. Nod is integrating with OSVR, and so it could have a pretty straightforward integration with VR experiences through the OSVR SDK.

At GDC Nod also announced the Nod Backspin controller, which combines motion, gesture, touch, analog and haptic feedback. They had an early prototype on display at GDC, but didn’t have any demos with it yet.

Finally, Anush sees that reality and VR will blend into a mixed augmented reality where it may be difficult for our perception to discern what’s real and what’s augmented. In the end, he’d like to see Nod play a part in helping to capture human intention through various gestures and pointing behaviors.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Joel Green is a producer and audio director at Cloudhead Games working on The Gallery: Six Elements adventure game. He talks about the process of being invited to a secret meeting at Valve to experience the HTC Vive for the first time, and then going home with a development kit to create room-scale VR experiences.

Cloudhead Games has been working on consumer VR for a long, long time, and they’ve tried out a number of different hardware solutions. They see the HTC Vive as the first viable solution to the input and tracking problems. Up to this point, it’s been a lot of incremental progress and half-steps, but Joel says that Valve has really nailed this solution and that it will take VR to another level of immersion and presence.

Joel talks about the process of collaborating with the other development shops who were invited by Valve to develop demos for GDC. They were debugging the hardware and software together, because Valve was still developing the underlying SDK while the studios were already building experiences. There was a private forum that the developers used to share lessons learned about developing room-scale VR experiences.

Joel also says that Cloudhead Games intends to keep supporting as many different VR setups as they can, since they know that not everyone is going to be able to have the full room-scale VR experience. He talks a bit about the Lighthouse tracking solution and what’s involved with setting it up. And finally, he says that room-scale VR and this new tracking solution from Valve have the potential to bring a lot more physical interaction into VR, and that he’s excited to continue experimenting and developing more experiences that explore what’s possible.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Patrick O’Luanaigh founded nDreams, based out of Farnborough, UK, in 2006, originally creating content for PlayStation Home. After seeing some of the early consumer virtual reality head-mounted displays, he quickly pivoted to focusing exclusively on creating immersive virtual reality content.

One of the first games that nDreams announced is The Assembly, a sci-fi adventure game that presents the player with a number of different moral dilemmas. They’re not ready to talk too much about it yet, but Patrick did give some of the background story, intention, and motivation behind exploring the medium of virtual reality through the process of making this adventure game. They’re planning on having it be a release title for both Project Morpheus on the PS4 and the Oculus Rift.

Patrick talks about the wide range of different initiatives ranging from developing for the Gear VR to helping AMD make some announcements about Liquid VR at GDC.

Patrick also talks about providing the seed funding for VRFocus.com in order to build excitement for virtual reality by posting very frequently about all of the latest developments and announcements in the VR space. I had the chance to interview VRFocus editor-in-chief Kevin Joyce at the Immersion 2014 gathering in Los Angeles. VRFocus has established itself as the AP of the VR world, posting 6-7 updates per day on all of the latest incremental developments, and Patrick talks about the editorial freedom that he gives to Kevin and the rest of the VRFocus team.

Patrick is certainly a key figure in the consumer VR space, and so I was glad to catch up with him at GDC to hear more about his background and where he thinks the overall virtual reality space is headed. nDreams is focusing on the gaming market at first, but he sees a lot of really exciting opportunities for education in VR.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Danny Woodall is the Creative Director at Sixense, and he talks about how the electromagnetic STEM controllers can help to bring full-body presence into VR. He talks about the SDK improvements that they’ve been making, and some of the updates that were made to the lightsaber demo including updating it to UE4.

I had the opportunity to first try their Lightsaber demo back at Oculus Connect in September, and I had another opportunity to try it again at the VR Mixer at GDC. I often mention this demo as being one of the most immersive experiences that I’ve had in VR because it makes all the difference in the world to be able to track your body within a VR space.

Mel Slater’s research into the virtual body ownership illusion has shown that the minimum requirement for being convinced that a virtual body is your own is a 1:1 correlation between your body movements in real life and what you see in VR. With your hands tracked relative to your head, Sixense has been able to create an IK skeletal model that really feels great.

Sixense also had their STEM controllers in the Tactical Haptics demo, and it made all the difference in the world to be able to track a physical representation of the weapon that you’re holding in real life with the weapon that’s being tracked in VR.

After the recording of this interview, Sixense announced to their Kickstarter backers that they failed the FCC testing. They said:

The reason the Base is failing is specifically due to our design of housing the five radio frequency (RF) dongles inside the Base. The RF dongles require grounding, but this grounding interferes with the electromagnetics (EM) of the tracking.

To address this issue we redesigned the Base electronics to keep the RF dongles located internally but not conflicting with the EM. This will require the production of new Base PCBs and further testing to ensure everything is working properly.

This will push the delivery of the STEM back to at least July, and if they fail testing again, then they’re looking at September to start shipping their units.

The STEM controller has the advantage of working without requiring exact line-of-sight, but there are other potential issues such as EM drift or interference from other electronics. For a more in-depth discussion of some of these potential issues, I highly recommend listening to this interview that I did with Oliver “Doc_Ok” Kreylos at Oculus Connect.

And for more background on Sixense, here’s an interview I did with founder and CEO Amir Rubin back at SVVRCon.

The STEM controllers are something to keep an eye on, especially considering that Danny mentioned in this podcast that they’re adding Gear VR support. If the price point can come down, then it’ll be a valuable contribution to solving the VR input problem, because having your hands in the game with a physical controller and physical buttons opens up all sorts of applications that can create an incredible sense of immersion within virtual reality environments.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Henrik Johansson of Tobii Technology talks about some of their eye tracking technology that’s started to be integrated into different video games like Assassin’s Creed, enabling an “Infinity Screen” user interface for 2D video games.

Eye tracking in VR has a lot of applications, ranging from foveated rendering to dynamically adjusting the focus of the object that you’re currently looking at. At GDC, Henrik says that Tobii was talking about some of their collaborations with virtual reality head-mounted display manufacturers for the first time. He wasn’t specific in terms of who or what exactly they’ve been working on, but said that sometime by the end of June they would be announcing more collaborations and work within both AR and VR. They do already have the Tobii Glasses 2, an AR-style eye tracking product, ready for the consumer market.

Henrik talks about some of the existing implementations of eye tracking in video games, and one of the exciting new game mechanics that becomes possible with eye tracking is giving more realistic, human-like behavior to interactions with NPCs. You can trigger responses or change the behavior of NPCs based upon whether or not you’re looking at them. He also shares some of the other applications and implementations of their eye tracking solutions in different software programs, and a bit more information about their SDK.

Eye tracking in VR is going to be able to add a lot of engagement within social situations and collaboration within VR, and so be sure to keep an eye on Tobii near the end of the second quarter of this year for more information on how they plan on using their technology within the VR and AR space.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Alex Chu is an Interaction Designer at Samsung Research America, and he talks about the process and insights that came from designing the Milk VR 360-degree video player for the Samsung Gear VR powered by Oculus.

He talks about the importance of space when designing virtual reality experiences, which he sees as a blend between real space and digital space. The Gear VR prototype team had been designing for mobile apps, and it took them some time to realize the importance of putting architectural space around the content to help define the surrounding environment. For example, here’s a comparison that shows the difference between designing a gallery for 2D space versus designing a gallery for 3D space in VR:
samsung-gallery-2d-vs-3d

At the Samsung Developer Conference, Alex gave a talk titled “VR Design: Transitioning from a 2D to a 3D Design Paradigm.” He gave an abbreviated version of this talk at the Immersive Technology Alliance meeting at GDC, but you can see the full presentation slides here.

Alex showed a number of different ergonomic graphics showing how users engage with content and giving metrics for comfortable viewing ranges. They also researched field of view in order to help determine how to size their elements, as well as using peripheral vision to help guide the user’s eyes. Finally, they looked at depth cues and the range of visual disparity at different distances to see how the strength of stereoscopic cues varies with depth (a rough back-of-the-envelope version of that relationship is sketched after the slides below). Here are a number of slides with more info:
samsung-layout

samsung-3d-feeling

samsung-depth-perception

samsung-spheres

samsung-sphere-depths
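As a rough illustration of why stereoscopic depth cues weaken with distance (this is a standard geometric approximation, not a figure from Alex’s slides): for two objects at distances $d$ and $d + \Delta d$, the relative angular disparity between the eyes is approximately

$$ \delta \approx \frac{a\,\Delta d}{d^{2}} $$

where $a$ is the interpupillary distance (roughly 0.065 m). A 10 cm depth difference at 1 m gives about 0.37° of disparity, while the same difference at 10 m gives only about 0.004°, which is why stereo depth cues matter most for nearby UI elements.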

He also talks about the user interaction options for the Gear VR, and why they decided that a physical tap interaction paradigm was more reactive than a gaze-and-wait type of interaction.

Some of the advice that Alex gives about user testing is to make your test meaningful by focusing on something very specific, and to eliminate all of the other variables that you’re not trying to test.

One of the biggest open problems with designing for VR is that there’s a huge range of how people react to VR experiences, and it’ll take some time for everyone to learn more about these differences and share data about how to best design for everyone.

Finally, Alex sees that the sky is the limit for designing for VR right now, and that he’s really looking forward to seeing how using architectural representations in VR will change the process of designing physical spaces.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

You can watch Alex’s full presentation from the Samsung Developer conference here:

Here’s a YouTube playlist of all of the VR-related presentations from the Samsung Developer Conference.

Julie Heyde & Mariam Zakarian talk about the process of bringing the stories of Norse mythology to life with their RagnarökVR experience. Julie talks about going from hacking in Unity3d by herself to doing a series of game jams to eventually recruiting a total of a dozen people from the indie gaming scene.

Julie talks about the inspiration coming from facing one of her childhood fears of wolves, but also from exploring some of the Norse myths that she grew up with. They talk about the process of crafting an exploration-driven horror experience, and how they’re going beyond using jump scares.

A big motivation for Julie is to create the type of virtual reality experiences that she wants to lose herself in. Their team has been taking a very iterative, game-jam approach of rapidly prototyping ideas and then getting a lot of user feedback by showing the experience frequently at various public gatherings.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.