I talk with Shauna Heller from Clay Park VR about her thoughts on non-entertainment and non-gaming VR content in education, medicine, and enterprise applications.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

I had a chance to do a demo of Eyefluence, which has created a new model for eye interactions within virtual and augmented reality apps. Rather than using the normal eye interaction paradigm of dwelling focus or explicitly winking to select, Eyefluence has developed a more comfortable way to trigger discrete actions with a selection system that's driven by natural eye movements. At times it felt magical, as if the technology was almost reading my mind, while at other times it was clear that this is still an early iteration of an emerging visual language that's still being developed and defined.

I had a chance to talk with Jim Marggraff, the CEO and founder of Eyefluence, at TechCrunch Disrupt last week where we talked about the strengths and weaknesses of their eye interaction model, as well as some of the applications that were prototyped within the demo.

LISTEN TO THE VOICES OF VR PODCAST

Eyefluence’s overarching principle is to let the eyes do what the eyes are going to do, and Jim claims that extended use of their system doesn’t result in any measurable eye fatigue. While Jim concedes that most future VR and AR interactions will be a multimodal combination of using our hands, head, eyes, and voice, Eyefluence wants to push the limits of what’s possible by using the eyes alone.

After seeing their demo, I became convinced that there is a place for eye interactions within VR and AR 3D user interfaces. While the eyes are able to accomplish some amazing things on their own, I don't think that most people are going to want to use only their eyes. In some applications, like mobile VR or augmented reality apps, I could see how Eyefluence's eye interaction would work well as the primary or sole interaction mechanism. But it's much more likely that eye interactions will be used to supplement and accelerate selection tasks alongside physical buttons on motion controllers and voice input in immersive applications.
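
As a point of reference, here is a minimal sketch of the conventional dwell-based selection paradigm mentioned above, the approach that Eyefluence is trying to improve upon. This is a hypothetical Python illustration, not Eyefluence's proprietary method; the class name and threshold value are assumptions.

import time

class DwellSelector:
    """Fires a selection once gaze has rested on a target long enough."""

    def __init__(self, threshold_s=0.8):  # typical dwell thresholds are roughly 0.5-1.0s (assumption)
        self.threshold_s = threshold_s
        self.current_target = None
        self.gaze_start = None

    def update(self, target_id):
        """Call every frame with the id under the gaze cursor (or None).
        Returns the target id once the dwell threshold is crossed."""
        now = time.monotonic()
        if target_id != self.current_target:
            # Gaze moved to a new target (or to empty space): restart the timer.
            self.current_target = target_id
            self.gaze_start = now
            return None
        if target_id is not None and now - self.gaze_start >= self.threshold_s:
            self.gaze_start = float("inf")  # fire only once until gaze moves away
            return target_id
        return None

The well-known downside of this scheme is the "Midas touch" problem: everything you stare at long enough gets selected, which is part of the discomfort that Eyefluence's natural-eye-movement approach aims to avoid.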

Here’s an abbreviated version of the demo that I saw with Jim presenting at Augmented World Expo 2016:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip


In 1968, Douglas Engelbart delivered "The Mother of All Demos," which included the first public demonstration of the mouse as a computer input device. For the 48 years since, the mouse and keyboard have remained the primary input devices for human-computer interaction. Virtual and augmented reality represent a new immersive computing paradigm where the equivalent 3D user interfaces are being continually refined amid a burst of innovation in new input devices.

Doug Bowman has been one of the leading researchers in 3DUI as the Director of the Center for Human-Computer Interaction at Virginia Tech, and he is the co-author of the 2004 book "3D User Interfaces: Theory and Practice." The second edition is due to come out in early 2017 and is available in early release.

I had a chance to catch up with Doug at the 2015 3DUI conference, which was co-located with IEEE VR in Arles, France, to talk about the five universal tasks in 3DUI: navigation, selection, manipulation, system control, and text input. We talk about the open problems of 3DUI, the uncanny valley of VR locomotion, and the strengths and weaknesses of academia when it comes to comparing different approaches individually and then within the context of a larger application. I also recount some of the big innovations in input devices since this was originally recorded in the spring of 2015.

LISTEN TO THE VOICES OF VR PODCAST

Here’s the moment when Douglas Engelbart and Bill Paxton publicly demonstrate the mouse for the first time at the 1968 Fall Joint Computer Conference in San Francisco:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

When I was at SIGGRAPH this year, one of the hottest topics was figuring out workflows for dealing with the complexity of capturing, processing, and distributing volumetric digital light fields. Otoy's Jules Urbach had a lot of insights, and a number of high-profile VR agencies like Framestore VR appear to be secretly working on their own processes for how to best capture and display live-action volumetric footage.

I had a chance to talk with a couple of people from Framestore VR about this: Johannes Saam, a senior software developer, and Michael Ralla, a compositing supervisor. They weren't ready to share many specific details just yet beyond the fact that it's a hot topic, and that they're possibly working on their own workflow solutions using insights gathered from deep image compositing.

LISTEN TO THE VOICES OF VR PODCAST

Here's some recent footage from the Lytro Immerge camera that shows how they're compositing 6DoF volume capture inside Nuke:

Here's the final VFX build of "Moon" by Lytro, which blends the live-action 6DoF footage shot with Lytro Immerge on top of the composited environment.

It's unclear whether or not the Lytro light field camera will only be usable within a green screen environment. A previous marketing video carries the disclaimer that "conceptual renderings and simulations are used in the video," so it's unclear whether or not this camera can actually capture this type of live-action footage with objects in the near field while accurately rendering the background parallax for any occluded portions:

What is known is that there are still a lot of open problems with digital light field capture and workflows, and that Framestore VR is one of the production studios actively investigating them.

Johannes and Michael also talked about some of the high-profile ad campaigns that Framestore VR has been a part of, including one for the BMW M2 that played out like a race car shell game, challenging you to keep your eye on the right car as a 360 camera races down a runway. It's received over 5 million views on YouTube, and it's a great introductory experience that helps train people new to VR that they're able to look around.

They also worked with Dr. Walter Greenleaf on an interactive meditative application called Lumen that uses procedurally generated content: you grow trees, harvest blossoms to plant new trees, and grow a forest around you. It is a part of the TIME Life VR initiative that launched today.

Framestore VR also created a Field Trip to Mars as a part of a STEM initiative from Lockheed Martin, replacing all of the windows of a school bus with transparent LCD screens. They created a Mars environment within Unreal Engine and then matched the real-life bus movements with virtual Mars rover movements to create a collective virtual reality experience for a group of school kids.
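
The core technique here, dead-reckoning the bus's measured motion onto the virtual rover each frame, might look something like the sketch below. This is a hypothetical illustration rather than Framestore's actual Unreal Engine implementation; the function, sensor inputs, and scale factor are all assumptions.

import numpy as np

def update_virtual_rover(pos, heading, bus_speed, bus_yaw_rate, dt, scale=1.0):
    """Advance the virtual Mars rover by the bus's measured motion.

    pos:          (x, y) rover position in the virtual terrain
    heading:      rover heading in radians
    bus_speed:    forward speed of the bus (m/s), e.g. from wheel odometry
    bus_yaw_rate: turning rate of the bus (rad/s), e.g. from an IMU gyro
    scale:        optional factor for stretching real motion onto Mars terrain
    """
    heading += bus_yaw_rate * dt
    direction = np.array([np.cos(heading), np.sin(heading)])
    pos = pos + direction * bus_speed * scale * dt
    return pos, heading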

They also produced the Game of Thrones Ascend the Wall VR experience that premiered at SXSW 2014, which was one of the first high-profile advertising campaigns using virtual reality.

Arya of #GameOfThrones screaming while using the #GoT Oculus Rift "Climb the Wall" experience! #SXSW #MashSXSW #BAHatSXSW

A video posted by Brian Anthony Hernandez (@bahjournalist)

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Dungeons & Dragons is a form of collaborative storytelling that isn't constrained by time or budget. Because it all happens within the theater of the mind, if you can imagine it, it can be constructed instantaneously within everyone's imagination. The end result is that each participant is able to express the full extent of their free will to the dungeon master, who either directly controls their fate or delegates it to a roll of the dice. It's the ultimate expression of imagination, improvisation, and storytelling, and it provides a high benchmark and design inspiration for what virtual reality and artificial intelligence can only hope to someday fully replicate within the metaverse.

Chris Perkins is a Dungeons & Dragons story designer as well as the Dungeon Master for the Acquisitions Incorporated podcast. I had a chance to talk with Chris the day after the three-hour season finale show for Acq Inc., which took place in front of a live audience of 2,500 people in the PAX West Main Theater.

LISTEN TO THE VOICES OF VR PODCAST

Chris and I talk about what DnD can teach VR storytelling, designing a DnD story within a traditional three-act structure, the expression of free will in DnD, and how to balance the participation of all of the players while enabling each of them to do something really cool. Chris sees the dynamics of DnD storytelling as fundamentally a social experience, and as such most of the biggest open questions for DnD are shaped more by human interactions than by technological limitations.

Some of the hardest open problems in artificial intelligence have to do with understanding stories, disambiguating pronouns, and comprehending inside jokes, cultural references, and different tones of voice. The dungeon master has to track all of these things and observe the mood and body language of all of the participants to keep them engaged, while at the same time pacing each character through a series of perils. These tasks are sufficiently complicated that having an AI dungeon master successfully guide DnD players through a campaign could serve as a next-generation Turing test.

Chris also hasn't been impressed with any of the VR experiences that he's seen so far because they felt like walking through someone else's mind. With all of the DnD experiences he's had, he'd much rather walk through a VR experience of his own mind. There's Tilt Brush and Oculus Medium, but painting or sculpting in 3D is still nowhere near as fast as the mind's instantaneous ability to construct a scene and story on the fly. Perhaps it will someday be possible if neuroscientists are able to completely decode the brain and unlock the ability to use neural activity to automatically translate our thoughts into virtual objects and full scenes within virtual reality.

Chris is fairly confident that DnD doesn't have too much to fear from technological competitors. It's entirely possible that technology may never be able to fully replicate the capabilities of the human mind as we visualize stories with our mind's eye, so he's skeptical of the capabilities of VR or AI to accurately and synthetically express your own personal "theater of the mind." But he also said that it's inevitable that we're going to try our hardest to do so, because humans and storytelling are inseparable. As history has shown, we're always going to be looking for new ways to reach people through the latest storytelling techniques.

Here’s the YouTube video of the PAX West 2016 Acquisitions Incorporated campaign discussed in this podcast:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

HBO premiered a VR experience for their new Westworld series at TechCrunch Disrupt, and they used Spherica's camera stabilization technology in order to pull off an extended live-action tracking shot in VR. Common advice given to new VR filmmakers is to not even attempt to move the camera, since any shaking or sudden unexpected movement can be a motion sickness trigger. But Spherica has been able to create a stabilization platform, using a GoPro mount and a remote-controlled rover, that can comfortably move a VR camera through a tracking shot.
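
Spherica's stabilization is a mechanical, proprietary rig, but the software side of the same goal (filtering high-frequency jitter out of a camera trajectory so the motion stays comfortable) can be illustrated with a toy exponential moving average. This is only a sketch; the function name, smoothing factor, and data layout are assumptions, not Spherica's actual algorithm.

import numpy as np

def smooth_camera_path(positions, alpha=0.1):
    """Exponentially smooth a recorded camera path.

    positions: (N, 3) array of raw camera positions, one row per frame
    alpha:     smoothing factor in (0, 1]; lower is smoother but laggier
    """
    smoothed = np.empty_like(positions, dtype=float)
    smoothed[0] = positions[0]
    for i in range(1, len(positions)):
        # Blend each raw sample toward the previous smoothed value,
        # suppressing the sudden jolts that can trigger motion sickness.
        smoothed[i] = alpha * positions[i] + (1 - alpha) * smoothed[i - 1]
    return smoothed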

I had a chance to catch up with Spherica's CEO Nikolay Malukhin and managing partner Alina Mikhaleva at TechCrunch Disrupt, where we talked about their rover, drone, and cable camera stabilization solutions; collaborating with HBO on the Westworld VR experience; scaling up their rig to Blackmagic and eventually RED Epic cameras; and some of their upcoming content and hardware projects, including a first-person-perspective helmet mount.

LISTEN TO THE VOICES OF VR PODCAST

You can watch a high-res demo of Spherica's technology in this Immersive Combat demo for the Gear VR, or watch it on YouTube here:

The marketing agency Campfire was responsible for designing the physical Westworld booth experience at TechCrunch Disrupt, which created the feeling that Delos was a real travel agency. The actors running the booth told attendees that they were showing a virtual reality experience featuring one of their destinations, so I had no idea that what I was about to see was really an immersive advertisement taking me into the surreal and dystopian world of a new HBO series starting on October 2nd.

Here are some photos of the booth and the travel brochure they were handing out:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

When Ted Schilowitz was looking for what to do after traveling the world as the first RED Camera employee, he happened upon an opportunity to serve as a futurist for 20th Century Fox, looking at how to use emerging technologies for storytelling. Over the past three years, he's had a lot of early access to hardware from all of the major virtual and augmented reality companies, including Oculus, Valve, Sony, Google, Magic Leap, ODG, and Microsoft.

LISTEN TO THE VOICES OF VR PODCAST

He's been exploring what's possible with VR and AR, and he says that "the abilities of a new medium start to define the demands of a new medium." He's worked on a number of different VR experiments to discover how to best blend narrative and interactivity within the context of these new "spatial mediums." One of the first and most ambitious experiments was a half-hour-long Martian VR experience that was one of the hottest tickets at Sundance. It integrated the D-BOX 4D effects chair and Oculus Touch controllers, and it put you in the first-person perspective of many key scenes from The Martian movie.

I had a chance to catch up with Ted at VRLA, where he told me the story of introducing VR and AR technologies to a lot of Hollywood studio executives and storytellers. He shares some of his favorite interactive narrative experiences, ranging from Pearl to Valve's Aperture Robot Repair to The Gallery, as well as polished interactive experiences like NVIDIA's VR Funhouse and the Valve Lab demos. We talk about the balance between global and local agency in interactive narratives, what can be learned from storytelling in theme park rides, the emerging language of storytelling in VR, and what it takes to become a viable practitioner of these future technologies.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

One of the best VR experiences I tried at PAX West this year was Funomena's Luna, a unique blend of tactile puzzles, creative world building, and musical improvisation within the context of an emotionally charged narrative. Luna is designed by Journey's Robin Hunicke and Martin Middleton, and it is a part of the emerging "deep games" movement that Fast Company describes as games "where players 'win' by becoming more enlightened, empathetic people." It uses the unique affordances of VR with the Oculus Touch controllers to create an experience of imaginative play that is difficult to pin down to an existing genre, but it is bound to introduce VR to a new audience.

LISTEN TO THE VOICES OF VR PODCAST

The story of Luna is about a lost bird who is trying to find its way home. Solving tactile puzzles reveals a totem from the bird's past that you then use to decorate a near-field mini-world and rebuild the path back home. You're then transported into this world that you helped to reimagine, and a chapter of the bird's story unfolds through interactions with different animals. Robin says that each chapter represents a stage of grief, and that the overall experience is about learning to recover from mistakes and heal from trauma.

I had a chance to talk to Robin at PAX West about the design process of creating Luna and how its delightful papercraft aesthetic drew on a variety of inspirations from many different mediums, ranging from playwright Bertolt Brecht, animator Yuriy Norshteyn, print artist Umetaro Azechi, painter Georgia O'Keeffe, and filmmaker Andrei Tarkovsky to sculptors Lee Bontecou, Gabriel Orozco, and Anish Kapoor. We talked about using creative expression to more deeply connect people to the story, and the challenges of exploring deep emotional themes using the most cutting-edge immersive technologies. And we talked about Journey and some of the lessons learned from creating a profoundly cooperative and connecting multiplayer experience within the context of a gaming console.

Here's the teaser trailer for Luna, which gives some sense of the story and art style, but without any of the actual VR gameplay.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

It was on a San Jose sidewalk in 2015 that I first tried the SubPac, and it blew me away. I felt like I was immediately transported into a dance club, standing in front of giant subwoofers with soundwaves of bass rippling through my body, yet no one around me could hear a thing. The magic of the SubPac is that it translates inaudible frequencies lower than 40 Hz into vibrating haptic feedback that provides a much more immersive experience.
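
SubPac's actual signal processing is proprietary, but the general idea described above, isolating the content below roughly 40 Hz and rendering its energy as vibration, can be sketched with a standard low-pass filter. This is a minimal illustration under that assumption; the function name, filter order, and cutoff are mine, not SubPac's.

import numpy as np
from scipy.signal import butter, sosfilt

def sub_bass_envelope(samples, sample_rate, cutoff_hz=40.0):
    """Isolate the sub-bass band and return its amplitude envelope,
    which a tactile transducer could render as vibration."""
    # 4th-order Butterworth low-pass keeping only content below the cutoff.
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return np.abs(sosfilt(sos, samples))

# Example: a 30 Hz tone passes through; a 440 Hz tone is strongly attenuated.
t = np.linspace(0, 1.0, 48000, endpoint=False)
mix = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 440 * t)
envelope = sub_bass_envelope(mix, sample_rate=48000)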

I've seen a lot of different experiences at conferences over the past year that use the SubPac to increase immersion, and I was able to catch up with business developer Zach Jaffe at VRLA to hear about some of their content partnerships and their new S2 backpack. We talk about some of the VR experiences and songs with good low-frequency bass design that specifically take the SubPac into consideration. We also talk about the future of immersive sound design, and his prediction that music labels will want to remix albums to work well within spatialized audio environments.

LISTEN TO THE VOICES OF VR PODCAST

There isn't an open standard for spatialized audio yet, so the music industry is waiting to see which formats emerge. At the moment, a fully immersive VR experience is the best option for getting full audio spatialization, but a yet-to-be-determined standard format for object-oriented or spatialized audio could be used with head-tracked headphones like the Ossic X, as well as with future versions of SubPac devices that incorporate directional bass.

Here are a number of VR experiences and music videos that are SubPac-optimized:

Run The Jewels – Crown (Official VR 360 Music Video)

Grandtheft – Summer In The Winter | 360 VR

Jazz Cartier – Red Alert / 100 Roses (360° Virtual Reality Video)

Moderat – Running (Official Video)

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

OptiTrack premiered a new demo at GDC that shows the extent of their tracking technology's precision. They put passive tracking markers on a basketball and a football, which allowed people to toss a ball back and forth to each other in VR. I had a chance to catch up with OptiTrack's Chief Strategy Officer Brian Nilles at SIGGRAPH, who talked about how OptiTrack is being used as the primary tracking solution within different VR arcade solutions, including The VOID, VRCade, and Holovis. He also talked about OptiTrack being used for motion and facial capture for AAA studios, and for indoor GPS systems for robots and drones. There are a number of yet-to-be-announced VR arcade solutions out there that are pushing the limits of OptiTrack's technology, and Brian gives us an idea of what's possible by saying that he's seen solutions that use as many as 75 HMDs within a space up to 165ft x 120ft.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a video of their Basketball demo that premiered at GDC:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip