Jens Christensen is the CEO and co-founder of Jaunt VR, a VR startup that has raised over $35 million to develop the toolchain for live-action, 360-degree spherical, 3D video. Jaunt’s technology combines “computational photography, statistical algorithms, massively parallel processing, cutting-edge hardware and virtual reality.”

Jens talks about Jaunt VR’s mission to capture the world around you and put it within a VR experience. It requires capturing a ton of video data, and they’ve developed a parallel-processing data-farm pipeline to stitch together fully spherical video. They’re also developing a method for capturing ambisonic 3D audio.
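Jaunt hasn’t published the details of its stitching pipeline, but at the heart of any spherical stitch is a projection that maps 3D view directions from the camera rig onto a flat equirectangular frame. Here’s a minimal sketch of that standard mapping; the function name and frame size are illustrative, not anything from Jaunt:

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a (non-zero) 3D view direction to (u, v) pixel coordinates
    in an equirectangular panorama covering 360 x 180 degrees."""
    # longitude (yaw) around the vertical axis, in -pi..pi
    lon = math.atan2(x, z)
    # latitude (pitch) above/below the horizon, in -pi/2..pi/2
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    # normalize into pixel coordinates: u wraps horizontally, v runs top to bottom
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# straight ahead lands in the center of the frame
print(dir_to_equirect(0, 0, 1, 3600, 1800))
```

Stitching then amounts to filling every (u, v) of the output frame with the pixel from whichever physical camera covers that direction, blending where cameras overlap, which is why the per-frame workload parallelizes so well across a data farm.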

The Jaunt VR Twitter account has been posting photos covering a wide range of sporting events, and Jens says that Jaunt is very interested in eventually providing live broadcasts of immersive content. Their primary focus in the beginning is sports and music events, along with experiments in narrative content ranging from a horror story to a story set in World War II. They haven’t released any VR content to the public yet, but are looking forward to releasing some experiences soon.

Jens said that there is no full positional tracking yet; at the moment their content is only rotationally tracked, but it’s something they’re still looking into. Storytelling within VR is going to require developing a new language and syntax, and in the beginning they prefer to keep the camera steady to minimize motion sickness.

Jaunt VR is more interested in leveraging the existing video post-production toolchain for editing, compositing and color correction. It sounds like they’re likely working on plug-ins or video players to directly integrate and preview VR content while you’re editing it, but Jens didn’t provide any specifics and said that it’s still an open problem. Finally, he sees that VR is going to change a lot of industries, ranging from news and music to travel, and he’s looking forward to playing a small part with the work that Jaunt VR is doing.

Theme music: “Fatality” by Tigoolio

Danfung Dennis talks about the process of making the first commercially available, 360-degree, fully immersive VR cinematic experience called Zero Point, which was released today on Steam. I see Zero Point as a historic VR experience that will likely catalyze a lot of other ideas and experiences within the realm of cinematic VR, and so it’s definitely worth checking out.

Danfung is the CEO of Condition One and started looking at 360-degree video back in 2010 when creating immersive experiences for the iPad and iPhone. He sees Zero Point as a camera test and experiment that shows the evolution of 360-degree video capture technology, starting from monoscopic 180-degree video and maturing to the point of 360-degree stereoscopic video using RED cameras shooting at 5K and 60 frames per second.

Danfung hopes that Zero Point will be a catalyst for other experiments with 360-degree cinematic experiences that help develop and cultivate the new syntax, language and grammar for creating immersive VR experiences. Zero Point uses the constructs of a traditional documentary film, yet I was personally left craving something focused more upon putting the physical locations at the center of the story rather than the other way around. Danfung seemed to agree with that sentiment, and said that moving forward what is going to matter more is using the strengths of the VR medium to create a more visceral and emotionally engaging experience where the location, visuals, and audio are all synchronized together.

In terms of the language of VR, Danfung had many insights, ranging from seeing that cuts between drastically different scenes are a lot more comfortable than moving around within the same scene. And while fading to black seems to be a safe way to navigate between scenes and minimize the disorientation from teleporting instantly to a new place, he also sees that it could be overused, and he’s interested in experimenting with longer shots with few cuts. He’d also like to iterate more quickly by creating shorter experiences that are long enough to create a sense of presence, but that allow a wider range of focused experiments in order to learn more about the language of cinematic VR, which is still in its very early days of being developed.

There’s also an interesting experiment in blending computer-generated content with live-action footage, and Danfung wants to explore this more. He believes that when the resolution gets to be high enough, 360-degree video has the potential to provide a sense of presence more quickly than a computer-generated environment.

Finally, Danfung sees that there’s tremendous potential for VR to transcend the limitations of mass media, and to provide a more advanced language for sharing stories that take people into another world, another mind, and another conscious experience. There is a lot of potential to explore more abstract concepts and to use the VR medium for pro-social causes and to help make the world a better place.

Zero Point is now available on Steam here.


  • 0:00 – Introduction
  • 0:35 – Process of innovating and creating 360-degree video. Wanted to create powerful immersive experiences; tried an early prototype of the DK1 and knew that VR was going to be his medium. Started with astronomy photography, using 180-degree fisheye lenses to capture the night sky with a DSLR. Saw limitations of 180-degree lenses, started to add more cameras, and now has an array of RED cameras shooting at 5K and 60 fps.
  • 3:20 – Mix of different video styles and techniques. It’s an evolution of different capture systems and playback. Started with 180 degrees and mono, and kept innovating the technology to the point of a 360-degree sphere in stereoscopic 3D. There is quite a mix throughout the VR experience. Get it out and get feedback; it’s a signpost of what’s to come. It’s a camera test and an experiment.
  • 5:00 – It’s easy to break presence. Seams and black spaces are immediately noticed; they pull people out of the experience and break the sense of presence. A large part of the process was trying to iron out the glitches. You have to have a very seamless image because people will notice the errors. The resolution needs to be higher than the DK2 to take
  • 6:54 – Mixing the form of documentary with being transported to different locations within VR. What is VR film strongest for? Drawing on the techniques we knew before. Those film techniques don’t really apply to VR. Zero Point is a more traditional documentary approach, but moving forward the core experience is going to matter more. How emotionally visceral an experience is will be what’s most compelling. The scenes and environments will be key. How to best convey emotion with the VR medium is an open question. It’s a new medium and a new visual language that’s being developed.
  • 9:00 – Zero Point as a “stub article” that will catalyze ideas for what the strengths of the VR film medium are. Designing around the location first. VR will be visuals-first, and then audio. Design around that. This is what high-resolution and high-framerate VR looks like. Hopes that Zero Point is a catalyst to start putting out experiments to see what works and what doesn’t work with the medium. What would happen if you put audio first?
  • 11:14 – Length of Zero Point as a VR experience, and optimizing around that. What would the optimal length be? It’s still to be determined. It has to be comfortable. The DK2 is a lot more comfortable than the DK1. You need enough time to really get that sense of presence, at least a couple of minutes. 15 minutes does feel like a long time to be in VR watching a video. Hopes to iterate more quickly on shorter experiences.
  • 12:58 – The introductory sequence is a CGI experience, but most of it is 360-degree video. Interested in blending CG with film. They’re very different. With video, the brain can be more quickly convinced of being in another place IF the resolution is high enough. CG is more acceptable at the resolution of the DK2. Video resolution is roughly at the level of a DK1 at the moment. Need another iteration of panels at a higher resolution.
  • 14:58 – Cuts and fading to black, and the language of VR for what works and what doesn’t. It’s a new syntax and new grammar that needs to be developed. Fading to black works, but isn’t satisfying. Long, unbroken, uncut shots are the starting point. They’re not too slow, still dynamic, and without much motion. It’s easier to cut between scenes that are very different than between similar scenes. Longer shots and fewer cuts is where he’d like to go. But it took years for the language of cinema to develop. Still in the research phase of what works in cinematic VR experiences.
  • 16:50 – Movement within VR. Had some shots with motion, but also some stationary shots. Trying to push movement and moving through space. Need instant acceleration to make it comfortable. In film you ease from stationary into movement, but in VR you need to go straight to speed. Head bob is too much when moving and causes motion sickness. Movement needs to be stabilized, e.g. by a three-gimbal drone, and you get more 3D information. But VR needs new rigs.
  • 19:00 – Experiments with audio. Different techniques used throughout the film. Put binaural earphones within your ear canals. Had a binaural mic in his ears recording the scene, but there’s no head tracking.
  • 20:32 – Different locations and different experiments with the scenes. The driving force is: what is an effective VR experience? Direct engagement? Having things happen around you? Being a character with an avatar and some interaction? There will be a wide range of different experiences. People, animals and complex environments are compelling. When someone directly addresses the camera, you start to feel like you’re there in real life. If you react as if you’re there and respond to social cues, then you’ll get that sense of presence.
  • 22:34 – Power of VR is that it can put you in another conscious experience, put into another mind and another world. It’s a new form of communication and a new form of emotion. Apply this to situations that are abstract like global warming, and potential use it for pro-social goals. We can go beyond the type of content that we’re used to that is mainly controlled by mass media. It could provide a more advanced language for sharing stories in a new way. VR has tremendous potential, and it’s such early days still.

Zero Point Official Trailer from Condition One on Vimeo.

Theme music: “Fatality” by Tigoolio

Max Geiger works at Wemo Lab, a content studio in LA that is exploring gaming and simulation in VR, as well as panoramic VR capture and the software to make that happen. They have a number of award-winning special effects artists on staff who are helping create various VR experiences.

They’re focused on bringing emotional investment into VR, and Max talks about the spectrum of cinematic VR storytelling ranging from computer-generated to captured material, as well as differing levels of interactivity within each of those. He says that we’re still inventing the language of VR, and that the most surprising applications and interactions for VR haven’t been discovered yet.

Max could neither confirm nor deny that they were collaborating with any specific directors, but being so near to Hollywood it would not be surprising if they were getting interest from the film industry. He also talked about how close-up magic, immersive theater experiences and haunted houses have lessons to teach VR in terms of how to direct and misdirect attention.

Finally, he talks about how he doesn’t like to do too much speculation about VR in either the short or long term because of Amara’s Law, which states that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” While there’s a lot of VR hype-train overestimation in the short term, we tend to underestimate the long-term impacts because so many of the changes are so unpredictable.

Reddit discussion here.


  • 0:00 – Intro. Wemo Lab content studio in LA. Showing an immersive ocean simulator called Blue.
  • 0:35 – Blue was an open platform for contributing fish to various environments.
  • 1:48 – Exploring gaming and simulation in VR, but also exploring panoramic VR capture.
  • 2:15 – Interested in writing software to make it easier for other people to make captured experiences.
  • 2:36 – Using off-the-shelf solutions at the moment, and investigating other proprietary solutions as well.
  • 2:52 – Looking at a 360Heroes rig with GoPros. Wemo Lab’s Dennis Blakey is a pioneer of stereoscopic video who created a rig with 84 cameras.
  • 3:50 – Presence is the real selling point of VR, and so Frontrow VR can help provide that sense of presence. Getting fooled by close-up magic within VR.
  • 4:50 – Tradeoffs between recreating a scene in 3D, which can be more efficient, vs. video capture. It’s getting easier to store and manipulate large quantities of data.
  • 5:57 – When would it be better to recreate a scene vs. capture it? You know how much a camera will bias things, and an editor can weave a story out of individual moments. Interested to see what Peter Watkins, who used the documentary format to explore fictional stories, would do with VR. Explores how a film creates a world and expectations and biases the viewer towards certain things. The map of a film is the territory of the subject.
  • 7:50 – Different interactions within VR and approaches to storytelling. A spectrum of 6-7 different levels of experiences between completely computer-generated and filmed/captured experiences, plus adding interactivity to captured experiences. Still inventing the language. The most surprising applications and interactions for VR haven’t been discovered yet.
  • 9:00 – Getting interest from Hollywood directors at Wemo Lab? He would neither confirm nor deny working with any Hollywood directors.
  • 9:40 – What is Wemo Lab trying to do in VR? World Emotion is the goal. Bringing emotional investment to VR experiences. Combine emotions with physical interactions in VR.
  • 10:30 – Directing attention in VR experiences. Look at first-person games and how they direct attention, but also look at other arts of directing attention like how magicians will direct and misdirect attention. Breaking down the fourth wall in theater has lessons to teach us as well.
  • 11:54 – The Sleep No More immersive theater experience is high-brow, but there’s also the haunted house or the dark ride, and there are lessons to be learned from them all.
  • 12:38 – 3D audio. Not a lot of great solutions at the moment, but there’s a renaissance in that realm now. Binaural audio is a capture technique, while 3D positional audio is the post-production and mixing process involved.
  • 13:12 – Ultimate potential for VR – Tries not to do too much speculation, and refers to Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Hype-train overestimation in the short term, but underestimating the long-term impacts.
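The binaural-vs-positional distinction Max draws at 12:38 can be made concrete with the simplest possible positional cue: panning a mono source between the two ears based on its azimuth. This is only a constant-power pan sketch, not a real HRTF-based 3D audio engine (which would also model head-related filtering and inter-aural time delays); the function name is illustrative:

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power pan: left/right gains for a mono source at the
    given azimuth (degrees; 0 = straight ahead, +90 = hard right,
    -90 = hard left). A crude level-difference cue only."""
    # clamp azimuth to the frontal arc and map to a pan position in [-1, 1]
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    # sweep an angle from 0 (hard left) to pi/2 (hard right)
    angle = (pan + 1.0) * math.pi / 4.0
    # cos/sin pair keeps total power (l^2 + r^2) constant at 1
    return math.cos(angle), math.sin(angle)
```

In a head-tracked mix, the azimuth would be recomputed every frame from the listener’s head orientation, which is exactly what a fixed binaural recording cannot do.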

Theme music: “Fatality” by Tigoolio

I’m joined by the Kite and Lightning team including co-founders Cory Strassburger & Ikrima Elhassan as well as developer/VFX artist John Dewar.

They talk about the creative process behind converting the mini-opera song Senza Peso into a 2D motion graphics film and then into an immersive virtual reality experience, which created some impressive buzz within the VR community.

They also discuss a number of the reasons why they went with Unreal Engine 4 over Unity 3D, and how it enables them to more rapidly prototype the look and feel of their VR experiences while gaining more control by being able to change the source code. They also talk about the decision to record stereoscopic video of the characters rather than using motion-captured avatars.

Cory also talks about his background working on the sci-fi film Minority Report, and his interest in helping develop 3D user interfaces in VR, as demonstrated in The Cave & The K&L Station experience.

Finally, everyone talks about some of the major take-aways and lessons learned from working on all of their VR experiences over the past year, where they see VR going, as well as how many exciting, open questions there are right now.

To keep up with all of the latest developments with Kite and Lightning, be sure to sign up for their newsletter, listed at the bottom of their website here.

Reddit discussion here.


  • 0:00 – Intros
  • 0:51 – Backstory behind Senza Peso. Getting a DK1 changed everything. Switching to Unreal Engine
  • 2:56 – Comparing Unreal Engine to Unity, and what UE4 provides
  • 5:25 – Translating the story to a 2D motion graphics film, and then translating it into a cinematic VR experience
  • 9:35 – How they did the character capture with stereoscopic video
  • 11:06 – Programming challenges for creating this cinematic VR experience
  • 12:47 – Visual design considerations & working with the Unreal Engine 4 in contrast to what the workflow would’ve been with Unity.
  • 15:29 – Ikrima’s take-aways from working on this project, and Kite and Lightning’s
  • 17:14 – 3D user interface prototypes in the Cave & insights from working on sci-fi films like Minority Report
  • 21:51 – Other 3DUI interface insights from the VR community including Oliver Kreylos’ Virtual Reality User Interface (Vrui)
  • 25:56 – Tradeoffs between file sizes in using different motion capture techniques
  • 31:38 – Experimenting with experiences that are either on-rails, some triggers, completely open world
  • 35:17 – What type of innovations they’re working on in terms of motion capture and graphics. Optimizing their production pipeline processes.
  • 37:14 – Lessons learned for what works and doesn’t work within VR
  • 44:51 – The ultimate potential for what VR can provide
  • 52:35 – What’s next for Kite and Lightning

Theme music: “Fatality” by Tigoolio

Other related and recommended interviews:

Cosmo Scharf is a film student and co-founder of VRLA. He talks about some of the challenges of VR storytelling and the differences between a free-will VR environment vs. a traditional 2D film. Film has a series of tools to direct attention, such as depth of field, camera direction, cues to look left or right, and contrasting colors or movement in the frame. Some of these translate over to VR, but you can’t use all of them since there’s no camera movement, focus or framing in VR.

He talks about the process of starting a meetup in Los Angeles called VRLA, and how a lot of people in the film industry are seeing VR as the future of storytelling and entertainment.

Cosmo also sees VR experiences as a spectrum ranging from the completely interactive, like a video game, to semi-interactive cinematic experiences, to the completely passive. Not a whole lot of people are looking into the completely passive experiences yet, but that’s what he’s interested in exploring as a film student.

He strongly believes it’s the future of storytelling, video games, and computing, as well as disseminating information in general, and he’s very excited to get more involved in the VR industry.

Reddit discussion here.


  • 0:00 – Co-founder of VRLA. Initially interested in VR after hearing about Valve’s involvement. Reading /r/oculus and listening to podcasts, and heard a podcast about starting a meetup.
  • 1:44 – Film industry’s involvement, and how many were new to VR? It was weeks after the Facebook acquisition, so there were over 200 people who came out.
  • 2:34 – What type of feedback did you receive? A lot of people in the movie industry are seeing VR as the future of storytelling. Cosmo wants to provide emotionally-engaging experiences.
  • 3:22 – What types of stories are interesting to you? Not a lot of storytelling in VR happening yet. VR is early. Differences between film and VR. Filmmaking rules and practices have developed around a fixed frame for 100 years. VR is a new medium. How do you effectively tell a story without relying upon the same filmmaking techniques?
  • 4:36 – What are some of the open problems in VR storytelling? How to direct someone’s attention. With filmmaking you can use depth of field, camera direction, cues to look left or right, and colors or movement in the frame. Some of the cues in real life are whether others are looking in a direction and which direction sounds are coming from. Passive vs. interactive VR: completely interactive like a video game, a semi-interactive cinematic experience, and the completely passive. Not a whole lot of people are looking at the third way.
  • 6:22 – Familiar with Nonny de la Pena’s work. She attended VRLA #1. It’s interesting in terms of VR as a tool for building empathy. When you’re placed virtually in someone else’s shoes, you’ll feel what they’re going to feel. You can connect to people via a 2D screen, but you know that there’s a distance. With VR, you’re in it and completely immersed and engaged.
  • 7:31 – What about choose-your-own-adventure vs. a linear film? A VR experience similar to Mass Effect, where you have different options of what to say to characters. Your response will change how the story unfolds. Would like to see natural feedback between you and the AI characters.
  • 8:22 – Where would he like to see VR go? We’re still very early with VR; no consumer product is out yet, and that will determine if VR is a real thing. Strongly believes it’s the future of storytelling, video games, computing & disseminating information in general. HMDs will be portable. Convergence of augmented reality with VR. Hard to determine how the industry will evolve even within the next month, but it’s the most exciting industry to be a part of. All of the leaders of the consumer VR space are at SVVRCon.

Theme music: “Fatality” by Tigoolio

This interview with Nonny de la Peña was by far my favorite discussion from the Silicon Valley Virtual Reality Conference & Expo. It’s moving to hear about the type of emotional reactions that she’s receiving from her human rights-centered, immersive journalism pieces that are experienced within virtual reality. She has some amazing insights into VR storytelling, virtual identity, and the importance of bringing in more diversity into VR.


Nonny has been working on VR storytelling since creating a virtual Guantanamo Bay prison cell in Second Life in 2009. She started working with VR HMDs before the Oculus Rift existed, and in fact was a part of USC’s Institute for Creative Technologies when Palmer Luckey was there. Luckey even provided Nonny with a pre-Oculus Rift HMD for her 2012 Sundance showing of “Hunger in LA.”

She’s also worked with Mel Slater, who has explored a lot of interesting effects of Positive Illusions of Self in Immersive Virtual Reality.

Nonny has a ton of insights on the components of creating a compelling VR experience, from starting with great audio to creating believable virtual humans. I also found her vision of a tiered VR future, ranging from untethered IMAX-like experiences to the Oculus Rift home experience and finally mobile VR, to be a compelling distinction for the different levels of VR immersion and associated technologies.

For more information on the service that Nonny uses to create her virtual humans, be sure to check out this interview with the founder of Mixamo.

Reddit discussion is here.


  • 0:00 – Intro to Immersive Journalism & how it got started
  • 1:29 – Recreating a scene of a Guantanamo Bay prison cell in Second Life
  • 3:30 – Taking control of somebody’s Second Life avatar, and the type of reactions of going through a virtual re-enactment of being a Guantanamo Bay prisoner
  • 4:29 – How people identified with their avatar being bound
  • 5:14 – What were some of your first immersive journalism stories that used a fully immersive, virtual reality head mounted display? Identifying with a VR body in a stress position
  • 7:12 – Institute for Creative Technologies, Mark Bolas, and her connection to Palmer Luckey
  • 8:02 – Immersive VR piece on “Hunger in Los Angeles” & starting with audio
  • 9:20 – Palmer Luckey & pre-Oculus Rift, VR HMD prototype for Sundance January 2012, and audience reactions
  • 11:42 – Commissioned VR piece on Syrian refugees shown at the World Economic Forum
  • 13:21 – Witnessing a border patrol tasing death
  • 13:56 – Next projects and the potential of immersive storytelling
  • 15:20 – What are some key components of storytelling within an immersive VR environment?
  • 17:32 – Why is the reaction of empathy so much stronger in immersive VR?
  • 18:38 – What are the risks of putting people into a traumatic VR scene and triggering PTSD?
  • 19:47 – How do you direct attention within an immersive VR story?
  • 20:55 – Are your immersive journalism pieces interactive at all?
  • 21:30 – How else are people using this immersive VR medium to tell unique stories?
  • 22:47 – What type of software and hardware are you using for your virtual humans in your immersive VR pieces?
  • 21:15 – Being the only woman panelist at SVVR, and the importance of diversity to VR’s resurgence.
  • 26:36 – Bringing more diversity into VR storytelling
  • 28:19 – The tiers of VR experiences of IMAX, home and mobile.
  • 29:20 – Location-based, untethered VR experiences being equivalent to going to an IMAX movie.

Theme music: “Fatality” by Tigoolio