SoundSelf is a technologically-mediated, psychedelic experience (aka “technodelic”) that is being released by Andromeda Media on Wednesday, April 22nd. Developer Robin Arnott has been working on it for 8 years now, and he just released The Technodelic Manifesto, which describes his vision for how immersive technologies paired with game design principles can help facilitate the “transformation of the player into deeper and more self-realized modes of consciousness.”

I’ve previously talked with Arnott about his SoundSelf experience and the concept of technodelics in episodes #484 in 2016, #655 in 2018, and #779 in 2019. The release of SoundSelf next week marks a new phase of the genre of consciousness hacking and technodelic experiences that are explicitly being designed for consciousness transformation.

I’m personally really excited to see how this experience is received, and how the broader consciousness hacking movement continues to evolve. I found using SoundSelf over the past couple of weeks to be a wonderful catalyst to get into VR every day, and to start my day with an experience of gamified chanting that produced a mode of being that I found my body craving after the first couple of days. One of Arnott’s explicit goals was to gamify the process of chanting and meditative practice, and he’s done a great job of that. If you find it difficult to stay motivated or to maintain focus, then the gamification of your voice into unpredictable feedback loop cycles is a really compelling catalyst for maintaining engagement with the scaffolding of the practice.

I had a chance to catch up with Arnott to reflect upon the larger context of a pattern and habitual interrupt that the coronavirus pandemic represents, and how this can be an opportunity to create some new habits that can deepen your sense of presence & commitment to a consistent contemplative practice.


This is a listener-supported podcast through the Voices of VR Patreon.

Theme music: “Fatality” by Tigoolio

Rob Morgan is a game writer, narrative designer and voice director. He got into writing for VR experiences through his work at Sony London Studio, and then freelanced with nDreams on their Gear VR game Gunner as well as their conspiracy-theory/moral dilemma adventure game called The Assembly.

Rob brings a very unique perspective about what’s different about writing narratives and telling stories in VR after working on a number of different projects of significant scope & budget across the Morpheus, Gear VR and Oculus DK2. One of Rob’s big takeaways is that there’s a whole layer of social & behavioral interactions that we expect to have with other humans, and so you can’t treat NPCs in a VR experience the same way that you might in a 2D experience. For example, there are social cues that you expect a human to react to based upon where you’re looking, whether you seem like you’re paying attention, or whether you’re threatening other people in some way. There’s a whole range of interaction that we demand and expect to have, and so there’s a lot of interesting nested body language and social cues that, if added within a VR experience, could add another dimension of immersion.

Rob talks about the importance of having other human-like characters within the narrative experience in order to go beyond an interesting 5-minute tech demo, and to start to have an engaging narrative. He says that there’s a distinct lack of human characters in VR demos because it’s hard to not fall into the trap of the uncanny valley. But Rob suggests that one way to get around the lack of visual fidelity within VR is to start to add simple interactive social behaviors in NPCs to create a better sense of immersion.
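The episode doesn’t include any implementation details, but as a purely illustrative sketch (function names and the 15-degree cone threshold are my own assumptions), the kind of gaze-aware NPC behavior Rob describes usually starts with a simple view-cone test:

```python
import math

def is_looking_at(head_pos, head_forward, npc_pos, cone_half_angle_deg=15.0):
    """Return True if npc_pos falls within a view cone around the player's gaze.

    head_forward is assumed to be a unit vector (e.g. taken from the HMD pose).
    """
    to_npc = [npc_pos[i] - head_pos[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in to_npc))
    if dist == 0.0:
        return True  # NPC occupies the same point as the viewer
    to_npc = [c / dist for c in to_npc]
    # Cosine of the angle between the gaze direction and the direction to the NPC
    dot = sum(f * c for f, c in zip(head_forward, to_npc))
    return dot >= math.cos(math.radians(cone_half_angle_deg))
```

An NPC script could poll a check like this each frame and, for example, trigger a glance-back animation once the player has held their gaze for a moment, which is the sort of simple interactive social behavior Rob suggests.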

He also talks about how important voice acting is within VR, because the uncanny valley goes beyond just the look and feel of the graphical representation of humans. Humans are really great at detecting fakeness, and Rob says that this is a vital element of immersion: if the acting is somehow stilted or not completely authentic or believable, the sense of presence suffers.

This was one of my favorite interviews from GDC because Rob lists out so many different interesting open problems and challenges with storytelling in VR. He says that the rules haven’t been written yet, and so there’s a large amount of space to experiment with what works and what doesn’t work.

He eventually sees that there will be a convergence between VR, AR and wearable technology in general, and he’s excited for the possibility of creating a fictional layer of reality for people that they can interact and engage with in a way that’s just as real as the rest of their reality.

Rob presented a talk at GDC 2015 called “Written on your eyeballs: Game narrative in VR,” which can be seen on the GDC Vault here if you have a subscription.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Ikrima Elhassan of Kite & Lightning talks about the process of developing the INSURGENT: Shatter Reality virtual reality experience to promote the Lionsgate film The Divergent Series: INSURGENT. He talks about how the project came about as well as many of the lessons learned along the way.

Ikrima also talks about the two plugins that they developed for the Unreal Engine 4 in order to complete this project:

  • UE4 Stereo 360 Movie Export Plugin – to easily create GearVR ports of their passive desktop experiences. It’ll be available as an open-sourced & free plug-in here soon.
  • Alembic Cache Playback – enables playback of Alembic files in UE4 so that they can import vertex cache animation such as water simulations or rigid motion animation to handle up to 10k fragments.

They wanted to have the widest release possible for the INSURGENT experience, and so it’s available via:

  • A DK2 Version
  • A Gear VR Movie Theater experience to watch the movie trailer and GearVR port of the DK2 Version (via Oculus Home on GearVR)
  • A traveling city tour with a custom designed/built chair from the VR experience with full haptic feedback and 4D components
  • Google Cardboard mobile apps (Android & iOS)

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.


eleVR is a three-person research and design group doing a number of different spherical video experiments as well as developing their own WebVR-compatible web video player. eleVR includes Andrea Hawksley as the developer, Vi Hart as the director, & Emily Eifler as the producer.

eleVR is doing some of the most cutting edge and innovative content experiments that I’ve seen, especially the WebVR integrations that I think are going to be huge.

They also discuss some of their concerns about diversity within Virtual Reality with what they see as a gross underrepresentation of women at the invite-only Oculus Connect developer conference. Emily also talks more about the sexual assault that she experienced at Oculus Connect.

Some of the measures that the larger tech industry has been implementing to help prevent this are things like a Code of Conduct, as well as diversity statements like the one implemented by O’Reilly Media.

I also mentioned the work of Ashe Dryden who has written a couple of really great blog posts about fostering diversity at tech conferences. In this post on “Increasing Diversity at Your Conference” she says:

The easiest way to get feedback on your efforts is to publicly state what you’ve tried and ask for constructive criticism. Be transparent and truthful. I’ve seen many conferences write blog posts about what they’ve done to address the issue of the lack of diversity and the positive or negative results that they ended up with. This is important for a few reasons: it signals that this is important to you and that you are open to more ideas as well as letting people within marginalized groups know that you are considering their needs and the reality of their situations.

Here’s another great excerpt from a post titled “So you want to put on a diverse, inclusive conference”:

How do you advertise that you want to see a diverse community at your conference when you don’t already have one?

  • Admit you have a problem. There is nothing wrong with going to colleagues or to twitter and saying “We want to provide an inclusive, diverse conference experience, but we need help. Can you help us?”
  • Explicitly ask for constructive criticism. Write a blog post on your conference’s site explaining what you have done and ask where you are going wrong or what you might have forgotten. Maybe you didn’t notice that all of the pictures on your conference site are of white people or that the language you use in your CFP is gendered.
  • Be gracious, humble, and kind. It’s hard to hear that you may have misstepped or made a mistake, but it happens to everyone. Before responding to criticism (constructive or not), take some time to examine the truth in it. For best results, ask an unbiased third party to examine the evidence and the criticism and help you understand the problem. Then, humbly apologize and make known the steps you’re taking to correct the situation.

I’d agree that there’s a lot that the VR community can collectively do to help foster more diversity, and I’m really glad to see that Oculus is starting to take steps towards being more deliberate about diversity concerns. They now have a Diversity Lead with Brandi House, and tonight at GDC, Oculus is co-sponsoring a party with Women in Games International. The Eventbrite page says,

VR is still very young, and now is the best time to define what’s possible. To help VR reach its enormous potential, we need a diverse and talented community of developers to make it a fun and engaging experience for everyone. As part of that effort, we want to welcome and support women developers both to the VR community and to the Oculus team.

I’ll be there tonight and look forward to meeting and featuring more women within the VR community.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Jens Christensen is the CEO and co-founder of Jaunt VR, which is a VR startup that has raised over $35 million to develop the toolchain for live-action, 360-degree spherical, 3D video. Jaunt’s technology combines “computational photography, statistical algorithms, massively parallel processing, cutting-edge hardware and virtual reality.”

Jens talks about Jaunt VR’s mission to capture the world around you, and to be able to put that within a VR experience. It requires capturing a ton of video data, and they’ve developed a parallel processing data farm to be able to stitch together a fully spherical video. They’re also developing a method for capturing ambisonic 3D audio.

The Jaunt VR Twitter account has been posting photos covering a wide range of different sporting events, and Jens says that Jaunt is very interested in eventually providing live broadcasts of immersive content. Their primary focus in the beginning is sporting events and music events, as well as experiments with narrative content ranging from a horror story to a story set in World War II. They haven’t released any VR content to the public yet, but are looking forward to releasing some experiences here soon.

Jens said that there’s no full positional tracking yet; at the moment it’s just rotationally tracked, but it’s something they’re still looking into. Storytelling within VR is going to require developing a new language and syntax, and in the beginning they prefer to keep the camera steady to minimize motion sickness.

Jaunt VR is also more interested in leveraging the existing video post-production toolchain for editing, compositing and color correction. It sounds like they’re likely working on some plug-ins or video players to be able to directly integrate and preview VR content while you’re editing it, but Jens didn’t provide any specifics and said that it’s still an open problem. Finally, he sees that VR is going to change a lot of industries, ranging from news to music and travel, and he’s looking forward to playing a small part with the work that Jaunt VR is doing.

Theme music: “Fatality” by Tigoolio

Danfung Dennis talks about the process of making the first commercially available, 360-degree, fully immersive VR cinematic experience called Zero Point, which was released today on Steam. I see Zero Point as a historic VR experience that will likely catalyze a lot of other ideas and experiences within the realm of cinematic VR, and so it’s definitely worth checking out.

Danfung is the CEO of Condition One and actually started looking at 360-degree video back in 2010 when creating some immersive experiences for the iPad and iPhone. He sees Zero Point as a camera test and experiment that shows the evolution of 360-video capture technology, starting from monoscopic 180-degree video and maturing to the point of 360-degree stereoscopic video using RED cameras shooting at 5K and 60 frames per second.

Danfung hopes that Zero Point will be a catalyst for other experiments with 360-degree cinematic experiences that help to develop and cultivate the new syntax, language and grammar for creating immersive VR experiences. Zero Point uses the constructs of a traditional documentary film, yet I was personally left craving something focused more upon putting the physical locations at the center of the story rather than the other way around. Danfung seemed to agree with that sentiment, and said that moving forward what is going to matter more is using the strengths of the VR medium to create a more visceral and emotionally engaging experience where the location, visuals, and audio are all synchronized together.

In terms of the language of VR, Danfung had many insights, ranging from seeing that cuts between drastically different scenes are a lot more comfortable than moving around within the same scene. And while fading to black seems to be a safe way to navigate between scenes and minimize the disorientation from teleporting instantly to a new place, he also sees that it could be overused, and he’s interested in experimenting with longer shots with fewer cuts. He’d also like to iterate more quickly by creating shorter experiences that are long enough to create a sense of presence, but that allow a wider range of focused experiments in order to learn more about the language of cinematic VR, which is still in its very early days of being developed.

There’s also an interesting experiment of blending computer-generated experiences with live-action footage, and Danfung wants to experiment with this more. He believes that when the resolution gets to be high enough, then 360-degree video has the potential to provide a sense of presence more quickly than a computer-generated environment.

Finally, Danfung sees that there’s a tremendous potential for VR to transcend the limitations of the mass media, and to provide a more advanced language for sharing stories that take people into another world, another mind, and another conscious experience. There is a lot of potential to explore more abstract concepts and to use the VR medium for pro-social causes and to help make the world a better place.

Zero Point is now available on Steam here.


  • 0:00 – Introduction
  • 0:35 – Process of innovating and creating 360-degree video. Wanted to create powerful immersive experiences; tried an early prototype of the DK1 and knew that VR was going to be his medium. Started with astronomy photography and used 180-degree fisheye lenses to capture the night sky with a DSLR. Saw limitations to 180-degree lenses and started to add more cameras, and now has an array of RED cameras shooting at 5K and 60 fps.
  • 3:20 – Mix of different video styles and techniques. It’s an evolution of different capture systems and playback. Started with 180-degree mono and kept innovating the technology to the point of a 360-degree sphere in stereoscopic 3D. There is quite a mix throughout the VR experience. Get it out and get feedback; it’s a signpost of what’s to come. It’s a camera test and an experiment.
  • 5:00 – It’s easy to break presence. Seams and black spaces are immediately noticed; they’ll pull people out of the experience and break the sense of presence. A large part of the process was trying to iron out the glitches. You have to have a very seamless image because people will notice the errors. The resolution needs to be higher than the DK2’s.
  • 6:54 – Mixing the form of documentary with being transported to different locations within VR. What is VR film strongest at? Drawing on the techniques of what we knew before; those film techniques don’t really apply to VR. Zero Point is a more traditional documentary approach, but moving forward the core experience is going to matter more. How emotionally visceral an experience is will be what’s most compelling. The scenes and environments will be key. How to best convey emotion with the VR medium is an open question. It’s a new medium and a new visual language that’s being developed.
  • 9:00 – Zero Point as a “stub article” that will catalyze ideas about the strengths of the VR film medium. Designing around the location first. VR will be visuals-first, and then audio; design around that. This is what high-resolution and high-framerate VR looks like. Hopes that Zero Point is a catalyst to start putting out experiments to see what works and what doesn’t work with the medium. What would happen if you put audio first?
  • 11:14 – Length of Zero Point as a VR experience, and optimizing around that. What would the optimal length be? That’s still to be determined. It has to be comfortable. The DK2 is a lot more comfortable than the DK1. You need enough time to really get that sense of presence, at least a couple of minutes. 15 minutes does feel like a long time to be in VR watching a video. He hopes to iterate more quickly on shorter experiences.
  • 12:58 – The introductory sequence is a CGI experience, but most of it is 360-degree video. Interested in blending CG with film; they’re very different. When you’re in video, the brain can be more quickly convinced of being in another place IF the resolution is high enough. CG is more acceptable at the resolution of the DK2. Video resolution is kind of at the level of a DK1 at the moment. Need another iteration of panels at a higher resolution.
  • 14:58 – Cuts and fading to black, and the language of VR for what works and what doesn’t work. It’s a new syntax and new grammar that needs to be developed. Fading to black works, but isn’t satisfying. A long, unbroken, uncut shot is the starting point: it’s not too slow, still dynamic, and without much motion. It’s easier to cut between scenes that are very different than to cut within a similar scene. Longer shots and fewer cuts are where he’d like to go. But it took years for the language of cinema to develop; we’re still in the research phase of what works in cinematic VR experiences.
  • 16:50 – Movement within VR. Had some shots with motion, but also some stationary shots. Trying to push movement and moving through space. Need instant acceleration to make it comfortable. In film, you ease from stationary into movement, but you need to go straight to speed in VR. Head bob is too much when moving and causes motion sickness. Movement needs to be stabilized, e.g. by a three-gimbal drone, and then you get more 3D information. But VR needs new rigs.
  • 19:00 – Experiments with audio. Different techniques were used throughout the film. Put binaural earphones within your ear canals. Had a binaural mic in his ears recording the scene, but there’s no head tracking.
  • 20:32 – Different locations and different experiments with the scenes. The driving force is what is an effective VR experience? Direct engagement? Having things happen around you? Be a character and have an avatar and have some interaction. There will be a wide area of different experiences. People and animals and complex environments is compelling. Directly addressing the camera, and you start to feel like you’re there in real life. If you react as if you’re there and react to social cues, then it’ll get that sense of presence.
  • 22:34 – Power of VR is that it can put you in another conscious experience, put into another mind and another world. It’s a new form of communication and a new form of emotion. Apply this to situations that are abstract like global warming, and potential use it for pro-social goals. We can go beyond the type of content that we’re used to that is mainly controlled by mass media. It could provide a more advanced language for sharing stories in a new way. VR has tremendous potential, and it’s such early days still.
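The comfort guideline from the movement discussion above (no film-style easing; go straight to full speed) can be sketched as a pair of velocity profiles. This is my own illustration of the contrast, not code from Condition One:

```python
def film_speed(t, target, ramp=1.0):
    """Film-style ease-in: smoothly ramp from 0 to `target` over `ramp` seconds."""
    if t >= ramp:
        return target
    u = max(t, 0.0) / ramp
    return target * u * u * (3.0 - 2.0 * u)  # smoothstep easing curve

def vr_speed(t, target):
    """VR comfort guideline from the interview: no easing, full speed immediately."""
    return target if t >= 0.0 else 0.0
```

The difference shows up in the first fraction of a second: the eased profile spends time at intermediate speeds, which is exactly the sustained acceleration that tends to cause motion sickness in VR.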

Zero Point Official Trailer from Condition One on Vimeo.

Theme music: “Fatality” by Tigoolio

AJ Campbell is the founder of VRSFX, and he was inspired to pursue 3D audio for 360-degree virtual reality experiences after seeing Beck’s Hello Again concert experience. He noticed the microphone rig in that experience and decided that he could write software to create a fully 360-degree, binaural experience with the right microphone hardware.

AJ contacted Jeff Anderson of 3DIO Sound and learned that he was working on a Free Space Omni-Binaural Microphone, and AJ became one of the first people to buy one. He has been working on a Unity plug-in that uses the head tracking of the Oculus Rift to cross-fade between the four different binaural audio recordings, which would be perfect for immersive, binaural audio for 360-degree video productions.

He talks about his low-resolution approach of fading between four different analog binaural audio feeds at 90-degree resolution. This is much less processor-intensive than calculating object-oriented, 3D spherical audio, and it gives satisfactory results in a lot of different use cases. He’s still playing with the best crossfade curve to provide a seamless and smooth listening experience when you’re turning your head, but expects to be finished with the Unity plug-in soon. For more information, be sure to sign up for VRSFX’s e-mail list.
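The plug-in itself isn’t shown in the episode, so this is only a rough sketch of the 4-node, 90-degree crossfade approach AJ describes; the function names, and the equal-power option, are my own assumptions:

```python
import math

# Azimuths (degrees) of the four binaural recordings around the listener.
NODE_ANGLES = [0.0, 90.0, 180.0, 270.0]

def crossfade_weights(yaw_degrees, power_preserving=True):
    """Return per-node gain weights for a given head yaw.

    Only the two nodes adjacent to the yaw get non-zero weight. With
    power_preserving=True an equal-power (sin/cos) curve is used, which
    avoids the perceived loudness dip of a plain linear crossfade.
    """
    yaw = yaw_degrees % 360.0
    lower_idx = int(yaw // 90) % 4            # node at or just below the yaw
    upper_idx = (lower_idx + 1) % 4           # next node clockwise
    t = (yaw - NODE_ANGLES[lower_idx]) / 90.0  # 0..1 between the two nodes

    if power_preserving:
        w_lower = math.cos(t * math.pi / 2.0)
        w_upper = math.sin(t * math.pi / 2.0)
    else:
        w_lower, w_upper = 1.0 - t, t

    weights = [0.0, 0.0, 0.0, 0.0]
    weights[lower_idx] = w_lower
    weights[upper_idx] = w_upper
    return weights
```

Since AJ says he’s still experimenting with the crossfade curve, the `power_preserving` flag stands in for that open design question: a linear fade is the simplest option, while an equal-power fade keeps the combined loudness roughly constant as you turn your head.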

Theme music: “Fatality” by Tigoolio

Max Geiger works at Wemo Lab, a content studio in LA that is exploring gaming and simulation in VR as well as panoramic VR capture and the software to make that happen. Wemo Lab has a number of award-winning special effects artists on staff who are helping create various VR experiences.

They’re focused on bringing emotional investment into VR, and Max talks about the spectrum of cinematic VR storytelling ranging from computer-generated to captured material, as well as differing levels of interactivity within each of those. He says that we’re still inventing the language of VR, and that the most surprising applications and interactions for VR haven’t been discovered yet.

Max could neither confirm nor deny that they were collaborating with any specific directors, but being so near to Hollywood it would not be surprising if they were getting interest from the film industry. He also talked about how close-up magic, immersive theater experiences and haunted houses have lessons to teach VR in terms of how to direct and misdirect attention.

Finally, he talks about how he doesn’t like to do too much speculation about VR, either in the short or long term, because of Amara’s Law, which states that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” While there’s a lot of VR hype-train overestimation in the short term, we tend to underestimate the long-term impacts because so many of the changes are so unpredictable.

Reddit discussion here.


  • 0:00 – Intro. Wemo Lab content studio in LA. Showing an immersive ocean simulator called Blue.
  • 0:35 – The Blue was an open platform to contribute fish to various environments.
  • 1:48 – Exploring gaming and simulation in VR, but also exploring panoramic VR capture.
  • 2:15 – Interested in writing software to make it easier for other people to make captured content.
  • 2:36 – Using off-the-shelf solutions at the moment, and investigating other proprietary solutions as well.
  • 2:52 – Looking at the 360Heroes rig with GoPros; Wemo Lab’s Dennis Blakey is a pioneer of stereoscopic video who created a rig with 84 cameras.
  • 3:50 – Presence is the real selling point of VR, and so Frontrow VR can help provide that sense of presence. Getting fooled by close-up magic within VR.
  • 4:50 – Tradeoff vs recreating it in 3D to be more efficient vs video capture. It’s getting easier to store and manipulate large quantities of data.
  • 5:57 – When would it be better to recreate vs. when would you need to capture? You know how much a camera will bias things, and an editor can weave a story out of individual moments. Interested to see what Peter Watkins, who used documentary format to explore fictional stories, would do with VR. Explores how film creates a world and expectations and biases the viewer towards certain things. The map of a film is the territory of the subject.
  • 7:50 – Different interactions within VR and approaches to storytelling. 6-7 different levels of experiences spectrum between completely computer-generated vs. filmed and captured experiences. And adding interactivity to captured experiences. Still inventing the language. The most surprising applications and interactions for VR haven’t been discovered yet.
  • 9:00 – Getting interest from Hollywood directors at Wemo Lab? Neither confirm or deny working with any Hollywood directors.
  • 9:40 – What is Wemo Lab trying to do in VR? World Emotion is the goal. Bringing emotional investment to VR experiences. Combine emotions with physical interactions in VR.
  • 10:30 – Directing attention in VR experiences. Look at first-person games and how they direct attention, but also look at other arts of directing attention like how magicians will direct and misdirect attention. Breaking down the fourth wall in theater has lessons to teach us as well.
  • 11:54 – The Sleep No More immersive theater experience is high-brow, but there are also haunted houses and dark rides, and there are lessons to be learned from them all.
  • 12:38 – 3D audio. Not a lot of great solutions at the moment, but there’s a renaissance in that realm now. Binaural audio is a capture technique, while 3D positional audio is the post-production and mixing process involved.
  • 13:12 – Ultimate potential for VR – Tries not to do too much speculation, and refers to Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Hype-train overestimation in the short term, but underestimating the long-term impacts.

Theme music: “Fatality” by Tigoolio

I’m joined by the Kite and Lightning team including co-founders Cory Strassburger & Ikrima Elhassan as well as developer/VFX artist John Dewar.

They talk about the creative process behind converting the mini-opera song of Senza Peso into a 2D motion graphics film and then into an immersive virtual reality experience, which created some impressive buzz within the VR community.

They also discuss a number of the reasons why they went with Unreal Engine 4 over Unity 3D, and how it enables them to more rapidly prototype the look and feel of their VR experiences, while giving them more control by being able to change the source code. They also talk about the decision to record stereoscopic video of the characters rather than using motion-captured avatars.

Cory also talks about his background working on the sci-fi film Minority Report, and his interest in helping develop 3D user interfaces in VR as demonstrated in The Cave & The K&L Station experience.

Finally, everyone talks about some of the major take-aways and lessons learned from working on all of their VR experiences over the past year, where they see VR going, as well as how many exciting, open questions there are right now.

To keep up with all of the latest developments with Kite and Lightning, be sure to sign up for their newsletter listed at the bottom of their website here.

Reddit discussion here.


  • 0:00 – Intros
  • 0:51 – Backstory behind Senza Peso. Getting a DK1 changed everything. Switching to Unreal Engine
  • 2:56 – Comparing Unreal Engine to Unity, and what UE4 provides
  • 5:25 – Translating the story to a 2D motion graphics film, and then translating it into a cinematic VR experience
  • 9:35 – How they did the character capture with stereoscopic video
  • 11:06 – Programming challenges for creating this cinematic VR experience
  • 12:47 – Visual design considerations & working with the Unreal Engine 4 in contrast to what the workflow would’ve been with Unity.
  • 15:29 – Ikrima’s take-aways from working on this project, and Kite and Lightning’s
  • 17:14 – 3D user interface prototypes in the Cave & insights from working on sci-fi films like Minority Report
  • 21:51 – Other 3DUI interface insights from the VR community including Oliver Kreylos’ Virtual Reality User Interface (Vrui)
  • 25:56 – Tradeoffs between file sizes in using different motion capture techniques
  • 31:38 – Experimenting with experiences that are either on-rails, some triggers, completely open world
  • 35:17 – What type of innovations they’re working on in terms of motion capture and graphics. Optimizing their production pipeline processes.
  • 37:14 – Lessons learned for what works and doesn’t work within VR
  • 44:51 – The ultimate potential for what VR can provide
  • 52:35 – What’s next for Kite and Lightning

Theme music: “Fatality” by Tigoolio

Other related and recommended interviews:

Cosmo Scharf is a film student and co-founder of VRLA. He talks about some of the challenges of VR storytelling and the differences between a free-will VR environment vs. a traditional 2D film. Film has a series of tools to direct attention such as depth of field, camera direction, cues to look left or right, contrasting colors, or movement in the frame. Some of these translate over to VR, but you can’t use all of them since there’s no camera movement, focus or framing in VR.

He talks about the process of starting a meetup in Los Angeles called VRLA, and that a lot of people in the film industry are seeing VR as the future of storytelling and entertainment.

Cosmo also sees VR experiences as a spectrum ranging from completely interactive like a video game, to semi-interactive cinematic experiences, to completely passive. Not a whole lot of people are looking into completely passive experiences yet, but that’s what he’s interested in exploring as a film student.

He strongly believes it’s the future of storytelling, video games, and computing, as well as disseminating information in general, and he’s very excited to get more involved in the VR industry.

Reddit discussion here.


  • 0:00 – Founder of VRLA. Initially interested in VR after hearing about Valve’s involvement in VR. Reading /r/oculus and listening to podcasts, and heard a podcast about starting a meetup.
  • 1:44 – Film industry’s involvement and how many were new to VR? Weeks after the Facebook acquisition, and so there were over 200 people who came out.
  • 2:34 – What type of feedback did you receive? A lot of people in the movie industry are seeing VR as the future of storytelling. Cosmo wants to provide emotionally-engaging experiences.
  • 3:22 – What types of stories are interesting to you? Not a lot of storytelling in VR is happening yet. VR is early. Differences between film and VR. Filmmaking rules and practices have had 100 years to develop. VR is a new medium. How do you effectively tell a story without relying upon the same filmmaking techniques?
  • 4:36 – What are some of the open problems in VR storytelling? How to direct someone’s attention. With filmmaking you can use depth of field, camera direction, cues to look left or right, colors, or movement in the frame. Some of the cues in real life are whether others are looking in a direction and which direction sounds are coming from. Passive vs. interactive VR: completely interactive like a video game, semi-interactive cinematic experiences, and the completely passive. Not a whole lot of people are looking at the third way.
  • 6:22 – Familiar with Nonny de la Pena’s work. She attended VRLA #1. It’s interesting in terms of VR as a tool for building empathy. When you’re placed virtually in someone else’s shoes, you’ll feel what they’re going to feel. You can connect to people via a 2D screen, but you know that there’s a distance. With VR, you’re in it and completely immersed and engaged.
  • 7:31 – What about choose your own adventure vs. a linear film: VR experience similar to Mass Effect where you have different options to say to characters. Your response will change how the story unfolds. Would like to see a natural feedback between you and the AI characters
  • 8:22 – Where would like to see VR going? We’re still very early with VR, no consumer product is out yet and that will determine if VR is a real thing. Strongly believes it’s the future of storytelling, video games, computing & disseminating information in general. HMDs will be portable. Convergence of augmented reality with VR. Hard to determine how the industry will evolve within the next month, but most exciting industry to be a part of. All of the leaders of the consumer VR space are at SVVRCon.

Theme music: “Fatality” by Tigoolio