Miyubi is Felix & Paul Studios’ first scripted narrative piece, a 40-minute experience that transports you to the early 1980s as you embody an imminently obsolete toy robot. Felix & Paul instructed individual actors to unexpectedly disrupt a scene with unscripted behavior in order to preserve the improv quality of their long takes. There is minimal editing within individual scenes, which cultivates a deeper sense of presence and allows the viewer to pay attention to many different aspects of the unfolding group dynamics. I was struck by how you could watch reaction shots of characters who weren’t directly participating in the primary conversation, and by how VR is able to capture the full family constellation and the relationships between characters in a way that the medium of film completely deconstructs and controls.

I had a chance to catch up with Felix & Paul Studios co-founders Félix Lajeunesse and Paul Raphaël as well as Chief Content Officer Ryan Horrigan back at Sundance 2017 to talk about their process of directing Miyubi, as well as some of their insights and lessons learned from creating a 40-minute VR narrative.

LISTEN TO THE VOICES OF VR PODCAST

This is a listener-supported podcast; consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality


Support Voices of VR

Music: Fatality & Summer Trip

Punchdrunk’s immersive theater piece Sleep No More opened in March 2011, and it has inspired countless experiential designers of immersive theater and virtual and augmented reality experiences. It was a paradigm shift that broke the fourth wall and encouraged audience members to become fully immersed within the experience. Rather than sit in seats watching actors on a stage, audience members could explore an open world of 100 rooms across five floors, and make choices as to which actors to follow through the space or which parts of the elaborate set design to inspect. Sleep No More has been a catalyst for other types of experiential entertainment, including escape rooms, site-specific performances, immersive and interactive art museums, and the digitally mediated versions of these experiences in virtual and augmented reality.

I had a chance to catch up with Punchdrunk creative producer Colin Nightingale at the Immersive Design Summit on January 6th, where we talked about his journey into becoming an experiential designer for immersive theater, how immersive theater is being used for education and medical purposes, and the latest Punchdrunk productions, which are all focused on blurring the line of what is real.


The Immersive Design Summit brought together over 150 of the top experiential designers from around the world, mostly from the performance community of immersive theater productions and escape rooms, but also a small number of people focusing on VR/AR. The future of storytelling is going to be immersive and interactive, and immersive theater and escape rooms are pushing the boundaries of physical interaction and interactive storytelling with live actors. The escape room genre probably translates most seamlessly, with patterns that cross over between physical and virtualized environments. The most popular VR escape room is I Expect You To Die, which I covered previously in episodes #223 and #482.

Laura E Hall of 60 Minutes to Escape presented her best practices for designing escape rooms and immersive environments at the Immersive Design Summit. I had a chance to talk with her about the unique affordances and differences between physical and digital escape rooms, as well as her broad range of experiential design inspirations, ranging from multi-user dungeons and narrative-driven indie games to alternate reality games, immersive theater, and artist collective installations like Meow Wolf.


Experiences mentioned in this podcast

Here are the VR games and narrative experiences that I recommend checking out at the end:


The Columbia University School of the Arts’ Digital Storytelling Lab has been conducting interactive storytelling experiments to create guidelines, rules, and structures for facilitating emergent and interactive stories through an open-ended, collaborative process. Interactive storytelling requires surrendering complete control over authorship of the narrative, and it’s an open question how to best structure an experience that allows for full immersion, creative expression, and a balance of diverse participation.

I had a chance to talk with Columbia DSL co-founder and director Lance Weiler about their Sherlock Holmes & the Internet of Things project, the major principles that guide their experiential design process, and how they’re using collaborative storytelling to examine and shape solutions to larger policy and ethical issues in a playful way.


Columbia DSL’s immersive and interactive experience called Frankenstein AI will be showing at Sundance New Frontier later in January.


VRChat is becoming one of the top streamed VR games as different Twitch personalities like pokelawls, dyrus, greekgodx, and LIRIK and YouTube personalities Jameskii & Nagzz21 start to role-play in virtual worlds, and then capture clips of candid and funny interactions that are shared on the LiveStreamFail subreddit. It may be surprising to some that these types of open-ended and casual VR experiences are gaining such traction with the live streaming communities, but VRChat is providing an experimental virtual context for IRL streams, improv role-playing, and storytelling types of experiences for streamers. Livestreaming is providing important insights into the future of interactive media, and into the importance of personality, character, and environmental context for immersive storytelling.

At Oculus Connect 4, I had a chance to catch up with Twitch streamer Zimtok5, a variety streamer focusing solely on VR. He talks about how some of the more casual experiences make for great streaming, since they allow you to connect more with the audience, and they provide a context for the audience to connect with the personality and character of the streamer. We also talk about streaming VR titles, what makes for a good streaming experience in VR, as well as some of his favorite experiences to play on stream.


Zimtok5 says that there are skill-based streamers and character-based streamers. Within the last couple of months, the character-based streamers have started to find a new outlet and home in social VR experiences like VRChat. Some Twitch streamers have established audiences that they’ve connected with via chat, but now they’re able to connect with their audiences in real time in shared virtual spaces. Handling text-based Twitch chat interactions while streaming is still a challenging open problem, but Zimtok5 has found some creative workarounds by peeking at a second screen through the nose hole of the VR HMD. There are more tools available for the Vive, with the OpenVRTwitchChat plug-in that puts a text chat window on your wrist while you’re in VR. Given how VRChat has started to blow up over the last couple of months, 2018 looks like a banner year for social VR and Twitch streaming. You can tune into highlights from Zimtok5 on his YouTube channel, or livestreams on Twitch.

Zimtok5 also livestreamed our conversation on Twitch, and posted the VOD to YouTube


OSSIC’s Jason Riggs told me at CES last year that the future of music is going to be immersive and interactive. Interactive sound design, where user agency is considered part of the experience, is a key ingredient in creating fully immersive audio experiences, and the AR & VR tech platforms are creating new opportunities for musicians to explore interactive music experiences. Sigur Rós’ interactive music collaboration with Magic Leap was recently featured in Rolling Stone, and that interview reminded me of a deep-dive discussion that I had with VR immersive audio evangelist Sally Kellaway about the fundamental principles of immersive sound design. Kellaway provides an overview of how audio designers for games have been creating demand for toolsets and plug-ins for existing Digital Audio Workstations to help break out of the normal linear, authored paradigms. At the time of my interview, Kellaway was a creative director for OSSIC, but she has since moved on and is freelancing as a VR/AR audio designer, strategist, and immersive audio evangelist.


Because designing an interactive experience is an iterative process, there’s no one philosophy or paradigm for how to plan, execute, or post-produce all of the audio components of an immersive experience. The process is driven by the tools that are available, and so Kellaway described to me some of the most popular production pipelines and workflows for immersive audio. There are middleware plug-ins for Unity and Unreal like FMOD and Wwise, but there are also built-in, object-oriented audio features in Unity & Unreal, as well as a range of different audio spatialization options for VR.
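As a toy illustration of the kind of math these spatialization options build on (my own sketch, not any particular engine’s or plug-in’s API), here is first-order ambisonic encoding of a mono sample into B-format, using the traditional 1/√2 weighting on the W channel:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample into first-order ambisonic B-format (W, X, Y, Z).

    azimuth and elevation are in radians; the W channel uses the
    traditional FuMa-style 1/sqrt(2) weighting. Illustrative sketch only.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return (w, x, y, z)

# A source directly ahead (azimuth 0, elevation 0) lands entirely in W and X.
print(encode_first_order(1.0, 0.0, 0.0))
```

A decoder then mixes these four channels down to a speaker array or to binaural output for headphones, which is the part that middleware and VR spatializer plug-ins typically handle for you.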

Hearing is a sense that can perceive a full 360-degree space, and it adds a layer of emotional engagement that speaks to the more primal aspects of our brain. There are also a lot of decades-old audio technologies, like ambisonics and binaural audio, that have found a new home with VR, and so the surge of the immersive technologies of AR and VR as a technological baseline is catalyzing a lot of innovation and experimentation within the realm of music and sound design. The Magic Leap Tonandi music demo by Sigur Rós shown to Rolling Stone journalist Brian Crecente was not “a recorded piece of music,” but more of “an interactive soundscape.” Crecente describes his experience:

Tonandi starts by creating a ring of ethereal trees around you and then waiting to see what you do next. Inside, floating all around me are these sorts of wisps dancing in the air. As I wave my hands at them, they create a sort of humming music, vanishing or shifting around me. Over time, different sorts of creations appear, and I touch them, wave at them, tap them, waiting to see what sort of music the interaction will add to the growing orchestral choir that surrounds me. Soon pods erupt from the ground on long stalks and grass springs from the carpet and coffee table. The pods open like flowering buds and I notice stingray-like creatures made of colorful lights floating around me. My movements don’t just change this pocket world unfolding around me; they allow me to co-create the music I hear, combining my actions with Sigur Rós’ sounds.

This is in line with what OSSIC’s Jason Riggs predicted as the immersive and interactive future of music that VR & AR technologies will enable. The OSSIC demo at CES last year was a really impressive immersive audio experience that allowed you to fully interact with the audio environment. OSSIC’s audio hardware made it an even more compelling spatial audio experience, but it’s just the beginning of how VR and AR will change how music is composed and experienced.

Will the existing linear-authoring audio tools adapt to become more spatialized? Or will the game engines integrate enough plug-in support for DAWs to become the de facto production pipeline for audio experiences? Either way, AR and VR technologies will enable new distribution platforms for audiences to experience the spatial dimension of sound in a way that’s much closer to how we hear the world every day, and it’ll enable musicians like Sigur Rós and Miro Shot to push the boundaries of what’s possible with spatialized music.


Emilie Joly says that the rules of interaction are the stories within interactive narratives, and Apelab is creating the Spatial Stories plug-in and toolset in order to make it easier to define those rules. Unity uses an object-oriented approach for creating interactive environments, but it’s optimized for creating interactive video games. The Spatial Stories toolset aims to create point-and-click interfaces geared towards immersive storytellers who want to create interactive experiences, whether in VR, AR, or mixed reality. Joly sees immersive storytelling as a combination of world-building and writing, and Apelab wants to optimize the workflow for screenwriters and storytelling creatives so that they can more rapidly iterate on ideas and experiences within both the Unity and Unreal engines. They’re also building integrations with AI APIs in order to better implement conversational interfaces.
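To make the idea of “rules of interaction as story” concrete, interaction rules can be modeled as data attached to scene objects, with each rule pairing a trigger with an authored action. This is a hypothetical illustration of the concept only, not Apelab’s actual Spatial Stories API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each Rule pairs a trigger (gaze, grab, proximity)
# with an authored action, and a StoryObject carries an ordered list of
# rules. Names and structure are mine, not Apelab's plug-in.

@dataclass
class Rule:
    trigger: str   # e.g. "gaze", "grab", "proximity"
    action: str    # e.g. "highlight", "play_line:intro", "open_door"

@dataclass
class StoryObject:
    name: str
    rules: list = field(default_factory=list)

    def on_event(self, trigger):
        """Return the actions fired by a trigger, in authoring order."""
        return [r.action for r in self.rules if r.trigger == trigger]

door = StoryObject("door", [Rule("gaze", "highlight"), Rule("grab", "open_door")])
print(door.on_event("gaze"))   # → ['highlight']
```

The appeal of this data-driven shape is that a writer can author rules in a point-and-click interface without touching engine code, which is the workflow optimization Joly describes.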

I had a chance to talk with Joly at Kaleidoscope VR’s FIRST LOOK market, and to walk through some of the features of the Spatial Stories toolset, integrations with existing screenwriting tools like Final Draft, and metaphors that she uses to understand the unique affordances of interactive stories.



Otoy is a rendering company that is pushing the limits of digital light fields and physically-based rendering. Now that Otoy’s Octane Renderer has shipped in Unity, they’re pivoting from licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC20 cryptocurrency market.

I talked with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017 about relighting light fields, 8D light field & reflectance fields, modeling physics interactions in light fields, optimizing volumetric light field capture systems, converting 360 video into volumetric videos for Facebook, and their movement into creating distributed render farms.

LISTEN TO THE VOICES OF VR PODCAST

In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We conclude this conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are driving and inspiring Urbach’s work.

This time Urbach shares his visions of VR’s potential to provide us with experiences that are decoupled from the normal expected levels of entropy and energy transfer for an equivalent meaningful experience. What’s below the Planck scale? It’s a philosophical question, but Urbach suspects that there are insights from information theory, since Planck’s photons and Shannon’s bits have a common root in thermodynamics. He wonders whether the halting problem suggests that a simulated universe is not computable, as well as whether Gödel’s incompleteness theorems suggest that we’ll never be able to create a complete model of the universe. Either way, Urbach is deeply committed to creating the technological infrastructure to be able to render the metaverse, and to continuing to probe for insights into the nature of consciousness and the nature of reality.
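One concrete place where Shannon’s bits do meet thermodynamics is Landauer’s limit, the minimum energy dissipated when one bit of information is erased. This worked example is my own illustration of that link, not a calculation from the episode:

```python
import math

# Landauer's limit: erasing one bit dissipates at least k_B * T * ln(2) joules.
# Worked example illustrating the bits/thermodynamics connection mentioned
# above; this is my illustration, not a calculation Urbach gives.
BOLTZMANN = 1.380649e-23  # J/K (exact value in the 2019 SI definition)

def landauer_limit_joules(temperature_kelvin):
    """Minimum energy to erase one bit at a given temperature."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (~300 K) this is on the order of 3e-21 joules per bit.
print(landauer_limit_joules(300.0))
```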

Here’s the launch video for the Octane Renderer in Unity


Over the past couple of years, Danny Bittman has logged over 1800 hours in programs like Tiltbrush & Google Blocks on his way to becoming a full-time VR artist. Bittman started out studying film, but realized that he had over 100 years of cinematic history to catch up on. He decided to pivot to virtual reality, since the biggest thing holding him back from becoming a professional VR artist was putting in the hours to learn the tools, rapidly iterate on projects, and explore the potential of spatial storytelling. He has found a niche in creating vast landscapes in Tiltbrush that take dozens of hours to create. Over the past couple of years, Bittman has collaborated with Marvel on a Doctor Strange project, helped beta test Google Blocks, created art for TheWave performances, performed at a VMware conference, and was the lead artist on Billy Corgan’s Aeronaut music video in collaboration with Viacom NEXT.

I had a chance to talk with Bittman at Oculus Connect 4 about his journey in becoming a full-time VR artist, how he uses VR to express and reflect on his emotional states, how he’s using VR to document his dreams, and how his experiences of the Sandy Hook Elementary School shooting in his hometown of Newtown, CT have affected what he creates and wants to experience in VR.


Here’s the Aeronaut music video in collaboration with Billy Corgan

Here’s the Scribbler Mixed Reality Video

Here’s Bittman’s visualization of a dream sequence


Soundboxing is a VR rhythm game that has found a community of people who use it for exercising in VR, with some people reporting that they’ve lost up to 50 pounds from playing it. Soundboxing is similar to Audioshield in that you punch orbs set to the rhythm of songs streamed from YouTube, but rather than using an algorithmic approach, Soundboxing lets users record their own runs, which means that all of the content is user-generated. Users record and edit their own runs by playing a song and punching an invisible wall, and the scoring system encourages streaks, which helps cultivate and track flow states.

I’ve really enjoyed playing Soundboxing, and it’s an engaging game with a lot of options that allow you to follow creators, curate playlists, and customize your gaming experience. For example, you can record a run with your dominant hand, and then flip the recording so that you can train yourself to become more ambidextrous. The official Soundboxing website also has user profiles, with an impressive set of archive and search integrations compared to other VR game websites.
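Soundboxing’s internal data format isn’t public, so the following is only a hypothetical sketch, but flipping a recording amounts to a simple transform over a recorded run: swap which hand each hit belongs to and mirror its horizontal offset:

```python
def flip_run(hits):
    """Mirror a recorded run: swap hands and negate each horizontal offset.

    Each hit is (time_sec, hand, x_offset). This tuple format is a
    hypothetical illustration, not Soundboxing's actual data model.
    """
    swap = {"left": "right", "right": "left"}
    return [(t, swap[hand], -x) for (t, hand, x) in hits]

run = [(0.5, "right", 0.3), (1.0, "left", -0.2)]
print(flip_run(run))   # → [(0.5, 'left', -0.3), (1.0, 'right', 0.2)]
```

Because the transform is an involution, flipping a run twice recovers the original, which is what makes it safe to apply to any existing recording.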

Soundboxing was created by solo indie developer Eric Florenzano, who was working on a VR browser for Reddit when he discovered how compelling it was to record your embodiment while trying to figure out what comments look like in VR.

Florenzano has thought a lot about the larger economic systems and architecture within the VR software ecosystem, and he shares a lot of ideas for how to cultivate and design systems that would allow people to make a living within VR. Soundboxing relies upon user-generated content, and Florenzano was surprised to find that about 50% of his users were creating and sharing content. He sees Soundboxing as a 3D immersive web browser, and takes a lot of inspiration from how the Brave browser is attempting to create blockchain-based digital advertising with their Basic Attention Token.

This is a chat that I had with Florenzano back in May about how people are using Soundboxing for exercise and fitness, as well as a lot of deep thoughts about the future of virtual economies & paying artists, and how he’s thinking about what immersive fitness experiences will look like across different immersive platforms and mediums.

