Google Earth VR has been one of the most mind-blowing experiences that I’ve had so far in VR, for so many different reasons. It’s felt like it’s been rewiring my brain to accommodate new perspectives of the earth, in a way similar to what returning astronauts report as “The Overview Effect.” It’s also enabled me to navigate the earth based upon natural landmarks and without seeing borders, and therefore start to cultivate a new type of relationship with the earth. It’s also allowed me to find common ground with friends and strangers by sharing stories that are based upon geographic locations, and it’s one of the most intimate and powerful social VR experiences that I’ve had so far.

I had a chance to do an interview with Mike Podwal, Product Manager for Earth VR, as well as Dominik Kaeser, Engineering Lead on Earth VR, to ask them about their design process. They focused primarily on performance, comfortable navigation, and an overall immersive experience of the earth. They weighed the tradeoffs between simplicity vs. usefulness in deciding what features to implement, and very few of their beta testers requested an explicit search functionality. They instead preferred organic exploration and navigating based upon landmarks in a way that provides a new perspective and relationship with the earth. In the future, they will be looking at the 2D version of Google Earth for inspiration for new features such as annotation, but they’re also open to feedback about the types of features that people are requesting. You can hear a lot more insights and stories behind the process of creating Google Earth VR in the interview below.

LISTEN TO THE VOICES OF VR PODCAST

Google Earth VR is a powerful asymmetrical social VR experience where people can watch a 2D screen of someone in VR who is telling stories about their life or giving guided tours. Sharing stories based upon geographic locations has been a powerful way for me to find common ground with both friends and strangers. Google Earth feels like one of the first real killer apps of VR that has made me want to share the process of annotating the earth with layers of meaning with my friends and family, and it will likely inspire a lot of people to buy their first high-end VR system.

Google Earth VR maps out the earth at many different scales ranging from human scale to being able to see entire countries. Being able to seamlessly navigate the entire globe at any one of these scales has provided something that no human has been able to really experience before, and so my brain has been stimulated in a way that feels like many new neural connections have been forged. It’s stimulated my mind with new ideas and insights unlike any other experience I’ve had before, and seems to make an explicit connection between geography and the architecture of memory. It’s also personally validated the concepts of embodied cognition theory, which suggest that our cognitive processes are influenced not only by our mind and body but also by our environment. Google Earth seems to provide enough fidelity at the human scale to evoke powerful memories, and I found myself efficiently mapping out the emotional landscape through the process of flying over my hometown in a way that I could never do before.

Google Earth VR is a free application for the Vive on Steam VR, and so I had a couple of follow up questions for Google after my interview. I asked them: “What kind of data can and cannot be collected given Google’s standard Privacy Policy within a VR experience?” and “Are there long-term plans to evolve Google’s Privacy Policy given how VR represents the ability to passively capture more and more intimate biometric data & behavioral data?”

Here is Google’s response:

“Our users trust us with their information and we outline how it may be used across Google — to personalize experiences, to improve products, and more — in our Privacy policy. Users can control the information they share with Google in ‘My Account’.”

I didn’t see any explicit privacy settings related to virtual reality yet, and this is really the first application that they’ve released that starts to raise some of these deeper questions for me. I’ll be talking to more privacy and biometric data experts to get specific information about some of my concerns.

I believe that we are moving from the information age into the experiential age. Within the information age, we gave explicit consent over data that could be provided through a form. In the experiential age, companies can track head gaze and hand motions, and eventually eye gaze, emotional states, heart rate, and EEG data. As the founder of OpenBCI has suggested, EEGs and potentially other biometric data may have a unique signature that can’t be anonymized as easily and could have additional privacy concerns down the road.

These open questions about biometric data and privacy are long-term open questions for the entire VR industry, as is the question of how to sustain vast experiences like this. I personally believe that new business models may have to be developed to really sustain new types of services like Google Earth VR and other experiences in the metaverse. But for now, it’s an amazing gift to humanity to provide this service free of charge for the world, and I expect that it will blow a lot of minds and inspire a lot of people to try out VR for the first time.

Overall, Google Earth VR has been one of those experiences that has really stuck with me and inspired me to reach out and share it with friends. It’s been a profoundly intimate way to get to know someone by having them take you on a guided tour of the locations on the earth that mean the most to them. By prioritizing immersion, Google Earth provides a completely new way of navigating the Earth that can provide some totally new perspectives. This is also an ambitious design effort that starts to explore entirely new interfaces and user experience paradigms that give a glimpse of where immersive computing is headed in the future.

This is just the first iteration of Google Earth VR, and the quality is just going to get better and better. However, I personally have my doubts that it’ll ever get to the full resolution of the Earth with all of its constantly changing dynamic processes. But I’ve already started to see and appreciate the earth in a new way, and it’s inspired me to want to travel and pay more attention to the beauty that’s all around me.

Google Earth VR is available for free on Steam.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

There was an immersive dome experience at VRLA called Samskara, produced by 360art and based upon the Hindu Vedas. It featured different Hindu mythological characters reimagined by visionary artist Android Jones. I had a chance to talk to Android about the intriguing backstory of this project, which involves a mysterious Swami who is experimenting with the latest immersive technologies as a tool for spiritual transformation.

LISTEN TO THE VOICES OF VR PODCAST

Android was also publicly debuting Microdose VR for the first time at VRLA, a particle-emitting painting program designed for realtime VJ performances or as a tool to get into the creative flow state. The experience was informed by Android’s many years doing live art performances at transformational festivals, his experiences within the games industry as a digital artist, as well as inspiration from a number of different psychedelic experiences.

You move your hands around spray painting particles in Microdose VR with a similar mechanic to Tiltbrush, but rather than drawing 3D vector lines your strokes emit a wide range of different psychedelic molecules that morph, evolve and disappear. There’s no ability to save or undo any of your creations, and so it’s like an ephemeral sand painting experience that focuses on the cultivation of presence and unlocking creative flow.

Overall, Android wants to bring virtue to virtual reality and believes that it can be a tool for our own evolution. He wants to help evolve a new type of VR artist, and to create tools for the next generation of creatives and democratize the creative experience.

Here’s the trailer for Samskara

Here’s an example of what a performance in Microdose VR looks like

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Depending on who you were rooting for in the US election, last night was either a shocking and sobering wake-up call to a reality that you don’t feel a part of or it was a jubilant celebration of a victory that was doubted and underestimated by the mainstream political and media establishments. Either way, what’s clear is that there’s a cultural divide in America that split the electorate nearly evenly in this election. Trying to understand the other side of the cultural gap can feel like entering into an entirely different parallel universe, and I feel like virtual reality has an important role to play in bringing more empathy and understanding to each side.

I had a chance to catch up with VR Playhouse co-founder Ian Forester at Oculus Connect 3, where he shared with me some of his vision for how VR could change the way that we learn about and understand the world. He sees that there are three primary ways that we learn about the world: our direct sensory experiences, our direct observations of other people, and a lot of indirect cultural indoctrination that comes from the mainstream media, education, and the culmination of all of our social interactions.

LISTEN TO THE VOICES OF VR PODCAST

Ian sees that VR has the potential to provide us with a wider range of direct sensory experiences with a diverse range of people and cultures within social VR experiences, and that this could give us more access to learning from interactive direct experience rather than from information that we’re consuming from different sources of external authority.

It feels like the United States is at a real crossroads with the political culture gap that exists right now, and this interview with Ian starts to discuss how VR could help us move beyond our existing methods of cultural indoctrination. Rather than passive consumption, VR allows us to have interactive experiences that could engage and connect us to each other in new ways that transcend the capabilities of any other technologically-mediated interface.

Tilt Brush art by 3Donimus

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Google announced at the W3C WebVR workshop in mid-October that they would be shipping a WebVR-enabled Chromium browser in Q1 of 2017. I had a chance to catch up with Josh Carpenter last week to talk about some of the work that Google is doing to enable innovation on the open web, more about his W3C talk on HTML, CSS & VR, and some of Google’s early experiments with hybrid apps that combine OpenGL with web technologies.

LISTEN TO THE VOICES OF VR PODCAST

Josh talks about how WebVR is drawing inspiration from the Extensible Web Manifesto in providing low-level APIs that create a common baseline for a solid experience built upon web technologies.
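As a rough sketch of what those low-level APIs looked like in the WebVR 1.0 draft spec at the time: a page enumerates connected headsets with `navigator.getVRDisplays()`, starts presenting a canvas with `requestPresent()`, and then drives its render loop at the headset’s native refresh rate. The helper function names below are my own; the browser APIs (`getVRDisplays`, `requestPresent`, `VRFrameData`, `submitFrame`) are from the draft spec.

```javascript
// Minimal WebVR 1.0 entry flow (a sketch against the draft spec, not a full renderer).
function findVRDisplay() {
  // Feature-detect: non-WebVR browsers (and non-browser runtimes) lack the API.
  if (typeof navigator === 'undefined' || !navigator.getVRDisplays) {
    return Promise.resolve(null);
  }
  return navigator
    .getVRDisplays()
    .then(displays => (displays.length ? displays[0] : null));
}

function startPresenting(display, canvas) {
  // requestPresent must be called from a user gesture, e.g. a click handler.
  return display.requestPresent([{ source: canvas }]).then(() => {
    const frameData = new VRFrameData(); // per-eye view & projection matrices

    function onFrame() {
      display.getFrameData(frameData);
      // ...render the left eye with frameData.leftViewMatrix /
      //    leftProjectionMatrix, then the right eye, into `canvas`...
      display.submitFrame(); // hand the frame to the compositor
      // Use the display's rAF, which ticks at the headset's refresh rate (90Hz).
      display.requestAnimationFrame(onFrame);
    }
    display.requestAnimationFrame(onFrame);
  });
}
```

Outside a WebVR-enabled browser, `findVRDisplay()` simply resolves to `null`, which is the graceful-degradation path the spec encouraged: the same page can fall back to a flat 2D rendering.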

At GDC in March, the WebVR demos on the expo floor had trouble hitting 90 fps, but since then they’ve been able to start to meet that minimum baseline of performance. Achieving this milestone helped to show other VR companies that the web could actually be a viable distribution platform for VR.

But Josh compares talking about WebVR to a VR developer right now to what it must have felt like to be Tim Berners-Lee talking about the potential of the open web to a CD-ROM developer in the early ’90s. There will continue to be premiere experiences and innovations happening within native VR applications, but there will likely be unique affordances and conveniences that the web can provide to an immersive experience that go beyond what a native app can do.

Josh gives Netflix as an example to show the power of the open web. If we were to just look at graphic fidelity as the ultimate measure of performance and value, then we would all be watching movies on Blu-ray discs rather than on Netflix. But there’s lower friction and instant gratification with Netflix, even though the graphic fidelity is not nearly as good. This is one example of how Josh thinks about the potential of an interconnected Metaverse in comparison to a closed, walled-garden app ecosystem that, measured by fidelity alone, provides a vastly superior experience.

Josh appreciates the power of strong vertical integration and proprietary solutions, but also believes that a common horizontal baseline of WebVR could enable the same type of rapid innovation and emergent creativity that the open web has enabled.

He also says that Google’s WebVR browser is going to be based upon the open source Chromium browser, and that Oculus’ WebVR browser, named “Carmel,” is also based upon Chromium. He notes that desktop apps like Slack are built on open web standards and bundled with Chromium via Electron, and that he’s looking forward to seeing what type of innovation comes from how developers imagine what a browser could do in a VR experience. One example is an anthropomorphized NPC character that is powered by the Chromium browser.

Josh sees 2017 as a year for exploration and seeing what developers do with the draft specifications of the WebVR standard. Right now Google’s team is dealing with how to view web content within a VR context. Josh says that Apple came up with the pinch-to-zoom mechanic that allowed desktop-optimized layouts to be viewed with a mobile browser before responsive designs were invented, and that Google is in the process of experimenting with optimizing 2D content for a 3D context when the pixel density isn’t high enough to do a direct translation. Google has also been experimenting with combining HTML and CSS with OpenGL content in order to do rapid prototyping of user experiences using standardized web development technology stacks.

Josh also shared with me that the Voices of VR podcast has been an important part of the evolution of WebVR since the beginning of the consumer VR gatherings starting with Silicon Valley Virtual Reality Conference in May of 2014. He said that the previous Voices of VR episodes on WebVR have been an important part of both getting the word out, but also helping to build internal buy-in at different key moments of WebVR’s history.

So here’s a list of my previous interviews about WebVR that go all the way back to episode #13. It’s pretty amazing to hear the evolution of where it started and to see where it’s at today with every single major VR company and browser vendor participating in the recent WebVR workshop.

To get more involved in WebVR, be sure to go to WebVR.info and check out some of the additional links in the previous Voices of VR episode about WebVR.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Some of the earliest experiments in making VR a first-class citizen on the web originated at Mozilla in 2014. The WebVR spec was then drafted in a collaboration between Mozilla’s and Google’s implementations. There’s been a lot of excitement and momentum building around WebVR over the last couple of months, first with the WebVR announcements by Oculus at Oculus Connect 3, and then with over 140 WebVR developers meeting for a W3C workshop on WebVR that happened on October 19th & 20th.

I had a chance to stop by Mozilla’s offices and catch up with two WebVR developers, Diego Marcos and Chris Van Wiemeersch, who talked about the big takeaways from the recent W3C WebVR Workshop. There were commitments made for a publicly available WebVR-enabled browser from Google in Q1 2017 and a pilot program from Mozilla, also in Q1. Diego talks about Mozilla’s new experimental, high-performance web browser Servo implementing the WebVR APIs, and Chris talks about the unprecedented momentum and public support that WebVR is seeing across the VR industry. We also talk about some open web concepts like progressive web apps, launching WebGL games as native apps using Electron, emerging decentralized technologies like the blockchain and the IPFS distributed web, and some of the next steps for WebVR.

LISTEN TO THE VOICES OF VR PODCAST

WebVR Resource Links

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

On October 28-31, there was a virtual & augmented reality art show in San Francisco called “The Art of Dying.” It featured 15 VR experiences and another dozen artists exploring death and grieving using immersive technologies. The show was produced by Kelly Vicars & Lindsay Saunders with the intention of promoting VR and AR as new art mediums that deserve to be seen within the context of an art gallery setting. They created immersive physical installations for each VR experience to help create an environment where participants could have difficult conversations about death and dying, inspired by a series of shared virtual and augmented reality experiences.

So today’s podcast episode is a unique combination of covering The Art of Dying show with an interview with Kelly and Lindsay, but it’s also an opportunity to speak to my experience as an artist with a piece in the show. Kelly and Lindsay share their process of producing The Art of Dying as well as some of their observations in the types of conversations and reactions that were catalyzed by the VR experiences.

LISTEN TO THE VOICES OF VR PODCAST

I attended the show both as a journalist and VR enthusiast interested in having all of the experiences, but also as an artist with a VR experience within the show. I co-wrote & produced Crossover in the spring of 2015, and it’s a narrative story based upon my experience of losing my wife and father to suicide. I created a virtual reality grief ritual in order to explore how the affordances of VR could be used in my process of healing.

Death is already a difficult topic to talk about, and going through a suicide is an extra burden that has a lot of cultural taboos associated with it. I wanted to use VR as a medium to break those taboos because I felt that VR offered a certain amount of intimacy and emotional presence to explore difficult topics. Just as some difficult conversations need to happen face-to-face, there are some stories that just work better within VR because it cultivates an intimate face-to-face context that allows deeper topics to be explored.

Other topics covered in other VR experiences in the show included floating down the River Styx and transitioning from Earth into the Underworld, a VR conceptualization of going through bardo states explained in the Tibetan Book of the Dead, an immersive Tiltbrush world featuring a ceremonial ritual temple inspired by Mayan culture, and a series of experiences that were abstracted representations of different bardo states. A full list of all of the experiences is down below.

Here’s a 360 video of my Crossover experience that was featured in The Art of Dying:

Here is a list of artists participating in The Art of Dying show.

Virtual reality (VR)

  • Transition by Mike von Rootz & Joost Jordens
  • Ceremony for the Dead Tilt Brush scapes by Sutu Eats Flies
  • SoundSelf by Robin Arnott
  • Pearl by Google Spotlight Stories Lab
  • Das Is by Chelley Sherman
  • Bardo Thogul by John Benton & team
  • VR scene from ‘That Dragon, Cancer’ by Ryan Green & Josh Larson
  • Crossover by Kent Bye
  • Imago by Chuck Tsung-Han Lee
  • Red Patterns by Ando Shah & Pierre Friquet
  • Zen Parade by ShapeSpace VR
  • Round Round by Aimée Schaefer, Shir David, Kendra Leach & Shaffira Ali
  • Float by Kate Parsons
  • Death is Only the Beginning by Jose Montemayor, Bec Abdy & Olivia Skalkos
  • Recursion by Erica Layton

Augmented Reality (AR):

  • AR art by Zenka, Carla Gannis, Stefanie Atkinson, and Lauren Carly Shaw
  • AR installation by ecco screen

Mixed Reality (MR):

  • Holoshatter by Yosun and staRpauSe
  • Grasp, an AR installation by Tucker Heaton & Toshi Hoo
  • Hologram by Claudia Bicen

Interactive art:

  • New interactive installations by Marpi & ecco screen
  • ‘Fear,’ a sound installation by Anna Landa
  • Stefanie Atkinson, Timothy Surya Das & Kerry Boyatt
  • Sound installation by Nick Shelton, Devon Meyers & Kelly Vicars with original music by Alex Stickels

Art:

  • Art by Bay Area artists Kevin Balcora, Victor Castro, & Kelly Vicars
  • Sculptures by Stuart Mason, Upload VR
  • Installations by Eric Cole, Liisa Laukkanen & Kelly Vicars

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

The concept of embodied cognition is a hot topic within immersive education circles, and was a featured topic during the Embodied Learning educational workshop that happened at the IEEE VR academic conference. Embodied Learning could help revolutionize education by incorporating our bodies within the learning process.
We generally believe that humans think with our brains, but embodied cognition theories suggest that we also use our bodies and surrounding environments in order to think and learn. This has huge implications for VR since it both provides a mechanism to be able to more fully engage within the learning process as well as have more control of our contextual environments that are optimized to teach different concepts.

I had a chance to catch up with educator Erica Southgate from the University of Newcastle in Australia at the Embodied Learning workshop this past March. She is using serious games and augmented reality to teach literacy. She’s exploring how to use social VR to enable high-prestige professionals to mentor disadvantaged youth, and she’s also studying how indigenous cultures use social structures and knowledge-holder rituals to train youth, and how this could inspire open-world collaborative learning environments in VR.

LISTEN TO THE VOICES OF VR PODCAST

For more information on Embodiment Theory and Embodied Cognition, be sure to check out my previous interviews about using dance to teach computational thinking, as well as with Saadia Khan and Embodied Learning workshop keynote presenter Chris North.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

On today’s episode, I talk with Dr. Benjamin Lok from the University of Florida about how they’re using Virtual Humans as patients to train medical students. He talks about the key components for creating a plausible training scenario, which include not only accurate medical symptom information but, more importantly, a robust personality and specific worldview. Humans hardly ever just transmit factual data, and so whether the patient says too much or not enough, the students have to be able to navigate a wide range of personalities in order to get the information required to help diagnose and treat the patient.

LISTEN TO THE VOICES OF VR PODCAST

Virtual humans help to embody symptoms that a human actor can’t display, assist in going through an extended interactive question-and-answer path, or are used within collaborative training scenarios where it becomes difficult to get all of the required expert collaborators into the same location at the same time.

Dr. Lok makes the point that creating virtual humans requires a vast amount of knowledge about the human condition and that it’s really a huge cross-disciplinary effort, but also one of the most important fields of study, since it has so much to teach us about what it means to be human.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Oculus Story Studio has been exploring VR storytelling through a series of short narrative experiences starting with Lost and Henry, but they wanted to push the envelope of immersive storytelling with their next short, titled Dear Angelica. They created a new content creation tool called Quill that enables artists to create immersive illustrations and animated stories entirely while immersed in VR. It was announced that Quill would be released sometime at the end of January, when Dear Angelica premieres at Sundance.

I had a chance to catch up with Saschka Unseld at Oculus Connect 3, who is the creative director of Oculus Story Studio as well as the writer and director of Dear Angelica, to talk about Quill and the intention behind their next VR experience.

LISTEN TO THE VOICES OF VR PODCAST

Quill is a mix between Adobe Illustrator and Adobe After Effects in that there are 2D vector brushes that have a bit of motion graphics flair. This gives the “quillustrations” their own distinct feeling that is unique to the VR medium. It’s more like stepping into a surrealistic dream or through someone’s impressionistic memories that really come alive when you are co-present with them. There’s also a lot of dynamic movement as the environment and characters are drawn out and constructed line by line.

Their intention with Quill was to create a non-opinionated tool whose output feels more like the style of the artist rather than the tool. The closest analogy in the VR world is probably Tiltbrush, which also uses 2D vector-like brushes. But Tiltbrush takes a much more opinionated approach with its highly-stylized brushes, and so it’s often easier to tell that it’s a Tiltbrush creation than to identify the artist who created it.

The advantage of a tool like Tiltbrush is that it’s a lot easier for non-artists and casual creators to make something that feels amazing, just from the sheer joy of being able to paint with light in 3D for the first time ever. But with Quill, it’s going to be a lot harder for untrained artists to pick up the tool and feel like they’re the next Picasso. It’s more of a blank slate, and will require more of a learning curve for each artist to be able to fully express their style.

Oculus Story Studio has also been collaborating with comic book artists to empower them to create immersive VR art, but also to craft an entire story within VR using Quill’s storytelling engine. Dear Angelica has been the first film/movie/VR narrative experience to be created with Quill, and it’ll be interesting to see more independent artists start to use the tool to craft the stories that they want to tell. Until then, we’ll have to wait until Sundance in January before they release it more widely and start talking about other projects that are being created with it.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Cognitive VR is a VR analytics platform that has an impressive system for visualizing a player’s movements and gaze within a 3D representation of a VR experience. Their SceneExplorer tool translates the 3D geometry from a Unity or Unreal experience into a WebGL mesh that can be shared within a web browser. It allows VR developers to quickly and intuitively visualize how players are moving around and what they’re looking at, but it also helps to identify performance bottlenecks across a wide range of different hardware configurations.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Cognitive VR founder Tony Bevilacqua & Product Manager Robert Merki at TechCrunch Disrupt, where we talked about their VR analytics platform and where it’s going in the future. They’re looking forward to eventually adding more qualitative feedback and more detailed eye tracking analytics that will expand their user base beyond VR developers, and they’re also looking at how to handle augmented reality analytics, where there are so many variables with changing environments.

There are a number of different VR Analytics platforms out there, but the approach that Cognitive VR is taking in correlating their visualizations within a 3D model of an experience is one of the more compelling and interesting implementations that I’ve seen so far.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip