At Google’s October 4th press conference, the company announced a new Pixel 2 phone and a range of new ambient computing devices powered by AI-enabled conversational interfaces, including the new Google Home Mini and Max speakers, the Google Clips camera, and wireless Pixel Buds. The Daydream View mobile VR HMD received a major upgrade with vastly improved comfort and weight distribution, reduced light leakage, better heat management, cutting-edge aspherical Fresnel lenses with greater acuity and a larger sweet spot, as well as a field of view 10–15 degrees wider than the previous version. It’s actually a huge upgrade, but VR itself received only a few brief moments during the two-hour keynote, where Google explained the AI-first design philosophy behind its latest ambient computing hardware releases.

I had a chance to sit down with Clay Bavor, Google’s Vice President for Virtual and Augmented Reality, to talk about their latest AR & VR announcements as well as how Google’s ambient computing and AI-driven conversational interfaces fit into their larger immersive computing strategy. YouTube VR is on the bleeding edge of Google’s VR strategy, and their VR180 livestream camera can broadcast a 2D version that translates well to watching on a flat screen, while also providing a more immersive stereoscopic 3D VR version for mobile VR headsets. Google retired the Tango brand with the announcement of ARCore on August 29th, and Bavor explains that they had to come up with a number of algorithmic and technological innovations in order to standardize the AR calibration process across all of their OEM partners.

LISTEN TO THE VOICES OF VR PODCAST

Finally, Bavor reiterates that WebVR and WebAR are a crucial part of Google’s immersive computing strategy. Google showed its dedication to the open web by releasing experimental WebAR browsers for ARCore and ARKit so that web developers can build cross-compatible AR apps. Bavor sees a future that evolves beyond the existing self-contained app model, but this requires a number of technological innovations, including contextually-aware ambient computing powered by AI as well as the Visual Positioning Service announced at Google I/O. There are also a number of other productivity applications that Google is continuing to experiment with, but screen resolution still needs to improve from a visual acuity of roughly 20/100 to something closer to 20/40.

After our interview, Bavor was excited to tell me how Google created a cloud-based, distributed computing physics simulator that could model 4 quadrillion photons in order to design the hybrid aspherical Fresnel lenses within the Daydream View. This will allow them to create machine-learning-optimized approaches to designing VR optics in the future, but it will also likely have other implications for VR physics simulations and potentially delivering volumetric digital lightfields down the road.

Google’s vision of contextually-aware AI and ambient computing has a ton of privacy implications that are similar to my many open questions about privacy in VR, but I hope to open up a more formal dialog with Google to discuss these concerns and potentially new concepts of self-sovereign identity and new cryptocurrency-powered business models that go beyond their existing surveillance capitalism business model. There wasn’t a huge emphasis on Google’s latest AR and VR announcements during the press conference as AI conversational interfaces and ambient computing received the majority of attention, but Google remains dedicated to the long-term vision of the power and potential of immersive computing.

This is a listener-supported podcast, consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Brian Blau is the vice president of research for personal technologies at Gartner Research, where he’s in the business of making predictions about the consumer adoption of virtual reality and augmented reality technologies. I last interviewed Blau in 2015, when his predictions were a lot more conservative than those of other analysts who were forecasting more explosive growth for VR. Blau tells me that his more conservative estimates have more closely matched reality: he slightly overestimated the PC VR market and underestimated how fast the mobile VR HMD market would take off.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Blau at Google I/O on May 17th, 2017, where we talk about the state of the VR & AR industries and what some of the potential catalysts for consumer adoption might be. A big point that Blau makes is that technologies get adopted when people are not explicitly thinking about them, and that there may be more drivers of immersive technologies through other ambient computing innovations. This interview was conducted a few weeks before Apple announced ARKit on June 5th, and before Google ditched the Tango brand and depth-sensor hardware requirement for its phone-based AR on August 29th with the launch of ARCore. Then on September 12th, Apple announced the iPhone X’s front-facing depth camera, which lets companies like Snapchat create more sophisticated digital avatars, as well as Animoji, which provide the ability to embody emojis with recorded voice messages. Apple also announced that it’s now possible to make phone calls via the Apple Watch + AirPods, which is a push towards ambient computing with conversational interfaces and a move away from solely relying upon screens on phones.

Like Duygu Daniels told me in 2016, Snapchat is an AR company, and it’s possible that it has had more of an influence on driving Apple’s technological roadmap than virtual reality has. The consumer use of services like Snapchat and Animoji may prove to be a key driver of immersive technologies, since Apple decided to put a depth-sensor camera on the front of the phone rather than on the back. The front-facing camera offers more sophisticated ways to alter your identity through AR filters, which, when seen in the virtual mirror of your phone screen, change the expression of identity through the embodiment of these virtual avatars. You can see how much Apple’s Craig Federighi changed his expression of himself while recording an Animoji during the Apple keynote:

Snapchat’s Spectacles glasses received a lot of grassroots marketing from users who were recording Snaps without a phone. Will the additional digital avatar and face-painting features of the iPhone X inspire extra demand from consumers willing to pay $999 for these types of features that are only made possible by a front-facing depth camera? Either way, it’s clear that the technological roadmap for mobile computing has now started to include volumetric and immersive sensors. Google made a bet with Tango that adoption would be driven by a depth sensor pointed outward into the world for AR, but it looks like Snapchat could be a key app that popularizes front-facing depth cameras and the use of augmented and mixed reality filters that change how you express yourself and connect to your friends.

This is a listener-supported podcast, consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Ramez Naam is the author of The Nexus Trilogy of sci-fi novels, which explores the moral and sociological implications of technology that can directly interface with the brain. He gave the keynote at the Experiential Technology Conference in March, surveying the latest research on how these interfaces could change the way that we sleep, learn, eat, find motivation to exercise, create new habits, and broadcast and receive technologically-mediated telepathic messages. I had a chance to catch up with him after his talk, where we do a survey of existing technologies, where invasive technologies are headed, the philosophical and moral implications of directly transferring data into the brain, and whether or not it’ll be possible to download our consciousness onto a computer.

LISTEN TO THE VOICES OF VR PODCAST


This is a listener-supported podcast, consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Kevin Mack is an Oscar-winning visual effects artist and abstract artist who creates digital spaces with fluidly moving textures that are awe-inspiring in their ability to create a novel experience unique to VR. In Blortasia you float weightlessly, exploring the ins and outs of a series of tunnels that have a consistent topological sculpture, but with an ever-changing shader of patterned frequencies of rainbow colors that cultivate a sort of visual neural entrainment. It aspires to recreate a psychedelically transcendent or transpersonal experience that goes beyond what your verbal mind can easily understand, as there’s no content, message, story, game, or objective beyond providing an experience that’s only possible in these virtual worlds. It strikes a unique balance: an exciting and novel visual experience that’s also simultaneously relaxing, with the power to induce powerful trance states that may have unique healing properties now being discovered in medical applications for distraction therapy.

LISTEN TO THE VOICES OF VR PODCAST

Mack has a neuroscience background, and so he’s been collaborating with brain surgeons who are experimenting with using his Zen Parade 360 video as a hypoalgesic to decrease sensitivity to painful stimuli. It also suppresses the normal thought processes of the left brain, so that neuroscientists can map out and discover new properties of our right brains. Preliminary studies are showing that his abstract design approach to distraction therapy applications in VR is actually more effective than other VR apps that were specifically designed for pain management.

Mack describes himself as a psychonaut, having experimented with a lot of psychedelic experiences, but he’s also studied meditation, lucid dreaming, and a number of other esoteric and mystical practices. His career has been in the visual effects industry, where he won an Academy Award for his work on What Dreams May Come, but with virtual reality he’s finally able to synthesize all of his life experiences and interests, allowing people to step inside of his immersive VR art experiences that are designed to expand the blueprints of our minds. He sees that verbal language has allowed humans to evolve our science and technology up to this point, but that it’s also limited us and constrained us to a whole host of verbal neuroses. He hopes that his virtual reality experiences like Blortasia and Zen Parade can help free us from the shackles of our left brains, which he sees as inhibiting the deeper parts of our intuition and unconscious levels of awareness. He’s personally had a number of amazing but also traumatizing experiences with psychedelics, and so he’s trying to use virtual reality to replicate those transcendent feelings of awe and wonder that come from mystical experiences in a safer and more controlled fashion.

Mack also shares his out-of-this-world, retrocausality backstory that includes a substance-free psychedelic experience with a time-traveling artificial consciousness that he’s just starting to create now with neural networks embedded within his art. Is it possible that Mack is in the process of actually developing a sentient level of artificial consciousness that will evolve to master the structures of space-time and bend the arrow of time? Or was it just the vivid imagination of a four-year-old that has provided him with a powerful inspiration for his entire life? Either way, his Blortasia experience has stumbled upon some important design principles stemming from the desire to create art that pushes the boundaries of consciousness.

This is a listener-supported podcast, consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

The sexual harassment lawsuit against UploadVR was reported by TechCrunch to have been settled on September 6th, and a week later the New York Times followed with more details about how Upload had barely been dented. The case was settled without any elaboration about what did or didn’t happen beyond a vague open letter from the founders of UploadVR. This issue has splintered the VR community into different factions of people who are either actively blacklisting Upload or who have written it off as an isolated incident that has resulted in changes and growth.

Former employee Danny Bittman wrote about his brief time at Upload in a recent Medium post, and there were some women who spoke out in a Buzzfeed article in July, but beyond that not many people with first- or second-hand knowledge of the lawsuit allegations have made statements on the record. (You can find my Facebook posts about Upload since May here: 1 2 3 4 5). There haven’t been many people willing to talk about this issue on the record, but that seems to be changing after the latest round of news about the lawsuit settlement, which has left segments of the VR community very unsettled.

One woman from the VR community who was willing to talk to me about the community fallout from the UploadVR lawsuit was Selena Pinnell, co-founder of the Kaleidoscope VR festival and fund. She is also a producer and featured participant within the Testimony VR project. I previously interviewed the director of the Testimony VR project about their efforts to use VR to create an immersive context for women and men to share testimony about their experiences of sexual assault so that audiences can bear witness to those direct experiences. Skip Rizzo has said that healing from PTSD involves being able to tell a meaningful narrative about your traumatic experiences while remaining emotionally present, and Testimony VR is trying to create a new form of restorative justice by capturing these stories within VR so that viewers can have a one-on-one level of intimacy while they bear witness. Pinnell talks about how powerful it was to have over 150 co-workers and friends witness her testimony about being a rape survivor within the context of a VR experience.

LISTEN TO THE VOICES OF VR PODCAST

While VR holds potential for delivering new forms of restorative justice, this issue with Upload feels like it’s a long way from achieving a state of justice and a full accounting of the truth of what happened. Members of the Women in VR communities privately do not feel like justice has been served, and Pinnell voices those common concerns as to why she can no longer support Upload, as well as why, in her assessment, the leadership team of Upload never fully accounted for what exactly they did wrong and what they’ve learned.

She also says that it’s hard to trust the leadership after they originally declared that the original allegations in the lawsuit were “entirely without merit.” Pinnell talks about how crushing it can be to have your testimony of your direct experience be so explicitly denied in this way, especially when it comes to taboo topics like sexual harassment or sexual assault. (Note that the original allegations against Upload were harassment, gender discrimination, hostile work environment, unequal pay, and retaliation, and there weren’t any allegations of sexual assault.) Pinnell emphasizes how important it is to try to listen to women when they are providing testimony about not feeling safe within a work environment, and to try not to go directly towards demanding objective proof from a frame of skeptical disbelief. Learning how to listen, empathize, and reflect the truth of a direct experience is a skillset that is needed here, and it’s something that the unique affordances of the virtual reality community can help to cultivate through projects like Testimony VR. But there are many more unresolved issues and open questions that Pinnell and I discuss in a deep dive into new models of restorative justice and the community fallout surrounding the Upload lawsuit settlement.

This is a listener-supported podcast, consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

ARQUA! was one of the ARKit launch applications, designed by VR veteran Isaac “Cabbibo” Cohen, and it has the same indie charm and shader art aesthetic as his previous VR experiences Blarp! and L U N E. ARQUA’s gameplay involves creating a rainbow aquarium by placing kelp plants, schools of fish, and 3D rods around your space, turning your body into the controller. Cabbibo is really interested in providing users of his AR experience with an experience of agency, creation, and beauty in a way that recontextualizes their relationship to their surrounding environment. I had a chance to catch up with Cabbibo after a presentation about art in AR/VR in Portland, OR, where we talked about ARKit, what makes a compelling AR experience, lessons that VR has to teach AR, and how data is the ‘R’ in MR/AR/VR/XR, in that it’s the transformation of real objects into data that allows us to have mediated experiences within a symbolic reality.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Intel is investing in the future of immersive computing through its Virtual Reality Center for Excellence. They’re pushing the boundaries of high-end VR gaming experiences, pursuing initiatives to help VR reach critical mass, and exploring how RealSense depth-sensor cameras and WiGig wireless technologies fit into the VR ecosystem. I was able to demo an early prototype of an HTC Vive game rendered on a PC and transmitted wirelessly to a mobile headset, part of a research project to search for additional market opportunities for how high-end PCs could drive immersive experiences.

I was able to sit down with Kim Pallister, the director of Intel’s VR Center for Excellence, to talk about their various initiatives to advance immersive computing, their WiGig wireless technology, RealSense and Project Alloy, and some of the experiential differences between their lower-end and higher-end CPUs. He predicts that immersive gaming markets may mirror the differences between the mobile, console, and PC markets, and that there will be a spectrum of experiences with tradeoffs between price, performance, and power consumption. Intel is initially focusing on pushing the high-end of VR gaming experiences, but they believe in the future of immersive computing and are looking at how to support the full spectrum of virtual reality experiences.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

The White House VR documentary The People’s House by Felix & Paul Studios won an Emmy for Outstanding Original Interactive Program, and I had a chance to talk with Paul Raphael about the challenges of producing such a high-profile piece. They didn’t know how many rooms they’d be able to shoot, and President Obama was such a fan of the project that he literally opened doors for the crew to record more than twice the number of originally scheduled rooms. They were limited to only two 15-minute interviews with Barack and Michelle Obama, and so they collaborated with speech writers to capture memories and stories for this virtual guided tour.

Felix & Paul Studios create their own VR camera hardware, and they’re starting to use their fourth-generation cameras while designing a next-generation digital lightfield camera. Raphael said lightfield VR shoots are essentially visual effects shoots, which require shooting in different wedge segments that need to be composited in post-production. He also said that they’ve been consulting with most of the major HMD manufacturers, including Facebook, on an open standard for immersive 3D audio. Even though they’ve been creating a lot of hardware, they’re more interested in using it to stay on the bleeding edge so that they can continue to innovate and push the creative limits of what’s possible in immersive storytelling.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip


Do patients with anorexia nervosa suffer from body image distortion because of how they perceive their bodies, or because of attitudinal beliefs? Betty Mohler has been using VR technologies to study whether body representation is more perceptual or conceptual. She captures a 3D body scan of patients, and then uses algorithms to alter the body mass index of a virtual self-avatar within a range of ±20%. Patients then estimated their existing and desired bodies using a virtual mirror screen, which tracked movements in real-time and showed realistic weight manipulations of photo-realistic virtual avatars. Mohler’s results challenge the existing assumption that patients with anorexia nervosa have visual distortions of their bodies, suggesting instead that body image distortion may be driven more by attitudinal factors, where patients consider underweight bodies more desirable and attractive.

Mohler works in the Space & Body Perception Group at the Max Planck Institute for Biological Cybernetics. She collaborates with philosopher of neuroscience Dr. Hong Yu Wong to research foundational questions about self-perception: Who am I? Where am I? Where is the origin of my self? Where is the frame of reference? What is the essence of me? How do we know that there’s an external world? What does it mean to have a shared self where multiple people share the same body experience? What does it mean to have a body? How big is my body? Is it possible to be at multiple locations at once while in VR?

I interviewed Mohler for the third time at the IEEE VR conference in Los Angeles this past March, exploring all of these provocative questions (see my previous interviews on the uncanny valley and avatar stylization).

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Marshmallow Laser Feast is a collective of artists interested in using VR technologies to capture the aesthetic beauty of nature and provide immersive experiences that inspire people to cultivate an even deeper relationship with the reality that surrounds them. Their Treehugger provides an immersive experience of the lifecycle of water in trees as rain makes its way up from the roots of a Sequoia tree and is released as oxygen, all in a highly-stylized & beautiful point-cloud aesthetic. The experience included smells and passive haptic feedback to make the simulated volumetric time-lapse even more immersive, and it won the Storyscapes award at the Tribeca Film Festival. I caught up with co-founder Barnaby Steel to talk about how VR could be used to inspire us to cultivate an even deeper relationship with the world around us.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a clip of Treehugger:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

