There was a VR Village again this year at the SIGGRAPH conference where a lot of experimental interactive technologies were on display. Virtual reality was the common thread, but there were other immersive technologies as well, including prototype haptic suits and devices, a number of collaborative social games, and demos of redirected walking, eye tracking, facial retargeting, and augmented reality. There was also an entire VR storylab with different narrative experiences being shown. I had a chance to talk to program chair Denise Quesnel about some of the content themes and technologies at the SIGGRAPH VR Village this year, as well as the different technology trends that were emerging.

LISTEN TO THE VOICES OF VR PODCAST

Here’s the call for submissions for SIGGRAPH 2017.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Back in February, Amazon announced that it had purchased and forked Crytek’s CryEngine and made the full source code available as a free AAA game engine offering called Lumberyard. The only catch is that if you want to use any public cloud service, you have to use Amazon Web Services offerings.

At SIGGRAPH, I had a chance to talk with Hao Chen, a senior principal engineer for Amazon Lumberyard, about the Cloud Canvas visual scripting interface to AWS and the GameLift multiplayer offering. We also talked about some of the research and development areas such as integrated artificial intelligence offerings, natural language processing with Alexa, potential e-commerce solutions, and research into digital light field capture, compression, and delivery. Amazon wasn’t making any specific new product or gaming content announcements, but it’s clear that part of Amazon’s long-term strategy is to rely upon game developers using its public cloud services in order to fund and sustain future development of the Lumberyard game engine. You can hear more about some of the existing features and functionality of Lumberyard as well as some future research on today’s episode of the Voices of VR podcast.
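To give a rough sense of what the GameLift piece of that strategy looks like from a developer’s perspective, here’s a minimal sketch of a game backend requesting a multiplayer session through the AWS SDK for JavaScript. This is my own illustration rather than anything Amazon demoed, and the fleet ID is hypothetical:

```typescript
import * as AWS from "aws-sdk";

// Hypothetical sketch: a game backend asking GameLift for a new multiplayer session.
const gamelift = new AWS.GameLift({ region: "us-west-2" });

gamelift.createGameSession(
  {
    FleetId: "fleet-1234",        // hypothetical fleet of Lumberyard game servers
    MaximumPlayerSessionCount: 8, // how many players this session can host
    Name: "demo-session",
  },
  (err, data) => {
    if (err) {
      console.error("Failed to create game session:", err);
    } else {
      console.log("Session running at", data.GameSession?.IpAddress, data.GameSession?.Port);
    }
  }
);
```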

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

At Google I/O this year, Google announced the Daydream VR platform and mobile headset that will be coming to the latest Android phones later this year. There’s a DIY dev kit that you can start using today to develop Daydream-ready apps, and Google has also released a Google VR Unity SDK that includes a number of Daydream Labs Controller Playground examples to demonstrate different user interactions with the 3DOF controller.

I had a chance to catch up with Google VR’s Alex Faaborg at the Casual Connect conference, where we talked about VR design best practices, some of the early survey results from Google showing an average play time of 30 minutes per session, what can be learned from Pokémon Go, the differences between Tango and Daydream app design, social norms around using VR around other people, and the future of conversational interfaces.

LISTEN TO THE VOICES OF VR PODCAST

Here’s the presentation from Google I/O on Designing for Daydream:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

The artist Zenka has been documenting the evolution of virtual reality by making raku sculptures of head-mounted displays. She’s also created an interactive timeline of some of the major VR and AR HMDs. Technology has been progressing so quickly that looking back at cell phones from 10-20 years ago starts to feel like ancient history. Zenka feels the same way about VR and AR headsets as we start to see more patents like Sony’s smart contact lenses or Google’s cyborg eye implants.

I had a chance to catch up with Zenka at the Rothenberg Founder Field Day in May where we talked about her VR HMD art project, her other augmented reality art projects, some of her thoughts about identity and revisiting nostalgic memories in VR, and some of her other anthropological observations about this moment in history.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a video of some of Zenka’s recent AR installations at the Rothenberg River headquarters:

Here’s a picture from the 2014 IEEE VR conference of a collection of head-mounted displays curated by NASA’s Stephen Ellis.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

The Khronos Group announced before SIGGRAPH this year that the open glTF standard was gaining momentum among some of the key players within the graphics industry. glTF provides a standardized baseline and interchange format for delivering 3D meshes to different tools and services, and it’s been described as being analogous to the JPEG format for images. The traction behind an open glTF standard means that one of the fundamental building blocks for the metaverse is falling into place.

I had a chance to sit down with Khronos Group President Neil Trevett at SIGGRAPH, where he explained the significance of the emerging consensus around the glTF standard. He expands upon what glTF includes and what it doesn’t. For example, there are not (yet) any point clouds or light fields within glTF, but glTF is extensible. He also emphasized that previous open formats such as VRML and X3D included definitions of run-time behavior, whereas glTF is meant to be simply a general-purpose, lightweight container for 3D objects and textures. The code and logic for what to do with these assets is left to the application, which can be written in any language such as JavaScript, C#, C++, or other emerging languages.
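To make that division of labor concrete, here’s a minimal sketch of what the application side can look like, using three.js’s GLTFLoader in TypeScript. This is my own illustration rather than something Neil described, and the asset URL is hypothetical:

```typescript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader";

// The glTF file only carries the 3D objects and textures; everything about
// *how* the asset gets used lives in application code like this.
const scene = new THREE.Scene();
const loader = new GLTFLoader();

loader.load(
  "models/helmet.gltf",             // hypothetical asset URL
  (gltf) => scene.add(gltf.scene),  // add the loaded node hierarchy to our scene
  undefined,                        // progress callback (unused here)
  (error) => console.error("Failed to load glTF asset:", error)
);
```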

LISTEN TO THE VOICES OF VR PODCAST

Neil said that many major companies had been working independently on proprietary formats for transmitting 3D asset data, so agreeing on a common open standard prevents fragmentation and siloed content that can only be understood by a single application. glTF is solving a different problem than authoring formats such as COLLADA, which enables the exchange of 3D objects between all of the major authoring programs; glTF instead focuses on the efficient transmission of 3D assets to a run-time application, a much simpler problem. The glTF spec was released by Khronos in December 2015, and the feedback from a growing number of companies such as Oculus and OTOY has been positive.

There are extensions being developed for glTF, such as physically-based rendering to compactly describe realistic material properties. But Neil emphasized that they want to keep the initial glTF specification lean and simple in order to make it easy to implement and to maximize adoption. They’ll be paying attention to industry adoption, and popular extensions can be rolled into future versions of the official glTF specification.
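As a rough sketch of how that layering works, the top level of a glTF file is a JSON document whose lean core describes scenes, nodes, meshes, and the buffers behind them, with optional capabilities declared as named extensions. The TypeScript shape below is simplified and illustrative, not the normative spec:

```typescript
// Simplified, illustrative shape of a glTF JSON document (not the normative spec).
interface GltfDocument {
  asset: { version: string };           // which version of the spec the file targets
  scenes: unknown;                      // node hierarchies that make up the content
  nodes: unknown;                       // transforms referencing meshes and cameras
  meshes: unknown;                      // geometry, pointing into accessors
  accessors: unknown;                   // typed views into the binary buffer data
  bufferViews: unknown;
  buffers: unknown;                     // the raw vertex, index, and image payloads
  materials?: unknown;
  textures?: unknown;
  extensionsUsed?: string[];            // e.g. a physically-based rendering extension
  extensions?: Record<string, unknown>; // extension-specific data lives here
}
```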

There’s a glTF validator that’s already available, and for more information, be sure to check out the glTF resource page on the Khronos Group’s website.

UPDATE: I’ve incorporated a number of clarifications from Neil into this article.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Visionary VR premiered their Mindshow interactive storytelling platform at VRLA, and it was quite a unique experience to be able to record an improv acting session within the body of a virtual character and then step outside of myself to watch my performance. I’ve recorded myself with a 2D camera plenty of times before, but there’s something qualitatively different about being able to watch my body movements while immersed within a spatial environment.

The core mechanic of reacting to a story prompt was simple and intuitive, and the number of variations in how a scene plays out is only limited by human creativity. The initial Mindshow demo at VRLA had a simple linear capture where you could layer additional characters into a scene while previous takes play back to you. You could develop an entire story by rapidly iterating different performances of yourself, much like a looping musician might construct a song.

But the true power of Mindshow will be in the collaborative features where you’ll be able to communicate with your friends with the power of the direct experience of a story, rather than by using abstracted and symbolic language. You could pass a scene back and forth to each other like an asynchronous improv performance, or you could eventually interact in real-time, once that feature is implemented.

I had a chance to catch up with Visionary VR and VRLA co-founder Cosmo Scharf, where we talked about some of the inspiration behind Mindshow, including the Buddhist philosophy of Alan Watts and the post-symbolic, direct experience ideas of Terence McKenna.

LISTEN TO THE VOICES OF VR PODCAST


Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Cerevrum Inc. is building an ambitious educational platform, starting with training people to become better public speakers with Speech Center VR. The basic mechanic is that you stand in a variety of different virtual rooms in front of an animated crowd of virtual listeners as you give a presentation. The app is designed to help people get over their fears of public speaking, but there are many other learning opportunities in a number of upcoming courses featuring public speaking coaches.

I had a chance to catch up with CEO Natasha Floksy and COO Olga Peshé to talk about designing their two educational applications: Speech Center VR and their brain-training application Cerevrum.

LISTEN TO THE VOICES OF VR PODCAST

Natasha has an art degree, and a strong design aesthetic is imbued within every dimension of Speech Center VR, from the different rooms to the user interface to the highly customizable avatar system, which is one of the more impressive aspects of the experience. At the moment, you are a disembodied ghost, and so you never fully appreciate your own selected digital identity. But there is a wide array of identity choices, along with many different features and functionality within this app.

You can download Speech Center VR for free, upload a presentation PDF, and record yourself talking to a room full of virtual strangers. There are also interactive social components, just in case you want to hold an intimate meetup there. There are a number of in-app purchases for getting a chance to do some practice training within a variety of other public speaking engagements. There’s actually a surprising amount of functionality included within the experience, including a supplemental eye training application to help improve your vision.

There are a number of small improvements that could be made, including having a monitor for the presenter, improving the social behaviors of the virtual audience to be a little less uncanny, and having a more intuitive way to advance slides than swiping down on the side of the Gear VR headset. But overall, Cerevrum Inc. has built a robust educational platform with a lot of room to grow into many specific domains.


Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

SIGGRAPH is a conference where GPU manufacturers announce new versions of professional graphics cards designed for the visual effects industry and for high-end virtual reality content producers in architectural visualization and engineering. NVIDIA announced two new Pascal-architecture cards: the Quadro P6000 with a 24GB frame buffer and a staggering 3,840 CUDA cores, as well as the Quadro P5000 with 2,560 CUDA cores and a 16GB frame buffer.

I had a chance to catch up with Bob Pette, the general manager of the ProVis business unit at NVIDIA, who talked to me about their new Quadro GPUs, VR-related software announcements, and updates to their physically-based Iray renderer. NVIDIA is moving towards being able to do live interactive ray tracing, but they’re not there yet since it’s still a very computationally intense process. They were showing demos of changing a stationary camera position within a photorealistically rendered room, with the option to choose between four different lighting conditions.

LISTEN TO THE VOICES OF VR PODCAST

Bob also talks about how the parallel-processing capabilities of these NVIDIA GPUs are enabling a lot of innovation within the deep learning and machine learning fields. He sees a trend of software tools starting to think about how to leverage GPU processing in order to add artificial intelligence features within content creation software. For example, Bob sees that the perceptual capabilities of machine learning techniques that leverage the GPU might be able to help optimize ray tracing algorithms in reaching a “good enough” visual threshold, and to detect ray tracing errors. He also acknowledged that the computational demands for training neural networks are still high enough that he sees them being primarily trained through cloud-based computing, with supplementary local GPU updates and tuning.

There are still a lot of open problems to solve before we see live, interactive ray tracing. But what’s clear is that NVIDIA’s GPU technologies are at the center of catalyzing the current groundswell of virtual reality technologies and machine learning innovations.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Team training scenarios are often difficult to schedule due to the logistics involved in coordinating many different people’s schedules. One solution has been to use virtual humans as stand-ins for actual humans in team training scenarios where the conversation is mediated by Wizard of Oz interactors who are puppeting the virtual humans. The goal is to recreate a sense of social presence so that the person being trained forgets that they are interacting with virtual humans instead of actual humans.

LISTEN TO THE VOICES OF VR PODCAST

There still needs to be a human in the loop to interpret and respond to the primary person being trained, but the human interactor operating the virtual human can respond by selecting from a number of pre-recorded scripted responses. Even though real humans are almost always preferred, using virtual humans can give more accuracy and repeatability to the training scenario, and provide similarly strong results with more efficiency.

Andrew Robb is a post-doc at Clemson University, and he’s been researching how to use virtual humans in these types of team training scenarios. Specifically, he talks about training nurses to stand up to surgeons who want to proceed with a surgery despite replacement blood not being ready yet, which would put the patient’s life in danger if there’s a complication in the preparation process. These are complicated social dynamics, and if a nurse isn’t comfortable speaking up, it could result in a patient dying. So Andrew has been focused on how to recreate a sense of social presence using virtual humans in order to create a team social dynamic that allows nurses to get practice and training so that they have the confidence to speak up against someone on their team who wants to violate safety protocol.

Andrew mentions a paper by Frank Biocca and Chad Harms titled “Defining and Measuring Social Presence: Contribution to the Networked Minds Theory and Measure,” which sets out some definitions for social presence and a networked minds theory for understanding the mechanics of social presence in a virtual environment. They say: “Most succinctly defined as a ‘sense of being with another in a mediated environment’, social presence is the moment-to-moment awareness of co-presence of a mediated body and the sense of accessibility of the other being’s psychological, emotional, and intentional states.”

I had a chance to catch up with Andrew at the IEEE VR conference, where he talked about his experiments in using virtual humans within team training scenarios, research into how humans self-disclose more information to virtual humans, how gaze behavior could provide an objective measure for social presence, and more details about other theories of social presence and co-presence that describe how we form models of other people’s minds, feelings, and motivations.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



One of the gold standards of a VR experience is being able to achieve presence, but presence is an elusive concept to precisely define. Mel Slater is one of the leading researchers into presence, and he says that it’s a combination of the Place Illusion and the Plausibility Illusion. Richard Skarbez elaborates by saying that the Place Illusion represents the degree of immersion that you feel by being transported to another place, and the Plausibility Illusion is the degree to which you feel that the overall scene matches your expectations for coherence.

Anthony Steed is a professor in the Virtual Environments and Computer Graphics group in the Department of Computer Science, University College London. Anthony studied under Mel Slater, and he was a co-author of one of the major presence surveys referred to as the Slater, Usoh & Steed survey in the “Depth of Presence in Virtual Environments” paper. Anthony was also the winner of the 2016 Virtual Reality Technical Achievement Award presented at the IEEE VR conference this year.

I had a chance to catch up with Anthony at the IEEE VR conference, where he talks about doing distributed presence research with a Gear VR, the role of plausibility in presence, how social presence fits into Mel’s two illusions of presence, and some of the discussions about sharing knowledge between game developers and academics that happened at the GDC and IEEE VR conferences this year.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a video of the presence experiment that Anthony conducted on the Gear VR, where he found that asking participants to tap on their body in time with the music, without having their hands tracked, had a negative impact on embodiment.

Here’s the 2015 IEEE VR poster from Richard Skarbez talking about his presence research into the Place Illusion and Plausibility Illusion:

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip