Cymatic Bruce Wooden talks about the latest developments in the AltSpaceVR social application now that they’ve opened it up to a public beta. One of the things that Bruce mentioned is that people often think about social media when they hear about social VR, and he suggests that perhaps a more descriptive term would be “Community VR.”

As AltSpaceVR prepares for the consumer launch of virtual reality HMDs, building communities is going to be one of the areas where they’re focusing their attention. They’re also going to continue adding features and functionality to push the envelope on different kinds of interactions within virtual environments.

AltSpaceVR has been implementing more expressive gesture controls by using the Leap Motion and Kinect, and they’re starting to implement the Perception Neuron suits as well. I’ve personally noticed that there can be a power differential, with more social capital going to those who have access to more technology because it enables them to be more expressive and command the conversation. Bruce’s observation is that more expressive gestures seem to improve the experience for everyone involved, but he agrees that the power differential is something to watch out for. He suggests that perhaps in the future special guest speakers will come to the AltSpaceVR headquarters and get geared up with all of the latest technologies.

One of the other innovations that AltSpaceVR has been pioneering is their teleportation locomotion technique. It’s a very elegant solution for people who are susceptible to motion sickness caused by VR locomotion. Yet Bruce warns that there are downsides, and new social norms are developing, because it is weird and awkward to be in a group conversation and then just phase out and disappear without a trace.
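
AltSpaceVR hasn’t shared their implementation details, but the general teleport pattern is easy to sketch with three.js: raycast from the user’s gaze or controller toward a floor mesh, and move the camera rig to the hit point (usually with a quick fade to hide the jump). The names below (cameraRig, camera, floor) are hypothetical placeholders, and this is only a minimal sketch of the technique, not AltSpaceVR’s code:

```typescript
import * as THREE from 'three';

// Hypothetical scene objects -- stand-ins, not AltSpaceVR's actual code.
declare const cameraRig: THREE.Object3D;       // parent object of the VR camera
declare const camera: THREE.PerspectiveCamera; // the user's view
declare const floor: THREE.Mesh;               // teleportable ground plane

const raycaster = new THREE.Raycaster();

// Teleport toward wherever the user is looking (screen-center gaze here).
function teleportToGazeTarget(): void {
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hits = raycaster.intersectObject(floor);
  if (hits.length === 0) return;

  const target = hits[0].point;
  // Keep the rig's current height; only translate along the ground plane.
  cameraRig.position.set(target.x, cameraRig.position.y, target.z);
  // A real implementation would fade the view out and back in around this
  // move to avoid a jarring visual discontinuity.
}
```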

Bruce talks about the evolution of the user flow, and how they initially hid the action bar based upon 2D design standards, but it was difficult for people to find the controls, so they exposed it. They’ve also been optimizing the sound design in order to find levels that are comfortable and have the right amount of decay.

Bruce also talks about the choice to go with robots instead of more human-like characters. They experimented with avatars ranging from photorealistic to more abstract, and people felt more emotionally connected to the abstract avatars. The more photorealistic avatars tended to have creepy, dead eyes and unexpressive faces.

There was also a recent internal 48-hour hackathon using their Web SDK, which allows you to bring interactive, 3D web content into virtual reality via JavaScript and three.js. They developed a Dungeons & Dragons tabletop application, hand puppets, and a tone garden. They also brought in some external developers who created a multi-player Flappy Bird clone called Floppy Dragon, where others can try to crash the dragon. They’ll also be searching for other developers to come on board and make some multi-player experiences with their Web SDK.
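
I haven’t dug into the specifics of AltSpaceVR’s Web SDK calls, but the content itself is built with ordinary three.js. As a rough sketch of the kind of interactive 3D web content involved, here’s a plain three.js scene with a clickable, spinning cube; the SDK-specific setup for how the scene actually gets rendered inside AltSpaceVR is intentionally omitted:

```typescript
import * as THREE from 'three';

// A minimal three.js scene: the kind of content the Web SDK renders in-world.
// (The AltSpaceVR-specific renderer/enclosure setup is intentionally omitted.)
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

let spinning = true;
// Toggle the spin on click -- a stand-in for in-world interaction.
window.addEventListener('click', () => { spinning = !spinning; });

function animate(): void {
  requestAnimationFrame(animate);
  if (spinning) cube.rotation.y += 0.02;
  renderer.render(scene, camera);
}
animate();
```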



Anyone who’s gone through a calibration process in order to get eye tracking working within an HMD knows how annoying it can be. Yuta Itoh is working on a number of different techniques to automate this process, and is a leader in this area.

Yuta is a Ph.D. Fellow at the Chair for Computer Aided Medical Procedures & Augmented Reality in Munich, Germany. He specializes in augmented reality, since calibration is much more important when you’re overlaying virtual objects within a mixed reality context through an optical see-through, head-mounted display.

Academic VR researchers submit their latest research to present at the IEEE VR conference, and if it sufficiently advances the field, then it’s published in the IEEE Transactions on Visualization and Computer Graphics (TVCG) journal. Each year, TVCG publishes the accepted long papers within a special Proceedings Virtual Reality edition.

Yuta was able to have three co-authored papers accepted as TVCG journal papers, which is quite an accomplishment. Here are his three papers:

Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays

Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays

Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique

I found it interesting that Yuta and many other AR researchers draw a lot of inspiration from the 3D user interfaces shown in science fiction blockbusters like Iron Man and Minority Report.

This calibration work that Yuta is doing could help make eye tracking within VR applications a lot more user friendly, and more resilient to shifting movements of the VR HMD. One of the complaints that Mark Schramm had about FOVE in a recent podcast discussion is that if the HMD moves at all, then it’ll ruin the eye tracking calibration. Some of the light-field correction and corneal reflection calibration techniques that Yuta is working on could provide a way to automatically adjust the calibration for any movement of the HMD or for any new user.


André Lauzon is a producer at Cirque du Soleil Média and head of their Digital Studio. At SVVRCon, he was giving a preview of an elaborately choreographed 10-minute, 360-degree video called “INSIDE THE BOX OF KURIOS™ – Cabinet of Curiosities.” Kurios has now been released on the Oculus VR store and is free to download.

The majority of the experience consists of a lengthy single-take shot that puts you on the stage of a Cirque du Soleil performance that was created specifically for the virtual reality medium.

VRDigest’s Ian Hamilton calls Kurios, “The most technically accomplished 360 stereoscopic video yet released, it features perfect stitching to create a seamless wraparound stage, which the cast fills with comedy, music and acrobatics.” I’d have to agree with that, and it felt like one of those elaborate OK Go music videos that gets more and more impressive as you realize how much coordination and rehearsal this must have taken to create. In fact, André says that it took over six months in all to produce, and it’s definitely the most impressive 360° video that I’ve seen to date.

Félix & Paul Studios originally collaborated with Cirque du Soleil over a three-week period to produce a short segment for the out-of-box experience video for the Gear VR. André says that Cirque didn’t feel like they were able to fully explore the VR medium in that short timeline, and with some support from Samsung they were able to invest more time and energy in creating Kurios.

The press release for the Kurios experience provides the following description:

With INSIDE THE BOX OF KURIOS – Cabinet of Curiosities from Cirque du Soleil, Cirque du Soleil Média and Félix & Paul Studios have created an original virtual reality experience that immerses the viewer in a mysterious and fascinating realm that disorients viewers’ senses and challenges their perceptions. Just like the theatrical version of KURIOS, INSIDE THE BOX OF KURIOS transports the viewer into the curio cabinet of an ambitious inventor who defies the laws of time, space and dimension in order to reinvent everything around him. The virtual reality version allows anyone with a Samsung Gear VR and compatible Samsung smartphone to immerse themselves in a world that is an ingenious blend of unusual curiosity acts and stunning acrobatic prowess, showing that anything is possible through the power of imagination.

André says that the proprietary VR production platform from Félix & Paul Studios, which includes both stereoscopic 360° recording and processing technology, is the best video solution that he has seen so far.

One of the really interesting insights I got from this interview is that André said that Cirque du Soleil has always been on the cutting edge of experimenting with different mediums like 3D video, but that those experiments have always fallen flat. He said that it’s really difficult to capture the kinetic energy of a live performance, but that virtual reality is already showing a lot of promise in this regard.

André also talks about how live-action 360° video directors could learn a lot from Cirque du Soleil, because they’ve had a lot of experience creating productions where there are multiple points of primary and secondary focus. Directors of these productions like to vary the pacing and change the primary and secondary focus over time in order to keep it engaging and interesting. It would take multiple viewings to really see all of the action that is happening within Kurios. As a viewer, you have a lot of choices as to what to pay attention to, and 360° audio cues help direct your attention to the primary focus throughout different points of the video.

It’s still early days for these types of productions, and creating a fully immersive emotional arc beyond the spectacle of incredible circus performances remains an open challenge, especially since the performances are largely non-verbal. The Kurios production does contain either a foreign language or “Cirque-speak” gibberish, but it certainly isn’t critical to the overall storyline.

It’s still largely unknown how appealing this will be to a mass consumer audience, and André is anxious to get a wider range of feedback once the big Gear VR consumer marketing push launches this fall. I think that it’s certainly a compelling experience, so be sure to check out Kurios via the Oculus VR Store if you want to see some of the best 360° video that’s out there today.


The SubPac tactile bass system gave me one of the most viscerally immersive experiences of my life, and it really blew my mind. I felt like I was feeling pounding bass on the level of a dance club, and yet nobody around me could hear a thing.

I had a chance to demo a new SubPac designed specifically for VR on the streets of San Jose after an SVVRCon party. Lead Bass Officer Zach Jaffe was doing some guerrilla marketing, giving VR developers demos, and it was one of the most immersive experiences I had at SVVRCon — and I wasn’t even using a VR HMD.

The SubPac simply takes any audio output, isolates the frequencies from 5 Hz to 130 Hz, and converts them into vibrations in their wearable device. Your ears have difficulty hearing frequencies that low, and so we’re left to feel them in our bodies.
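
SubPac’s own signal chain isn’t public, but the basic idea of isolating that low band from an ordinary audio signal can be sketched in a few lines with the Web Audio API. This is just an illustration of the concept, not SubPac’s implementation; in practice you feed the full mix to the unit and it does its own filtering:

```typescript
// Sketch: isolate the sub-bass band (roughly 5-130 Hz) from an <audio> element.
// This only illustrates the concept; the SubPac hardware does its own filtering.
const audioElement = document.querySelector('audio')!;   // hypothetical audio tag on the page
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audioElement);

// A low-pass filter with a ~130 Hz cutoff leaves mostly the felt-not-heard band.
const lowpass = ctx.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 130;

// In a tactile setup this band would drive the transducers; here we just
// route it to the default output so the sketch is runnable in a browser.
source.connect(lowpass);
lowpass.connect(ctx.destination);
```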

Zach told me that there are some VR manufacturers who call SubPac one of their favorite VR peripherals just because it’s so elegant and easy to implement. You literally just feed the audio track that’s already in the experience into the SubPac receiver, and then put on the wearable unit. And that’s it. No SDK or any other specific integration is needed. And yet the benefits to immersion and presence from using something like the SubPac are pretty incredible.

I’m really looking forward to hearing more about the VR-specific products from SubPac, and I’d highly recommend trying to find a demo of it at your next VR meetup. It’s really one of the most transformative experiences I’ve had in VR, and it speaks to the power of being able to use sound as a source of haptic feedback. #spreadBass


Henry Fuchs has been involved with virtual reality technologies for 45 years, ever since 1970 when he first heard about Ivan Sutherland’s Sword of Damocles from his 1968 paper titled “A head-mounted three-dimensional display.” He talks about traveling to the University of Utah to study with Ivan Sutherland, and how he was inspired to work on his thesis on using lasers to capture the depth of 3D objects after watching some of Sutherland’s students hand-digitize his VW Bug into polygon shapes.

Fuchs has also recently been working on telepresence applications of VR, and he talks about some of the open problems and challenges in creating a compelling telepresence implementation within a virtual environment.

In this interview, Fuchs provides a lot of really interesting insights into the history of virtual reality, ranging from those first interactions with Ivan and how the Sword of Damocles came about to how VR has been sustained over the years. He points out the importance of flight simulation in the history of VR, and how much more robust computer-generated flight simulators were than the model-train approach of building physical models filmed with cameras.

Overall, Fuchs is full of really interesting insights about the history of computer graphics and some of the major milestones that virtual reality has had over the years.


Aldis Sipolins says that brain training is broken, and that VR can help fix it. He suspects that how our brains work while immersed within virtual environments more closely resembles how they work in real life. But at this point, it’s really difficult to prove that doing brain training tasks within a 2D context will “transfer” to improving overall cognitive skills. Aldis hopes to change that with his VR brain training game called Cerevrum, which has the tagline “Not training. Learning.”

Aldis is finishing his Ph.D. in Cognitive Neuroscience at the University of Illinois Urbana-Champaign, where he’s been researching videogame-based brain training to enhance cognition. He was giving demos of his Cerevrum game at SVVRCon, and he hopes to eventually be able to scientifically show that doing these types of brain training exercises within VR has benefits that are transferable to our everyday lives.

Aldis was hesitant to hype up any capabilities of Cerevrum because at this point it’s largely unproven. He’d actually prefer not to refer to it as a brain training application, but rather as a game that can captivate hardcore gamers like himself. He wants to create a game that’s both cognitively challenging and fun, because if it’s not fun to play, then it’s ultimately not going to succeed with its initial audience of gamers.

He says that given the choice between doing something that we’re good at and something we’re bad at, we’ll usually choose to do what we’re good at. By using advanced machine learning on the backend of Cerevrum, he hopes that the game will be able to detect the areas where we’re weak and then help us improve on them. Eventually we’ll be able to quantify our abilities in these different cognitive areas and compare ourselves with our friends.
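
Cerevrum’s backend isn’t public, and “advanced machine learning” could mean many things, but the core adaptive loop Aldis describes can be sketched very simply: keep a running estimate of performance per cognitive skill and bias the next round toward the weakest one. The skill names and scoring below are purely illustrative, not Cerevrum’s actual model:

```typescript
// Illustrative only -- not Cerevrum's actual model or skill taxonomy.
type Skill = 'workingMemory' | 'attention' | 'spatialReasoning';

// Exponentially weighted running score per skill, in [0, 1].
const scores: Record<Skill, number> = {
  workingMemory: 0.5,
  attention: 0.5,
  spatialReasoning: 0.5,
};

const ALPHA = 0.2; // how quickly recent rounds outweigh older ones

// Update the estimate for a skill after a round (result in [0, 1]).
function recordRound(skill: Skill, result: number): void {
  scores[skill] = (1 - ALPHA) * scores[skill] + ALPHA * result;
}

// Pick the weakest skill to emphasize in the next round.
function nextSkillToTrain(): Skill {
  return (Object.keys(scores) as Skill[]).reduce((weakest, s) =>
    scores[s] < scores[weakest] ? s : weakest
  );
}

// Example: a poor attention round nudges the game toward attention tasks.
recordRound('attention', 0.2);
console.log(nextSkillToTrain()); // -> 'attention'
```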

To me, what Aldis is doing with Cerevrum is one of the most exciting possibilities for the potential of virtual reality. He says that our brains display the most neuroplasticity while we’re in a flow state, and so being completely immersed and engaged within a game in a virtual environment might have the capability to rewire and expand the capacity of our brains in a way that transfers into our everyday lives. The potential cognitive improvements will be different for every person, and he’s looking forward to continuing to develop the game and doing the research necessary to scientifically validate its effectiveness.

Here’s a video trailer from the Cerevrum site:

Aldis said that Palmer Luckey tried out the game, made it to Wave #14, and really enjoyed playing it.


Joe Ludwig is on the VR team at Valve, and was part of the original group with Michael Abrash that initiated the experiments into virtual reality and wearable technologies at Valve around January 2012. Joe says that Valve’s ultimate goal is to eliminate the gatekeepers in VR and to foster a more open ecosystem like PC gaming. While Valve would love it if VR developers used Steam to distribute their VR apps and experiences, there’s no requirement that forces Vive developers to distribute only via Steam.

Joe talks about the different terms that Valve uses to describe their VR initiatives and products:

  • SteamVR is the umbrella term for all of Valve’s VR efforts.
  • OpenVR is the API, SDK and runtime that “allows access to VR hardware from multiple vendors without requiring that applications have specific knowledge of the hardware they are targeting.”
  • HTC Vive is the virtual reality head-mounted display.
  • Lighthouse is the tracking technology that they hope becomes an open standard.

Joe says that there will not be a lot of time between the Vive’s developer edition and the consumer release, so his expectation is that most of the developers who weren’t initially selected will not be able to start developing on the Vive until the consumer release comes out in the winter of 2015.

Some of the other topics covered by Joe include:

  • SteamVR Plugin for Unity
  • Valve’s goal is to eliminate the gatekeepers in VR to make it more open like PC gaming
  • Developers will not be forced to distribute their VR apps via Steam
  • The best-case scenario for Lighthouse is to open it up and have other hardware manufacturers adopt it and start making it available in public spaces.
  • Some challenges for large spaces with Lighthouse
  • The Demo Room at #SteamDevDays and how Valve worked over the past year to turn that into the Vive product
  • Michael Abrash and a few other people at Valve started working on wearable displays around January 2012
  • Place Illusion & Plausibility Illusion within VR. “The VR Giggle”
  • Some of the more popular Vive VR demos: Job Simulator, Tilt Brush, Google Earth demo
  • The application process for Vive dev kits. There’s not much time between the developer edition and the consumer release, so their expectation is that most other developers will start with the consumer release.
  • Everything that Valve builds is iterated on with monitored play tests, and so that’s their hardware QA strategy
  • Joe is looking forward to being in places that he couldn’t otherwise be, sharing that with other people, and going through rich and adventurous experiences
  • Joe hopes that interactions between humans will become more positive with VR and that it’ll change online behavior because it’ll communicate more of our humanity

More information about applying for a Vive dev kit can be found here.



Dr. Martin Breidt is a research technician at the Max Planck Institute for Biological Cybernetics. His bio page says that he’s part of the Cognitive Engineering group where they “develop and use systems from Computer Vision, Computer Graphics, Machine Learning with methods from psychophysics in order to investigate fundamental cognitive processes.”

Martin only had time for a very quick five-minute chat, but this was enough time for him to give me some pointers to his research about the uncanny valley effect, as well as to some work that is being done to capture facial animations while wearing a VR HMD. This led me to learn a lot more about the research that Oculus is doing in order to capture human expressions while wearing a VR HMD.

Martin named Hao Li as someone doing very important work on predicting facial expressions from partial information based upon statistical models. Hao is an assistant professor of Computer Science at the University of Southern California, and he has a paper titled “Unconstrained Realtime Facial Performance Capture” at the upcoming Conference on Computer Vision and Pattern Recognition (CVPR). Here’s the abstract:

We introduce a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor. Our framework provides uninterrupted 3D facial tracking, even in the presence of extreme occlusions such as those caused by hair, hand-to-face gestures, and wearable accessories. Anyone’s face can be instantly tracked and the users can be switched without an extra calibration step. During tracking, we explicitly segment face regions from any occluding parts by detecting outliers in the shape and appearance input using an exponentially smoothed and user-adaptive tracking model as prior. Our face segmentation combines depth and RGB input data and is also robust against illumination changes. To enable continuous and reliable facial feature tracking in the color channels, we synthesize plausible face textures in the occluded regions. Our tracking model is personalized on-the-fly by progressively refining the user’s identity, expressions, and texture with reliable samples and temporal filtering. We demonstrate robust and high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data. Our system works in everyday environments and is fully unobtrusive to the user, impacting consumer AR applications and surveillance.
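
The full pipeline in the paper is obviously far more involved, but one ingredient it names, an exponentially smoothed and user-adaptive tracking model used as a prior for rejecting occluded or outlier measurements, is easy to illustrate in isolation. The sketch below smooths a single tracked 3D landmark and falls back to the prediction when a new measurement strays too far from it; the thresholds and structure here are purely illustrative, not the paper’s method:

```typescript
// Illustrative sketch of exponential smoothing as an outlier-robust prior
// for one tracked 3D landmark. Not the paper's actual algorithm.
type Vec3 = { x: number; y: number; z: number };

const ALPHA = 0.3;          // smoothing factor: higher = trust new data more
const OUTLIER_DIST = 0.05;  // reject measurements farther than this (meters)

let estimate: Vec3 | null = null;

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Blend each new measurement into the running estimate, unless it looks
// like an occlusion/outlier, in which case keep the smoothed prediction.
function update(measurement: Vec3): Vec3 {
  if (estimate === null) {
    estimate = { ...measurement };  // first frame: adopt the measurement
    return estimate;
  }
  if (distance(measurement, estimate) > OUTLIER_DIST) {
    return estimate;                // treat as occluded: fall back to the prior
  }
  estimate = {
    x: (1 - ALPHA) * estimate.x + ALPHA * measurement.x,
    y: (1 - ALPHA) * estimate.y + ALPHA * measurement.y,
    z: (1 - ALPHA) * estimate.z + ALPHA * measurement.z,
  };
  return estimate;
}
```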

Here’s a video that goes along with the Unconstrained Realtime Facial Performance Capture paper for CVPR 2015.

Hao Li is also the lead author of an upcoming SIGGRAPH 2015 paper that is able to capture human expressions even while wearing a VR HMD:

Facial Performance Sensing Head-Mounted Display
Hao Li, Laura Trutoiu, Pei-Lun Hsieh, Tristan Trutna, Lingyu Wei, Kyle Olszewski, Chongyang Ma, Aaron Nicholls
ACM Transactions on Graphics, Proceedings of the 42nd ACM SIGGRAPH Conference and Exhibition 2015, 08/2015

Three of the co-authors of the paper work at Oculus Research, including Laura Trutoiu, Tristan Trutna & Aaron Nicholls. Laura was supposed to present at the IEEE VR panel on “Social Interactions in Virtual Reality: Challenges and Potential,” but she was unable to make the trip to southern France. She was going to talk about faces in VR, and had the following description of her talk:

Faces provide a rich source of information and compelling social interactions will require avatar faces to be expressive and emotive. Tracking the face within the constraints of the HMD and accurately animating facial expressions and speech raise hardware and software challenges. Real-time animation further imposes an extra constraint. We will discuss early research in making facial animation within the HMD constraints a reality. Facial analysis suitable for VR systems could not only provide important non-verbal cues about the human intent to the system, but could also be the basis for sophisticated facial animation in VR. While believable facial synthesis is already very demanding, we believe that facial motion analysis under the constraints of an immersive real-time VR system is the main challenge that needs to be solved.

The implications of being able to capture human expressions within VR are going to be huge for social and telepresence experiences. It’s pretty clear that Facebook and Oculus have a lot of interest in solving this difficult problem, and it looks like we’ll start to see some of the breakthroughs that have been made at SIGGRAPH in August 2015, if not sooner.

As a sneak peek, one of Hao Li’s students, Chongyang Ma, has a photo on his website that shows an Oculus Rift HMD with a camera rig attached in order to do facial capture.


Okay, back to this very brief interview that I did with Martin at IEEE VR. Here’s the description of Martin’s presentation at the IEEE VR panel on social interactions in VR:

Self-Avatars: Body Scans to Stylized Characters
In VR, avatars are arguably the most natural paradigm for social interaction between humans. Immediately, the question of what such avatars really should look like arises. Although 3D scanning systems have become more widespread, such a semi-realistic reproduction of the physical appearance of a human might not be the most effective choice; we argue that a certain amount of carefully controlled stylization of an avatar’s appearance might not only help in coping with the inherent limitations of immersive real-time VR systems, but also be more effective at achieving task-specific goals with such avatars.

Martin mentions a paper titled Face Reality: Investigating the Uncanny Valley for Virtual Faces that he wrote with Rachel McDonnell for SIGGRAPH 2010.

Here’s the introduction to that paper:

The Uncanny Valley (UV) has become a standard term for the theory that near-photorealistic virtual humans often appear unintentionally eerie or creepy. This UV theory was first hypothesized by robotics professor Masahiro Mori in the 1970’s [Mori 1970] but is still taken seriously today by movie and game developers as it can stop audiences feeling emotionally engaged in their stories or games. It has been speculated that this is due to audiences feeling a lack of empathy towards the characters. With the increase in popularity of interactive drama video games (such as L.A. Noire or Heavy Rain), delivering realistic conversing virtual characters has now become very important in the real-time domain. Video game rendering techniques have advanced to a very high quality; however, most games still use linear blend skinning due to the speed of computation. This causes a mismatch between the realism of the appearance and animation, which can result in an uncanny character. Many game developers opt for a stylised rendering (such as cel-shading) to avoid the uncanny effect [Thompson 2004]. In this preliminary work, we begin to study the complex interaction between rendering style and perceived trust, in order to provide guidelines for developers for creating plausible virtual characters.

It has been shown that certain psychological responses, including emotional arousal, are commonly generated by deceptive situations [DePaulo et al. 2003]. Therefore, we used deception as a basis for our experiments to investigate the UV theory. We hypothesised that deception ratings would correspond to empathy, and that highly realistic characters would be rated as more deceptive than stylised ones.

He mentions the famous graph by Masahiro Mori, the robotics researcher who first proposed the concept back in 1970 in the journal Energy. That article was originally in Japanese, but I found this translation of it.

I have noticed that, as robots appear more humanlike, our sense of their familiarity increases until we come to a valley. I call this relation the “uncanny valley.”

Martin isn’t completely convinced that the conceptualization of the uncanny valley that Mori envisioned back in 1970 is necessarily the correct one. He’s interested in continuing to research and empirically measure the uncanny valley effect through experiments, and hopes to eventually come up with a data-driven model of how to stylize virtual humans within VR environments so that they best match our expectations and feel the most comfortable. At the moment, this job is being done through the artistic intuitions of directors and artists within game development studios, but Martin says that this isn’t scalable for everyone. So he intends to continue researching and better understanding this uncanny valley effect.

Matt Oughton is the EMEA sales manager for Vicon, and he talks about the Vicon motion capture system that they were demonstrating at the IEEE VR conference. Vicon has been in the motion capture business since 1984, and he talks about some of the specifications and use cases for their systems. Vicon cameras are used for virtual reality tracking and for movie and gaming entertainment, as well as in life sciences, engineering applications, and industrial design reviews.

He talks about some of their different high-precision systems, which can track up to 150,000 markers with a refresh rate of up to 2,000 Hz. Most of the Vicon camera systems for virtual reality would range from 30 to 250 Hz and be able to track up to 50 objects or around 200 individual markers.

The price of their solutions ranges from as low as £5,000 to over a million pounds. When I asked Matt whether Vicon is considering getting into the consumer market, he said that they’re primarily focused on high-end and high-precision applications. After hearing about the upper range of what their systems are able to do wirelessly, it seems like they’ll continue to serve the needs of their industrial customers. However, Matt says that the falling cost of technology is really unpredictable, and so it’s difficult to predict how the technology in this space will continue to evolve. It’s yet to be seen whether or not Vicon will be disrupted by some of the consumer-grade motion capture systems that are emerging.


Dr. John Quarles is an assistant professor in the San Antonio Virtual Environments lab at the University of Texas at San Antonio.

He talks about some research that his student Chao Mei did on the impact of customizable virtual humans in a hand-eye coordination training game for adolescents who have Autism Spectrum Disorder (ASD). They expected the adolescents to be more engaged and play for longer, but they didn’t expect that they would actually perform better when they’re able to customize the virtual humans within their Imagination Soccer training game.


John talks about their findings, as well as some of their future research looking into how to use eye tracking technologies to better train adolescents with ASD to improve their ability to maintain joint attention. He talks about using Tobii eye tracking along with Kinect sensors. They’re not using VR HMDs yet because the eye tracking technology isn’t affordable enough to be accessible to all of the therapists who could use it.

John is skeptical as to whether or not virtual reality technologies will ever be able to fully replace human therapists. Even though adolescents sometimes prefer to interact with virtual humans over real-life humans, being able to successfully navigate social interactions with real people is something that they’ll ultimately need to learn how to do.

The interesting takeaway that I got is that there’s something powerful and potent in allowing users to customize the virtual humans in their virtual environments. It seems to make people more invested and engaged, and as a result could actually enable them to perform better at specific tasks. There’s further research that needs to be done to investigate this, but it adds another incentive for virtual reality developers to allow for the customization of specific elements within the experiences that they’re creating.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.