Dr. Martin Breidt is a research technician at the Max Planck Institute for Biological Cybernetics. His bio page says that he’s part of the Cognitive Engineering group, where they “develop and use systems from Computer Vision, Computer Graphics, Machine Learning with methods from psychophysics in order to investigate fundamental cognitive processes.”

Martin only had time for a very quick 5-minute chat, but this was enough time for him to give me some pointers to his research on the uncanny valley effect, as well as to some work that is being done to capture facial animations while wearing a VR HMD. This led me to learn a lot more about the research that Oculus is doing to capture human expressions while wearing a VR HMD.

Martin named Hao Li as someone doing very important work on predicting facial expressions from partial information using statistical models. Hao is an assistant professor of Computer Science at the University of Southern California, and he has a paper titled “Unconstrained Realtime Facial Performance Capture” at the upcoming Conference on Computer Vision and Pattern Recognition. Here’s the abstract.

We introduce a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor. Our framework provides uninterrupted 3D facial tracking, even in the presence of extreme occlusions such as those caused by hair, hand-to-face gestures, and wearable accessories. Anyone’s face can be instantly tracked and the users can be switched without an extra calibration step. During tracking, we explicitly segment face regions from any occluding parts by detecting outliers in the shape and appearance input using an exponentially smoothed and user-adaptive tracking model as prior. Our face segmentation combines depth and RGB input data and is also robust against illumination changes. To enable continuous and reliable facial feature tracking in the color channels, we synthesize plausible face textures in the occluded regions. Our tracking model is personalized on-the-fly by progressively refining the user’s identity, expressions, and texture with reliable samples and temporal filtering. We demonstrate robust and high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data. Our system works in everyday environments and is fully unobtrusive to the user, impacting consumer AR applications and surveillance.
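One small piece of that pipeline can be illustrated in isolation: keeping an exponentially smoothed model of the face as a prior and flagging input that deviates too far from it as an occlusion. Here’s a minimal, hypothetical sketch of that idea in Python; the alpha and threshold values are illustrative assumptions, and the actual paper fuses depth and RGB data in a far more sophisticated way.

```python
import numpy as np

# Toy illustration of outlier-based occlusion segmentation against an
# exponentially smoothed tracking model (values are assumptions, not the
# paper's actual parameters).

ALPHA = 0.1          # smoothing factor for the running face model
THRESHOLD = 30.0     # depth deviation (mm) that marks a pixel as occluded

def update_and_segment(model, frame):
    """Return (updated model, boolean occlusion mask) for one depth frame."""
    occluded = np.abs(frame - model) > THRESHOLD
    # Only unoccluded pixels update the model, so occluders like hands
    # or hair don't corrupt the prior.
    model = np.where(occluded, model, ALPHA * frame + (1 - ALPHA) * model)
    return model, occluded

# Example: a flat "face" at 500 mm with a hand suddenly covering one corner
model = np.full((4, 4), 500.0)
frame = model.copy()
frame[:2, :2] = 400.0               # occluder 100 mm closer to the sensor
model, mask = update_and_segment(model, frame)
print(mask)
```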

Here’s a video that goes along with the “Unconstrained Realtime Facial Performance Capture” paper for CVPR 2015.

Hao Li is also the lead author on an upcoming paper at SIGGRAPH 2015 that is able to capture human expressions even while the user is wearing a VR HMD.

Facial Performance Sensing Head-Mounted Display
Hao Li, Laura Trutoiu, Pei-Lun Hsieh, Tristan Trutna, Lingyu Wei, Kyle Olszewski, Chongyang Ma, Aaron Nicholls
ACM Transactions on Graphics, Proceedings of the 42nd ACM SIGGRAPH Conference and Exhibition 2015, 08/2015

Three of the co-authors of the paper work at Oculus Research including Laura Trutoiu, Tristan Trutna & Aaron Nicholls. Laura was supposed to present at the IEEE VR panel on “Social Interactions in Virtual Reality: Challenges and Potential,” but she was unable to make the trip to southern France. She was going to talk about faces in VR, and had the following description about her talk:

Faces provide a rich source of information and compelling social interactions will require avatar faces to be expressive and emotive. Tracking the face within the constraints of the HMD and accurately animating facial expressions and speech raise hardware and software challenges. Real-time animation further imposes an extra constraint. We will discuss early research in making facial animation within the HMD constraints a reality. Facial analysis suitable for VR systems could not only provide important non-verbal cues about the human intent to the system, but could also be the basis for sophisticated facial animation in VR. While believable facial synthesis is already very demanding, we believe that facial motion analysis under the constraints of an immersive real-time VR system is the main challenge that needs to be solved.

The implications of being able to capture human expressions within VR are going to be huge for social and telepresence experiences in VR. It’s pretty clear that Facebook and Oculus have a lot of interest in being able to solve this difficult problem, and it looks like we’ll start to see some of the breakthroughs that have been made at SIGGRAPH in August 2015, if not sooner.

As a sneak peek, one of Hao Li’s students, Chongyang Ma, had the following photo on his website that shows an Oculus Rift HMD with a rig holding a camera in order to do facial capture.

[Photo from Chongyang Ma’s website: an Oculus Rift HMD with a facial-capture camera rig]

Okay. Back to this very brief interview that I did with Martin at IEEE VR. Here’s the description of Martin’s presentation at the IEEE VR panel on social interactions in VR:

Self-Avatars: Body Scans to Stylized Characters
In VR, avatars are arguably the most natural paradigm for social interaction between humans. Immediately, the question of what such avatars really should look like arises. Although 3D scanning systems have become more widespread, such a semi-realistic reproduction of the physical appearance of a human might not be the most effective choice; we argue that a certain amount of carefully controlled stylization of an avatar’s appearance might not only help coping with the inherent limitations of immersive real-time VR systems, but also be more effective at achieving task-specific goals with such avatars.

Martin mentions a paper titled “Face Reality: Investigating the Uncanny Valley for Virtual Faces” that he wrote with Rachel McDonnell for SIGGRAPH 2010.

Here’s the introduction to that paper:

The Uncanny Valley (UV) has become a standard term for the theory that near-photorealistic virtual humans often appear unintentionally eerie or creepy. This UV theory was first hypothesized by robotics professor Masahiro Mori in the 1970’s [Mori 1970] but is still taken seriously today by movie and game developers as it can stop audiences feeling emotionally engaged in their stories or games. It has been speculated that this is due to audiences feeling a lack of empathy towards the characters. With the increase in popularity of interactive drama video games (such as L.A. Noire or Heavy Rain), delivering realistic conversing virtual characters has now become very important in the real-time domain. Video game rendering techniques have advanced to a very high quality; however, most games still use linear blend skinning due to the speed of computation. This causes a mismatch between the realism of the appearance and animation, which can result in an uncanny character. Many game developers opt for a stylised rendering (such as cel-shading) to avoid the uncanny effect [Thompson 2004]. In this preliminary work, we begin to study the complex interaction between rendering style and perceived trust, in order to provide guidelines for developers for creating plausible virtual characters.

It has been shown that certain psychological responses, including emotional arousal, are commonly generated by deceptive situations [DePaulo et al. 2003]. Therefore, we used deception as a basis for our experiments to investigate the UV theory. We hypothesised that deception ratings would correspond to empathy, and that highly realistic characters would be rated as more deceptive than stylised ones.

He mentions the famous graph by Masahiro Mori, the robotics researcher who first proposed the concept back in 1970 in the journal Energy. That article was originally in Japanese, but I found this translation of it.

I have noticed that, as robots appear more humanlike, our sense of their familiarity increases until we come to a valley. I call this relation the “uncanny valley.”

Martin isn’t completely convinced that the conceptualization of the uncanny valley that Mori envisioned back in 1970 is necessarily the correct one. He’s interested in continuing to research and empirically measure the uncanny valley effect through experiments, and hopes to eventually come up with a data-driven model of what works in stylizing virtual humans within VR environments so that they best fit our expectations and feel comfortable. At the moment, this job is being done through the artistic intuitions of directors and artists within game development studios, but Martin says that this approach isn’t scalable for everyone. So he intends to continue researching and better understanding this uncanny valley effect.

Matt Oughton is the EMEA sales manager for Vicon, and he talks about the Vicon motion capture system that they were demonstrating at the IEEE VR conference. Vicon has been in the motion capture business since 1984, and he talks about some of the specifications and use cases for their system. Vicon cameras are used for virtual reality tracking, film and game entertainment, as well as life sciences, engineering applications, and industrial design reviews.

He talks about some of the different high-precision systems, which can track up to 150,000 markers at refresh rates of up to 2,000 Hz. Most of the Vicon camera systems for virtual reality would range from 30 to 250 Hz and be able to track up to 50 objects or around 200 individual markers.

The price of their solutions ranges from as low as £5,000 to over £1 million. When I asked Matt whether Vicon is considering getting into the consumer market, he said that they’re primarily focused on high-end and high-precision applications. After hearing about the upper range of what their systems are able to do wirelessly, it seems like they’ll continue to serve the needs of their industry customers. However, Matt says that the falling cost of technology is really unpredictable, and so it’s difficult to predict how the technology in this space will continue to evolve. It remains to be seen whether or not Vicon will be disrupted by some of the consumer-grade motion capture systems that are emerging.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Dr. John Quarles is an assistant professor in the San Antonio Virtual Environments lab at the University of Texas at San Antonio.

He talks about some research by his student Chao Mei on the impact of customizable virtual humans in a hand-eye coordination training game for adolescents with Autism Spectrum Disorder (ASD). They expected the adolescents to be more engaged and play for longer, but they didn’t expect that the participants would actually perform better when they were able to customize the virtual humans within their Imagination Soccer training game.


John talks about their findings, as well as some of their future research into how to use eye tracking technologies to better train adolescents with ASD to improve their ability to maintain joint attention. He talks about using Tobii eye tracking along with a Kinect sensor. They’re not using VR HMDs yet because the eye tracking technology isn’t affordable enough to be accessible to all of the therapists who could use it.

John is skeptical as to whether or not virtual reality technologies will ever be able to fully replace human therapists. Even though adolescents sometimes prefer to interact with virtual humans over real-life humans, being able to successfully navigate social interactions with real people is something that they’ll ultimately need to be able to learn how to do.

The interesting takeaway that I got is that there’s something powerful and potent in allowing users to customize the virtual humans that are in virtual environments. It seems to make people more invested and engaged, and as a result could actually enable them to perform better at specific tasks. There’s further research that needs to be done investigating this, but it adds another incentive for virtual reality developers to allow for the customization of specific elements within the experiences that they’re creating.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Gareth Henshall is from Bangor University in the United Kingdom, and was presenting a poster at IEEE VR titled “Towards a High Fidelity Simulation of the Kidney Biopsy Procedure.” Their goal was to create a low-cost training simulation that could allow doctors to train on having the experience of giving someone a kidney biopsy. They tried to do it without haptic feedback, and found that it was not effective at all.


They ended up using a haptic needle that was able to simulate a force profile for the different tissues of the kidney, liver, and spine. They captured these force profiles by using a Force Sensitive Resistor Glove that they created, which is able to measure pressure in newtons over time for different substances.
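To make the idea of a force profile concrete, here’s a minimal, hypothetical sketch of capturing one: sample a force sensor at a fixed rate and record (time, newtons) pairs. The calibration constants, sampling rate, and stubbed sensor read are all assumptions for illustration, since the details of their glove aren’t covered in the interview.

```python
import random
import time

# Hypothetical sketch of capturing a force profile from an FSR-style
# sensor. The calibration constants, sampling rate, and the stubbed
# sensor read below are illustrative assumptions, not the actual
# numbers from the Bangor glove.

CAL_SLOPE_N_PER_V = 4.2    # assumed linear calibration: volts -> newtons
CAL_OFFSET_N = 0.1
SAMPLE_HZ = 200

def read_sensor_voltage():
    # Stand-in for an ADC read from the force-sensitive resistor.
    return 0.5 + random.random()

def capture_force_profile(duration_s):
    """Record (timestamp, force in newtons) pairs for one tissue sample."""
    profile = []
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        force_n = CAL_SLOPE_N_PER_V * read_sensor_voltage() + CAL_OFFSET_N
        profile.append((t, force_n))
        time.sleep(1.0 / SAMPLE_HZ)
    return profile

# e.g. capture a 2-second profile while the needle passes through a sample
profile = capture_force_profile(2.0)
print(f"{len(profile)} samples, peak force {max(f for _, f in profile):.2f} N")
```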

They’re using a zSpace holographic imaging display to show a stereoscopic torso with the organs that surround the kidney, and in combination with the haptic feedback they’re able to recreate the feeling of doing this medical procedure in a safe and repeatable fashion.

The takeaway point for me is that to do haptics well, you have to have a very specific use case. Here they’re recreating a specific medical procedure. And they plan on expanding this to other procedures with other force profiles so that this one system could simulate 30-40 different procedures, which is pretty much impossible to do right now since the physical models that exist today are each created for a single procedure.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Josh Farkas is the CEO of Cubicle Ninjas, and he was tweeting his highlights from reviewing all of the Milestone 1 submissions for the Oculus Mobile VR Jam, so I invited him onto the Voices of VR podcast to discuss what he found interesting and compelling.

I also reviewed all of the experiences in the App and Experience track, and noticed that it’s a bit hard to get a quick overview of the submissions without clicking through over 500 times. So I decided to create an Unofficial Spreadsheet of VR Jam Milestone 1 submissions that includes the title, the tagline, the URL, the Game or App track, the team size, and the names of all of the team members. I also included a separate sheet with all of the names of the participants and their related projects for quick reference.
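If you wanted to build a similar overview yourself, the general pattern is just scrape-and-tabulate. Here’s a hedged sketch; the URL list and CSS selectors are invented for illustration, since the actual submission site’s markup isn’t described here.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Hypothetical sketch only: the URLs and selectors below are placeholders,
# not the real VR Jam site structure.

SUBMISSION_URLS = [
    # "https://example.com/vrjam/submissions/1",   # placeholder entries
]

rows = []
for url in SUBMISSION_URLS:
    page = BeautifulSoup(requests.get(url).text, "html.parser")
    rows.append({
        "title": page.select_one(".submission-title").get_text(strip=True),
        "tagline": page.select_one(".submission-tagline").get_text(strip=True),
        "url": url,
    })

with open("vrjam_submissions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "tagline", "url"])
    writer.writeheader()
    writer.writerows(rows)
```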


At the time this podcast was recorded, there were 534 total submissions on the submission page, with 316 games and 218 apps / experiences. Note that Erisana from Oculus said on Reddit that they received 342 games and 238 apps or experiences, but they were going to filter out some of the ones that didn’t fully qualify, and so those numbers are not final.

NOTE: This spreadsheet is unofficial and may not have all of the active submissions. Some may be waiting to be approved, and there may be some that have since been disqualified. Feel free to e-mail kent@kentbye.com if you’re not on this list and would like to be.

The VR Jam entries will be rated by an initial panel of judges from the Oculus developer relations team on their potential to be of interest to the wider VR community, as well as on what types of innovations they’re contributing.

Josh quickly read through all of the entries and noticed some genres and themes that emerged, including gaze shooters, co-op games, relaxation experiences, speaking to virtual audiences, first-person puzzlers, first-person fliers, and adaptations of one medium into another, whether painting galleries or writings. Josh also saw these two themes emerge: “This is a dream world” and “This is occurring in your mind.”

I only had time to read through all of the App and Experience submissions, and so most of our conversation focused on the highlights from this track. Links to the specific experiences that we discussed are included below.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rob Lindeman was the chairman for the IEEE 10th Symposium on 3D User Interfaces this year, and he’s currently the director of the Interactive Media & Game Development Program at Worcester Polytechnic Institute.

Rob believes that the 3D user interfaces that are often depicted in popular science fiction movies are not a sustainable solution. They may work in short-term situations, but it is very fatiguing to hold your arms above your waist for long periods of time. Rob is really interested in researching non-fatiguing user interfaces that can be used in immersive environments.

One of the more difficult problems with VR locomotion is that it is difficult to use a single type of travel interface that allows you to do short-term, medium-term, and long-term travel. He talks about some of his research into using multitouch tablets, where a walking motion with your fingers drives VR locomotion across all three spans of travel, from short-term to long-term.
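As a rough illustration of the finger-walking idea, alternating finger taps can be treated like footsteps, with the tap cadence driving travel speed. This is only a hedged sketch under assumptions; the event format and tuning constants are mine, not Rob’s actual implementation.

```python
from dataclasses import dataclass

# A sketch of "finger walking" locomotion on a multitouch surface:
# alternating finger taps are treated like footsteps, and the tap cadence
# drives forward travel speed. Event format and constants are assumptions.

@dataclass
class Tap:
    finger_id: int   # which finger touched down
    t: float         # timestamp in seconds

METERS_PER_STEP = 0.7    # assumed virtual stride length

def travel_speed(taps):
    """Estimate forward speed from the cadence of alternating-finger taps."""
    speed = 0.0
    for prev, cur in zip(taps, taps[1:]):
        if cur.finger_id != prev.finger_id:       # steps must alternate fingers
            dt = cur.t - prev.t
            if 0.05 < dt < 1.0:                   # plausible step interval
                speed = METERS_PER_STEP / dt      # one stride per interval
    return speed

# Two fingers "walking" at four taps per second -> a brisk virtual pace
taps = [Tap(0, 0.00), Tap(1, 0.25), Tap(0, 0.50), Tap(1, 0.75)]
print(f"forward speed: {travel_speed(taps):.2f} m/s")
```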

The 3DUI symposium is shifting from incremental research topics looked at in isolation toward trying to solve real-world problems with a hybrid approach that combines low-level tasks in interesting ways. They’re striving to create more holistic integrations. Also, because the graphics from game engines are so good, his lab has shifted to integrating multi-sensory feedback into immersive experiences.

Rob is actually pretty skeptical about room-scale VR immersive experiences because of what he’s seen with the evolution of the Kinect and Wii. People found that it was effective to play the games with smaller and more efficient wrist motions rather than full swings of the arm. Even though there was an intent to recreate natural motions, the limitations of the system meant that after the novelty wore off, people would play with much more efficient motions. Rob says that there is a tradeoff between the efficiency of operating in a game environment versus how immersive the experience is. He prefers a very immersive driving experience, but he can’t compete with his brother, who uses a more efficient game controller. He hopes that room-scale VR takes off, but recommends people look at some of the 3DUI & IEEE VR proceedings to avoid making some of the same mistakes that they’ve discovered over the years.

The idea behind Effective Virtual Environments is to build a VR system that allows people to do something that they couldn’t do before. For Rob, the killer app for VR is gaming. He sees gaming as really important, and having fun as a good use of your time.

Rob’s research has been about how you can have more long-term VR experiences in a way that’s non-fatiguing. He suggests designing around bursting behaviors for actions that may be fatiguing over long periods of time, because having resting periods is how we naturally do things in the real world.

Haptics includes everything from the sense of touch (wind on your body, pain, temperature, pressure, and vibration on the skin) to our proprioception system, which helps us identify the relative position of our body parts. The input and output are very tightly coupled in an extremely short feedback loop, which makes haptics difficult. Also, our skin is the largest organ of our body, and it has variable sensitivities in different parts of our body.

There are two types of haptic feedback, force feedback and cutaneous feedback, and to do fully generalized haptics would require an exoskeleton plus a skin-tight suit, which is a pretty crazy proposition. Because a generalized haptic solution is so difficult, most of the successful haptic solutions are customized to doing a very specific task in a very specific use case. You can also compensate for one sensory cue with another one, and so it’s much better to think about these experiences in a multi-sensory and holistic way.

Rob is a fan of Ready Player One, and he’s really looking forward to jacking into a world and going to places that he couldn’t go before. He’s looking for experiences to change his view or to take him to another world. He thinks that entertainment and fun are really important things that should be considered first-class citizens in our lives. He’s also looking forward to more game developers coming to the IEEE VR & 3DUI conferences in the future.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Richard Skarbez is a Ph.D. candidate at the University of North Carolina at Chapel Hill who has been researching how to measure presence in VR. Mel Slater has proposed that there are two key components of the sense of presence, which he elaborated in a paper titled “Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments.”

Slater describes the two components of presence by saying:

“The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this Place Illusion (PI). Second, Plausibility Illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring… when both PI and Psi occur, participants will respond realistically to the virtual reality.”

Richard had a poster at IEEE VR where he wanted to try to quantify the impact of each of these two components. Richard used the phrase “immersion” to describe the feeling of Place Illusion and being in another place, and “coherence” to describe the Plausibility Illusion.

In his research, Richard set out to study the impact of both immersion and coherence through a VR experience, and then measured the results using the standard battery of presence surveys, including ones by Slater, Usoh & Steed and by Witmer & Singer, as well as a number of other physiological and behavioral metrics.

What he found is that the presence survey scores were the highest when both the sense of immersion and coherence were strong. If either of these was weak, or if both were weak, then the presence scores were low, and there was no real statistical difference between those three conditions. He is finding that both immersion and coherence need to be present in order to achieve a strong sense of presence.

He also suspects that coherence is a lot more fragile than immersion. Immersion can be handled through a lot of technical innovations like low-persistence screens, low-latency head tracking, and high frame rates. However, coherence is more like a mental model that almost needs to maintain 100% logic in its construction. As soon as there’s something that doesn’t quite feel right or fit in the scene, or if there are some uncanny valley-like behaviors, then the sense of presence can be broken like a house of cards falling. Richard says that most breaks in presence are due to a break in coherence, and that while you can recover from it, it does take time.

Achieving consistent coherence has a lot of implications for choosing the fidelity of your VR experience. Richard reiterates that the uncanny valley isn’t just a one-dimensional issue that applies only to avatars; it’s n-dimensional, because it affects every aspect of the VR experience.

If you’re designing a VR experience and want to achieve a photorealistic look and feel, then you’re going to need just as high a fidelity in the sound design, the social and behavioral interactions of people, and perhaps even haptics. You may be able to create an incredible sense of immersion, but to achieve true presence you’ll have to make the entire experience coherent with the expectations the user has formed from their previous interactions with that stimulus or environment. If it looks real, then it had better feel and behave at the same level as that visual fidelity.

Richard cautions against going overboard on the visual fidelity while ignoring the overall coherence of the experience, and it may actually create a better VR experience to strive for 100% coherence in your environment rather than 100% immersion through the visuals alone.

Richard talks about this spectrum from low-fidelity to high-fidelity by looking at some of the old 8-bit and 16-bit video games. He says that a lot of those games still hold up because they were able to maintain that complete coherence and consistency of what we might expect for how these games would behave. He says that the history of video games started to tread into that awkward uncanny valley in the PS2 & PS3 game console era when 3D games were first coming around, but that they had a number of glitches or behaviors that would take you out of the experience.

There’s still a lot more research to be done in this area, but to me it really holds true that the combination of place illusion with immersion and plausibility illusion with coherence were the two key factors behind some of my most immersive VR experiences.

Finally, Richard talks about the potential for embodied telepresence in virtual reality to eventually replace the telephone or video VoIP services like Skype. He sees that once the technology gets good enough, we might even start to use it for serious meetings, such as seeing a doctor or meeting with a lawyer within a VR environment. It’s got a ways to go to get there, but he sees it as a viable short-term goal for a really powerful and potent application of this immersive technology.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Betty Mohler is a virtual reality researcher at the Max Planck Institute for Biological Cybernetics, where she’s the project leader of the Perception & Action in Virtual Environments research group in Tübingen, Germany.

Her research interests include computer graphics, space perception, locomotion in immersive virtual environments, and social interactions in VR.

At IEEE VR, she was on a panel discussing “Animation of Bodies and Identity.” Here’s the blurb for the research that she’s doing:

The Space & Body Perception research group at the Max Planck Institute for Biological Cybernetics investigates the perception of self and other body size and how to create positive illusions of self and space. We have investigated the importance of the animation of the body of multi-users for effective communication. Through this research we can discuss our experience with different motion capture technology and animation techniques for the body, as well as insights into the importance of self-identification with a self-avatar for social interactions. Additionally, we are conducting research where we use high-res body scans to create self-avatars. We can further discuss the real-time challenges for the future if a photo-realistic self-avatar is part of the virtual reality application.

Some of the topics we covered were:

  • Space and body perception
  • Positive illusions of self & collaborating with Mel Slater on the VR-HYPERSPACE project. People identify with their avatar, and that can be used to make them more comfortable. If you change the size of someone’s avatar, that impacts their real-world physical movements & can also change their attitudes.
  • Currently working with eating disorder patients to see if VR & something like a high-end Kinect can help them see their bodies differently
  • Even healthy people don’t have an accurate perception of their body. You perceive your body in order to act. Seeing if eating disorder patients see themselves differently
  • Helping with the doctoral consortium & presenting on social interaction challenges & potential in VR. What are the technology & human-in-the-loop challenges to social interactions?
  • Timing is crucial in social interactions; if the timing is off, social meaning can be lost, changed, or unknown to the user. We adapt to social cues very quickly in real time. What can we do that’s unique in VR? We can assess each other’s state, and hope to reduce timing limitations.
  • Models for social interactions. Must understand how it works in the real world first, and they looked at language learning through body language interactions. Must quantify success. For language learning, it’s guessing the right word in another language.
  • Non-verbal social interactions like gestures and posture can communicate a lot of ease and comfort. There are big telepresence implications for being able to feel like you’re sharing space with other people
  • Look for synchrony between two people. You can change, amplify, or turn off someone’s body language within a social interaction to measure its impact. Both people are providing important feedback in an interaction, and turning one side off breaks the synchrony that happens.
  • How to make the most effective avatar in VR, and measuring that. Taking high-resolution photos and then morphing them toward a Marvel or Disney type of stylization. There’s some percentage of stylization that’s ideal. How to navigate around the uncanny valley? Measure appeal and try to get feedback from people about their preferences across a spectrum of stylizations.
  • The uncanny valley can be thought of as creepiness, a sense that something is not right. It’s about rules that we learn over our lives; we have certain expectations for social interaction rules and cultural norms, and the uncanny valley is likely a product of VR NPCs subtly violating these rules. When something looks like a human, there are a lot of expectations that have to be met. Having holes and defects in telepresence avatars can help increase immersion
  • Breaks in Presence, and how expectations can play into that. Low fidelity can provide more presence because we don’t have a lot of expectations for these fantasy worlds.
  • Germany & France are powerhouses in VR. She works at the Max Planck Institute because she sees it as one of the best VR labs in the world. Germany’s Fraunhofer Institutes do applied research. Germany’s car manufacturing has driven a lot of support for VR over the years
  • Redirected walking and challenges in VR. Motivated by being a marathon runner who wanted to be able to run through any city in the world in VR. The Virtusphere has issues if you’re not the right weight. They’ve created a Virtual Tübingen to walk around freely and explore a virtual city. Our vestibular system is not perfect, and you can take advantage of that flaw to trick someone into walking in a circle while they feel like they’re walking in a straight line (see the sketch after this list)
  • Would need a 30m × 30m space or larger to do redirected walking well. The user can always do something against what’s suggested, so you need multiple techniques. You can use a stop sign to have someone turn around, and then rotate the environment 180 degrees.
  • Currently interested in using VR with medical patients, which needs better robustness and better battery life. Need to think about computer vision and how VR and AR will blend into a more mixed reality. Lots of challenges, but it could make a big difference for the aging population.
  • Consumer VR and where it’s going. She doesn’t think gaming will ultimately be the primary application for VR. How do you integrate it into society so that it’s as widely used as a phone? Will people start to use VR on public transportation more?
  • VR is potentially life changing, and hopefully will make her more connected, healthy and intelligent as she ages.
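Here’s a minimal sketch of the curvature-gain idea behind the redirected walking Betty describes: inject a small scene rotation proportional to the distance walked, so the user physically curves while the virtual path stays straight. The gain value is a hypothetical illustration, not a published perceptual threshold.

```python
import math

# Curvature-gain sketch for redirected walking: rotate the virtual scene
# slightly for every meter the user walks, so a user who feels like
# they're walking straight actually curves physically. The gain is an
# assumed value for illustration only.

CURVATURE_GAIN = math.radians(2.5)   # assumed injected rotation per meter

def redirect(physical_heading, step_length):
    """Bend the physical heading in proportion to distance traveled."""
    return physical_heading + CURVATURE_GAIN * step_length

# Simulate a user who believes they're walking straight ahead.
heading, x, y = 0.0, 0.0, 0.0
for _ in range(300):                 # 300 steps of 0.7 m each
    step = 0.7
    heading = redirect(heading, step)
    x += step * math.cos(heading)
    y += step * math.sin(heading)

# The physical path bends into a circle of radius 1 / CURVATURE_GAIN,
# while the virtual path shown in the HMD stays straight.
print(f"physical position after 210 m 'straight': ({x:.1f}, {y:.1f})")
print(f"implied physical walking radius: {1.0 / CURVATURE_GAIN:.1f} m")
```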

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Anthony Steed is a Professor in the Virtual Environments and Computer Graphics group at University College London. He started his Ph.D. in virtual reality back in 1992, during the first wave of VR. Some of his research interests include distributed virtual reality systems and collaborative environments, 3D interaction, haptics, networked virtual reality protocols, massive models, and telepresence.

Here are some of the topics that we discussed at the IEEE VR conference:

  • Latency requirements in VR depend on the context, and targets can range from 1 ms for visual stability up to 10-20 ms.
  • Collaborative virtual environments & asymmetric interactions in VR that result in a difference in social power. How the UI in VR can either get in the way or support interactions
  • Some of the areas of research include 3D user interfaces, haptics, sensory motor integration, & remote telepresence. Starting to build their own VR hardware
  • Fidelity of avatars in telepresence applications. High-quality avatars must also behave with high fidelity, so they tend to use lower-fidelity avatars. Full body tracking without facial expressions results in a zombie-like experience. Telepresence is often task-based, where the avatar’s representation of identity is less important. Working with sociologists who look at how eye gaze gives cues for turn-taking in conversations
  • Most VR systems don’t utilize our own bodies for haptic feedback. Creating external haptics is a huge problem because the devices are very limited. There’s potential for body-worn haptic devices.
  • On the intersection of neuroscience and VR: our visual system has a left-hand-side bias for visual attention, and it’s an open question as to whether this neuroscience effect can be recreated in VR. The impacts on body image when you are tracking your body within VR. Looking at frequency bands of head movement & whether the VR display matches what our proprioceptive senses are telling us about our body’s orientation. Using VR as a platform for neuroscience research into discrepancies between sensory cues and into persistent illusions
  • There’s a lot of potential for education and training, and a lot of progress being made in this realm.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

I recently traveled to southern France to cover the biggest gathering of virtual reality academics in the world, the IEEE VR & 3DUI conferences. I was able to record over 15 hours’ worth of interviews and talk to over 50 attendees, which was a little over 10% of the 520 attendees.

In this podcast and video, I give a brief overview of some of the highlights of the coverage that I’ll be releasing over the next 3-4 months. The video includes photos of the more than 100 academic posters that were shown as a part of the IEEE VR and 3DUI conferences.

It’s worth noting that the lack of coverage coming out of the IEEE VR conference last year was part of the reason why I started the Voices of VR podcast in the first place. I celebrated my 100th episode with an interview with Sébastien Kuntz, and gave a bit of backstory that’s worth repeating again:

I first discovered Sébastien’s work during the IEEE VR conference last year because he was tweeting about different presentations that talked about the academic community’s response to the Facebook acquisition. Here are a couple of examples of his tweets that captivated my attention:

I wanted to hear more from Sébastien and attendees at IEEE VR, but there weren’t any consumer VR publications covering what was happening in academia or with VR researchers. In fact, there was hardly any coverage from any publication of last year’s IEEE VR conference beyond tweets from attendees, with the most prolific being the ones from Sébastien.

Because of this lack of coverage, I decided to start my own podcast. I reached out to interview a couple of other attendees of the IEEE VR conference, including Eric Hodgson and Jason Jerald. I also really wanted to hear more from Oliver “Doc_Ok” Kreylos, who was a respected commenter on the /r/oculus subreddit and also happened to be working in VR within an academic context.

So with that, I hope that you enjoy my exclusive coverage of the IEEE VR conference over the next 3-6 months.

I’ll also be attending the SVVRCon conference on May 18th and 19th, and I’ll start to mix that into the IEEE VR coverage as well.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.