Gareth Henshall is from Bangor University in the United Kingdom, and was presenting a poster at IEEE VR titled “Towards a High Fidelity Simulation of the Kidney Biopsy Procedure.” The goal was to create a low-cost training simulation that allows doctors to practice the experience of performing a kidney biopsy. They first tried to do it without haptic feedback, and found that it was not effective at all.


They ended up using a haptic needle that was able to simulate a force profile for the different tissues of the kidney, liver, and spine. They captured these force profiles by using a force-sensitive resistor glove that they created, which measures force in newtons over time as the needle passes through different substances.
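
To make the capture pipeline concrete, here’s a minimal sketch of what logging a force profile from a force-sensitive resistor might look like. This is an illustrative assumption rather than their actual implementation: the serial port, baud rate, and calibration constants are all hypothetical.

```python
# Minimal sketch of capturing a force profile from a force-sensitive
# resistor (FSR). The serial port, baud rate, and calibration are
# hypothetical placeholders, not details from the actual project.
import time
import serial  # pyserial

def raw_to_newtons(raw_counts, scale=0.05, offset=0.0):
    """Hypothetical linear calibration from ADC counts to newtons."""
    return max(0.0, raw_counts * scale - offset)

def capture_profile(port="/dev/ttyUSB0", duration_s=10.0):
    """Record (elapsed_seconds, force_newtons) samples for one insertion."""
    profile = []
    with serial.Serial(port, baudrate=115200, timeout=1) as fsr:
        start = time.time()
        while time.time() - start < duration_s:
            line = fsr.readline().strip()
            if line:
                profile.append((time.time() - start, raw_to_newtons(int(line))))
    return profile  # replayed later as the haptic needle's force profile
```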

They’re using a zSpace holographic imaging display to show a stereoscopic torso with the organs that surround the kidney, and in combination with the haptic feedback they’re able to recreate the feeling of performing this medical procedure in a safe and repeatable fashion.

The takeaway point for me is that to do haptics well, you have to have a very specific use case. Here they’re recreating a specific medical procedure, and they plan on expanding this to other procedures with other force profiles so that this one system could simulate 30-40 different procedures. That’s pretty much impossible right now, since the physical models that exist today are each created for a single procedure.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Josh Farkas is the CEO of Cubicle Ninjas, and he was tweeting his highlights from reviewing all the Milestone 1 submissions for the Oculus Mobile VR Jam, and so I invited him to come onto the Voices of VR podcast to discuss some of the highlights of what he found interesting and compelling.

I also reviewed all of the experiences in the App and Experience track, and noticed that it’s a bit hard to get a quick overview of the submissions without clicking through over 500 times. So I decided to create an Unofficial Spreadsheet of VR Jam Milestone 1 submissions that includes the title, the tagline, URL, Game or App Track, team size, and names of all of the team members. I also included a separate sheet with all of the names of the participants and their related projects for quick reference.
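
For anyone curious how a spreadsheet like this gets assembled, here’s a rough sketch of the scraping approach; the URL and CSS selectors are hypothetical placeholders, since the real submission pages would need their actual markup inspected.

```python
# Rough sketch of scraping submission metadata into a CSV.
# The URL and CSS selectors below are hypothetical placeholders.
import csv
import requests
from bs4 import BeautifulSoup

def scrape_submissions(url="https://example.com/vrjam/submissions"):
    rows, page = [], 1
    while True:
        soup = BeautifulSoup(requests.get(url, params={"page": page}).text,
                             "html.parser")
        entries = soup.select(".submission")  # hypothetical selector
        if not entries:
            break
        for entry in entries:
            rows.append({
                "title": entry.select_one(".title").get_text(strip=True),
                "tagline": entry.select_one(".tagline").get_text(strip=True),
                "url": entry.select_one("a")["href"],
            })
        page += 1
    return rows

with open("vrjam_submissions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "tagline", "url"])
    writer.writeheader()
    writer.writerows(scrape_submissions())
```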


At the time this podcast was recorded, there were 534 total submissions on the submission page, with 316 games and 218 apps/experiences. Note that Erisana from Oculus said on Reddit that they received 342 games and 238 apps or experiences, but that they were going to filter out some that didn’t fully qualify, so those numbers are not final.

NOTE: This spreadsheet is unofficial and may not have all of the active submissions. Some may be waiting to be approved, and there may be some that have since been disqualified. Feel free to e-mail kent@kentbye.com if you’re not on this list and would like to be.

The VR Jam entries will be rated by an initial panel of judges from the Oculus developer relations team on their potential to be of interest to the wider VR community, as well as the types of innovations they contribute.

Josh quickly read through all of the entries and noticed some genres and themes that emerged, including gaze shooters, co-op games, relaxation experiences, speaking to virtual audiences, first-person puzzlers, first-person fliers, and adaptations of one medium into another, whether painting galleries or written works. Josh also saw these two themes emerge: “This is a dream world” and “This is occurring in your mind.”

I only had time to read through all of the App and Experience submissions, and so most of our conversation focused on the highlights from this track. Links to the specific experiences that we discussed are included below.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rob Lindeman was the chairman for the IEEE 10th Symposium on 3D User Interfaces this year, and he’s currently the director of the Interactive Media & Game Development Program at Worcester Polytechnic Institute.

Rob believes that the 3D user interfaces often depicted in popular science fiction movies are not a sustainable solution. They may work in short-term situations, but it is very fatiguing to hold your arms above your waist for long periods of time. Rob is really interested in researching non-fatiguing user interfaces that can be used in immersive environments.

One of the harder problems in VR locomotion is finding a single travel interface that works for short-term, medium-term, and long-term travel. He talks about some of his research into using multitouch tablets, where a walking motion with your fingers drives VR locomotion across all three spans of travel, from short-term to long-term.
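
As an illustration of the general idea (and not Rob’s actual implementation), a finger-walking interface boils down to turning alternating touch events into forward travel; the stride length here is a hypothetical constant.

```python
# Illustrative sketch of finger-walking locomotion: alternating
# left/right finger taps on a multitouch surface advance the viewpoint
# by one stride. A real system would also scale speed by tap cadence
# and spacing.

STRIDE_M = 0.7  # hypothetical virtual stride length in meters

class FingerWalker:
    def __init__(self):
        self.last_finger = None  # "left" or "right"
        self.distance_m = 0.0    # total virtual distance traveled

    def on_finger_down(self, finger_id):
        """Count a step whenever the touching finger alternates."""
        if self.last_finger is not None and finger_id != self.last_finger:
            self.distance_m += STRIDE_M
        self.last_finger = finger_id

walker = FingerWalker()
for finger in ["left", "right", "left", "right"]:
    walker.on_finger_down(finger)
print(walker.distance_m)  # 2.1 meters after three alternations
```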

The 3DUI symposium is shifting from incremental research topics studied in isolation to trying to solve real-world problems with a hybrid approach that combines low-level tasks in interesting ways. They’re striving to create more holistic integrations. Also, because the graphics from game engines are so good, his lab has shifted to integrating multi-sensory feedback into immersive experiences.

Rob is actually pretty skeptical about room-scale VR immersive experiences because of what he’s seen with the evolution of the Kinect and Wii. People found that it was effective to play the games with smaller and more efficient wrist motions rather than full swings of the arm. Even though there was an intent to recreate natural motions, the limitations of the system meant that after the novelty wore off, people would play with much more efficient motions. Rob says that there is a tradeoff between efficiency of operating in a game environment versus how immersive the experience is. He prefers a very immersive driving experience, but he can’t compete with his brother, who uses a more efficient game controller. He hopes room-scale VR takes off, but recommends people look at some of the 3DUI & IEEE VR proceedings to avoid making some of the same mistakes that researchers have discovered over the years.

The idea behind Effective Virtual Environments is to build a VR system that allows people to do something that they couldn’t do before. Rob believes that the killer app for VR is gaming. He sees gaming as really important, and having fun as a good use of your time.

Rob’s research has been about how to make long-term VR experiences non-fatiguing. He suggests designing around bursts of potentially fatiguing actions with resting periods in between, because that’s how we naturally do things in the real world.

Haptics includes everything from sensations of touch such as wind on your body, pain, temperature, pressure, and vibration on the skin, as well as our proprioceptive system, which helps us identify the relative positions of our body parts. The input and output are very tightly coupled in an extremely short feedback loop, which makes haptics difficult. Also, our skin is the largest organ of our body, and it has variable sensitivities in different regions.

There are two types of haptics: force feedback and cutaneous feedback. To do fully generalized haptics would require an exoskeleton plus a skin-tight suit, which is a pretty crazy proposition. Because a generalized haptic solution is so difficult, most of the successful haptic solutions are customized to a very specific task in a very specific use case. You can also compensate for one sensory cue with another, and so it’s much better to think about these experiences in a multi-sensory and holistic way.

Rob is a fan of Ready Player One, and he’s really looking forward to jacking into a world and going to places that he couldn’t go before. He’s looking for experiences that change his view or take him to another world. He thinks that entertainment and fun are really important and should be considered first-class citizens in our lives. He’s also looking forward to more game developers coming to the IEEE VR & 3DUI conferences in the future.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Richard Skarbez is a Ph.D. candidate at the University of North Carolina at Chapel Hill who has been researching how to measure presence in VR. Mel Slater has proposed that there are two key components to the sense of presence, which he elaborated in a paper titled “Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments.”

Slater describes the two components of presence by saying:

“The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this Place Illusion (PI). Second, Plausibility Illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring… when both PI and Psi occur, participants will respond realistically to the virtual reality.”

Richard had a poster at IEEE VR where he wanted to try to quantify the impact of each of these two components. Richard uses the term “immersion” to describe the feeling of Place Illusion and being in another place, and “coherence” to describe the Plausibility Illusion.

In his research, Richard set out to measure the impact of both immersion and coherence by varying them within a VR experience and then using the standard battery of presence surveys, including the ones by Slater, Usoh & Steed and by Witmer & Singer, as well as a number of other physiological and behavioral metrics.

What he found is that the presence survey scores were highest when both the sense of immersion and coherence were strong. If either of these was weak, or if both were weak, then the presence scores were low, and there was no real statistical difference among those three conditions. He is finding that both immersion and coherence need to be present in order to achieve a strong sense of presence.

He also suspects that coherence is a lot more fragile than immersion. Immersion can be handled through a lot of technical innovations like low-persistence displays, low-latency head tracking, and high frame rates. However, coherence is more like a mental model that almost needs to maintain 100% logic in its construction. As soon as there’s something that doesn’t quite feel right or fit in the scene, or if there are some uncanny valley-like behaviors, then the sense of presence can be broken like a house of cards falling. Richard says that most breaks in presence are due to a break in coherence, and that while you can recover from it, it does take time.

Achieving consistent coherence has a lot of implications for choosing the fidelity of your VR experience. Richard reiterates that the uncanny valley isn’t just a one-dimensional issue that applies only to avatars; it’s n-dimensional, because it affects every aspect of the VR experience.

If you’re designing a VR experience and want to achieve a photorealistic look and feel, then you’re going to need just as high fidelity in the sound design, the social and behavioral interactions of people, and perhaps even haptics. You may be able to create an incredible sense of immersion, but to achieve true presence you’ll have to make the entire experience coherent with the expectations the user brings from previous interactions with that stimulus or environment. If it looks real, then it had better feel and behave at the same level as that visual fidelity.

Richard cautions against going overboard on the visual fidelity while ignoring the overall coherence of the experience; it may actually create a better VR experience to strive for 100% coherence in your environment rather than 100% immersion through the visuals alone.

Richard talks about this spectrum from low fidelity to high fidelity by looking at some of the old 8-bit and 16-bit video games. He says that a lot of those games still hold up because they were able to maintain complete coherence and consistency with what we might expect for how those games should behave. He says that the history of video games started to tread into that awkward uncanny valley in the PS2 & PS3 console era, when 3D games were first coming around but had a number of glitches or behaviors that would take you out of the experience.

There’s still a lot more research to be done in this area, but it really holds true for me that the combination of place illusion (immersion) and plausibility illusion (coherence) were the two key factors in some of my most immersive VR experiences.

Finally, Richard talks about the potential for embodied VR telepresence to eventually replace the telephone or video chat services like Skype. He sees that once the technology gets good enough, we might even start to use it for serious meetings, such as seeing a doctor or meeting with a lawyer within a VR environment. It’s got a ways to go to get there, but he sees it as a viable near-term goal for a really powerful and potent application of this immersive technology.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Betty Mohler is a virtual reality researcher at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, where she’s the project leader of the Perception & Action in Virtual Environments research group.

Her research interests include computer graphics, space perception, locomotion in immersive virtual environments, and social interactions in VR.

At IEEE VR, she was on a panel discussing “Animation of Bodies and Identity.” Here’s the blurb for the research that she’s doing:

The Space & Body Perception research group at the Max Planck Institute for Biological Cybernetics investigates the perception of self and other body size and how to create positive illusions of self and space. We have investigated the importance of the animation of the body of multi-users for effective communication. Through this research we can discuss our experience with different motion capture technology and animation techniques for the body, as well as insights into the importance of self-identification with a self-avatar for social interactions. Additionally, we are conducting research where we use high-res body scans to create self-avatars. We can further discuss the real-time challenges for the future if a photo-realistic self-avatar is part of the virtual reality application.

Some of the topics we covered were:

  • Space and body perception
  • Positive illusions of self & collaborating with Mel Slater on the VR-HYPERSPACE project. People identify with their avatar, and that can be used to make them more comfortable. If you change the size of someone’s avatar, that impacts their real-world physical movements & can also change their attitudes.
  • Currently working with eating disorder patients to see if VR & something like a high-end Kinect can help them see their body differently
  • Even healthy people don’t have an accurate perception of their body. You perceive your body in order to act. Seeing if eating disorder patients see themselves differently
  • Helping with the doctoral consortium & presenting on social interaction challenges & potential in VR. What are the technology & human-in-the-loop challenges to social interactions?
  • Timing is crucial in social interactions because social meaning can be lost, changed, or unknown to the user. We adapt to social cues very quickly in real time. What can we do that’s unique in VR? We can assess each other’s state, and hope to reduce timing limitations.
  • Models for social interactions. Must understand how they work in the real world first, and they looked at language learning through body-language interactions. Must quantify success; for language learning, it’s guessing the right word in another language.
  • Non-verbal social interactions like gestures and posture can communicate a lot of ease and comfort. There are big telepresence implications for being able to feel like you’re sharing space with other people
  • Look for synchrony between two people. You can change, amplify, or turn off someone’s body language within a social interaction to measure its impact. Both sides provide important feedback in an interaction, and turning one side off breaks the synchrony that happens.
  • How to make the most effective avatar in VR, and how to measure that. Taking high-resolution photos and then morphing them toward a Marvel or Disney type of stylization. There’s some percentage of stylization that’s ideal. How do you navigate around the uncanny valley? Measure appeal and get feedback from people about their preferences across a spectrum of stylizations.
  • The uncanny valley can be thought of as creepiness, a sense that something is not right. It’s about rules that we learn over our lives: we have certain expectations for social interaction rules and cultural norms, and the uncanny valley is likely a product of these rules because VR NPCs subtly violate them. When something looks like a human, there are a lot of expectations that have to be met. Having holes and defects in telepresence avatars can actually help increase immersion
  • Breaks in presence, and how expectations can play into that. Low fidelity can provide more presence because we don’t have a lot of expectations for fantasy worlds.
  • Germany & France are powerhouses in VR. She works at the Max Planck Institute because she sees it as one of the best VR labs in the world. Germany’s Fraunhofer Institutes do applied research, and Germany’s car manufacturing has driven a lot of support for VR over the years
  • Redirected walking and its challenges in VR. Motivated by being a marathon runner who wanted to run through any city in the world in VR. The Virtusphere has issues if you’re not the right weight. They’ve created a Virtual Tübingen to walk around freely and explore a virtual city. Our vestibular system is not perfect, and you can take advantage of that flaw to trick someone into walking in a circle while they feel like they’re walking in a straight line (see the sketch after this list).
  • Would need a 30m x 30m space or larger to do redirected walking well. The user can always do something against what’s suggested, so multiple techniques are needed. You can use a stop sign to have someone turn around, and then rotate the environment 180 degrees.
  • Currently interested in using VR with medical patients, which needs better robustness and better battery life. Need to think about computer vision and how VR and AR will blend into a more mixed reality. There are lots of challenges, but it could make a big difference for the aging population.
  • Consumer VR and where it’s going. She doesn’t think gaming will ultimately be the primary application for VR. How do you integrate it into society so it’s as widely used as a phone? Will people start to use VR on public transportation more?
  • VR is potentially life changing, and hopefully will make her more connected, healthy and intelligent as she ages.
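
As a footnote on the redirected walking bullet above, the core trick can be sketched in a few lines: inject a small, sub-threshold rotation of the virtual scene per meter walked so a straight virtual path bends into a physical circle. The radius value here is an illustrative figure from the redirected walking literature, not from this interview.

```python
# Sketch of a curvature gain for redirected walking: rotate the virtual
# world slightly as the user walks so a straight virtual path maps onto
# a physical circle. The radius is illustrative; detection thresholds
# vary per person.
import math

CURVATURE_RADIUS_M = 22.0  # physical circle radius most users don't notice

def world_rotation_for_step(step_length_m):
    """Radians to rotate the virtual scene for one walked step (arc = s/r)."""
    return step_length_m / CURVATURE_RADIUS_M

# Walking 10 m in 0.5 m steps injects about 26 degrees of total rotation.
total = sum(world_rotation_for_step(0.5) for _ in range(20))
print(math.degrees(total))  # ~26.0 degrees
```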

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Anthony Steed is a Professor in the Virtual Environments and Computer Graphics group at University College London. He started his Ph.D. in virtual reality back in 1992, during the first wave of VR. Some of his research interests include distributed virtual reality systems and collaborative environments, 3D interaction, haptics, networked virtual reality protocols, massive models, and telepresence.

Here are some of the topics that we discussed at the IEEE VR conference:

  • Latency in VR depends on the context, and targets can range from 1 ms for visual stability up to 10-20 ms.
  • Collaborative virtual environments & asymmetric interactions in VR that result in a difference in social power. How the UI in VR can either get in the way or support interactions
  • Some of the areas of research include 3D user interfaces, haptics, sensory motor integration, & remote telepresence. Starting to build their own VR hardware
  • Fidelity of avatars in telepresence applications. High-quality avatars must also behave with high fidelity, so they tend to use lower-fidelity avatars. Full-body tracking without facial expressions results in a zombie-like experience. Telepresence is often task-based, where the avatar’s representation of identity is less important. Working with sociologists who look at how eye gaze gives cues for turn-taking in conversations
  • Most VR experiences don’t utilize our own bodies for haptic feedback. Creating external haptics is a huge problem because the devices are very limited. There’s potential for body-worn haptic devices.
  • On the intersection of neuroscience and VR: our visual system has a left-side bias for visual attention, and it’s an open question whether this effect can be recreated in VR. The impacts on body image when your body is tracked within VR. Looking at frequency bands of head movement & whether the VR display matches what our proprioceptive senses are telling us about our body’s orientation. Using VR as a platform for neuroscience research into discrepancies between sensory cues and into persistent illusions
  • There’s a lot of potential for education and training, and a lot of progress being made in this realm.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

I recently traveled to southern France to cover the biggest gathering of virtual reality academics in the world, the IEEE VR & 3DUI conferences. I was able to record over 15 hours’ worth of interviews and talk to over 50 attendees, a little over 10% of the 520 people there.

In this podcast and video, I give a brief overview of some of the highlights of the coverage that I’ll be releasing over the next 3-4 months. The video includes photos of the more than 100 academic posters that were shown as a part of the IEEE VR and 3DUI conferences.

It’s worth noting that the lack of coverage coming out of the IEEE VR conference last year was part of the reason why I started the Voices of VR podcast in the first place. I celebrated my 100th episode with an interview with Sébastien Kuntz, and gave a bit of backstory that’s worth repeating here:

I first discovered Sébastien’s work during the IEEE VR conference last year because he was tweeting about different presentations and the academic community’s response to the Facebook acquisition. Here are a couple of examples of his tweets that captivated my attention:

I wanted to hear more from Sébastien and attendees at IEEE VR, but there weren’t any consumer VR publications covering what was happening in academia or with VR researchers. In fact, there was hardly any coverage from any publication of last year’s IEEE VR conference beyond tweets from attendees, with the most prolific being the ones from Sébastien.

Because of this lack of coverage, I decided to start my own podcast. I reached out to interview a couple of other attendees of the IEEE VR conference including Eric Hodgson and Jason Jerald. I also really wanted to hear more from Oliver “Doc_Ok” Kreylos who was a respected commenter on the /r/oculus subreddit, and also happened to be working in VR within an academic context.

So with that, I hope that you enjoy my exclusive coverage of the IEEE VR conference over the next 3-6 months.

I’ll also be attending the SVVRCon conference on May 18th and 19th, and I’ll start to mix that into the IEEE VR coverage as well.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Neil Schneider is the founder of the Meant to be Seen forum at MTBS3D.com, and he talks about how a failed business led him to deal with depression by playing video games. He wanted more and more engaging and immersive experiences, which eventually led him to get into 3D gaming with shutter glasses and CRT monitors.

Neil talks about his journey of being the 3D gaming evangelist within the film and video circles that were also getting into 3D. What he ultimately wanted was a more immersive gaming experience, and he decided that in order to get it, he’d need to cultivate an online community to demonstrate that there was indeed enough demand for game developers and technology manufacturers to gain confidence that a market existed to support the required software and hardware.

He talks about how Palmer Luckey was a moderator on the MTBS3D forum, the famous thread where Palmer first announced the Oculus Rift, and how that led to connecting with John Carmack and other forum members who went on to be a part of Oculus VR.

In hindsight, Neil’s efforts to help consolidate and organize gamers interested in stereoscopic 3D immersive experiences seem to have had a pretty significant impact on the resurgence of virtual reality. He says that gamers are usually the early adopters of these technologies, and for the longest time they were discounted and ignored by the major 3D hardware manufacturers, who were more interested in trying to cash in on the expected boom in 3D televisions in the home. Obviously that didn’t work out as planned, and Neil cautions that virtual reality isn’t destined to succeed and may face the same fate if there isn’t enough compelling content to draw people into buying their own virtual reality headsets.

Neil also talks about the history of the non-profit called The Immersive Technology Alliance and its mandate to help make immersive technology successful, with technologies ranging from virtual reality and augmented reality to stereoscopic 3D. He also talks about bringing immersive technology events like Immersed to places beyond the hotbeds of technology and entertainment in Silicon Valley and Los Angeles.

Neil talks about some of his GDC highlights, whether it’s too soon to talk much about interoperability, whether there will be an effort to control the platform & distribution, and what he sees as a concerted effort to have more collaboration and open communication amongst virtual reality hardware manufacturers.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rob Morgan is a game writer, narrative designer, and voice director. He got into writing for VR experiences through his time at Sony London Studio, and has since freelanced with nDreams on their Gear VR game Gunner as well as their conspiracy-theory/moral-dilemma adventure game called The Assembly.

Rob brings a unique perspective on what’s different about writing narratives and telling stories in VR after working on a number of projects of significant scope and budget across Morpheus, Gear VR, and the Oculus DK2. One of Rob’s big takeaways is that there’s a whole level of social and behavioral interaction that we expect to have with other humans, and so you can’t treat NPCs in a VR experience the same way that you might in a 2D experience. For example, there are social cues that you expect a human to react to based upon where you’re looking, whether you seem like you’re paying attention, or whether you’re threatening other people in some way. There’s a whole range of interaction that we demand and expect, and so there’s a lot of interesting nested body language and social cues that, if added to a VR experience, could add another dimension of immersion.

Rob talks about the importance of having other human-like characters within the narrative experience in order to go beyond an interesting 5-minute tech demo, and to start to have an engaging narrative. He says that there’s a distinct lack of human characters in VR demos because it’s hard to not fall into the trap of the uncanny valley. But Rob suggests that one way to get around the lack of visual fidelity within VR is to start to add simple interactive social behaviors in NPCs to create a better sense of immersion.

He also talks about how important voice acting is within VR, because the uncanny valley goes beyond just the look and feel of the graphical representation of humans. Humans are really great at detecting fakeness, and Rob says that believable acting is a vital element of immersion; if the acting is somehow stilted or not completely authentic, it can break the experience.

This was one of my favorite interviews from GDC because Rob lists out so many different interesting open problems and challenges with storytelling in VR. He says that the rules haven’t been written yet, and so there’s a large amount of space to experiment with what works and what doesn’t work.

He eventually sees that there will be a convergence between VR, AR and wearable technology in general, and he’s excited for the possibility of creating a fictional layer of reality for people that they can interact and engage with in a way that’s just as real as the rest of their reality.

Rob presented a talk at GDC 2015 called “Written on your eyeballs: Game narrative in VR,” which can be seen on the GDC Vault here if you have a subscription.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rod Haxton is the lead software developer for VisiSonics, which created the RealSpace™ 3D Audio technology that Oculus has licensed to put 3D audio into VR.

My experience is that having 3D audio in a VR experience is a huge component of creating a sense of immersion, especially when you’re able to go beyond panning the audio between the left and right channels as you turn your head. With RealSpace™ 3D Audio, they go beyond panning to simulate elevation and whether a sound is in front of or behind you. They process audio in a way that’s analogous to doing ray tracing for your ears, where they take true material audio reflections and do calculations based upon Sabine’s reverberation equation.
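
For reference, Sabine’s reverberation equation predicts how long a room’s sound field takes to decay by 60 dB from the room’s volume and the absorption of its surfaces, which is the kind of material-dependent calculation being described:

```latex
% Sabine's reverberation equation: T60 is the time (seconds) for sound
% to decay by 60 dB, V is room volume in m^3, S_i is the area of
% surface i in m^2, and alpha_i is its absorption coefficient.
T_{60} = \frac{0.161\, V}{\sum_i S_i \alpha_i}
```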

Our ears filter sound in a way that helps us locate sounds in space. Everyone’s ears are different, and VisiSonics can create a specific profile for your ears in what’s called an HRTF, or head-related transfer function.
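
To give a feel for what an HRTF does computationally, here’s a minimal sketch of binaural rendering: convolve a mono source with the left- and right-ear head-related impulse responses (HRIRs) measured for a given direction. The placeholder HRIRs below are crude assumptions for illustration; real ones come from a measured database like the one VisiSonics maintains.

```python
# Minimal sketch of HRTF-based spatialization: convolving a mono signal
# with per-ear head-related impulse responses (HRIRs). The HRIRs below
# are crude placeholders; real ones come from measured databases.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Return a stereo signal spatialized to the HRIRs' direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.zeros((max(len(left), len(right)), 2))
    stereo[:len(left), 0] = left
    stereo[:len(right), 1] = right
    return stereo

# Placeholder HRIRs: the right ear hears a delayed, quieter copy,
# crudely mimicking a source off to the listener's left.
hrir_l = np.r_[1.0, np.zeros(63)]
hrir_r = np.r_[np.zeros(16), 0.6, np.zeros(47)]  # ~0.36 ms delay at 44.1 kHz
stereo = binauralize(np.random.randn(44100), hrir_l, hrir_r)
```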

They have a database of HRTFs, and use a default profile that works pretty well for 85% of the population. Rod talks about how VisiSonics has patented a fast capture process for a personalized HRTF, where they put speakers in your ears and have an array of microphones in a room. He envisions a future where you’d go into a studio to capture the HRTF data for your ears so that you could have a more realistic 3D audio experience in VR.

Rod also talks about:

  • Special considerations for spatializing audio & a special tool that they’ve developed to evaluate how well a sound will be spatialized.
  • Oculus’ SDK integration of RealSpace™ 3D Audio technology
  • Unity integration & VisiSonics direct integration
  • Options available for 3D audio that are provided by their SDK
  • Maximum number of objects that you could spatialize & what’s a reasonable number
  • Future features planned for the RealSpace™ 3D Audio SDK
  • Unreal Engine support coming soon
  • Originally funded by the DoD to help develop a way for nearly-blinded soldiers to do wayfinding
  • How Tesla is using their panoramic audio cameras to improve the sound profiles of cars
  • How Rod helped get RealSpace 3D audio into a game engine & how they connected with Oculus at GDC 2014
  • How they’ve developed a panoramic audio camera to be able to visualize how sound propagates
  • Good examples of 3D audio integration can be found in Technolust & demos from Unello Design & Bully! Entertainment
  • How poorly-implemented HRTFs had given them a bad name over time

This week, VisiSonics announced that Unity 5 integration is now available in their latest v0.9.10 release.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.