OSSIC debuted their latest OSSIC X headphone prototype at CES this year with one of the best immersive audio demos that I’ve heard yet. OSSIC CEO Jason Riggs told me that their headphones do a dynamic calibration of your ears in order to render near-field audio that is customized to your anatomy, and they had a new interactive audio sandbox environment where you could do a live mix of audio objects in a 360-degree environment at different heights and depths. OSSIC was also a participant in Abbey Road Studio’s Red incubator looking at the future of music production, and Riggs makes the bold prediction that the future of music is going to be both immersive and interactive.
LISTEN TO THE VOICES OF VR PODCAST
We do a deep dive into immersive audio on today’s podcast where Riggs explains in detail their audio rendering pipeline and how their dynamic calibration of ear anatomy enables their integrated hardware to replicate near-field audio objects better than any other software solution. When audio objects are within 1 meter, then they use a dynamic head-related transfer function (HRTF) in order to calculate the proper interaural time differences (ITD) and interaural level differences (ILD) that are unique to your ear anatomy. Their dynamic calibration also helps to localize high frequency sounds from 1-2 kHz when they are in front, above, or behind you.
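For readers unfamiliar with these cues: ITD is the difference in a sound's arrival time at each ear, and ILD is the difference in its level. Here is a minimal sketch of why near-field distance matters so much, using the textbook Woodworth spherical-head model for ITD and a simple inverse-distance calculation for level. The head radius is an assumed average; a personalized calibration like OSSIC's effectively tunes parameters of this kind to the individual listener.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an assumed average adult head radius

def itd_woodworth(azimuth_rad):
    """Woodworth spherical-head model: interaural time difference in seconds
    for a source at the given azimuth (0 = straight ahead)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def near_field_ild_db(distance_m, azimuth_rad):
    """Crude level difference from the 1/r law alone. Real HRTF-based ILD is
    frequency dependent; this only shows why ILD grows rapidly inside ~1 m."""
    # source position in the horizontal plane, ears on the +/- x axis
    x = distance_m * math.sin(azimuth_rad)
    y = distance_m * math.cos(azimuth_rad)
    d_near = math.hypot(x - HEAD_RADIUS, y)
    d_far = math.hypot(x + HEAD_RADIUS, y)
    return 20.0 * math.log10(d_far / d_near)

# A source 90 degrees to the side: the level difference is several times
# larger at 0.25 m than at 2 m, while the time difference barely changes.
for d in (0.25, 1.0, 2.0):
    print(f"{d} m: ITD {itd_woodworth(math.pi / 2) * 1e6:.0f} us, "
          f"ILD {near_field_ild_db(d, math.pi / 2):.1f} dB")
```

Running this shows the ILD collapsing toward a small constant as the source moves beyond a meter, which is why generic far-field HRTFs struggle with sources inside that range.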
Riggs says that they’ve been collaborating with Abbey Road Studios in order to figure out the future of music, which he believes is going to be both immersive and interactive. Audio production spans a spectrum ranging from pure live capture to pure studio production, which happens to mirror the difference between passive 360-video capture and interactive, real-time CGI games. Right now the music industry is solidly in static, multi-channel-based audio, but the future tools of audio production are going to look more like a real-time game engine than the existing fixed-perspective, flat-world audio mixing boards.
OSSIC has started to work on figuring out the production pipeline for the passive, pure live capture end of the spectrum first. They’ve been using higher-order ambisonic microphones like the 32-element em32 Eigenmike microphone array from mh acoustics, which captures a lot more spatial resolution than a standard 4-channel, first-order ambisonic microphone. Both of these approaches capture a spherical shell of sound at a location with all of its direct and reflected sound properties that can transport you to another place.
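For contrast with the 32-channel Eigenmike, a first-order ambisonic signal encodes direction into just four channels, known as B-format W, X, Y, and Z. A minimal sketch of the standard encoding equations (using the traditional 1/√2 scaling on the omnidirectional W channel):

```python
import math

def encode_first_order_ambisonics(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).
    W carries the omnidirectional pressure component; X, Y, and Z carry
    the three figure-eight directional components."""
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source directly in front (azimuth 0, elevation 0) puts all of its
# directional energy into X:
print(encode_first_order_ambisonics(1.0, 0.0, 0.0))  # W ≈ 0.707, X = 1.0, Y = 0.0, Z = 0.0
```

Higher-order arrays like the Eigenmike add many more spherical-harmonic channels on top of these four, which is where the extra spatial resolution comes from.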
But Riggs says that there’s a limited amount of depth information that can be captured and transmitted with this type of passive, non-volumetric ambisonic recording. The other end of the spectrum is pure audio production, which can deliver volumetric audio that is real-time and interactive by using audio objects in a simulated 3D space. OSSIC produced an interactive audio demo using Unity that is able to render audio objects in the near field, at less than 1 meter of distance.
The future of interactive music faces a tension similar to the one between 360 videos and interactive game environments: it’s difficult to balance the user’s agency with the process of creating authored compositions. Some ways to incorporate interactivity into a music experience are to allow the user to live mix an existing authored composition with audio objects in a 3D space, or to play an audio-reactive game like Audioshield that creates dynamic gameplay based upon the unique sound profile of each piece of music. These approaches engage the agency of the user, but neither of them provides any meaningful way for the user to impact how the music composition unfolds. Finding that balance between authorship and interactivity is one of the biggest open questions about the future of music, and no one really knows what that will look like. The only thing that Riggs knows for sure is that real-time game engines like Unity or Unreal are going to be much better suited to facilitate this type of interaction than the existing tools of channel-based music production.
Multi-channel ambisonic formats are becoming more standardized on the 360-video platforms from Facebook and Google’s YouTube, but they still only output binaural stereo. Riggs says that he’s been working behind the scenes to provide higher-fidelity outputs for integrated immersive hardware solutions like the OSSIC X, since these platforms currently aren’t using the best spatialization process to get the best performance out of the OSSIC headphones.
As far as formats for the pure production end of the spectrum, there is no emerging standard for an open, object-based audio format. Riggs hopes that one will eventually come, and that there will be plug-ins for OSSIC headphones and software to dynamically change the reflective properties of a virtualized room, or to dynamically modulate properties of the audio objects.
As game engines eventually move to real-time, physics-based audio propagation models where sound is constructed on the fly, Riggs says that this will still need good spatialization from integrated hardware and software solutions; otherwise it’ll just sound like good reverb without any localized cues.
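As a rough illustration of the distinction Riggs is drawing: a propagation model can tell you when and how loudly each reflection arrives, but without HRTF spatialization those arrivals carry no directional cues, which is exactly what reverb sounds like. Here is a toy first-order image-source sketch for an axis-aligned "shoebox" room; all dimensions and positions are invented for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def first_order_reflections(room, source, listener):
    """Image-source method: mirror the source across each of the six walls
    of an axis-aligned shoebox room and return sorted (delay_s, gain) pairs
    using simple 1/r attenuation. A spatializer would additionally need the
    arrival *direction* of each image source, which is what HRTF-based
    rendering supplies."""
    arrivals = []
    for axis, size in enumerate(room):
        for wall in (0.0, size):
            image = list(source)
            image[axis] = 2.0 * wall - image[axis]  # mirror across the wall
            d = math.dist(image, listener)
            arrivals.append((d / SPEED_OF_SOUND, 1.0 / d))
    return sorted(arrivals)

room = (5.0, 4.0, 3.0)  # hypothetical room dimensions in meters
for delay, gain in first_order_reflections(room, (1.0, 1.0, 1.5), (3.0, 2.0, 1.5)):
    print(f"{delay * 1000:.1f} ms, gain {gain:.2f}")
```

The delay/gain list is the propagation model's output; spatialization is the separate step of filtering each arrival through a direction-dependent HRTF before summing.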
At this point, audio is still taking a backseat to the visuals with a limited 2-3% budget of CPU capacity, and Riggs hopes that there will be a series of audio demos in 2017 that show the power of properly spatialized audio. OSSIC’s interactive sound demo at CES was the most impressive example of audio spatialization that I’ve heard so far, and they’re shaping up to be a real leader in immersive audio. Riggs said that they’ve gotten a lot of feedback from game studios that they don’t want to use a customized OSSIC audio production solution; they want to use their existing production pipelines and have OSSIC be compatible with them. So VR developers should be getting more information about how to best integrate with the OSSIC hardware in 2017, as the OSSIC X headphones will start shipping in Spring of this year.
Virtual Reality has the potential to enable so many amazing utopian futures, but it also has the potential to become one of the most intimate surveillance technologies that could create a Big Brother dystopia of political and economic control. But if privacy considerations are built into virtual reality technologies from the beginning, then Accomplice investor Sarah Downey argues that the metaverse could actually be one of the last bastions of privacy that we have in our lives.
LISTEN TO THE VOICES OF VR PODCAST
Downey has a legal background in privacy and previously worked at a privacy start-up, and she’s currently investing in virtual reality technologies that could have privacy implications such as the brain-control interface company Neurable that can detect user intent.
Privacy Facilitates Authentic Free Speech
Downey believes that privacy is a fundamental requirement for freedom of expression. In order to realize the full potential of the First Amendment’s guarantee of free speech in the United States, you need the Fourth Amendment’s protection of a reasonable expectation of privacy. She makes the point that our digital footprints are starting to bleed into our real lives, and that this will lead to less authentic interactions in the real world.
This has a chilling effect that creates what Downey calls a “fraudulent shell that limits authenticity.” The erosion of a truly private context has created a stale and boring environment that has limited her authentic expression on sites like Facebook, and she warns that our unified digital footprints will start to spread into the real world as augmented reality technologies with facial recognition start to spread. As we start to lose the feeling of anonymity in public spaces, we’ll all be living out the first episode of season three of Black Mirror, “Nosedive,” where every human interaction is rated on a five-star scale.
Weakening the Fourth Amendment by Sharing Private Data
Downey also argues that the Fourth Amendment is based upon a culturally reasonable expectation of privacy, which means that our collective use of mobile and web technologies has had a very real legal effect on our constitutional rights. There’s a subjective interpretation of the law that’s constantly evolving as we use technology to share the more intimate parts of our lives. If we feel confident enough to share something with a third-party company, then it’s not really legally private in the sense that it can be subpoenaed and used within a court of law.
Creating Unified Super Profiles
There are different classes of private information, and so companies like Google and Facebook are able to collect massive behavioral histories of individuals as long as they don’t share access to the personally identifiable information. They can anonymize and aggregate collective behavioral information that’s provided to their advertising customers, which enables them to create a business model that is based upon detailed surveillance of all of our online behavior.
As of right now, none of the information gathered by virtual reality technologies has been definitively classified as “personally identifiable information,” which enables VR hardware companies and application developers to capture and store whatever they like. But once there are eye-tracking technologies with more sophisticated facial detection, or eventually brain-control interfaces, VR technology will have the capability to capture and store extremely intimate data including facial expressions, eye movements, eye gaze, gait, hand & head movements, engagement, speech patterns, emotional states, and brainwaves from EEG sensors, as well as attention, interest, intent, and perhaps eventually even our thoughts.
VR Biometric Data is Not Personally Identifiable (Yet)
There are existing biometric identifiers gathered from your body that can personally identify you, including facial features, fingerprints, hand geometry, retina, iris, gait, signature, vein patterns, DNA, voice, and typing rhythm.
Right now your gait, your voice, or your retina or iris as captured by an eye-tracking camera could prove to be personally identifiable biometric data. It’s also likely that the combination of other factors like your body, hand, and head movements taken together may form a unique kinematic fingerprint that could personally identify you given the proper machine learning algorithm. This could mean data is being anonymously stored today that could eventually be aggregated to personally identify you, which is a special class of PII that requires special legal protections.
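To make the kinematic-fingerprint risk concrete, here is a toy sketch of re-identification: matching the movement statistics of an "anonymous" session against previously enrolled profiles. The feature set, names, and numbers are all invented for illustration, and simple nearest-neighbor matching stands in for what would really be a trained machine learning model over far richer telemetry.

```python
import math

# Hypothetical per-user movement profiles: (mean head speed in m/s,
# mean head height in m, mean hand-to-head distance in m).
enrolled = {
    "user_a": (0.42, 1.71, 0.55),
    "user_b": (0.61, 1.58, 0.48),
    "user_c": (0.35, 1.82, 0.60),
}

def identify(features):
    """Match an 'anonymous' session to the closest enrolled profile
    by Euclidean distance in feature space."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(enrolled, key=lambda user: dist(enrolled[user], features))

# A new session recorded without any name attached, whose movement
# statistics happen to resemble user_b's:
print(identify((0.60, 1.59, 0.50)))  # → user_b
```

The point of the sketch is that "we don't store names" offers little protection once enough behavioral data is retained: the behavior itself becomes the identifier.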
OpenBCI co-founder Conor Russomanno told me that EEG brainwave data may turn out to have a unique fingerprint that can never fully be anonymized and could potentially be traced back to individuals. What are the implications of storing massive troves of physical data gathered from VR headsets and hand-tracked controllers if that data turns out to be personally identifiable? Downey suggests that the best answer from a privacy perspective is to not record and store the information in the first place.
VR Companies Are Not Being Proactive with Privacy
There’s a set of self-regulatory principles for online behavioral advertising that companies have collectively agreed to follow to help with the Federal Trade Commission’s oversight of how companies protect the privacy of individuals. But up to this point, none of the major virtual reality companies have taken a proactive approach to educating users, being transparent, and providing consumer controls to opt out of what may be recorded and stored from a VR system.
Oculus’ Independence from Facebook is Fading
Oculus’ privacy policy includes the following language: “Lastly, Facebook owns Oculus and helps run some Oculus services, such as elements of our infrastructure, but we’re not sharing information with Facebook at this time. We don’t have advertising yet and Facebook is not using Oculus data for advertising – though these are things we may consider in the future.”
Just because Oculus hadn’t shared information with Facebook as of early 2016 doesn’t mean that they won’t, or that they don’t plan to. In fact, it’s likely that they will; otherwise they wouldn’t have included the legal language allowing them to do so.
The Metaverse as the Last Bastion of Privacy?
As these online profiles start to merge into our real world with augmented reality technologies, it’s going to vastly reduce our sense of privacy. But Downey is optimistic that a virtual reality metaverse could become one of the last bastions of privacy that we have, if VR technologies are architected with privacy in mind.
She encourages VR application and hardware developers to minimize data collection and to retain as little data as possible. She also suggests not personally identifying people, and using decentralized payment options like Bitcoin or other cryptocurrencies so as not to tie information back to a singular identity. Finally, she suggests avoiding social sign-ins so that people’s actions aren’t tied back to a persistent identity that’s permanently stored and shared forever.
Some of the open questions that should be asked of virtual reality hardware and software developers are:
What information is being tracked, recorded, and permanently stored from VR technologies?
Is this information being stored with the legal protections of personally identifiable information?
What is the potential for some of the anonymized physical data to end up being personally identifiable using machine learning?
Why haven’t Privacy Policies been updated to reflect what VR data is being tracked and stored? If nothing is being tracked, then are they willing to make explicit statements saying that certain information will not be tracked and stored?
What controls will be made available for users to opt-out of being tracked?
What will be the safeguards in place to prevent the use of eye tracking cameras to personally identify people with biometric retina or iris scans?
Are any of our voice conversations being recorded in social VR interactions?
Can VR companies ensure that there are any private contexts in virtual reality where we are not being tracked and recorded? Or is recording everything the default?
What kinds of safeguards can be imposed to limit the tying of our virtual actions to our actual identities in order to preserve our Fourth Amendment rights?
How are VR application developers going to be educated about, and held accountable for, their responsibilities regarding the types of sensitive personally identifiable information that could be recorded and stored within their experiences?
The technological trend over the last ten to twenty years has been that our behaviors with technology have been weakening our Fourth Amendment protections of a reasonable expectation of privacy. As we provide more and more intimate data that VR and AR companies record and store, are we yielding still more of our rights to a reasonable expectation of privacy? If we completely erode our right to privacy, it will have serious implications for our First Amendment rights to free speech.
As virtual reality consumers, we should be demanding that VR companies DO NOT record and store this information in order to protect us from overreaching governments or hostile state actors who could capture this information and use it against us.
In order to have freedom of expression in an authentic way, we need to have a container of privacy. Otherwise, we’ll be moving towards the dystopian futures envisioned by Black Mirror, where our digital footprints bleed over into our real lives and constrain all of our social and economic interactions.
Is VR going to be the most powerful surveillance technology ever created or the last bastion of privacy? It’s up to us to decide. We need to pose these privacy challenges to VR companies now, before it’s too late.
The most significant VR announcement from CES 2017 was the Vive Tracker, a modular, Lighthouse-tracked “puck” attachment that will enable users to track additional objects within VR experiences. It has the potential to drive a lot of new innovative applications and gameplay for consumers, to kickstart many more mixed reality livestreams, and to grow the overall VR ecosystem as there will be more high-end B2B applications, advertising campaigns, and VR arcade peripherals.
I had a chance to catch up with HTC’s Dan O’Brien, who is the general manager for the Americas, Europe, the Middle East, and Africa. We talked about HTC’s emphasis on growing the ecosystem in 2017 with the new Vive Tracker, and what types of applications he expects it will enable. We also talk about some of the privacy implications of virtual reality, and about HTC’s approach of minimizing, anonymizing, and protecting any private data that is collected. There are amazing new opportunities for application developers to learn more about individual consumers than ever before, but with that power comes a responsibility to be conscientious enough not to record and store more identifiable information than is necessary.
LISTEN TO THE VOICES OF VR PODCAST
O’Brien used to be the Global Director of Compliance and Consumer Privacy & Security for HTC, and so privacy is near and dear to his heart. He says that privacy has been an important priority for HTC from the beginning, since they’ve had a privacy engineering team working to anonymize, minimize, and protect any customer information that’s captured.
O’Brien says that there are three different layers of security: the operating system, the driver software that runs the VR hardware, and finally the application layer. There are privacy considerations at each layer, and it’s up to each application developer to decide what information to capture and keep from their users. Once eye tracking becomes an essential part of higher-end VR systems, the fidelity of available insights will be both vast and powerful. O’Brien says:
I sit in talks sometimes where I’m the one saying to the publishers, ‘Hey, you’re going to be able to have a one-for-one relationship with a consumer that you’ve never had before with VR. You’re going to be able to learn so much more about what they like, what they dislike, whether that ad worked, whether they were interested in that product. You’re going to be able to learn so much more about your consumer if you’re doing the right things. It’s no longer going to be about clickthroughs. You’re going to know if they actually looked at it, and picked it up and interacted with it.’ But on the flip side of that is ‘How much of that information should you be grabbing? And what should you be holding onto? Then once you hold it, and once you draw that information in, how well are you protecting it?’
Whether it’s the developer of applications, hardware, peripherals, or the operating system, O’Brien says that “Some people take too much information. They really don’t need to have all of that.” He’s calling for VR hardware and software developers to be very conscientious about what information they’re collecting and how well it’s being protected, especially since the Federal Trade Commission has the power not only to fine companies, but also to stop them from selling or importing their products.
He says that consumer privacy is a contract that fosters trust with consumers, and that it’s a relationship that is directly connected to their brand and whether or not consumers will recommend their product to others. But privacy is also about protecting sensitive consumer information from hostile hacks or a potentially overreaching government.
Throughout 2017, there will be more dialog between government regulators and virtual reality companies to explore the potentials and risks. Virtual reality has the potential to enable so many amazing new capabilities, but also a lot of new risks from collecting and protecting sensitive biometric data. O’Brien says, “It’s a balance because you don’t want regulation that stops innovation. You don’t want too many rules that stops just what’s getting started to really flourish into what it could be, what it should be, and even what it will be.” He says that there are already a lot of existing consumer protections for mobile phones and gaming software that can be built upon, and that it’s more a strategy of incremental improvement rather than needing to build something entirely new.
HTC and others will continue to sit down with government regulators throughout 2017 to explain critical concepts, existing approaches to protecting information, as well as contextualizing software concepts like heat maps that have additional implications when they’re applied to virtual reality.
There have also been a lot of larger trends within the tech industry that have been moving towards surveillance-based business models that correlate all of your Internet activity into a singular identity, and I’ll be continuing to explore some of the privacy implications of virtual reality in future interviews.
Here’s a promo video of one of the Vive Tracker applications, by DotDotDash, that was presented at HTC’s demo area at CES:
When I attended the Experiential Technology Conference in May 2016, I heard from a number of commercial off-the-shelf brain-control interface manufacturers that their systems would not natively work with VR headsets because there are some critical portions on the head that are occluded by VR headset straps. Companies like Mindmaze VR are building in integrated EEG hardware primarily for high-end medical applications, and perhaps we’ll start to see more EEG hardware integrations in 2017.
Qneuro is an educational company that was exhibiting at the Experiential Technology Conference, and they had some early VR prototypes that used EEG as input within a lab environment. Qneuro founder Dhiraj Jeyanandarajan is a clinical neurologist and works as a neurophysiologist who looks at real-time electrophysiological signals to make corrections during brain or spinal surgeries. He’s also a father who got frustrated with the educational games that were available for his two kids, and so he started Qneuro to create educational games that integrated real-time EEG feedback.
Qneuro has been building 3D environments in Unity and launching them on the iPad, and they’re still waiting for a more integrated hardware solution before launching their virtual reality version. I had a chance to catch up with Jeyanandarajan at the XTech Conference to see what they’re able to do with real-time EEG feedback within a lab environment to improve the learning process within their educational game.
Our research facility and team continue to investigate key concepts within cognitive load theory such as efficiency in learning, cognitive load, multi-modality, schemas, automation, the split-attention effect, guided instruction, and modifications to instructional design from novices to experts, through research data gathered in real time from our own experiments and primary research.
It’s an open question as to how effective brain-control interfaces (BCI) will be in providing real-time interactions within VR environments. OpenBCI co-founder Conor Russomanno told me in May that the real power of EEG data from brain-control interfaces is not in real-time interactions, but rather it’s the electromyography (EMG) signals that are much stronger and easier to detect for real-time interactions:
Russomanno: I think it’s really important to be practical and realistic about the data that you can get from a low-cost dry, portable, EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. In many cases, it’s just not a practical use of the technology. I’m not saying that it’s not cool, but it’s important to understand that this technology is very valuable for the future of humanity, but we need to distinguish between the things that are practical and the things that are just blowing smoke and getting people excited about the products.
With EEG, there’s tons of valuable data that is your brain over time in the context of your environment, not looking at EEG or brain-computer interfaces for real-time interaction, but rather looking at this data and contextualizing it with other biometric information like eye-tracking, heart rate, heart rate variability, respiration, and then integrating that with the way that we interact with technology, where you’re clicking on a screen, what you’re looking at, what application you’re using.
All of this combined creates a really rich data set of your brain and what you’re interacting with. I think that’s where EEG and BCI is really going to go, at least for non-invasive BCI.
That said, when it comes to muscle data and micro expressions of the face and jaw grits and eye clenches, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems, people with ALS, quadriplegics.
It doesn’t make sense to jump past all of this muscle data directly to brain data when we have this rich data set that’s really easy to control for real-time interaction. I recently have been really preaching like BCI is great, it’s super exciting, but let’s use it for the right things. For the other things, let’s use these data sets that exist already like EMG data.
Voices of VR: What are some of the right things to use BCI data then?
Russomanno: As I was alluding to, I think looking at attention, looking at what your brain is interested in as you’re doing different things. Right now, there are a lot of medical applications such as neurofeedback training for ADHD, depression, and anxiety, and then also new types of interactivity, such as someone who’s locked in being able to practically use a few binary inputs from a BCI controller. In many ways, I like to think that the neuro revolution goes way beyond BCI. EMG, muscle control, and all of these other data sets should be included in this revolution as well, because we’re not even coming close to making full use of these technologies currently.
In the short term, it’s still an open question how much value EEG data will be able to provide within the context of a real-time game. The quality and fidelity of the data depend upon how many EEG sensor contact points are able to make a direct connection to the skin on your scalp. More sensors provide better data, but may be more inconvenient to use. Since the most crucial contact points are in the same places as the VR headset straps, using EEG as an input to a VR experience may require a custom integrated headset like Mindmaze’s.
The Neurogaming Conference rebranded itself last year to become the Experiential Technology Conference & Expo, perhaps as a de-emphasis of real-time interactions in games and more of a focus on other medical or educational applications. There were also a lot of companies at the Experiential Technology Conference who were using machine learning techniques to extract meaning from the noisy and complicated EEG signals coming from BCI devices. These AI techniques could also be used to detect the level of attention as well as different emotional states.
There are currently a lot of challenges in using EEG or EMG data to control VR experiences, but there is also a lot of potential, ranging from individualized educational applications and medical applications to personalized narratives based upon your emotional reactions and biofeedback experiences that help deepen contemplative practices.
Brian Van Buren is a narrative designer at Tomorrow Today Labs, and he’s also a wheelchair user who has been evangelizing how to make virtual reality experiences more accessible. I had a chance to catch up with him at the Intel Buzz Workshop in June, where we talked about some of his accessibility recommendations for other virtual reality developers, some good and bad examples of accessibility in VR, as well as some of the things that VR technologies enable him to do in a virtual world that he can’t do in the real world.
LISTEN TO THE VOICES OF VR PODCAST
One of the primary recommendations that Van Buren gives is that you can’t assume the dimensions of your user. Just because he’s 4 feet 6 inches tall doesn’t mean that he should be automatically assigned a child’s body avatar. Also, because he’s primarily sitting down, he’d still like to be able to participate in games that require you to crouch down and duck. Some of the experiences that handle this well include Hover Junkers, which provides a head model adjustment for people of different heights, and he’s also able to play Space Pirate Trainer. The little human mode in Job Simulator will also raise the head a foot and a half to provide access to both children and people in wheelchairs.
Van Buren recommends against placing objects on the ground, as they’re essentially game-breaking bugs for people in wheelchairs, and also generally not ergonomically comfortable for most people. Placing buttons at waist height for standing users has the side effect of being fairly comfortable for people who are sitting or in a wheelchair, while highly placed objects are completely out of reach. There are Americans with Disabilities Act (ADA) regulations that most federal and government buildings have to follow, and virtual reality environment developers should keep some of these design constraints in mind.
He says that it’s easier to take accessibility into consideration at the design stage rather than afterwards, and so the sooner you account for mobility constraints, the better. There are tradeoffs in including kinesthetic gameplay mechanics like crouching, crawling, bending, and reaching up that may provide a deeper sense of presence for able-bodied people of a certain height, but Van Buren asks developers to consider whether those mechanics are so vital to the game that it’s worth making the game inaccessible to a portion of people.
For a more in-depth discussion on “Making VR and AR Truly Accessible,” be sure to also check out this Virtual Reality Developer’s Conference panel discussion featuring Minds + Assembly’s Tracey John, Radial Games’ Andy Moore, Tomorrow Today Labs’ Brian Van Buren, and independent designer Kayla Kinnunen:
In July, I was invited to give a talk about virtual reality at the biennial Illustration Conference with indie VR developer Ashley Pinnick, who trained as an artist and illustrator. On today’s Voices of VR podcast, we talk about the process of moving from 2D illustration to 3D VR art, some potential strategies for artists to get more involved in the process of virtual reality development, and the role of artists in creating digital avatars on the safe side of the uncanny valley.
LISTEN TO THE VOICES OF VR PODCAST
One of the big contributions that artists can make is to create stylized VR avatars that feel comfortably outside of the uncanny valley. Studies have shown that people prefer some level of stylization in their avatars, and so illustrators are particularly well-suited to help people construct their digital identities in VR and AR.
I expect to see a lot more breakout VR art experiences created by trained artists in 2017, and Sketchfab will likely play a large role in helping to discover 3D artist talent, just as YouTube has helped independent video creators be discovered.
Creating art in VR is turning out to be one of the big cultural contributions of virtual reality, and I told the artists at the Illustration Conference that I’m really interested to see what type of worlds and characters they build and stories they tell.
Here’s a video of some of the major points that I made at the 2016 Illustration Conference:
There’s been more than 30 years of research into the medical applications of virtual reality, but it wasn’t until the recent consumer VR revolution that the technology became cost-effective enough to use. The research shows that the combination of immersion with interactivity can help reduce pain by up to 70%, and in some studies performs as well as or better than morphine. AppliedVR was spun out of Lieberman Research Worldwide, and so they’ve been looking at previous medical VR research, creating new VR experiences, and then doing clinical research studies to prove out the efficacy of using virtual reality to manage pain and anxiety before, during, and after hospital procedures.
I had a chance to catch up with the President of AppliedVR Josh Sackman at the Experiential Technology Conference in May 2016. We talked about how VR can improve the overall patient experience metrics, the clinical metrics that VR could impact, and how VR can create a sense of connectedness, pleasure, and empowerment in patients. We also discuss the future of integrating biometric feedback like heart rate variability as a control input for VR experiences.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to try out AppliedVR’s guided meditation application, and I was struck by how engaging they were able to make it. A static VR scene is usually not very interesting, but AppliedVR changes the lighting throughout the course of a sped-up sunrise to convey a sense of progress within the scene. Google Earth VR also uses a lot of dynamic lighting changes throughout their pre-recorded animation sequences to provide a sense of progress and a feeling of a beginning, middle, and end to an individual scene. AppliedVR also uses music from a sound healer and a guided meditation script to help with the pacing and sense of progress within their experience.
Here’s a 360 video with some of the AppliedVR experiences:
The other thing that was really interesting was to hear more about the changes in how hospitals are getting reimbursed based upon new patient satisfaction and healing efficiency metrics. If medical applications of virtual reality can demonstrate an impact on some of these key metrics, and potentially even save insurance companies money by reducing spending on opioid drugs or the length of hospital stays, then VR could be seen as a way to increase efficiencies and improve the overall patient experience.
This interview was conducted in May, and in June 2016 AppliedVR announced that they’re using VR technologies in the Spine Center, Department of Surgery, and Orthopaedic Center at Cedars-Sinai Medical Center in Los Angeles.
Larry Hodges is a professor of human computer interaction at Clemson University, and he was one of the co-chairs of the very first IEEE VR academic conference in 1999. Hodges also co-founded a start-up named Recovr, which originated from a successful research project into stroke recovery done by his student Austen Hayes. Inspired by the latest research into neurorehabilitation & skill relearning, they created a Kinect-based experience that gamifies the rehab exercises that 85% of people skip because they’re either too boring or too painful. They’re able to inspire stroke recovery patients to stick with extended rehab practice while also progressively increasing the difficulty of the tasks as the patients slowly get better.
I had a chance to catch up with Larry at the IEEE VR academic conference in March 2016, where we talked about the history of the IEEE VR conference, his work with stroke rehabilitation and virtual therapy along with some of the results that they’re seeing, as well as some of his research on treating PTSD with virtual reality.
LISTEN TO THE VOICES OF VR PODCAST
Stroke victims may lose the full range of movement within their limbs, but Hodges says that this is more of a brain problem than a physical problem. Recovr created a game called “Duck Duck Punch” that amplifies the movement of patients’ arms, so they’re able to feel like they’re accomplishing something with their limited movements. Rather than being frustrated at not being able to accomplish anything of significance, the amplification of movement within the game restores a feeling of robust agency within the patient and inspires them to do up to 600 repetitions per day. This is the level of extended practice that is required to relearn a skill and to effectively rewire the brain through many repetitions of progressively harder tasks. Hodges says that they’re able to see actual physical progress after as few as five sessions in some people who have been paralyzed for years. The supervising doctor can adjust the amplification multiplier as patients start to regain more and more of their range of motion and motor control.
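The movement-amplification mechanic Hodges describes can be sketched in a few lines. This is a hypothetical illustration only; the function name, units, and gain values are my own assumptions, not Recovr’s actual implementation:

```python
# Hypothetical sketch of therapeutic movement amplification, in the spirit
# of "Duck Duck Punch": a patient's small real-world arm displacement is
# scaled by a clinician-set gain before it drives the in-game avatar.

def amplify_reach(real_displacement_cm: float, gain: float) -> float:
    """Map the patient's actual arm displacement to the avatar's displacement.

    A gain above 1.0 makes limited movements feel effective; the supervising
    doctor lowers the gain as the patient's range of motion improves.
    """
    if gain < 1.0:
        raise ValueError("gain below 1.0 would shrink the patient's movement")
    return real_displacement_cm * gain

# Early in therapy: a 5 cm reach appears as a 15 cm reach in the game.
avatar_reach = amplify_reach(5.0, 3.0)   # → 15.0

# Later, as motor control returns, the doctor dials the gain back toward 1.0.
avatar_reach_later = amplify_reach(12.0, 1.5)   # → 18.0
```

The key design point is that the mapping is adjustable per patient and per session, so the perceived difficulty of the game can track the patient’s actual recovery.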
Virtual reality therapy is one of the medical use cases where the benefits are clear, and the next big challenge for Recovr is convincing insurance companies to pay for this virtual treatment. Hodges says that there are some people who would pay just about anything to get access to this treatment, and that he’s received more feedback about the lives he’s changed here than in any other domain of computer science he’s worked in. Perhaps eventually, for some treatments, it’ll be cheaper and more efficient for insurance companies to pay for the installation of virtual reality systems within homes, helping to usher in the era of distributed telemedicine.
I had a chance to catch up with Félix Lajeunesse and Paul Raphaël at Oculus Connect 3, where they were debuting their third VR collaboration with Cirque du Soleil, KÀ: The Battle Within, which features some of the most mind-bending, physics-based acrobatic choreography that I’ve ever seen. They continue to innovate with their camera technology to do things that other camera systems cannot, including maintaining decent stereoscopic effects within the near field, better dynamic range, and control over the framerate. In their Through the Ages: President Obama Celebrates America’s National Parks VR experience, they adapted their camera so that it could capture some awe-inspiring, time-lapse sequences at Yosemite National Park.
LISTEN TO THE VOICES OF VR PODCAST
They talked to me about their process of cultivating presence through treating the viewer as a character within the scene, paying attention to camera height, and having long and slow cuts that allow you to really sink into a location. Félix told me that their overarching philosophy is that “everything we do is experiential.” They’re not trying to direct attention, but rather provide many interesting opportunities for you to pay attention to a number of different unfolding processes within any given scene. This is something that André Lauzon told me they always do in Cirque du Soleil productions, and so it’s a natural fit to translate this type of live performance experience into VR. Félix & Paul are always focusing on cultivating that sense of presence and creating an experience, whether it’s a branded advertisement for Jurassic World, a documentary about LeBron James’ pre-season training, or MIYUBI, a 40-minute scripted comedy that’s premiering at Sundance 2017.
The quality and caliber of presence that Félix & Paul are able to cultivate within 360 video is way beyond what I’ve seen anyone else doing in the space. A big part of it has to do with their camera technology innovations, but it’s also because they have a creative philosophy that involves deeply listening to the unique affordances of the VR medium. Their continued innovation is a big reason why Twentieth Century Fox and The Fox Innovation Lab have partnered with Felix and Paul Studios to develop VR experiences that are based upon Fox IP. Based upon my previous conversation with 20th Century Fox Futurist Ted Schilowitz and what they did with The Martian VR experience, I expect that they’re going to be treating VR as more than just advertisements for movies, and will instead explore how to create VR experiences that stand on their own. Felix and Paul are premiering the longest scripted VR content to date at Sundance in January with MIYUBI, and their VR-specific adaptations of Cirque du Soleil performances and camera technology have proven that they’re some of the biggest innovators dedicated to evolving the language of storytelling within VR.
Dr. Mary Whitton has been working with interactive computer graphics since 1978 and virtual reality since 1994 in collaboration with Dr. Fred Brooks at the University of North Carolina at Chapel Hill. I had a chance to talk with Mary back in 2015 at the IEEE VR conference in France about her fundamental research into VR locomotion, haptics, and cultivating a sense of presence.
There have been a lot of military grants over the years to research the impact of haptics, spatialized audio, latency, and field of view on cultivating presence within VR. She’s also investigated a number of different issues around walking-in-place locomotion techniques, passive haptics & redirected touch, as well as the nuances of Mel Slater’s presence theory with the place illusion and plausibility illusion.
LISTEN TO THE VOICES OF VR PODCAST
In 2015, the IEEE Visualization and Graphics Technical Committee (VGTC) awarded a technical achievement award to Oculus’ Brendan Iribe, Michael Antonov, and Palmer Luckey. At the end of this interview, Mary Whitton called out the role that Mark Bolas and USC ICT may have played in the creation of the Oculus Rift.