There’s been a virtual reality training revolution slowly brewing over the last five years, and STRIVR has been at the forefront of innovation, working with everyone from elite athletes in the NFL to thousands of Walmart employees. STRIVR has been implementing a number of proofs of concept and initial deployments, and it sounds like the results are positive enough for many of their top clients to continue to invest in and expand their virtual reality trainings. STRIVR has been keeping pretty tight-lipped about the details and extent of their clients’ VR training, but they were able to announce in September 2018 that Walmart was purchasing 17,000 Oculus Go VR HMDs for training purposes, after an initial report in 2017 announcing that Walmart would be bringing VR training to all 200 Walmart Academy training centers.

I had a chance to have an in-depth discussion with STRIVR’s Chief Science Officer Michael Casale at the Games for Change Conference in New York City on June 19, 2019. I’ve had two previous conversations with Casale in episodes #429 and #595, as well as with Stanford’s Virtual Human Interaction Lab founder Jeremy Bailenson in episode #616. Casale told me about how the training for elite quarterbacks and the training for Walmart employees have an amazing number of similarities from a learning perspective.

Here’s a quick overview of all of the ground I was able to cover with Casale: a lot more high-level details about the positive response to VR training, what they’re finding after training many thousands of people within VR, some of the underlying open questions of neuroscience and the nature of learning, how VR is allowing people to upskill and have more agency over their career paths, implementing best practices for spaced repetition (sketched below), how eye tracking may be able to help determine expertise, the frontiers of biophysical data and what EEG might be able to contribute to learning in VR, ensuring that there’s enough variation in learning, how coaching is evolving with real-time feedback from specific contexts and experiences in VR, what the mobile and tetherless Oculus Quest will mean for the future of training, and finally how VR is a behavioral scientist’s dream come true in being able to simulate many aspects of the deeper context of an experience.
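
As a point of reference for the spaced repetition best practices mentioned above, here’s a minimal sketch of an SM-2-style interval scheduler. It’s a simplified textbook illustration with made-up parameter values, not STRIVR’s actual system:

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    interval_days: float = 1.0  # days until the next practice session
    ease: float = 2.5           # interval growth factor

def schedule_next_review(item: ReviewItem, recall_quality: int) -> ReviewItem:
    """Update the review interval after a practice session.

    recall_quality follows the SM-2 convention: 0 (total failure) to 5 (perfect).
    """
    if recall_quality < 3:
        item.interval_days = 1.0  # failed recall: practice again soon
    else:
        # Successful recall: lengthen the interval, nudging ease down for shaky recalls.
        item.ease = max(1.3, item.ease - (5 - recall_quality) * 0.15)
        item.interval_days *= item.ease
    return item

item = ReviewItem()
for quality in [5, 4, 5]:            # three successful practice sessions
    item = schedule_next_review(item, quality)
print(round(item.interval_days, 1))  # interval grows roughly 1 -> 2.5 -> 5.9 -> 13.8 days
```

The key property is that each successful recall pushes the next practice session further out, which is exactly what makes spaced repetition cheap to administer at scale.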

There are still a lot of questions about how to assess and quantify expertise, and how to determine when someone is truly ready to move from virtual training into actual deployment. But the technological roadmap for VR suggests that a lot more biometric data is going to be made available, and eye tracking looks like it will have some of the most profound impacts, especially once they’re able to compare the eye gaze patterns of experts with those of novices. A lot of indications point towards the immersive industry heading into a larger revolution with virtual reality training, especially with the early successes that STRIVR is reporting. There’s a lot of technological innovation that’s still left to be done and integrated with the best practices in learning, but it looks like STRIVR is benefiting from its early-mover status in the industry, and they’re currently focusing on scaling out their trainings to larger and larger deployments. We’ll be hearing more about Oculus’ enterprise offerings at Oculus Connect 6, and get more data points as to how VR is being adopted within the enterprise.
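
On the expert-versus-novice gaze comparison, one simple way it could be quantified (my own hedged sketch, with hypothetical retail-scenario areas of interest, not STRIVR’s method) is to bucket gaze samples into areas of interest and compare the resulting dwell-time distributions:

```python
import numpy as np

def dwell_distribution(gaze_samples: list[str], aois: list[str]) -> np.ndarray:
    """Fraction of gaze samples spent on each area of interest (AOI)."""
    counts = np.array([gaze_samples.count(a) for a in aois], dtype=float)
    return counts / max(counts.sum(), 1.0)

def gaze_similarity(novice: list[str], expert: list[str], aois: list[str]) -> float:
    """1 - total variation distance between dwell distributions (1.0 = identical)."""
    p = dwell_distribution(novice, aois)
    q = dwell_distribution(expert, aois)
    return 1.0 - 0.5 * float(np.abs(p - q).sum())

aois = ["shelf_label", "customer", "handheld_scanner"]   # hypothetical AOIs
novice = ["customer"] * 8 + ["shelf_label"] * 2
expert = ["shelf_label"] * 6 + ["handheld_scanner"] * 4
print(gaze_similarity(novice, expert, aois))             # low score = very different scanning
```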

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Healium XR won the XR pitch competitions at CES and SXSW this year for their immersive experiences that respond to biometric data. I saw the first demo of Healium XR (powered by StoryUp) back in 2017 at Oculus Connect 4, which I talk about more in this previous interview with CEO and Chief Storyteller Sarah Hill, and I was also able to talk to their biometric data advisor Dr. Jeff Tarrant at the Awakened Futures Summit.

I ran into the Healium XR team at SXSW 2019 just after they had given their pitch to the judges, and I had a chance to try out their latest demo on the Oculus Go with a Muse headband, and then have a conversation with Hill, UE4 developer Ricky Rockley, and cinematographer and editor Kyle Perry.

We talk about the challenges of developing a biometric sensor-driven experience, the trends in wearables (including biometric sensors embedded into clothing and fabrics), how they use cinematic experiences as rewards, and some of the open questions around privacy and data ownership when it comes to biometric data.
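
To give a flavor of how a biometric sensor-driven experience like this can be wired up (a hedged sketch of a generic EEG pipeline, not Healium’s actual implementation; the state names and threshold are invented): estimate relative alpha-band power from a short window of headband EEG and use it to gate the cinematic reward.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg_window: np.ndarray, fs: float = 256.0) -> float:
    """Relative 8-12 Hz alpha power in one channel of a short EEG window."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    return float(alpha / max(total, 1e-12))

def experience_state(eeg_window: np.ndarray, calm_threshold: float = 0.35) -> str:
    """Map the calm estimate to a hypothetical experience state."""
    return "unlock_cinematic_reward" if alpha_power(eeg_window) > calm_threshold else "idle"
```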

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Joel Zylberberg has a background in physics and cosmology, but he got interested in theoretical neuroscience and has been researching the nature of perception in humans and in machines through the lens of computational neuroscience. He’s a Canada Research Chair and an assistant professor at York University in Toronto, as well as a Canadian Institute for Advanced Research (CIFAR) Associate Fellow in Learning in Machines and Brains.

Zylberberg was an attendee at the CIFAR workshop on the Future of Neuroscience and VR, and I had a chance to debrief with him after the two-day workshop. He was really impressed with how much virtual reality technology has progressed, and he shared that he’s a part of the EEG in the Wild research grant from CIFAR, where he’ll be collaborating with Craig Chapman and Alona Fyshe on their project to automatically label EEG data with motion & eye tracking data from VR.
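
The core of that auto-labeling idea can be sketched in a few lines (my own minimal illustration, assuming timestamped EEG samples and timestamped behavioral events from the VR tracking system; the event names are hypothetical): each EEG sample gets tagged with the most recent tracking event.

```python
import numpy as np

def label_eeg(eeg_times: np.ndarray, event_times: np.ndarray,
              event_labels: list[str]) -> list[str]:
    """Tag each EEG sample with the most recent VR motion/eye tracking event."""
    idx = np.searchsorted(event_times, eeg_times, side="right") - 1
    return [event_labels[i] if i >= 0 else "pre_task" for i in idx]

eeg_times = np.arange(0.0, 2.0, 1 / 256)   # 2 s of EEG timestamps at 256 Hz
event_times = np.array([0.5, 1.2])         # event onsets from the VR system
labels = label_eeg(eeg_times, event_times, ["reach_start", "fixation_on_target"])
```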

I had a chance to talk to Zylberberg about his work in computational neuroscience trying to understand the neural mechanics of perception, back-propagation, top-down feedback connections, & the credit assignment problem, deep learning models of perception, the feedforward architectures of convolutional neural networks versus recurrent neural networks, optogenetics, the ambitious goal of allowing people to regain sight by trying to find a way to write information directly into the brain, the challenges of the data processing inequality from information theory, and the importance for VR designers of understanding the fundamental mechanics of perception.
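
To make the credit assignment problem concrete (a textbook toy example, not Zylberberg’s model): back-propagation assigns credit by sending the output error backwards through the transpose of the same weights used on the forward pass, which is precisely the biologically questionable step that motivates the alternative learning rules discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))                                # toy input
y = np.ones((1, 1))                                        # toy target
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(1, 3))  # two-layer network

for _ in range(100):
    h = np.tanh(W1 @ x)                # forward pass through the hidden layer
    y_hat = W2 @ h
    err = y_hat - y                    # output error: which weights get the blame?
    dW2 = err @ h.T
    dh = W2.T @ err                    # error routed backwards through W2's transpose
    dW1 = (dh * (1 - h ** 2)) @ x.T    # chain rule through the tanh nonlinearity
    W1 -= 0.05 * dW1
    W2 -= 0.05 * dW2
```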

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

University of Cambridge’s Dr. David Menon is a clinical neurointensivist who specializes in traumatic brain injury. He’s an advisor for the Canadian Institute for Advanced Research Azrieli Brain, Mind & Consciousness program, and he was an attendee at CIFAR’s Future of Neuroscience and VR workshop in May. Menon is new to the realm of virtual reality, and he was impressed to learn how far immersive technologies have progressed over the last several years.

Menon sees a need for better cognitive and motor skill assessments after a traumatic brain injury, and virtual reality could be the perfect medium for creating a more engaging assessment tool. VR could also collect a lot more quantifiable data than other methods for assessing the extent of a traumatic brain injury, which usually require long periods of sustained concentration. I had a chance to catch up with Menon after the CIFAR workshop to talk about some of the open research problems he’s looking at related to the assessment of traumatic brain injuries, the limits of sensory addition and sensory substitution, the future of big data and tracking people over long periods of time as a form of assessment, and what the intersection of VR and neuroscience could learn from genomics.
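
As one hedged example of the kind of quantifiable data a VR assessment could log (the task, class, and field names here are all invented for illustration): recording per-trial reaction times in a sustained-attention task yields both a speed estimate and a lapse rate, rather than a single pass/fail score.

```python
import statistics

class AttentionTrialLog:
    """Per-trial log for a hypothetical sustained-attention VR assessment."""

    def __init__(self, lapse_cutoff_s: float = 1.0):
        self.lapse_cutoff_s = lapse_cutoff_s
        self.reaction_times: list[float] = []

    def record_trial(self, stimulus_t: float, response_t: float) -> None:
        """Store one trial's reaction time (timestamps in seconds)."""
        self.reaction_times.append(response_t - stimulus_t)

    def summary(self) -> dict:
        """Report speed and a lapse rate instead of a single pass/fail score."""
        rts = self.reaction_times
        return {
            "median_rt_s": statistics.median(rts),
            "lapse_rate": sum(rt > self.lapse_cutoff_s for rt in rts) / len(rts),
        }
```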

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Laurel Trainor is the director of the McMaster Institute for Music and the Mind, which has a LIVELab concert hall for 100 people that allows her to do a lot of studies on the relationship between musical performance and how it’s received by an audience. She’s been using a number of different immersive technologies, including motion tracking, to study how body sway is a form of bi-directional, non-verbal communication between musicians. She’s also been able to study synchrony, the impact of movement in the audience, and how audience members communicate with each other. She’s also able to do some pretty sophisticated spatialized audio within the LIVELab to recreate the sound of live performances, which allows her to research the role of live embodiment in listening to music.
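
One standard way this kind of body-sway coupling between musicians gets quantified is lagged cross-correlation; here’s a minimal sketch of that method (my assumption about the analysis, not a description of Trainor’s actual pipeline):

```python
import numpy as np

def sway_coupling(sway_a: np.ndarray, sway_b: np.ndarray, fs: float,
                  max_lag_s: float = 1.0) -> tuple[float, float]:
    """Peak normalized cross-correlation between two equal-length body-sway
    signals, and the lag (in seconds) where it peaks; a consistent nonzero
    lag suggests one musician is leading the other."""
    a = (sway_a - sway_a.mean()) / sway_a.std()
    b = (sway_b - sway_b.mean()) / sway_b.std()
    xcorr = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    keep = np.abs(lags) <= int(max_lag_s * fs)  # only consider plausible lags
    best = np.argmax(xcorr[keep])
    return float(xcorr[keep][best]), float(lags[keep][best] / fs)
```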

I had a chance to catch up with Trainor at the Canadian Institute for Advanced Research Future of Neuroscience & VR Workshop in New York City. We talked about the role of body sway and non-verbal communication in playing music, the importance of synchrony in group dynamics, and how deficits in perceiving time and rhythm could be a factor in a number of different major developmental disorders, including autism spectrum disorder, attention deficit disorder, dyslexia, and developmental coordination disorder.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

CTRL-labs is creating neural interfaces for robotics and immersive worlds that leverage the electromyography (EMG) signals that radiate from muscle contractions. This gives them the ability to isolate individual motor neurons, which is opening up a whole new world of user interactions for controlling robotics and for avatar embodiment within immersive environments, and it could prove to have many applications as an assistive technology. Being able to volitionally control a single motor neuron, combined with the plasticity of our motor system, means that there could be an incredible number of other applications for this technology within the context of spatial computing, especially when combined with other input methods. The biggest downfall of this type of EMG input is that it doesn’t naturally contain six degree-of-freedom information, which means that it would likely need to be used in conjunction with camera-based or other sensor-based tracking systems in immersive environments where position in space makes a significant difference.
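
For a sense of how raw EMG typically becomes a control signal (a generic textbook pipeline, not CTRL-labs’ method, which decodes individual motor neuron activity rather than bulk muscle activation; the gesture and threshold are hypothetical): rectify the signal, low-pass filter it into an activation envelope, and threshold it.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg: np.ndarray, fs: float = 1000.0,
                 cutoff_hz: float = 5.0) -> np.ndarray:
    """Rectify raw EMG and low-pass filter it into a muscle activation envelope."""
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

def pinch_detected(emg: np.ndarray, threshold: float = 0.2) -> bool:
    """Hypothetical gesture trigger based on the activation envelope."""
    return bool(emg_envelope(emg).max() > threshold)
```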

I had a chance to talk with neuroscientist Dan Wetmore from CTRL-labs at the Canadian Institute for Advanced Research Workshop on the Future of Neuroscience and VR about why he’s so excited about the potential of EMG as an input method, the different use cases that they’re seeing for CTRL-labs so far, and the embodied cognition implications of what it means to use the movement of your body as a mode of human-computer interaction.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Dr. Sook-Lei Liew is an assistant professor at the University of Southern California and the director of the Neural Plasticity & Neurorehabilitation Laboratory at USC. At IEEE VR 2017, she was showing off a DIY brain-computer interface called REINVENT, which is an acronym that stands for “Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training.” It uses the OpenBCI system with 16 channels in a 10–20 system EEG arrangement. The project was funded by a National Innovative Research Grant from the American Heart Association, and was created to provide a low-cost immersive technology solution that could be used for neurorehabilitation for stroke victims.

I had a chance to catch up with Liew at the IEEE VR 2017 conference, where we talked about the development of the REINVENT BCI, how they’re using IMUs to get tracking data, the principles of neurorehabilitation, the potential role of virtual embodiment in neurorehabilitation, and the various open questions around which factors determine whether or not someone will recover.
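
On the IMU side, a common trick for turning noisy inertial data into a stable limb angle for driving a virtual arm is a complementary filter; this is a generic sketch of that idea, not REINVENT’s published implementation:

```python
import numpy as np

def complementary_filter(gyro_rate: np.ndarray, accel_angle: np.ndarray,
                         dt: float, alpha: float = 0.98) -> np.ndarray:
    """Fuse gyroscope rate (rad/s) and accelerometer tilt (rad) into a
    smooth joint-angle estimate, e.g., to animate a virtual arm."""
    angle = np.zeros_like(accel_angle)
    angle[0] = accel_angle[0]
    for t in range(1, len(angle)):
        gyro_est = angle[t - 1] + gyro_rate[t] * dt                 # integrate gyro
        angle[t] = alpha * gyro_est + (1 - alpha) * accel_angle[t]  # correct drift
    return angle
```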

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Neurable is building a brain-computer interface that integrates directly into a virtual reality headset. Neurable uses dry EEG sensors from Wearable Sensing, which means that it doesn’t require gel in order to get a good signal from the EEG, making it a lot more user-friendly than BCIs that do require gel. I had a chance to try the demo at SIGGRAPH 2017, which was showing off what Neurable refers to as “Telekinetic Presence.” It is the closest thing I’ve ever experienced in VR to having the technology read my mind: it ran a calibration phase to be able to detect the brainwaves associated with intentional action, and once it’s trained, it’s a matter of looking at specific objects in a virtual environment, and then experiencing a state of pure magic when it feels like you can start to move objects around with your mind alone.
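
Neurable hasn’t published the details of their classifier, but selection-by-attention BCIs are commonly built on event-related potentials: flash each candidate object, epoch the EEG around each flash, and pick the object whose flashes evoke the strongest average response. Here’s a hedged toy sketch of that general pattern (the object names, data, and scoring are all invented):

```python
import numpy as np

def erp_scores(eeg: np.ndarray, flash_onsets: dict[str, list[int]],
               fs: float = 256.0, window_s: float = 0.6) -> dict[str, float]:
    """Average post-flash EEG amplitude per object: the object the user is
    attending to should evoke the largest response across its flashes."""
    win = int(window_s * fs)
    scores = {}
    for obj, onsets in flash_onsets.items():
        epochs = [eeg[s:s + win] for s in onsets if s + win <= len(eeg)]
        scores[obj] = float(np.mean([e.max() for e in epochs]))
    return scores

rng = np.random.default_rng(1)
eeg = rng.normal(size=2560)                             # 10 s of toy EEG at 256 Hz
flashes = {"cube": [256, 1280], "sphere": [640, 1792]}  # hypothetical flash onsets
scores = erp_scores(eeg, flashes)
selected = max(scores, key=scores.get)                  # the object you "move" with your mind
```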

Neurable CEO Dr. Ramses Alcaide suspects this type of magical BCI mind-control mechanic is going to be a huge affordance for what makes spatial computing unique. He said that the graphical user interface plus the mouse unlocked the potential of the personal computer, and that capacitive touch screens unlocked the potential of mobile phones. He’s hoping that Neurable’s BCI can help to unlock the potential of 3DUI interactions within virtual and augmented reality. I had a chance to catch up with Alcaide at SIGGRAPH 2017, where we talked about the design decisions and tradeoffs behind their BCI system, their ambitions for building the telekinetic presence of the future, and their work on an operating system for a spatial computing environment that aims to create a world without limitations.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

UCSF professor Adam Gazzaley is a pioneer in the realm of experiential medicine, as he’s getting a video game approved by the FDA for ADHD treatment. He’s a co-founder of Akili Interactive and the chief scientist at JAZZ Venture Partners, where he’s on the bleeding edge of the latest immersive and interactive technologies, integrating biometric feedback, and helping to create a whole new class of digital drugs through video games. I had a chance to catch up with Gazzaley briefly at the XTech Conference in 2017, where we talk about his book The Distracted Mind: Ancient Brains in a High-Tech World, the Glass Brain project that lets you see inside of your brain in real time through an immersive EEG visualization, the foundational principles of neuroplasticity, and how far we might be able to push the potential of neuroplasticity with immersive technologies.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here are some photos of the SynSync Sensory Immersion Vessel that Gazzaley announced at the Awakened Futures Summit on May 18, 2019.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

MindMaze is creating a brain-computer interface that’s integrated with VR head-mounted displays, and they’re also creating immersive technologies for neurorehabilitation. I had a chance to talk with MindMaze founder and CEO Tej Tadi at the XTech conference in 2017 about the gamification of mundane rehab tasks, how VR is accelerating neurorehabilitation with closed-loop systems, the future of VR as a diagnostic tool, exploring other ways to stimulate the brain through Transcranial Direct-Current Stimulation (tDCS), Transcranial Electrical Stimulation (tES), & Transcranial Magnetic Stimulation (TMS), detecting cognitive and motor deficits, and the future of cognitive enhancement with biofeedback and immersive technologies.
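
“Closed-loop” here means that performance continuously reshapes the task. One common pattern for that in rehab games is an adaptive staircase that keeps the patient near their ability threshold; this is my own illustrative sketch, not MindMaze’s algorithm:

```python
def adapt_difficulty(difficulty: float, success: bool,
                     step_up: float = 0.05, step_down: float = 0.15) -> float:
    """1-up/1-down staircase: raise task difficulty after a success, lower it
    more steeply after a failure, keeping the patient near their threshold."""
    d = difficulty + step_up if success else difficulty - step_down
    return min(max(d, 0.0), 1.0)  # clamp to a normalized 0-1 difficulty range
```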

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality