OpenBCI is an open-source, brain-computer interface that gathers EEG data. It was designed for makers and DIY neuroengineers, and it has the potential to democratize neuroscience with a $100 price point. At the moment, neither OpenBCI nor any other commercial off-the-shelf neural device is compatible with the major virtual reality HMDs, but there are companies like MindMaze that have fully integrated neural inputs into their own VR headset. I had the chance to talk with OpenBCI founder Conor Russomanno about the future of VR and neural input on the day before the Experiential Technology and Neurogaming Expo, also known as XTech. Once neural inputs are integrated into VR headsets, VR experiences will be able to detect and react whenever something catches your attention, gauge your level of alertness and your degree of cognitive load or frustration, and even differentiate between different emotional states.
LISTEN TO THE VOICES OF VR PODCAST
“Neurogaming” is undergoing a bit of a rebranding effort toward “Experiential Technology” to take some of the emphasis off of real-time interaction with brain waves. Right now the latency of EEG data is too high, and the signal is not consistent enough to be reliable. One indication of this was that all of the experiential technology applications that I saw at XTech that integrated neural inputs were either medical or educational applications.
Conor says that there are electromyography (EMG) signals that are more reliable and consistent, including micro expressions of the face, jaw grits, moving your tongue, and eye clenches. He expects developers to start to use some of these cues to drive drones or build medical applications for quadriplegics or people who have limited mobility from ALS.
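The reason EMG is so much more reliable than EEG for real-time control is that muscle activity produces a large, obvious jump in signal amplitude. A minimal sketch of the idea, using synthetic data and illustrative thresholds (no real device is read here):

```python
# Hypothetical sketch of using EMG as a reliable binary input, per the
# jaw-clench example above: muscle activity raises the short-window RMS
# amplitude far above resting levels, so a simple calibrated threshold
# works as a switch. All signals and numbers here are illustrative.
import numpy as np

FS = 250  # samples per second (illustrative sample rate)
rng = np.random.default_rng(2)

def rms(window):
    """Root-mean-square amplitude of one window of samples."""
    return np.sqrt(np.mean(window ** 2))

rest = rng.normal(0, 1.0, FS)    # baseline muscle tone over one second
clench = rng.normal(0, 8.0, FS)  # jaw clench: much larger EMG amplitude

# Threshold chosen between the two RMS levels during a calibration step.
THRESHOLD = 4.0

print("rest triggers input:  ", rms(rest) > THRESHOLD)    # False
print("clench triggers input:", rms(clench) > THRESHOLD)  # True
```

A binary switch like this is exactly the kind of input a locked-in user could drive, which is why Russomanno calls EMG the practical near-term path.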
At the same time, there were a number of software-as-a-service companies at XTech who were taking EEG data and applying their own algorithms to extrapolate emotions and other higher-level insights. A lot of these algorithms use AI techniques like machine learning to capture baseline signals of someone’s unique neural fingerprint and then train the AI to make sense of the data. AI that interprets and extrapolates meaning out of a vast sea of data from dozens of biometric sensors is going to be a big part of the business models for Experiential Technology.
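The baseline-calibration idea can be sketched in a few lines: record a user's resting EEG, summarize it with band-power features, and then score new data relative to that personal baseline. This is a toy sketch with synthetic signals; real services would use proper devices, preprocessing, and trained classifiers:

```python
# Hypothetical sketch: per-user baseline calibration for EEG band-power
# features, as described above. All signals are synthetic; the feature
# choice (alpha/beta band power) is a common but illustrative one.
import numpy as np

FS = 250  # Hz; OpenBCI boards commonly sample EEG at 250 Hz

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def features(signal, fs=FS):
    """Alpha (8-12 Hz) and beta (13-30 Hz) band power."""
    return np.array([band_power(signal, fs, 8, 12),
                     band_power(signal, fs, 13, 30)])

rng = np.random.default_rng(0)
t = np.arange(FS * 2) / FS  # two-second analysis window

def synth(alpha_amp, beta_amp):
    """Synthetic EEG-like trace: alpha + beta oscillations plus noise."""
    return (alpha_amp * np.sin(2 * np.pi * 10 * t)
            + beta_amp * np.sin(2 * np.pi * 20 * t)
            + rng.normal(0, 0.5, t.size))

# 1. Calibration: record the user's resting baseline "fingerprint".
baseline = np.array([features(synth(2.0, 0.5)) for _ in range(20)])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

# 2. At runtime, z-score new windows against that personal baseline.
focused = (features(synth(0.5, 2.0)) - mu) / sigma  # beta up, alpha down
relaxed = (features(synth(2.0, 0.5)) - mu) / sigma  # near baseline

print("focused z-scores:", focused)  # large deviation from baseline
print("relaxed z-scores:", relaxed)  # close to zero
```

The per-user baseline is the "unique fingerprint" step; everything after it is where each company's proprietary interpretation layer would sit.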
Once this biometric data becomes available to VR developers, we’ll be able to go into a VR experience, see visualizations of which contextual inputs were affecting our brain activity, and start to make decisions to optimize our lifestyle.
I could also imagine some pretty amazing social applications of these neural inputs. Imagine being able to see a visualization of someone’s physical state as you interact with them. This could have huge implications within the medical context, where mental health counselors could get additional insight into the physiological context correlated with the content of a counseling session. Or I could see experiments in social interactions among people who trusted each other enough to be that intimate with their innermost unconscious reactions. And I could also see how immersive theater actors could have very intimate interactions, or how entertainers could read the mood of the crowd as they’re giving a performance.
Finally, there are a lot of deep and important questions about how to protect users from losing control of how their data is used and how it’s kept private, since it may prove impossible to completely anonymize it. VR enthusiasts will have to wait on better hardware integrations, but the sky is the limit for what’s possible once all of these inputs are integrated and made available to VR developers.
Here’s a partial transcript of what specifically Russomanno said about the limits of using EEG for real-time interaction:
Russomanno: I think it’s really important to be practical and realistic about the data that you can get from a low-cost, dry, portable EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. In many cases, it’s just not a practical use of the technology. I’m not saying that it’s not cool, but it’s important to understand that this technology is very valuable for the future of humanity, and we need to distinguish between the things that are practical and the things that are just blowing smoke and getting people excited about the products.
With EEG, there’s tons of valuable data, which is your brain over time in the context of your environment. It’s not about looking at EEG or brain-computer interfaces for real-time interaction, but rather about looking at this data and contextualizing it with other biometric information like eye tracking, heart rate, heart rate variability, and respiration, and then integrating that with the way that we interact with technology: where you’re clicking on a screen, what you’re looking at, what application you’re using.
All of this combined creates a really rich data set of your brain and what you’re interacting with. I think that’s where EEG and BCI is really going to go, at least for non-invasive BCI.
That said, when it comes to muscle data and micro expressions of the face and jaw grits and eye clenches, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems: people with ALS, quadriplegics.
It doesn’t make sense to jump past all of this muscle data directly to brain data when we have this rich data set that’s really easy to control for real-time interaction. I recently have been really preaching that BCI is great and super exciting, but let’s use it for the right things. For the other things, let’s use data sets that exist already, like EMG data.
Voices of VR: What are some of the right things to use BCI data then?
Russomanno: As I was alluding to, I think looking at attention, looking at what your brain is interested in as you’re doing different things. Right now, there are a lot of medical applications, such as neurofeedback training for ADHD, depression, and anxiety, and then also new types of interactivity; for example, someone who’s locked in could practically use a few binary inputs from a BCI controller. In many ways, I like to think the neuro revolution goes way beyond BCI. EMG, muscle control, and all of these other data sets should be included in this revolution as well, because we’re not even coming close to making full use of these technologies currently.
Voices of VR: So what can you extrapolate from EEG data in terms of emotional intent or activity in different parts of the brain? What can you tell from the EEG data?
Russomanno: I think the jury is still out on how far we can go with non-invasive EEG, but right now we can detect attention and alertness, such as whether something catches your attention. If you’re in a mind-wandering state and you’re searching for the next thing to be interested in, and something catches your eye, there’s an event-related potential associated with that. That’s really interesting data: presenting a user or a player with little flags or trigger moments and seeing which stimuli actually elicit interesting responses. As for emotional states, we’re getting to the point now where we can distinguish between different ones, specifically anxiety, fear, happiness; some very general brain states. That’s kind of where we’re at right now, but I think we’re going to learn a lot more in the next few years.
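The event-related potential Russomanno mentions is a standard EEG technique: the response to a stimulus is buried in noise on any single trial, but averaging many stimulus-locked epochs makes it visible. A minimal sketch with synthetic data (all amplitudes and latencies here are illustrative, loosely modeled on a P300-style response):

```python
# Hypothetical sketch of event-related-potential (ERP) detection via
# stimulus-locked averaging. Each epoch holds a small bump ~300 ms after
# the stimulus, swamped by background noise; averaging recovers it.
import numpy as np

FS = 250                  # samples per second
EPOCH = int(0.8 * FS)     # 800 ms window after each stimulus
rng = np.random.default_rng(1)

def trial():
    """One stimulus-locked epoch: a P300-like bump at ~300 ms in noise."""
    t = np.arange(EPOCH) / FS
    erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump
    return erp + rng.normal(0, 4.0, EPOCH)                   # heavy noise

epochs = np.array([trial() for _ in range(100)])

# Averaging 100 epochs shrinks the noise by ~10x, so the ERP emerges.
average = epochs.mean(axis=0)
peak_ms = 1000 * np.argmax(average) / FS

print(f"averaged ERP peaks at ~{peak_ms:.0f} ms after the stimulus")
```

The need to collect and average many trials is one concrete reason EEG-based interaction has the latency and consistency limits discussed earlier in the article.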
Donate to the Voices of VR Podcast Patreon