#365: Democratizing Neuroscience with OpenBCI & Adapting VR Content with Biofeedback

OpenBCI is an open-source brain-computer interface that gathers EEG data. It was designed for makers and DIY neuroengineers, and with a $100 price point it has the potential to democratize neuroscience. At the moment, neither OpenBCI nor other commercial off-the-shelf neural devices are compatible with any of the major virtual reality HMDs, though companies like MindMaze have built VR headsets with fully integrated neural inputs. I had the chance to talk with OpenBCI founder Conor Russomanno about the future of VR and neural input on the day before the Experiential Technology and Neurogaming Expo, also known as XTech. Once neural inputs are integrated into VR headsets, VR experiences will be able to detect whenever something catches your attention, your level of alertness, and your degree of cognitive load and frustration, as well as differentiate between different emotional states.


"Neurogaming" is undergoing a bit of a rebranding toward "Experiential Technology," in part to take the emphasis off real-time interaction with brain waves. Right now, EEG latency is too high and the signal is not consistent enough to be reliable for real-time control. One indication of this was that all of the experiential technology applications I saw at XTech that integrated neural inputs were either medical or educational.

Conor says that there are electromyography (EMG) signals that are more reliable and consistent, including micro-expressions of the face, jaw grits, tongue movements, and eye clenches. He expects developers to start using some of these cues to drive drones or to build medical applications for quadriplegics and people who have limited mobility from ALS.
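To make the EMG-versus-EEG distinction concrete, here is a minimal sketch of how a jaw-clench trigger might be detected from a single head electrode; this is illustrative only, not OpenBCI's actual tooling, and the sampling rate, frequency band, window size, and threshold are all assumed values. Muscle activity shows up on head electrodes as bursts of high-frequency power, so a band-pass filter plus a windowed RMS threshold is often enough for a reliable binary input.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # Hz; assumed sampling rate (OpenBCI's Cyton board streams at 250 Hz)

def jaw_clench_events(channel_uv, fs=FS, band=(50.0, 100.0),
                      win_s=0.2, threshold_uv=20.0):
    """Return start samples of windows whose high-frequency RMS exceeds
    a threshold -- a crude binary 'clench' trigger.

    Jaw-muscle EMG dominates head electrodes at high frequencies, so
    band-limited RMS separates clenches from ordinary brain rhythms.
    The band and threshold here are illustrative assumptions.
    """
    # Band-pass to isolate muscle activity from slower EEG rhythms.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    emg = sosfiltfilt(sos, channel_uv)

    # RMS over non-overlapping windows (kept simple for clarity).
    win = int(win_s * fs)
    n_windows = len(emg) // win
    rms = np.sqrt(np.mean(
        emg[: n_windows * win].reshape(n_windows, win) ** 2, axis=1))

    return np.flatnonzero(rms > threshold_uv) * win

# Synthetic demo: 4 s of ~5 uV background noise with a simulated
# half-second clench burst starting at the 2 s mark.
rng = np.random.default_rng(0)
signal = rng.normal(0, 5, 4 * FS)
signal[2 * FS : 2 * FS + FS // 2] += rng.normal(0, 60, FS // 2)
print(jaw_clench_events(signal))  # windows starting around sample 500
```

The same thresholding idea extends to eye clenches or tongue movements by choosing electrodes near the relevant muscles, which is why EMG is so much easier to harness for real-time control than actual brain signals.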

There are a lot of privacy implications once you start to gather this EEG data, and Conor is particularly sensitive to them. He says that recent research indicates that EEG signals are unique to each person and represent a digital signature that could trace anonymously submitted data back to you. He says that companies of the future will need to adopt strict privacy policies and not use this data to exploit their users.

At the same time, there were a number of software-as-a-service companies at XTech taking EEG data and applying their own algorithms to extrapolate emotions and other higher-level insights. Many of these algorithms use AI techniques like machine learning to capture a baseline signal of someone's unique fingerprint and then train a model to make sense of the data. AI that interprets and extrapolates meaning out of a vast sea of data from dozens of biometric sensors is going to be a big part of the business models for Experiential Technology.
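None of these companies publish their algorithms, but a typical pipeline of this kind reduces each EEG epoch to band-power features, records a per-user baseline to account for that unique fingerprint, and then trains a standard classifier on the normalized features. Here is a minimal sketch under those assumptions; the sampling rate, channel count, band definitions, and labels are all hypothetical, and the random data stands in for a real calibration session.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=FS):
    """Mean power per canonical EEG band for one (channels x samples) epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)  # psd: channels x freqs
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)  # n_channels * n_bands features

# Stand-in for labeled calibration data; a real system would elicit the
# target states (e.g. 0 = calm, 1 = anxious) in a controlled session.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(60, 8, 2 * FS))  # 60 epochs, 8 channels, 2 s each
labels = rng.integers(0, 2, 60)

X = np.array([band_powers(e) for e in epochs])
baseline = X.mean(axis=0)      # per-user baseline "fingerprint"
X_norm = X / baseline          # normalize away individual offsets

clf = LogisticRegression(max_iter=1000).fit(X_norm, labels)
# With random labels this only demonstrates the mechanics, not real accuracy.
print("training accuracy:", clf.score(X_norm, labels))
```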

Once this biometric data becomes available to VR developers, we'll be able to go into a VR experience, see visualizations of which contextual inputs were affecting our brain activity, and start making decisions to optimize our lifestyle.

I could also imagine some pretty amazing social applications of these neural inputs. Imagine being able to see a visualization of someone's physiological state as you interact with them. This could have huge implications in a medical context, where mental health counselors could get additional insight into the physiological state correlated with the content of a counseling session. I could also see experiments in social interaction among people who trusted each other enough to be that intimate with their innermost unconscious reactions. And I could see immersive theater actors having very intimate interactions, or entertainers reading the mood of the crowd as they give a performance.

Finally, there are a lot of deep and important questions about protecting users from losing control of how their data is used and how it's kept private, since it may prove impossible to completely anonymize it. VR enthusiasts will have to wait for better hardware integrations, but the sky is the limit for what's possible once all of these inputs are integrated and made available to VR developers.

Here's a partial transcript of what Russomanno said specifically about the limits of using EEG for real-time interaction:

Russomanno: I think it's really important to be practical and realistic about the data that you can get from a low-cost, dry, portable EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. In many cases, it's just not a practical use of the technology. I'm not saying that it's not cool. This technology is very valuable for the future of humanity, but we need to distinguish between the things that are practical and the things that are just blowing smoke and getting people excited about the products.

With EEG, there's tons of valuable data, which is your brain over time in the context of your environment. It's not about looking at EEG or brain-computer interfaces for real-time interaction, but rather looking at this data and contextualizing it with other biometric information like eye tracking, heart rate, heart rate variability, and respiration, and then integrating that with the way that we interact with technology: where you're clicking on a screen, what you're looking at, what application you're using.

All of this combined creates a really rich data set of your brain and what you’re interacting with. I think that’s where EEG and BCI is really going to go, at least for non-invasive BCI.

That said, when it comes to muscle data and micro-expressions of the face and jaw grits and eye clenches, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems: people with ALS, quadriplegics.

It doesn’t make sense to jump past all of this muscle data directly to brain data when we have this rich data set that’s really easy to control for real-time interaction. I recently have been really preaching like BCI is great, it’s super exciting, but let’s use it for the right things. For the other things, let’s use these data sets that exist already like EMG data.

Voices of VR: What are some of the right things to use BCI data then?

Russomanno: As I was alluding to, I think looking at attention, looking at what your brain is interested in as you're doing different things. Right now, there are a lot of medical applications like neurofeedback training for ADHD, depression, and anxiety, and then also new types of interactivity, such as someone who's locked in practically using a few binary inputs from a BCI controller. In many ways, I like to think the neuro revolution goes way beyond BCI. EMG, muscle control, and all of these other data sets should be included in this revolution as well, because we're not even coming close to making full use of these technologies currently.

Voices of VR: So what can you extrapolate from EEG data in terms of emotional intent or activity in different parts of the brain? What can you tell from the EEG data?

Russomanno: I think the jury is still out on this one in terms of how far we can go with non-invasive EEG, but right now we can find attention and alertness, whether something catches your attention. If you're in a mind-wandering state, searching for the next thing to be interested in, and something catches your eye, there's an event-related potential associated with that. That's really interesting data: presenting a user or a player with little flags or trigger moments and seeing which stimuli are actually eliciting interesting responses. As for emotional states, we're getting to the point now where we can distinguish between different emotional states, specifically anxiety, fear, and happiness, some very general brain states. That's kind of where we're at right now, but I think we're going to learn a lot more in the next few years.
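The event-related potential Russomanno mentions is typically recovered by time-locking short EEG epochs to each stimulus and averaging them, so that background activity not locked to the stimulus cancels out. Here is a minimal sketch with synthetic data; the sampling rate, epoch window, and the P300-style latency and amplitude are illustrative assumptions.

```python
import numpy as np

FS = 250              # assumed sampling rate, Hz
PRE, POST = 0.2, 0.8  # epoch window around each stimulus, seconds

def erp_average(eeg, stim_samples, fs=FS, pre=PRE, post=POST):
    """Average stimulus-locked epochs from a single EEG channel.

    Averaging cancels activity that is not time-locked to the
    stimulus, leaving the event-related potential (ERP).
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[s - n_pre : s + n_post] for s in stim_samples
              if s - n_pre >= 0 and s + n_post <= len(eeg)]
    return np.mean(epochs, axis=0)

# Synthetic demo: a noisy channel plus a small positive deflection
# ~300 ms after each of 40 stimuli (a P300-like response).
rng = np.random.default_rng(2)
eeg = rng.normal(0, 10, 120 * FS)  # 2 minutes of ~10 uV noise
stims = rng.choice(np.arange(FS, 119 * FS), size=40, replace=False)
bump = 8.0 * np.sin(np.pi * np.arange(int(0.1 * FS)) / int(0.1 * FS))
for s in stims:
    start = s + int(0.25 * FS)
    eeg[start : start + len(bump)] += bump  # 8 uV bump buried in noise

erp = erp_average(eeg, stims)
peak_ms = (np.argmax(erp) / FS - PRE) * 1000
print(f"ERP peak ~{peak_ms:.0f} ms after stimulus")
```

A single trial is invisible in the noise here, which is exactly why single-trial, real-time EEG interaction is so hard and why averaged, over-time analysis is where consumer EEG shines.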

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. Today I talk to Conor Russomanno of the OpenBCI project. That's the Brain Control Interface. It's sort of like putting a DIY EEG onto your head to be able to measure your brain waves in order to extrapolate all sorts of different information about what's happening in your brain. So I'll be talking to Conor about his open source initiatives to democratize neuroscience and how he sees it fitting into the future of virtual reality. But first, a quick word from our sponsor. Today's episode is brought to you by the Virtual World Society. The Virtual World Society wants to use VR to change the world. So they are interested in bridging the gap between communities in need and researchers, creative communities, and the community of providers who could help deliver these VR experiences to the communities. If you're interested in getting more involved in virtual reality and want to help make a difference in the world, then sign up at virtualworldsociety.org and start to get more involved. Check out the Virtual World Society booth at the Augmented World Expo, June 1st and 2nd. This interview took place at the Rothenberg Founders Day at AT&T Park, where there were all sorts of different companies showing off products that were funded by Rothenberg's River accelerator, and Conor just happened to be there wearing a really crazy OpenBCI sensor on his head, and so I had to go up there and talk to him about his project. So with that, let's go ahead and dive right in.

[00:01:44.401] Conor Russomanno: My name is Conor Russomanno. I'm co-founder and CEO of OpenBCI. And what we do is we design open source hardware and software for interfacing the human brain and body.

[00:01:56.135] Kent Bye: Great. So this is something that I think there would eventually be a big application for virtual reality because you could start to be in VR and have your input control be actually your thoughts or your brainwaves. So maybe you could talk a bit about your vision of the OpenBCI, which is an open brain control interface of where it is now and where you see this all going.

[00:02:16.985] Conor Russomanno: Sure, so I mean I think the BCI industry is really just getting started. The applications are only just being conceived of, and really the implications are limitless in terms of what we can do with all of this data that's coming out of our heads and our bodies. Not just our brains, but all of the muscles and our heart, and you know, it's data that we couldn't retrieve easily up until recently, and now it's portable, we can take it anywhere with us. My vision for the BCI industry, at least at this point, is getting the technology as cost-effective and approachable as possible and encouraging young people to get involved in the game and learn at as early an age as possible. But since this podcast is about VR, I think we should talk about BCI plus VR. One thing that I'm really excited about, you know, there's a lot of hype around VR right now, and what comes along with that is putting stuff on your head. So I think in many ways the BCI industry is going to be able to kind of ride the coattails of VR as it becomes more socially acceptable to be sticking hardware and devices on your head. Why not stick a few sensors on it and be recording brain activity at the same time? So that's one great aspect. You know, VR and AR are kind of paving the path for brain-computer interfacing, and it's just going to be like, oh sure, might as well stick some sensors on there.

[00:03:33.016] Kent Bye: So maybe you could tell me a few stories or anecdotes that really describe the capabilities of the data that you're getting from the OpenBCI and then what you're able to then do with that.

[00:03:42.239] Conor Russomanno: Yep, I mean I think it's really important to be practical and realistic about the data that you can get from a low-cost, dry, portable EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. But in many cases, it's just not a practical use of the technology. I'm not saying that it's not cool, but it's important to understand that this technology is very valuable for the future of humanity. But we need to kind of distinguish between the things that are practical and the things that are just kind of blowing smoke and getting people excited about the products. With EEG, there's tons of valuable data that really is your brain over time in the context of your environment. So, you know, looking at not EEG or brain-computer interfaces for real-time interaction, but rather looking at this data and contextualizing it with other biometric information like eye tracking, heart rate, heart rate variability, respiration, and then integrating that with the way that we interact with technology. So where you're clicking on a screen, what you're looking at, what applications you're using. All of this combined creates a really rich data set of your brain and what you're interacting with. And I think that that's where EEG and BCI is really going to go, at least for non-invasive BCI. That said, when it comes to muscle data and microexpressions of the face and jaw grits and eye clenches, you know, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems. So people with ALS, quadriplegics. It doesn't make sense to jump past all of this muscle data directly to brain data when we have this rich data set that's really easy to control for real-time interaction. So, you know, I recently have been really preaching like, BCI is great, it's super exciting, but let's use it for the right things. And then for the other things, let's use these data sets that exist already, like EMG data.

[00:05:36.141] Kent Bye: So what are some of the right things to use BCI data then?

[00:05:39.816] Conor Russomanno: So as I was kind of alluding to, I think like looking at attention, looking at what your brain is interested in as you're doing different things. So, you know, right now there are a lot of medical applications like ADHD training, neurofeedback training for ADHD, depression, anxiety, and then also new types of interactivity, such as like someone who's locked in could practically use a few binary inputs from a BCI controller. But, you know, in many ways I like to think the neuro revolution goes way beyond BCI. Like EMG, muscle control, and all of these other data sets should be included in this revolution as well, because like we're not even coming close to making full use of these technologies currently. Other applications, though: you know, the gaming industry I think is going to be hugely affected by EEG and EMG, for both new forms of interactivity but also experience design. Imagine like a much more nuanced choose-your-own-adventure where the game is adapting to your response to it. So characters that you agree with more subconsciously over the course of the game are actually becoming your companions in the game, or plot twists that are affected by the way that you're responding to early plot points. So, you know, like new types of interactive fiction are going to be embedded in gaming, purely based on your body's reaction to the experience.

[00:06:56.158] Kent Bye: And so what can you extrapolate from EEG data in terms of like emotional intent or activities in different parts of the brain or what can you tell from the EEG data?

[00:07:05.639] Conor Russomanno: Yeah, I think the jury is still out on this one in terms of how far we can go with non-invasive EEG, but right now we can find attention, alertness. If something catches your attention, you know, if you're in a mind-wandering state and you're kind of searching for the next thing to be interested in, if something catches your eye, there's an event-related potential that's associated with that. That's really interesting data. It's just kind of like presenting a user or a player with little flags or trigger moments and seeing what stimuli are actually eliciting interesting responses. Emotional states, we're getting to the point now where we can distinguish between different emotional states, specifically anxiety, fear, happiness, you know, some very general brain states. Yeah, that's kind of where we're at right now, but I think we're going to learn a lot more in the next few years.

[00:07:51.742] Kent Bye: And so there seems to be a trade-off between the number of sensors that you have and the fidelity that you have, as well as with the ease of use that you have with sticking something on your head that may or may not need some sort of contact gel to be able to get a really good connection. So maybe you could talk about those trade-offs that you have to make in terms of ease of use versus the fidelity of the EEG.

[00:08:11.403] Conor Russomanno: Sure, I think this will always be a trade-off, and this is one thing that the consumer developers of BCI are having trouble with, because if you're going to make a sleek headset and factor product design and wearability into your product, you're making sacrifices. If you're focusing on the prefrontal cortex over the forehead, you're passing up on a lot of really interesting data from your motor cortex on the top of your head or your visual cortex on the back of your head, just so that the product is comfortable and wearable. I think what we're going to see is people designing BCI hardware for specific use cases. It's going to be very difficult to create a product that's kind of a catch-all product, unless it looks like this ridiculous thing I'm wearing on my head. What I think we'll see is the BCI tree begin to branch off in different directions, and different wearable products focusing on different directions of this whole industry.

[00:09:01.347] Kent Bye: So it seems like OpenBCI is something that could potentially democratize neuroscience in a certain way. People being able to have access to the hardware, but also to be able to do their own research into neuroscience. So maybe you could talk a bit about what people are already starting to do with OpenBCI.

[00:09:15.894] Conor Russomanno: Sure, yeah. I think you said it about as well as you can say it. What we're trying to do is democratize neuroscience and also neurotechnology. I think neurotechnology is a kind of new phrase. Neuroscience has been around for a long time, but now we have neuroengineers. That's what they call themselves; I'm a neuroengineer. I'm not a neuroscientist, I build tools for interfacing the body and the brain. But really what we're trying to do is not confine people to using BCI in a very specific way, but rather open up the possibilities and say, like, you know, here is a very crude and powerful foundation for you to start laying new building blocks on top of. Take what we've done and move it in a direction that is an applied direction. And so, you know, like our community so far has built games. There's been motor imagery research, so classifying motor cortex signals to help people with ALS or locked-in syndrome to communicate. EMG data, you know, like we are OpenBCI, but we are really a multi-purpose biosensor, so looking at prosthetics, tapping into residual limbs and relaying that data into a robotic hand that is serving as an animated prosthetic. Looking at neurofeedback or clinical applications, so taking the OpenBCI device and using it for clinical neurofeedback practices. So taking a group of people who have attention disorders and trying to give them new tools to first identify that they're in a low-attention state, and then helping them discover inside of their own head what tools they actually already have access to to navigate their mind out of these states. That's neurofeedback in a nutshell: essentially teaching someone that they actually have the power themselves to consciously shift their attention and shift their mind in a new direction. Realizing that you actually do have a certain control over your mind, and your environment doesn't have ultimate control over you. So, you know, these are all directions that researchers are exploring with our technology, and that's just a short list. You know, we can go on and on and on, like, you know, new types of new media installations, or musical applications where we're extrapolating brain activity into soundscapes or using music theory to map it into a key and then playing digital music based on brain activity.

[00:11:26.499] Kent Bye: So do you see the BCI movement moving into using your brain as a primary control interface, or do you feel like it's moving away from that?

[00:11:34.627] Conor Russomanno: I think that it should be moving away from that, at least in the way that the media portrays it. I think that, like I was saying earlier, BCI is very valuable for many things. In most cases, this is not practical as a real-time controller. That said, tons of valuable data for helping us understand our mind in our environment, what things we're doing over the course of the day are influencing or affecting our brain activity, how do we optimize our lifestyle to have the mind or the consciousness that we want to have. That said, you know, like EMG is this other tool that's just totally untapped as an interactive means. As you move your tongue or you make small squints of your eyes, there are these super ripe signals that can be latched onto and used as inputs for a machine, an RC helicopter, a drone. And they're just as subtle as you doing it with your brain. And we already have control over them. And so, like, for interactivity, I just, like, pretty much preach to everyone, like, just try to figure out how to use EMG. Use muscle data that's existing, and even ALS patients still have this. BCI, let's figure out how to harness it for improved learning, better sense of mindfulness, and really that relationship between inner self and external world. I think that is the future of BCI, is like just building a better understanding of your own mind in the context of your environment.

[00:12:53.487] Kent Bye: Well, my impression of the OpenBCI kit at this point is that it's been a lot of DIY hackers who are really ready to get their hands dirty with a lot of open source hacking to get these things working, and that at this point they're really in control of their own data. But I can imagine a future where BCI interfaces become a lot more prevalent, and maybe we are having to make the choice as to whether or not we're disclosing some of our biometric data, whether it's our heart rate variability, or our brain waves, or our emotional states over time, to a company that is doing all sorts of data aggregation on us. And so, what are some of the implications in terms of ethics and the decisions that people are making in terms of maintaining control over the data, but also perhaps yielding that control over to get certain benefits from a company who is taking that in and being able to extrapolate all sorts of other insights from it?

[00:13:43.757] Conor Russomanno: This is a great question, and I'm really glad you asked it. This is something I care a lot about and think a lot about. I think that it is inevitable that BCI is going to be heavily integrated into the future of human evolution. We have perpetually used technology to extend our consciousness. This is the most literal form of consciousness extension that we have stumbled upon. And now we're at this really critical point in human evolution where there is extremely valuable and private data that can be literally extracted from your mind using a brain-computer interface. Now what we need to do as developers, designers, entrepreneurs is figure out how we can ensure that this data is being harnessed, you know, monetized, in ways that are not just taking advantage of the user and exploiting individuals. So with everything we've done with OpenBCI, we try and keep the technology as transparent as possible. It becomes a little bit trickier when you're talking about data, because data transparency often is not a good thing in terms of protecting the user, because there's all of this recent research showing that each person's brain activity is almost like a fingerprint. The brain data that you produce is unique to you. So that means if you're trying to anonymously submit your data to an open repository, it could ultimately be traced back to you as an individual. So this poses some interesting questions: in an ideal world, I think we should be submitting data to an open database and have it be anonymous, but those are seemingly mutually exclusive scenarios. You can't have one with the other. That said, I think it's really important that companies do not exploit the individual and their data purely for monetary gains. So I think there is a company that could emerge that could make this the number one design constraint of their business: the user has final say and ultimate control over the data they produce. And we can still find a way to help them monetize this data while we monetize it too, and let the user figure out where this data goes, who they want to give it to, whether they want to donate it to an ALS study or sell it to Facebook or Twitter. These are things that we should start thinking about, and we should really be very mindful of the fact that from this data you can learn a lot about what someone's thinking, what they're interested in, what they're looking at, what buttons they're clicking on, and all of this stuff. And if we don't start protecting it now and thinking about how to design for this, it's going to be not good.

[00:16:04.697] Kent Bye: Can you tell me the story of how you got into this? What was the itch that you were really trying to scratch?

[00:16:10.820] Conor Russomanno: Yeah, actually, it has evolved over time in terms of why I've continued to stay in the BCI space. Initially, I just kind of happened upon a how-to-hack-a-toy-EEG tutorial online. And it was like, take this MindFlex toy with a NeuroSky chip inside of it, tap into the serial stream, plug it into an Arduino, and here's a Processing sketch, here's all your brain activity. And I was like, holy shit, this is so cool. But I think the reason I thought it was so cool is because over the course of my college experience, I played rugby and I suffered a series of concussions. And I actually felt and experienced my brain; like, the fact that I sustained injury to my brain, I felt it affect my mind. And it was really the first time I thought about mind-body dualism and this whole idea that you can physically alter matter, which ends up affecting this spiritual entity that is your mind. And then I got really interested in this whole idea, and then I found this tutorial and I was like, you know, this is the first time I've ever been able to quantify my mind, right? Where I can see zeros and ones, or scribbles on a screen, that in a way are quantifying the qualitative. You know, that was the beginning, and I think simultaneously one of my best friends suffered a neck injury and was paralyzed from the neck down, and I saw, like, you know, I knew it's the same thing, right? It's electrical signals that are trying to work their way from the brain to the body and unable to do so. And so you see the fragility, but also the resilience, of the human body to sustain injuries like this and recover from them. So my friend is now, like, he can walk again. He's like 95% recovered, because he suffered an incomplete spinal cord injury, meaning it wasn't a severed spinal cord, but he was fully paralyzed for four months and then he recovered. And so it was just like all of these things were kind of happening simultaneously as I was discovering brain-computer interfacing. And it just made me realize that, like, we know nothing about the way that our body sends little pieces of information, electrical data, you know, from the root of the tree to the leaves and the flowers that are our fingers. And it's just, you know, I think one of the final frontiers is the complexity of the human brain and the way that it influences our mind and the things that we do. You know, that and outer space are basically the two things that we don't understand: the inner world and the outer world. And I think since then there have been many other things, and also, like, hearing other people's experiences as to why BCI lights them up. Like, everyone has their own story, like, oh, you know, my sister suffers from anxiety, or my mom died of dementia, or, you know, things like this, where all these cognitive disorders, which are literally the failing of the nervous system or some piece of it, are seemingly things that we can fix, we can correct them. So that's the original story, and then I think over time I've been inspired by other people's stories.

[00:19:00.772] Kent Bye: And so you had a Kickstarter, and then you've since had other projects. So maybe you could give me a little bit of an overview of where the OpenBCI project is at now and where it's going.

[00:19:11.059] Conor Russomanno: Yeah, so we've run two successful Kickstarter campaigns now. The most recent one was to take our existing platform, really the piece of hardware, and strip it down to the bare minimum components while still getting very interesting data, but sell a device that's $100 as opposed to $500. And the whole goal here, from the beginning, was to target the maker market and education and show the world that young people can do very interesting things with this technology. It doesn't have to just be the NASAs, the MITs. People should have access to this technology, not just top-level researchers. And so we're still moving in that direction, working to design curriculums to integrate these $100 boards into classrooms where every student has their own board. And so my vision is like Biomed 101 or CogSci 101: every student has an OpenBCI board that they can take home with them in their backpack and be collecting their own data, submitting their data to class repositories. And then brain research just becomes a whole new game, where, you know, as opposed to having one $50,000 EEG system that's in the laboratory of the department that your class is a part of, now every single student has their own device. And yeah, it's $100, the data quality is not as good, but every single student has the ability to submit data. And now that data set is like a thousand samples as opposed to one.

[00:20:32.353] Kent Bye: Yeah, and it seems like with VR that you could start to really control the input in terms of whatever type of experience someone might be experiencing and start to do all sorts of really interesting neuroscience research where you're putting people into controllable environments and be able to measure the results of whatever their brain activity is doing.

[00:20:49.837] Conor Russomanno: Totally. I mean, of all the emerging techs, there's VR and there's AR, and I think VR is going to be the most immersive experience that humans have experienced to date. It already is. And so I think in tandem with EEG, there's an immense potential to design immersive experiences that are not just immersive, but responsive. So, like, literally as immersive as possible, but also changing and evolving based on human mental state. So, you know, I think the possibilities are incredible in terms of combining VR and EEG for immersive experience design.

[00:21:26.051] Kent Bye: And finally, what do you see as kind of the ultimate potential of these immersive technologies and what they might be able to enable?

[00:21:31.895] Conor Russomanno: I mean, I think we're just going to keep moving forward. We're just going to keep finding cool applications, cool games. I'm excited for VR, for some really, really great games and also cinematic experiences. I'm really looking forward to interactive cinematic experiences, where I think it's going to be very difficult to do it in an elegant way, where as a director you can keep the attention of the viewer and also direct the attention of the viewer. I think that's the hardest challenge for VR cinema right now: you know, you don't want someone looking away from a plot point that's super critical, but I think there are ways we can tease that out with three-dimensional sound and conversation in 3D space. But then also, you know, my thesis in grad school was a neuro-immersive narrative, a three-chapter story where each chapter was based on your brain's response to the previous chapter. It was really a subconscious choose-your-own-adventure. I think that is going to be really interesting to see play out as people put more resources into it. And I was one person working on a short story. But if you built a whole video game or a whole cinematic experience that was a branching narrative or an interactive narrative that you are just flowing through, and everybody's experience is different, that's art right there. That has not been done yet, or at least not effectively. So I'm excited for that.

[00:22:53.262] Kent Bye: Awesome. Well, thank you so much. Yeah, thank you. So that was Conor Russomanno of the OpenBCI project. And it was a really interesting interview for me, because I was kind of surprised about where this neural interface technology is headed. I had really expected that we'd be able to control VR with our minds, but it's really not a great use case for real-time interactions, especially in VR where you need very low latency. It's really difficult to have very precise control of your brain, and these EEG signals aren't necessarily clean enough to give you the level of fidelity you would need to actually control real-time interactions. I was also at the Experiential Technology and Neurogaming Conference, and there were a lot of different applications of people doing different things with brainwaves, but again, no real-time interactions. And so you kind of see this move away from neurogaming as the branding of this technology and more towards experiential technology, because it's less about the gaming and a bit more about healing, and medical applications I think are gonna be the real killer apps in terms of being able to monitor what's happening in your body and doing all sorts of different biofeedback, as well as monitoring your brain during learning and training. Training was another big application for these neural hacking devices, where you are able to detect your level of cognitive load and dynamically adapt the game based upon how much you're struggling. And so I'm really excited to see where these brain control interfaces go in the future. I did hear from a lot of different companies that the current generation of commercial off-the-shelf neural devices aren't necessarily compatible with the existing virtual reality head-mounted displays. There is a company called MindMaze, which is doing a deliberate integration between a VR headset and a neural interface. With most of the other devices, you would probably have to have a custom strap in order to do it. It's not something you could just put over a VR HMD, because it would occlude a lot of the sensors that it needs. So with that, I'm at the Google I/O conference right now. I'm actually standing in a little 8x8 booth waiting for the keynote to start. And so I'll probably go and start editing my podcast as the keynote's going to be starting here later this morning. I'll be doing a few more interviews about what's happening with Google throughout the week at Google I/O, but especially on Friday. So next week, we'll have a lot more information about some of the news that's coming out here at Google I/O. But if you do enjoy the podcast, then please do consider becoming a contributor to my Patreon at patreon.com slash Voices of VR.
