When I attended the Experiential Technology Conference in May 2016, I heard from a number of commercial off-the-shelf brain-control interface manufacturers that their systems don't natively work with VR headsets, because some critical contact points on the head are occluded by VR headset straps. Companies like MindMaze are building integrated EEG hardware into their headsets, primarily for high-end medical applications, and perhaps we'll start to see more EEG hardware integrations in 2017.
Qneuro is an educational company that was exhibiting at the Experiential Technology Conference, and they had some early VR prototypes that used EEG as an input within a lab environment. Qneuro founder Dhiraj Jeyanandarajan is a clinical neurologist and neurophysiologist who monitors real-time electrophysiological signals so that surgeons can make corrections during brain or spinal surgeries. He's also a father who got frustrated with the educational games that were available for his two kids, so he started Qneuro to create educational games that integrate real-time EEG feedback.
Qneuro has been building 3D environments in Unity and launching them on the iPad, and they’re still waiting for a more integrated hardware solution before launching their virtual reality version. I had a chance to catch up with Jeyanandarajan at the XTech Conference to see what they’re able to do with real-time EEG feedback within a lab environment to improve the learning process within their educational game.
LISTEN TO THE VOICES OF VR PODCAST
Jeyanandarajan said that they’re using Cognitive Load Theory to improve the efficiency of learning. They’re using the EEG data to detect how hard users are thinking, and then they’re dynamically reducing distracting factors like visual and auditory complexity or increasing the frequency of hints that are provided. Here’s more from their blog as to how they’re using Cognitive Load Theory:
Our research facility and team continue to investigate key concepts within cognitive load theory such as efficiency in learning, cognitive load, multi-modality, schemas, automation, the split attention effect, guided instruction and modifications to instructional design from novices to experts, through research data gathered in real time from our own experiments and primary research.
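To make that adaptation loop concrete, here's a minimal sketch in Python of what a Cognitive Load Theory-driven feedback loop could look like. Qneuro hasn't published its algorithms, so every name and threshold here is a hypothetical illustration of the concept, not their actual implementation:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    visual_complexity: float = 1.0   # 0..1, density of on-screen detail
    audio_complexity: float = 1.0    # 0..1, layers of music and effects
    hint_interval_s: float = 60.0    # seconds before a hint surfaces

# Hypothetical thresholds on a normalized [0, 1] cognitive load estimate.
HIGH_LOAD, LOW_LOAD = 0.7, 0.3

def adapt(game: GameState, load: float) -> None:
    """Adjust the game given an EEG-derived cognitive load estimate."""
    if load > HIGH_LOAD:
        # Learner is straining: strip away distractions, offer hints sooner.
        game.visual_complexity = max(0.0, game.visual_complexity - 0.1)
        game.audio_complexity = max(0.0, game.audio_complexity - 0.1)
        game.hint_interval_s = max(10.0, game.hint_interval_s / 2)
    elif load < LOW_LOAD:
        # Learner has headroom: restore richness, space hints back out.
        game.visual_complexity = min(1.0, game.visual_complexity + 0.1)
        game.audio_complexity = min(1.0, game.audio_complexity + 0.1)
        game.hint_interval_s = min(60.0, game.hint_interval_s * 2)
```

In practice the load estimate would be smoothed, such as with a rolling average over recent EEG readings, so the game reacts to a sustained trend rather than to moment-to-moment noise.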
It's an open question as to how effective brain-control interfaces (BCI) will be in providing real-time interactions within VR environments. OpenBCI co-founder Conor Russomanno told me in May that the real power of brain-control interfaces is not in real-time interactions from EEG data; rather, it's the electromyography (EMG) signals that are much stronger and easier to detect for real-time interactions:
Russomanno: I think it’s really important to be practical and realistic about the data that you can get from a low-cost dry, portable, EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. In many cases, it’s just not a practical use of the technology. I’m not saying that it’s not cool, but it’s important to understand that this technology is very valuable for the future of humanity, but we need to distinguish between the things that are practical and the things that are just blowing smoke and getting people excited about the products.
With EEG, there's tons of valuable data, which is your brain over time in the context of your environment. It's not about looking at EEG or brain-computer interfaces for real-time interaction, but rather looking at this data and contextualizing it with other biometric information like eye-tracking, heart rate, heart rate variability, and respiration, and then integrating that with the way that we interact with technology: where you're clicking on a screen, what you're looking at, what application you're using.
All of this combined creates a really rich data set of your brain and what you're interacting with. I think that's where EEG and BCI are really going to go, at least for non-invasive BCI.
That said, when it comes to muscle data and micro-expressions of the face, jaw grits, and eye clenches, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems, people with ALS, quadriplegics.
It doesn't make sense to jump past all of this muscle data directly to brain data when we have this rich data set that's really easy to control for real-time interaction. Recently I've been preaching: BCI is great, it's super exciting, but let's use it for the right things. For the other things, let's use these data sets that already exist, like EMG data.
Voices of VR: What are some of the right things to use BCI data then?
Russomanno: As I was alluding to, I think looking at attention, looking at what your brain is interested in as you're doing different things. Right now there are a lot of medical applications: neurofeedback training for ADHD, depression, and anxiety, and then also new types of interactivity, such as someone who's locked in being able to practically use a few binary inputs from a BCI controller. In many ways, I like to think that the neuro revolution goes way beyond BCI. EMG, muscle control, and all of these other data sets should be included in this revolution as well, because we're not even coming close to making full use of these technologies currently.
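Russomanno's point about EMG being "really easy to control" comes down to signal amplitude: a deliberate jaw clench produces a burst of muscle activity far stronger than typical EEG features, so even a simple moving RMS threshold can turn it into a reliable binary input. Here's a minimal sketch of that idea, assuming a single channel of samples in microvolts at 250 Hz; the window length and threshold are illustrative, not tuned values:

```python
import numpy as np

def detect_clench(samples, fs=250, window_s=0.2, threshold_uv=50.0):
    """Return True if the most recent window of a single EMG channel
    (in microvolts) exceeds an RMS amplitude threshold -- a crude but
    workable binary input, e.g. a 'click' driven by a jaw clench."""
    window = np.asarray(samples[-int(fs * window_s):], dtype=float)
    rms = np.sqrt(np.mean(window ** 2))  # moving RMS over the last 200 ms
    return rms > threshold_uv
```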
In the short term, it's still an open question as to how much value EEG data will be able to provide within the context of a real-time game. The quality and fidelity of the data depend upon how many EEG sensor contact points can make a direct connection to the skin of your scalp. More sensors provide better data, but may be more inconvenient to use. And since the most crucial contact points sit in the same places as the VR straps, using EEG as an input to a VR experience may require a custom integrated headset like MindMaze's.
The NeuroGaming Conference rebranded itself last year to become the Experiential Technology Conference & Expo, perhaps to de-emphasize real-time interactions in games and focus more on medical and educational applications. There were also a lot of companies at the Experiential Technology Conference who were using machine learning techniques to make sense of the noisy and complicated EEG signals coming from BCI devices. These AI techniques could also be used to detect levels of attention as well as different emotional states.
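The typical recipe those companies described follows a common pattern in the BCI literature: reduce raw, noisy EEG to band-power features (theta, alpha, beta, and so on), then train a classifier to map those features to a label like attentive versus distracted. Here's a minimal sketch of that pipeline, with illustrative band definitions and an off-the-shelf classifier standing in for whatever proprietary models the vendors actually use:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

# Illustrative band definitions in Hz; real pipelines vary.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(channel, fs=250):
    """Average spectral power in each band for one EEG channel."""
    freqs, psd = welch(channel, fs=fs, nperseg=2 * fs)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def train_attention_model(epochs, labels, fs=250):
    """Fit a classifier on labeled epochs (each an array of raw samples),
    where labels are 0 = distracted, 1 = attentive."""
    features = np.array([band_powers(epoch, fs) for epoch in epochs])
    return LogisticRegression().fit(features, labels)
```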
In the long term, virtual reality will likely be integrating more and more biometric data as feedback into VR experiences. The Intercept recently wrote about how VR could be used to gather the most detailed and intimate digital surveillance yet, and so there are a lot of unresolved privacy implications that come with using biometric data in VR experiences. The virtual reality community and privacy advocates will need to push companies to evolve their terms of service and privacy policies to spell out what types of data are collected and stored, and how they can and cannot be used.
There are currently a lot of challenges in using EEG or EMG data to control VR experiences, but there is also a lot of potential, ranging from individualized educational applications and medical applications to personalized narratives based upon your emotional reactions and biofeedback experiences that help deepen contemplative practices.
Support Voices of VR
- Subscribe on iTunes
- Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So on today's episode, I'm going to be talking about some of the challenges and potential of using brain control interfaces with virtual reality experiences. So, I was at the Experiential Technology Conference in May, which is really where a lot of the most cutting-edge brain control interfaces and types of interface technology are displayed, and what's happening with being able to use artificial intelligence to extrapolate all sorts of emotional states and information by reading your brainwaves with these EEG sensors. Now, the only problem is, what I heard from a lot of these different companies is that a lot of these commercial off-the-shelf brain control interfaces weren't necessarily compatible with virtual reality headsets out of the box, mainly because the straps from a virtual reality headset block and occlude a lot of the major points where you need to put these EEG sensors on your head to be able to get a good signal and read your brain waves. But all that said, I think at some point there are going to be more and more technologies that integrate EEG technology directly into the VR headset. One example is MindMaze. They've raised $100 million and they'll be primarily focusing on medical applications. And so there was still one educational company that was there at the Experiential Technology Conference, and they were called Qneuro. And I had a chance to talk to the founder, Dhiraj Jeyanandarajan, about how he's planning on trying to integrate these EEG signals in order to give real-time feedback and interaction with an educational game that is teaching students how to get through the Common Core mathematics curriculum. So we'll be talking about cognitive load theory and how he's using it to improve the training process, as well as some of the challenges that he faces in trying to get some of this technology to work. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. The Voices of VR podcast started as a passion project, but now it's my livelihood. And so if you're enjoying the content on the Voices of VR podcast, then consider it a service to you and the wider community and send me a tip. Just a couple of dollars a month makes a huge difference, especially if everybody contributes. So donate today at patreon.com slash Voices of VR. So this interview with Dhiraj happened at the Experiential Technology Conference on May 17th in San Francisco, California. So with that, let's go ahead and dive right in.
[00:02:56.078] Dhiraj Jeyanandarajan: Okay, my name's Dhiraj Jeyanandarajan. I am the CEO and founder of Qneuro, which is an educational game company that's creating educational video games for kindergarten through sixth grade, focusing on mathematics. And we use EEG technology in order to help kids learn as efficiently as possible. Our game's currently designed for the iPad, but we're developing in Unity, so we've got an eye on virtual reality as the next step in the natural progression of the game.
[00:03:25.760] Kent Bye: So maybe you could just tell me the story of how this came about.
[00:03:28.662] Dhiraj Jeyanandarajan: Sure. So I come from the medical side. I'm a clinical neurologist and a neurophysiologist, and we use this technology for recording electrophysiological signals during brain and spine surgery to improve the outcomes, to give the surgeon feedback in real time so that they have a window into the actual state of the nervous system during the surgery and can make corrections in real time rather than finding out after the fact. And I had two small kids, and seeing what games they were exposed to and what games were available out there on their iPads, and looking for educational video games, I started to create my own content. One thing led to another, and then we were basically merging these two technologies so that we can actually use the technology that we're using on the medical side to improve the efficiency of learning on the education side.
[00:04:20.824] Kent Bye: Yeah, so we're here at the Experiential Technology and NeuroGaming Conference, and you're creating a neural game where you're taking feedback from someone's brainwaves through this EEG monitor, feeding that into the gameplay, and creating this feedback loop cycle. So, first of all, what are you receiving from the brain? What are you able to extrapolate from that? And then how do you feed that into the gameplay?
[00:04:44.339] Dhiraj Jeyanandarajan: So we're compatible with most of the major commercially available EEG headsets. Ten years ago this wouldn't have even been possible because these EEG headsets were so far out of the reach of the normal consumer. But now you've got these prosumer versions, you've got EEG headsets that are less than $100. It makes it accessible to people. What we need now is content for these, and actual uses for these that go beyond maybe just the gaming environment. But also now we're looking to combine the gaming and the education environments so that we're creating this content that uses these EEG waves. So specifically, what are we measuring with the EEG? Naturally, your brain just generates electrical activity, by the firing of neurons that happens normally in the background and through whatever cognitive processes are going on. The EEG records that electrical activity. We're taking that raw EEG data from these consumer-available headsets and applying our own algorithms to it, based off of certain rhythms, alpha, beta, theta, gamma rhythms that are recorded from different points on the brain and different electrodes, and we're running those through different algorithms in order to come up with different measurements for how the brain is working. And based off of that, we're able to then apply that back into the game as feedback, among other analytics that we're using, in order to improve this kind of efficiency of learning.
[00:06:08.781] Kent Bye: So you had mentioned cognitive load as being one of the factors that you're able to extrapolate from all these different signals. So what is the cognitive load and how is that related to learning?
[00:06:18.708] Dhiraj Jeyanandarajan: So cognitive load is a measure of how hard your brain is thinking, or how hard your brain is working to perform a particular task. So, we've got these mathematics tasks that the kids are focusing on and trying to solve. And it's a valuable bit of information that the EEG is actually a really good measure of cognitive load. It's well established through papers and published algorithms how to determine cognitive load based off of these EEG measures. So we use some of these, we've kind of tweaked them and tested them in our own controlled environment and then tested them in our game itself so that they're reliable measures of this cognitive load. Now with that information, we've got a window into the brain as to how hard the brain is actually thinking. We don't have to just rely on: are you getting the question right? Are you getting the question wrong? How fast are you answering the question? How slow are you answering the question? How many times do you have to press a hint button? We're using all that information, but we're combining it with a real-time window into the brain, which is determining how hard your brain is actually working, how hard it's thinking. That is a valuable bit of information that we can now use and combine with something called cognitive load theory, which comes from the neuroeducation environment, where they've performed these behavioral studies which basically tell us that if you reduce the complexity of distracting factors, then you can improve the efficiency of learning and the efficiency of solving a particular problem or task. So with this cognitive load information, our game dynamically changes. If you've got a high cognitive load that the EEG is measuring, it reduces the visual complexity, it reduces the auditory complexity, it increases the frequency with which hints will come up, and hints will come up automatically, and it changes the difficulty level of the settings. That, in combination with the other behavioral measures that we're taking into account, all makes for faster processing and a faster time to completion for the kid when they're trying to work through these problems.
[00:08:15.268] Kent Bye: And so you also have a virtual reality component to this. And so how does the VR world that you're creating kind of fit into these different games?
[00:08:23.091] Dhiraj Jeyanandarajan: So we're designing this whole game in a 3D environment, a 3D interactive environment. For each one of those modules, we've got an idea of how it would look in a VR space. Everything is actually designed for a VR space, even down to the level of processing that's required. We design everything for that VR space, and then we scale it back so that it'll work on a tablet or an iPad. The reason for that is we know VR is the future, or we believe VR is the future. But we're designing for the present because we'd like to get our game out there, we'd like to get mass adoption of the game. But once VR really kicks in, catches on, becomes more of a mass-adoption thing, we'd like to be right there at the forefront, ready to release our product into the VR space. And we're developing in Unity, so it's quite an easy transition for us to go from our current tablet release to the VR space.
[00:09:19.441] Kent Bye: Yeah, and I know that there's companies out there like MindMaze and other companies that are producing headsets that are specifically doing integrations for being able to track EEG, but is this something that you could take a commercial off-the-shelf EEG and combine it with a Oculus Rift or HTC Vive and have it work?
[00:09:35.284] Dhiraj Jeyanandarajan: No, unfortunately not, just because of the physical setup: how much of the head space the headset takes up, the way that the VR set is attached to the head, and the straps that go across. The straps would be perfect for putting EEG into, but as of right now, there are none available that we've had for testing. But we are aware of groups, like you mentioned, that have this in the works and will be releasing them soon. And we're eagerly awaiting those, because we would love to integrate our game into VR using the EEG as the key differentiator.
[00:10:12.128] Kent Bye: Now you said you're using like common core educational practices. So maybe you could talk a bit about like how you're taking the standardized learning that the education department in the United States has put forth as their recommended approach to education and how you're using that to gamify it within your games.
[00:10:29.160] Dhiraj Jeyanandarajan: So the Common Core basically sets certain rules and standards for what you need to know in order to be able to demonstrate understanding of a particular concept. So for multiplication, it's not just learning that 3 times 5 is 15, it's learning that 3 times 5 early on represents 3 groups of 5 objects in each group. So we're taking that, and we've got a director of education who's taking these standards and designing the curriculum and the assessments around those particular standards. So Common Core just gives you guidelines to follow, but we're taking those guidelines as just that, as guidelines, and we're taking our game environment, designing things that will work in the digital space and in the virtual reality environment, and designing them to meet those Common Core standards.
[00:11:20.337] Kent Bye: And at your booth, you had a little mini poster about EEG and multiplication tables. Maybe you could tell me a bit about that.
[00:11:26.547] Dhiraj Jeyanandarajan: So where we are today is being able to use some of these cognitive load measurements and other simpler factors like attention, frustration, those types of things. Where we can go with this technology is really pretty amazing. What we're looking to do is potentially to be able to identify what your signature is, what the EEG signature is when you've mastered a specific topic or a specific task, compare that to the novice-level EEG signature that you've got, and be able to go from one to the other much faster. We've got lab studies that have demonstrated some very promising data on this, but that's stuff that we're still working on and is yet to be released.
[00:12:07.429] Kent Bye: And so how do you assess or measure the efficacy of what you're doing? Have you been able to test this and see that it's efficient or effective?
[00:12:16.035] Dhiraj Jeyanandarajan: So in the controlled lab environment, we have shown that it is effective. But what we're doing now, the next step for us, is once we've got enough of these assessments completed, we'd like to get it into schools and do a couple of pilot programs, and basically do an A-to-B comparison: using this technology with the EEG, using this technology without the EEG, and then using a control, maybe a certain amount of electronic time or some equivalent game that's not our game in particular, and see what the differences are. We strongly believe that there needs to be firm and solid science backing our claims, and we'd like to get data that shows that. So that's the next step for us.
[00:12:57.489] Kent Bye: And with an EEG, there's some that have just one sensor, and 14, and 32, 64. Maybe you could talk about the different levels of fidelity of input that you're able to get from more and more sensors from an EEG, and then how you're able to either change or adapt your gameplay based upon these higher levels of fidelity.
[00:13:15.966] Dhiraj Jeyanandarajan: So a higher number of channels does translate to higher-fidelity signals and a little bit better coverage over the brain. But surprisingly enough, some of these lower-channel systems have the EEG sensors in the appropriate places that are necessary for us to gather the required amount of information that we need. So while we're trying to go from our 64-channel research-grade system, which is in the lab environment, down to a 4-channel, 5-channel, or even 1-channel system in the consumer space, that's a challenge. But it's a challenge for which we've been able to get some very promising results early on. What we'd like to do is quantify what the actual difference is between the 64-channel system and this 1-channel system, or the 4-channel system, or the 5-channel system, and be able to give some quantitative measure of how good the system is at detecting something like cognitive load relative to the research-grade system, and then how that translates to being able to make the learning process more efficient through our game.
[00:14:17.920] Kent Bye: Do you have any stories or anecdotes of children that have been playing or using your games?
[00:14:24.233] Dhiraj Jeyanandarajan: Basically, the test subjects are my kids, my two kids. I've got a second grader and a third grader and all of their friends, which is a fun process. They're my inspiration. They're the whole inspiration behind even creating this whole thing. And we've gotten some very positive feedback. It's interesting for them to be a part of this whole process. This is new to me. I'm from the medical space. So being new to this gaming space, it's been an experience for me. But we've got a very talented team of people. We've got neuroscience people, we've got education people, and we've got a whole team of gaming people and programmers and engineers. And we've got an office in Southern California, and we've got an office in Chennai, India, where we do all of our development work.
[00:15:05.165] Kent Bye: Do they think it's fun? Do you find that they, on their own volition, want to play the game? Or is this something that they have to do their homework by playing this game?
[00:15:12.900] Dhiraj Jeyanandarajan: Oh no no, the whole goal of this was to achieve the same level of excitement that they get with wanting to play their entertainment games, except now they're learning as a byproduct of actually playing that game. Obviously they're biased, but we've gotten some very, very good feedback from the people that we've shown it to initially. But again, it's early stages. We're getting close to the point where we're going to start testing it more widely, especially with test groups of kids, and getting feedback from them before we do our full release.
[00:15:46.563] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:15:53.746] Dhiraj Jeyanandarajan: So we're just beginning to tap into this combination of virtual reality with having a real-time window into the brain. It's such a powerful tool. I mean, we're limited on the iPad or any tablet to this two-dimensional interface. It's very powerful and it's taken us leaps and bounds ahead of where we were before. But now imagine that in a 3D space, and all of a sudden whole new worlds open up. So we're so excited about taking this into the VR space. We're just kind of chomping at the bit waiting for this technology. We're not hardware producers, so we've got to wait for the hardware to get there. But as soon as it does, we're ready to launch with our game in that space.
[00:16:33.978] Kent Bye: So from your perspective, why is virtual reality compelling for this use case, above and beyond what you can do with the iPad?
[00:16:40.285] Dhiraj Jeyanandarajan: Well, just what we've talked about here in the conference is, OK, you've gotten to a certain level with the displays, but we still have yet to catch up with the inputs, right? But one of the promising things with the inputs is you can use your entire body and you can use this whole kinesthetic sense of motion and movement, which is fairly restricted when you're just dealing with moving your hand or fingers for an iPad. So that whole possibility of being able to immerse yourself in this world and being able to use that extra additional faculty of movement allows a whole different level of brain activation that we can use now to actually increase that whole process of activating different brain regions while you're learning something so you can translate something from theory into real-world applications.
[00:17:30.273] Kent Bye: And is that based upon embodied cognition theory, where the more that you move your body, then the more you could potentially remember?
[00:17:37.478] Dhiraj Jeyanandarajan: Essentially, yeah. I mean, it's similar to that, basically. By employing more of your faculties while you're learning something, so you're employing your movement, you're speaking it out loud, those types of things all help to consolidate and to activate different circuits, and you form this kind of coherence, this measure of, okay, you've got this one learned skill, which is a mathematical skill for our purposes, right? What connections can you form that will translate that into the real-world environment? So where this skill really becomes helpful is not just being able to regurgitate for a standardized test, but being able to look around the world and see how this can actually be applied to real-world problems in a real-world environment. So, by being able to translate that and make these connections, let's say for example between the kinesthetic movement we're talking about and a particular mathematical skill that you learned, then you're able to have one more avenue into saying, okay, during this interaction with the real world, I'm using this particular movement or motion, which might then trigger it to say, okay, this mathematical skill can be used to solve this particular task. Whereas before, you might not actually make that connection, because that connection is not present in your brain. You have to work hard in order to really form that connection through a couple of different routes, so to speak. But here it's kind of this direct activation, where you're activating this mathematical skill and you've got all these other areas that light up in the brain that form these coherent processes when you activate this initial mathematical recall.
[00:19:12.283] Kent Bye: Is there anything else that's left unsaid that you'd like to say?
[00:19:15.407] Dhiraj Jeyanandarajan: No, just we're very excited about this. We think this is the future of learning and we think this merging of neurotechnology with, you know, this experiential technology like virtual reality is really going to be the next major revolution in education.
[00:19:31.675] Kent Bye: Awesome. Well, thank you so much. Thanks. So that was Dhiraj Jeyanandarajan. He's the founder of Qneuro, which is working on an educational game that is integrating both virtual reality and EEG technologies. So I have a number of different takeaways about this interview. First of all, I think it's a bit of an open question as to how effective it's going to be to use a brain control interface, and EEG specifically, to do real-time interactions with either a video game or an immersive VR experience. The Experiential Technology Conference used to be called the NeuroGaming Conference, but they rebranded it last year, I think in part to start to de-emphasize the role of real-time interactions with games, and to instead start to focus on more medical applications where you're looking at sets of data over a period of time. Now, it sounds like this application from Qneuro, where you're able to perhaps detect the levels of cognitive load, could be a little bit of a rolling average that changes over time. And maybe you're able to cross a threshold to reduce either the visual or auditory complexity to make it easier for people to pay attention, or at the same time give more hints for people as they're running into trouble. So the biggest limitation for Qneuro is that they're kind of waiting on the hardware technology integrations to make it easy to actually combine these brain control interfaces with the VR experience. Like I said, with the way that the HTC Vive and Oculus Rift are built, the straps from the virtual reality headset block and occlude a lot of the major contact points that you need, so there's no way to integrate these sensors directly without creating a custom strap that has all the proper technology built in. I'm not sure, in the short term, if that's going to be a viable solution. So with something like MindMaze, as they start to share more information and launch, probably in the medical field, is where we're going to see a lot of those applications. It'll be interesting to see whether or not that's even available for consumers to do some of these educational applications like Qneuro is talking about. But overall, when I went to the Experiential Technology Conference, what I saw there was also just a lot of different companies using different machine learning techniques to detect and extrapolate all sorts of different things, whether it was the quality of your attention or different emotional states. And I think that's one of the both exciting and scary parts of looking at this type of biometric data being fed into a virtual experience. I think having that type of biofeedback in an immersive experience could potentially deepen or change the trajectory of the story, to either amplify or de-escalate something that is too intense. This is something that Conor Russomanno talked to me about back in episode 365; that was actually his thesis in college, thinking about how to use this biofeedback and different emotional states to feed into these interactive narrative experiences. But Conor is from OpenBCI, and he was actually also cautioning against using EEG as a real-time mechanism, saying that wasn't necessarily its strength. There's also electromyography, which is more of these muscle twitches that end up giving a stronger signal for doing real-time interactions.
So whether it's a jaw grit or an eye twitch or something like that, you're able to give a much more discrete signal that can be read in real time and potentially used in an interactive fashion. So potentially we'll see these kinds of facial expressions used as a way of doing user interface. But being able to control your thoughts and your mind is something that's actually very difficult for most people, unless you're like a master Buddhist monk meditator who's able to have this clear mind and no distracting thoughts. I think that's part of the problem: it's actually very difficult not only to get your thoughts clear enough for a strong signal, but just the process of putting all these sensors over your hair, and having to put on gel, is not a great user experience. You have this trade-off around ease of use: if it's going to be easy to use, it's going to have fewer sensors to deal with, so you're not going to have as great of a signal; but the headsets that have a lot more sensors will give a better signal, while being a bigger pain to actually get set up and to make sure you have strong contacts touching your scalp directly. So I think there are a lot of challenges for brain control interfaces and virtual reality experiences. So I definitely recommend you go back and listen to episode 365 or read some of the partial transcript that I have, both in this post as well as in that post, just to get a little bit more information as to where this is all going. CES is coming up, and so maybe we'll start to see more companies get into this space. But I expect that this is likely going to be a bit of a niche application for people who have very specific neurorehabilitation needs, whether they're stroke victims or others who are able to use that biofeedback from the brain to feed directly into a VR experience that is trying to do some sort of rehabilitation. This is the type of thing that I just covered with Larry Hodges in episode 487, talking about virtual therapy and stroke neurorehabilitation and being able to relearn skills. And the big thing there was to be able to actually show enough value so that the insurance companies would be paying for a lot of these systems and for that software. Larry Hodges basically said that there are some people that would pay just about anything to be able to recover from a stroke and regain access to one of their limbs that they have to retrain their brain how to use. And so I expect something similar with this type of technology: it's likely going to have to prove its direct value in medical applications first. And I think that as that gets funded and proved out, then eventually we'll start to see more and more educational applications. So it'll be interesting to see whether or not there's going to be enough of a market for people to actually have the hardware that's available, so that the content can be there to really push the limits of what's possible. So I'm just grateful to Dhiraj and Qneuro for being on the pioneering leading edge of trying to see what's possible with the content, and to prove out what's possible with using concepts like cognitive load theory and this real-time neural feedback in order to optimize the learning experience. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast.
And if you enjoy the podcast, then please do spread the word, tell your friends, and become a donor. Just a few dollars a month makes a huge difference. So go to patreon.com slash Voices of VR. Thanks for listening.