MindMaze is creating a brain-computer interface that's integrated with VR head-mounted displays, along with immersive technologies for neurorehabilitation. I had a chance to talk with MindMaze founder and CEO Tej Tadi at the Experiential Technology (XTech) Conference in 2017 about the gamification of mundane rehab tasks, how VR is accelerating neurorehabilitation with closed-loop systems, the future of VR as a diagnostic tool, other ways to stimulate the brain through transcranial direct-current stimulation (tDCS), transcranial electrical stimulation (tES), and transcranial magnetic stimulation (TMS), detecting cognitive and motor deficits, and the future of cognitive enhancement with biofeedback and immersive technologies.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series looking at the future of neuroscience and VR, today I feature the founder of MindMaze, Tej Tadi. This was a conversation I had with him back in 2017 at the Experiential Technology Conference. At that point, there wasn't really a lot that was known about MindMaze. They've been a fairly secretive VR startup, but they've been fusing together VR headsets with EEG technologies. I actually haven't had an opportunity to demo or see much of the technology, but I had an opportunity to talk to Tej about what they've been doing at MindMaze, specifically looking at neurorehabilitation for stroke victims, as well as how they could start to look at traumatic brain injuries and other applications for how to use VR to assess people's cognitive abilities as well as treatment. So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Tej happened on Wednesday, March 15th, 2017 at the Experiential Technology Conference in San Francisco, California. So with that, let's go ahead and dive right in.
[00:01:22.894] Tej Tadi: So, I'm Tej. I'm a neuroscientist, and the company's called MindMaze. What we do is at the intersection of how virtual and augmented realities will, let's say, passively activate the brain. There are different ways to do it. Neurostimulation is one way, and we have a strong belief in the way virtual reality will help activate different areas of the brain.
[00:01:43.633] Kent Bye: Great. So I know that there are a number of different VR headset manufacturers out there, and there's not necessarily a good way to integrate EEG into the headset. It sounds like you're doing a fully integrated solution, and perhaps starting in the medical market. Maybe you could talk a bit about some of the specific use cases and applications that you're targeting with this integration between a, I don't know if you call it an EEG or a brain-computer interface, and VR.
[00:02:10.691] Tej Tadi: You bring up a very good point. The headset we've built in-house is a headset that does mixed reality, right? So it has a set of cameras, it does virtual and augmented realities, and it comes with an embedded EEG headset to measure brain activity and eventually to stimulate. It's a medical-grade virtual reality and augmented reality headset. The reason this is interesting is the synchronization. The fact that we could see someone move their hand, and we understand the point in time when the brain actually reacts to seeing that movement, has important implications for the way we can set up treatment plans for someone with a certain deficit. So we use this device now for patients who have motor deficits, someone who cannot move his left hand, let's say. Now, someone who cannot move his left hand puts this headset on, but he still has the intention to move, right? So the moment you can track or tap into that signal, you're able to connect it to the virtual reality engine. He's imagining he wants to move, he sees the hand already move in the virtual world, and that helps start to, you know, kick in the plasticity, and he slowly moves his real limb. So, you know, there's an integrated way to close the loop of feedback and stimulate the brain for someone with a deficit.
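To make the intent-to-move idea a bit more concrete, here is a minimal Python sketch under heavy assumptions. MindMaze has not published its decoding pipeline, so this uses the textbook approach of detecting event-related desynchronization (ERD) of the mu rhythm (8-13 Hz) over the motor cortex: when mu-band power drops well below a resting baseline, the sketch treats that as movement intent and would trigger the virtual hand animation. The sampling rate, threshold, and the detect_move_intent helper are all illustrative, not anything from the interview.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz (illustrative)

def band_power(window, fs=FS, band=(8.0, 13.0)):
    """Mean spectral power of one EEG channel within the given frequency band."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def detect_move_intent(live_window, rest_baseline_power, erd_threshold=0.6):
    """Flag intent when mu-band power drops well below the resting baseline (ERD)."""
    return band_power(live_window) < erd_threshold * rest_baseline_power

# Toy usage with synthetic data: rest has a strong 10 Hz mu rhythm,
# while the "intent" window has that rhythm suppressed.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
rest = rng.normal(size=FS) + 2.0 * np.sin(2 * np.pi * 10 * t)
baseline = band_power(rest)            # calibrated during a rest period
live = rng.normal(size=FS)             # mu rhythm gone, i.e. intent to move
if detect_move_intent(live, baseline):
    print("Intent detected: animate the virtual hand")  # stand-in for a call into the VR engine
```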
[00:03:19.554] Kent Bye: Yeah, and so I've done some other interviews about this concept of neurorehabilitation, where you're able to use an external Kinect camera to go into a VR experience, and you're able to perhaps amplify the range of movement that you have, such that you start to train the brain in some way. But it sounds like, given the EEG, you're starting to look at brain wave activity that may be a signifier or a signal that comes before the actual movement. So maybe you can talk a little bit more about what type of signals you can get from the brain and how that's being fed into VR.
[00:03:52.120] Tej Tadi: I mean, it's two things, and that's a point I'll come back to when we get to the Kinect kind of devices. But coming back to the point you raise, the intention to move: if we could tap into the brain and identify the time point at which you intend to move, well before the movement actually takes place, those are signals we can record from the primary motor cortex, for example. You can have attention, and there are other cognitive signals you can get from the brain. So there are different kinds of signals you can put together and say, this is relevant for this context. See, if there's a patient who has to, let's say, zip up his pants, for example, or button up a shirt versus open a door, those are different kinds of movements. So we can predict those kinds of signals in the brain with training. Hence, it's relevant, because to train these patients for neuro-rehab, it's not just about saying the range of motion is better. How does the range of motion actually make sense? It doesn't mean I can actually pick up a cup of coffee versus button up my shirt. Those are the differences you want to pick up from the brain. Again, coming back to your point, the reason we build our own hardware is that when you look at off-the-shelf hardware, it's not meant for medical contexts, for example. For the ability to do this in a situation where a patient is in bed, commercial hardware typically doesn't work. So you've got to build something that's sustainable and robust and synchronized to the millisecond to be able to do this.
[00:05:06.330] Kent Bye: And so you mentioned that you're doing mixed reality. I'm curious if you have a depth-sensor camera to actually pick up the fingers, or if that's something you're thinking about when, say, someone has had a stroke, specifically looking at finger manipulations and being able to restore movement of the hands. Because right now a lot of the VR headsets are really focused on these motion-tracked controllers, which already kind of require you to hold on to these objects in a certain way. So I see the potential advantage of having some combination of either a camera-based system or a depth-sensor camera to detect the hands. But then there's the field-of-view issue of whether or not the hands can actually even be seen. So I'm just curious to hear what you're doing there.
[00:05:47.213] Tej Tadi: So you're absolutely right. Ours is a combination of depth sensors with RGB sensors and then some other sensors, right? The most important thing is how this information ties in together and synchronizes, because you want to use it for segmentation, you want to use it for tracking, so it is a combination of multiple sensors. But coming back to your point, I'm thinking more along the lines of therapeutic efficacy. You could do all the good signal processing you want, but it doesn't help if you can't combine the data from the depth sensor and the RGB sensors with the data from the brain, and if you're not able to combine physiological signatures with movement data. You talked about finger manipulation, so finger tracking's got to be absolutely right, but that's the last piece of recovery that happens sometimes, you know, distal versus proximal limbs. For patients who have spastic limbs, you know, you can't even track the fingers. So you can have all the good technology you want with depth sensors and the rest, but the patients are just not ready for something like that. So you've got to find what is contextually right for that phase of the patient's recovery. So is it just the intent, like I just said, versus just observation, and eventually taking it down to a point where they can move the fingers? Then it makes sense.
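As a rough illustration of the synchronization problem he's describing, here is a small sketch that time-aligns RGB frames, depth frames, and EEG sample blocks on a shared clock before any fusion or segmentation happens. The Frame class, the 5 ms skew tolerance, and the nearest-neighbor pairing are assumptions made for the example, not MindMaze's actual formats or tolerances.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Frame:
    t: float         # capture time in seconds on the shared clock
    payload: object  # pixels, a point cloud, or an EEG sample block

def nearest(frames, t):
    """Return the frame whose timestamp is closest to t (frames sorted by t)."""
    i = bisect_left([f.t for f in frames], t)
    candidates = frames[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f.t - t))

def fuse(rgb_frames, depth_frames, eeg_blocks, max_skew=0.005):
    """Pair each RGB frame with the nearest depth frame and EEG block,
    dropping any pairing whose clock skew exceeds max_skew (5 ms here)."""
    fused = []
    for rgb in rgb_frames:
        depth = nearest(depth_frames, rgb.t)
        eeg = nearest(eeg_blocks, rgb.t)
        if abs(depth.t - rgb.t) <= max_skew and abs(eeg.t - rgb.t) <= max_skew:
            fused.append((rgb, depth, eeg))
    return fused

# Toy usage: three streams at slightly different rates and offsets.
rgb = [Frame(i / 30, "rgb") for i in range(30)]
depth = [Frame(i / 30 + 0.001, "depth") for i in range(30)]
eeg = [Frame(i / 256, "eeg") for i in range(256)]
print(len(fuse(rgb, depth, eeg)), "synchronized triples")
```

The point is simply that every downstream step, whether segmentation, tracking, or correlating movement with brain activity, depends on all of the streams agreeing about when things happened.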
[00:06:51.906] Kent Bye: Yeah, and are you mostly looking at stroke rehabilitation, or what are the other things that you're treating here?
[00:06:57.090] Tej Tadi: So right now, because we have commercialized products out in the market, those focus purely on stroke patients with upper-extremity motor deficits, so movement problems: fingers, hands, shoulders, and the rest. And we do trunk and compensation. The next thing we do is traumatic brain injury, TBI, and spinal cord injuries. Everything to do with motor deficits across neurological indications, we're able to help rehabilitate eventually, and there's some therapeutic efficacy there too, see? Because a lot of VR right now is good for the rehab pieces of it, but not yet for building a prognostic marker or a diagnostic marker, which is also important. You can't just rehab people without a treatment plan.
[00:07:36.432] Kent Bye: And so is there an element of trying to turn these rehab exercises into a game?
[00:07:42.437] Tej Tadi: It's extremely important. I think gamification is fundamental, because a lot of these patients are isolated. They're depressed after the injury. It's important in two ways. One, the gamification of the neuroscience-driven content, just so that it's therapeutically effective. But secondly, purely on a motivational level, the difference in performance and improvement is drastic, because it drives their practice and intensity. So, yeah, absolutely. I mean, we do that, and I think everyone in this space should. It's important.
[00:08:12.961] Kent Bye: And so, you had mentioned this term, a closed-loop system. Maybe you could talk a bit about what that actually means in terms of this biofeedback and taking signals from the brain and feeding them back into a VR experience.
[00:08:23.570] Tej Tadi: Right. A simple example is this. I see my hand move from point A to point B, and there's error correction in the brain: I record a signal, I see the feedback, and then I'm able to change the brain signals again to error correct. So think of it as an error correction. I go from point A to point B, and if I'm not able to reach point B, I get an error correction signal, which is feedback that gets trained back into my intention and changes it. Do you see what I'm saying? So you need the VR to give the feedback. You need the recording system to record the error and feed it back again into the VR engine to say, okay, now this is the feedback you need to make the right movement again. And that's the closed loop: the loop of being able to error correct, stimulate, record, error correct, feedback, stimulate, record.
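Here is a toy sketch of that record, compare, feed back cycle, purely to illustrate the shape of a closed loop. The gain-adjustment rule and the simulated undershooting patient are made up for the example and are not how MindMaze's system works.

```python
import numpy as np

def closed_loop_reach(target, attempt_fn, trials=8, learning_rate=0.3):
    """Record where each attempt ends, compare against the target, and feed
    the error back as an adjusted visual gain for the next attempt."""
    gain = 1.0                      # how much the VR hand amplifies the movement shown
    history = []
    for _ in range(trials):
        reached = attempt_fn(gain)              # record: where the movement actually ended
        error = target - reached                # compare: the error-correction signal
        gain += learning_rate * error / max(abs(target), 1e-9)  # feedback: change what is shown next
        history.append((reached, error, gain))
    return history

# Toy "patient" whose reach chronically undershoots the target by 40%, plus noise.
rng = np.random.default_rng(1)
def undershooting_patient(gain):
    return 0.6 * gain + rng.normal(scale=0.02)

for reached, error, gain in closed_loop_reach(target=1.0, attempt_fn=undershooting_patient):
    print(f"reached={reached:.2f}  error={error:+.2f}  next gain={gain:.2f}")
```

Over a handful of trials the displayed gain rises until the visual feedback lands on the target, which is the loop's whole job: measure the error, change what the person sees, and measure again.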
[00:09:08.241] Kent Bye: And you mentioned brain stimulation. Is that some sort of external electrical stimulation? What is the science behind what that does?
[00:09:17.172] Tej Tadi: I mean, there are different ways to do non-invasive stimulation and invasive pieces, but we use different forms. tDCS, transcranial direct-current stimulation, is one way. There's tES, and there are many ways to do the stimulation piece. The one FDA-approved method now is TMS, transcranial magnetic stimulation, right? But no matter what the stimulating methodology or source, I think the more important piece is what you do with the stimulation, right? How does that feedback actually get modulated? How long do those effects sustain after you stimulate? It really depends on what you do after. So that's where our interest is too. So there are non-invasive and invasive ways to do it, electrical ways to do it, and that's the active piece, yeah. But VR is a good way to stimulate, and it's passive. That's the advantage.
[00:10:04.463] Kent Bye: Are there any clinical trials or any studies that you've done to be able to prove out the efficacy of MindMaze? And what can you tell me about that data?
[00:10:12.329] Tej Tadi: Absolutely. I mean, we now have hundreds of patients who have gone through the MindMaze device. We do proper multi-center clinical trials, randomized controlled trials, to show the efficacy. And we see that VR-based therapy actually has a significant impact on motricity. So yeah, we do have regulatory approvals, and we do clinical trials all the time. I mean, the only way to establish a more regulated, respected, believable environment is data. You need clinical data, without which you can't really show much else, right? So, yes, we're very active in the way we get clinical data and the way we partner with opinion leaders and with our partners, yes.
[00:10:47.656] Kent Bye: Have you done any experiments or use cases for MindMaze for cognitive enhancement, for what kind of things you could do having EEG in VR, and what kind of consciousness-hacking type of things might be possible?
[00:10:59.881] Tej Tadi: Consciousness hacking, maybe not yet, beyond the ethical implications of that. But of course, from an R&D perspective, we've looked at patients with phantom pain, you know, patients with memory deficits, kids on the autism spectrum. So we do a lot on the R&D side. We do have a full pipeline for the whole set of neurological indications, so we go from motor to cognitive, for sure. The consciousness hacking piece is still something to look at. I'd say we're still very far from it.
[00:11:32.931] Kent Bye: Well, I think there's an element, from a neuroscience perspective, where with EEG you're able to somehow discern different wavelengths and wave states. What types of data, specifically, could be inputs into VR?
[00:11:45.221] Tej Tadi: You know, you could do the simplest things. If you think about it, it's not a neural signal, but even a wink, you know, the artifact of an eye blink, could be a good input signal to the VR, let's say eventually to an operating system that runs VR. If I had to do a double click with a mouse, versus I did a double blink, or I did a squint, or I did an eyebrow raise, those are all signals you can get, but they're biosignals. With neural signals, you could do more: things like fatigue and attention and spatial attention and visual signals, and those could drive different aspects of VR experiences. But to get to a point where it's robust, where it works for every single face form, where it works for usability purposes and at costs for consumer consumption, that's still a while away. But yeah, there are many, many signals you can tap into already, for a wide range of experiences, right from emotions all the way to attention.
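As a sketch of how one of those non-neural signals might become a VR input, here is a toy double-blink detector: eye blinks show up as large artifacts on frontal EEG channels, so two threshold crossings within half a second get mapped to the equivalent of a double click. The thresholds, timing windows, and synthetic signal are all assumptions for illustration.

```python
FS = 256                   # assumed sampling rate in Hz
BLINK_THRESHOLD = 100.0    # microvolts; blink artifacts dwarf cortical EEG
DOUBLE_BLINK_WINDOW = 0.5  # max seconds between blinks to count as a "double click"
REFRACTORY = 0.15          # seconds to ignore after each detected blink

def blink_times(samples, fs=FS):
    """Timestamps (in seconds) where the frontal channel crosses the blink threshold."""
    times, last = [], float("-inf")
    for i, value in enumerate(samples):
        t = i / fs
        if abs(value) > BLINK_THRESHOLD and t - last > REFRACTORY:
            times.append(t)
            last = t
    return times

def double_blinks(times, window=DOUBLE_BLINK_WINDOW):
    """Consecutive blinks close enough in time to be treated as one input event."""
    return [(a, b) for a, b in zip(times, times[1:]) if b - a <= window]

# Toy signal: one second of flat EEG with two blink spikes about 0.3 s apart.
signal = [0.0] * FS
signal[64] = signal[64 + 77] = 150.0
events = double_blinks(blink_times(signal))
print("double-click events:", events)   # these would be dispatched to the VR input layer
```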
[00:12:35.814] Kent Bye: What are some of the biggest open questions or problems that you see as really driving MindMaze forward?
[00:12:41.500] Tej Tadi: The biggest questions are fundamental. The reason we strive, let's say, is we want to crack that understanding of how the brain synchronizes for these different ailments. It doesn't matter if it's an injured brain or a healthy brain. You talked about cognitive enhancement a bit before. What is the minimal set of stimuli you need to provide to be effective for that context? That's the big problem to solve. I don't know if that answers your question. Short term, you know, we do have a roadmap of regulatory and clinical and product milestones to solve for the healthcare piece. But from a bigger perspective, what we're trying to do is figure out how to combine these technologies so they make sense. It's not just, randomly use a headset, you know, put a headset on a patient, and it'll work. No, it won't.
[00:13:24.813] Kent Bye: And so what do you want to experience in VR?
[00:13:30.314] Tej Tadi: What I would experience in the real world. I want to experience everything I experience in the real world in the virtual world. Why not? Why not? I mean, I don't have one specific thing. I don't know if that's too random an answer for you, but that's the whole idea of virtual reality, right? It's the sense of being present in another world. I mean, we got to get there. And then everything's an experience there.
[00:13:53.703] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:14:01.028] Tej Tadi: If we get this right, together as an industry or as an ecosystem, we have the ability to truly enhance, I think, our capability to engage and learn in environments much faster. As human beings, we're able to adapt and learn new situations using VR and enhance our capabilities. Here's a short example again: I don't need a patient to put this on for five hours. In 15 minutes of using VR, I could do a lot more than a patient tapping on a cursor and looking at a computer screen. You get my point? So that's where I see the real potential, versus us trying to replace the television with a headset. You get my point, I guess. I think that's the potential across the board: automotive, defense, healthcare, call it whatever, IoT at the end of the day, eventually. Anything that's human-centric for learning and improving our abilities, disabled or not, is where VR has a big potential.
[00:14:56.941] Kent Bye: Awesome. Well, thank you so much. You're welcome. Thank you. Cheers.

So, that was Tej Tadi. He's a neuroscientist as well as the founder of MindMaze.

I have a number of different takeaways about this interview. First of all, with the fusion of neuroscience and VR, we're looking at getting all sorts of insights into what's happening in the brain. And I think from MindMaze's perspective, they're able to start to do things like look at the different signals in your brain to see the intent to move, and then to see how you actually move, and seeing the disparity between those two things could be used to do assessments to see if there are any impairments when it comes to your motor movement, but also to help train and retrain and do neurorehabilitation.

So he was saying that there's this error-correct, stimulate, record cycle, where you have these different loops in which your brain has all this information about what you expect and what you see. Are there ways to start to amplify or change what you see in a virtual reality environment as you're moving, and to do things like match up with what those error correction signals already are? Because if you're moving and there's a disparity, then you have to have some sort of correction. We talked in a previous episode about the predictive coding hypothesis in neuroscience, which is essentially saying that our brain is like a prediction machine controlling the movements of the body. So as you're moving, if there are errors, then you have to do these different error corrections. And if all these different systems are already in your brain, then are there ways for you to go into virtual reality environments and start to minimize the disparities that you need to retrain yourself against, and maybe start to build upon those existing networks? This feedback loop cycle within neurorehabilitation is about trying to close the gap between these error signals that we have in our mind, along with these different stimulations and different ways of recording these movements.

Now, I'm not exactly sure of all the different ways in which MindMaze is actually capturing your body and these movements. He's saying it's some sort of sensor fusion between the depth-sensor cameras and the RGB cameras, so 2D plus the depth sensors, and I'm not sure what else they're combining. But whatever they're able to do, they're able to track the motion of your body and then correlate that to what's happening with your EEG, and then make judgments based on that, trying to assess your abilities but also trying to rehabilitate you if you've had some sort of stroke, a traumatic brain injury, or a spinal cord injury. Any time there's a motor deficit across these different neurological indications, they're able to help rehabilitate it. So they're saying that VR is already a great tool for this type of rehabilitation, but they're also building VR to be a diagnostic tool and to help figure out these dynamic treatment plans, so that as you're moving they can measure your progress and then come up with larger treatment plans to help you on your healing path. So they're starting with stroke patients with these extreme motor deficits, but also traumatic brain injuries, and then basically anything else that involves a motor deficit. And the gamification is crucial.
I'm talking to Noah Falstein, who's been collaborating with MindMaze to try to help develop some of the game design techniques. But Tej said that the gamification of neuroscience is a key part of what is helping people create this sense of progress and keep motivated while doing these very difficult and mundane tasks. Through this lens of gamification, it's able to add this whole layer of allowing them to see their progress and stay motivated to keep going.

He did talk briefly about these external brain stimulation methods. He talked about transcranial direct-current stimulation, that's tDCS, or tES, transcranial electrical stimulation, or TMS, transcranial magnetic stimulation, which is the only one of those that is FDA approved. And talking to other neuroscientists, I think there's an open question as to how effective some of these external or more invasive ways of stimulating the brain are. The thing that's great about VR is that it's able to stimulate your brain through all of your existing sensory systems, so you don't need to put extra energy into the brain. But looking to the future, there could be ways in which you may need an extra kick or boost when it comes to rehabilitation, by using things like tDCS, tES, or TMS.

And it sounds like MindMaze has been involved with a lot of different randomized controlled trials to show their efficacy. Tej says that's pretty much the only way to establish more regulated, respected, and believable treatment plans: going through the process of doing all the different studies to prove out how effective their treatments are.

And, you know, the long term of where this is all going, I think, is what I think of as cognitive enhancement, or colloquially, the term is consciousness hacking. He seemed to be very hesitant, like that's a term that could be interpreted in lots of different negative ways, but I think it's generally used as a positive kind of hacking, like you're trying to improve your brain. The whole consciousness hacking movement is about trying to optimize your brain, either through looking at real-time biofeedback or through meditation or these other altered states of consciousness that people are exploring. But the main point is to try to get this feedback from your body and to have this cognitive enhancement and improvement and real optimization of yourself, to really get into these different flow states.

And the real potential of this is to start with these neurodegenerative diseases, or with strokes, or with people who have experienced some sort of motor deficit, and to use VR to help improve that. But eventually, it's for anybody to be able to go into these VR experiences, do these different cognitive enhancement types of experiences, and truly engage and learn and expand our minds. And the power of VR is a lot more effective than what you can do with other mediums, especially when it's tied together with what's happening within your brain and getting this real-time biofeedback.

So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then there's a couple of things you can do. First of all, just spread the word, tell your friends, share this on social media, send it to a friend who you think might like it.
Just generally, if someone's really interested in getting into VR, point them to the Voices of VR podcast. Also, consider becoming a supporting member of this podcast. This is a listener-supported podcast, and so I'm supported by my Patreon supporters to be able to do these different types of interviews. And if you'd like to see this podcast continue and remain freely available to the entire VR community, then please do consider becoming a supporting member. Like I said, my livelihood does rely upon listeners like yourself to allow me to do this real-time oral history, to document the evolution of VR as it's unfolding, and to help keep you informed as to what's happening within the community. So, just $5 a month is a great amount to give, and it allows me to continue to bring you this coverage. So, you can donate and become a member today at patreon.com slash Voices of VR. Thanks for listening.