#813: Neuroscience & VR: DIY BCI with REINVENT & Neurorehabilitation with VR


Dr. Sook-Lei Liew is an assistant professor at the University of Southern California, and the director of the Neural Plasticity & Neurorehabilitation Laboratory at USC. At IEEE VR 2017, she was showing off a DIY brain-computer interface called REINVENT, an acronym that stands for “Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training.” It is built on the OpenBCI system and uses 16 channels in a 10–20 EEG arrangement. The project was funded by a National Innovative Research Grant from the American Heart Association, and was created to provide a low-cost immersive technology solution for neurorehabilitation after stroke.

I had a chance to catch up with Liew at the IEEE VR 2017 conference, where we talked about the development of the REINVENT BCI, how they’re using IMUs to get tracking data, the principles of neurorehabilitation, the potential role of virtual embodiment in neurorehabilitation, and the various open questions around which factors determine whether or not someone will recover.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series of looking at the future of neuroscience and VR, today I feature an interview with Sook-Lei Liew. She's an assistant professor at the University of Southern California. And back at the IEEE VR conference that was happening in Los Angeles, they had an evening where we went to USC's ICT lab, the Institute for Creative Technologies. And Sook-Lei Liew was there. She's actually the director of the Neural Plasticity and Neurorehabilitation Lab there at USC. And she's got a PhD in cognitive neuroscience, and she's looking at ways of using immersive technologies to be able to help with stroke rehabilitation. And so she had worked with a number of different students and got this grant from the American Heart Association, this National Innovative Research Grant, to do a DIY BCI system. So they leveraged the OpenBCI technology, a lot of those parts, but took their own swim cap and created their own fusion, and they called it REINVENT. It's a big acronym that actually stands for the Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training. It's essentially a swim cap using the OpenBCI combined with virtual reality technologies and some IMUs to be able to track body position. And with all these things fused together, they're able to do neurorehabilitation. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Sook-Lei Liew happened on Tuesday, March 21st, 2017 at the IEEE VR conference in Los Angeles, California. So with that, let's go ahead and dive right in.

[00:01:57.622] Sook Lei Liew: So my name is Sook-Lei Liew. I'm an assistant professor at the University of Southern California. My background is in occupational therapy, as a clinician helping people recover after stroke, and also as a neuroscientist. So my PhD was mostly in cognitive neuroscience, using brain imaging to study how we understand other people. So we are using VR in my lab. I'm the director of the Neural Plasticity and Neurorehabilitation Lab. We are using virtual reality as a way to work with people who have essentially a damaged body after a stroke, so after a stroke you sometimes can't use one of your arms, or you have paralysis. So in virtual reality, we can give people an intact body, essentially. And really cool work from Mel Slater and Mavi Sanchez-Vives has shown that you can embody basically any avatar that you're given in VR. So we're trying to use that same principle to help people recover after a stroke.

[00:02:48.383] Kent Bye: Yeah, and I've done a number of different interviews about this concept of neuro-rehabilitation and neuroplasticity. And so what I think is really interesting about this specific configuration and setup that you're using here is it's using a lot of commercial off-the-shelf technologies and trying to be a really low-cost EEG BCI system that you could use within a virtual reality headset. People listening to this could potentially buy all the parts and create it themselves. So tell me a bit about what you're able to create here.

[00:03:14.947] Sook Lei Liew: Yes, sure. So we actually have done a lot of work with brain-computer interfaces that were much higher cost. So we used fMRI or high-cost EEG systems. And what we found was that for stroke patients, it's really hard for them to come to the lab to do a study or to take part in the therapy. So we decided with this device to go to the opposite end of the spectrum and see how low we could go in terms of signal quality to keep the cost as low as possible and be able to send a device to people. We got a grant from the American Heart Association. It's called the National Innovative Research Grant, and it's for two years to develop new technology that you think could be helpful. So we've developed what we call REINVENT. It is a brain-computer interface that has 16 channels. We use an OpenBCI system, so you can buy it online. They were funded by a Kickstarter. And you basically build it yourself. They have all their designs and all the information about how to build it on their GitHub. You can download it. You can 3D print the parts. So the casings for the electrodes you 3D print. We print other casings. You buy wires and strip them and assemble them yourself. It's kind of like a do-it-yourself BCI, or like an advanced science fair project. So what we did here to keep the cost down, we bought the 16-channel OpenBCI board. We used 12 of the channels on the head for EEG with dry electrodes, and then we used four of the channels on the arm that we put on the muscles for the wrist flexors and extensors. And then we took a $9 swim cap off Amazon. It's just a neoprene swim cap. We bought them in all different sizes because they were only $9. We put some Velcro on them and poked holes in them. The holes correspond to the 10-20 placement of the electrode system. And then we put the electrodes into the places that we were interested in. So in our case, we really centered them over the motor cortex and over the prefrontal cortex.
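For readers who want to experiment with a similar channel split, here is a minimal Python sketch using the open-source BrainFlow library to stream from a 16-channel OpenBCI Cyton + Daisy board and separate 12 scalp (EEG) rows from 4 arm (EMG) rows. This is not REINVENT's actual acquisition code; the serial port and the assumption that the first 12 wired channels go to the scalp are placeholders.

```python
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"  # placeholder: your OpenBCI dongle's port

board_id = BoardIds.CYTON_DAISY_BOARD.value  # 16-channel Cyton + Daisy
board = BoardShim(board_id, params)

board.prepare_session()
board.start_stream()
time.sleep(2)                         # let a couple of seconds of data accumulate
data = board.get_board_data()         # 2-D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

exg_rows = BoardShim.get_exg_channels(board_id)  # the 16 biosignal rows within `data`
eeg = data[exg_rows[:12]]   # assumed: first 12 channels wired to the scalp (dry electrodes)
emg = data[exg_rows[12:]]   # assumed: last 4 channels on the wrist flexors/extensors
print(eeg.shape, emg.shape)
```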

[00:05:03.433] Kent Bye: Great, yeah. So I bought the OpenBCI Kickstarter back when it first launched, and they sent me a kit with basically a bunch of wires and a chip. But the 3D printing, I think, is probably the biggest roadblock for someone to do this. I mean, that's also quite an investment to get a printer. Or are there places that people can send a file in and have it printed and sent to them? Or is this something where you'd expect to get to the point where you could actually sell a kit, so people could have everything they need to be able to assemble it?

[00:05:31.612] Sook Lei Liew: Yeah, that's a great question. So OpenBCI also puts plans on their GitHub to 3D print the head cap, essentially. That was our second iteration of this prototype, and we didn't love it. Sorry, OpenBCI, we love you. But we didn't really love the cap because it was a little bit bulky. And we had another cap in the lab that was from a $10,000 EEG system, and that was more like a swim cap, like a neoprene cap. So we liked that idea, and we just bought that. We did 3D print the casings that hold the electrodes into the swim cap, but if you send us an email, we'll print them for you and send them to you. I mean, these parts are so small and they use so little material, and the 3D printer we bought was $2,200. So it's probably not something that just anybody wants to go out and buy, but if you're investing in trying to make a lot of EEG systems, it's not a huge expense. Yeah, it's probably cheaper now.

[00:06:22.715] Kent Bye: Yeah, and also there might be maker collectives around in your town, wherever you're living. So another question is, what kind of things do you have to put on the electrode when you put it on your head? Do you have to put saline solution? Or how do you ensure that you actually have a good connection?

[00:06:37.278] Sook Lei Liew: So the best way to ensure you have a good connection is to use someone with not very much hair. But that's a joke. It's only a half a joke actually. So with dry electrodes, so there's different types of electrodes you can use. You can use dry electrodes or you can use gel electrodes or saline electrodes. In this case we've chosen to use dry electrodes for our device only because for a lot of patients having gel in their hair is a real barrier to using it. They don't want to have gel in their hair all day. The signal quality from gel electrodes is a lot better. So for dry electrodes, we really have to wiggle it down to make sure that the electrode makes contact with the skin. You don't have to put anything else on it though. So we don't put saline or anything else. I mean, these swim caps have like a chin strap that holds the cap onto the head. So we do use that to make sure the connection is snug. But you could also, like with the OpenBCI system, you can use any type of electrode as long as it has a wire that you can connect to the board.
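Since a dry-electrode connection is mostly a mechanical question of contact, a simple software check can still help flag channels that are not seated well before a session. A rough sketch with illustrative thresholds, not anything REINVENT is known to use:

```python
import numpy as np

def flag_bad_contacts(eeg_uv, flat_uv=0.5, noisy_uv=150.0):
    """Rough contact check on a short EEG chunk (channels x samples, in microvolts).

    Nearly flat channels are probably not touching the scalp; wildly noisy
    channels are probably loose or picking up interference. Thresholds are
    placeholders and would need tuning for a real cap.
    """
    std = eeg_uv.std(axis=1)
    return {
        "flat": np.where(std < flat_uv)[0].tolist(),
        "noisy": np.where(std > noisy_uv)[0].tolist(),
    }
```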

[00:07:28.612] Kent Bye: Yeah, so the Experiential Technology Conference used to be called the Neurogaming Conference. And I think that's because they were trying to do brain control interfaces for real-time interaction. And it turns out that that's not actually a very compelling use case. It doesn't get a very great signal. And so it's transformed into more of a medical conference, where you're looking at some of these technologies to be used in a medical use case. So I think the other challenge for people to make this and use it is that there may be just a lack of content that you could actually use if you haven't had a stroke. And so in terms of the integration with Unity and actually creating a VR experience, what type of things did you have to do? Did you have to bootstrap it yourself? Or are there existing plugins that people could start to plug and play and create other neuroscience-driven VR apps?

[00:08:16.014] Sook Lei Liew: Also another great question. So yeah, you're right. There's not a lot of purpose in this for most people. Like as a therapist, if I had a patient that could move their arm, I would just have them move their arm and train that, because that's the best therapy for them. The only use case for this, and the only reason we went down this path, was for people specifically who didn't have control of their limbs, and to provide them a backdoor to control it. So I think what you said is completely right. In the case of neurogaming, people quickly realized that it's actually much easier to just control a video game with their hand, which their brain is very, very good at controlling, as opposed to controlling a game directly from the brain, because those signals are noisy and very complicated. For us, because there isn't a lot of movement in this area, I would say, we partnered with the USC Institute for Creative Technologies Mixed Reality Lab, but we ended up having to write all of our own software to integrate the components together. So we had a great engineer, Ryan Spicer, who wrote peripheral software to take the signals from OpenBCI. He also created some low-cost IMUs, which track the motion, and took all of those signals and piped them into Unity for our specific interface. I think if somebody wanted to get involved, though, I mean, OpenBCI has a great forum. It's a community of builders and makers, essentially. So I think at this point, they've probably already started taking these signals and putting them into Unity or other gaming environments.
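REINVENT's own integration layer (Ryan Spicer's peripheral software) isn't published here, but one common way to pipe this kind of multi-sensor stream toward Unity is Lab Streaming Layer, for which Unity client plugins exist. A minimal sketch of the publishing side, with an assumed layout of 12 EEG + 4 EMG + 8 IMU quaternion values per sample:

```python
from pylsl import StreamInfo, StreamOutlet

# Assumed layout: 12 EEG + 4 EMG + 2 IMUs x 4 quaternion components = 24 floats.
info = StreamInfo(name="reinvent_demo", type="Mixed", channel_count=24,
                  nominal_srate=125, channel_format="float32",
                  source_id="reinvent_demo_001")
outlet = StreamOutlet(info)

def push_sample(eeg, emg, imu_quats):
    """eeg: 12 floats, emg: 4 floats, imu_quats: 8 floats (two w, x, y, z quaternions)."""
    outlet.push_sample(list(eeg) + list(emg) + list(imu_quats))
```

A Unity client would then resolve the stream by name and pull samples each frame to drive the virtual arm.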

[00:09:37.903] Kent Bye: So you said that there was a 10-20 configuration. What does that mean in particular? I mean, is there kind of a template of different points on the head that you could use, depending on the context and the use case that you have?

[00:09:50.598] Sook Lei Liew: Yes, that's exactly right. So the 10-20 EEG system is essentially like a roadmap or an atlas. So if I said I put the electrode on, or we're recording from, the C3 electrode, everybody knows that that's the motor electrode over the left hemisphere, or roughly over the motor cortex. F3 is similarly the dorsolateral prefrontal cortex on the left side. So the letter and number combinations tell you a little bit about where the electrodes are, and it's a standard system that many people use in order to have consistent EEG recordings across sites.
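As a quick reference for the convention she describes: the letter encodes the region (F frontal, C central, P parietal, O occipital, T temporal), odd numbers are the left hemisphere, even numbers the right, and "z" marks the midline. A tiny lookup table covering the electrodes mentioned above:

```python
# A few 10-20 labels and the rough cortical regions beneath them.
TEN_TWENTY_REGIONS = {
    "C3": "left sensorimotor (motor) cortex",
    "C4": "right sensorimotor (motor) cortex",
    "F3": "left dorsolateral prefrontal cortex",
    "F4": "right dorsolateral prefrontal cortex",
    "Cz": "midline central (vertex)",
}

print(TEN_TWENTY_REGIONS["C3"])  # -> left sensorimotor (motor) cortex
```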

[00:10:23.242] Kent Bye: Yeah, the other thing I noticed about your setup is that you said that you had three electrodes on the arm, like an EMG sensor, but you also have what appeared to be some IMU-based 3DOF trackers, one on the wrist, one on the upper arm, and one on the hand. Is that able to give you enough 6DOF information for you to actually determine how they're moving their arm? And maybe you can just describe how all those are working together, the EMG sensors on the arm with the motion sensors and what's coming from the brain, and how that's all kind of fitting together to do neurorehabilitation.

[00:10:58.773] Sook Lei Liew: Yes. OK, so you're very astute. We have many different components. The real reason we have so many pieces, with the EEG, the EMG, and the IMUs, is because we basically want to train people for whatever functionality they have. So we wanted to make the system flexible and modular. So for instance, if somebody after a stroke had limited movement but could still move, then we would primarily use the IMUs to drive the virtual arm in the environment. If they can't move their arm physically, but they have some trace muscle activity, then that's also fantastic. We really want to train that, because that means there's a connection from the brain to the muscle. And if it's evident at the level of the muscle activity, then we would train that and give the neurofeedback of the virtual arm moving based on the muscle activity. In the case that they don't have either, then we will use the brain activity. So that's kind of why we have all three parts. The other purpose of the IMUs is to show a contingency between the person's own hand and the virtual arm that they see. So when they get into the device and we set them up, if they can't move their arm, we first move their arm for them, and the IMUs will track the arm position, and we say, this is your arm in the virtual space. So then they feel a sensorimotor mapping to that arm. And we think that that's important for feeling a sense of embodiment of the avatar that they're given.
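The modular fallback she describes (use residual movement if there is any, otherwise trace muscle activity, otherwise brain activity) boils down to a few lines of logic. A sketch with made-up thresholds standing in for whatever calibration the lab actually does:

```python
def choose_feedback_source(imu_motion, emg_rms,
                           motion_thresh=0.05, emg_thresh=2.0):
    """Pick which signal should drive the virtual arm for this patient.

    imu_motion: movement magnitude from the IMUs (arbitrary units)
    emg_rms:    RMS amplitude of the wrist flexor/extensor EMG
    Thresholds are illustrative placeholders, not clinical values.
    """
    if imu_motion > motion_thresh:  # some residual movement: train the movement itself
        return "imu"
    if emg_rms > emg_thresh:        # no overt movement, but a brain-to-muscle connection
        return "emg"
    return "eeg"                    # fall back to motor-cortex activity alone
```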

[00:12:14.353] Kent Bye: Yeah, in my interview that I did with Larry Hodges last year, he said that, you know, he's also working on neurorehabilitation with stroke victims, and that it's not a matter of the arm and muscle. There's nothing wrong with the actual parts of the body, but it's more of a brain issue that has to be retrained. So maybe you could talk about, like, when you have a stroke, what happens in the brain, and then, like, how is the VR involved with helping to retrain the brain?

[00:12:37.860] Sook Lei Liew: Yeah. So Larry's completely right. Your muscles are still fine after a stroke. It's really that you had damage to the brain. So in a healthy person, what usually happens is your motor cortex would send a signal, or generate a signal, that travels down through the corticospinal tract, goes through your spinal cord out to the muscles, and tells your muscles to move. Then your muscles move and you see that feedback. You get sensory feedback of your arm actually moving. You also get visual feedback where you see your arm move to the target position, and that sensory feedback rewards the motor command that you issued, and it says that was the right command. You can kind of see this in play in development in children, right? When they are trying to learn how to grasp or reach, it takes a lot of tries before they learn the right motor command that controls their arm in the right way to grab an object, for instance. But once it happens, it's rewarded, and they continue to do that. After a stroke, what happens is the part of the brain that sends the motor command, or the pathway from that part of the brain down to the arm, gets damaged, essentially. There's like a roadblock there. And so there's a few different approaches that we try to take. One is we can do brain-computer interfaces like what we have. The goal is to try to retrain whatever neural tissue is still intact. So let's say within a damaged region there was still like 20% of those neurons that were still usable. We would try to retrain and increase activation in those areas in order to promote recovery. We can also do the same thing using non-invasive brain stimulation, which maybe you also saw at the Experiential Tech conference, but in that case it's more of an exogenous stimulation. So we actually put stimulation on somebody's brain and make those neurons fire. For us as therapists, I think we really love to see our patients engaged and feel a sense of agency over their own recovery. And after a severe stroke, they often don't have that. So with the brain-computer interface, we're training them to control their own brain activity and try to repair it. So we try to go this route whenever we can. And the end goal, I think, is that either the brain would kind of salvage whatever neural tissue is still usable that can control movement, or it would find alternative pathways. So there are other connections from the brain down to the hand that aren't commonly used, but could become used if they needed to. So the brain is super plastic and adaptable, and we try to harness that.

[00:14:53.558] Kent Bye: And maybe you could tell me a little bit more about what exactly the virtual reality is doing in this neurorehabilitation. I've heard from some people that it could be a matter of doing an acceleration, or if they only have a small range of movement, you're amplifying it so you get a sense of progress. But for you, as you're creating this VR experience to go along with this, what exactly is the VR experience doing?

[00:15:16.688] Sook Lei Liew: Yeah, so that's a great question. That's actually one that we're setting up to test now. So maybe it's not necessary. Maybe it's just as good to show them a hand moving on a screen, and they don't need a head-mounted VR display. And in that case, we would be happy to throw it out. We basically just want to figure out what works best for the patient. But our hypothesis is that, because you can embody an avatar in VR, giving neurofeedback in VR of a limb moving would actually help to activate some of the motor networks that we already know are active when you see movement. And it helps to facilitate the brain training, essentially. So a lot of the brain-computer interfaces that we've developed in the past, I guess because of the technology that we had in the past, use things like a thermometer that increments and gets hotter, or a pong ball that you control, where the height of the ball is controlled by your brain activity. In those cases, what's interesting is when we're giving motor feedback, or feedback from motor cortex, we ask them to try to imagine moving and use the feedback to control their thoughts. But they have to actually close their eyes to block out the feedback so they can imagine a movement. And then they open their eyes to see whether it worked or not, essentially. The nice thing about VR and using a virtual arm is that that is the feedback. So imagining the movement is prompted by seeing the movement, which is the feedback. So it's a lot more intuitive, we think, and it will be a lot easier for them to control.
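One standard way a motor neurofeedback value like this is computed, whether it ends up driving a thermometer, a pong ball, or a virtual arm, is event-related desynchronization of the mu rhythm over motor cortex: when you imagine moving, 8-12 Hz power over C3 drops relative to rest. A sketch of that calculation, a common approach rather than necessarily the exact feature REINVENT extracts:

```python
import numpy as np
from scipy.signal import welch

FS = 125  # Hz; the OpenBCI Cyton + Daisy sampling rate

def mu_power(c3_window, fs=FS):
    """Mean 8-12 Hz (mu band) power in a short window of C3 samples."""
    freqs, psd = welch(c3_window, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def feedback_value(c3_window, baseline_power):
    """Map mu desynchronization (power drop versus rest) to a 0..1 value that
    could scale how far the virtual arm moves toward its target."""
    erd = (baseline_power - mu_power(c3_window)) / baseline_power
    return float(np.clip(erd, 0.0, 1.0))
```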

[00:16:37.226] Kent Bye: Interesting. Yeah, this is the first I've heard of any sort of perhaps neuroscience proof of the virtual body ownership illusion that being embodied within an avatar could actually have a direct neurological feedback loop within the context of the process of rehabbing from a stroke.

[00:16:55.700] Sook Lei Liew: So we don't have proof yet. We don't know that for sure. That's our hypothesis. So that's what we're setting up to test. Hopefully next year we can tell you if it works or not. But like I said, if it doesn't work, we're not tied to it. Our lab's main goal is just to figure out what works best for stroke patients. I think, in my opinion, the less technology, the better. You know, the simpler and more natural it is, the better for the patient. But we add the technology when we think it'll be useful. So in this case, with the VR, I mean, I think the concept makes sense. And we have evidence of each of the components. So there's neuroscientific evidence that when you watch somebody else move, you activate motor regions in the brain that are similar to the regions that you activate when you try to imagine moving or when you actually move. So I think putting that all together, we're going to test this hypothesis to see if that is also true.

[00:17:43.500] Kent Bye: Yeah, back in May of 2014, when I first started the Voices of VR podcast, I talked to James Blaha. And he's since created a company called Vivid Vision. But he used VR to be able to essentially rewire his brain and cure his lazy eye. He wasn't able to see stereoscopic 3D, and he used VR to be able to train that with these neuroplasticity principles. He was reading a lot of the research, and there was a thought that there was a critical age, but the science of neuroplasticity was showing that the brain was very malleable and you could actually change it. You could actually give it the right input and kind of retrain it. And so he took that idea, made a game, and was able to basically cure his lazy eye. So for me, this concept of neuroplasticity has been, I think, probably one of the most exciting potentials of virtual reality, because I don't even know what the ultimate potential of that is. But for you, how do you think about what we know about neuroplasticity and what it might be able to enable?

[00:18:37.253] Sook Lei Liew: Yeah, so I think that what you said is exactly right. So I think a lot of the work by Mel Slater and Mavi Sanchez-Vives has been super interesting in this area, because they do show that we adapt so quickly, not only to a virtual avatar, but to whatever kind of manipulation we're given. So in the case that you just described, somebody actually cured something that was a deficit, in some sense. And in the research that they've shown, people can actually adapt to totally weird manipulations. So if you have asymmetric arms where your right arm is way longer than your left arm, you adapt to that. And you adapt to that really fast. And also, when you're taken out of virtual reality, you still interact with the world as though you have a really long right arm until some time goes by and you remap it back to your normal arm. Then there's been decades of research all throughout neuroscience on this topic of adaptation and plasticity and how fast we can adapt to different scenarios. So I mean, I think that's part of what makes us human. And I think that virtual reality is just another tool that lets us test different plasticity mechanisms, essentially. So it's a tool that we've never had before, where you can actually embody a whole different body. And I think that that part is the most interesting to me.

[00:19:48.783] Kent Bye: Yeah, in talking to David Eagleman, he's essentially using the torso to send signals into the brain, turning the torso into an ear for people who are deaf. And so this principle of neuroplasticity, of being able to basically adapt and take inputs and put them into the brain, I just feel like there's all sorts of untapped potential of that. But what I saw a lot at the Experiential Technology Conference is that there's a certain section of people who you might call consciousness hackers, or people who are really into cognitive enhancement. So being able to really do things that our brains have never been able to do before. And so for you, what are some of the most exciting potentials where you think, kind of theoretically, you could start to apply the principles of neuroplasticity in different ways, and what might that allow someone to do? Like, if there's some sort of untapped human potential that we don't even know about.

[00:20:39.846] Sook Lei Liew: So I would say I'm a little bit skeptical about that. I think, you know, the movies where you take a pill and you unlock that last 90% or 10% of the brain that you've never used before. I don't really think that that's true, because there's been millions and millions of years of evolution in which we have evolved to be our most efficient, essentially. So in terms of cognitive enhancements or trying to increase our potential past what it is, I don't know. I kind of feel like your body and your brain have like a natural homeostatic level that they gravitate towards, and we can try to increase it or decrease it, but it will be by a few percent. I think where it's really powerful and really interesting, for me at least, is when somebody has a deficit and they need to recover and get back to a normal baseline. And that's where I think the power of plasticity is super fascinating. Like, for instance, you see all the time, you know, like on YouTube or whatever, there's videos of people who are congenital amputees or who lost limbs. Maybe they lost their arms and now they play the piano with their feet or something like that. You know, we can remap, and we're such adaptable creatures that we can learn to do anything. And so in my mind, it's those use cases where somebody was able to do something initially and then couldn't, and then the plasticity allows them to do it again. Yeah.

[00:21:55.359] Kent Bye: Well, I'm an optimist in this realm, because I like to not know, because there's a lot of things we haven't known. We don't really know. It's sort of an unknown. It's an open question.

[00:22:05.003] Sook Lei Liew: I think it's a much more exciting perspective, the one that you have.

[00:22:09.185] Kent Bye: But for you, what are some of the biggest open questions that are really driving your work and research forward?

[00:22:15.539] Sook Lei Liew: You know, I would just say that the main question that we ask ourselves every time we do a study is, will this help patients? Like, does this help patients who right now don't have any other type of treatment or any other way of getting better? Because that's the kind of area that we like to work within. There's a lot that we know already about how people recover after a brain injury. And I think that when somebody can follow those basic principles, like repetition and practice and meaningful activities as part of their recovery, all of that is fantastic. And we should just do that. But I think the area that we really want to work on is when you can't do that. And so that actually stems from my background as a clinician. As an occupational therapist, when I would go into a patient's room, I actually didn't know if they were going to recover or not, no matter what I did. And I got into research because I wanted to know for the people that don't recover, what else could we do? Or is there anything else we can do? And so I think those are interesting questions both scientifically, like from a basic science standpoint, like what is it that makes somebody recover and someone else not recover? I mean, similar to like what is it that makes somebody a super great athlete and someone else not a great athlete, you know? What are these differences in our brains that can support our ability to learn or not learn? And then how can we take those and actually make people better? So how can we manipulate the brain in what we call neuromodulation, whether it's through brain computer interfaces or brain stimulation or anything?

[00:23:32.363] Kent Bye: Awesome. And finally, what do you think is the ultimate potential of virtual reality and what it might be able to enable?

[00:23:41.603] Sook Lei Liew: I feel so bad. I'm kind of a practical thinker on this topic. I feel like I should say something really cool like teleportation or something. But I mean, I guess because we focus just so much on helping people recover, I think that that would be the best possible use case is if somebody with any type of disorder or disease could see their body remediated in virtual reality and then actually have some plasticity and remediation. I don't know how possible that is. So I don't know. But I think that would be pretty exciting to me. Yeah.

[00:24:15.936] Kent Bye: Awesome. Yeah, I often say I think the killer app of VR is neuroplasticity. So I'm right there with you. So awesome. Well, thank you so much for joining me today.

[00:24:23.742] Sook Lei Liew: Thank you so much for having me.

[00:24:25.517] Kent Bye: So that was Sook-Lei Liew. She's an assistant professor at the University of Southern California. She has a background in occupational therapy, working with stroke rehabilitation. She's got a PhD in cognitive neuroscience, and she's the director of the Neural Plasticity and Neurorehabilitation Lab that uses virtual reality. So I have a number of different takeaways from this interview. First of all, just looking at the different systems I've talked to, both MindMaze and Neurable have EEG built into the headset, like in the bands of the headset. It's an all-in-one system. This approach is a little bit different. It's a little bit more DIY. They're creating a much more robust sensor, which has 16 channels. With OpenBCI, I did an interview with Conor Russomanno back in episode 365, back in 2016. They had a very specific 3D-printed way of putting it onto your head, and that does not work at all with a virtual reality headset. And so they had to look at other alternatives to be able to get these sensors onto your head in this specific arrangement called the 10-20 system of EEG. You can look it up on Wikipedia, but it basically has like 21 sensors that are hitting different parts of the brain: prefrontal, frontal, temporal, parietal, occipital, and central. And they have these different sensors at very specific placements. It looks like the 10-20 system uses up to 21 different locations, and it looks like the REINVENT system they built was using OpenBCI with 16 channels. So they created a swim cap that was trying to replicate all those placements, and it has a nice snug fit. The OpenBCI sensors are dry sensors, so they don't need any gel, but they need to have a strong connection. And then on top of that you put on a VR headset, and then you have all these other ways to be able to track the way that you move your hands around. In some ways you could look at existing six-degree-of-freedom controllers, the hand-tracked controllers from either Oculus Rift or HTC Vive, but they were doing their own custom-rolled IMUs that were being fused together for their approach with REINVENT, which again is an acronym that stands for the Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training. So the basic idea is that the brain is plastic, and you have an injury, a stroke. There's nothing wrong with your actual physical body. It's just that some of the pathways in your brain get disrupted, and then you need to retrain and find new pathways so that the motor commands can get sent through your body, through your spinal cord, and have this feedback loop. And so basically they're trying to retrain and rehabilitate those pathways, and to find different ways to either give you visual feedback or to stimulate the brain from an external perspective in order to actually catalyze and get movement. But essentially they're doing this fusion of virtual reality and neurorehabilitation. And so Sook-Lei Liew cited some research from Mel Slater and Maria Sanchez-Vives, basically saying that when you're in virtual reality, you can start to adapt to all sorts of different weird and wacky configurations. If you have a longer arm, then your brain does this neural mapping of your motor cortex onto what your virtual representation of your body is in these virtual environments.
I don't know if that requires the invocation of the virtual body ownership illusion, but I suspect it does to some degree. Part of what Mel Slater found is that in order to invoke the virtual body ownership illusion, you have to have at least some correlation: as you're moving your hands and your limbs and your body around, you see some correlation. And at some point, if you have this mapping where, as you move your body around, what you're seeing correlates with what your body is sensing and the visual feedback you're getting, then you start to invoke the illusion and identify with these virtual avatars. And once you do that, then potentially you could start to expand and adapt to all sorts of really weird and strange embodiments within virtual reality. Our neuroplasticity and our ability to adapt is something that this research by Slater and Sanchez-Vives was able to kind of prove out, that we're actually extremely flexible and plastic. So they were starting to experiment with different hypotheses. If you give just visual feedback in virtual reality, what impact does that have on your brain? Does that actually help you rehabilitate faster? That's some of the stuff that they were testing, as well as the different aspects of invoking different levels of virtual embodiment. As you're in these virtual reality environments, what does it mean to start to invoke the virtual body ownership illusion? Does that help accelerate or rehabilitate your abilities after you've had a stroke and you've had some sort of impairment in your movement? What are the bounds within which you can start to rewire your brain and find ways to rehabilitate yourself? They're kind of playing with different variables there. So it sounds like they were doing a number of different investigations, but this is an area of active research. As I go to both IEEE VR and talk to these different neuroscientists, they're talking about MindMaze and Neurable. This seems to be at the forefront of a lot of the early wins when it comes to virtual reality, this type of neurorehabilitation. So just the final thoughts are, you know, Sook-Lei Liew is really interested in the pragmatic aspects of how these virtual reality technologies can help people recover and heal. And, you know, some of the open questions are: what are the variables as to when people will be helped by this, and when will they not? There's a lot that they still have to learn about all the different variables of what it takes to recover from these different strokes or potentially other traumatic brain injuries, anything that's kind of impairing movement. So what happens when you can't do the normal path of recovery? What else can you do? And what makes some people recover and other people not? And what are the differences in our brains that are going to allow us to respond to this type of neuromodulation versus not respond to it? So that's all that I have for today. And I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. Just $5 a month is a great amount to give and allows me to continue to bring you this coverage.
And like I said, if you donate and become a member, you're helping keep this podcast free for you and free for everybody else in the community, not only now, but also in the future. So you can become a member and donate today at patreon.com slash voices of VR. Thanks for listening.
