#814: Neuroscience & VR: Using Muscles & EMG for Neural Interfaces with CTRL-labs

CTRL-labs is creating neural interfaces for robotics and immersive worlds that leverage electromyography (EMG) signals generated by muscle contractions. This gives them the ability to isolate individual motor neurons, which is opening up a whole new world of user interactions for controlling robotics and for avatar embodiment within immersive environments, and it could prove to have many applications as an assistive technology. Being able to volitionally control a single motor neuron, combined with the plasticity of our motor system, means that there could be an incredible number of other applications for this technology within the context of spatial computing, especially when combined with other input methods. The biggest downfall of this type of EMG input is that it doesn't naturally contain 6 degree-of-freedom information, which means that it would likely need to be used in conjunction with camera-based or other sensor-based systems within immersive environments, where the position of your hand in space makes a significant difference.

I had a chance to talk with neuroscientist Dan Wetmore from CTRL-labs at the Canadian Institute for Advanced Research Workshop on the Future of Neuroscience and VR about why he's so excited about the potential of EMG as an input method, the different use cases that they're seeing for CTRL-labs so far, and the embodied cognition implications of what it means to use the movement of your body as a mode of human-computer interaction.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR Podcast. So continuing on in my series looking at the future of neuroscience and VR, on today's episode I talk to Dan Wetmore. He works at CTRL-labs on partnerships. What CTRL-labs is essentially doing is looking at EMG. So EMG is electromyography. It's looking at the muscles that are activated. So they put these sensors on to be able to detect muscle contraction and to isolate down to individual neurons. You're able to wear these different bands to detect your motions and motor control, and then project that into immersive virtual and augmented reality, controlling robotics, all sorts of stuff that you can do with that. So we'll be covering EMG versus EEG and what CTRL-labs is doing on today's episode of the Voices of VR podcast. So this interview with Dan happened on Thursday, May 23rd, 2019 at the Canadian Institute for Advanced Research workshop on the future of neuroscience and VR that happened in New York City, New York. So with that, let's go ahead and dive right in.

[00:01:20.473] Dan Wetmore: Hi, I'm Dan Wetmore. I work at CTRL-labs. I focus on our partnerships, and my background is in neuroscience. CTRL-labs is a company founded out of Columbia Neuroscience, and we believe that one of the best ways to extend human interactions and partnership with machines is to enable new ways for us to control them and interact with them, and that a very nice way to do this is to use what already exists in the human abilities of our motor system. Our abilities to control our hands and our fingers to manipulate a paintbrush or a musical instrument are incredible. Can we leverage those same things to create ways to control machines? And so we've built an array of muscle sensors that you wear on your arm or your wrist, and we use machine learning based on those signals to allow you to create immersive experiences where you can see your hand in real time, as well as ways for you to control machines or even implement buttons or other interfaces.
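
To make the shape of that pipeline concrete, here's a minimal sketch of how windowed, multichannel EMG could be reduced to features and mapped to a hand pose with a learned model. This is not CTRL-labs' actual system; the channel count, window size, RMS features, and linear decoder are all illustrative assumptions.

```python
# Minimal sketch: multichannel EMG from a wrist/forearm band is
# windowed, reduced to simple features, and mapped to hand-pose
# targets with a learned model. All constants are assumptions.
import numpy as np

N_CHANNELS = 16   # assumed number of electrodes on the band
WIN = 200         # samples per window (e.g. 100 ms at 2 kHz)
N_JOINTS = 21     # assumed hand-pose representation (joint angles)

def rms_features(emg_window):
    """Root-mean-square amplitude per channel: a classic EMG feature."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))  # shape: (N_CHANNELS,)

def fit_decoder(X, Y):
    """Fit a linear map from EMG features to joint angles.
    X: (n_windows, N_CHANNELS) features; Y: (n_windows, N_JOINTS)
    poses captured with some ground-truth tracker during training."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares weights
    return W

def decode_pose(emg_window, W):
    """Predict joint angles for one window of raw EMG."""
    return rms_features(emg_window) @ W
```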

[00:02:14.032] Kent Bye: Yeah, and when I talk to neuroscientists about what they can detect, either with a device that's on your head with EEG, or with EMG, maybe you could just kind of describe the difference between EMG and EEG and the approach that you're taking at CTRL-labs.

[00:02:29.705] Dan Wetmore: Sure. So EEG is a signal generated by many, many neurons in the brain that's detected through the skull. And so this is kind of the equivalent of sitting outside of a stadium while a football game is going on, right? You can get a sense of when there's been a big score from the roar of the crowd. You don't know which team scored necessarily, and it's hard to get too much insight about what's happening inside. With EMG, we're recording from your muscles. And this is actually an interface to your nervous system, because each of your muscles is controlled by a motor neuron. And so, when a motor neuron from your spinal cord activates some muscle fibers, there's an electrical signal, and we detect that through the skin, and that's referred to as EMG. Now, what's interesting about EMG is, because the skin is a much better access point for this type of signal, in some cases, and with our system and many research-grade systems, you can get down to listening to a single fan in that stadium, essentially, down to the single neuron level, which enables new forms of control and just a more direct interface to your nervous system.
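
As a toy illustration of that stadium-to-single-fan idea, here's a sketch of pulling discrete firing events out of a surface EMG trace. Real single-motor-unit decomposition is far more sophisticated (typically blind source separation across many electrodes); the filter band, sampling rate, and thresholds below are all assumptions.

```python
# Toy sketch: isolate discrete firing events from one surface EMG
# channel. Band-pass, rectify, then pick peaks above a noise floor.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 2000  # assumed sampling rate in Hz

def detect_firing_events(emg, fs=FS):
    # Band-pass to a typical surface-EMG band (assumed 20-450 Hz).
    b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)
    rectified = np.abs(filtered)
    # Threshold relative to baseline noise; 5x std is an assumption.
    threshold = 5 * np.std(rectified[: fs // 2])
    peaks, _ = find_peaks(rectified, height=threshold,
                          distance=int(0.005 * fs))  # 5 ms refractory gap
    return peaks / fs  # firing times in seconds
```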

[00:03:33.726] Kent Bye: Yeah, because my understanding from talking to different neuroscientists is that sometimes the EMG signal is just a lot stronger. And it sounds like you're, in some ways, taking a signal that's coming from the brain, or maybe a feedback loop from the body into the brain, but it goes through the spinal cord and kind of gets distributed out to a place that has a lot more surface area, where you're able to maybe look at an entire part of your arm, see a muscle, and that muscle is kind of isolated back to a very specific signal that's coming to that muscle to activate it. Maybe you could talk about how many different muscle groups and how many different signals there are, because I imagine if you want to use this as an input control, you'd have to have a variety of either different muscles or combinations of those muscles.

[00:04:16.443] Dan Wetmore: So first of all, you're absolutely right. The muscle is a great way to access these signals and actually serves as an electrical amplifier of the neural signal, which is a benefit. But you're also touching on different ways that we can create control schemes based on EMG. So one example is to just have you use a biomimetic type of interaction. So, if you want to type, you type on a table without a keyboard. If you want to point, you point your finger. If you want to implement a button, you tap your index finger or your middle finger to your thumb. Those are biomimetic. There are also forms of interaction that are not necessarily biomimetic, and some of those go to the point of accessing the activity from single neurons, right? And so in that way, you can train yourself to volitionally control one of these thousands and thousands of motor neurons that are controlling the 14 muscles in your forearm to implement a button, for instance. And this has really interesting applications for control where you don't need to move. There are some applications where that might be necessary. And it's certainly important for certain accessibility domains. So someone with a degenerative disorder or injury may be able to drive a single neuron that we could detect and use to implement a button, but not be able to move a finger.
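
The "single neuron as a button" idea can be sketched as a simple state machine: the firing rate of one tracked motor unit triggers a press when it rises and a release when it falls, with hysteresis so the button doesn't chatter. The rate thresholds here are made up for illustration.

```python
# Sketch: one tracked motor unit's firing rate drives a virtual
# button. Hysteresis (separate press/release thresholds) prevents
# rapid toggling near the boundary. Thresholds are assumptions.
class NeuronButton:
    def __init__(self, press_hz=15.0, release_hz=8.0):
        self.press_hz = press_hz      # rate that triggers a press
        self.release_hz = release_hz  # lower rate that releases it
        self.pressed = False

    def update(self, firing_rate_hz):
        """Feed the current firing rate; returns 'press', 'release', or None."""
        if not self.pressed and firing_rate_hz >= self.press_hz:
            self.pressed = True
            return "press"
        if self.pressed and firing_rate_hz <= self.release_hz:
            self.pressed = False
            return "release"
        return None

# Example: button = NeuronButton(); button.update(20.0) -> "press"
```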

[00:05:29.370] Kent Bye: Yeah, it reminds me of things like Leap Motion, where you're able to do hand tracking, but you have issues of occlusion, where they have to do a lot of machine learning to be able to predict, and sometimes they won't know for sure. But in some ways, it sounds like the EMG would get around that occlusion issue, just because you're looking at it at the level of muscles. But in the long term, I imagine that all these things are going to be fused together, so you could maybe eventually get to the point where you do a little bit of computer vision, looking at your hands with a camera, but do some combination of that with the EMG data. But I'm just curious to hear the different trade-offs that you see of being able to do gesture control based upon computer vision and AI versus something like EMG.

[00:06:11.052] Dan Wetmore: So the systems that have been created for camera-based gesture detection are really quite good, right? Leap Motion is good. There are a lot of academic groups that focus on this area. And the reality is they're fantastic, but they're limited, like you said, with occlusion, and you certainly can't get force. So that's one of the advantages of EMG. We can detect whether you're making a fist lightly or co-contracting your muscles to make a strong fist, and use that as a control regime. But the reality is, fusing these together in some contexts could be really powerful. One of the advantages of our system is that it can walk with you down the street, whereas a camera-based system might not be able to, or it might be limited to a particular field of view. That's certainly one advantage. At the same time, I think in a long-term vision there's a way to integrate these sensors together. We certainly can't, with our current system, provide six degrees of freedom absolute position, right? So for certain virtual reality applications, understanding where your hand is in three-dimensional space is as important as whether you are preparing to grip an object or throw an object or something like that, which we could detect through the EMG.
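
Here's a rough sketch of the fusion Dan is describing: a camera-based tracker supplies where the hand is in space, while EMG supplies what the hand is doing (such as grip force), which cameras can't see and which survives occlusion. The data structures and blending policy below are hypothetical.

```python
# Sketch: fuse camera-derived 6DOF pose with EMG-derived grip force.
# When the camera loses the hand, hold the last known pose but keep
# streaming live EMG state. All fields and policy are assumptions.
from dataclasses import dataclass

@dataclass
class HandState:
    position: tuple      # (x, y, z) from the camera tracker
    orientation: tuple   # quaternion (w, x, y, z) from the tracker
    grip_force: float    # 0..1 estimated from EMG amplitude
    occluded: bool       # tracker has lost sight of the hand

def fuse(camera_sample, emg_grip, last_state):
    """Prefer camera pose when visible; hold last pose when occluded.
    The EMG-derived grip force is always live, occluded or not."""
    if camera_sample is not None:
        pos, orient = camera_sample
        return HandState(pos, orient, emg_grip, occluded=False)
    return HandState(last_state.position, last_state.orientation,
                     emg_grip, occluded=True)
```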

[00:07:18.993] Kent Bye: Yeah, one of the things that comes to mind is Douglas Engelbart's Mother of All Demos back in 1968, where he was demonstrating the computer mouse for the first time publicly in a big way, but also things like teleconferencing. But he also was using a chorded keyboard, which is a little bit like what stenographers use in order to type really fast. But I've also seen Steve Mann translate that chorded keyboard into a handheld device where he could type with one hand using different key combinations. And so it was less about typing individual keys like on a keyboard, and more like a musical instrument, being able to do text input with your hand. And so I imagine that with these types of devices from CTRL-labs, you'd be able to maybe create those different types of interfaces. Like if you only have one EMG sensor on your arm, but you need to have the full complexity of input, you might be able to do something like playing chords on a guitar, in a chorded keyboard type of fashion. Is that the type of thing that you're also looking at, in terms of the different types of gestural controls you might be able to combine together to interface with machines?

[00:08:23.526] Dan Wetmore: Those are great questions and great areas for future development. So the chorded keyboard, I mean, look, we should probably all trash the QWERTYs and go chorded. It would certainly be an avenue that fits well with what we're building at CTRL-labs. At the same time, I don't think it's clear yet what this next generation of interactions will be, or what the best practices there are. That's an area that we work on, you know, internally and with our partners and developers, and we're making a lot of progress. But could it be your hand in your pocket? Could it be your hand on a dumb object? Could it be your hand on an object that has some, like a controller that has some buttons, but you add new functionality while you hold it? There's a lot of different regimes where, if you combine control based on your motor activity with either a dumb or a smart object, it can really enable new forms of interaction.
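
For a sense of how chorded input could sit on top of EMG-detected finger activity, here's a minimal sketch mapping sets of simultaneously active fingers to characters. The chord-to-character table is invented; real layouts like Engelbart's five-key keyset assign codes differently.

```python
# Sketch: chorded text entry. Each combination of simultaneously
# "pressed" fingers (as detected via EMG) maps to one character.
# The mapping below is made up for illustration only.
CHORDS = {
    frozenset({"index"}): "a",
    frozenset({"middle"}): "b",
    frozenset({"index", "middle"}): "c",
    frozenset({"ring"}): "d",
    frozenset({"index", "ring"}): "e",
    # ... with 5 fingers, 2**5 - 1 = 31 chords are possible
}

def decode_chord(active_fingers):
    """Map the set of currently active fingers to a character (or None)."""
    return CHORDS.get(frozenset(active_fingers))

# Example: decode_chord({"index", "middle"}) -> "c"
```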

[00:09:13.501] Kent Bye: Well, what are some of the use cases that you see for CTRL-labs' EMG, or what's the device called again?

[00:09:19.222] Dan Wetmore: So we're CTRL-labs, and we're shipping out or sampling CTRL-kit to certain developers. That's its name. We've had an incredible amount of energy and attention in VR and AR in general. People are looking for new forms of interaction and control in those regimes. Industrial applications have been an area of great interest. To the extent that there's a learning curve at all with the system, some of those applications are very interesting. On the clinical side, which is one of my areas of personal interest, there are accessibility applications that we'll be working on, ways that we can enable people who have limited motor abilities to extend those capabilities, as well as certain rehabilitation and prosthetic applications. So I think it's pretty broad across consumer and clinical.

[00:10:03.965] Kent Bye: And from a neuroscience perspective, what are the interesting insights that you get into how this works?

[00:10:11.054] Dan Wetmore: So, you know, I've been working in motor neuroscience for a long time, but there were a few things that I didn't realize and that were surprising and interesting to me when I joined CTRL-labs. One of those things is that we can volitionally control a single motor neuron in our spinal cord. This has been known for 50-plus years, which is pretty impressive, and it's something that we focus on internally. But also just the plasticity of our motor nervous system in general, our ability to learn how to use a new joystick, how to play a new instrument, right? This is a universal ability, right? And so to the extent that we're tapping into something that's shared by 7 billion, 8 billion people, that's really the mechanism by which we have the potential to have a large impact, right? And be able to be quite scalable.

[00:10:54.788] Kent Bye: And I'm wondering if you could expand a bit on extending our capabilities, whether through a prosthetic, but also potentially controlling robotics or having an exoskeleton. What are the other applications you see in terms of using our motor system and muscle control to be able to extend our agency through either robotics or prosthetics?

[00:11:14.369] Dan Wetmore: So there are a couple of areas that people, you know, our partners are working in now in robotics and prosthetics. One is control of biomimetic systems, so basically to map movements or articulations of our motor system onto a robot. But there's no reason we can't use our anatomy and motor system to control a robot that has different dynamics and kinematics and joints. Instead of controlling a five-fingered hand, why can't we control an eight-fingered hand, right? Why couldn't it be an octopus, a virtual octopus with eight legs, right? We have the capabilities to do so. Will there be a little bit of a learning curve? Maybe, but it's going to be a lot easier than having an array of buttons on a joystick, to be able to have a direct interface between motor control and output. Prosthetics, robotics, surgical robotics are definitely a focus area.
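
That non-biomimetic remapping can be sketched as nothing more than a matrix the user learns through practice: K control signals read from the motor system drive M robot joints that need not match human anatomy. The dimensions and the stand-in mapping below are assumptions.

```python
# Sketch: map K user control signals (e.g. per-finger activation
# levels from EMG) onto M robot joints that don't correspond to
# human anatomy, such as an eight-fingered gripper. The fixed mixing
# matrix is a stand-in that the user would adapt to via motor learning.
import numpy as np

K_CONTROLS = 5   # assumed control channels extracted from EMG
M_JOINTS = 8     # assumed robot effector joints ("eight fingers")

rng = np.random.default_rng(0)
MIX = rng.uniform(-1, 1, size=(M_JOINTS, K_CONTROLS))  # stand-in mapping

def robot_targets(controls):
    """Map a user control vector of shape (K,) to joint targets (M,)."""
    return np.clip(MIX @ controls, -1.0, 1.0)
```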

[00:12:10.400] Kent Bye: I know Jaron Lanier and Jeremy Bailenson have looked at studies where you could train yourself to feel like you have a tail. So if you have the visual feedback, and you're able to move your body a certain way, you could start to do some really weird things when you're in VR, and do this kind of sensory addition or sensory substitution, where you're able to extend your sensory input beyond what you're normally able to detect, especially if you're getting feedback back in through some sort of haptic device. So you're able to maybe use your motor control to control something, but if you're getting this feedback loop of both the haptic and the sensory input, then I imagine there's going to be this whole realm of sensory addition and sensory substitution that's also going to potentially open up as well.

[00:12:51.291] Dan Wetmore: Those sound like fantastic areas to push into. I think having feedback, whatever the modality, even if it's auditory feedback that makes an experience more immersive, improves one's ability to engage in that novel environment.

[00:13:06.031] Kent Bye: So for you, what are either the biggest open questions you're trying to answer or the open problems you're trying to solve?

[00:13:12.533] Dan Wetmore: So at CTRL-labs, clearly one of our challenges is to make our system amenable to easy use by everyone. And that's an area where we put a lot of attention, to make our systems and our models broadly applicable. I think, of personal interest on the clinical side, we have the potential to make a really large impact, and that will be one area where we begin to discover ways that the insights we gain from recording what happens in your motor system might relate to motor disorders, but also mood disorders, psychiatric disorders, how well you slept last night, whether your diet is quite right, and much more, because the only output of our nervous system, of our brain, is through movement.

[00:13:52.928] Kent Bye: Yeah, and there's a big focus on embodied cognition as well. So I don't know. Is that something that you looked at as a neuroscientist, to see how we don't just think with our minds and our brains, but with our entire bodies?

[00:14:04.473] Dan Wetmore: I've never really thought of it a different way, to be totally honest. Even the word mind bothers me, right? I mean, we have neurobiology and we have anatomy and we have processes, but mind is more of a psychology construct than a neurobiological one for me. And so connecting the mind and the body through movements, through our somatosensory system, all of this, I mean, that's what it means to be human.

[00:14:31.510] Kent Bye: Great. And finally, what do you think the ultimate potential of all these immersive technologies is, and what might they be able to enable?

[00:14:41.562] Dan Wetmore: So I mean, look, you and your audience are the pros on this. The sky's the limit, I think. In neuroscience and interactions, we have a long way to go to make these systems truly ubiquitous, but I think as we create ways to work in them and with them that are more and more useful and have utility, whether it's in the enterprise or in education or in surgical training, I think these are going to be the avenues that really unlock a lot of potential and where we all collectively discover what are the best practices, the best ways to build these experiences and to engage with them as end users.

[00:15:18.905] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the immersive community?

[00:15:23.129] Dan Wetmore: Thanks for your interest in CTRL-labs. I'm learning a lot from all of you. This is a new world. Keep creating amazing things. Thank you.

[00:15:30.975] Kent Bye: Awesome. Great. Well, thank you so much. Thank you. So that was Dan Wetmore. He works at CTRL-labs on partnerships, he's got a background in neuroscience, and CTRL-labs was founded out of Columbia neuroscience. And they're looking at ways to control and interact with machines using your muscles.

So I have a number of different takeaways from this interview. First of all, just looking at the differences between EEG and EMG, I love the metaphor that Dan gives, which is that EEG is kind of like you're standing outside of a stadium: you can hear the roar of the crowd, but you don't necessarily know exactly what is happening. You know, EEG is able to isolate to general regions within the brain, but it doesn't have good enough resolution to get down to the single-neuron level. However, EMG is able to do that. They're able to isolate down to single neurons firing and have a lot more precise control. I hear a lot about how, when you look at EEG, you can actually see muscles, like if you furrow your brows or have different muscles that are contracting, that'll actually show up as spikes on the EEG. So sometimes you have to filter out those EMG signals.

To me it's just interesting to see how there are going to be very specific use cases where something like CTRL-labs, where they may have these sensors on your arms, gives you very fine control. Like he said, there's this kind of biomimicry where you're tapping on an imaginary keyboard and you're just typing, but seeing the different motion of your fingers may actually be able to activate different aspects of either typing or pushing buttons. The big challenge, I think, with EMG is that you don't have the 6DOF information, so you don't really necessarily know where your hand is exactly in space. You need to do a bunch of sensor fusion, either with inside-out cameras, depth sensor technologies, or other ways of orienting where your hand actually is in space. And so that, to me, seems to be one of the biggest drawbacks. But also, the way that things are going, you're just going to be fusing all these things together anyway. So I think EMG is going to have a very specific place when it comes to all these other ways of interfacing with immersive technologies.

In this series on the future of neuroscience and VR, we've talked about brain control interfaces, and we haven't necessarily talked too much explicitly about eye tracking, but eye tracking is another big part. Then you add that to voice interactions as well as EMG, and you can start to see this picture of the future where there may be lots of different things that we're wearing on our bodies, with these different cameras, maybe EEG, being able to tap into all these different types of information and fuse them together in new and interesting ways.

So CTRL-labs is very much focused on EMG as an approach, trying to map the movements of your body onto machines, whether that's a robot or doing different actions within these virtual and augmented reality environments. Dan was saying that, you know, you could control up to eight fingers. Our brains are very plastic in how they adapt and figure out ways to do things. As long as you're able to have some sort of visual feedback, you can learn to do all sorts of different stuff. There have been lots of studies that both Jaron Lanier and Jeremy Bailenson have looked at in terms of training your body to make it feel like it has a tail, to be able to kind of expand the plasticity of all this input. It reminds me that at SIGGRAPH this year, in 2019, they actually had this whole pneumatic mechanical tail that you wear on your body. That was a whole huge production for all the robotics that went into that, but you don't even need to go that far to train yourself to have a tail. If you're in a virtual environment, as long as you have some sort of haptic feedback and you have visual feedback, then you can start to learn how to use your muscles or find ways to control different aspects of your body within these virtual environments. As long as you have that visual feedback and visual synchrony, I think your motor system is extremely plastic, and it can just kind of pick it up and learn how to control all sorts of different things. And that's a lot of what CTRL-labs is doing, looking at different medical applications, motor disorders, mood disorders.

And for Dan, he's always just been fully on board with this whole concept of embodied cognition, saying that the only output of the brain is sending signals to your body for movement. And this whole concept of the mind, from the philosophy of mind, it's still confusing as to what actually is happening to have these higher-order concepts. But at least from a cognitive neuroscience perspective, when you're looking at the output of the brain, it is getting translated into movement. And so for me, it just makes sense that, as we move forward, we're going to have all sorts of new ways of measuring our movement, and have much more of these movement-based ways of interfacing with technology through electromyography and the types of sensors that CTRL-labs is building.

I haven't had a chance to actually try out any of the demos, but I've certainly seen a range of different ways of detecting different aspects of your muscles. You know, the thing that comes to mind a lot is the chorded keyboard that I talked about a little bit here with Douglas Engelbart. And what Dan had said is that, you know, whether it's going to be your hand in your pocket, or whether you're holding what he called a dumb object or a smart object, there could be these things that we're holding that allow different ways of pressing and controlling and typing, with some sort of physical haptic feedback, like the chorded keyboard that someone like Steve Mann has been prototyping and experimenting with for a long, long time. I know there's a documentary about Steve Mann that shows him walking around the park, using a chorded keyboard as he's typing up his dissertation or emails, being able to actually compose and think about things as he's walking around and using this chorded keyboard to type. So I think it's very interesting to see where these types of user interfaces end up going, whether it's a chorded keyboard or some of these dumb or smart objects or whatever it is, just thinking about how you can start to use the movement of your body to integrate into these immersive experiences by having something like these different bands that are measuring different muscle movements.

So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast.
And if you enjoy the podcast, then please do spread the word, tell your friends, leave a post on social media, or send this to someone that you think might enjoy it. You know, the podcast really does grow from grassroots marketing and just word of mouth. It's the number one way that people hear of the podcast. Also, if you want to help support and sustain this podcast, I want to just send a quick shout out to my Patreon supporters. I wouldn't be able to do this without your support, and I very much appreciate all the support that you've given me over the years. And if you'd like to see me continue to bring you this type of coverage, to continue to document the evolution of these spatial computing mediums, then please do consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from listeners like yourself in order to continue to bring you this coverage. So you can become a member or donate today at patreon.com/voicesofvr. Thanks for listening.
