#807: Neuroscience & VR: Automatic Labelling of EEG Data with Eye Tracking & Motion Data

Craig Chapman is a movement neuroscientist at the University of Alberta, and he's received an EEG in the Wild grant from the Canadian Institute for Advanced Research in order to study the combination of movement tracked by VR technologies, eye gaze, and EEG data. Chapman is collaborating with fellow CIFAR scholars Alona Fyshe and Joel Zylberberg to use the motion and eye tracking data from immersive technologies to automatically label the EEG data, which will then be used to train machine learning models to see if they can start to detect neural signatures that can predict future behavior. I had a chance to talk with Chapman at the Canadian Institute for Advanced Research workshop on the Future of Neuroscience & VR that took place May 22 and 23, 2019. We talked about his research, embodied cognition, and how virtual reality is revolutionizing his neuroscience research by making it possible to correlate the trifecta of movement data, eye gaze, and EEG data.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to The Voices of VR Podcast. So the next series that I'm going to be diving into on The Voices of VR is the future of neuroscience and virtual reality. This is what I think is one of the more interesting aspects of VR: you have these different research neuroscientists who are starting to integrate it into their process of neuroscience research. So not only is VR going to learn so much about the nature of perception from neuroscientists, but VR also has the capability to create all these new insights into the nature of the mind itself, by controlling very specifically what someone is experiencing and unpacking which parts of our brain are being activated, correlating that with eye tracking and EEG data, and correlating all of that with our motion data. That's something that's typically been very difficult to do, but with VR, you can have this magical trifecta. So I'm going to be doing about 13 different interviews looking at the future of neuroscience and VR, both from experience creators, hardware designers who are trying to integrate different aspects of the hardware with EEG, from MindMaze as well as Neurable, and different neuroscientists. The Canadian Institute for Advanced Research actually had me go to the Game Developers Conference back in 2018 and help moderate a discussion between neuroscientists and the broader game development community. That's a panel discussion that I'll be including here in this series, as well as this Future of Neuroscience and VR workshop sponsored by the Canadian Institute for Advanced Research that happened back in May in New York City, where I was one of about 30 different people who came together, people from the industry and also these neuroscientists, looking at how VR could start to be applied to their research. 
And it was kind of like bringing together different people from the industry and different people from neuroscience and starting to compare notes a little bit. And so this is something that I think is very fascinating. It's a deeper trend of this interdisciplinary collaboration that's happening between neuroscientists and game designers, architects, and all these different disciplines that are starting to come together and, through the medium of virtual reality, collaborate with each other in different ways. So I'm excited to dive in to see what some of these leading researchers in neuroscience are doing, as well as some other perspectives on how people are actually implementing different neuroscience and biometric feedback into some of the experiences that they're creating. So this first interview, I'm kicking it off with Craig Chapman, who's a neuroscientist at the University of Alberta. He is trying to combine different aspects of the motion track data from VR, as well as eye tracking data, and use that to automatically label and tag EEG data, so that machine learning algorithms can use the motion that is coming from VR, as well as the behavior they're seeing, to automatically deduce what those actions are and label them, correlated to whatever is coming from the EEG. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Craig happened on Thursday, May 23rd, 2019 at the Canadian Institute for Advanced Research Future of Neuroscience and VR Workshop in New York City, New York. So with that, let's go ahead and dive right in.

[00:03:21.682] Craig Chapman: My name is Craig Chapman. I'm a neuroscientist at the University of Alberta in Edmonton, Canada. And with respect to what I'm doing in virtual reality, I'm trying to help people understand that we have access to a measurement tool when we're using virtual reality devices. They're really sensitive motion trackers for your head and hands, and the newest generation has eye tracking. These are signals that we can really use, not just to make experiences more immersive for our game users or whatever, but they also potentially have applications to things like measuring movements in a clinical population.

[00:03:55.940] Kent Bye: We're here at the Canadian Institute for Advanced Research's Future of Neuroscience and VR workshop here in New York City. Yesterday you gave a talk about how in your lab you used to need up to like $100,000 worth of equipment to be able to do what you can now do with a $2,000 Vive Pro with eye tracking. And so maybe you could talk a bit about what you're now able to do with the technology at a significantly cheaper price.

[00:04:20.551] Craig Chapman: Yeah, so just starting with what is sort of conventionally used is there's a whole host of sort of research-grade motion capture systems that are available, and they cost on the order of tens of thousands of dollars to get a system capable of full-body motion tracking. And then eye tracking, similarly, tens of thousands of dollars to get a head-mounted system that's kind of like a pair of glasses. But now the Vive Pro and other similar devices allow you to get six degrees of freedom, so position and rotation of the head and hands at about the same temporal frequency. And we're now testing how good it is spatially. And so the kinds of things we can get is how does your hand move through space? Where are you looking at every moment in time? How is your head rotating? Now, it doesn't give you all the information. It doesn't give you all of the joints. But I think there are some interesting innovations in the VR space that might allow us to not just do hands and heads, but full body motion tracking. And really understanding the moving body, I think, is going to be key to unlocking some of the richness of the data of VR.

[00:05:15.668] Kent Bye: You were talking about the trifecta that you're aiming for. What is the trifecta?

[00:05:19.313] Craig Chapman: Yeah, so I think the one modality I haven't talked about so far is that we're also trying to build in neural recordings, and currently we're doing that with electroencephalography, or EEG. And so the trifecta in my lab is the combination of motion tracking, eye tracking, and EEG, all recorded simultaneously. Right now in my lab, we don't have the ability to do that in VR, but one of the reasons I'm excited to be part of this group is I do think the next-next generation of VR headsets will include neural recordings, and there are some sort of prototype devices out there. If and when we're capable of recording neural data at the same time as we have people having these immersive experiences, what will that unlock for us as scientists and also as a sort of immersive experience community?

[00:06:02.047] Kent Bye: So that would allow you to create these different virtual environments where people were doing these different tasks. And so if you have this trifecta with EEG data, with motion track data, and these virtual worlds where they're able to essentially do these different actions, you're able to look at what's happening inside of their brain through the lens of an EEG and all these other motion track data. So what type of questions are you asking? What can you tell from all that data?

[00:06:26.101] Craig Chapman: Yeah, so that's a great question. I think our first approach has been to really leverage the advances we've seen in machine learning and use that to help understand the EEG data. But of course, if we know anything about machine learning, what machine learning needs is a big data set, and it needs that big data set to have labels. Then it tries to learn about those labels and apply them to future data sets. And the problem with EEG data up till now has been that it's difficult to know what labels to assign to the data. But here, the approach we're taking is that we can actually use the eye and hand behavior to derive the labels. So for example, let's say a label is: I'm reaching towards an object. Well, that's a label that we can extract automatically from the eye and hand data, and then assign it to the EEG data. And so we're working through a process right now where we're going to automatically extract a whole bunch of these labels, use them to label the EEG data, and then build a machine learning model to say: what can we know about your upcoming behavior? What can we predict you're going to do by only looking at the neural response? So that's one of the first questions. And I think if we prove that that works, there are many questions you could ask around deriving all sorts of different kinds of labels and using them to predict what the user is going to experience or what they're going to do next.
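A minimal sketch of the kind of automatic label extraction Chapman describes, assuming hand positions arrive as a NumPy array. The 0.2 m/s speed threshold and the toy trajectory are invented for illustration; they are not values from his lab:

```python
import numpy as np

def label_reaches(hand_pos, fs, vel_thresh=0.2):
    """Mark samples where hand speed exceeds a threshold as 'reach'.

    hand_pos: (n_samples, 3) array of hand positions in metres.
    fs: motion tracker sampling rate in Hz.
    vel_thresh: speed cutoff in m/s (an assumed example value).
    Returns a boolean array aligned sample-for-sample with the motion stream.
    """
    vel = np.gradient(hand_pos, 1.0 / fs, axis=0)   # per-axis velocity, m/s
    speed = np.linalg.norm(vel, axis=1)             # scalar hand speed
    return speed > vel_thresh

# Toy data at 90 Hz: hand at rest for 1 s, then a 0.5 m reach over 0.5 s.
fs = 90
rest = np.zeros((fs, 3))
reach = np.column_stack([np.linspace(0, 0.5, fs // 2),
                         np.zeros(fs // 2),
                         np.zeros(fs // 2)])
hand_pos = np.vstack([rest, reach])

labels = label_reaches(hand_pos, fs)
# The rest segment is labelled False; the ~1 m/s reach segment is True.
```

Because these boolean labels share the motion stream's clock, they could then be copied onto time-aligned EEG samples to produce the supervised training targets he describes.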

[00:07:37.502] Kent Bye: So it sounds like there are two layers of labeling here, where you're taking all the spatial data that you're getting from the motion capture data where people are moving, which you could either watch them doing in real time or have a recording of. And it may be difficult to look at all the numbers and know what they're doing, but when you see the spatialized depiction of that, it becomes pretty clear what the body is doing. And then you're able to, from that, maybe extract out through AI that translation from the spatial movement data into a label, and then that label gets assigned to the neurological data.

[00:08:10.273] Craig Chapman: Yeah, precisely. So it really is a two-step process, and I think the thing that we're trying to innovate on is the automation of that first step. So there's been a long history, certainly in eye-tracking, for example, where the only way to analyze eye-tracking data is to literally go frame by frame through the movie and say, on every frame, was the person looking at the coffee cup or were they looking at the fork? And you just do this frame after frame, and it's exceptionally tedious. And so, especially when you start going to this immersive three-dimensional tracking, you need, essentially, the ability to automatically distill down those labels. So yeah, you've hit the nail on the head. It really is a process by which we're using our intuitions and knowledge as scientists to look at those streams of data to develop algorithms to say this particular moment in time is a reach, this particular moment in time is a grasp, this particular moment in time is an object moving. Extract those labels and then use them to apply to the EEG and further our understanding that way.
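The frame-by-frame gaze coding Chapman describes ("was the person looking at the coffee cup or the fork?") is exactly the step that 3D tracking makes automatable: with gaze direction vectors and known object positions, each frame can be labelled by angular proximity. A hedged sketch; the object layout and the 5-degree tolerance are made-up example values:

```python
import numpy as np

def label_gaze(gaze_dirs, eye_pos, objects, max_angle_deg=5.0):
    """For each frame, name the object the gaze ray points at, or None.

    gaze_dirs: (n, 3) unit gaze direction vectors.
    eye_pos: (3,) eye position.
    objects: dict of name -> (3,) world position.
    max_angle_deg: angular tolerance (an assumed example value).
    """
    labels = []
    for g in gaze_dirs:
        best, best_angle = None, max_angle_deg
        for name, pos in objects.items():
            to_obj = pos - eye_pos
            to_obj = to_obj / np.linalg.norm(to_obj)
            angle = np.degrees(np.arccos(np.clip(g @ to_obj, -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = name, angle
        labels.append(best)
    return labels

# Two objects on a table in front of the eye, and two gaze samples.
objects = {"coffee_cup": np.array([0.0, 0.0, 1.0]),
           "fork": np.array([0.5, 0.0, 1.0])}
eye = np.zeros(3)
gaze = np.array([[0.0, 0.0, 1.0],        # straight at the cup
                 [0.447, 0.0, 0.894]])   # toward the fork
print(label_gaze(gaze, eye, objects))    # ['coffee_cup', 'fork']
```

Run per frame, this replaces the tedious manual coding he mentions with an automatic stream of object-level labels.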

[00:09:07.375] Kent Bye: Well, are you looking at different aspects of volition or free will or things that like you're trying to determine, like the chain of events as to when people are doing these tasks, like what's happening both throughout their body, but looking at the EEG data, you're kind of getting a mapping of what's happening in the whole system. And so what type of things do you have to kind of sort through there?

[00:09:25.381] Craig Chapman: That's actually an excellent question. So I think right now we're working with relatively prescribed tasks. So by that we mean we've given a person a particular sequence that we want them to follow because it makes it easier for us to extract these labels, right? If we know you're going to move to A and not B, then we can look only for movements towards A. But I do have some other work showing that if we record people's movements while they're making decisions versus not making decisions, so either being told where to go versus deciding where to go, that really does manifest in their movement behaviors. And so presumably that would also be manifesting in some signature in their brain. And so, yeah, I think that this is the exact thing we want to do once we've proven that this technique can work. Then we want to take some of the restrictions off the task and see how much of this predictive modeling will hold when you're not being told what to do, but instead you're having the opportunity to freely explore your environment. Can we still predict that you're going to go to the left or right or near or far or reach for the coffee cup and not the fork? Are those things that we can now extract when you're not being told what to do, but rather you're just choosing where to go?

[00:10:29.472] Kent Bye: I think it was in your presentation where you were challenging a big neuroscience concept of the intentional attachment behavior or something like that. Maybe you could talk a bit about it. Was that your talk?

[00:10:39.461] Craig Chapman: No, it was someone else. You're talking about intentional binding. Yeah, the intentional binding, yeah. Yeah, that was Anil.

[00:10:44.086] Kent Bye: Oh, OK. Yeah, do you look at the intentional binding at all? Or maybe you could describe what that is.

[00:10:48.649] Craig Chapman: Yeah, sure. So intentional binding is this phenomenon where I feel like I'm the one who caused an event. Let's take the simplest event of turning on a light switch or pushing a button to turn a light on. If I push the button and the light turns on and I feel like I caused it, when you ask me to judge how close in time those two events were, so the button press and the light turning on, I'll literally suck them in closer in time. So if I feel like I'm the cause, then I'll rate those two things as occurring closer in time. But if you do something to make me feel like I'm not the cause, and there are lots of different ways you can do this, one is the button presses on its own and I didn't move and all of a sudden the light turns on. Even though the timing between those two events might be identical to the other condition, I'll judge it as being further apart. So there's this way in which we bind events to our own intentional acts and, in time, suck those events close to us. And so Anil showed yesterday this very interesting effect in VR where it's maybe actually not the intention to move, but just the moving that matters. So he showed in VR, if you see a representation of a hand that's in the same location as your virtual hand or your real hand, depending on how you do it, it's enough to see that hand move and push a button, and see it from a perspective where it could be your hand, to bring those things close to you. And the key was that the people weren't actually moving. So there was nothing about the movement. It was all of the audio-visual experience of what a movement should look like that was enough to have that event be sucked close to you.

[00:12:16.291] Kent Bye: I think he was also having haptic feedback as well. So you don't move your hand and you get a haptic press on your finger so you feel in your body that you pressed it. So you see a visual representation of your hand moving, your hand's not moving, and then it gives a haptic feedback. So I think the haptic feedback also is a huge part as well.

[00:12:32.440] Craig Chapman: Yeah, absolutely. So yeah, you remembered it better than me. So there's this event that happens on your fingertip, like a buzz when the button gets pressed, regardless of whether or not you moved to press it. So that's a key insight, actually, that there needs to be some registration that that event happened on my body, possibly, for the intentional binding effect to occur. So that's actually something we're really excited to do as sort of next steps in VR: how can we get haptic events into VR in a meaningful way? And one of the tools we're going to try to use is that, if you use conventional motion capture to track the position of real objects, you can now put those real objects visually in VR, so that when a person reaches in virtual reality, there really is an object there in the real world. And so you can sort of go virtual reality in reverse and use the real world to provide haptic feedback for your virtual experiences. And those are the kinds of questions we're going to ask. Is a buzz enough? Or what does the extra experience of actually grabbing an object get you on top of that buzz, and how does that change not just your movement behavior but also your eye movement behavior and where you're looking?

[00:13:36.298] Kent Bye: I think they call it passive haptic feedback within VR, which is essentially mixed reality, where you have real objects that are representing the virtual ones. So yesterday, you were also talking about movement as thinking and this whole concept of embodied cognition. But what specifically is your perspective on why there's this connection between the motor system and our cognition or our thinking, why movement matters to that at all?

[00:13:59.437] Craig Chapman: Right. So if you were to take a basic psychology course or open up a textbook, you'd see a very linear, serial model of how we think the brain works. It's an information processing device. It gets input, it does some processing, and it then produces an output. And the output part is the movement system. And for decades now in sort of conventional psychological research, we just treated that as the output, and the interesting stuff has finished before the output gets generated. But actually, when you open up the brain, or when you start studying the brain, you quickly realize that there are no nice boxes and arrows in the brain. Everything is almost literally connected to everything else. And so you're dealing with a massively parallel and massively recursive system, and as a starting point, it doesn't make sense to draw these delineations. And sure enough, when you start doing neural recordings leading up to movements, we see activity that is perfectly predictive of the kind of movement you're going to make before it actually occurs. And you get evidence in your actual physical overt behavior that you're not necessarily committed. So one of my favorite behaviors to try and understand is a change of mind. This is when you have two choice options in the world in front of you. You know, you're reaching into your fridge for a juice or milk, and you start reaching towards the milk, but halfway towards the milk, you change your mind and go to the juice. There's no linear, serial model that could predict that, right? The linear, serial model says you would have selected the milk, and you would always complete that action before you do something else. Whereas the change of mind clearly shows I was representing more options in my head, and they were sort of in contention the whole time, and then I was able to switch. 
And then I think maybe the last thing I would say is that we take a too human-centric view of what the brain is doing, and if we look at the evolution of the brain, really the brain evolved to be a moving machine. It evolved to make the most efficient actions that it could to maximize whatever the benefit was going to be, either reproductive benefit or acquisition of food resources or whatever. Those things required that we process our world not in terms of how it made us feel and what we want to do, but rather how I can most effectively, most efficiently move to acquire the resource that I need right now. And those evolutionary pressures are still the most dominant way of understanding how our brains got shaped. So I think if we want to understand a higher-order cognitive process like mathematics, our ability to do mathematics, it has to be derived from a brain that fundamentally evolved to produce efficient movements. And I think there are actually really deep and fundamental insights we can take away from that. So that's what I mean by moving is thinking: we're designed to be moving machines. We can think incredible thoughts, but we need to understand the relationship between the two. And in doing so, we can actually see how the moving body is reflecting things that are going on inside my head at all times.

[00:16:49.315] Kent Bye: Yeah, so this embodied cognition is that we're using our entire body to think, because we're moving machines. But it also feels like we're moving away from strict linear causal modeling of the brain and body, and looking at it more as these cyclical processes that are constantly iterating. Do you have any sense of whether there's a sampling frequency of the body and the brain? Are things operating at these other cycles, these different loops in which even perception is being created and sort of synthesized into a moment of qualia?

[00:17:18.065] Craig Chapman: Yeah, that's a fantastic question. So actually, you've hit on the other pillar of my research, which I didn't talk about at all today, which is that we look at neural oscillations as a signature of this kind of processing. And particularly, we look at neural oscillations of two types. One is a 10 hertz rhythm that's mostly known as an alpha rhythm, and it seems to do precisely what you just said. It seems to be a sensory sampling rhythm that, 10 times a second, is acquiring new information from the world. There's a huge debate as to why that exists, but one of the cool theories is that possibly this is what's required to both sustain the sampling of a thing that's steady in the environment during the peaks, when you're ready to grab that information, and also to be engaged by new novel stimuli, so you're in a state where it's easier to pull you away or be perturbed in the troughs. And so this cyclical nature is actually fundamental to being able to both see stuff and also be attracted towards new novel things. And you can see perceptual effects of this. If I just measure your neural recordings and present really brief flashes of light, you're more likely to see a flash when your brain's in a peak and less likely to see it when it's in a trough. So that's one sensory sampling rhythm. But then we also think that as that information flows through the brain, it actually gets shared broadly in the brain at a different, slower rhythm in the delta or theta band, at about two to four hertz. And this is where we think these sensory packages get repackaged for distribution and sort of broadcast to the other areas of the brain. And it's a cool property of neural networks that when they come into phase, maybe that's when they're more likely to be sharing information, and when they come out of phase, they're not sharing information. 
So what this allows is really dynamic networks to be almost instantaneously created, held for a few hundred milliseconds while that processing is relevant, and then desynchronized as a new network forms. So I think we're really in the earliest days of this, and much of what I just said is pure speculation. But I do think that, yes, there are ways in which we are absolutely rhythmically sampling our world and then also rhythmically distributing and sharing that information in our brains.
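The perceptual effect Chapman mentions, brief flashes being seen more often at alpha peaks than at troughs, can be illustrated with a toy simulation. This is a caricature of the finding, not a model from his lab; the detection probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                              # assumed EEG sampling rate in Hz
t = np.arange(0, 2, 1 / fs)           # two seconds of samples
alpha = np.cos(2 * np.pi * 10 * t)    # idealized 10 Hz alpha rhythm

# Toy model: a brief flash is more likely to be detected near an alpha
# peak than near a trough. The 0.5 +/- 0.2 probabilities are made up
# for the example; they are not measured values.
p_detect = 0.5 + 0.2 * alpha
detected = rng.random(t.size) < p_detect

peaks = alpha > 0.7                   # samples near the peak of the cycle
troughs = alpha < -0.7                # samples near the trough
print(detected[peaks].mean(), detected[troughs].mean())
```

With the seeded generator, the detection rate on peak samples comes out reliably higher than on trough samples, mirroring the rhythmic-sampling effect he describes.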

[00:19:22.394] Kent Bye: So for you, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?

[00:19:31.503] Craig Chapman: So I have always taken at least a two-pronged, maybe a three-pronged approach to my research. There are the theoretical questions and things that I'm interested in, then there are the experimental or empirical questions, the data collection part of it, and then there's the methodological part. And I try to make contributions in all of those domains. So I'll start with the methodologies. What I'm pushing, especially at this conference and elsewhere, is that we need better ways of collecting sensory motor data from more modalities. More is better, but it's also more complicated. So, you know, we talked about the trifecta earlier, but we're also trying to add in other signals like physiological monitoring: heart rate, respiration rate, galvanic skin response, which is kind of a sweat response. All of those kinds of measures we want to work in. We want to get muscle measurements, so EMG responses from your arm and upper body when you're doing these tasks. We want to get postural responses, so force plates on the ground as you're distributing your weight and deciding whether you're going to lean in or be averse and move away. So from a methodological perspective, what is it going to take to get all of those modalities in at one time at a good sampling frequency so that we can analyze them? And we're trying to develop custom software to make that available to users who maybe don't have the resources to go out and program up all the stuff you'd need to deal with all of those data streams. So that's the methodological part. And then I would say the empirical and the theoretical overlap more with each other, and I'm really trying to continue to pursue what we can learn about what's going on inside the head by measuring signals outside of the head. 
So what can we learn about neural signatures of, for instance, decision making, when you commit to do something or when you choose a particular item in a vending machine, any kind of simple choice that we must make all the time? How much of that can I learn just by measuring where you're looking and how you're moving? And I think in order to do that, we need to have the data from both sides. So: this is what the brain does when X, this is how the body moves and where your eyes look when X. And when we can show those tight correlations, then I think maybe we don't necessarily need to be recording the brain data. We can just say, if we get those hallmarks in your behavior, we have a really good indication of what's going on in the brain. So I would say that, in general, is what I'm trying to do. And we have some specific experiments chasing that down as well.
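Getting all of those modalities in "at one time at a good sampling frequency," as Chapman puts it, is at its simplest a resampling problem: each stream arrives on its own clock and has to be interpolated onto a shared timeline. A minimal sketch, with made-up sampling rates for EEG, motion, and galvanic skin response:

```python
import numpy as np

def align_streams(streams, fs_out):
    """Resample independently-clocked data streams onto one common timeline.

    streams: dict of name -> (timestamps, values), timestamps in seconds.
    fs_out: output sampling rate in Hz (a design choice for the example).
    Returns (common_t, dict of name -> resampled values).
    """
    # Use only the window where every stream has data.
    start = max(ts[0] for ts, _ in streams.values())
    stop = min(ts[-1] for ts, _ in streams.values())
    common_t = np.arange(start, stop, 1.0 / fs_out)
    return common_t, {name: np.interp(common_t, ts, vals)
                      for name, (ts, vals) in streams.items()}

# Toy example: EEG at 250 Hz, motion at 90 Hz, GSR at 4 Hz.
eeg_t = np.arange(0, 1, 1 / 250)
mot_t = np.arange(0, 1, 1 / 90)
gsr_t = np.arange(0, 1, 1 / 4)
common_t, aligned = align_streams({
    "eeg": (eeg_t, np.sin(2 * np.pi * 10 * eeg_t)),
    "motion": (mot_t, mot_t ** 2),
    "gsr": (gsr_t, np.full(gsr_t.size, 0.8)),
}, fs_out=120)
# All three modalities now share one 120 Hz clock and equal lengths.
```

Linear interpolation is the crudest option; a real pipeline would also need clock-drift correction and event markers, but the shared-timeline idea is the same.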

[00:21:50.323] Kent Bye: Great. And finally, what do you think the ultimate potential of virtual reality is, and what might it be able to enable?

[00:21:58.698] Craig Chapman: Yeah, so I'm really excited about VR's potential not just as an entertainment device, which I think is a great application and will continue to be fascinating to a lot of people, but as a research and clinical tool. So the most exciting thing for me is: imagine if you needed to do a sensory motor diagnostic when you walked into your physician's office. Usually now they would go through some quick battery of tests, like follow this light with your eyes, check your reflexes, hold out your arms, and those are effective, but they're pretty limited. But imagine now I could put a headset on you, give you controllers, let you run through a five-minute really cool immersive experience, and while that's happening, I'm getting just a mass amount of data about how you're moving, what you're paying attention to, how you're distributing your eye movements, and possibly in the future, also how your brain is responding. There's no question in my mind that that will be an incredibly richer and more detailed measurement of your sensory motor function or impairment than is currently possible. So I really think that that's the opportunity for VR: these are low-cost sensory motor measurement devices capable of also providing really immersive experiences. So as a package, I think it just makes a ton of sense as a clinical tool.

[00:23:12.381] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?

[00:23:16.160] Craig Chapman: No, but thank you for interviewing me. It's been great to overlap with you at a few of these meetings.

[00:23:21.206] Kent Bye: Awesome. Great. Well, thank you so much. Thank you. So that was Craig Chapman. He's a neuroscientist at the University of Alberta in Edmonton, Canada. So I have a number of different takeaways about this interview. First of all, Craig's whole tagline is moving is thinking. So if you can see how somebody is moving, then you can know what they're thinking, which I think is really embracing this concept of embodied cognition. And as he's talking about it here, it's like our brains have evolved as a moving machine. Our brains are trying to tell our body how to move, but it's not a linear system. He says it's massively recursive and it's happening in parallel. So it's a constant iteration with these different sampling frequencies that are happening. Our bodies are taking this agile approach of different rhythms, where he said that there's this 10 hertz sensory sampling rhythm that is constantly taking in all this information, which then gets shared out and synthesized at, he said, probably around 2 to 4 hertz. And so there are these different neural oscillations that are happening in the brain. He suspects that once these different frequencies synchronize, that may be an opportunity for information to be exchanged within the brain. 
He said that was somewhat speculative, but in some ways EEG is fairly low resolution in the sense that you're not able to get down to the level of a neuron. I'll talk to some other neuroscientists who use this metaphor of standing outside of a stadium: you can kind of hear when there's a crowd cheering, but you don't know the individual people who are cheering or what they're saying. And so you get this from-a-distance understanding of where big activity is happening within the brain, but it's not something that is super high resolution. There are certainly other, much more invasive ways that you can get much higher resolution. But the gist is that he's going to be taking virtual reality, both with eye tracking data as well as motion data, have people do these very specific prescribed tasks, and then look at their EEG, discerning what's happening in the EEG through machine learning by extracting the labels for supervised learning from your movement and motion behaviors, as well as what you're looking at and what you're paying attention to. So all of those things combined together, being fed into your EEG data, to be able to start to label and tag the different things that are happening. And then the goal is to bring in as many of the different sensory modalities as they possibly can at a consistent sampling frequency so they can munge together all this data and then perhaps start to figure out these different correlations, starting with very prescribed tasks and then moving into something that's a little more open-ended where you have more choices. But the aim is to see if there are these different neural signatures that are sending different signals to your body to be able to move, and to be able to discern what you might be thinking based upon how you're moving. 
So Craig is pushing forward a bunch of different methodological innovations here. He's trying to aggregate and collect all of these different sensory modalities and fuse them together. And then, moving from the theoretical to the empirical, he's able to actually run these different tests, capture the data, make some predictions from a theoretical perspective, and then dig through the data and use these different machine learning algorithms to process and understand what's actually happening, something that may be way too complex for somebody to just eyeball in this massive trove of data that's coming from your body. But generally it sounds like VR is going to provide these low-cost solutions to start to track your body in specific ways. There are certainly many other ways to track your body beyond motion-tracked data, and eventually there are going to be a lot more ways of using something like Leap Motion to discern what's happening with how you're moving your body. You know, Leap Motion started with the hands, and that company actually got sold to Ultrahaptics. But that same concept of pose detection that comes from AI could start to capture the full fidelity of how we're actually moving our body. Because once you put some of these motion trackers on your body, you don't get the level of fidelity to do something like inverse kinematics and really get down to the level of how your arm is moving around, unless you have a number of different sensors. So the trajectory of where this seems to be going, though, is moving from systems that used to cost tens of thousands of dollars down to something like these commercial off-the-shelf VR systems that are a thousand to two thousand dollars now, and then integrating that with EEG in different ways.
I'll be diving in and talking to different headset manufacturers like MindMaze and Neurable, who are each working on their own integrated headsets, and attending the Experiential Technology Conference. It's very difficult to integrate the different aspects of a high-resolution EEG onto an existing VR headset, just because the VR headset occludes or disrupts the different connections that are being made. So I'll be talking to some of the different providers that are creating specific solutions that eventually, down the road, could start to be used both for medical applications and for this type of neuroscience research as well. I think there's going to be a pretty big desire for lots of neuroscientists to start to fuse all this data together, and it seems to me that Craig is on the leading edge of doing a lot of this fusion of neuroscience and virtual reality, and he's been seeing a lot of really good results so far. The last thing is that we had mentioned Anil Seth talking about intentional binding, where Anil had actually created this whole experiment where you're in virtual reality, you see your hand move in VR, but you're not actually moving your hand. And they have this haptic buzzer on your finger, and they're buzzing your finger, making it feel like you actually pushed something.
And so they're trying to stimulate this intentional binding effect, which is what happens when you feel like you're pushing a button and something happens: the time between those two events essentially collapses to the point where you just see this direct causal relationship between your action and the result. They're trying to see if they can fake this type of intentional binding effect by having your hand move in VR while it's not actually moving, but then giving you the haptic feedback at the same time as you see the hand in VR pushing a button, and seeing if they're still able to kind of trick and fool different aspects of the intentional binding effect. So I'm super excited to see where these different lines of research continue to go. I know that within the IEEE VR academic community, they've been doing all sorts of neuroscience research for years and years and years. But I think we're starting to see neuroscientists get access to this commercial off-the-shelf hardware and start to experiment with getting all this biometric data, fusing it all together, and trying to get these deeper insights about the nature of the mind and the nature of movement. And as Craig says, moving is thinking, really extrapolating these concepts of embodied cognition and seeing if they can eventually get to the point of looking at these different neural signatures and being able to predict behavior from that. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. I'm about to dive into this whole neuroscience in VR series, and if you enjoy these different series, then please do consider becoming a member of the Patreon. I just want to send out a shout-out to all my Patreon supporters, because you're a large part of what helps keep this podcast free and available for everybody.
And if you'd like to help support not only the work that I'm doing, but also make this type of journalism available for more and more people to learn from, then please do consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this type of coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.