#808: Neuroscience & VR: Panel Discussion at GDC 2018 on Biosensor-Driven VR Development

The Canadian Institute for Advanced Research invited me to moderate a panel discussion at GDC titled “The Future of VR: Neuroscience and Biosensor Driven Development” featuring CIFAR scholars Craig Chapman and Alona Fyshe, as well as Jake Stauch, the founder & CEO of NeuroPlus.

Chapman is a movement neuroscientist at the University of Alberta. Fyshe was at the University of Victoria at the time but is now at the University of Alberta; her focus is on computational linguistics, machine learning, and neuroscience.

This is an ambient audio recording of the hour-long GDC panel discussion, where we start a dialogue about the latest insights from neuroscience and how game developers could start to engage with the neuroscience community to help design experiences that could assist with basic neuroscience research.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series of looking at the future of neuroscience and VR, this episode is going to be featuring a panel discussion that I helped moderate back in 2018. So the Canadian Institute for Advanced Research was looking at how these immersive technologies like virtual reality could start to be applied to very specific neuroscience research, and they wanted to open up this dialogue between neuroscience researchers and the game developers, both to share with the game developer community some neuroscience concepts about perception and about things that might be helpful for the process of game design, but also to open up this larger conversation and dialogue about how game developers could start to create these different immersive experiences that could then feed back into doing very specific neuroscience research. And so this was a sponsored session that the Canadian Institute for Advanced Research had gotten at the Game Developers Conference. And they had a couple of their scholars, Craig Chapman and Alona Fyshe, and they also invited Jake Stauch. He's the founder and CEO of NeuroPlus. And so he's somebody who's actually trying to do biosensor-driven development with the different biometric data that is radiating from your body, and he had been working on trying to fuse that into different aspects of gameplay and different medical applications. So the audio on this is not the best. I had recorded directly from the soundboard, and it was clipping. And so I had also taken a recording that was from the audience. That actually had a little bit better quality and is a little bit easier to hear, but it's still a little bit echoey. So it's not the best quality, but I feel like there's a certain amount of historical relevance to this conversation that I just wanted to get out there as well, just because there's certain aspects of this dialogue between neuroscientists and game developers that I think is going to continue to happen over many years to come. So that's what we're covering on today's episode of the Voices of VR podcast. So this panel discussion was titled "The Future of VR: Neuroscience and Biosensor-Driven Development," presented by CIFAR, and it happened on Thursday, March 22nd, 2018, at the Game Developers Conference in San Francisco, California. So with that, let's go ahead and dive right in.

[00:02:22.956] Rebecca Finlay: Good morning, everyone. Let's get started. It's great to see so many people in the room this morning. You are in the right room if you're here to hear about the future of VR, neuroscience, and biosensor-driven development. And I'm Rebecca Finlay from CIFAR, and we are very, very happy to be supporting this session this morning. This is the very first time we've been at GDC, first time I've been at GDC. And it's an extraordinary experience, really. I'm really very happy to be here. If you don't know CIFAR, we're a global research institute. We're based in Canada. And we bring together the world's leading researchers and some of the early career top talent, as well as established researchers, to drive progress in areas of global importance. We accelerate research and innovation in many different areas, including AI and machine learning, quantum computing, strong and sustainable societies, and human health. Today's panel features CIFAR's global scholars, Alona Fyshe and Craig Chapman. They are a part of our CIFAR Azrieli Program in Brain, Mind & Consciousness. This program seeks to understand the science underlying human consciousness and human cognition. They are going to be joined today by Jake and by Kent Bye, who's going to come up and get things started. They are, as you know, both experts and entrepreneurs in the realm of game development and virtual reality. It promises, we hope, to be a fascinating and thought-provoking conversation. Thank you all for being with us, and I'll turn things over to Kent.

[00:04:00.543] Kent Bye: Yeah, thank you. So, virtual reality, I think, represents this new communications medium that is putting our body into experiences in a completely new way. And I think part of that is going to be, now that you're able to get access to all this biometric data, how is that going to be actually fed into an experience, and what type of game design are you going to be creating with that? But I think today we're going to be exploring what types of inputs of biometric data are out there, what's possible, and what you can do with that. And I think what we're going to do is we're going to start with each person giving a short little presentation and then dive into a little moderated discussion, asking various questions about what's possible, about what's known and what's not known. So I'm going to have each person first introduce themselves quickly and then we'll go into the presentations. So yeah, why don't we go down the line and just have each person introduce themselves.

[00:04:49.033] Craig Chapman: Sure, so I'm Craig Chapman. I'm from the University of Alberta, and I would classify myself as a movement neuroscientist.

[00:04:55.158] Alona Fyshe: I'm Alona Fyshe. My background's in computer science, and I use machine learning to understand how we process language with our brains. I'm at the University of Victoria, but I'm moving to the University of Alberta this year.

[00:05:05.546] Jake Stauch: I'm Jake Stauch. I'm the founder and CEO of NeuroPlus. We make brain-controlled video games to help improve cognitive performance.

[00:05:12.572] Kent Bye: Great. So let's go ahead and dive in with the presentations, and then we'll do the discussion.

[00:05:16.950] Craig Chapman: Okay, awesome. Well, thank you so much, and thank you so much to CIFAR for organizing this session. I'm super excited to be here. And I want to start by taking you back to a cold January morning in 2007, and I woke up, and I couldn't move my right arm. And if this was a neuroscience conference, this might be the point at which I'd launch into a tale about how this was due to some rare neurological condition which launched my career. But I'm at a game developers conference, and the reason I couldn't move my right arm is the same reason I know it was January 2007, And that's because 24 hours before I woke up paralyzed, I was standing in the Toys R Us parking lot, clutching my ticket to buy one of these, a Nintendo Wii. I then proceeded to spend the next 24 hours trying to hit 10 home runs. And so, this was literally thousands of swings for the fences, and it's maybe no surprise that after hours of not successfully hitting 10 home runs, I woke up the next morning and couldn't move my arm. But maybe this is just an example of me being insanely competitive, so I want to fast forward a decade and introduce you to Sam. She's an undergraduate student in my lab, self-professed non-gamer. I've seen her play Rocket League, it's true. And yet, when we put Sam inside of the HTC Vive longbow simulator, where you're trying to defend your castle from these hordes of guys with a cartoon bow and arrow, we literally could not get her out. She was so engaged, she didn't want to take herself out of this immersive environment. So what's going on? Why are these games so compelling? Well, I want to argue first that it's not to do with their visual graphics. Contemporaries of both Wii Sports and Longbow look a lot better. What I want to argue is it's because of intuitive motion control. So what do I mean by that? Well, I mean that when you go to push a button, your brain programs a button press. In no way does it expect that you're going to swing your arms. Similarly, when you go to push a thumb stick to the left, in no way does your brain expect that you're going to turn your head. And yet, in both of the games I'm talking about, these types of controls were made much more intuitive. I want to use these two stargazers as a representation of some theories of how the brain processes information to get at why I think intuitive motion control is so rewarding. Let's let Calvin stand for the common view of how the brain processes information, in that it starts roughly as a blank slate, and then sensory information streams from the outside world and builds a representation inside of the head. Conversely, let's let Hobbes stand for a more contemporary view known as the predictive coding hypothesis. And the critical feature here is that you start with a template in your brain, a prediction of what you think you're trying to see. And then, you actively sample the world to see if what's out there matches what's in here. And if it doesn't, and here's the critical part, what you do is you engage your motor system to update your prediction. In this way, then, the motor system is itself reflecting the predictions that are going on inside your brain. And when the actively sampled part of the world now matches what's going inside your head, this is a very rewarding experience. And that's precisely what I think is going on in these games where we've got intuitive motor control. 
The movement I make is exactly mapped in a very biologically evolved way to the movement I expect and the consequences as such, and so the experience is very rewarding. Now, this predictive coding makes an interesting hypothesis, which is that when you're watching me move, you're actually watching me think. There are reflections in my movements of inner processes in my brain. So we tested this hypothesis by designing a really simple sort of version of the quick draw game where you're trying to beat someone to a punch. So imagine your task in watching this video is you're trying to point to the target before this person gets to the target. So you're trying to beat them to it. Now, unbeknownst to you, we've actually shown you two different kinds of videos. Videos where the person you're watching is not making a decision, she's being told where to reach, and videos where she is making a decision. She's getting to decide which of those two targets she wants to reach for. Now, when we look at how quickly you're able to respond to those videos, you're much, much faster when you're watching someone make a decision than when you're watching someone who's being told where to go. So this is evidence for the predictive coding hypothesis. There's something about the decision-making process that's reflected in her movements, and you're picking up on that and driving your own behavior. Now, we can actually show that it doesn't matter which part you see. You're able to do this whether we show you only the head or only the body, suggesting that this is an entire whole-body signal. And so I would say this is a really vitally important piece of information for those of us who are interested in developing, for example, new virtual reality agents. If we're gonna treat virtual reality agents as though they are thinking, then they have to move in these very subtle ways that will convince me to attribute thought to them. Now, this is exactly the hypothesis we've been chasing, and so it's perhaps not surprising that I think it's very important to be measuring both how the eye and the body are moving. So this is a representation of the kind of data we would collect in my lab for both eye and motion tracking. And most of us can make sort of good sense of what's going on at the bottom, because we see biological motion. But if I asked you guys to say, hey, what is this person looking at, just by looking at these squiggly eye-tracking lines, you'd have no idea. And so what we've been working hard to do is to layer on some meaning so that this becomes more interpretable. So here I'm showing you the confluence of motion tracking data with some simple rendering to now provide a whole level of meaning that you previously would not have had. So now you can see, for example, where that person is looking, as well as how they're moving and interacting with that object. And what this allows us to do that's very interesting is to automatically subdivide that data into meaningful bins. So we can say, for example, when the hand is approaching an object, or when you're looking at the next target of action. This ability to automatically label data is even more important as we move into a new and very exciting domain that I'm working on with Alona, where not only are we able to record eye and motion data, but now we're simultaneously recording neural data. So here, for the first time ever, I'm showing simultaneously recorded eye, motion, and EEG data from a person performing this task.
And I'm working together with Alona, who's a machine learning expert, because this is gonna generate massive amounts of data. And I think the right answer, the right way to label this data, is to use eye and motion tracking to assign the labels so we can make sense of what's going on in the neural data. Without those labels, it's gonna be like those squiggly lines in the eye tracker. So I wanna end by sort of making my classified ad pitch to you guys. What can we as scientists offer to you, and hopefully what can you guys as game developers offer to us? What I hope I've shown you is that we, as scientists, are in the business of collecting, analyzing, and most importantly, interpreting complex biometric data. And it's really the interpretation that's key. Without the ability to assign meaning to this neural data, it's gonna be very challenging for you guys to employ it in your games. An example of the kind of thing you might be interested in doing is being able to identify a state in one of your players, a state where they're ready to be surprised. Now we can do this by collecting massive amounts of data of both what their eye is doing and what their body is doing, to measure startle responses and maybe changes in pupillometry, and then correlate that with neural signatures of simultaneously collected data, and then use that as a new signature of an ability or a state of being ready to be surprised. In turn, I would say that there's a lot of help that we need from you guys. So as a neuroscience professor, I don't have a ton of experience generating complex virtual environments. And maybe it's easy for you guys, but it's really hard for me in my lab to create meaningful experiences that are sensitive to the kind of data that I want to collect. So I would really urge that what we need to be is sensitive to non-traditional inputs. And here I'm going to make a shameless plug for a company that my brother and sister are building that is precisely trying to be an authoring tool that's sensitive to non-traditional inputs in the location-based domain. But the plea I make to my siblings is the same plea I'm making to you now. We need to expand the horizons of what we're going to consider inputs into our games. And specifically, I want to urge you guys to consider the eye, the body, and the brain as meaningful inputs for your development enterprises. And with that, I'll turn it over to Alona, who's going to talk about how we might be able to make sense of some of this data.

[00:13:36.778] Alona Fyshe: Thanks, Craig. So I use brain imaging in order to study how people process language. So I'm interested in particular in trying to tell what word a person is reading based on the brain image we capture while they're reading. So this is a very complex task, because English, and language in general, are very complex. So there's a huge number of words in the English language. And you can imagine trying to capture brain images of all of them, including all of the sentences you can form with them, is just an impossible task. And so when I bring people into my lab, this is the sort of setup we have. They put on this huge helmet, and we have to have a special technician put it on. And so it's costly to collect, and so the kind of data we have is just very small. And so what I think we could move towards is some consumer-grade EEG being used in the home, which would allow us to have a better form factor than this, and have people collecting the kind of data we need to fully explore the English language. And so the way I use brain imaging to try to figure out how people process language is I train machine learning models on the EEG data in order to predict the word a person's reading, and then we can look inside the model at the parameters we've learned, and that helps us to understand the underlying things that the brain is doing. So I'm going to give you a quick crash course in machine learning. As computer programmers, I think it's interesting to think of machine learning as a program that's programmed not by the person, not by the programmer, but by the data. So we can think of a program as having many parameters, and a machine learning algorithm essentially takes those parameters and sets them based on data. And so at a very high level, there are two areas of machine learning. One is supervised learning, and one is unsupervised learning. And today, I'll just talk to you about supervised learning. So in general we have, in supervised learning, a data matrix X and a label vector Y, and I'll explain those more in a moment. Essentially what we're trying to do is predict the value in Y from the data that we observe in X. So if you think of this in an EEG context, we're going to observe some EEG data while a person is playing a video game, for example, and then we'd like to predict whether or not they're concentrating hard. So that would be the label in Y. And EEG data, just so that you know, is just like a time series. It's just a bunch of sensor readings over time. It's just a bunch of squiggly lines coming out of a helmet. So we would like to take those squiggly lines and be able to pick out patterns in them that allow us to tell whether somebody is concentrating or not. So essentially, at a high level, we're just learning a new function that takes as input the data X, so EEG data X, and returns some prediction of what the person is doing at a particular time. So are they concentrating or are they not concentrating? So you may have heard of algorithms that do this, like SVM, logistic regression, neural nets, deep learning. These are all sorts of supervised learning algorithms that can do this task of taking EEG data in and producing a prediction. So now I want to give you two example cases where we can use EEG data to predict something. And two things in particular I think might be of interest to the gaming industry. So the first is, can we tell, using EEG data, whether a person is concentrating or not?
So this is from a driving simulation, and on the top row, you can think of them as brain images collected while people were mostly awake, and here, mostly drowsy. And this is a representation of the helmet from above, so the nose is pointing up. And so you can see, in particular in the alpha frequency range, so waves moving at about 10 hertz, 10 cycles a second, we see lower power when people are awake, and when they're drowsy we see higher power. And so we're able to tell, using just the frequency components of EEG data, whether or not people are paying attention at a particular point in time. So we can produce as a prediction whether or not somebody is drowsy using EEG data. Another thing we can predict is whether or not somebody was sort of surprised or excited by a reward they got. So in this task, people are learning a mapping of images to keys on a keyboard, and so it's a predictive mapping: in the first third they're guessing, and eventually they learn the mapping. And so the solid line here is when they got a reward but they were expecting it. So they learned the task, they pushed a button, they knew they were going to get it right, and you can see there's basically no change in their EEG data. There's no response really. They knew they were going to get it right, it's not exciting anymore. But this dashed line, the dark dashed line on the other hand, is an example where they got a reward but they weren't expecting it. And so this is an example where they were surprised and excited by something that happened to them. And so you can imagine in a game, if you were able to sort of optimize for creating these sorts of responses, that you could have a more engaged and excited player. So these are two examples where you can see with your eyes the actual difference between the states that we're trying to predict, right? We can see when somebody is excited by a reward or expects the reward. And we can see, for example, when somebody is awake versus drowsy. And so machine learning can pick out these patterns that you can see, but it can also pick out patterns that you and I are not sensitive enough to pick up. So machine learning really allows us to pick out patterns, both those that are broad and easy to see, as well as those that are more subtle. As Craig said, I think there's opportunity for us to have some feedback here. If we are incorporating EEG into games, we could change the amount of data we're collecting. So we could hook the EEG signal up to the events happening in the games, and that would give us some data in order to train machine learning models. And then we could use that data to improve our understanding of the brain. So I think we can actually create a cycle that allows us to both improve the gaming experience as well as improve our understanding of the brain. And I wanted to point out that there's actually a Canadian company called InteraXon, who has developed an aftermarket EEG add-on to both the HTC Vive and the Samsung Gear VR. So this is already happening, there's already industry interest, and I hope that in the coming years this will become more prevalent.
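[Editor's note: to make the supervised-learning setup Fyshe sketches here a bit more concrete, below is a minimal, hypothetical Python example of the X-and-Y pipeline she describes: compute alpha-band (8-12 Hz) power for each EEG window as the feature matrix X, and train a logistic regression to predict an alert/drowsy label Y. The data is synthetic, the 256 Hz sampling rate and single channel are assumptions, and this is not code from any of the panelists' labs or products.]

```python
# Minimal sketch: alpha-band power features + logistic regression for
# alert-vs-drowsy classification, in the spirit of the X/Y setup described above.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 256  # assumed sampling rate in Hz

def alpha_power(window, fs=FS):
    """Integrate spectral power in the 8-12 Hz alpha band for one EEG window."""
    freqs, psd = welch(window, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 12)
    return np.trapz(psd[band], freqs[band])

def synth_window(drowsy, n_sec=4, fs=FS):
    """Fake a window: drowsy windows get a stronger 10 Hz component plus noise."""
    t = np.arange(n_sec * fs) / fs
    alpha_amp = 3.0 if drowsy else 1.0
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

# Build X (one alpha-power feature per window) and y (1 = drowsy, 0 = alert).
labels = np.random.randint(0, 2, size=400)
X = np.array([[alpha_power(synth_window(bool(l)))] for l in labels])
y = labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real recordings the features would be averaged over rolling windows and trials, as the panelists discuss later in the conversation.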

[00:19:11.301] Jake Stauch: Thank you so much, Craig and Alona, for that. I'm Jake, founder and CEO of NeuroPlus, and I'm here to talk a little bit about how we're making this happen, how we are putting this into practice with a consumer product that, one, incorporates this EEG brain data into game experiences, and two, generates a lot of data that people like Alona can use to refine how we think about the brain and understand more about how the brain works. So the way this works is we've developed a platform for improving cognitive performance with a hardware device that measures brain activity, body movement, and muscle tension, as well as a software platform that takes the form of games that are influenced by and respond to changes in your brain activity, as well as changes in physiological patterns of activity, your body movements, as well as your facial muscle tension. So a user would wear this headset that's measuring their EEG, measuring EMG signals, measuring accelerometer signals, and they would play a training game that's designed to improve their ability to pay attention. And so the more they focus, the more they concentrate, the more something would happen in the game. And the more they are able to be calm and relaxed, the game would change in that way as well. So to give you some tangible examples, these are some of the games we've developed. In this game, the user has to concentrate to make the dragon on the left, the green dragon, fly faster. The more they concentrate, the faster this dragon flies, and if they sit really still and they relax, then they earn extra points, but if they move around too much, or they get too tense, then they'll actually lose points, the screen will shake, they'll lose health. In another game, you're on a hover bike, an infinite runner going through a tunnel. And the more you focus, the more you concentrate, the lighter the tunnel gets. So if you lose focus, if you zone out, if you don't concentrate, then the tunnel gets dark and you can't avoid these obstacles that pop up. At the same time, if you sit really still and you're very relaxed, then you stay really steady on your hoverbike. But if you move around too much or if you're too tense, then the hoverbike loses control, becomes unstable, and you have a harder time flying straight. We're plugging these physiological patterns into the gameplay to make it, one, more engaging, but also to have an actual effect on cognition. In the third game, you're on a ship. Again, the more you focus, the more you concentrate, the faster the ship goes. If you sit still, the ship and the seas are steady. If you move around or if you're too tense, then the seas get a little rough, your ship moves around, you lose health, and you sink eventually. And so while we believe that this is a really engaging experience for people and it makes games more fun, more engaging, we also found that these have real tangible impacts on cognitive health and performance. So we conducted a randomized controlled blind clinical study with researchers at Duke and Stanford, and we had 60 children with ADHD randomized into two groups. One group received NeuroPlus training, wearing this headset and playing these games over a 10-week period, and the other group continued treatment as usual, so medications and therapy.
And what we found is at the end of the 10-week period, the group that played these training games with these biosensor-driven experiences experienced improvements in their ADHD symptoms almost four times greater than the group that continued traditional treatments for ADHD. So there's real tangible benefits here. But there are also a lot of challenges. I think, as Alona alluded to earlier, this would all be a lot better if we could have a 128-electrode array that all these kids wore, and we could have electrode gel on all the sensors, and we could abrade the surface of the scalp, and we could get a really great signal. But the fact is, we have to balance the scientific integrity with the user experience. People are not going to spend three hours putting these headsets on. It's tough enough to get them to spend 5 or 10 minutes putting these headsets on. Imagine you had an app that took 10 minutes to load. How many of you would use that on a regular basis, right? So we need to have a really seamless experience. That's why our headset has one sensor, even though it would be better if it had 5 or 10 or 15 or more. Another aspect of EEG that's challenging is the signal itself. EEG has a lot of noise, a lot of randomness in the data, and so even if you're collecting really high-fidelity EEG data, you're going to get a signal that isn't always intuitive and doesn't seem causal to the user. So imagine a game control that 80% of the time was correct, but 20% of the time was wrong. 20% of the time you wanted to go right, it went left instead. That would be really frustrating for the user, and yet 80% would be an amazing accuracy rate for EEG. So how do we create game interfaces that embrace that randomness, that are engaging and fun, but not frustrating, while still being essential for the gameplay and rewarding for the user when they get it right? So those are some of the challenges. Happy to talk more and get the panel started.
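[Editor's note: one common way to deal with the 80%-accurate control signal Stauch describes is to smooth it before it drives gameplay, so a single misread nudges rather than jerks the game. The sketch below is a hypothetical illustration of that idea in Python; it is not the NeuroPlus implementation, and the focus values, smoothing factor, and speed mapping are all made up.]

```python
# Minimal sketch: exponential smoothing of a noisy per-second "focus" estimate
# before mapping it onto a game parameter (e.g., the dragon's flight speed).
import random

class FocusSmoother:
    def __init__(self, alpha=0.15):
        self.alpha = alpha      # smoothing factor: lower = steadier, slower to react
        self.level = 0.5        # smoothed focus estimate in [0, 1]

    def update(self, raw_focus):
        """Blend a new raw classifier output (0-1) into the running estimate."""
        self.level += self.alpha * (raw_focus - self.level)
        return self.level

def dragon_speed(focus_level, base=4.0, boost=8.0):
    """Map smoothed focus onto flight speed (arbitrary game units)."""
    return base + boost * focus_level

smoother = FocusSmoother()
true_focus = 0.9  # pretend the player is genuinely concentrating
for t in range(10):
    # Simulate an ~80%-accurate reading: occasionally the signal flips low.
    raw = true_focus if random.random() < 0.8 else 0.1
    level = smoother.update(raw)
    print(f"t={t}s raw={raw:.1f} smoothed={level:.2f} speed={dragon_speed(level):.1f}")
```

The trade-off is latency: the steadier the smoothed signal, the slower the game responds to a genuine change in the player's state.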

[00:24:11.091] Kent Bye: All right. Thanks for those presentations. So I wanted to start off right off the bat, because I know that when we did this pre-planning, I said, you know, what about privacy? And since that time, the whole Cambridge Analytica thing has blown up. And so I've been tracking virtual reality for four years now, and I've gone to the Experiential Technology Conference. And talking to different neuroscientists, the thing that they are saying is that a lot of this is in the context of medical information, which is governed by HIPAA and all those regulations for maintaining data integrity. And so now what, in some sense, you're proposing with this initiative, with these game developers, is having people start to collect biometric data, which could be potentially personally identifiable. It's not personally identifiable yet, but Conor Russomanno from OpenBCI said, at some point within the next five to ten years, we may discover that EEG has a unique fingerprint that could be identifiable for people. It's something that hasn't been proved out yet, but it's something that I think, if we're starting to talk about this, we have to think about, like, hey, let's treat this as personally identifiable information. What does it mean for a bunch of game developers to start capturing and storing that information? What's the data integrity of that, and how do you actually facilitate this in a research context, which is great, versus a market context, which is a different context? So I think there's a lot of different boundaries here between a medical context, a research context, and a consumer context. And so how do you propose we bridge the gap between all of those?

[00:25:40.061] Jake Stauch: Wow, that's a tough question. So I think you're right. I think what's interesting is you have a data set that today is not personally identifiable, that we can anonymize, and that's what we do. Any of this data we share is all anonymized, and as far as we can tell, not personally identifiable. But we have a data set and a collection of data that one day, maybe five years, 10 years, or further into the future, could mean more than it does today. And I think that's a very interesting kind of dataset that we're not used to working with, where people don't really know what they're consenting to, even if they're consenting to sharing this data, because, one, a lot of times they're not going to understand what it means. I mean, we don't even understand all of what the data means, let alone a typical consumer. And two, we don't know what it can do now versus what it can do in the future. So I think these are open questions. I think what we need to move to are a set of standards on how this data is handled, so that we can share this data with researchers, so that we don't handcuff ourselves, so we can find out more about the brain and share it, but so that we're doing it in the safest way possible with the techniques that we know. And I don't have an answer to how we do that, but I think we do need to have some kind of industry standard that we accept as being the safest way to go about doing that.

[00:26:52.440] Alona Fyshe: Also, there are areas of machine learning that actually focus on privacy in data. One of the things we do is to try to prove that you could not, from a data set, tell an individual's unique identifiable signature. So one thing we could do is release not the data itself, but the average over multiple people. So for example, when we release census data, that's what we do. We don't release the income and gender for everybody in every house; we release, you know, averages over areas so that you couldn't reverse-engineer the identifiable information. So that's something that people in machine learning are already thinking about, and I think that there's room for some cross-talk there. I also wanted to point out that I think it's an area of interest to think about what it means to optimize for somebody's response. And that's already happening in platforms like Facebook. It's already being optimized for the user's pleasure, the user's engagement. And so what does that mean in the long term for their interest in an app? Is it healthy? Is it actually useful for us as a society?
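[Editor's note: as a small illustration of the aggregate-before-release idea Fyshe mentions, here is a hypothetical Python sketch that publishes only group means of per-user features, and suppresses any group too small to hide an individual. The minimum group size and the grouping scheme are invented for illustration; this is not a description of any census or lab procedure.]

```python
# Minimal sketch: release group averages of per-user EEG features instead of
# the raw per-user data, withholding undersized groups entirely.
import numpy as np

MIN_GROUP_SIZE = 10  # assumed policy threshold, not from the panel

def release_group_average(per_user_features, group_labels):
    """Return {group: mean feature vector}, suppressing groups that are too small."""
    per_user_features = np.asarray(per_user_features, dtype=float)
    group_labels = np.asarray(group_labels)
    released = {}
    for group in np.unique(group_labels):
        members = per_user_features[group_labels == group]
        if len(members) >= MIN_GROUP_SIZE:
            released[group] = members.mean(axis=0)
        # else: withhold entirely rather than risk re-identification
    return released

# Example: 25 users, 3 band-power features each, split across two regions.
rng = np.random.default_rng(0)
features = rng.normal(size=(25, 3))
regions = np.array(["west"] * 15 + ["east"] * 10)
print(release_group_average(features, regions))
```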

[00:27:47.473] Craig Chapman: I just want to jump in and say that this is an issue that's already here today. So the ability to uniquely tag someone based on EEG is interesting, but if we're going to embed eye trackers, which is already happening, and even if we're recording how people move, we all are unique in the way that we do that. So this is definitely a problem that needs to be confronted now. And I think some of the suggestions that Jake and Alona have made are the right ones. So we need to have initiatives for making these standards, and we need to borrow from tools that exist for other personally identifiable data sets, and so I think we need to confront that as we move forward.

[00:28:20.154] Kent Bye: And I think another tension is, and I was talking to Sarah Downey, who's a VC working with Neurable. And Neurable is basically putting a lot more sensors on the head, and you're able to capture information over time. And so I think there's a fundamental tension in that what you need for the machine learning is to capture data over time and to be able to make insights into that. And so we're doing the data capture and then trying to figure out what we can do with it. And so there's this back and forth. And at some point, I think, though, once we figure out those algorithms, maybe we don't need this storehouse of all this data, because then you have to have responsibility for it. So I think the thing that I've learned from the NeuroGaming Conference, or from the Experiential Technology Conference, is that the reaction times for a lot of this stuff are so long that you actually have to do some sort of rolling average over periods of time, so you have these windows of data where you're capturing all this information and then you're taking more of a rolling average. Maybe you could talk about at what point is it instantaneous, being able to figure out how to express agency with your mind, versus how much do you have to take a rolling average and see how trends are changing over time?

[00:29:25.402] Alona Fyshe: It's definitely a rolling average, as well as an average over multiple trials. So single trial, single person prediction from EEG is really, really difficult. The signal is very, very noisy. So we often have to work on a rolling average over time, but also an average over trials. So every time a person reads a particular word, we would average across all of the times they read that word, because EEG data is just inherently noisy. Not just EEG data: fMRI data, magnetoencephalography, anything you've heard of. It's just a noisy technology. I mean, you definitely know that.

[00:29:53.479] Jake Stauch: Yeah, absolutely. I mean, it's a huge challenge to use single trials, a single individual, to accurately predict anything. And I think that that's a challenge that we have in our product. And it only becomes really robust, in the graphs that Alona showed, when you have those multiple trials. So I think you're right that we could move on the research side towards really only storing the averages of those things and keeping those in the long term, since that's what the ultimate data is useful for.

[00:30:16.989] Craig Chapman: I'll just say we've sort of been referencing two different kinds of EEG data. So one is the instantaneous fluctuation in whatever the sensor is reading, which is essentially a voltage. But what you can do with that data is also decompose it into its frequency components, which Alona has alluded to a number of times. It turns out that the frequency component is more stable than the instantaneous fluctuations in the voltage. And so I'm imagining that one of the things that Jake taps into when he's talking about concentration is a particular frequency band. Now this needs to be estimated over time windows on the order of probably multiple seconds, I'm guessing. So yes, we have rolling averages that way, but then you can start to get decoding accuracies on the individual trial that are at least reasonable. But like Jake said, 80% would be amazing. But for you guys as a game development community, something that only happens with 80% consistency is probably not good enough. So this is going to be probabilistic information that can guide these types of ventures, but it's rarely going to be perfect, or never going to be perfect.
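[Editor's note: the averaging the panelists keep coming back to is easy to see in a toy example. The hypothetical Python sketch below simulates a weak evoked response buried in noise; a single trial looks like noise, but averaging the same event over many trials reveals the response. The 256 Hz rate, 300 ms peak, and noise level are invented for illustration and are not data from the panel.]

```python
# Minimal sketch: single EEG trials are dominated by noise, but trial averaging
# makes the underlying evoked response visible.
import numpy as np

FS = 256                       # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / FS)  # one second of data per trial

def evoked_response(t):
    """A toy 'surprise' response peaking around 300 ms after the event."""
    return 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(1)

def single_trial():
    # Signal-to-noise is deliberately poor, as it is in real EEG.
    return evoked_response(t) + rng.normal(scale=5.0, size=t.size)

trials = np.stack([single_trial() for _ in range(200)])
average = trials.mean(axis=0)

peak_time = t[np.argmax(average)]
print(f"a single trial is essentially noise; "
      f"the 200-trial average peaks at about {peak_time * 1000:.0f} ms")
```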

[00:31:12.848] Kent Bye: Yeah, also the NeuroGaming Conference has moved more towards medical applications, because those are a better fit for measuring people over time: if they have a neurodegenerative disease, how do they recover from that? So I think actually in the medical field we're going to see a lot of things, and so I think we're adding gamification to medicine. The thing that I noticed with some of these things that you're saying is that you actually see an improvement over time. And so with a game progression curve, seeing that people are actually making progress, how can you actually capture that progress in some sort of small incremental way? If you're trying to rehabilitate yourself, how do you make it fun? And that's what I've seen a lot of in what's happening with neuroscience and VR, this gamification of medicine. So I'm just curious to hear your thoughts on medicine. Right now we think about games where people play for fun, but sometimes people have to do things that they don't want to do, and how do you make that interesting? I think in the medical context I'm seeing that a lot as well.

[00:32:03.950] Craig Chapman: So I'll jump in here and say that, in addition to sort of the gamification of medicine, I do think that these tools that we're developing are also gonna be very important for medical purposes, because they're gonna allow us a new diagnostic sensitivity that we haven't had previously. So the very reason that it might be dangerous to uniquely identify an individual from their data is precisely the reason why we would be able to track the progression as you were encountering a disease. So for example, we're developing new ways of assessing upper limb function for people who have advanced prosthetic devices. And these devices are getting so incredible that it's going to require this level of sensitivity in order to tell whether or not they're better than they were before. So I really see that as an avenue for this. And then yes, I think you can then use this data to help the rehabilitation experience and make it more engaging and more rewarding for the user as they progress on that journey.

[00:32:53.124] Alona Fyshe: Or you can imagine if you had some sort of neurodegenerative disease that ran in your family, like Parkinson's or Alzheimer's, what if there was a game you could play on a weekly or monthly basis to pick up the changes that could happen early that you might not even notice? That could be a really amazing advance and something I'm interested in thinking more about.

[00:33:09.475] Jake Stauch: Yeah, and on the cognitive side, I think if there's one thing that we understand about the brain, it's that you get better at things that you practice. And so we know that that's true. And so if we can design experiences where people can practice those things that have historically been difficult to practice, like paying attention or like a positive state of mind, all these different things, we can design virtual experiences around that. So we can train our brain in ways that we haven't been able to before and address a lot of cognitive health challenges.

[00:33:35.833] Kent Bye: And also another trend that I see in this space is education and how neuroscience can pick up things like cognitive load and how cognitive load is connected to education and learning. I'm just curious to hear from each of your perspectives of like the different unique things that you could identify, for example, cognitive load or intention or these different signals that you're able to see from different parts of the brain if you have the higher fidelity. I think the challenge here, of course, is that you have this tradeoff between the number of sensors you have in your head versus the level of fidelity that you can get to do some of these things like intention or other things. There's things that you get commercial off the shelf that's able to do a certain amount of baseline, but then as you add stuff, you get more and more fidelity. So I'm just curious to hear that spectrum, the lowest fidelity, and then as you increase more and more sensors, you get more and more stuff. So what is at the baseline for people, commercial off the shelf, whether it's cognitive load or intention, what are those things that you can detect? And then as you add more sensors, what can you start to do then?

[00:34:33.953] Jake Stauch: I think it's a really interesting thing. It's not just the number of sensors, but it's the quality of the sensors and electronics. So we have one sensor, but we also use a sensor that costs $20, and that's not something you can do if you have 16 sensors and still have someone afford it. So I think that there is that trade-off of the quantity of sensors, the quality of the electronics underneath, and the quality of signal processing that goes on top of that. You know, one of the things you have to think about is, what is the application? If you're selling into a market that has a significant problem or significant need, then I think you have room to spend more money on a more expensive device and spend more time on setting that up. So if you have a classroom, for example, of students with severe learning disabilities, maybe it makes sense to have larger electrode arrays and a longer setup time. But I think for it to be mainstream, it would have to be much more accessible. It would have to be something people can put on very easily, because why are they going to spend that really valuable classroom time and the valuable classroom money and resources on something that costs more and takes more time to set up? So it is a challenge.

[00:35:35.057] Craig Chapman: Yeah, I would say I think it's really important as scientists that we don't oversell what we can provide. And so hopefully it was clear from the data that I was showing you: assigning meaning to little squiggly lines is an incredibly challenging and arduous process for science. So I'd love to be able to measure intentionality, or I'd love to be able to measure executive function or cognitive load. But these are concepts and labels that are going to eventually be decomposed into many, many constituent parts. And so if we don't have the right labels for the kinds of algorithms that Alona is developing, then we haven't made any sense of the data. So I think there's a ton of opportunity, and I think things like concentration, that's been a pretty well-established thing in the literature now for decades. That's one that maybe we have some handle on. But if we go to sort of that next level, is this person memorizing this task better? I mean, we'd have no idea where to start yet. You know, it's gradual, and I don't want to make it sound like we could put this in a classroom and make you a better learner. We're miles away from that, but we can start to hopefully build this feedback cycle that Alona talked about, about getting more data to develop more understanding of the labels, to then reassign those labels, and so on and so forth.

[00:36:42.804] Alona Fyshe: Yeah, and I just wanted to add that, when it comes to the sorts of things we can do with EEG data, that's really an open question, and that's the area of my research. The kinds of work I've done have been with much more accurate, sensitive equipment that's much more expensive, on the order of millions of dollars. And my research question is really, what can we do with the cheaper equipment? If I can send somebody home with a piece of equipment that's $1,000, then I can afford to buy 10 of them and send them home with 10 people and have those people read some book on their Kindle at home and collect amazing data, the kinds of data that just would not be possible in the lab, because they can collect every day for a month. And it would just be a completely different experience. But the question is, what good is that data? And that's really the research question I'm working on now.

[00:37:26.532] Kent Bye: And I think another trend in talking to different neuroscientists is this concept of embodied cognition, which is that we don't just think with our brains and our minds, but we think with our entire body. So what does it mean to start to add our body into the cognitive process? Which I think is why VR is such a compelling training platform, in terms of being able to make a choice, take action, and have your entire body involved in that experience. I'm just curious to hear your perspectives on the importance of embodied cognition and what that tells us, from a neuroscience perspective, about the nature of how people work and how they think.

[00:37:59.712] Alona Fyshe: That's sort of a left-field thing, so I'll start first, which is that language is actually represented in a very distributed way, meaning that when you read a word that has something to do with motion, like kick or even a word like hammer, you actually light up parts of your brain that are involved in movement. So embodied cognition is not just about actual physical movement. It's involved in a lot of what we do, because we represent the world through our actions.

[00:38:20.671] Craig Chapman: Yes, this is probably my favorite question. That's why I wanted to get in there first. Yeah, I know. So I've been using the tagline from my lab for the last couple of years that moving is thinking, as I showed you guys here. So it's going to be no surprise that I buy wholeheartedly into this idea. And in fact, if I give other versions of this talk, I eventually end up saying things like, the only reason we have a brain is to move. And so, I mean, these are provocative statements, but they're actually quite true. If you think about how brains evolved and what they were for, they were very specifically for moving towards good things and getting away from bad things. And so there's a way in which I think we are entirely embodied. As I showed you though, we're embodied in ways that we're not necessarily even aware of. So when you see me gesture or you see me move and you infer thinking from the way I'm acting, you're often doing it in a way that you can't actually verbally report. So if I showed you that example of the gunslinger task, we actually asked people, hey, do you know what kind of trial you're looking at? And they had no idea. So you're unable to consciously tell me what kind of trial it was and yet it made you faster. So anyways, this is just sort of an example of the way in which we're very nuanced with our movement, but also nuanced in the way we watch other people move and assign them thinking states. And I think this is incredibly important as we move into VR. Part of what makes VR right now not particularly rewarding is that the avatars we interact with don't move in a way that feels natural. And we can't even put words to it sometimes. It's just not human, right? The uncanny valley. It's right, but not right in some way. And I think it's going to be high fidelity motion tracking and eye tracking that eventually gets us to the point where we start to assign mental states or the attribution of intentionality to these avatars.

[00:39:56.460] Kent Bye: So there was one graph that you showed that showed like this unexpectedness. And so it's like this surprise and delight and this concept of wonder and awe. And I know in the realm of neuroscience, they actually have this concept of neurophenomenology, which is in order to study wonder and awe, you have to start to connect your direct experience into the neurology as well. And so you're combining someone's direct experience with the neuroscience. And so we're trying to blend the subjectivity with the objectivity in some way. So I think that is one. But I'm just curious to hear just this concept of surprise and delight. Why is our brain so connected to novelty? What can you say from the neuroscience, why that is, and from a game design perspective, what people try to do to make a game feel more satisfying, and how to crack that code of novelty and surprise and delight, wonder and awe?

[00:40:43.130] Craig Chapman: That's a huge one.

[00:40:44.292] Alona Fyshe: That's a big one. So it's not my research area, but I mean, there must be an evolutionary advantage to seeking novelty. I mean, that must be why we're driven to be surprised and to be interested when things are new and different. I mean, there must be a reason then.

[00:41:01.893] Craig Chapman: The predictive coding hypothesis actually makes it very clear why certain experiences would be encoded in the brain more vigorously than other ones. If you go down the reward learning pathway, which is another active area in artificial intelligence and machine learning: when signals deviate from what you expect them to be, that actually tells you it's a time when something should be very meaningful. So, you know, you're the little animal and you're foraging through the forest and all of a sudden a big noise appears. That was not an expected event, but it probably signals that something is out there that you should be paying attention to. So this deviation from prediction, this prediction error, ends up being the most salient signal to drive behavior. Now I think it's a big step to take that and then say it's wonder and awe, but I at least think some of the same principles apply: unexpectedness is often the most salient thing we should be aware of.

[00:41:53.060] Jake Stauch: Yeah, I mean, again, it's not my area, but in my research background, reward prediction error is often what this is called. And you can measure this on a single neuron level in the brain with recordings on dopamine neurons, dopaminergic neurons. You can see that these neurons are most active when the difference between what you expect and what you get is greatest. And so I think that that's what games can bring, and that can really stimulate these reward pathways, and that's what we're seeing.

[00:42:14.883] Craig Chapman: Yeah, and I will say that that's got to be the basis for learning, right? You learn from, I tried this, and the unexpected happened, and so I'm going to change my behavior so that I can make a prediction next time. So it's very fundamental to the way we learn, and it's encoded in pretty much every circuit we've ever looked at in the brain.
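[Editor's note: the reward prediction error the panelists describe has a very compact textbook form, a Rescorla-Wagner / TD-style update. The hypothetical Python sketch below shows how the "surprise" signal shrinks as a reward becomes predictable, matching the flat EEG response Fyshe showed for fully expected rewards. The learning rate and reward values are arbitrary.]

```python
# Minimal sketch: reward prediction error and expectation updating.
LEARNING_RATE = 0.2

def update_expectation(expected_reward, actual_reward, lr=LEARNING_RATE):
    """Return (prediction_error, new_expectation)."""
    prediction_error = actual_reward - expected_reward
    return prediction_error, expected_reward + lr * prediction_error

expected = 0.0
for trial in range(1, 9):
    reward = 1.0  # the player keeps getting the same reward
    error, expected = update_expectation(expected, reward)
    print(f"trial {trial}: prediction error = {error:.2f}, expectation = {expected:.2f}")
# The prediction error (the 'surprise' signal) decays toward zero as the
# reward becomes fully expected.
```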

[00:42:30.057] Kent Bye: Great. And the final question: I want each of you to tell me the single biggest open question that is driving your research forward, or your work forward.

[00:42:38.062] Alona Fyshe: I already said mine, which is, what can we do with this commodity hardware? Can we get the kind of signal we need to answer the more complex questions about language and meaning in the brain?

[00:42:47.711] Jake Stauch: Yeah, as I mentioned, it's how do we take these signals that are very random and ambiguous and make them, one, essential to the gameplay so that people care about them and they want to change them, but also not frustrating because they don't do exactly what you want them to do.

[00:43:00.560] Craig Chapman: Yeah, and I'd say mine is that slide where I showed you the EEG with eye tracking and motion tracking overlay. Like I said, that's something we've just done in the last month. And I think there's immense promise for using the behavior to apply labels to that data. And now it's just a playground. What can we do now that we have the ability to collect that kind of data?

[00:43:19.203] Kent Bye: Very cool. So we've got about 14 minutes left here, and so let's go ahead and move to questions. What I would recommend is maybe, as the panelists, maybe do a little popcorn style, because we have a lot of people in line already, and I imagine there may be more questions. So maybe we can sort of get through these questions and just hear as many as we can. So yeah, go ahead.

[00:43:36.450] Questioner 1: Hello, my name is Joana. I love this presentation. I work for EA. The question I have is related to the slide Alona had. Why do you think you see those peaks when people expect the reward versus when they don't have that?

[00:43:52.888] Alona Fyshe: Well, you're just not surprised when you get the reward you're expecting, right? When something happens and you have been able to predict it, then you've done some of the legwork to do the computation when you're anticipating. It's sort of like the backtracking of that anticipation. When you anticipated something and something else happened, then you have to do an additional computation to get your worldview back to the state that is consistent with what's actually happening. You can think of it in that way.

[00:44:16.258] Questioner 1: Thank you very much.

[00:44:17.867] Questioner 2: Yeah, great stuff. I don't know if this is any of your specialties, but in the education field and social science, there's these theories about the relationship between stress and learning. There's one meme that stress reduces the ability to learn. The other is that learning high academics causes stress. So can your science help elucidate which of those principles is really at play, or maybe both?

[00:44:45.228] Jake Stauch: I think that's a big goal of what our research is all about: can we, one, identify stress well with the EEG, and there's some evidence that we're starting to be able to do that, and then two, is that good or bad for learning? Because you don't learn if you're not challenged in a way, but also there's a certain cognitive blocking effect of having too much stress. So I think it's an open research question. I don't know that we've addressed it yet, but I think it's one thing we're trying to do by having, a lot of times, children wear headsets while they're doing activities, while they're learning, and over time hopefully we'll be able to collect data on whether that learning is improved or not improved when their brains are in certain states.

[00:45:20.088] Questioner 2: And you probably do have to track them at home too, because they may be stressed at home.

[00:45:23.810] Jake Stauch: Exactly.

[00:45:25.202] Questioner 3: So in regard to the consumer-grade hardware, there seems to be a side that's more data collection and what to do with it, and then a side of the interactive and how to use that data for gameplay. So I guess my question is, what's the status of that? You know, we saw some basic demos for games, but I see a lot of potential for using it as a supplementary thing to an existing story, an existing gameplay. So I guess, what's the status of getting that as a number from 0 to 1 for how hard you're concentrating, for example? Is there a way to use that currently in an existing game engine?

[00:45:58.820] Jake Stauch: So I think, yeah, that's great. So we, as well as other companies that have developed consumer EEG products, have SDKs that we're building to work with Unity and other game engines, so that we can start exposing these metrics to game developers. So they can start experimenting with interfaces that we haven't come up with, that might be a lot superior to what we can come up with internally. Gotcha. Awesome. Thank you guys.

[00:46:22.582] Questioner 4: Hi, Warren Blythe from Oregon State University's Ecampus. I have a ton of questions about how you're working with industry, but it sounds like you aren't so much, so I'm going to throw a conversation I have a lot with industry people at you, and you can tell me if you think it's legit or bullshit and we should stop talking about that stuff. We talk a lot about VR as a dream state. I think there's a lot of hype to treat VR like we're really going to another reality and we're really there, and let's build robots you can push against and let's build textures so you can really trick your brain into thinking you're there. But more and more, a guy named Jesse Schell talked about how entertainment experiences weirdly line up with REM state, and how REM state tends to be 90 minutes to 120 minutes. And I think more and more we're talking about VR as maybe better treated like a structured dream state, and we should stop trying to worry about it getting real. So now I'm looking at how you study dreams, and in theory that's where you guys are at. Curious if you're like, that's interesting, or if you're like, that's not it.

[00:47:08.102] Craig Chapman: Yes, I'd say that's a very provocative hypothesis, to say that it's a dream state. One of our fellow CIFAR global scholars, Anil Seth, is actually actively pursuing that line of investigation, trying to create hallucinogenic-like experiences by using virtual reality. My personal speculation is that it's not a dream state; it's that, more and more, with virtual reality we're able to give the brain veridical-like inputs, and so the brain's going to treat that like real data and treat it like it's the real world. Haptics to me is the final frontier right now. I haven't seen anything that convincingly shows me that we're going to get good touch experiences in VR in a way that's really meaningful and, like I said, predicted by the brain. But I will say the visual is getting there, except for our ability to track movements effectively, and I think that's another one of these things that will happen soon. It will be very realistic. But I would say the power for us as scientists is that we could start to simulate hallucinogenic experiences if we're confident that the virtual reality technology is sufficient to go there. If we give the brain the kind of data it's expecting, it will likely treat it just like real data.

[00:48:14.320] Kent Bye: Yeah, I just wanted to jump in here as well, because Mel Slater is probably the leading researcher in the VR field looking into presence, and there are other people looking at this as well: theories of presence. So what is presence? What is the theory of presence? Mel Slater says that it's the place illusion and the plausibility illusion, the degree to which your sensorimotor contingencies are basically hijacked into believing that you're in another place and that whatever is happening is real. So it's more like putting new input into our senses, and our brain fusing it together and believing it, having it be both plausible and real. And what I ran into as I had a debate with Mel Slater is that, as academics, these are not experiential design creators. They don't necessarily know how to create a really amazing experience that has a good story, that looks amazing, that has art style. And so the academics are actually really limited in the degree to which they can study presence, because they're running into not having the experiential design expertise that they need, which is why there needs to be this collaboration between game designers and researchers: the designers can take care of all that stuff to create the entire experience, and then from there you can take the data and do the research. So, anyway.

[00:49:18.577] Questioner 5: Hi, first of all, thanks for the great talk. My question is about guessing which word is being read by looking at the EEG. That's really interesting to me. So I'm curious to know whether a model that you train on speakers of one language actually works on speakers of another language.

[00:49:35.711] Alona Fyshe: So I can answer, actually, the inverse question. And I should preface this with: this is fMRI work, not EEG. For somebody who speaks two languages, we can train the model on them reading, for example, in Portuguese. So we train the model on them reading Portuguese words, and then we test on them reading the same words, but translated to English. And you're able to decode above chance across the two, which means that your brain actually has one representation for concepts, and you get at it through multiple languages.
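For readers who want to see the shape of that analysis, here is a toy sketch of cross-language decoding. It uses synthetic vectors in place of real fMRI features, so the above-chance accuracy it prints only reflects the simulated shared structure built into the fake data, not the actual findings Fyshe describes.

```python
# Toy sketch of cross-language decoding with synthetic data standing in for
# fMRI features; the real studies use voxel patterns, not random vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_concepts, n_features = 20, 100

# Pretend each concept has a shared "semantic" brain pattern plus
# language-specific noise (the hypothesis being tested).
shared = rng.normal(size=(n_concepts, n_features))
portuguese_trials = shared + 0.5 * rng.normal(size=(n_concepts, n_features))
english_trials = shared + 0.5 * rng.normal(size=(n_concepts, n_features))
labels = np.arange(n_concepts)  # one class per concept

# Train on the Portuguese condition, test on the English condition.
clf = LogisticRegression(max_iter=1000).fit(portuguese_trials, labels)
accuracy = clf.score(english_trials, labels)
print(f"cross-language accuracy: {accuracy:.2f} (chance = {1 / n_concepts:.2f})")
```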

[00:49:59.043] Questioner 6: Thanks. So I was wondering: together with ECG, eye tracking, whatever, all those sensors have different sample rates, different noise levels, oddities. How would you deal with that when trying to label data from different sources and combine them to use them in some way?

[00:50:21.561] Craig Chapman: You just hit the nail on the head for the last three years of research in my lab. Dealing with different data streams and getting them to play well with each other is exceptionally difficult, especially because these companies develop one thing really well and don't really care how well it's going to interface with something else. That being said, there's excellent work coming out of UCSD designing a middle layer called Lab Streaming Layer (there are other types of this architecture that exist) which essentially exists to provide one common timestamping mechanism for all of your different data streams. And so we've been able to leverage that, and then it's sort of independent of your sampling rate. You at least know that everything is coming in on one common time frame, and then you have to make decisions about how you want to up-sample or down-sample the data to get it to a consistent frame rate. The other thing is you can always go back to the raw data if you've got that co-registered thing that sits in the middle. So I think there are ways around it, but I would say it's technically still something that needs to be solved for every new data stream you want to add into the mix.
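As a rough illustration of the common-timestamp approach Chapman describes, here is a sketch using the Lab Streaming Layer's Python bindings (pylsl). It assumes pylsl is installed and that two streams of type "EEG" and "Gaze" are already being broadcast on the local network; the stream types, collection duration, and the 250 Hz resampling rate are assumptions made for the example, not part of any particular lab's setup.

```python
# Sketch of the "common timestamp" idea: pull samples from two LSL streams,
# each carrying its own LSL timestamps, then resample both onto one timeline.
import numpy as np
from pylsl import StreamInlet, resolve_byprop


def collect(stream_type: str, seconds: float = 5.0):
    """Pull (timestamp, first-channel value) pairs from one LSL stream."""
    info = resolve_byprop("type", stream_type, timeout=5)[0]
    inlet = StreamInlet(info)
    t0 = None
    times, values = [], []
    while True:
        sample, ts = inlet.pull_sample(timeout=1.0)
        if sample is None:
            break
        t0 = t0 if t0 is not None else ts
        if ts - t0 > seconds:
            break
        times.append(ts)
        values.append(sample[0])
    return np.array(times), np.array(values)


eeg_t, eeg_v = collect("EEG")
gaze_t, gaze_v = collect("Gaze")

# Resample both onto one shared 250 Hz timeline so downstream code (or a
# machine-learning model) sees time-aligned rows, regardless of native rates.
common_t = np.arange(max(eeg_t[0], gaze_t[0]), min(eeg_t[-1], gaze_t[-1]), 1 / 250)
eeg_on_common = np.interp(common_t, eeg_t, eeg_v)
gaze_on_common = np.interp(common_t, gaze_t, gaze_v)
print(np.column_stack([common_t, eeg_on_common, gaze_on_common])[:5])
```

This mirrors the point made next in the discussion: once every stream shares one clock, the choice of up-sampling or down-sampling is a detail the learning code no longer has to care about.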

[00:51:18.491] Alona Fyshe: But I want to say that once the timestamp problem is solved, machine learning doesn't care. As long as the time is the same for all streams, the sampling doesn't matter.

[00:51:25.335] Craig Chapman: You can blend them. OK.

[00:51:26.536] Questioner 6: Thank you very much.

[00:51:28.577] Questioner 7: Thank you for an interesting presentation. I actually have a bunch of questions, but I'll probably go with the ones that matter the most, and I can follow up with you later. First of all, the question is regarding commercial-grade EEG devices: how do you separate EEG from EMG? Since, you know, in virtual reality you'll be wearing it and you'll be moving your head, and as a matter of fact, you'll probably be recording a good amount of EMG data along with the EEG data. So how do you separate that out?

[00:51:56.718] Jake Stauch: Yeah, it's a huge challenge, and that was one of our initial challenges when we were doing this with consumers: you have people using this at home, and you have a really noisy signal with a lot of EMG activity. So our initial solution, and what's worked really well for us, is to flag it every time we detect EMG, which we essentially measure by amplitude, since EMG has a much higher amplitude than EEG. When you see that high-amplitude signal in certain frequency bands, you can be pretty sure that it's EMG and not EEG. So even though it comes from the same channel, it looks very different and it's easy to identify. And so we started using that as a feedback mechanism, where we would provide some kind of punishment in the game or some kind of very harsh feedback, letting people know, hey, you're clenching your jaw, you're furrowing your brow, you're creating this activity, and you're punished in some way, so that people can use that feedback to relax, to keep still, to keep calm. And that works for our targeted application of training this state of focus and relaxation. But when we move into VR, which is what we're doing now, it becomes a challenge, because in VR you want to be able to move. You want to move your head. You want to have different facial expressions and do all these things. We haven't solved yet how we integrate EEG while you're doing all those things and still have a reliable signal on an individual-subject basis. If you're averaging across thousands of subjects, you can get rid of a lot of that noise, but on an individual basis that is an open question. I think you still need some kind of isolated period where they're not moving to get good EEG.
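In the spirit of the amplitude check Stauch describes, here is a minimal sketch of flagging likely EMG contamination in an EEG window: band-pass into a high-frequency band where muscle activity dominates, then compare the RMS amplitude to a threshold. The band edges, sampling rate, and threshold are illustrative guesses rather than NeuroPlus's actual parameters.

```python
# Minimal amplitude-based EMG check: band-pass the window into a high band
# and flag it when the RMS amplitude exceeds a threshold.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate in Hz (illustrative)


def emg_contaminated(eeg_window: np.ndarray, threshold_uv: float = 40.0) -> bool:
    """Return True if the window's high-band RMS suggests muscle artifact."""
    # EMG energy sits largely above ~20 Hz, where EEG amplitude is usually small.
    b, a = butter(4, [20, 100], btype="bandpass", fs=FS)
    high_band = filtfilt(b, a, eeg_window)
    rms = np.sqrt(np.mean(high_band ** 2))
    return rms > threshold_uv


# Example: a clean-ish window vs. one with a simulated jaw-clench burst.
rng = np.random.default_rng(1)
clean = 10 * rng.normal(size=FS)             # ~10 uV background
clenched = clean + 80 * rng.normal(size=FS)  # large broadband burst
print(emg_contaminated(clean), emg_contaminated(clenched))
```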

[00:53:16.152] Questioner 8: So I'm with Unity Labs. We're working on a spatial UI/UX system, and I've been doing it for a while with the strict intent of having BCI be one of the means to drive this type of system. Firstly, thanks for giving the talk, but I obviously heard the mention of what I consider a pretty big moral dilemma. So I want to ask quickly, and this doesn't have to be a long answer: are you at all aware of OpenMind.org, the initiative for anonymous training, storage, and protection of user data? The intent, which has been on my mind and on the minds of some people on my end, is to protect this type of data, because you're basically creating what will end up being the most important encryption key we will possibly have. Training can still happen, and it can be anonymized. The user can be completely protected, even compensated if you want, with an Ethereum coin or whatever. But more importantly, you're not hindered, everything's anonymized, and what you're collecting here, if it is ever actually shared at some point, will be the most unique identifier that will ever be able to be gained off of a human. Is this at all on your minds?

[00:54:15.480] Alona Fyshe: I hadn't heard of open, what's it, OpenMind?

[00:54:17.901] Questioner 8: OpenMind.org. Yeah, I hadn't heard of it, thanks. There's also Cortex Labs, but I would lean towards OpenMind. If an initiative like this was built into the core of your research and your work, you could spur this development further and have it pushed beyond your own means, especially when these devices truly hit a consumer level. We're in discussions with some of the entities that you may be discussing here, and they're informing our process, but every chance I get, I try to push this out, and you're obviously aware of the importance.

[00:54:42.889] Craig Chapman: I'm not familiar with OpenMind either, and I would say that's absolutely the kind of platform that needs to exist, so thank you for telling us. I will say, though, as university researchers, we have to go through many ethical procedures to guarantee that the data we collect for research purposes is protected, stays within the university, and is anonymized. There are a lot of protocols that we go through. So this is a new foray for me, to think about the ways this might be applicable for industry; that would raise a whole new level of privacy concerns that we currently address very locally as university researchers and would need to be considering moving forward.

[00:55:20.312] Questioner 8: Just super quick yes or no. Would any of you be interested in seeing a VR app that could potentially induce hypnagogic states?

[00:55:28.117] Alona Fyshe: Cool.

[00:55:30.159] Kent Bye: Great. So that's the top of the hour here. We should probably wrap up. And what I would suggest is if you want to talk, I think people will be available afterwards. Maybe if we go outside, there's a wrap up area. But let's just give our panelists a round of applause.

[00:55:47.467] Kent Bye: So that was The Future of VR: Neuroscience and Biosensor Driven Development, presented by CIFAR at the Game Developers Conference in 2018. It featured Craig Chapman, a movement neuroscientist from the University of Alberta; Alona Fyshe, who at the time was at the University of Victoria and is now at the University of Alberta, looking at computational linguistics, machine learning, and neuroscience; and finally Jake Stauch, the founder and CEO of NeuroPlus.

So I have a number of different takeaways from this conversation. First of all, there are a lot of really interesting interactions that I think are going to be happening between these two communities of neuroscientists and game developers, mostly because game developers are really focused on creating these really engaging gameplay mechanics, and the neuroscientists are trying to understand what's happening within the mind, with all these different theories about how to make sense of our cognition. One is the theory of embodied cognition, which holds that, as Craig Chapman says, moving is thinking. Craig says that our brain is designed to move, and that movement is the output but also the input. It's like a recursion where, as we move our body, it actually changes the way that we think.

Then there's this whole theory called the predictive coding hypothesis, which says that we draw on our memory of things we've seen before, and then, as we're perceiving the world, we take the input of what we're seeing and make sense of it based upon what we know about the context. We build up a mental model of the world, of what we expect that reality to be, and if there's a difference, then error signals get produced, along with dopamine, with the intention of correcting the model so that as we go forward we have fewer and fewer errors and a good sense of what to expect within the world. That's the predictive coding hypothesis, which came up a number of times within this conversation. Looking at aspects of surprise and delight, wonder and awe, the predictive coding model would predict that we get some sort of evolutionary impulse to find novelty within our lives, because novelty is what we don't predict or expect, and when we experience something we don't predict or expect, that's when we get those dopamine hits. And so gamification, turning things into games, is all about trying to find those different game loops and those aspects of novelty.
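As a purely pedagogical aside, the prediction-error loop at the heart of that predictive coding story can be sketched in a few lines: a running prediction is nudged by the mismatch between what was expected and what actually arrived. This is a toy delta-rule illustration, not a model of the brain or of anything the panelists built.

```python
# Toy illustration of a prediction-error loop: the prediction is updated by a
# fraction of the error between expectation and observation.
def update_prediction(prediction: float, observation: float, lr: float = 0.2) -> float:
    error = observation - prediction   # "surprise": mismatch drives learning
    return prediction + lr * error     # shrink future errors step by step


prediction = 0.0
observations = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0]  # a novel, surprising spike mid-sequence
for obs in observations:
    print(f"predicted {prediction:.2f}, saw {obs:.2f}, error {obs - prediction:+.2f}")
    prediction = update_prediction(prediction, obs)
```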
And, you know, I think the big challenge is to help people do things that they actually need to be doing, whether for their health, for neurorehabilitation, or for improving themselves in other ways: how to use these game mechanics, which try to break down into a science how to harness novelty, build game loops, and shape a progression curve so that there's a constant sense of improvement, and then apply those principles to research or medical applications using biosensor feedback fused with virtual reality. It was also interesting to hear from Alona Fyshe, who comes from much more of the linguistics side: what are the ways in which we turn our experience of the world into language? She's been working on computational linguistics, machine learning, and neuroscience, trying to find ways to do machine learning on EEG, and collaborating with Craig Chapman to take this fusion of eye tracking and motion data and automatically label the EEG, because in machine learning you need lots of data, and it needs to be labeled to be useful for supervised learning. So they're trying to automate that labeling process by using motion capture and eye tracking data and doing those automatic correlations; there's a sketch of this idea just below. In the previously published episode, a conversation with Craig Chapman, he goes into that in a little more detail; he's been collaborating a lot with Alona Fyshe. And then for Jake, he's trying to bring all this EEG data into the experience. The thing that I learned both from Jake and from the Experiential Technology Conference is that it's actually very difficult to use BCIs, brain-computer interfaces, to take what's happening in your body and feed it back into the experience as real-time feedback. EMG is probably a little more consistent in terms of how your muscles are moving: if you twitch your eyes, you can get very specific spikes. But it's very difficult to take a single instant and discern what is happening within your body; you have to take larger windows and sample sizes to get information that's useful for that kind of feedback loop. Jake at NeuroPlus has been working on a number of these types of experiences, and one of his challenges is dealing with the low signal-to-noise ratio. There's lots of noise, and machine learning can definitely help with that, but there's still a lot of randomness and a lot of ways in which it doesn't feel truly real-time. Maybe 80% of the time you can get there, but the other 20% it just feels like you're struggling to express your agency. And so that's one of the existential challenges of doing this type of real-time biometric feedback, taking what's happening in your body and feeding it back into these experiences. At the time of this conversation, back in March 2018, the whole Cambridge Analytica story had just broken wide open on March 17th, 2018, so there was a lot of thinking about the different implications of privacy.
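Circling back to the automatic-labeling idea: here is a hedged sketch of how time-aligned gaze or motion events could be used to cut and label EEG epochs for supervised learning. The array shapes, sampling rate, and event names are invented for illustration; this is not the actual Chapman and Fyshe pipeline.

```python
# Event-locked auto-labeling sketch: given EEG already aligned to a common
# clock, cut an epoch around each gaze/motion event and use the event type
# as the supervised label.
import numpy as np

FS = 250                             # EEG sampling rate in Hz (illustrative)
eeg = np.random.randn(8, FS * 60)    # 8 channels, 60 s of fake data

# (time_in_seconds, label) pairs produced by eye tracking / motion capture,
# e.g. "gaze landed on target", "reach onset".
events = [(5.2, "gaze_on_target"), (12.8, "reach_onset"), (33.1, "gaze_on_target")]


def epoch(data: np.ndarray, t_event: float, pre: float = 0.2, post: float = 0.8):
    """Slice a channels-by-time window around one event."""
    start = int((t_event - pre) * FS)
    stop = int((t_event + post) * FS)
    return data[:, start:stop]


X = np.stack([epoch(eeg, t).ravel() for t, _ in events])  # one row per epoch
y = np.array([label for _, label in events])              # auto-generated labels
print(X.shape, y)  # ready for any supervised learner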
In a lot of ways, the research context is different from the gaming context, which is a lot more consumer-oriented. So I think there are still a lot of open questions about how to collaborate with neuroscience research and how that data gets handled, especially with the privacy concerns, as well as how to correlate what's happening within the experiences with that data and have it labeled in some meaningful way, which is a lot of what Craig and Alona have been working on with their automatic labeling. As for where that ends up, and whether it's always going to stay in a specific research context, I suspect that you're going to have game developers who become interested in building these very specific applications that could be used for research or for medical purposes, and that it will live within that medical context itself, rather than among consumer gamers trying to figure out how to share data back and forth, just because there are so many issues with consent, disclosure, and privacy. Logistically, it's probably going to be easier if the game developers come over to the research or medical side. So again, this was one of the first conversations that tried to bring together this research community and the broader game development community, and I expect to see a lot more of this fusion as we move forward, especially as there are so many compelling applications in virtual reality for neurorehabilitation and medicine. But this whole aspect of neuroscience research, I think, is really interesting as well. Researchers working within an academic context, like Craig Chapman and Alona Fyshe, would, I think, love to collaborate with people who have some extra time or expertise to help figure out how to do all these different fusions and start doing research that could potentially lead to discovering aspects of what the nature of consciousness might be, or at least to looking at the biosensor information coming out of our bodies and being able to predict what's going to happen with our behavior, which is a lot of what Craig and Alona are working on right now. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word and tell your friends; word of mouth is a big part of how this podcast grows, so share it with people who you think might enjoy it. That goes a long way toward helping the podcast continue to grow. And if you'd like to help keep this podcast free, not only for yourself but for everybody listening, then you can become a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from listeners like yourself to continue to bring you this coverage. So if you've been thinking about becoming a member and haven't yet, now's a great time; just $5 a month is a great amount to donate and helps allow me to continue to bring you this type of coverage.
So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
