Advanced Brain Monitoring is a 17-year-old neurotechnology company that has been able to extract a wealth of useful information from EEG data. They’ve developed EEG metrics for drowsiness, cognitive workload, flow states, engagement, stress, emotion, and empathy, as well as biomarkers for different types of cognitive impairment. They’ve also developed a brain-computer interface that can be integrated with a VR headset, which has allowed them to build VR medical applications for PTSD exposure therapy as well as some experimental VR treatments for neurodegenerative diseases like dementia.
I had a chance to catch up with Advanced Brain Monitoring’s CEO and co-founder Chris Berka at the Experiential Technology Conference, where we talked about their different neurotechnology applications, ranging from medical treatments and cognitive enhancement to accelerated learning and performance-training processes that guide athletes into optimal physiological flow states.
LISTEN TO THE VOICES OF VR PODCAST
Advanced Brain Monitoring operates within the context of medical applications, with an institutional review board and HIPAA-mandated privacy protocols, and so we also talked about the ethical implications of capturing and storing EEG data in a consumer context. Berka says, “That’s a huge challenge, and I don’t think that all of the relevant players and stakeholders have completely thought through that issue.”
They’ve developed a portfolio of biomarkers for neurodegenerative diseases including Alzheimer’s disease, Huntington’s disease, mild cognitive impairment, frontotemporal dementia, Lewy body dementia, and Parkinson’s disease. They’ve shown that it’s possible to detect a number of medical conditions from EEG data, which raises additional ethical questions for any future consumer VR company that records and stores EEG data. What are their disclosure or privacy-protection obligations if they can potentially detect a number of different medical conditions before you’re aware of them?
The convergence of EEG and VR is still in the DIY and experimental phases, with custom integrated B2B solutions coming soon from companies like MindMaze, but it’s still pretty early for consumer applications of EEG and VR. Any integration today requires piecing together hardware options from companies like Advanced Brain Monitoring or the OpenBCI project, and then you’d likely also need to roll your own custom applications. There are a lot of exciting biofeedback-driven mindfulness applications and accelerated learning and training applications that will start to become more available, but some of the first EEG and VR integrations will likely be within the context of medical applications like neurorehabilitation, exposure therapy, and potential treatments for neurodegenerative diseases.
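To make the DIY side a little more concrete, here is a minimal, purely illustrative sketch of the kind of signal processing such an integration involves: estimating alpha-band (8–12 Hz) power from one channel of EEG samples with a direct DFT. The sampling rate, band edges, and synthetic signals below are assumptions for illustration, not details from any particular product.

```python
# Illustrative sketch of a DIY EEG-to-VR feature: estimate alpha-band
# (8-12 Hz) power from one channel of samples with a direct DFT.
import math

def band_power(samples, fs, lo=8.0, hi=12.0):
    """Mean spectral power of `samples` between lo and hi Hz."""
    n = len(samples)
    powers = []
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            powers.append((re * re + im * im) / n)
    return sum(powers) / len(powers) if powers else 0.0

# One-second windows at 250 Hz: a strong 10 Hz "alpha" rhythm versus a
# faint 40 Hz signal that should carry almost no alpha power.
fs = 250
alpha = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
quiet = [0.01 * math.sin(2 * math.pi * 40 * i / fs) for i in range(fs)]
print(band_power(alpha, fs) > band_power(quiet, fs))
```

In a real integration, the samples would come from a streaming EEG driver such as OpenBCI's software, and the band-power score would be smoothed over time and fed into the VR application as a biofeedback input.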
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So at the Experiential Technology Conference, there were a number of different neuroscience-based companies, and some of them were starting to do some virtual reality integrations. One of them was Advanced Brain Monitoring, which has been around for about 17 years and creates an EEG sensor that can be used in conjunction with a virtual reality head-mounted display. So I had a chance to catch up with Chris Berka, who's the CEO and co-founder of Advanced Brain Monitoring, and we talked about some of the EEG metrics that they're able to determine, everything from drowsiness to inducing flow states and cognitive workload, engagement, stress, emotion, and empathy, as well as different biomarkers for neurodegenerative diseases. So they're combining their EEG sensor with a virtual reality headset to see if they can potentially maintain memory for people who have neurodegenerative diseases and potentially even improve it. We'll also cover some of the ethical and privacy issues when you're moving from a medical context into a consumer-grade context with EEG data. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by VRLA. VRLA is the world's largest immersive technology expo with over 100 VR and AR experiences. They'll have tons of panels and workshops where you can learn from industry leaders about the future of entertainment and storytelling. I personally love seeing the latest motion platforms and experiences that I can't see anywhere else. Passes start at $30, but I actually recommend getting the Pro Pass so you can see a lot more demos. VRLA is taking place on April 14th to 15th, so go to VirtualRealityLA.com and get 15% off by using the promo code VRLA underscore VoicesOfVR.
So this interview with Chris happened at the Experiential Technology Conference in San Francisco, California on March 15th, 2017. So with that, let's go ahead and dive right in.
[00:02:14.993] Chris Berka: Okay, so I'm Chris Berka, I'm the CEO and co-founder of Advanced Brain Monitoring, and we've actually been in business for 17 years. We develop mobile, easy to use, scalable sensing systems focusing on the brain, but we also monitor the heart and skin conductance and breathing and muscle activity; just about anything that you can measure non-invasively, we have some sensors that we've developed for. And what's unique about our systems is they're very comfortable, they're easy to use, and they're easy to integrate with other systems such as VR or AR or eye tracking because our whole EEG system is very soft and comfortable. So we have done a number of integration projects. We're hoping to do quite a few more in both VR and AR. We are a medical device company, so all of our products are cleared by FDA for clinical applications. But then we do a lot of novel, fun applications as well with the same systems. So we've done two things with VR so far. We've done exposure therapy for PTSD. And we've tried using exposure therapy along with all of these multi-sensor systems so that we can first train soldiers to identify their physiological changes as they're experiencing potentially stressful or combat-related scenarios in VR. The second thing that we've done is to look at, and this is something that we hope to expand on, is for patients with cognitive impairment who may be having memory loss and may be on the progression path to Alzheimer's or one of the other dementias. We found that if you take them to a very novel, exotic environment, that that will enhance their memory. So that's a natural for VR implementation. We have some brain-based metrics that we've identified that change with increasing cognitive impairment. So the goal is to use these exotic travel locations and then embed little memory tests in the VR scenario and hope to stimulate that same pathway that people experience when they're in exotic or stimulating locations.
But to do that in combination with our neurometrics that are saying we're either improving the cognitive state and the memory or might be just staying the same, I think that's really an important integration of the two products. And then we have a lot of collaborators who are using our EEG systems, but they're doing gaming applications, or they're doing military training where, again, they're using the EEG for different types of cognitive algorithms or training algorithms to see if the trainee is ready to receive information, and then using that in a closed loop with the VR scenario to either slow it down, speed it up, make it more interesting. So I think that's another area where you're going to see a lot of these integrated applications.
[00:05:21.559] Kent Bye: Talking to Walter Greenleaf, he said that we're kind of facing this impending crisis of an aging population and a lot of these degenerative brain diseases like Alzheimer's and dementia. So it sounds like, from what I'm hearing about your experience with memory, are you trying to find some way to perhaps use VR to stimulate the brain, so that if there's loss of neural connections to parts of the brain and memories, you'd be able to find kind of alternative pathways through the principles of neuroplasticity?
[00:05:51.797] Chris Berka: Yeah, that's exactly right. And there's already a pretty good basis in the scientific literature for this notion that novelty and exotic environments stimulate pathways that are linked really closely to the memory system. And taking people on trips, or giving them exotic fruits and vegetables, or experiences that they've never had before that are engaging to them, enhances all of the memory from that particular day. So again, this is ideal for VR because we can't take people on trips every single day, but we can give them little mini voyages. And then, I mean, the nice thing is, again, we have the objective brain metrics to see if it's improving their memory and is it improving the brain state that we know is associated with better memory. So it's kind of an objective as well as an experiential approach. We've been working for the last five years with the pharmaceutical industry to map out these different brain biomarkers for neurodegenerative diseases so we can use them when the drugs come out of the pipeline, but getting good drugs that actually work takes a really long time. So having made all these diagnostic metrics, we're now really interested in what are the available technologies. So we have electroceuticals, stimulation, we have cognitive training environments, and we have this whole gaming entertainment, infotainment. You know, all of these things, either in combination or alone, may give us a chance of at least preserving a person, you know, if not improving their memory, which we would love to do, but at least preserving them for many years at the level that they're at, while we wait for drugs and other devices and, you know, alternative treatments. I think all of this technology can help us, at least when people are in the early stages, to maintain for longer periods of time.
[00:07:51.070] Kent Bye: Yeah, it seems like I've seen some videos of people who have a neurodegenerative disease that when they listen to music, they sort of get up and start singing the lyrics, and it is able to tap into a different dimension of their memory. But also from my own personal experience of Google Earth VR, being able to go back to the places that I lived and be able to see the architecture of that place, it brought back those memories. And so I see that there is this connection between place and memory that I've personally experienced. Curious if that's part of this exotic location theory, that if you are able to go to either new places that they've never been before, or places where they grew up, or places that they may have actually been to before but aren't able to return to. So if you're able to do either a Google Earth VR or something that may be a 360 video that is able to go back to these different places, then it could start to spur back different memories.
[00:08:43.295] Chris Berka: Yeah, definitely, because those older memories are preserved for longer periods of time. So the idea of using some sort of a Google Earth and taking you back to a time that's still well-preserved in your brain may stimulate that circuitry. We're also thinking in VR, you know, you can place people and objects, so you could place familiar people in the VR environment, and detect using the brain activity whether I can identify you or whether my brain is saying I don't recognize you anymore, and use different people and conversations to again try to stimulate the memories that still exist and help to lay down new memories as well. There's a lot of possibilities.
[00:09:30.256] Kent Bye: In terms of the EEG, you have sort of these EEG-based metrics where you have a number of different things that you're able to extrapolate from EEG. Maybe you could run through some of those things that you're able to actually detect because, you know, as these two technologies may be on a convergence at some point with EEG and VR, I'm just curious to hear what you do in this non-invasive EEG method to be able to extrapolate these inputs that could be fed into like a real-time biofeedback VR experience.
[00:09:59.627] Chris Berka: Sure, so the first metric that we tackled was drowsiness. And you'd be surprised, you know, even though the brain changes predictably as you get more and more drowsy and fall asleep, it took us a few years and several hundred people to develop an algorithm for drowsiness that applied across the population. And part of that was we wanted to not just predict right up to sleep onset, like a few minutes or a few seconds. We wanted to be able to develop an algorithm that was predictive as early as 30 minutes before you started to actually have behavioral decrements, such as slowed reaction time. And that's the time when you start to get the head nods and eye closures. So our algorithms that we've developed detect drowsiness in real time. And then we can look over a period of time and make that prediction about 30 minutes ahead of time for most people. Sometimes it's 15 minutes, but 15 to 30 minutes, we can predict that you're going to crash and burn, whether you're in a simulator or a real vehicle. So that was the first thing we did. And then we also did sleep staging algorithms so we could map the sleep architecture. You know, stage one, stage two, deep sleep, and then dreaming sleep. So those algorithms are really well-established and well-documented. Then the next thing we tackled was cognitive workload. We used working memory load as a surrogate, because cognitive workload, unfortunately, means 100 things to 100 people; everybody has a different interpretation. So that metric we built based on increasing levels of working memory load. So most people can hold in their head at any given time about nine digits, a phone number. And for some objects or places or things, you can remember more items. But your ability to hold things in working memory is a key part of intelligence, performance, productivity. It's a very important aspect of brain states. So we built an algorithm that measures that in real time.
And so what you can use that for is if you're evaluating a website or you're trying to deliver information to someone or you're trying to train someone, you don't want their workload to be too high, but you also don't want it to be too low. So there's kind of an optimal zone for that working memory load that you want to keep people in if you're trying to train them. If it's just a movie and you want to see if they're entertained and aroused and excited, we can use our engagement metric for that. Workload doesn't matter. But for things where you want to induce a flow state, or you want immersion, you need a certain level of cognitive workload. You need to be engaged, and your cognitive propensities need to be activated as well. So we have that metric, and that's pretty well established. Also, we introduced it seven years ago, so a lot of people have used that metric. Then more recently we built an emotion metric for positive and negative emotion. That turned out to be much more challenging, primarily because different people experience different emotions differently in the brain. That's probably the easiest way to say it. So we do have a positive-negative emotion metric on the market right now. All those other metrics require five or three or two EEG sensors; our emotion metric requires closer to 20. And then we have an empathy metric which we didn't develop. It was already in the literature that you can measure it over the sensory motor cortex. We've implemented that. It runs in real time and it was based on existing literature. And it's proven to be very robust and useful. You know, when your mirror neurons are activated and you're mirroring other people, you get activation of that system and we can measure that. It's something that is lacking in many autistic children, or in some people who just aren't particularly empathetic, like sociopaths.
And then we also look at stress and arousal, but we do that with a combination of EEG and heart rate and heart rate variability. So those are the established metrics. But then we have a whole portfolio of, as I said earlier, biomarkers for Alzheimer's, mild cognitive impairment, frontotemporal dementia, Lewy body dementia, and Parkinson's disease, with and without dementia. So we've been working on stratifying the dementias, again mostly with funding from the drug companies. And then recently we got a big NIH grant to continue that work. So we can use those again to look at different interventions and see if we can move people away from those dementia markers into a more normal state.
[00:14:53.744] Kent Bye: Yeah, the thing that's fascinating to me is that that's quite a large range of things that you can do with non-invasive EEG, all these different algorithms, and I think as you add artificial intelligence you'll be able to do more sophisticated things. From your technological roadmap, are you looking at doing big collections of data sets to be able to use machine learning to create new algorithms in that way?
[00:15:15.611] Chris Berka: Yeah, so we have about 10,000 daytime EEG studies in our database, and more, I mean, probably closer to 50,000 sleep studies, and we're adding to that every day. All of those algorithms are some type of machine learning implementation, so they're either discriminant functions, support vector machines, or neural networks. We've done some things with Markov modeling where we want to preserve the time. In a lot of cases you're just throwing everything into a big bucket with EEG and finding the variables out of these large feature sets that best distinguish the Alzheimer's. But in some cases, and this is true for Parkinson's and Lewy body disease, you get them going in and out of these, they're called cognitive fluctuations, where they tune in and they tune out. And the only way to measure that with the EEG is to preserve the time. So you look at repeated patterns of that tuning in and out, and we use a Markov model to track that, where we preserve chunks of time, and then put it into the big bucket. And then we've dabbled a little bit with the nonlinear metrics, Granger causality, and entropy. So far, we haven't had as much luck with some of the nonlinear metrics as other people have. We have collaborators who have done more work in that area, but it's something to keep in mind. There's so many mathematical approaches that can be tried now that we can get good, high-quality data from large populations. And of course, growing the database. I mean, we're hungry for data. So, you know, the more data we can get, the better algorithms we can build.
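Berka's point about Markov models preserving time can be sketched in miniature. The toy example below, which is illustrative and not Advanced Brain Monitoring's actual model, estimates the transition probabilities of a two-state "tuned in / tuned out" chain from a labeled sequence; the state labels and sequence are invented for the example.

```python
# Toy illustration of why preserving time matters: estimate the
# transition probabilities of a two-state Markov chain ("tuned in" /
# "tuned out" cognitive fluctuations) from a labeled state sequence.
def transition_probs(states):
    """Maximum-likelihood transition probabilities from a state sequence."""
    counts = {}
    for a, b in zip(states, states[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    totals = {}
    for (a, _), c in counts.items():
        totals[a] = totals.get(a, 0) + c
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

# A hypothetical per-epoch labeling of cognitive fluctuations.
seq = ["in", "in", "out", "in", "in", "in", "out", "out", "in"]
probs = transition_probs(seq)
print(round(probs[("in", "out")], 2))  # how often "tuned in" flips to "tuned out"
```

A simple bag-of-features approach would throw away exactly the repeated in/out switching pattern that these transition probabilities capture.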
[00:16:52.420] Kent Bye: Yeah, I just want to ask a question about where I see consumer VR going. You know, we have these performance-based marketing companies of Facebook and Google who are getting a lot of big data and doing a lot of AI, but yet I see that there's going to be eventually this convergence of EEG and VR, and there's quite a lot of robust things that you can do. I think there's a context question, though, in terms of the ethics around privacy and personally identifiable information. If we do start to integrate more and more EEG into these consumer devices, what are the standards around what you should do with the data? Should you not record it? If you need to train the AI, you need the data, but yet there's all sorts of really intimate, potentially personally identifiable information, if there's a digital fingerprint within these EEG signatures, or if there's medical information that you could see in these biomarkers that is also intimate. We could have the situation where Facebook or Google could know that you have some sort of medical condition before you do. And so what are the ethical implications of that? So from the perspective of your company in this medical context, as you start to potentially get into more consumer-grade applications, how do you navigate this issue of ethics and privacy?
[00:18:13.973] Chris Berka: That's a huge challenge, and I don't think that all of the relevant players and stakeholders have completely thought through that issue. So in the medical domain, for us to put data into our database, it's always what we call de-identified, meaning we give you a number. So you've now become a number in a sea of numbers, and there's no way that anyone can link that number back to your identity. All of the studies that we do are under an institutional review board, so you sign a consent form, and in that consent form, you know, there are certain things you consent to: you're going to give the data, and we're going to protect your privacy. If we publish the data, no one will be able to ever link that data back to you as an individual. If we're doing a medical report, then we relink, so we store the patient's name and social security number and everything in a separate protected file. And then we relink that to give it back to the medical professional because they need that for billing. But most of those databases that I'm talking about, we have no idea who the people are. They've just become numbers. So when you get into the consumer domain, you know, yes, many people are willing to share their brain or their sensor data with someone who's collecting a database or you're doing citizen science projects, but I don't think that they've necessarily thought through just those issues that you discussed. I mean, what if somebody took your data and said you're going to get Alzheimer's disease? Do you want anyone else to know that? Do you even want to know that? It's the same issue with genetic testing. Maybe genetic testing has had to face it a little earlier than this field because it's much more precise, because there is certain genetic information that is highly predictive of what's going to happen to you. Who do you want to have that information?
I would never send my blood in for a genetic test without medical protection and medical privacy. I'm an employer, but I wouldn't want an employer to know, and I wouldn't want my insurance company to know. So, I mean, there are some big issues that need to be discussed that I don't think people are thinking about.
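The de-identification pattern Berka describes, where research records are keyed by an opaque subject number and the re-linking table that maps numbers back to identities lives in a separate protected store, can be sketched as follows. The class and method names are invented for illustration; a real system would also encrypt and access-control the re-linking store.

```python
# Hypothetical sketch of de-identification with separate re-linking,
# as described in the interview. Names here are illustrative only.
import secrets

class DeidentifiedStore:
    def __init__(self):
        self._records = {}   # subject number -> sensor data (shareable)
        self._relink = {}    # subject number -> identity (kept separate)

    def enroll(self, identity, data):
        subject_id = secrets.token_hex(8)  # "a number in a sea of numbers"
        self._records[subject_id] = data
        self._relink[subject_id] = identity  # a separate, protected file in practice
        return subject_id

    def research_view(self):
        """What analysts and publications see: numbers, never names."""
        return dict(self._records)

store = DeidentifiedStore()
sid = store.enroll("Jane Doe", [0.12, 0.07, 0.33])
# The research view contains the data under an opaque number, no identity.
assert "Jane Doe" not in str(store.research_view())
print(sid in store.research_view())
```

The point of the interview's caution still applies: as Russomanno's observation later in this conversation suggests, separating identity from data does not by itself guarantee that the data can never be re-identified.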
[00:20:26.757] Kent Bye: I feel like I'm a little bit on the front lines of trying to talk about these issues because, I mean, the other issue that you brought up is that, you know, when I talked to Conor Russomanno last year from OpenBCI, the thing he said is that it could actually be that, given the right artificial intelligence algorithm, you could take that anonymized data that is separated in that database, but if you independently record new EEG data, there could end up being a unique digital identifier, so that if you hack into the database, or if it's somehow melded and you have the correlation, it could be like a key that unlocks that anonymization process.
[00:21:02.475] Chris Berka: All of these things need to be considered and thought through. Just as we're doing with Google, how much information about ourselves are we already giving away to Google and Facebook? This is just another level of information that you really should think very carefully about before you either participate in a project or upload your data to a database if you're doing it voluntarily. And, I mean, again, we're really very conscious of it because we live under the HIPAA guidelines. So, I mean, we have a mandate to protect patient privacy. And then if we're doing an FDA study, again, there's another level of regulatory scrutiny. And so we're trying to live by those higher standards, if you will, for now. But I see all around us that there's data flying everywhere. And again, I mean, it is something that needs to be discussed in the public forum so that people are aware of what the potential of these signals is.
[00:22:01.734] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of virtual reality and these neural technologies and what you think that might be able to enable?
[00:22:15.535] Chris Berka: I think, you know, human performance enhancement, number one. In the case of PTSD, training people to control their physiology is kind of the first step in changing their symptomology. So, I mean, exposure therapy has been used for many years, where the psychiatrist basically takes the person through, usually through imagining they're back in Afghanistan or, you know, whatever the triggering event was. But now you can actually present that to them and measure their physiology at the same time. And so one of the things we found is that if you give people first some feedback about their physiology, and you hear your heart racing or you see your GSR going up or you see your brain going into a high-anxiety state, just that knowledge alone gives you some sense of potential control over it. So I think those kinds of therapeutic approaches are really powerful, where you have the two linked together. But then we've done some peak performance training with athletes, marksmen, and golfers, and archers, and helped them. Again, we didn't do this in virtual reality, but it could very easily be done in virtual reality, to help them get into the right psychophysiological state prior to taking a shot or pulling the arrow. And all of those things can be done in the virtual world.
[00:23:36.767] Kent Bye: Can you invoke a flow state then?
[00:23:39.985] Chris Berka: Yeah, that's what we were trying to do. And, you know, we had access to many, many experts. We had expert marksmen, we had Olympic archers, some PGA golfers, and they let us monitor their brain while they were doing their activity. And then we developed a device that helped coach intermediates into the same physiological profile as the experts and found that it improved, you know, performance training across some of those sports. And that's kind of a simple example, but there's much more complex training that you could do by integrating virtual or AR with the brain metrics. Heart rate and heart rate variability are really useful because they show you your stress and anxiety levels. If you want to be a better public speaker, you can train yourself to control your anxiety response first. So there's lots and lots of applications.
[00:24:39.847] Kent Bye: Awesome. Well, thank you so much.
[00:24:41.368] Chris Berka: You're welcome. It was a pleasure.
[00:24:43.729] Kent Bye: So that was Chris Berka. She's the CEO and co-founder of Advanced Brain Monitoring. So I have a number of different takeaways from this interview. First of all, there's just a lot of really interesting information that you can extrapolate from EEG data. Now, usually the EEG data is pretty noisy, and so they've developed a number of different machine-learning-based algorithms in order to clean up and get a better signal from the data, but also to determine a number of different metrics that I think are kind of interesting. And the ones that we talked about on this podcast were being able to detect drowsiness, cognitive workload to be able to eventually induce flow states, looking at engagement, stress and arousal, emotion, and empathy. And there's also a number of different biomarkers for different types of cognitive impairment. So they're able to detect some of the early signs of Alzheimer's, mild cognitive impairment, frontotemporal dementia, Lewy body dementia, as well as Parkinson's disease. So most of the work that Advanced Brain Monitoring has been doing up to this point has been within the context of medical applications. And so they have an institutional review board, they have consent forms and privacy protocols that are mandated by the HIPAA protections. So I think that there's just a lot of ethical implications as we start to look at the potential of integrating EEG sensor data into VR headsets. At this point, there's a lot of barriers to entry for people to kind of create a DIY system. I just did an interview about a project at USC ICT that was looking at how to do a low-cost EEG solution using the OpenBCI as well as some 3D-printed parts in a swim cap, and I'll be airing that interview later on. But without a custom integration of EEG into headsets, I think that there's just going to be a lack of content.
It seems like a lot of the early applications are definitely going to be in the medical realm for neurorehabilitation, as well as exposure therapy for PTSD, as well as potential studies with dementia. To me, it was just really interesting that you could even start to use VR to be able to take people who are suffering from neurodegenerative diseases into these novel and exotic locations. And that if you also gave them fruit and novel food, you could start to rebuild stronger memories from day to day. And that potentially by doing that, they can first maintain their condition, but eventually potentially even improve their condition. So I think there's still a lot of research yet to be done here. But if it is possible to rewire the brain, and if it's just a brain wiring problem, if there's ways to find new pathways into those old memories by using Google Earth VR or 360 video, then I think that there's just a lot of exciting possibilities for how you could use VR to be able to treat some of these diseases. And I think the other big applications for a lot of this technology are going to be in cognitive enhancement: doing training or learning applications, monitoring someone's cognitive workload to see whether or not they're understanding it or if they're confused, and feeding that back into the training system to create feedback loops in order to custom-tailor the training applications for people depending on how they're doing. I've had some other interviews with people at GDC who were, you know, what I would call consciousness hackers, trying to use EEG technology to do better meditation or to monitor their physiological states to be able to get into the optimal state in order to do the best type of learning. One of the things that Chris said is that if you train people to be able to control their physiology, then that's part of the first step of changing their symptomology.
So in the context of exposure therapy and PTSD, they're putting these soldiers into these stressful situations and giving them this real-time biofeedback of what's happening in their body so that they can start to detect that and potentially do some things to change it and remediate it. And I've also just recently done an interview with Skip Rizzo, who has been doing a lot of really pioneering work with using virtual reality for exposure therapy for PTSD. And, you know, one of the things that he was really emphasizing is the importance of the soldiers being able to tell the story of their trauma, but to really just be emotionally present to that process of making new meaning about what they went through. So there's a lot of other dimensions that go beyond, I think, just, you know, controlling physiological symptoms; actually getting into the emotional catharsis of it, I think, is also a huge part, based upon my interview with Skip Rizzo. And finally, I think it was really interesting to look at this cognitive workload in the context of trying to induce different flow states, especially in the sense that you don't want the cognitive workload to be too low, but you also don't want it to be too high. So it implies that when you're in a virtual reality environment, in order to create this sense of immersion and presence, you have to have some level of engagement with the scene. You have to be having this exchange. It can't be just a completely passive experience; you have to either be challenged in your mind or be interacting with the scene. So I think that from a neuroscientific approach of looking at what's actually happening in the brain, there's a certain level of cognitive workload that is kind of the sweet spot. It's not too hard and it's not too easy, but it's just at the right spot where you get into these flow states.
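This closed-loop idea, slowing an experience down or speeding it up to keep workload in the sweet spot, can be sketched as a very simple controller. The thresholds, step size, and simulated workload values below are arbitrary illustrative choices, not anything from Advanced Brain Monitoring's actual systems.

```python
# Illustrative closed-loop sketch: nudge task difficulty so that an
# estimated cognitive-workload score stays inside an assumed "flow"
# band. All numeric values are arbitrary choices for the example.
def adjust_difficulty(difficulty, workload, low=0.4, high=0.7, step=0.05):
    """Raise difficulty when workload is too low, ease off when too high."""
    if workload < low:
        return min(1.0, difficulty + step)   # too easy: speed it up
    if workload > high:
        return max(0.0, difficulty - step)   # overloaded: slow it down
    return difficulty                        # in the zone: hold steady

# Simulated per-window workload estimates driving the loop.
d = 0.5
for w in [0.2, 0.2, 0.9, 0.5]:
    d = adjust_difficulty(d, w)
print(round(d, 2))
```

In a real integration, the workload estimate would come from an EEG-derived metric updated every few seconds, and the difficulty knob might control pacing, scene complexity, or the rate of new information in the VR scenario.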
So I think it's really interesting that they were able to look at the physiological profiles of these elite performance athletes, then look at what is happening in amateurs, coach them into getting into a similar physiological state, and then actually see better performance. And so I suspect that there are probably a lot of similar physiological profiles for, you know, optimal learning states for cognitive enhancement or training or different medical and healing applications. So that's all I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and become a donor. Just a few dollars a month makes a huge difference. So go to patreon.com slash Voices of VR. Thanks for listening.