#1112: OpenBCI’s Project Galea Hands-On & Fusion of Biometric & Physiological Data in VR

I had a chance to drop by the Brooklyn office of brain-computer interface start-up OpenBCI to get a hands-on demo of Project Galea, which integrates a range of different biometric and physiological sensors (EOG, EMG, EDA, and PPG) in addition to 10 EEG channels and eye tracking into a single VR headset. I previously spoke to CEO Conor Russomanno about Project Galea in 2021, and before that about OpenBCI in 2016.

It’s still early days for the real power and potential of fusing together many different biometric and physiological data streams, especially since most of the demos that OpenBCI has developed so far only use a couple of sensors at a time, and none yet combines the raw data streams from multiple types of sensors in a novel way. But access to time-synchronized data across all of these streams will likely open up lots of new experiments and data-fusion insights, since it has otherwise been difficult to combine this many physiological signals at once.
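To make the time-synchronization point concrete, here is a minimal sketch of how multi-rate physiological streams can be aligned onto a single clock. The sensor names and sampling rates are illustrative assumptions for the example, not Galea's actual specifications or software.

```python
# Hypothetical sketch: aligning multi-rate physiological streams on one clock.
# Sampling rates and column names are illustrative, not Galea's actual specs.
import numpy as np
import pandas as pd

def make_stream(rate_hz: float, seconds: float, name: str) -> pd.DataFrame:
    """Simulate a timestamped sensor stream at a given sampling rate."""
    t = np.arange(0.0, seconds, 1.0 / rate_hz)
    return pd.DataFrame({"t": t, name: np.random.randn(len(t))})

eeg = make_stream(250.0, 10.0, "eeg_uV")  # e.g. one EEG channel at 250 Hz
ppg = make_stream(25.0, 10.0, "ppg")      # e.g. optical heart rate at 25 Hz
eda = make_stream(4.0, 10.0, "eda_uS")    # e.g. skin conductance at 4 Hz

# Join everything onto the highest-rate clock with nearest-sample matching,
# tolerating a little clock jitter between the slower streams and the EEG.
fused = eeg
for stream in (ppg, eda):
    fused = pd.merge_asof(fused, stream, on="t",
                          direction="nearest", tolerance=0.5)

print(fused.head())
```

Once every sample of every modality shares a timestamp column like this, the cross-signal comparisons discussed below (for example, lining up an EDA spike against a stimulus shown in the headset) become straightforward joins rather than bespoke engineering.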

The EMG sensors on the face can be used for real-time neural input control, which made them the most notable sensors from an experiential perspective. The other data is harder to get an intuitive sense of, although I did quite enjoy their Synesthesia app, which translates brainwave frequencies into colors and tones within an immersive environment, providing super immersive, multi-modal biofeedback for signals that are otherwise difficult to perceive directly.

I had a chance to catch up with Joseph Artuso, OpenBCI’s Chief Commercial Officer in charge of partnerships and commercialization, as well as with co-founder and CEO Russomanno about the pre-sales for Project Galea starting on May 31, 2022 with hardware partner Varjo. We talked about the development process for Galea, some of the target markets of academia, the XR industry, and game developers (the price will be well beyond the range of consumers), some of the possible use cases so far, and an update on their collaboration with Valve that was first reported by Matthew Olson in The Information.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast, a podcast looking at the future of immersive storytelling and spatial computing. If you enjoy the podcast, please consider supporting it at patreon.com slash Voices of VR. So today's episode is looking at OpenBCI's Project Galea. So Project Galea is something that I covered last year after the Non-Invasive Neural Interfaces Ethical Considerations Conference, where they showed some images for the first time and I had a chance to catch up with Conor about OpenBCI, which I've covered a number of different times over the years. It's a brain-computer interface project. They were originally collaborating with Valve, but then announced recently that they're working with Varjo to be able to actually produce and deliver these devices. It's not a consumer device. It's on the scale of more than $20,000. It's a project that's trying to integrate so many of these different biometric and physiological sensors and put them all into one integrated virtual reality headset, with one consistent timeline to be able to integrate all of these different sensors. There's a lot of ways that neuroscientists and the big major tech companies, as well as game development studios, are able to use something like Project Galea to see what's even possible with the sensor fusion of all these different biometric and physiological sensors, so that you start to have this real-time feedback of what's happening in your body, which is then fed in and measured in some ways to potentially modulate the experience that you're in. So it's still really, really early days. I had a chance to actually try it out, and I'll have some additional thoughts at the end about my own experience of the technology. But I wanted to talk to both the co-founder and CEO Conor Russomanno, as well as Joseph Artuso, who's the Chief Commercial Officer of OpenBCI, looking at the partnerships and the process of commercializing this technology. So this was the last interview of my trip to New York City. On the last full day I was there, I was able to go out to Brooklyn and go to their offices, have a chance to do a demo of OpenBCI's Project Galea, and then sit down for this conversation. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Conor and Joseph happened on Monday, June 13th, 2022. So with that, let's go ahead and dive right in.

[00:02:25.433] Conor Russomanno: My name is Conor Russomanno and I run a company called OpenBCI, and we recently launched Galea, at least the pre-sale of Galea, which is a physiological sensing add-on for AR and VR headsets.

[00:02:41.373] Joseph Artuso: Great. And my name is Joseph Artuso. I'm the chief commercial officer at OpenBCI. I've been working with Conor for a long time now and, you know, kind of handle the partnerships, projects, commercialization, most of the non-technical work that goes on with Galea and with OpenBCI.

[00:02:57.473] Kent Bye: Maybe you could give a bit more context as to your background and your journey into doing brain-computer interfaces and VR.

[00:03:04.719] Conor Russomanno: Yeah, so it's been over a decade now that I've been working with BCIs, which seems kind of crazy given how early and nascent the industry still feels. But I got into it in grad school. Actually, after suffering a couple of concussions in undergrad, I had played football in college and also rugby and banged my head pretty good a couple of times. And that got me interested in psychology and the human brain. And then in grad school, I found a tutorial on how to build low-cost DIY EEG devices. And that was kind of the beginning of my career, really, which led to the creation of OpenBCI, which was an attempt at making that easier and cheaper for DIY scientists or labs without a lot of money, you know. And now, fast forward eight years, we are, you know, I like to think, kind of leading from the front in addition to kind of pushing from the back with the creation of Galea, where it's something that's really never been done before.

[00:04:11.647] Joseph Artuso: The first BCI device that Conor built was in, you know, actually this might've been at the gold, but the basement of one of our earlier apartments. I've known Conor since college and been along for the journey with OpenBCI since the start, but then started getting involved on the side. My background's more marketing and advertising. I was managing things for OpenBCI as they were getting started and then joined up full time three, four years ago now. So it's been cool to see it from the start, ever since he started strapping Arduinos to a Yankees hat in one of the apartments we lived together in.

[00:04:42.434] Conor Russomanno: Joe also played rugby with me, so he also banged his head pretty good a couple times. So he has maybe a similar selfish interest in making sure that we figure out how to undo some of the damage we did when we were younger.

[00:04:58.408] Kent Bye: I know that you've done a number of different Kickstarters and have been selling different modules to do EEG for a number of years, everything from people taking the sensors and putting them onto 3D prints to selling different iterations of the technology, and now integrating it with virtual reality technology seems like the next phase of taking all this neural input and fusing it together with these different immersive experiences. I just had a chance to do a demo of Project Galea, and there's a lot of sensors that are being integrated, and I think the challenge from this point forth, as this headset becomes available, is that it'll be up to the wider research community, I guess, to start to figure out how to synthesize and fuse all these different inputs that are coming in. So maybe you could talk a bit about where you're at right now: you've announced Project Galea, Varjo is now the new partner to put this into production, you're just coming back from AWE and showing some of the demos in a semi-public context for the first time, and, you know, where you're going from here.

[00:06:01.132] Conor Russomanno: Yeah, so great question. I think it's important to note that Galea is not for consumer use at this point. You know, it's really a tool. I think I've been recently referring to it as the super tool. It's so many different types of sensors. I think we can honestly say that it has more sensors than a Tesla. And it's really kind of the bridge between everything that we've been doing for the past six or eight years at OpenBCI and what I think is, you know, AR and VR, which is really the kind of landing strip for the next generation of modern computers. Galea is a platform more so than it is a product, right? Like we want companies and researchers and labs to be discovering how to use Galea and potentially how to simplify Galea, or strip some of the sensors away, or put it into a sleeker form factor for specific use cases. But, you know, right now we tried to pack all the bells and whistles into a single tool and have all that data time-locked and contextualized against all of the stimuli that you're putting into the VR headset. So Galea is this really, really cool tool. I think in a lot of our previous conversations, we've talked about this idea of closed-loop computing, where you have both read and write capability to and from the human brain or the human mind. And I think that was really what we were trying to accomplish with Galea: how do we build the best bi-directional closed-loop super tool, right, where you have immense power and capability to modulate cognition, but also the ability to simultaneously record it.

[00:07:34.387] Kent Bye: Yeah, I'd love to hear what you've discovered now that you've been creating this as an early prototype. What have you been able to do internally in terms of all this raw data that's coming in and trying to synthesize it in a way where you're observing the data in real time, doing different activities to be able to see things move? And maybe it's worth at this point going over each of the sensors and how you see them starting to integrate, because I just did a demo and saw a bunch of sine waves with lots of numbers and data just flowing by, and, you know, it's kind of hard for me to comprehend or understand what all the raw data means. But maybe you can explain a little bit in terms of the different sensors that are there and what you see might be possible from an experiential perspective of starting to integrate that into experiences.

[00:08:21.165] Conor Russomanno: Cool. I'll start with the kind of technical descriptions of the sensors, and then I'll let Joe touch on, I think, the two subsets of customers, one being the researchers and the other being more like the "we don't know what to do with the squiggly lines" people. But the sensors that are in Galea: we have eight active EEG sensors on the harness of the headset. Those are custom designs, and the EEG is measuring electrical brain activity from the scalp non-invasively. In the face interface, we have a PPG sensor, which is optical heart rate and heart rate variability, and you can derive some other stuff from that. We also have an electrodermal activity sensor network, so that's measuring moisture on the skin. So if you get up on stage or you're in a social situation where you're a little bit anxious, you might start sweating on your brow. That kind of stuff happens all the time to different degrees, and so we can measure those minute shifts in moisture due to being aroused or stressed or anxious. We've also got muscle sensors above and below each eye. So in the demo that you just tried, we were using muscle inputs from the face to control characters in a game. I'm a big believer in kind of repurposing micro-expressions and little inputs from above the ear and around the face for interaction, as opposed to trying to use EEG or brain sensing for real-time control. We also have an EOG network, so electrooculogram, which is kind of like electrical eye tracking. And then to complement that we do have an image-based eye tracking system that's part of the Varjo headset that will complement the eye tracking data. What did I forget? I guess the accelerometer. So we also have, you know, kind of an IMU for tracking movement as well. So six different types of physiological data, in addition to obviously the Varjo Aero VR, and potentially XR, streams, which are very, very high-end enterprise and prosumer VR experiences.
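As an aside on what working with the raw data tends to look like in practice: OpenBCI's boards are typically streamed through their open-source BrainFlow library, and the short sketch below pulls a few seconds of timestamped data from BrainFlow's built-in synthetic board as a stand-in. Whether and how Galea itself exposes each of these sensor types through BrainFlow, and under which board id, is not specified in this conversation, so treat the board choice here purely as an illustration.

```python
# Minimal sketch of streaming raw, timestamped data via BrainFlow (pip install brainflow).
# The synthetic board is a stand-in; real hardware would use its own board id and params.
import time

from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(5)                      # let ~5 seconds of samples accumulate
data = board.get_board_data()      # 2-D array: one row per channel, one column per sample
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(board_id)      # which rows hold EEG
ts_row = BoardShim.get_timestamp_channel(board_id)   # shared timestamp row

print("EEG block shape:", data[eeg_rows].shape)
print("first timestamps:", data[ts_row][:3])
```

The key point for sensor fusion is the shared timestamp row: every modality that comes off the same board arrives already aligned to one clock.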

[00:10:14.743] Kent Bye: What was the sensor on the ear that you put on? Cause that usually measures heart rate, but you also said there's an optical one as well.

[00:10:20.920] Conor Russomanno: Yeah, so the sensor on the ear is, there's two, one on each ear. One is a reference for all of the EEG channels and the other one is a ground that grounds the electrical ground plane of the PCBs into your body and kind of puts them at an electrically neutral or equal level.

[00:10:36.498] Joseph Artuso: Yeah, and going back to your previous question, just some context for Galea and who's using it and who the intended customers are. One of the big reasons why we built it in the first place was because, in addition to Conor's interest in head-mounted displays as kind of the future form factor of personal computers, VR has really been part of a renaissance in neuroscience over the last decade or so, with much more realistic experiment environments. And OpenBCI was seeing our own customers take our existing products and combine them with other modalities. They're taking our EEG products and they're combining them with a heart sensor, they're combining them with an eye tracker, and then they're just sticking a VR headset on over all of it. And so we saw in our own product surveys for years that what they wanted was integration with VR headsets and integration with multiple different types of sensors, because, you know, you can buy an individual sensor for each, but then you have to align all the data sets together to a single clock in order to be able to draw the exact type of conclusions you want about when the body's reaction changed based on the audio or video stimuli. Really, what the feat of Galea is, is giving the researchers and developers that are working at the intersection of neurotechnology, BCI, and spatial computing a better starting point. You know, there are people out there that could look at this and go, yeah, I could build this myself maybe, but it would take me quite some time. It would take a pretty talented team, you know, a while to make this electrical system. You can start with Galea as a reproducible starting point for your research and then go further. You know, another reason why we built this is because of more recent research in the space. One of OpenBCI's advisors at Columbia, Paul Sajda, has done a lot of research on this, where they'll use eye tracking to predict a certain decision-making point. Or they'll use eye tracking and they'll use EEG separately to predict the same action. And then they'll use them together and they'll find that their predictive capability increases with the extra modalities of data that they add. So if you're using eye tracking and EEG together, that's way better than either one alone. And then if you're able to also layer in muscle data, if, for instance, you're doing an exercise where they're scanning a room looking for a face that they recognize, when you can add more data types, you're able to draw more precise conclusions in a lot of spaces, which is why researchers are so interested in having multiple modalities in one experiment rather than having to stitch it together themselves separately.
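The "each added modality improves the prediction" pattern that Joseph attributes to Paul Sajda's line of research is easy to illustrate with a toy classifier. The sketch below uses purely synthetic features, so the numbers say nothing about real eye-tracking or EEG data; it only shows the mechanical pattern of concatenating time-aligned feature sets from two modalities and comparing cross-validated accuracy.

```python
# Toy illustration of multi-modal feature fusion (synthetic data, not real signals).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)  # e.g. "recognized a face" vs. "did not"

# Each modality carries partial, noisy information about the same label.
eye_feats = labels[:, None] * 0.8 + rng.normal(size=(n, 4))   # gaze-derived features
eeg_feats = labels[:, None] * 0.8 + rng.normal(size=(n, 8))   # EEG band-power features

clf = LogisticRegression(max_iter=1000)
for name, X in [("eye only", eye_feats),
                ("EEG only", eeg_feats),
                ("eye + EEG", np.hstack([eye_feats, eeg_feats]))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name:10s} mean accuracy: {acc:.2f}")
```

With both feature blocks sharing the same rows, that is, the same time-aligned trials, the fused model generally outperforms either modality alone, which is the practical payoff of having everything on one clock.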

[00:13:06.316] Kent Bye: Yeah, I had a chance to attend the Canadian Institute for Advanced Research's future of neuroscience and VR conference here in New York City, and I had a chance to meet someone from Control Labs, where they were talking about the EMG interfaces that they have on the wrist, where they're able to isolate down to the firing of individual motor neurons. I haven't had a chance to try out the Control Labs interface, but I have talked to some people who have used it and who talked about how it's quite magical to be able to use that as a neural interface. And so, on having an EMG sensor on the face: going to the NeuroGaming Conference, which then rebranded into XTech and eventually folded as a conference that was gathering the community together, a lot of the consensus I was getting back in 2017, 2018 was that a lot of those biosensors were not good for real-time input, but EMG actually was really great for real-time input. And I found, similarly, that by twitching my face I was able to move this cat figure running down the street, and it felt like I was having my agency be expressed in real time, which was great. Sometimes there were false positives and things were moving, or sometimes it moved unexpectedly, but more or less I was able to control it in a way that felt like I was able to have some expression of agency. Now, that was somewhat fatiguing, because I'm not used to squishing my face around to express agency. But I'm wondering if you've had any experience with more wrist-based EMG, and whether you expect that you're going to start to see people use facial expressions as a way of triggering things, or if something on the wrist, with the hand gestures that we're kind of used to using to express our agency in the context of computing, might be a little bit better. Or, since this is one of the first demos, I'd love to hear some of your reflections on the different parts of EMG and whether or not you expect people to continue to use it for taking action.

[00:15:03.495] Conor Russomanno: Yeah, I think this is a great question. And anyone who's trying to use EEG for real-time interaction, I think it's futile. I don't think it makes sense. And I'm not saying that EEG is not valuable. It is valuable, but it's much more useful for passive BCI, kind of opportunistic sensing, as Paul Sajda, our advisor, puts it. It's, you know, looking at the brain before, during, and after critical moments in time and looking at shifts in brain-wide mental states and attention, cognitive workload, things like that. But when it comes to real-time interaction, you know, our muscles are very, very good at it, even if we're using one muscle for something it's not intended for, right? And a great example of that with technology is that we speak with our fingers now without even thinking about it, right? When you type on a keyboard, you can type almost as fast as you can speak. And you do it without thinking about any individual key on its own, right? At this point, we're just talking through our hands. And so, you know, I think if we can learn how to do that with our fingers, then we can learn how to use other muscles for unintended uses, right? And so, I don't think that it's going to be one or the other, like whether we're going to use our wrist or our face for controlling things in mixed reality. I think the answer is both. There are dozens if not hundreds of muscles above the ears and around the face that we barely know how to control, because we've never needed to, or been trained to, voluntarily control them, right? They're kind of subconscious, if they are used at all. And so with regards to Control Labs, I think it's really interesting what they're doing, but I also think it's not going to be useful for every AR and VR situation where you want kind of a complementary input, because we use our hands to grab things and touch things and actually do what we would normally do with our hands, which on a 2D screen with a mouse and a keyboard was kind of irrelevant because our hands were taken up, or accounted for, by the mouse and keyboard. But now with VR and AR there's hand tracking; we want to be using our hands to actually manipulate things in 3D. Having additional inputs or additional controllers, imagine like the PlayStation controller, except all of those buttons, X, square, triangle, circle, are buttons that you've mapped onto your face. Adding those additional inputs to gestural control the way we would normally use our hands, I think, is a really interesting combination. When it comes to wrist-based EMG, I think there are going to be plenty of opportunities where we actually want to use our hands without people knowing, right? Where you've got, you know, your hands in your pockets, or you're more subconsciously interacting with mixed reality content and information in a private way. And I think things like Control Labs and wrist-based interaction, where you really don't want to see someone's hands waving all over the place but you still want a high level of input or control, I think that's the most obvious use of a technology like that. But yeah, I don't think that they're going to necessarily compete with each other. I think we're going to find the path of least resistance for different applications and different use cases, and we'll end up naturally, you know, the technology will move in that direction.
Which is why I think we're starting to see most, you know, quote-unquote neural interface companies who are making new interaction controllers either using EMG, which is smart, or using SSVEP, which is like flashing lights that are riddled all over your demo or your experience, which is effective, but it's also quite annoying because then everything in your environment is flickering, which is not ideal for many people, for most people.
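To make the face-as-buttons idea a bit more concrete, here is a minimal, hypothetical sketch of the classic EMG pipeline for this kind of control: rectify the raw signal, low-pass filter it into a smooth activation envelope, and compare the envelope against a threshold to get a binary "button". Nothing here reflects OpenBCI's actual cat-runner implementation; the cutoff frequency, threshold, and simulated twitch are made up for illustration.

```python
# Hypothetical EMG-as-button sketch: rectify, smooth, threshold.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emg_envelope(emg: np.ndarray, fs: float) -> np.ndarray:
    """Rectify raw EMG and low-pass filter it into a smooth activation envelope."""
    sos = butter(4, 5.0, btype="low", fs=fs, output="sos")  # ~5 Hz envelope
    return sosfiltfilt(sos, np.abs(emg))

def face_button(envelope: np.ndarray, threshold: float) -> np.ndarray:
    """Turn the envelope into a boolean 'button is pressed' signal."""
    return envelope > threshold

fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
emg = 0.02 * np.random.randn(t.size)        # baseline muscle noise
emg[250:350] += 0.5 * np.random.randn(100)  # simulate a brief cheek twitch

pressed = face_button(emg_envelope(emg, fs), threshold=0.1)
print("samples with the 'face button' held down:", int(pressed.sum()))
```

A per-user calibration step that picks the threshold from a few seconds of relaxed versus flexed baseline is the usual way to tame the kind of false positives Kent describes above.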

[00:18:57.682] Joseph Artuso: So, in addition to using our wrists and the muscles in our hands, which we're very used to, you know, how can we incorporate muscle signals from elsewhere on the body as interaction opportunities and as actuators, to use language from another field, in AR and VR?

[00:19:15.078] Kent Bye: Yeah, I know that you had originally announced Project Galea with Valve, and then once you announced the production version, you're now collaborating with Varjo. I'd love to hear any context for what's happening with Valve, and, yeah, just kind of an update there.

[00:19:29.360] Joseph Artuso: Yeah, you know, we're still in close contact with the team at Valve and they're still very interested in Galea. Ultimately, you know, we had always intended to bring a product to market and sell it widely. It was really essential for our customers, based on the surveys that we'd run, to have image-based eye tracking integrated into the final device. So when it kind of came time for us to get ready to manufacture the beta devices, coming up with a custom version of the Valve Index that included image-based eye tracking didn't make a ton of sense at the time for both companies, which is why we started looking elsewhere and found the Varjo Aero, which is an excellent device that we're pretty excited about.

[00:20:06.634] Kent Bye: So it's got that built in. I know you were collaborating with Tobii before, so is Tobii still involved, or is it their own sort of eye-tracking solution that they're using?

[00:20:14.018] Joseph Artuso: Varjo uses their own sort of homegrown eye-tracking solution.

[00:20:18.401] Kent Bye: Okay, yeah, makes sense as you start to get into production to switch over. I'd love to hear any insights in terms of whether you've already heard of different academic institutions that have active research programs for this kind of sensor fusion of all these different inputs and trying to make sense of them. So I'd love to hear any additional context there of what kind of research is being done on that front.

[00:20:34.690] Joseph Artuso: I, you know, I can't just rattle off a list off the top of my head, but I'd throw out Paul up at Columbia. I'd definitely throw out Mark Billinghurst, that's another name in the space. There's the MuSAE Lab, right, Tiago Falk and company out there, another group that we've talked to. But it's kind of interesting: OpenBCI with Galea is sort of branching out into a larger field, you know, of AR and VR, where far more people are paying attention to that space than had been paying attention to the BCI and neurotech space. Although, you know, neurotech is also shooting up into mainstream prominence. But we get lots of questions like, does anyone want this? Is this something that people are doing? Are you building a road to nowhere? We're building this because people have been doing it and asking for it. We know that the research organizations, we know that labs at universities, are working with this combination of sensors. We have a slide in one of the decks we've prepared showing, over the years, different DIY attempts to combine things together into a single system. We're trying to serve those people, as well as, you know, we've seen almost every major consumer tech device manufacturer, from laptops to cell phones to headsets, buying OpenBCI equipment over the years. I don't have to explain the acronyms to as many people as I used to. You know, we get on the phone and I'm not telling them what EDA is and what EEG is. They're saying, yeah, we know, you know, what are the specs of yours? How soon are you able to ship it? Can we do this with it? So, you know, we've been pleasantly surprised. There's definitely a core group of customers for Galea who kind of already know what they want to do because they're already doing it. And they just want somebody to solve the hardware for them, give them a software package that gives them access to the raw data, and they'll build from there. And that's what we've sort of set out to satisfy initially. There's a much larger community of developers and companies and users that we hope to address in the future with Galea, which is more the "I don't know what to do with the squiggly lines" crowd. I don't know what to do with the raw microvolt values of these sensors. But if you can give me a zero-to-one metric for attention, if you can give me a zero-to-one metric for stress or cognitive load or different kind of one-level-up quantifications and classifications of the states that the sensors are meant to address, then they have lots of ideas and areas where that can be put to use. That's not necessarily who we see as the initial beta customer for Galea. We're going to be adding features to the software set that get into that level. But we know that there's a small but extremely capable group of people, both in industry and academia, who are already working with systems like this. And they're just looking for something that's easier to start building from than completely custom.
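For readers wondering what a "zero-to-one metric" derived from raw microvolt values might even look like, one common family of such metrics is relative band power: the fraction of EEG power falling in a frequency band of interest. The sketch below is a generic illustration of that idea, not the metric OpenBCI ships or plans to ship.

```python
# Generic relative band-power metric (0..1) from a single EEG channel.
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, low, high):
    """Fraction of total 1-40 Hz power that falls between `low` and `high` Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    in_band = psd[(freqs >= low) & (freqs <= high)].sum()
    return float(in_band / total)

fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
# Simulated signal with a strong 10 Hz (alpha) rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

print("relative alpha (8-12 Hz):", round(relative_band_power(eeg, fs, 8, 12), 2))
print("relative beta (13-30 Hz):", round(relative_band_power(eeg, fs, 13, 30), 2))
```

Real "attention" or "cognitive load" scores layer classifiers, artifact rejection, and per-user baselines on top of features like this, which is exactly the one-level-up work Joseph is describing.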

[00:23:38.311] Kent Bye: Yeah, when I look at all the different sensors, it was actually after the Non-Invasive Neural Interfaces Ethical Considerations conference back in May of 2021, a conference that was sponsored by Facebook Reality Labs at the time, now Meta Reality Labs, as well as the Columbia NeuroRights Initiative. It was very interesting to see all those different discussions, and the takeaway that I had was seeing the latest Project Galea prototype pictures, and I was like, oh wow, there's all these sensors. And so I did a bit of an audit of all these medical sensors. I feel like the medical domain is probably one area where the sensors have been evolving within a medical context, but now they're being brought into a consumer context. And so as we have them now in research devices and enterprise contexts, it'll eventually get into consumer XR devices. I think it's just a matter of time, and this is the first step towards that: seeing what's useful, seeing how to optimize it, and bringing it up to scale. But when I look at the different types of sensors, I see there's actions, intentions, behaviors that you get from active presence. You have the mental presence in terms of the cognitive load, what your thoughts are, what your attention is, the social dimensions of the social context. And then you have the emotional presence of the micro-expressions in your affect and the galvanic skin response, and seeing to what degree you are being stimulated in these experiences. And then with embodied presence, you have different degrees of attention, eye gaze, muscle strain, you know, all these other things. As you fuse all those together, you get a map of the phenomenological experience of the user: what their actions and behaviors are, what they're thinking about, their mental states, as well as their emotional state and their physiological and physical states. So, as I see that, it feels like you're getting new insights into those and then potentially feeding that back into the experience in real time. But I think it's probably going to take a while for the experiential designers in a gaming context, or in other experiences, to actually comprehend and do that, because of the price and the scale of this right now. But that's where it's going in the future. And it seems like maybe the first step is to use some of these in the medical context for specific healing or rehabilitation. Or, yeah, I'd love to hear any thoughts or reflections on that.

[00:25:51.590] Conor Russomanno: Yeah, I mean, I think there are many, many medical and clinical uses for a system like Galea. You know, obviously, Galea at first is not a medical device. It's a research tool. But I think derivative devices that are put through the, you know, jump through the hoops, go through the tape, FDA, get the clearance that you need, I think for treating types of anxieties and stress and PTSD and things like that, you'll be able to use VR in a very meaningful way to render experiences that are digital but seem very real. So for things like immersion therapy, you can have a very safe environment for slowly introducing triggers or stimuli to people in a way where there's an immediate kill switch to the situation: you can just take the headset off. But for things, you know, even more, I guess you could say personal medicine or mindfulness training and meditation, systems like Galea, I think, are going to be really, really valuable for building a more intuitive understanding of the inside of your own head. Like, you tried the very crude demo, the Synesthesia Room, but the idea of the Synesthesia Room is to represent the frequencies of your brain with color and sound, which are things that we understand when we see and hear them. We understand those constructs, red, blue, yellow, green, or if we hear a certain pitch, we have memories associated with those sounds. And if we can turn the mirror inward so that we finally get to see and perceive the inside of our brain in a more intuitive way, I think we'll actually, as humans, start to more intuitively understand our own thoughts. And so the Synesthesia Room is really just kind of a boilerplate, hey, check it out, here's what you can do, and here's how you can represent the inner world externally. But I think that that's really just the starting point. You know, we'll be able to train different internal states of mind for stress, focus, attention, meditation, many different internal states, and you'll be able to, in real time, give a user feedback on whether or not they are hitting or holding certain mental states, and you'll be able to use that for training or guiding the mind towards desired mental states and away from undesired mental states. So obviously there are a lot of ethical considerations: if we actually can prove that we can nudge the brain or the mind in certain directions, then how do we set up the appropriate guardrails to make sure that that technology or that capability is not abused? I mean, getting back to your original question, that's a lot of what was talked about at that event a year ago, which you amazingly live-tweeted line by line. I'll never forget that. The best recap that we had of the entire event was your Twitter thread, which I think is a shame, in that there wasn't more effort put in by the hosts of the actual event to archive and document everything that was talked about there.

[00:28:40.806] Kent Bye: Yeah, it's a shame that they, number one, didn't record it and make it available. It felt almost like an ethics-washing exercise in that sense. I mean, it was a great conversation. It just was a shame that it wasn't captured and preserved and shared, because there was a lot of amazing content there. Yeah.

[00:28:55.836] Conor Russomanno: But anyway, that's sort of another thing to put on the books.

[00:29:00.492] Joseph Artuso: There's a part of the question you asked just now where you were working through it like, hey, in order to create the experience that closes the loop, you know, that uses these signals in real time to change the way the experience happens in VR. I think you're right that, at first, people will be doing that, but in order for that to have its own market, it needs to be consumer scale. It needs to be much cheaper sensors, or fewer sensors, or more simplified than what we've got for Galea today. I think we're kind of a generation away from the video games that use all the sensors in Galea. But right now what's happening is the game companies are using this to augment their own research into how do we build games that provoke the senses and reactions that we want. You know, with any kind of VR entertainment content creation, there are a lot of user research opportunities and user experience opportunities with tools like Galea that people are pursuing today. And they're using that to inform, how do I make a better experience? How do I use this to quantify a user's reaction that I might previously only have been able to capture after the fact through a survey? So for the companies that are creating the experiences, they're going to learn a lot about what's working and what's not, and then along the way also kind of identify, hey, really, if I was going to incorporate some of these modalities into a consumer device, or into the next console, or into the headset that I want to ship in the future, I probably only need these three, not all 36. So I think there's kind of a generation of innovation that'll happen on top of the research tools that are out there. You were also talking about the back and forth between medical and consumer in this space. It is kind of interesting. There is something that just got announced today: Rune Labs and the Apple Watch. You know, there's an FDA-approved investigation, and I just kind of saw some of the headlines, I haven't dived in, but Rune Labs is using the Apple Watch for an FDA-approved investigation into treating Parkinson's. And that's really relevant and possible because that device is at consumer scale. So it's like, hey, this is already out there on so many people; if we can prove that it works, great. And I think that the scientific research has been there for a long time, but the opportunity to do it at that scale is only made possible because the consumer market was there. I think you'll see this kind of back and forth where, first, it gets proved that these sensors are even useful for identifying this type of thing, that gets proved in research, that gets proved in medical contexts. And then, in order to reach the scale where there's an application for this that makes sense, or where we have enough data, or where we can run through a whole trial with it, you usually have to attach it to a consumer market or a consumer device that already exists. So I think it's a really interesting trial that I'm paying close attention to moving forward.

[00:31:52.633] Kent Bye: Yeah, the whole medical dimension, our last conversation covered a lot of those dynamics, meaning that there's probably a lot of these companies, including OpenBCI, that would prefer not to be classified as a medical device because it introduces all sorts of additional barriers. So it sounds like it's medical for research or experiments, but not something that's going to be used in a clinical context.

[00:32:11.644] Joseph Artuso: Not in its current form, not this device. There's been interest from pharma and biotech companies in using Galea in different ways, but not really as part of the clinical treatment process. It's more kind of, okay, great, this is another tool we can use to collect data that might be relevant for our treatments or for our development of other medically approved treatments or drugs or devices.

[00:32:35.442] Kent Bye: Yeah, I wanted to talk about the phenomenological experience of doing Project Galea. First of all, there's a lot of sensors that are on this headset. And as I put it on, it kind of feels like, as you describe it, a squid of sensors that are on your body, kind of poking me in different ways. But that's also just to make sure that each of them is connecting at the right level, because there are a lot of sensors. And so, you know, there was a bit of a feedback calibration process for the sensors. And then going into the Synesthesia Room, there were colors that were there, and having gone to a number of the different NeuroGaming conferences, which then turned into the XTech conference, a lot of the biometric sensor demos that I've done, probably a dozen or two over the last eight years, all kind of have this similar feeling of: I don't quite know for sure what's happening inside my body versus what's being captured, and whether there's any noise loss there, and then to what degree is that accurately reflecting what's happening? And even if I wanted to change what's happening inside of my body to see a reflection, I feel like I don't have much agency to do that. The closest thing, I'd say, is probably being able to close your eyes and maybe go into a little bit of a meditative state. And it sounds like, Conor, you've been doing a lot of that: that feedback loop of using these technologies for a number of years to refine your own contemplative and mindful practice. But it seems like it's a subtle thing, and in order to really feel some of those aspects of being able to directly go inside of your mind and see what's happening, using these meditative technologies could be one of the best ways to see some of the immediate impact of expressing the agency of your inner self. And so I'd love to hear some of your own explorations, because it seems like you have been tinkering in this space, and there's a lot of overlap with Consciousness Hacking and Mikey Siegel. There's a whole movement of other people that have been using technology to augment and accelerate their own contemplative practices. When I saw some previous demos of you on videos, there was just the ability for you to know the different signals and what the signals mean, and then to be able to perhaps modulate them within your own body as you go into these states. So I'd love to hear about your own experiences and journey of using some of this technology to get a lens into what's happening inside of your mind and inside of your body.

[00:34:46.708] Conor Russomanno: Yeah, I'm glad you brought up Mikey Siegel. Mikey, if you're listening, we're overdue for a catch-up. But I've definitely done a number of recorded meditations over the years, and generally when I'm doing that, or when I'm looking at the data afterwards, I'm hoping that I see a good bit of theta or maybe even some delta, because then it means I was able to really chill out a bit. But I've definitely used the technology that we've been building a lot less than I'd like to admit. You know, I think we've been really focused on building the tools, and, you know, we're excited to expand the team and really bring in some real neuroscientists who are going to be putting the devices to use internally and helping us do some science, some real science, internal to OpenBCI. But yeah, I mean, with regards to training, for example, the cat runner demo that you were doing today, some of those little EMG inputs, newcomers or people who are trying it for the first time are really over-flexing all of those muscles, because you haven't quite trained the ability to just very subtly flex, like, one cheek versus the other. But those are the kind of things where now, when I go into EMG, face-based, or head-based demos, I definitely feel like I have more control than the average person over the muscles around my face and around my head. Joe, you should talk about the Luminance guys that came in.

[00:35:57.867] Joseph Artuso: Yeah, just in terms of that: it's pretty easy to modulate your alpha waves by closing your eyes. You can provoke this big spike in an alpha signal, but it takes more practice to provoke other changes in EEG activity. We had some people in as early alpha testers of Galea, this group called Luminance, Dan and Jack over there. They're really focused on meditative practice; their own tools are all about using VR for meditation, and they've been using heart rate sensors to create these experiences that can help you get into the right state, you know, in a meditation context. And when we had them come in and try out Galea, I mean, they were just pumping out alpha activity, pumping out theta activity, and doing things that we can't do ourselves. But because they've been at it for so long, they already know they can change their brainwaves in certain ways that, to kind of the regular person, seem unattainable. When you do start to provide a little bit of neurofeedback, a little red, yellow, green, you know, you're getting there or you're not, you can actually learn that kind of stuff. So it is cool: when you put it in the hands of somebody who's really put in the hours on the brainwave side of things and on the meditation side, you can see real results. I think the other thing that Conor is always saying, and to your point, is whether or not that's intelligible to the user and not washed out by noise from movement or the environment. I mean, that's the mechanical problem of just keeping the sensors in the right locations, which is one of the biggest challenges of this whole thing.
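A tiny sketch of the kind of red/yellow/green-style mapping Joseph describes: turning a 0..1 brainwave metric (for example, the relative alpha value computed earlier) into a color and a tone. The specific hue and pitch ranges are arbitrary choices for illustration and say nothing about how OpenBCI's Synesthesia Room actually renders its feedback.

```python
# Illustrative neurofeedback mapping: a 0..1 metric -> a color and a tone.
import colorsys

def metric_to_color(metric: float) -> tuple:
    """Map 0..1 onto a hue sweep from red (low) to blue (high), returned as RGB."""
    m = min(max(metric, 0.0), 1.0)
    r, g, b = colorsys.hsv_to_rgb(0.66 * m, 1.0, 1.0)  # 0.0 = red, 0.66 = blue
    return int(r * 255), int(g * 255), int(b * 255)

def metric_to_tone_hz(metric: float) -> float:
    """Map the same 0..1 value onto a pitch spanning two octaves above 220 Hz."""
    m = min(max(metric, 0.0), 1.0)
    return 220.0 * (2.0 ** (2.0 * m))

for m in (0.1, 0.5, 0.9):
    print(m, metric_to_color(m), round(metric_to_tone_hz(m), 1), "Hz")
```

Feeding a smoothed metric into a mapping like this once or twice a second is the essence of the closed feedback loop both speakers keep returning to.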

[00:37:53.518] Conor Russomanno: To add to that, it's going to be like the keyboard, right? Like, the keyboard was introduced as a technology, and somehow it provided so much utility that people were willing to go through the motions of learning a new tactile language for communicating with their fingers, right? And I think that BCIs aren't just going to work, the same way that keyboards didn't just work. Like, we had to learn how to use them; we had to learn how to adapt to a new type of input or interface. And I think that's just going to be how it is. And if people think that we're not going to need to go through the work to figure out how to translate what we understand into what the computer can understand, then they're wrong. Obviously, we try to lower the barrier of entry as much as possible, right? Like the placement of your most common keys, right? Obscure letters are on the fringe, because you're barely touching them, but your primary letters are around your index fingers and your middle fingers, right? So there's going to be a similar user experience challenge that goes into designing next-generation EMG inputs and EEG internal-state controllers, if you will. But there will be a learning curve; we're going to have to figure out how to do it with the technology.

[00:38:57.678] Kent Bye: And finally, what do you think is the ultimate potential of virtual reality and these brain-computer interfaces and what they might be able to enable?

[00:39:07.588] Conor Russomanno: Big one. I personally have been saying this a lot recently, but I just think that what we're building is the future of computers. I think it's head-mounted displays combined with physiological sensing, those two forces coming together, where you have the power of a desktop display but in front of your eyes, something you can walk around with at all times, where you can switch between full opacity and half opacity and be in mixed reality or virtual reality. We're going to have one of those for work, we're going to have one of those for personal use, and computers of the future will have a mind processing unit that is a core architectural component of the next generation of computers. And so I think that we're really just throwing building blocks out there into the world, and I think that that eventual computer is going to be much sleeker, you know, much cleaner looking than Galea, but a lot of the principles that we're working on now are kind of the core building blocks that will be integrated into a system like that. You know, the future is highly personalized computation. And the only way we're going to get there is to have computers that are, you know, not designed for a distribution of people, but rather designed to just start that way: hey, the default settings for this operating system are oriented for people within one standard deviation of the norm in terms of the way they're intended to use this device, but over time, this operating system is going to get to know you, and it's going to understand how you use your computer, not how the average person does. And I think the way that that happens is we have things like Galea, or sensors like the ones in Galea, that are decoding emotions and internal states of mind and then mapping that information onto what we can touch, what we can see, what we can hear in a modern computer. You know, so Galea is really our best attempt at the next step in the ladder of getting from where we are with modern computers to where we think we will be.

[00:41:09.528] Joseph Artuso: Yeah, exactly. You know, we've been doing a lot of hiring lately, and when I tell people about it, they ask me, like, what are we going to be doing? What's the company, where's the company going, what's the long-term goal? And I think, you know, the 10-year goal, the target that we're pointing the ship at way in the future, is building the whole computer. You know, we want to be involved in building the future of computers, which we believe to be head-mounted and physiologically connected to your body. And Galea, you know, maybe we'll look back at Galea and laugh, like we laugh at computers from the '60s or whatever, you know, they take up a whole room, but you've got to start somewhere.

[00:41:47.456] Kent Bye: Anything else left unsaid that you'd like to say to the broader immersive community?

[00:41:51.138] Conor Russomanno: Just thank you, Kent, for everything that you do. Tell them the story.

[00:41:57.858] Joseph Artuso: You can learn more at galea.co and openbci.com. And also, Kent, I think you should make a calendar of events for people in this space to subscribe to, if you haven't already, because the list of conferences that you've sort of name-dropped throughout this interview alone is like the hit list of places to pay attention to. So that's cool.

[00:42:19.161] Kent Bye: Awesome. Yeah, and I'm really excited to see where this is going, with the caveat that I think there's still a lot of things to be figured out with neurorights and trying to get the legal frameworks in place to be able to protect privacy. I think in the enterprise context, I feel a lot safer, but as it moves into the consumer context, that's the realm that starts to really freak me out. I feel like this is the next step of where it's going to go in the next five to 10 years. And so it's exciting to get a little window into the future here today. So thanks for taking the time to show me the latest demos of Project Galea and sitting down to talk about the future of computing with OpenBCI. So thank you.

[00:42:51.962] Conor Russomanno: I look forward to our next conversation already. Thanks.

[00:42:57.039] Kent Bye: So that was Conor Russomanno. He's the co-founder and CEO of OpenBCI, and they've just recently launched the pre-sale of OpenBCI's Project Galea. As well as Joseph Artuso. He's the chief commercial officer of OpenBCI, who's focusing on partnerships and commercialization of their brain-computer interface technologies. So I have a number of different takeaways from this interview. First of all, just to talk a little bit about my experience of Project Galea. So there's a lot of sensors that are on this thing. And so one of the things when you put on Project Galea is that there's an interface to be able to see whether or not you have a strong connection. And my experience was that there were a lot of sensors that weren't necessarily detecting, and so they had to keep tightening it up. So it ends up feeling like this thing that's on your head where some of the different sensors are pressing into the skin. And so when I took off the headset, there were little divots in my head from trying to make sure that these sensors had enough of a strong connection. So I'd say one of the challenges with something like this, as you start to add more and more sensors, is just ensuring that there's enough of a connection so that the sensors are able to actually work properly, but then also balancing that with the comfort and the tolerance, because if there isn't a strong connection and it's not getting the right data, then to what degree is that going to be a failure state for all the different stuff that you're trying to capture? So the striking thing, for me at least, is that the EMG-related stuff, where you're moving your muscles and it's the electromyography that's able to see the shifts of your muscles, is something that gives instantaneous feedback. And I think in the future of spatial computing, we're going to start to see these wrist-based devices from Control Labs; I've done an interview with Thomas Reardon and other representatives from Control Labs, which was acquired by Meta. They're doing EMG-based, wrist-based inputs that are able to detect the firing of individual motor neurons. The first iteration may be released sometime in 2022 in terms of a wristwatch. And then eventually, if that's successful, the second and third generations could start to integrate more and more of that Control Labs technology into the wrist. But if you're wearing something on your face, then there are other ways to use your face as an input device. So the analogy that Conor Russomanno has given is that with the advent of normal computers, you have the keyboard and mouse, and it took a long time for us to really get used to using them, to be able to type on a typewriter and then a keyboard, and also use a mouse. So that's something we had to train ourselves to do. And so with the new spatial computing, are there going to be new things that we have to train ourselves to do? And is it going to be easy to train ourselves to do them? For me, it wasn't necessarily comfortable to use my face to move stuff around. But if I want to have my hands free, then maybe using this type of facial-based twitching of muscles, or maybe it's somewhere else in my body, is a way to interface with the technology. There's always going to be assistive uses of this technology. So for some people who don't have access to their hands, this is going to be amazing.
But it was somewhat fatiguing, as I was using my face, moving this cat left and right and kind of squinting and moving my cheeks around to move an object in a space. So that's something that I think is yet to be seen as to whether or not that's going to be a viable path forward. I don't think it's necessarily the most compelling aspect of the project. There are probably other sensors that they're still going to have for neuroscience research, or other ways that are maybe more of a contextual research: like you put yourself into a situation, and you're able to measure all this different stuff from a neuroscience or physiological perspective or a sociological perspective, putting you in different conditions and seeing what kind of feedback you get. Or even presence researchers, if you're able to integrate all these different sensors all at once, to see if there are going to be more objective measurements of what's happening physiologically when you have different dimensions of presence. That's all yet to be seen. I think right now a lot of this stuff is probably going to be medical applications, or applications that are in the gaming context using different aspects of feedback to try to understand what's happening in someone's experience and then modulating that experience in real time. The other demo that I saw was more of the Synesthesia Room, which is changing colors and tones as you're reaching different brain states. I think that's actually going to be a really compelling use case in terms of consciousness hacking and mindfulness: being able to train your brain to reach different states and to get real-time feedback that is multimodal. So you're seeing different colors, you're hearing different tones to help tune your body and give you feedback. So there's this real-time feedback of what's happening inside of your body, and you're getting something that is being fed in as a sensory input to help tune your states of consciousness, in a way that you're able to reflect on what's happening but also able to potentially control what's happening in your body and to modulate what's happening internally, in a fashion that is creating this closed feedback loop cycle that Conor has talked about in the last couple of conversations that I've had with him. So, yeah. And I think there's probably going to be a lot of other enterprise or research applications that are less of an application that you just kind of show off as a demo, and more something that has a lot of other utility. It's not priced for consumers; it's, you know, in the multiple tens of thousands of dollars to get access to this technology. But they're saying this is really the first iteration of the next generation of these brain-computer interfaces as we start to move forward, to have as a research device for these different companies, and to be able to have all the sensors fused together so they have one clock, all synchronized, to be able to do sensor fusion from these different modalities of all the different input that they're taking in. And we talked, here in this interview and in the previous one, about looking at all these different dimensions of presence, active presence and mental and social presence and emotional presence and embodied presence, and all these different kinds of modulations of that, and how you start to quantify and objectify those and try to put them within the context of whatever the virtual environment is. So yeah, really excited to see where this continues to evolve.
It's very, very early days, still a long ways to go in terms of It's a balance between making something comfortable and getting all the utility that you need. The more sensors you have, the more that you have the potential for having a degraded user experience just because of the comfort levels that you have this squid-like device that's on your head measuring all these different things. But the potential is that there's going to be ways of taking our embodiment in these different physiological and biometric markers and feeding back into real-time experiences. With the caveat, there's a lot of neuroethics implications of the right to mental privacy, the right to identity, and the right to agency that I think are the big ones in terms of the neurotechnologies that have to be also considered in terms of what are the limits in terms of what happens as data and what kind of contexts should be constrained. Contextual integrity from Helen Eastbaum says that it should be up to the context as to determine how you're going to be using this data. and there may not be some universal things that you can say, don't ever use this for any context at all, or how do you put limits on that? Because there could be ways in which these types of technologies are giving access to all this different information, but it could be used and abused to be able to profile people psychometrically or to take data into get into the wrong hands and to be used for harm in lots of different various ways. So, lots of things to consider there. Both the power of the technology, but also the perils in terms of where this all may go in the future. So anyway, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast, and if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a list of supported podcasts, so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
