#996: OpenBCI’s Project Galea collaboration with Valve & Neuro-Privacy Implications of Physiological Data

conor-russomanno
OpenBCI’s Project Galea was originally announced on November 19, 2020 as a “hardware and software platform that merges next-generation biometrics with mixed reality.” OpenBCI has been collaborating with MIT Ph.D. student Guillermo Bernal to integrate PhysioHMD’s design, which includes EOG, EMG, EDA, and PPG sensors in addition to 10 EEG channels and eye tracking, into a single headset. On January 24th, Valve CEO Gabe Newell told New Zealand’s 1 NEWS that Valve was “working on an open source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities.” 1 News reported that Valve was collaborating with OpenBCI. Then on February 4th, 2021, Tobii announced that it was “engaging in research collaboration with Valve and OpenBCI by incorporating Tobii’s eye tracking technology with elements of Valve’s Index hardware to produce developer units for the recently announced Galea Beta Program.” The Project Galea dev kits are expected to ship sometime in 2022, and Newell told 1 NEWS, “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake.”

I first interviewed OpenBCI co-founder and CEO Conor Russomanno at Rothenberg Ventures’ Founder’s Day on May 16, 2016, which was the day before the 2016 Neurogaming Conference. It was also after the Silicon Valley Virtual Reality Conference, April 27-29, 2016, which is where I first really started covering the topic of privacy in VR, after an UploadVR article on Facebook & VR privacy caught the attention of Senator Al Franken, who wrote Oculus a letter. I first spoke to Russomanno about some of the privacy implications of neuro-technologies back in 2016, and the ethical stakes of neuro-tech have only grown as the capabilities of physiological measurement devices, and what can be inferred from them, have continued to increase.

I recently heard Russomanno speak about Project Galea on May 26th at the Non-Invasive Neural Interfaces: Ethical Considerations symposium co-sponsored by the Columbia Neuro-Rights Initiative and Facebook Reality Labs.

He was also discussing some of the ethical and privacy implications of neuro-technologies, and he got into an interesting debate with Rafael Yuste, whom I interviewed about Neuro-Rights in episode #994. They were debating whether or not technologies that are able to measure physiological data should be classified as medical devices capturing medical data. Russomanno doesn’t believe the hardware should be regulated by the FDA under medical device regulations, since a project like OpenBCI would likely never have been able to exist as it does today, but he’s open to the possibility of giving special treatment to the data. Ultimately, Russomanno hopes that someday consumers will have more ownership and control over the data captured by these devices, but there’s a long way to get there from where we are right now.

I had a chance to talk with Russomanno on June 4th about the evolution of OpenBCI into Project Galea, a little bit about how their collaboration with Valve and Tobii came about, what types of insights they’re able to gather from these different physiological and biometric measurement sensors, the value of combining different sensory modalities together, and the potential of closed feedback-loop immersive systems that can help track and modulate different aspects of your brain, mind, and ultimately consciousness. We also talk about some of the potential healing, quantified-self, and consciousness-hacking applications, as well as the risks of how these technologies could undermine our rights to mental privacy and agency. There are still more open questions than answers right now, but OpenBCI’s open hardware approach has seeded quite a lot of experimentation and research evaluation by major XR companies across the industry.

I’ll be releasing a series of interviews on Neuro-Rights and the Ethical Implications of XR & neuro-technologies, starting with OpenBCI, then hearing about the technology policy research papers written by the Information Technology & Innovation Foundation’s Ellysse Dick, a conversation with philosopher Helen Nissenbaum, founder of the Contextual Integrity theory of privacy, and then four representatives from the Electronic Frontier Foundation talking about privacy from a human rights perspective and reporting back from RightsCon. Also be sure to check out my recent conversations with Rafael Yuste on Neuro-Rights, Brittan Heller on biometric psychography, as well as with Joe Jerome on a historical primer on the history of privacy law.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

After hearing about all of the sensors that OpenBCI’s Project Galea was integrating, I did an audit of the different physiological and biometric sensors.

Also, here’s a state of XR privacy talk that I gave at the AR/VR Association Global Summit that provides an overview of some of the biggest issues on privacy with the intersection between XR and neuro-technologies.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So OpenBCI is a brain control interface kit that has been bootstrapping neural interfaces when it comes to the intersection between BCIs and immersive technologies. So back in 2013, they started the Kickstarter, and they've recently been working on this thing called Project Galea, which was originally announced back on November 25th, 2020, and it was integrating things beyond just EEG, so things like EOG and EMG and EDA and PPG, so all these different types of sensors that are measuring physiological and biometric information from your body, and then potentially creating these closed feedback loop cycles within these immersive technologies. So it was a couple of months later, on January 24th, 2021, that Gabe Newell, the CEO and founder of Valve Software, announced to 1 NEWS that they were collaborating with an open source brain control interface entity, which ended up being OpenBCI. And then about a week later, so on February 5th, 2021, Tobii, the eye tracking technology company, said that they're also collaborating on this project with OpenBCI as well as Valve. So I was at the Non-Invasive Neural Interfaces: Ethical Considerations conference, co-sponsored by the Columbia Neuro-Rights Initiative, as well as Facebook Reality Labs. And Conor Russomanno, the founder and CEO of OpenBCI, was there talking about the latest updates of Project Galea and all the different sensors that they're integrating, but also showed some early prototype photos of what it looks like. But there's a lot of discussions in terms of how do we deal with these neuro-rights and these different aspects of neuro-privacy and neuro-agency, so ways in which these technologies could potentially undermine our rights to mental privacy or our right to agency. And so Conor Russomanno was having a little bit of a debate with one of the organizers of the conference, Rafael Yuste, who is at the Columbia Neuro-Rights Initiative, taking a real human rights approach. And Rafael Yuste was saying, hey, maybe we should just treat this data like all medical information, both of the devices and all the data. And then Conor Russomanno is basically like, well, if these were medical devices, OpenBCI basically wouldn't exist. And so we need to maybe find another way. So that's what we'll be digging into and unpacking a little bit in terms of just exploring all the different ethical considerations for these neurotechnologies, what you can get when you fuse all these different sensors together, and their collaboration with Valve, which is working towards trying to integrate this into these real time systems in the game context, and also integrated with all these eye tracking technologies like from Tobii. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Conor happened on Friday, June 4th, 2021. So with that, let's go ahead and dive right in.

[00:02:54.124] Conor Russomanno: My name is Conor Russomanno. I'm co-founder and CEO at OpenBCI. We founded it back in 2013, and we design, build and distribute open source hardware and software for measuring the human brain and body. So it started out as kind of a DIY kit for artists and makers and citizen scientists. And then it quickly got adopted by the research community as it was baselined against more expensive sensing technologies. And, you know, they proved that it was just as good, if not better than some of the more expensive tech. And so then, you know, over the last seven years, it's kind of gone through this growth cycle from, you know, what started out as kind of like a hacker tinker kit into being used widely across top research institutions, and then in major consumer technology companies' R&D departments. So it's been really cool to watch the whole neurotech industry move forward, and then of course, for OpenBCI to be playing a pretty big part in that. And then, yeah, so beyond OpenBCI, I did a one year stint at an augmented reality hardware and software company based in San Mateo. I had a really cool job there. I was the Director of Advanced Interfaces, which was kind of a glorified title for doing R&D to, you know, blueprint what the future is going to look like when we have AR headsets that we're walking around with that have sensors that are listening to our physiology and our psychology and trying to improve the experience of the AR headset based on our own internal state of mind. Unfortunately, that company ran out of money and failed to raise its next round of funding before it had to let people go. But at that time, I was excited to come back to OpenBCI and jump into that full swing again. And at Meta, it was an amazing experience. I got to work with really brilliant people that are now scattered through Apple, Microsoft, Facebook, top companies that are now continuing to push the AR field forward, tangent to my journey in this industry. And then when I came back to OpenBCI, it was about time for us to kick off a new round of research and development on new products. As opposed to just dropping the work that I was trying to do at Meta, I decided to implement some of the stuff that I had learned and some of the stuff that I wanted to do. And so about two and a half years ago, we kicked off a project called Galea. It used to be called Nova XR, but then a speaker kept mispronouncing it as "no-vaxxer," which is kind of not a kosher term these days. And so we decided to switch the name to something that wouldn't be misinterpreted as us not getting vaccinated. So Galea is like, we took all of the learnings from the OpenBCI products, which are EEG first and foremost, so measuring electrical signals from the brain. But we also learned that EEG by itself is not as interesting as EEG in context. So when you map EEG against other types of biometric signals, the data is much more interesting. Because ultimately, what we care about is the mind, not the brain. And I think a lot of people conflate this when they talk about it. They're like, oh, it's all about understanding the brain. And no, that's not really what we care about. We really care about the human mind and the way that the human mind is reacting to information and content, you know, how to help it, how to augment it, how to improve it.
And so Galea is an attempt at combining all these other physiological sensors, you know, PPG, which, you know, is an optical sensor for looking at heart rate, heart rate variability. And, you know, you can derive some other stuff from it. EDA, so electrodermal activity, so looking at the moisture on the skin, which can be things like stress, arousal, different types of internal states of mind related to whether you're focused, whether something's catching your attention. You know, also in Galea, we have EOG, which is electrooculogram, so it's an electrical measurement of the eyes. So it's like an electrical eye tracking. And then image-based eye tracking, so using cameras pointed at the eyes to look at pupil dilation as well as gaze detection. And then of course, 10 channels of EEG. So all of that packaged into one headset with a single PCB system that's time locking all of that data very closely and then streaming it to a computer for post-processing and, in theory, real-time application so that you could be feeding, oh yeah, all of this is on a VR headset. So the VR headset is kind of what makes it interesting because it closes the loop, where we have the sensing technology, which is there to listen to the human mind. But then VR is this incredible immersive neurostimulator in a sense, because you've got high fidelity audio and high fidelity visuals that you can use to design experiences, but also modulate the human mind with content and entertainment and really whatever you want to do in VR. So anyway, that was a very long winded journey, but I'll pause there, and if you've got any questions or thoughts or reactions to that.
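
To make the data plumbing concrete, here is a minimal sketch of what reading time-locked, multi-channel data looks like with BrainFlow, the open source library that OpenBCI supports. It uses BrainFlow's built-in synthetic board as a stand-in for real hardware, so the board ID here is a placeholder rather than an actual Galea device:

```python
import time

from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# The synthetic board generates fake data; a real device would use its own board ID.
board_id = BoardIds.SYNTHETIC_BOARD
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(5)  # collect five seconds of data
data = board.get_board_data()  # 2D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

# All rows share one sample clock, which is what "time-locking" buys you:
# EEG and auxiliary signals can be aligned sample-for-sample after the fact.
eeg_rows = BoardShim.get_eeg_channels(board_id)
ts_row = BoardShim.get_timestamp_channel(board_id)
print(f"EEG in rows {eeg_rows}, timestamps in row {ts_row}")
print(f"Collected {data.shape[1]} synchronized samples")
```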

[00:08:02.183] Kent Bye: Yeah, well, it's been quite a journey. And the one sensor that you didn't mention that I just want to throw out there is EMG, electromyography. So how many different EMG sensors do you have? I don't know if you're building off of PhysioHMD. I saw that you were at MIT as an advisor or working there and that there were some projects there. I don't know if you were integrating that directly into the OpenBCI, but maybe talk about the PhysioHMD as well as the EMG and what you're able to get with all the different EMG and muscle sensors and what you can detect from that.

[00:08:28.552] Conor Russomanno: Absolutely. So yeah, so the project has been a massive collaboration, but one of the key inventors and contributors on the project, his name is Guillermo Bernal. And if it weren't for him in the early stages of this, none of it would be possible. He's been incredibly prolific in terms of prototyping electronics. And, you know, he was the creator of PhysioHMD, which there's a paper out about it. We've implemented the original PhysioHMD design and updated it and basically stitched it into the overall Galea system. But the system also (you know my product better than I do) has four EMG sensors: one below each eye and then one above each eye. So looking at the muscle data surrounding the eyes, which, you know, that information can be used for looking at emotions, so facial expressions, smiling, frowning, squinting, things like that. But it can also be used as a very reliable interactive input for essentially having push buttons or dials or controllers in VR. And I'm a huge proponent of using EMG for interaction as opposed to EEG. I think, you know, there are a lot of researchers that are trying to push the bounds of using brain data in real time to control things, but I actually think it's a giant waste of time, because even people who desperately need new types of interaction, so, you know, quadriplegics or people with locked-in syndrome, many of them still have at least some level of motor function. I mean, if you get to full-on locked-in syndrome where you have no motor control, that's really the only situation where a brain controller, where you're looking at motor function in the brain or you're trying to classify motor function for an interactive real-time input, makes sense. Anyone else who has some residual motor function, those muscles can much more easily be classified and remapped into a real-time input for any type of hardware or software. So, you know, I'm a big believer in using EMG as an interactive controller for future interfaces.
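
As a rough illustration of why EMG makes such a reliable input, here is a minimal sketch (not OpenBCI's actual pipeline) of the classic approach: rectify the raw EMG, smooth it into an amplitude envelope, and fire a "button press" whenever the envelope crosses a calibrated threshold. All of the numbers and names here are illustrative:

```python
import numpy as np

def emg_envelope(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Full-wave rectify, then moving-average into a smooth amplitude envelope."""
    rectified = np.abs(signal - np.mean(signal))  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_presses(envelope: np.ndarray, threshold: float) -> list[int]:
    """Return sample indices where the envelope first rises above threshold."""
    above = envelope > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges only
    return onsets.tolist()

# Fake data: quiet baseline with two bursts of "muscle" activity.
rng = np.random.default_rng(0)
sig = rng.normal(0, 1.0, 2000)
sig[500:600] += rng.normal(0, 8.0, 100)    # first clench
sig[1400:1500] += rng.normal(0, 8.0, 100)  # second clench

env = emg_envelope(sig)
presses = detect_presses(env, threshold=3.0)
print(f"Button presses detected at samples: {presses}")
```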

[00:10:30.792] Kent Bye: Yeah, well, I want to go back in time just a little bit, because we met in 2016, I think in May, but I actually bought a Kickstarter for OpenBCI right when I was getting into virtual reality. So I bought my Rift on like January 1st, 2014. We go back, the Oculus Kickstarter was like in August of 2012. It shipped like in that spring of 2013. And so it had been out for about nine months or so. But there was a time when there were a lot of like Kickstarters, like, wow, there could be a Kickstarter of a hardware product that could completely revolutionize a whole new realm. And I remember watching the Kickstarter and getting real excited. I bought it. And then I got the shipment of the OpenBCI. And it was basically like a bunch of Lego bricks where I didn't have the expertise to like do anything with it. I wasn't like a hacker. I didn't know how to do it. And I couldn't use it with my VR headset. I remember I ran into you, we talked, and you were wearing your BCI headgear with the 3D printed mount. And you've said like about a year after that or so, you worked for Meta and then you came back to OpenBCI. And I saw a slide that you put out in terms of all the different companies that you've sent OpenBCI headsets to. I'm just going to list some of them that you listed here: Google, Bose, PlayStation with Sony and Apple, NASA, Oculus, Kernel, Hewlett Packard, Dropbox, Intel, Microsoft, Deloitte, Verizon, Nokia, Amazon, IBM, ARL, and Texas Instruments. It's basically like all of the tech industry has bought these dev kits that you've created. And so it just reminds me of like the early days of Oculus, how they put out these dev kits, and that helped to seed the whole virtual reality community, where so many people started from those democratized, easily accessible dev kits. Some people were able to develop their own Unity games, and others weren't able to do anything on their own. Like for me, I get the shipment of the OpenBCI, and I can't do anything. But for other people, it's basically, they're able to start to build the foundations of this whole neurotechnology industry.

[00:12:19.062] Conor Russomanno: Yeah. I mean, first off, I'm sorry that you weren't able to do anything with that out of the box, but the documentation has gotten much better since 2013. So maybe now the possibilities are there. But yeah, I mean, this is the beauty of open source. This is really, it's a demonstration of just how laying out building blocks, this idea of the proliferation of technology and enabling people to pick up where you left off, you know, these are kind of the fundamental values of open source, which is that this is a collaborative, concerted effort by many different people. You know, I think that's how innovation happens the fastest, right? And, you know, there is something to be said about concentration of resources and the ability for venture capital models to be able to take something and like fast track it to application and fast track it to commercialization. But ultimately, I think that if you look back through time, a lot of the biggest innovations in technology and really like emerging industries came about as a result of open source tools, you know, even the beginning, the first computers. So, you know, I think it's been super, super cool to watch the neurotech industry grow over the last seven years. And like I said earlier, you know, I think we've been a pretty big part of that, but you know, we're not the only players in the space. You know, I'm just excited to see how far we've come in the last five years and, you know, where we're heading in the next five, because it's going to be crazy.

[00:13:42.853] Kent Bye: Well, what's so fascinating to me is that you see a lot of open source software, but I don't see as much open source hardware, and you've been able to make a go of it, of creating a viable business. Going back to Palmer Luckey, that's exactly what he wanted to do. He wanted to like just ship people all the parts and have them put it together. And then they came along and said, no, no, no, you have to make it so that people can just put it on. And so you've been able to be successful at this model, but also you're collaborating with Valve now and also Tobii with this Galea headset. And so that's quite a testament in terms of being able to take this as an idea and really try to democratize access to this neural technology. But here you are working with one of the biggest players in all of XR with Valve and all the stuff that they're doing. And so maybe you could talk about like, how did that collaboration come about?

[00:14:30.182] Conor Russomanno: Cool. Um, yeah, there's only so much I can say about that collaboration, but I will say it's been really great working with Valve. You know, it's clear, you know, Mike Ambinder and Gabe Newell himself are both big believers in the future of neurotech and, you know, the massive implications that neurotech is going to have on the future of gaming and the future of VR. So it was a natural fit. As soon as they caught wind of the early stages of Galea, when it was still called Nova XR, Mike reached out to me on LinkedIn and he was like, hey, are you interested in chatting? And at that point he was already working with the Ultracortex and, you know, doing internal research at Valve. And, you know, there's articles online showing him wearing our headsets, and it was a natural fit. It was really easy to kind of just figure out like, oh yeah, we both want to do this. Cool. All right, let's work together. And so we now have functional Galea prototypes, which we're testing, and we're looking at all the different physiological signals. And hopefully that project continues to move forward. But yeah, I mean, I'm really excited to get Galea to a stage where we feel comfortable and confident selling it to the first beta customers and knowing that we could deliver in a specific timeframe. But I don't see why that won't happen. Everything that's happening now, it's working, which is crazy, because it all kind of happened over the course of the pandemic, which was obviously an insane journey, to try and build and prototype hardware in the middle of a global pandemic where you're not supposed to be around other people. So that was a huge challenge, but we managed to stay safe and not spread COVID amongst our entire team, which was an operational nightmare, but it kept us full of purpose. I think that's the one thing to be grateful for is that we had stuff to keep us busy over the course of COVID, you know, and something that we all care about. And, you know, it's been an amazing team effort. You know, we have this very atypical way of working with some of the most talented people in the world where we just find people who are super, super passionate and want to push this industry forward and kind of share a similar ethos as OpenBCI. And, you know, I think even the collaboration with Guillermo, you know, he's still a PhD student at MIT. And I think an IP patent-filing company would have a lot more trouble collaborating with a current PhD student because of the IP stipulations and who owns what patent and yada, yada, yada. Whereas, you know, we talked with Guillermo and with MIT, and we were like, yeah, everything that Guillermo contributes is going to be open source. And they were like, oh, okay, well, that keeps it simple. Great. Go for it. And so I don't think many companies could have pulled that off and been able to facilitate this three-way collaboration between a huge company like Valve, a super talented PhD student like Guillermo at MIT, you know, and then also a smaller open source tech company. So it's a really kind of unique situation. And I think we've built something really cool, and we're excited to push it into the world.

[00:17:26.974] Kent Bye: Yeah. Well, I mean, Gabe Newell is a super hardcore fan of neurotechnology. I mean, he talks about brain computer interfaces a lot. He likes to go quite deep. And I remember watching an interview with him, and there were things that he was saying about kind of like this idea of reading and writing to the brain, of kind of like rewriting and like controlling different aspects of your personality. There was just something about it that kind of rubbed me the wrong way. And I tweeted about it. And then I ended up in this conversation with the transhumanist, Joshua Corvinus. I ended up doing an interview that I haven't published yet, but someone who challenged my own concepts around the limits in terms of our body, our body integrity, and what agency we should have around the control of our body, and the degree to which this is usually regulated by medical devices and sort of the restrictions that are put on that. You know, the thing that I think of, which we just shared some space here within the Non-Invasive Neural Interfaces: Ethical Considerations conference put on by the Columbia Neuro-Rights Initiative, as well as Facebook Reality Labs, a big discussion bringing together all these different neuroscientists and talking about all the different ethical dimensions. And I did like a 22 minute video kind of digesting different stuff, because I thought there were certain aspects of the conversation that weren't being represented. But one of the things that I think you were voicing was somebody who wants to kind of take ownership and own and do whatever you want with your body and your brain and your data. And I have this slide where it says, I want to be able to hack my own consciousness, but I don't want other people to hack my consciousness. So there's this fundamental issue that I see with this technology, where I want all the people, OpenBCI, you and everybody else, to have the liberty to kind of tinker with themselves and to be able to have the freedom to do that. But I'm not quite so sure that I want these big major tech corporations to necessarily have access to all this information and have the same capabilities of doing reading and writing. I mean, reading is a lot less controversial, but writing is a whole other issue that we can get into in terms of agency and what that means. But this neurotechnology in general, there's all of these debates. And I think what I see you really contributing to this conversation is that we should own our own data and that we maybe should second guess and question whether or not we really need to treat these as medical devices. Because what OpenBCI has shown is that if you really want to go out and put this thing together with stuff that's already out there, you can do it. So how are you going to sort of regulate that? With the modular approach that you've done to be able to build what you've built with OpenBCI, even if you were to regulate this as a medical device, what's to stop people from going and doing it anyway and releasing software to put it all together? So it feels like the cat's kind of out of the bag. And at the same time, I still have a lot of fears about this existential risk in terms of how to have appropriate information flows amongst all these different stakeholders, in terms of what happens to the data, who has access to it, and how I have some sort of control.
And if we need a human rights framework, if we need the government to step in, because if we kind of leave things as they are, it's kind of the Wild West, and it's going towards this path with Rafael Yuste saying, we kind of have the hoods of our brains metaphorically open to have both people reading and writing into our brains. And that's going to be a really bad situation for society. So how do you reckon with this tension between, on the one hand, giving people the liberty to have control over their own body and to do whatever they want with their brainwaves and neural modulation and allowing the unconscious aspects of this closed loop to be able to modulate their immersive experiences, versus the abuse and the harm that can come from this type of data and how it could be used if in the wrong hands?

[00:20:42.253] Conor Russomanno: Yeah, this is a good one. This is, I mean, this is one of my favorite topics to talk about, cause it's, you know, that's what gets me out of bed in the morning. You know, in the beginnings of OpenBCI, I was just like, I want this to move forward. Like the implications are incredible, you know, like there's so much that we can do. And now, now that I've been in it for seven years, I see, you know, the forces of capitalism and the money that's coming into the industry. And now a lot of people realize how powerful this tool is going to be. You know, to get to like one of the questions you asked, which is like, when we've achieved this read-write situation, I think we're already there, even in terms of writing to the brain. The way that we interact with applications on our phone or on the internet, where our mouse, where our cursor is, even now we've got front-facing cameras that are looking at our eyes as we're in Instagram or whatever application on your phone. This is currently reading your mind. It's reading your intentions and your behavior inside of these tools that occupy more and more of our time every day. And the write is already happening as well. You know, we're having our feeds curated, highly curated to our interests, you know, and for sure that changes your brain. It's what you're perceiving. It's what you're pumping into your consciousness on a daily basis. Right now it's pretty low fidelity, though, but it's still extremely effective. But we've seen, I think 2020 was a really eye-opening year for the impacts of the closed loop computing that already exists inside of most widely used applications. We saw the most polarized election in US history, debatably, but I would say yes, like at least as long as I've been alive. And I think it's because technology has gotten so good at giving people what they want to hear and showing people what they want to see that people have ended up in these wormholes, these closed feedback loops, burrowing them further and further into their own beliefs. And I think it's really, really dangerous. You know, there need to be like systematic checks, and people need to be more aware of the fact that these kind of whirlpools of consciousness exist on the internet, and be really proactive about keeping an open mind and making sure that they don't end up in these kind of cognitive wormholes. So to your point of, you know, privacy and the future, and how do we like set up the guardrails to prevent this? You know, I think that privacy is kind of the first step toward what ultimately we care about, which is the agency. Like privacy is important because protecting the data, or like limiting the amount of data that can be collected and sold or transacted without us knowing, is the first step in maintaining agency as a user, and cognitive agency. Like, am I controlling my own thoughts, or is my consciousness being flooded with information that other people have prioritized, or companies, or whatever? You know, I'm a big believer in this idea of like hashtag own your own data. I think it's going to be really, really hard to pull that off, because society glorifies the creators. It doesn't glorify the inhibitors, right? And I think in many cases, you know, there's so much hype and there's so much excitement and glory around being the first person to do something, especially in technology now, because technology is like what's cool.
But, you know, we have this knack as a society, especially in the United States, for asking for forgiveness, not permission, to do certain things. And it kind of leads to this very reactive approach to fixing problems as opposed to a proactive approach to like identifying problems before they're going to happen. You know, and this spans like way beyond just technology. I think it's a, it's a business, it's economics, it's hyper capitalism in effect, because there's so much money to be made in understanding what captures people's attention. We really do live in an attention economy at the moment. And the biggest consumer technology companies in the world, or at least a good percentage of them, are entirely based on this idea of understanding people's spending habits on the internet. And that's like, I don't know if that can be corrected at this point. I hope so. And I think there has been good progress, decent progress made. As an example, I'm a big fan of Apple. Apple's not a perfect company, but I have a lot of respect for Apple's push for privacy and recent implementation on the iOS operating system of being able to choose between like, you know, this app can collect this data all the time, only when I'm using the app, or never. And that middle option of like when I'm using the app was kind of a recent addition to the OS. But that's a really cool development, because before that, as soon as you download an app and as soon as you give it access to the microphone, the cameras, whatever, there was nothing stopping that app from just recording video and audio all the time, even when you're not using the app. So little things like that are steps in the right direction. But the future that I would love to see is this kind of like data wallet, almost like this virtual bank account that everybody has. And it's treated like a bank account with the same encryption and two-factor authentication. And I need to get a text message on my phone or whatever to log in. And inside of there, you have your data. And it's like, this is me. And the way that all of my sensors that I'm connected to, all that data is being collected and stored in my database. And I get to choose which outlets and pipes I'm opening up to my medical provider, to, you know, my self-help apps, you know, my meditation app. You know, oh, someone wants to buy some of my data because I was using their app and my bank account has collected my brain or my mind in the context of that app? Great. Pay me for it. And like, I should be able to set the price. And if I don't like you, or if I don't like your brand, or I don't think that you're doing something good with my data, I don't have to sell it to you. So, I mean, that's kind of, I hate to say that it's far-fetched, because I think we'll probably end up, hopefully we'll end up somewhere between that and where we are now. But I think in the case of neurotechnology, neuro-privacy and neuro-agency, we should be proactive about this one and not reactive. Because, you know, if you exacerbate, you know, what happened with social media between 2016 and 2020, you exacerbate that by an order of magnitude, and the fidelity of cognitive sensing and psychological sensing is 10x the quality, then so is the level of mind control and mass manipulation of people. This is a little bit dystopian. I'm also very excited about the crazy prospects and possibilities of what this technology can do for empathetic computing and really like building a healthier relationship with technology.
But I think that it's a very fine line that we're walking at the moment and we've got to make sure we do it right.
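
The "data wallet" Conor describes doesn't exist yet, but the shape of the model is easy to sketch. Here is a speculative toy version in which every name is hypothetical: streams stay closed by default, and the owner opens specific pipes to specific parties, optionally at a price the owner sets:

```python
from dataclasses import dataclass, field

@dataclass
class Grant:
    consumer: str         # e.g. "my-meditation-app"
    stream: str           # e.g. "eeg", "eda", "heart_rate"
    price_per_day: float  # owner-set price; 0.0 for a free grant (e.g. a doctor)

@dataclass
class DataWallet:
    owner: str
    grants: list[Grant] = field(default_factory=list)

    def open_pipe(self, consumer: str, stream: str, price_per_day: float = 0.0):
        self.grants.append(Grant(consumer, stream, price_per_day))

    def may_access(self, consumer: str, stream: str) -> bool:
        # Closed by default: access exists only if the owner granted it.
        return any(g.consumer == consumer and g.stream == stream
                   for g in self.grants)

wallet = DataWallet(owner="kent")
wallet.open_pipe("my-doctor", "heart_rate")          # free pipe to a provider
wallet.open_pipe("some-game-studio", "eda", 5.00)    # paid pipe, owner's price
print(wallet.may_access("some-game-studio", "eda"))  # True
print(wallet.may_access("some-game-studio", "eeg"))  # False: never granted
```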

[00:27:33.599] Kent Bye: Yeah, well, I appreciated hearing your perspective at this Non-Invasive Neural Interfaces: Ethical Considerations conference, because Rafael Yuste said maybe the best path forward is to treat all of this information like medical data, because we have existing laws and regulatory systems, and it's the closest analog in terms of the type of data that we're dealing with that's coming from our body and the way that you have these feedback loops to change the body. I mean, you make a really good point in terms of the degree to which, non-invasively, through our perception and through our sensory experiences, how much that is writing to the brain. But the direct stimulation, I think, is where the neurotech starts to get into, like, what are the unintended consequences of that, and what's even possible? But I'd love to hear your take on why shouldn't this type of both these devices and this data, why shouldn't we treat it all like medical information? What is that stopping?

[00:28:22.126] Conor Russomanno: So, I mean, the data, I don't disagree with Rafael on. I would be totally fine with this idea of like neuro data or physiological data being treated like an organ, you know, and then you have to be like, I'm an organ donor, I checked that box. I'm not opposed to that, but I do think that it's important to distinguish the data from the devices that collect the data. So I think it would be detrimental to the advancement of neurotechnology to treat all of these devices like medical devices and require FDA clearance, because really what that does is just limit the types of people that can build the tools and advance the science, advance the technology. The data, like with OpenBCI, we've never collected user data. We've never sold user data, but we build hardware for personal use, research, R&D, you know, inside of consumer technology companies and other places. You know, and we've built a successful business selling hardware, kind of like Apple, you know, obviously very different brands, where we're not a lifestyle brand, we're kind of like the Lego of neuroscience. But I respect that. I respect this idea that like, when you buy a piece of hardware, you own the data that that hardware is collecting and no one else does. You know, well, this came up in the debate, and I think you said like the juiciest part of the conversation was when Rafael and I were debating this topic. But you know, I think it's important. I get bored when I listen to panels and everyone's just agreeing with each other. So sometimes I play devil's advocate just for the sake of playing devil's advocate. But in this case, I was actually like, I don't necessarily agree with you. Cause I think like, coming from my perspective as an entrepreneur and someone that's trying to start a business and like push the technology forward, it wouldn't have happened. Like OpenBCI wouldn't have been possible, you know, if we had to go through FDA clearance first. You know, and also like there is this, I guess it came up, like the oath, you know, in the medical profession, where, you know, you're always looking out for the safety and wellbeing of the people that you're serving. But the reality is we're all a little bit good and a little bit bad, and there are good people and bad people in every industry. And when there's a lot of money on the line, people are going to make selfish decisions. There are plenty of examples of this, and the Sackler opioid crisis came up. You know, and he made the point, which was a really good point, that like, because they were supplying a pharmaceutical and it was a medical product, we can hold them accountable. And now like they're going to jail.

[00:30:46.537] Kent Bye: Um, I'm going to, I just want to interject here, cause this was the argument: they said, well, we should just treat it like medical devices. You brought up the point of, what about the whole OxyContin epidemic that we had and the money that was being made? There was a documentary that premiered at South by Southwest, I'm not sure if it's been released yet, but it's called Oxy Kingpins, basically making the argument that the people at the very, very top had no accountability. So I think it's not accurate to say that there was appropriate accountability for this whole opioid epidemic that we had, because the people that are at the top are not going to jail, or they haven't been prosecuted. This case is still going on. So I think your point was actually very valid in terms of saying, hey, the medical profession is not immune from the perils of capitalism, with people putting their own self-interest or greed and money above and beyond the Hippocratic oath of do no harm. There was actually a lot of harm that was caused by OxyContin. So, and there wasn't time to really articulate that, but I actually agree more with you than with Rafael's point.

[00:31:41.427] Conor Russomanno: Yeah. And the reality is, is like, is it just? Like, is it fair? How many hundreds of thousands of people's lives were ruined? And even if they do go to jail, it's a reactive thing, it's not justice, like, and people knew the whole time, you know. And how many other drugs and pharmaceuticals are being pushed into the world that maybe don't have the same level of crisis in terms of their impact on society, or maybe it's just harder to detect because people aren't overdosing and dying? But there's still clearly the toxicity of human greed existing inside of the medical profession. And I'm not ragging on doctors right now. I'm just saying that, like you said, they're not immune from the perils of capitalism. And that's a long one. You know, I have like a lot of personal experience, family experience with this exact topic, and family members who have died and passed away because of the opioid crisis. So, there's a lot there. This is a bit of a tangent, but the medical world is far from perfect. Even the separation, the way that insurance and medicine and the actual customer or recipient of the medical service, that triangular relationship is so flawed at the moment, because no one is haggling with doctors over what they should be charging or if they're charging too much. I have a great example. You know, I got shoulder surgery a couple of years ago, and I went into surgery thinking that I was going to get a cyst removed from the end of my collarbone. And it was going to be like a one month recovery. You know, the doctor was like, oh, I do this for NFL players all the time. And, you know, like oftentimes they're playing within a month. And I was like, great, like I can't wait to get back to doing pushups. And then as I was signing the papers, he was like, oh, by the way, there's a checkbox there. You know, if, when you're under and we have your shoulder open, we find other things wrong, do you want us to fix the other things? And I was kind of like, that makes sense. You know, like, sure, let's do it. So I go under, I wake up from surgery, I'm all groggy from the drugs, and the doctor's like, oh, by the way, so, you know, when we had your shoulder open, we found out that like your rotator cuff and your labrum were also damaged. So we fixed those things too. And I was like, oh, okay. So how long is the recovery going to be? Oh, it's going to be a year. It's going to be a year long recovery. And I was like, what? You know, like, and I'm still like totally doped up, and it's hard to be mad when you're on drugs. And I was furious. I was like, what, like, how could this not have been, you know, laid out for me as a potential scenario? And anyway, you know, like I'm not blaming that doctor. There's a very good chance that those things were damaged. But, you know, like, if you're there and the recipient of the surgery is not paying, their insurance is paying, and like they're going to pay no matter what, and something's kind of on the fence, like, oh, it's like a little bit damaged, but is it damaged enough for me to do this surgery? And you've got a $17,000 bill versus a $70,000 bill that you can charge. That's a really interesting gray area to be put in as a surgeon when you're like 30 minutes away from tacking on an extra $50,000 to the bill, right? And so, you know, I'm not blaming that surgeon. Like my shoulder is doing pretty well now. And I'm like, okay, cool, I'm glad I got the surgery.
But you know, you think about, there are going to be times when someone makes a game time decision because of a paycheck, you know, and like it could have gone either way. And that's an example of how there are flaws in the medical system where, you know, human nature is going to take effect and people are going to make the greedy decisions some of the time. So anyway, we didn't get into that in the debate, but while I appreciate the oath of healthcare and medicine, it's not so black and white. Yeah.

[00:35:20.132] Kent Bye: I wanted to tie this topic back to health and BCI, because there's a dimension of mental health and ways in which there's experiential medicine and things like these neurotechnologies, where I know that Bryan Johnson, the founder of Kernel, has talked about depression and his own mental health journey, and feeling like he's subjected to parts of his brain that are taking away different parts of his own agency. Wouldn't it be nice to be able to have some sort of other direct neural intervention, whether it's through meditation or biofeedback or neuromodulation or whatever it ends up being? This type of neurotechnology has the capability to open up whole new realms of medicine when it comes to people taking a little bit more ownership of their own self-care, but also adding new options for biofeedback and experiential medicine in virtual and augmented reality, but also with the BCI and all these different sensors. I'm just curious if you can speak to that a little bit in terms of what you see as the potential here, in terms of once you have all these sensors and all this data, what type of stuff can you maybe start to expect that you might be able to do in the context of, say, medicine or mental health?

[00:36:22.331] Conor Russomanno: Sure. I mean, I think the idea of just being able to like build a map of your own mind, you know, like if you write in a journal or you do like daily check boxes of like, oh, today I felt happy, today I exercised, today I ate well, and tracking, you know, the kind of quantified self, I think that this type of technology has the capability to supplement that with real data and real feedback, and then use that self-labeling to build this kind of framework for like quantitatively understanding the impacts of your habits, your routines, your diet, the things that you do, on your mental health. And actually, when I first got into BCI, I had recently, about a year before that, sustained a really severe concussion. I played rugby and American football in college, and I had a series of very intense concussions that culminated with the worst concussion that I had. And after that, I felt my mind change. You know, luckily, you know, especially when you're young, the human body and the human brain are very resilient. But like, you know, in the year after my last concussion, I was having trouble reading. I had a lot of social anxiety in social settings. You know, all sorts of things were off. And that's actually what got me really interested in the human brain and the mind, and started like this kind of pursuit of trying to better understand my own psychology and my own consciousness. And so, the first tool I built, or like, you know, prototype I built, was this baseball cap with a single EEG sensor that streamed to an Android application. And it was just measuring the EEG, but then in the application, you could, you know, select from a list of moods and activities. And it was like a preset list of common things that I would do throughout the day, and then the types of moods that I wanted to track. And then I could also enter like a time, whether it was a duration of time, when it started, when it ended. And the whole idea was to just build this like very simple tool where I could use EEG data to try to create these quantitative correlations between my activities and my habits and my own internal state of mind, because I'd suffered depression, these bipolar swings where I would go from being like, you know, super focused and hyper and like excited about life, to six hours later being like totally in the dumps, these like very, very high mental oscillations. So I was just like, I want to figure this out, because I went to a bunch of doctors, and they ran some like basic tests, and they were like, oh, your brain's fine. And I was like, no, it's not. I can tell you 100% that there's something off. And they're like, well, it's not bad enough for us to like run scans or anything like that, so just, you know, see if it gets better. You know, and so that really frustrated me at the time, because I didn't think anyone could help me. And so I was like, all right, well, I'm gonna try and figure this out myself. And so, you know, I think that the medical system is going to continue to be overloaded, and there's going to be a need for this kind of like self medicine. And even if it's something that supplements or helps doctors manage just the total overwhelming nature of how many humans there are on the earth.
And, you know, there are less in-person visits, but there are more tools and medical apps that allow you to kind of track and manage your own habit optimization and then set up little feeds or streams to your doctor where the relevant information can be passed along. And then you can have a professional assessment. I think that's really cool. That's one of the things that really excites me about the future of these, you know, more advanced closed loop computers. And, uh, while I think it's important to highlight the kind of dystopian scenarios that we should try to avoid, I think it is also really important to acknowledge just how impactful the positive aspects of advancing this technology are going to be.
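
As a toy version of the self-labeling loop Conor describes, here is a minimal sketch that pairs daily mood check-ins with a simple EEG feature (alpha band power) and computes their correlation. The data is synthetic and the feature choice is illustrative, not a claim about what actually predicts mood:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 250  # sample rate in Hz, typical for consumer EEG

def alpha_power(epoch: np.ndarray) -> float:
    """Mean spectral power in the 8-12 Hz alpha band via FFT."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[band].mean())

# Simulate 30 daily sessions: each day has an EEG epoch and a 1-5 mood rating.
moods, features = [], []
for _ in range(30):
    mood = rng.integers(1, 6)          # self-reported check-in, 1=low, 5=high
    epoch = rng.normal(0, 1, fs * 10)  # 10 seconds of noisy "EEG"
    # Inject a mood-correlated alpha rhythm so the toy correlation is visible.
    t = np.arange(fs * 10) / fs
    epoch += 0.3 * mood * np.sin(2 * np.pi * 10 * t)
    moods.append(mood)
    features.append(alpha_power(epoch))

r = np.corrcoef(moods, features)[0, 1]
print(f"Correlation between mood check-ins and alpha power: {r:.2f}")
```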

[00:40:08.033] Kent Bye: Yeah, I wanted to ask about the sensor fusion aspect, because I think this is actually probably one of the more interesting aspects of what the Galea is doing and what you've been researching and looking at for a long time, which is what happens when you add all these different types of sensors together. I remember back in 2016, when we were talking, you were saying, because the Neurogaming conference was going on, it rebranded from Neurogaming to XTech because the neurotechnologies weren't so great with real-time agency. And you were also saying that, you know, with the EEG, you actually get a better signal when you blink your eyes, and it's like the muscle gauge. And now you have the EMG sensors in the headset, but you also have the PPG and the EOG and EDA and body temperature, and our movements or arousal or emotions or cognitive load, different ways of measuring your heart rate, heart rate variability, all this real-time processing of all this information, extrapolating this big picture of what's happening within the context of these environments, and then, given that, able to feed that back into the experience so that you have these closed feedback loops. We're kind of moving into that realm where we have all these new input options that game designers will have to be able to figure out conceptually how to take in, but also how to make sense of all this raw data and process ways in which they're either getting signals from one of those things, or collectively a mosaic of information that is dictating a state of mind or mode of being or a certain amount of arousal, or something that's connected specifically to an object relative to other objects in the environment. Maybe you could talk a little bit about this sensor fusion, but also in the context of these immersive environments and what type of stuff you've been able to figure out so far.

[00:41:38.076] Conor Russomanno: Well, I liked that word mosaic, a mosaic of information. Yeah. I mean, I said it earlier in the conversation, and I think I'll reemphasize that what we really care about is the mind, not the brain. We know that the brain is the nucleus of the nervous system. It's kind of the hub where all of these other sensory receptors, you know, like your eyes, your ears, are listening, and then it's sending messages back out. You know, your body temperature, your heartbeat, you know, all of these other bits of information that exist around your body are also very, very important data points into the understanding and inner workings of consciousness and the mind. And so I think it's really easy to focus on the sex appeal of the brain and be like, we're going to understand the brain and we're going to do everything based off of the brain. But I think it's important to acknowledge the path of least resistance. And a lot of these other biometric signals are much easier to collect and arguably just as important, if not more important, for understanding emotions and understanding the human mind. So the image-based eye tracking, and just eye tracking in general, is going to be super, super valuable for what is someone paying attention to. You know, I think eye tracking is probably the single most important non-brain data set for understanding cognition. You know, pupil dilation is really interesting. And one of our advisors, Paul Sajda, published a paper on the correlation between pupil dilation and ERPs, or event-related potentials. And basically, if you have just pupil dilation, you know, and if you shift your attention and you're looking at different objects and then something really catches your attention and you're like, ooh, that's relevant to my subconscious, or, that's my mom, as opposed to just a picture of a random woman, your pupils will dilate suddenly when there's this kind of aha moment. Similarly, you can see that type of reaction in an event-related potential from the brain. It's this kind of voltage surge that stands out from the noise. So when you look at either of those data sets individually, the efficacy is like 80 to 90%, but when you combine them together, the combined data set yields a much higher success rate or efficacy of, oh yeah, that's the signature we're looking for. So that's the perfect example of how, when you start to combine these different data sets together, the reliability of the signatures that you're looking for, or the kind of events, biometric events related to some type of stimuli, you know, the reliability of that is much higher. And also, I think it's important to subdivide the mind sensing or brain sensing into the kind of looking at stuff for real time interaction, interactivity, things in space. You know, as I said earlier, I think that motor control, the stuff that CTRL-labs is doing with looking at motor neurons in the arm, and you could do the same thing anywhere else in the body where there's motor neurons, for interaction, that makes the most sense. It's definitely the path of least resistance to interactive, you know, new forms of interaction, replacing the mouse, replacing the keyboard, replacing a joystick, et cetera. But there's this whole other segment, which is the passive BCI, and Paul Sajda refers to this as opportunistic sensing: understanding emotions and internal states of mind over more extended periods of time.
You know, what types of information or content are yielding certain types of emotions or certain types of states of arousal, or shifts in cognitive workload, or, you know, stress, anxiety, depression, things like that. You know, and that's not something that's necessarily real time. That's something that can change gradually over a session or even an entire day. And so, you know, I actually am more interested in that half of the equation, even though, like, whenever someone first gets into BCI, they're really excited about controlling a drone with their brain, or they really want to control a robot arm with their mind. I think that the opportunities are for new types of entertainment interfaces that update themselves to make you as productive as possible and reduce cognitive workload. You know, if the text is too small on a screen and you don't even realize that you're squinting to try and see it, this system could know before you even consciously realize that, like, oh, we should just like increase the brightness or increase the size of that font, or move this window up here so that your neck is not crimped downward. There's little things like that where I think that there's so much that can be done with a system that is keeping track of these subconscious states of stress or fatigue and, you know, optimizing for whatever it is that you want. Like you could decide, I'm going to set my schedule today: from 10 to noon, I want to be highly productive and efficient with my typing and my writing because I got to crank out emails. And then from 7 to 10 PM at nighttime, I want to be very creative because I'm going to try and write that short story that I've been thinking about writing for a long time. You know, and if you've got enough data about what your brain looks like during these states of mind, because it's happened to you in the past and you were like, ooh, wow, I felt really creative during that two hour period, drag, slide, highlight, label: creative. Now the system knows what your brain or these different physiological sensors look like when you're in that state of mind. So it could start testing out a cocktail of subtle nudges of brightness, audio frequencies, you know, music you're listening to, et cetera, to try and put you in that zone or in that state of mind and essentially, like, close the loop. And, like, it may not get it right at first, but it could know very quickly whether the things it's trying are nudging you in that direction or away from that direction. And that's the beauty of AI, is that it's better than we are at discovering those types of patterns and trends. So, yeah. I mean, you know, for gaming, obviously, like, you know, the Valve collaboration, I've always been a big gamer. I love video games. I love board games, interactive games where you've got kind of interactive plot lines and narratives where you choose your own adventure. Those have always been really interesting to me. And the idea of having that happen subconsciously is really intriguing, right? So like, I don't know if you've ever played the game Fable or Mass Effect or something like that, where you get into these kind of like multiple choice scenarios where you have to pick the way that you want to respond to a character, and then that influences your relationship with that character. Maybe they join your party, or maybe they're like, F you, I'm going to come hunt you down later.
You know, that kind of stuff could happen subconsciously, where the character can feel your reaction to them just like in real life. They say that 95% of our communication happens with our body language and only 5% of it is the words, right? It could be the same in a game, where the game just knows that you like certain characters or don't, or that you really want certain things to happen, or that something is really terrifying or not. So for both gaming and new types of entertainment, that's super interesting. Actually, a low-fidelity version of this was my thesis in grad school. I called it a neuro-immersive graphic novel. It was a three-chapter interactive short story where the premise was that you're a humanoid robot experiencing symptoms of consciousness. The first chapter is set text: you're reliving this moment where you were operating a power plant, and you made a decision to halt the power plant's operations to save one fellow robot. There was this moment of empathy, and in the code of logic that you were programmed with, that was against the rules. And then throughout the course of the story, the whole time you're wearing an EEG headset and it's measuring your attention. The more attentive you are to the content and the story itself, the more conscious you become, so you end up becoming more human and discovering a sense of free will if you stay focused on the story. Conversely, if you're like, oh, this is boring, this is lame, and you start checking out, the internal dialogue of the character starts becoming more robotic. Then it gets to the end, and there are multiple endings based on your EEG patterns throughout the course of the story. Anyway, that was a low-fidelity version. I didn't have a lot of time to write all the text and do all the drawings, but I think that type of framework is going to be extrapolated into very detailed and nuanced types of entertainment, where every player that plays it has a completely unique experience based on their own emotional and physiological reactions to the content.
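The "label a session, then nudge toward it" loop Russomanno sketches can be expressed as a simple proportional controller. Everything below is a hypothetical toy: the function names, the simulated physiology, and the single brightness knob are stand-ins, not Galea or OpenBCI APIs.

```python
import random

# Toy version of the closed loop described above: store a target state
# from a session the user labeled "creative", then repeatedly nudge one
# ambient parameter (brightness) and watch whether the measured state
# moves toward that target. All names here are illustrative assumptions.

def simulate_state(brightness: float) -> float:
    """Stand-in for a fused physiological metric in [0, 1] that happens
    to respond partly to brightness, plus noise from everything else."""
    return 0.6 * brightness + 0.4 * random.random()

def closed_loop(target: float, steps: int = 200, gain: float = 0.05) -> float:
    brightness = 0.5
    for _ in range(steps):
        state = simulate_state(brightness)
        error = target - state           # distance from the labeled state
        brightness += gain * error       # nudge to reduce the error
        brightness = min(1.0, max(0.0, brightness))
    return brightness

creative_target = 0.7  # measured during a session labeled "creative"
print(f"settled brightness: {closed_loop(creative_target):.2f}")
```

A real system would fuse many sensors and learn which nudges actually move the measured state, but the control-loop shape is the same: measure, compare to the labeled target, adjust, repeat.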

[00:50:18.391] Kent Bye: My reaction as I hear all that is that I've done probably a dozen or two dozen different neurotechnology demos over the last seven years, and the thing that gets frustrating a lot of the time is that I can't tell how the agency of my body is impacting the system. What's really interesting about your demos is that you're like, all right, now I'm going to show you how I'm going to modulate my body, and then you change your body and do something where you can actually see the impact. You've been doing it for long enough that maybe you've trained yourself, or at least you've had enough of those feedback loop cycles. But so many times when I've done these different types of demos, I have no idea if what I'm doing is even working. It could just be random noise; it's hard to have a trace of your agency. I think that's part of the reason why the Neurogaming Conference shifted to XTech, which was a move from gaming to medical applications, information that's gathered over long periods of time and in more aggregate averages, like you were saying, these emotions over time. But there's also the challenge of when you do an interactive experience and you don't know it's interactive, where you're interacting in a way that's having this subtle, subconscious level of influence. Maybe some of the magic of this type of technology is below our level of conscious perception, where we can't even necessarily articulate it. It's sub-symbolic. We can't put words to it; maybe it's just a feeling or a vibe, and maybe we can tell the difference, or maybe we can't. I think that's the challenge with this type of tech in the experiences I've had: the difficulty of being able to really trace your agency, but also to know those unconscious, subtle feedback cycles that are in the closed loop. How do you get feedback as to whether or not it's working?

[00:51:54.592] Conor Russomanno: This is a great question, and I think this is something where we all have to be a little bit skeptical of any consumer BCI device that gives us some derivative metric of focus or attention and doesn't show you the science, doesn't show you the way it got from raw data to this metric. Back in the day when I did my thesis, I was using the NeuroSky device, and NeuroSky changed the industry. NeuroSky made the MindFlex and the MindWave. It was a very simple EEG device, a single sensor, but they mass-produced it and it was hackable. You could crack into it, you could connect it to an Arduino. That was the beginning of my journey into BCI: messing around and cracking open these toys to build my own, more sophisticated signal processing stuff. But there's something that you brought up, bringing it back to interactive gaming, which is this idea of the illusion of choice, where sometimes in those multiple-choice menus that you get to in a game, it doesn't matter which one you pick. The outcome is the same no matter what, but the fact that you think you're influencing the narrative makes you perceive the narrative differently. I studied this in a course in grad school on interactive fiction, where we played all these interactive games, and the teacher, Nick Fortugno, really harped on this point: there's your actual impact, and then there's the perception of your impact. I think a lot of times with some of these, for lack of a better phrase, snake-oil BCI products, where it's like, how is this working?, some people don't really care. The fact that they believe that it's working is enough for it to work. It's this kind of placebo effect of technology. And then there are people who are more skeptical, who are like, yeah, show me the science. We have to be careful about that. There may be this placebo effect of neurotech devices as they get released into the wild, where people support and promote the efficacy of these tools, and maybe that does work for them, even if the science isn't validated or legit. But conversely, I'm much more in your camp, where I want to see it. I want to at least be able to have a bar graph. I want to watch this bar graph go up and down, and I want to feel the connection. I want to feel those moments where I'm focused and feeling the attention, and I want to know that the bar is high then. Then I want to lose attention and see the bar drop, and focus again and see the bar go up. Then I'll believe you when I'm in the story and you tell me that I'm getting to a different ending based on my focus. This is why with OpenBCI we don't charge for raw data, because ultimately it's about the science, it's about moving the field forward. Maybe someday we will charge for raw data, but probably not. That's the valuable thing, right? It's the ability for you to know that you're not inside of this black box, where you have to just trust some company that they know how to figure out that you're meditating or that you're feeling attentive.
I think that having that pipeline from point A to point B to point C to point D, from raw data to inferred higher-level thought or emotion, needs to be something you can see under the hood. But nonetheless, it's all very exciting, and video games are going to get really, really fun and immersive in the near future.
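Russomanno's "show me the bar graph" test is easy to try against raw data with BrainFlow, the open source acquisition library that OpenBCI supports. Here is a minimal sketch assuming a recent BrainFlow release, using its built-in synthetic board so it runs without hardware; the alpha band and the bar scaling are illustrative assumptions, not a validated attention metric:

```python
import time

from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams
from brainflow.data_filter import DataFilter, WindowOperations

# Stream raw data from BrainFlow's synthetic board (swap in a real board
# id, e.g. BoardIds.CYTON_BOARD, plus a serial port for actual hardware),
# then print a crude "bar graph" from alpha-band power on one channel.

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())
sampling_rate = BoardShim.get_sampling_rate(board_id)
channel = BoardShim.get_eeg_channels(board_id)[0]
nfft = DataFilter.get_nearest_power_of_two(sampling_rate)

board.prepare_session()
board.start_stream()
try:
    for _ in range(10):
        time.sleep(1)
        # Grab the most recent two seconds of samples from the ring buffer.
        data = board.get_current_board_data(sampling_rate * 2)
        if data.shape[1] < nfft:
            continue  # not enough samples buffered yet
        psd = DataFilter.get_psd_welch(
            data[channel], nfft, nfft // 2, sampling_rate,
            WindowOperations.HANNING.value)
        alpha = DataFilter.get_band_power(psd, 8.0, 12.0)
        print("alpha power |" + "#" * min(40, int(alpha)))
finally:
    board.stop_stream()
    board.release_session()
```

The point of the sketch is the transparency Russomanno is arguing for: every step from raw samples to the displayed bar is inspectable, rather than hidden inside a proprietary "focus score."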

[00:55:21.777] Kent Bye: Great. Yeah. Well, finally, just to wrap up this discussion, I'm curious what you think the ultimate potential of virtual and augmented reality matched up with neurotechnologies might be, and what they might be able to enable.

[00:55:36.335] Conor Russomanno: I mean, I think it's the future of humankind. We talk about this man-and-machine coming together, and I think the next major technological paradigm shift is going to be us walking around with computers attached to our heads that have a new architectural component, for lack of a better acronym, a mind processing unit. In these wearable computers there's going to be a display that's not a screen you're looking at on a desk, but rather an AR overlay with heads-up information: your emails, the people that are calling you, highlighting things in your space, giving you tips and information about whatever you want to know more about in the real world or in the digital content. And then there will be this streamlined bridge between the content that you're looking at and what's going on on the inside of the computer. It has been this gradual transition over time for us to get closer and closer to the machine, and we're just going to keep moving in that direction. I think VR is going to be used for super-immersive entertainment experiences, simulations, training, things like that. AR is going to become the next smartphone. It's going to be the thing that we walk around with, and it's going to be on us all the time. Hopefully not all the time, but a lot of the time. In the middle of the Venn diagram is going to be the overlap between these technologies: the display, the graphics card, the processor, everything that makes the current computer work the way that it does, except it's going to be miniaturized and on your head. And that will be coupled with the part of the computer that's measuring your psyche and feeding into all of the applications that are running on this new wearable operating system. It'll be the empathetic computer of the future, highly personalized. And hopefully that little digital Jiminy Cricket lives on the inside as your ally, protects you from everyone that's trying to steal and harvest your data, and sets the priorities of these closed-loop systems. But we'll see. And then eventually we'll have the computer inside the head. But I don't necessarily agree with Elon Musk's timeline. I think a lot of people are going to be pretty resistant, and I also think the utility of non-invasive BCI is going to emerge very quickly. The function and capabilities of non-invasive mind-computer interfaces are going to be so powerful, the utility so high, that I think people will opt for that in the short term and even the medium term before going through invasive surgery that has health risks and also huge privacy risks. A non-invasive headset you can take off; an invasive BCI you have to surgically remove from your head. So until we sort out the privacy stuff, I think a lot of people are going to veer away from that as an option, simply because once it's in, it's hard to take out.

[00:58:43.720] Kent Bye: Yeah, privacy and security. Having your mind hacked, I mean, that's a whole other thing once the technology is wetware. So yeah, I'm the same. I'm going to be getting the non-invasive stuff way before I put any chips in my brain. But is there anything else that's left unsaid that you'd like to say to the immersive community?

[00:59:00.895] Conor Russomanno: I guess just thank you for the great questions and for just how prolific you are in terms of sharing interesting and relevant things. Your exhaustive Twitter thread from the Columbia ethics panel with FRL was really awesome, because I was hoping that they would record it, but for whatever reason they didn't, and you came in and saved the day and put it all online anyway. So thank you for all the hard work you do to keep everyone else informed.

[00:59:29.637] Kent Bye: Yeah, and also just thank you for all the work you've been doing. Like I said, you've been sort of at the heart of this neurotech revolution that's been happening quietly, kind of behind the scenes, with most people probably not aware of all the stuff that's been going on. You've got a front-row seat in terms of how this has been unfolding, well, obviously you don't have total insight, but at least you know who's interested in this space. And certainly with your collaboration with Valve and Tobii, I look forward to getting more information about the Galea headset and seeing what a company like Valve is really investing in and trying to tinker with from a gaming and experiential context. I'm really excited to see where this all goes, and for people to have the ability to hack their own minds, but also for us to come up with the larger ethical frameworks that prevent other people from hacking our minds. I still think the big question is the mechanics of how we own our own data. What does that actually look like? What are the laws that need to overturn the third-party doctrine, meaning that any data you give to a third party, the government can get access to? There are all these complications in terms of how our current regulatory system isn't really set up for owning your own data, even with cloud services or anything like that. So hopefully we get a future with this private data wallet, with a Jiminy Cricket AI that's able to take all this information and let us have the agency to do what we want with our bodies and our data, while at the same time protecting us from people doing bad stuff to us. There are a lot of ethical discussions still to be played out in this area, but I'm glad that OpenBCI and what you're doing is providing a real counterpoint to a lot of decisions that might be made that would eliminate your existence. I would rather you exist and be a counterweight to all these other things that are out there, with that DIY spirit, the hacker ethos. I think it's a good part of the dialectic to try to figure this out. It's not an easy thing to figure out, and I'm glad that you're immersed in it and bringing your perspective and input into these larger debates.

[01:01:20.081] Conor Russomanno: Thank you, Kent. That means a lot. And also, with regards to Rafa Yuste and the kind of back and forth we had: while I don't agree with him on everything, I have so much respect for him for spearheading the conversation and keeping people talking about it. A huge shout-out to him for really driving the conversation that we had a couple of weeks ago. We need more people like that who are making people have hard conversations and trying to figure out what the best way to do this is. So let's keep that going. Let's keep that energy there, and keep trying to bring together the people who are willing to have the tough conversations, and try to get some of the big players to speak up and actually have an opinion on a public stage.

[01:02:00.149] Kent Bye: That would be nice. Yeah. Awesome. Well, thanks again. It's an exciting area; it's sort of the best of worlds and the worst of worlds in terms of how this can go. I'm going to hold the possibility that we're going to reach the most exalted potential, but I also know that there's a lot we have to go through in terms of overcoming the bad stuff, the abuses, and the harm. Just like all of life, there's going to be a mix of things. We'll see how it turns out, but I'm excited about where all this can go.

[01:02:24.943] Conor Russomanno: Yeah. Kent, thank you so much. This has been awesome.

[01:02:28.093] Kent Bye: So that was Conor Russomanno. He is the co-founder and CEO of OpenBCI. So I have a number of different takeaways about this interview. First of all, there was a very provocative statement saying that we already have devices that are reading and writing to our brains, in the sense that it's not direct neural interfaces, but closed-loop software systems that are able to detect our attention and our behaviors, sometimes by looking through the camera, and apps that are able to filter all sorts of different content in terms of what we're seeing and when we're seeing it. When it comes to actually writing directly to the brain, I do think there's a bit of a difference there in terms of what is even possible. There's a distinction that came up a number of times here, which is the differentiation between brain and mind. The brain is a biological and physiological thing that you can measure. The mind is something that is actually referring to consciousness, which is not really well defined in terms of how to measure or describe it. And there are still questions philosophically as to whether or not it is computable; Roger Penrose and Stuart Hameroff, for example, are saying that maybe there are quantum effects happening at the microtubule level, and that theory of consciousness goes beyond what you get from just looking at the neural correlates of consciousness. The neural correlates of consciousness are by far the most popular approach within the existing philosophies of mind, and from that, you could say that maybe we'll get to the point where we can just look at the actions that are happening in the brain and extrapolate what's happening in the mind. That's the goal: as all this information and data comes in, to be able to extrapolate by combining all these different things. Now, of course, there are all sorts of risks to our mental privacy if this is indeed possible. What happens to the data? Do we own that data? Is that data going to be used to undermine our sense of identity or our sense of agency? This is why Rafael Yuste is trying to put forth a number of different human rights frameworks with the Neuro-Rights Initiative, saying that we have the right to mental privacy, the right to identity, the right to agency, the right to be free from algorithmic bias, and the right to fair and equitable access to these technologies that are able to do this type of neuromodulation. What I got from this conversation was that Conor's actually okay with the data being treated as medical information, but thinks the actual hardware should be more freely available. I don't know if there's a precedent for that, for consumer-available technologies that produce medical information where that information is protected at the level of HIPAA. There are a lot of consumer devices that are measuring EMG and other physiological markers. But I think the larger point is that no matter how you set up the regulatory framework, even the medical system is not free from the ways in which it could be corrupted by greed, so it may not be a full protection, I guess, is what his point was.
There's this desire to be able to use this information to really live into your potential: tracking your own data, creating whole new models of healthcare, providing that information to your doctor. Again and again, I come back to the concept of the contextual integrity theory of privacy from Helen Nissenbaum. I have a whole series of interviews that I think I'm going to be diving into over the next couple of episodes, including an interview I did with Helen Nissenbaum, talking about how her theory of privacy is all about the appropriate flow of information based upon the context: whatever hands the data are going into, they should be used in a context-appropriate way. But right now there's a real lack of any sort of audit trail for how these data are being used. Within the virtual reality community, a big topic is advertising, and how Facebook has announced that they're going to start doing advertising within their virtual reality experiences. On the face of it, the phenomenological experience of that may not be that horrible. I think the real issue, though, is that it creates a new context in which all this data that are being collected are going to be fed into the process of trying to detect what we're paying attention to and to build psychographic profiles on us. There's the whole attention economy, and the economics around how data are kind of like the oil of the 21st century. If you can get good enough data to make predictions about people, then you can start to do advertising to shift behaviors, what Jaron Lanier calls behavior modification. More or less, you have not only brain-control interfaces but mind-control interfaces: your mind is controlling it, but it could also be doing mind control onto you. It works both ways in that context. So there's a lot of potential here, but when I hear Gabe Newell talk about this stuff, he starts to get into the idea that we can rewrite our personality, that, just like with software, we'll be able to shift these fundamental and essential parts of our character. I'm personally skeptical of those visions of where this technology could go. But at the same time, we have to be diligent and cautious in terms of what guardrails need to be in place as we move forward with these neurotechnologies and as they integrate with immersive technologies. We can already look at what has happened with social media, especially when it comes to operating at scale. When these things operate at scale, what are the ways to ensure that it's being done ethically and that there's not going to be abuse of the data that are being made available?
So I think that's still the big question, and I'm not sure if there are any clear answers here. This is just one voice from someone who comes out of that consciousness hacking ethos, the DIY, do-it-yourself approach with the Lego bricks of neuroscience. When you look at what OpenBCI has been able to do in terms of all the different companies, they've really been seeding the entire industry with these kits to tinker and experiment and see what's even possible, especially when you start to fuse together all these different sensors. Like Conor said, he's not as interested in EEG just by itself; when you start to fuse these things together, like pupil dilation combined with event-related potentials within the brain, you can start to do additional correlations to see what's happening in your brain state and whether that's traceable. Now, this is where I get into a whole other aspect of consciousness, which is that it's an unfolding process that is context-dependent, and the context is constantly shifting and changing all the time. So if you are taking some sort of historical marker of what was happening, you may be capturing a moment in time, but that moment may be dependent upon the context; there may be other factors at play. As for the idea that you could just dial in what you want to experience at any moment across the schedule of your day, there are going to be lots of other things emerging that are outside of your control, so there's a limit to our free will in some ways. And to what degree we're able to use technology to mediate our attention and our consciousness, I'm skeptical about how far that can actually go. I think there's a deeper part of intentionality, and a larger context, and these unfolding processes are just difficult to model. Even if past behavior gives us some indication, and even if you have a model for that, it's not clear you can really sink back in and recreate those different aspects of those states of mind. This gets into a lot of questions about the limits of what we can do with these interfaces between our consciousness and these technologies, and how we're having these feedback loop cycles. Once you get into the unconscious aspects, again, I go back to my experiences with this technology: having these traces of your agency, and really understanding your intentionality, even when you're not able to detect it, when you have a closed feedback loop system that is taking these unconscious signals and somehow feeding them into an artificially intelligent system that's trying to cultivate a certain type of experience. The degree to which that is successful, I think, is going to be largely phenomenological, and even Conor is saying that it's still susceptible to things like the placebo effect, where just believing that it's having an impact may be enough. There may not be other ways of independently verifying that this is anything more than what he calls the snake oil of BCI. So I think this type of discussion risks getting into that territory without really understanding the underlying science and the ways in which our intentionality and our consciousness are somewhat independent from just our neurological firings.
And how much can you do with direct brain stimulation and writing into the brain to create the context for these specific types of experiences? That's where I start to get more skeptical about the limits we're going to hit, about what's possible and what's not going to be possible. But even independent of that, just thinking about the aspects of privacy and mental modeling: even if the modeling isn't perfect, if it's good enough to make a decision, and you're able to see a statistical difference that gets you better than a 50-50 chance, then you're able to increase your odds. That's why Avi Bar-Zeev has written a whole article saying that this whole advertising model is kind of like gambling. There's an old adage within advertising that says, half of my budget is wasted, but I don't know which half. In other words, there's a whole aspect of gambling, where advertisers are putting money out there without knowing what's going to be effective and what's not, but it's enough of a success rate that it continues to work. If you have that kind of metaphor, then any way they can improve those odds through all this additional data and information is going to create an asymmetry of power. It's going to be like walking into a casino and trying to play against the house when the house has the odds in its favor. And that's going to apply to all these different persuasive technologies, advertising and everything else. There are a lot of discussions happening about this in the larger community right now, and as we move forward into the future of neurotechnologies, this is only going to become a more and more pressing issue. How do we address it? Should there be new legislation? Do we trust the companies to self-regulate? Do we have to wait until something goes horribly wrong before we do anything about it? Or are we going to focus on the positive aspects of what we can get out of this and hope for the best that it's not going to be used to directly manipulate or control the population? This is the existential dilemma of all these immersive technologies: they're extremely powerful for the pro-social aspects, but in the wrong hands, they could also be used to really wreak havoc on a society.

So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
