#812: Neuroscience & VR: Neurable Brain-Computer Interface & Telekinetic Presence

Neurable is building a brain-computer interface that integrates directly into a virtual reality headset. Neurable uses dry EEG sensors from Wearable Sensing, which means it doesn't require gel to get a good signal, making it a lot more user-friendly than gel-based BCIs. I had a chance to try the demo at SIGGRAPH 2017, which was showing off what Neurable refers to as "Telekinetic Presence." It is the closest thing I've ever experienced in VR to having the technology read my mind: it runs a calibration phase to detect the brainwaves associated with intentional action, and once it's trained, it's just a matter of looking at specific objects in a virtual environment and then experiencing a state of pure magic when it feels like you can start to move objects around with your mind alone.

Neurable CEO Dr. Ramses Alcaide suspects this type of magical, BCI mind-control mechanic is going to be a huge affordance for what makes spatial computing unique. He said that the graphical user interface plus the mouse unlocked the potential of the personal computer, and that the capacitive touchscreen unlocked the potential of mobile phones. He's hoping that Neurable's BCI can help unlock the potential of 3DUI interactions with virtual and augmented reality. I had a chance to catch up with Alcaide at SIGGRAPH 2017, where we talked about the design decisions and tradeoffs behind their BCI system, their ambitions for building the telekinetic presence of the future, and their work on an operating system for spatial computing that aims to create a world without limitations.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series of looking at the future of neuroscience and VR, today I talk to Dr. Ramses Alcaide, the CEO of Neurable, about the brain-computer interface that they've created. They were actually showing a demo of Neurable back at SIGGRAPH 2017. It was kind of like this experience trying to give you this sense of telekinetic presence. So you're able to train it by thinking a specific intention in your brain, and then as you think those words or thoughts, you're able to pick up objects and move them around. I had a little bit of difficulty with the demo, although I think it did eventually start to work. I was one of the first people to go through the experience, and I did have a small taste of what it felt like to start to move objects around with your mind, which is really, really cool. So looking at the future of brain-computer interfaces, maybe using our minds to make actions happen within immersive technologies is going to be the equivalent of the graphical user interface and the mouse with a 2D screen, or capacitive touch with mobile phones. Maybe your mind and these BCIs, these brain-computer interfaces, are going to be the interface of the future for spatial computing, for virtual and augmented reality. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Ramses happened on Wednesday, August 7th, 2017 at the SIGGRAPH conference in Los Angeles, California. So with that, let's go ahead and dive right in.

[00:01:43.158] Ramses Alcaide: My name is Dr. Ramses Alkaid. I'm the CEO of Neurable. A little bit of background of our company is we make brain-computer interfaces for virtual reality and augmented reality. And really, it's kind of the next step in brain-computer interfaces and creating that next interface solution for computing.

[00:02:00.599] Kent Bye: Yeah, and with most brain-computer interfaces that I've seen, usually there's a requirement for some sort of gel that you put on in order to increase the contact. But you've been able to figure out some way to avoid having to put gel both on the contacts and in your hair. So maybe you could talk a bit about that technology that you're deploying here with your headset.

[00:02:20.151] Ramses Alcaide: Yeah, there's two parts to that. The first part is the electrodes are from a company called Wearable Sensing, and they just make fantastic hardware. The second part is our machine learning and our filtration pipeline. You know, we just have a lot of experts that really understand how to work in areas with difficult signal-to-noise issues. And because of that, you know, we're able to overcome a lot of those challenges.
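
Neurable hasn't published the details of its filtration pipeline, but a minimal sketch of the kind of pre-processing a dry-electrode EEG stream typically goes through (band-pass filtering plus mains-noise removal) looks something like this. The 250 Hz sample rate, seven-channel count, and cutoff frequencies below are illustrative assumptions, not Neurable's actual parameters:

```python
# A generic sketch (not Neurable's actual pipeline) of the filtering step
# dry-electrode EEG systems typically need: a band-pass filter to strip
# slow drift, plus a notch filter for mains interference.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250.0  # assumed sample rate in Hz; real hardware varies

def clean_eeg(raw, fs=FS):
    """Band-pass 1-30 Hz and notch out 60 Hz mains noise."""
    b, a = butter(4, [1.0, 30.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, raw, axis=-1)
    bn, an = iirnotch(60.0, Q=30.0, fs=fs)
    return filtfilt(bn, an, x, axis=-1)

# raw: channels x samples array from the headset (placeholder data here,
# 7 dry channels, 10 seconds)
raw = np.random.randn(7, int(10 * FS))
filtered = clean_eeg(raw)
```

Zero-phase filtering (`filtfilt`) is a common choice here because it avoids shifting the waveform in time, which matters when downstream classification depends on waveform shape.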

[00:02:43.098] Kent Bye: So in the demo that you're showing here at SIGGRAPH, you're able to do some training exercises and then eventually start to move objects with your mind, which, when it works, feels quite magical. So maybe you could talk a bit about what you're able to do on the back end in order to even make that happen.

[00:02:58.474] Ramses Alcaide: Yeah, sure. So there is a brainwave that corresponds to intention. And so what we do is we go through a short one minute calibration to identify this brainwave. And then once we do, you know, you're able to grab objects, throw them inside the actual experience. You can stop lasers like Kylo Ren with your mind. It's pretty cool. And then you can transform a robot dog to a balloon animal. So there's a lot of things that you can do with it. You can kind of think of it as a brain mouse. So think about how many interactions you do in a game with a mouse. We can do the same thing, but using your brain activity.
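
To make the "brain mouse" analogy concrete, here's a toy sketch of how a thresholded intention score could be turned into the equivalent of a click on whatever object the user is looking at. The function names, threshold value, and `GazeTarget` type are all hypothetical, not Neurable's SDK:

```python
# Toy illustration of the "brain mouse": an intention score from a trained
# classifier, combined with the currently gazed-at object, becomes the
# equivalent of a mouse click. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

CLICK_THRESHOLD = 0.9  # assumed confidence cutoff for firing a "click"

@dataclass
class GazeTarget:
    object_id: str  # id of the object the user is currently looking at

def brain_click(intent_score: float, target: GazeTarget) -> Optional[str]:
    """Return the id of the 'clicked' object when intent is strong enough."""
    if intent_score >= CLICK_THRESHOLD:
        return target.object_id
    return None

# Example: a 0.94 intention score while looking at the robot dog fires the
# same event a mouse click on that object would; a weak score does nothing.
assert brain_click(0.94, GazeTarget("robot_dog")) == "robot_dog"
assert brain_click(0.42, GazeTarget("robot_dog")) is None
```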

[00:03:28.515] Kent Bye: Well, in the experience that you have here, you have people repeat in their mind the word "grab" with intention. You could have any number of verbs. Does it matter what word you pick, or is there a certain higher saliency with the word "grab"?

[00:03:42.003] Ramses Alcaide: No, and it's really not even associated to the word grab. We ask them to do that because we want them to maintain their attention so that we can get good training data. Once you get past the training phase, you can actually just consciously want it, and it'll trigger. And it becomes a far more natural experience. So really, we just do that for training.

[00:03:58.701] Kent Bye: Well, I guess what used to be called the NeuroGaming Conference has since rebranded to the Experiential Technology Conference. And I think that was in part because they wanted to get away from real-time interactions, because the technology just was not there yet. It was kind of disappointing to think that you're going to go in and start to do real-time interactions with your brain. And then I think they've been shifting more towards looking at brainwaves over periods of time to be able to discern different health issues, or maybe education and focus and meditation, these types of applications that are less about having instantaneous real-time interaction. So, for Neurable, do you see that you're going in that direction of real-time interaction with the brain?

[00:04:40.393] Ramses Alcaide: For sure, and that's what we're doing right now. It's all in real time. And yeah, one of the biggest difficulties is actually the fact that consumers are so used to really crappy experiences with brain-computer interfaces, and they're very skeptical about it. But once they use our system, that changes. And part of our challenge is going to be showing off our system as much as possible, to show people that the future of brain-computer interfaces isn't 10 years from now. It's actually right now. You just need to have the types of insights and the quality of work that we're able to provide.

[00:05:10.670] Kent Bye: Can you talk about some of your direct experiences of being able to manipulate virtual worlds with your mind?

[00:05:16.154] Ramses Alcaide: Yeah, I mean, it's really a powerful feeling, you know. Sometimes I prefer to be inside the virtual world rather than the real world, because you have this telekinetic presence that just makes you feel like Professor X or Magneto or the girl from Stranger Things. And, you know, I know I'm saying this, and just to give some background, I do have a PhD in neuroscience and BCIs, and I've been doing this for a long time. So even I was a skeptic. I've worked a lot on this, and I can understand where people's hesitations are, but once you do it, you really understand the difference between our technology and anything else that's out there.

[00:05:48.656] Kent Bye: Well, I think that, you know, in some ways, when you look at the future of interfaces and having a direct interface, what is the extent, the degree, to which you can control things? Because right now you're doing a high level of intention. How far can you break that down? Does it start to get frustrating? I just think that when I look at gesture control, there can be this thing that reminds me of the early days of speech recognition, which is that if it's 90% correct, then that 10% actually gets really, really frustrating. And in some use cases, it's just easier to transcribe it yourself. Or in the case of human-computer interaction, you might as well just have a button there that you're pushing that you know is going to work 100% of the time rather than 90% of the time. So maybe you could talk a bit about that trade-off, and then how far you could take it in terms of the granular control that you can do with your brain, and how you break that down and think about that.

[00:06:45.457] Ramses Alcaide: For sure. I mean, I've been working on this technology for about 13 years, and now we finally think it's at the point where it's consumer ready. And this is just gen one of our consumer version. So we're constantly making new interaction methods, new brainwaves that we're looking at, and we're increasing the feature set consistently. It's kind of an office joke: every day we make great strides for mankind, you know, great leaps and bounds. So really, this is just the beginning of what we're trying to do. I mean, right now it's a very simple control mechanism, it's just a click. Right, a click, but just think about how powerful a click is right now. But the real thing that we see the value in, especially compared to hand controls or other controls like that, is the fact that when you look at the history of computation, the computer became the personal computer when we created the graphical user interface and the mouse. The phone became ubiquitous when we added the capacitive touchscreen. And so before you can get to the killer app, you have to have the killer interaction method. And for mixed reality devices that are going to become hands-free in the future, brain-computer interfaces are that killer interaction method.

[00:07:53.405] Kent Bye: The other thing I'm curious about is that it seems like, in order to control a computer with your brain, you have to have a certain level of skill and focused, contemplative thought. I mean, I wonder whether that's an issue for people: if they have a meditative practice, do they do better at this versus people who may have what the Buddhists call a monkey mind?

[00:08:12.818] Ramses Alcaide: Yeah, absolutely. But the cool thing about our technology is we're actually leveraging an incredibly robust brainwave. And so within just a minute of our calibration step, people have about 95% of the control that they want. And we've had people come into our booth that have had zero brain-computer interface experience, zero wellness experience, and even sometimes zero VR experience, and a combination of all three. And they've still been able to go through the experience and really get to use their mind in a way they didn't think was possible.

[00:08:42.038] Kent Bye: One of the approaches that's unique from the other brain control interfaces that I've seen is that there's this calibration phase where you're highlighting things that you're looking at, and then you are exerting your intention or thinking about a word as you see this thing illuminate. So you're correlating the visual connection to try to stimulate something in the mind and then use that as the training. And so maybe you could talk about that calibration process of matching the stimulus and then the response.

[00:09:08.023] Ramses Alcaide: Sure, so the calibration process, really what it does is we use visual stimuli. Now, we can use auditory or vibrotactile, any type of stimulation. But when you give people stimulation, it actually activates certain centers of the brain. And those centers of the brain, the activity that comes off of them, that's what we record and analyze. And that's what allows us to also cut through a lot of the signal-to-noise issues that exist. And so those visual animations (right now we're using flashing, but you can use animations instead) are really what allows us to get past a lot of the barriers that have stopped a lot of other companies.
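
Neurable's exact method is proprietary, but the calibration Alcaide describes (flash objects, record the stimulus-locked brain response, learn which responses correspond to the attended object) maps closely onto classic ERP-style pipelines. Here is a rough sketch under those assumptions, with an invented 250 Hz sample rate and 800 ms epoch window:

```python
# Generic stimulus-locked calibration sketch in the style of P300/ERP BCIs;
# not Neurable's implementation. EEG arrives as a channels x samples array,
# and `onsets` marks the sample index of each flash.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250                # assumed sample rate in Hz
EPOCH = int(0.8 * FS)   # 800 ms window following each stimulus

def epoch_eeg(eeg, onsets):
    """Cut one fixed-length epoch of EEG after every stimulus onset."""
    return np.stack([eeg[:, s:s + EPOCH] for s in onsets])

def fit_intent_classifier(eeg, onsets, attended):
    """Learn to separate responses to attended vs. ignored flashes."""
    X = epoch_eeg(eeg, onsets).reshape(len(onsets), -1)  # flatten chan x time
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, attended)  # attended: 1 if the flashed object was the target
    return clf

# At runtime, each new flash is scored the same way:
#   clf.predict_proba(new_epoch.reshape(1, -1))[0, 1]
# gives a confidence that the user intended to act on that object.
```

The "short one-minute calibration" mentioned earlier would correspond to collecting enough labeled epochs for a classifier like this to generalize.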

[00:09:42.971] Kent Bye: And from the other companies that I've talked to that are in this BCI and VR space, they seem to be starting in the medical field because there's a lot of opportunity for rehabilitation and possibly addressing neurodegenerative diseases. But I'm just curious if you find that you're going to be really focusing on specific use cases within medicine or if you're going for a broader consumer launch.

[00:10:03.801] Ramses Alcaide: So our technology actually started in the medical space. My PhD was on helping children who have cerebral palsy communicate with caregivers through their brain activity. So we did a cognitive assessment test. I also did this work for people who have ALS. And so medical is something that we're really attracted to, but what's core to our message, which is that we're trying to create a world without limitations, is to bring this technology out to as many people as possible, to make it a consumer product. And then that way, whether you have a severe disability or not, that's just the way people interact with the world. And so it creates this even playing field, a world without limitations.

[00:10:40.276] Kent Bye: Another application I've heard for BCIs is being able to detect cognitive load to be able to do specific education applications or to have some feedback into changing the experience so that there's less stimulus within the experience that could be with other types of human-computer interaction. But in terms of education, I'm curious to hear what specific brainwaves you might be able to detect and how that could be used within an educational context.

[00:11:05.565] Ramses Alcaide: There's a lot of educational applications. I mean, just in general, the technology has a lot of applications by itself. The brainwave that we look at is actually also correlated to intelligence, so we're able to pick up information that would be valuable for teachers in that area. But at the same time, it's also correlated to attention, and it can also be used to keep people engaged in the activity that they're trying to do. So there are a lot of areas there that would benefit from it. We're primarily focused on the larger market, and we expect groups that are in education would come and license our technology for those directions.

[00:11:40.582] Kent Bye: I know that at F8 this year, Facebook was announcing some of their forward-looking research of being able to eventually use BCI to be able to directly communicate, so being able to discern your thoughts. Where is the state-of-the-art for being able to actually read what you're thinking about?

[00:11:57.178] Ramses Alcaide: What I would say is, when I saw that, that was one of the most awesome days of my life. I really hope that they're able to accomplish that goal. It would be a huge boon for humanity. Right now, the most cutting-edge technology that exists is Neurable's technology. And as sensors improve, which is what Facebook is really working on, the sensor side, our technology can take advantage of those things as well. We'll be able to transfer a lot of our machine learning platform and further increase the capabilities of what's possible. And so I really look forward to that. I know it's a very lofty goal. It's something that researchers have been working on for over 40 years. So, you know, it's not something new, but, you know, it's going to be an overall win for everyone.

[00:12:41.567] Kent Bye: Well, there's also a number of people who are looking at more invasive, direct neural interfaces, being able to interface with your brain through some sort of neural lace or something that you may be implanting into your brain. I'm just curious to hear your thoughts in terms of where Neurable is going to draw the line in terms of invasive versus non-invasive BCI.

[00:13:00.217] Ramses Alcaide: Yeah, I mean, the line for us right now is in the non-invasive. But as invasive technologies start to become available, our technology transfers through as well, too. And so really, we can leverage sensors across different depths of the brain and pick up the same signals and even use the same pipeline we have. Really, what our machine learning pipeline does is it's really good at dealing with high noise. So any type of application where that's the case will be fine.

[00:13:26.224] Kent Bye: Yeah, and talking to Sarah Downey at CES, she talked about Neurable and, you know, one of the things that these brain-control interfaces bring up: these deeper ethical and philosophical issues around privacy and the extent to which this data is recorded and connected to your personal identity. And so I'm just curious to hear, from your perspective, how do you navigate these various ethical issues when it comes to the intimacy and information that you might be able to get from somebody based upon a combination of what they're looking at and what they might be thinking, or some brainwaves that are declaring different levels of intention? And if there's things around not recording that information, or how you navigate these difficult issues of ethics and privacy within VR.

[00:14:09.598] Ramses Alcaide: For sure. Really, we don't have that big of a problem with it. We use electroencephalography, which is very basic sensors that pick up averaged activity that propagates from the brain, and we record it from your scalp. And so we're not able to pull out any identifying information or anything like that. So at least on our end, we don't have any of those problems. But 10 years from now, or as things become more invasive, it is going to be a question that we have to answer ahead of time.

[00:14:35.182] Kent Bye: In talking to Conor Russomanno, one of the things he said is that at some point it may turn out that our brainwaves have some sort of unique digital fingerprint, such that it may not be identifiable right now, but eventually, at some point, it will potentially be able to be unlocked if we figure out this unique fingerprint that each person has, because there's unique ways that each of our brains fire neurons. Given that, I'm just curious to hear: right now, recording and storing EEG data may not be personally identifiable, but what happens if, sometime in the future, with the right algorithm, you might be able to unlock someone's identity?

[00:15:12.371] Ramses Alcaide: That's a great question, and, you know, Conor's an awesome person, and he brings up a good point. There's a lot of groups out there who are working with EEG to do something similar to what you're describing. If you truly look at the literature and the papers that they've been putting out, what you'll notice is that I think the media has been portraying the results very differently from how a scientist would interpret them. There's still a long way to go for that. And if we do get to that point, I mean, we're going to have to collect thousands and thousands, probably hundreds of thousands, of brain recordings and use very overfit systems to get that kind of resolution with EEG.

[00:15:50.246] Kent Bye: Can you talk a bit about what you're using on the back end in terms of these artificial intelligence or machine learning techniques, in order to raise that signal-to-noise ratio beyond what you might get without using those techniques?

[00:16:02.009] Ramses Alcaide: Sure. So most people, when they analyze brain activity, they use frequency analysis. We don't use frequencies for our brain analysis. Telling the brain that everything you do is based off a specific frequency is like telling a circle that it has 360 degrees. It wasn't until we used a system that respected the circle, radians, for example, that we were able to do far more complex things in mathematics and in engineering. And so with our machine learning pipeline, we do a very similar thing, where instead of telling the brainwave, oh, you're a specific frequency, and when this frequency goes up we're going to do X or Y, we actually look at the overall shape. We look for shapes within the brainwaves, and they tell us what they mean. And that's a far more natural way of letting people interact with things.
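
As a rough illustration of the distinction Alcaide is drawing, here are two generic feature extractors: a conventional frequency-band power measure and a time-domain "shape" score that compares an epoch against a learned waveform template. Both are illustrative sketches, not Neurable's pipeline; the sample rate and band edges are assumed:

```python
# Contrast between frequency-band features and waveform-shape features,
# as a generic illustration (not Neurable's actual analysis).
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sample rate in Hz

def band_power(epoch, lo, hi, fs=FS):
    """Classic approach: average spectral power inside a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def shape_score(epoch, template):
    """Time-domain approach: normalized correlation between an epoch and a
    learned waveform template (e.g. an averaged intention response)."""
    e = (epoch - epoch.mean()) / (epoch.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.dot(e, t) / len(e))
```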

[00:16:48.761] Kent Bye: I think one of the really exciting potentials about using BCIs within VR is that you have the potential to do real-time biofeedback, to be able to look at what's happening in your brain. And, in a lot of ways, you have this ability to potentially get into these flow states. So Mihaly Csikszentmihalyi talks about these states of flow, and in order to get into a state of flow, there's a certain challenge that you're facing that is matching the skill that you have. And when you get into those flow states, then you have either higher levels of learning or creativity. So I'm just curious if there is a specific signature from the brain, from a neuroscience perspective, that you've been able to isolate for what that flow state looks like, whether Neurable's headset might be able to detect it, and if there are ways to help cultivate that within people.

[00:17:34.533] Ramses Alcaide: For sure. I mean, there's some great work that's being done in that area. One professor, for example, from the university I came from, Rachel Seidler, looked at experts versus non-experts and looked at their brain activity. And there are distinct signatures and brain formations that happen because of that kind of expertise. When it comes to Neurable's technology, we're not able to get that level of resolution. We probably could with a higher density system, but we're very focused on consumer, and so that's not really where we're focusing right now.

[00:18:05.512] Kent Bye: Well, I know that with these different EEGs, if you look at OpenBCI, there's different configurations of electrodes that you're really focusing on. What are the spots on the brain that Neurable's focusing on? If there are different regions and numbers, for someone who wants to perhaps make an application, what contact points are you actually making?

[00:18:26.093] Ramses Alcaide: Yeah, we're working with the frontal lobe and also with the parietal and the occipital lobes. So there's a circuit there that occurs. It's a cyclical circuit and that's really where we're taking the data from and using that as our actual analysis points.
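
Neurable hasn't published its exact electrode placement, but a hypothetical seven-channel montage spanning those three regions, written in standard 10-20 names, might look like this:

```python
# Hypothetical 7-channel montage covering frontal, parietal, and occipital
# sites in 10-20 notation; not Neurable's published layout.
MONTAGE = {
    "frontal":   ["Fz", "F3", "F4"],
    "parietal":  ["Pz", "P3", "P4"],
    "occipital": ["Oz"],
}
```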

[00:18:42.788] Kent Bye: And from what I've also heard from other people who are in the BCI field, there's this trade-off between the complexity of the number of points that you have on your brain and the user experience. Like, in order to get the highest resolution, you degrade the user experience. So, as you move forward, how do you navigate that trade-off between comfort, the number of sensors, and the resolution that you're able to get from the brain?

[00:19:06.269] Ramses Alcaide: This is actually something I'm really proud of my team about. We started with 256 electrodes with gel, so it used to take like an hour to set up sometimes. Eventually we went to 32 electrodes, and then 16, and now we're at 7, and they're dry. And the next version of our headset is going to have even fewer than that. We're trying to shoot for about two electrodes. And so yes, there are challenges, and you lose a lot of data as you go smaller and smaller with your electrode sets. But that has just been a challenge for us to make a better and better machine learning pipeline.

[00:19:38.437] Kent Bye: Great. And so what are some of the biggest open questions that are really driving your work here at Neurable forward?

[00:19:45.415] Ramses Alcaide: Really, it comes down to, at least in our work, creating the next operating system. We have a lot of great neuroscientific insights, a lot of great techniques and brainwaves that some people don't even know exist at this point. And so we have a lot of ways of using them, for example, to navigate through menus, scroll, swipe. But what we're trying to do right now is create that operating system for mixed reality devices. How do you create the best one? Those are really the more challenging questions. We've solved a lot of the big neuroscientific questions. Obviously, there's always a team working on speed and accuracy. But now we're trying to say, how would this be the OS that people want to see in the future?

[00:20:22.933] Kent Bye: Great. And finally, what do you think is the ultimate potential of virtual reality, and what it might be able to enable?

[00:20:31.437] Ramses Alcaide: The ultimate potential right now is really when we're talking about augmented reality and IoT. And the reason for that is, you know, people have said IoT really hasn't taken off that much, or AR still hasn't really taken off that much. And the thing is, when brain-computer interfaces come into the mix, that's when the whole world is going to connect. That's when you're going to be able to walk into a house, and then if you need to change things or edit things, you'll just be connected through your AR devices, your visual platform, and then use your brain to connect to everything. So really, the ultimate potential is this ubiquity. This ubiquity that right now you have a pseudo version of on your phone, but that's nothing compared to how natural and telekinetic it can really become in the future.

[00:21:15.688] Kent Bye: Great. And anything else left unsaid you'd like to say?

[00:21:19.062] Ramses Alcaide: The only thing I would say is we're going to be showing this technology off as much as we can. I invite all of you to come check it out. We really want to change the mental frame of what people think BCIs can do. And with that, I want all of you to join us in creating a world without limitations.

[00:21:36.307] Kent Bye: Awesome. Well, thank you so much for joining me today, Ramses.

[00:21:39.107] Ramses Alcaide: Thank you, Kent.

[00:21:40.428] Kent Bye: So that was Dr. Ramses Alcaide. He's the CEO of Neurable. So I have a number of different takeaways about this interview. First of all, back in 2017, Facebook had announced that they had been working on these BCIs to be able to essentially read your mind. And I've been seeing a lot more advancements in that field, especially when I went to the future of neuroscience and VR workshop put on by the Canadian Institute for Advanced Research. There was a researcher there who was talking about some of the work that he's been doing on being able to actually read your thoughts and read your mind, and he kind of gave the technological roadmap for the next five to ten years. And it's getting super sophisticated in being able to actually determine specific words that you're thinking in your mind. Now, at this point, that's using a lot of stuff with ECoG, which is an invasive technology where electrodes are placed directly on the brain. That's for people who have different neurodegenerative diseases or some sort of cognitive impairment, where they're willing to take that extreme of a step of invasive technology put into their brain. But the point is that machine learning and all the different non-invasive techniques are getting better and better, and I think we're going to start to see a lot more brain-computer interfaces that are able to do things that feel quite magical. The demo that they were showing there at SIGGRAPH gave you this whole experience of trying to get this sense of telekinetic presence. It feels like you're kind of psychic. You're able to look at objects, have an intention in your mind, and then pick up the objects and move them around within these spaces. So really cultivating this sense of telekinetic presence, maybe this is going to be the future of interfaces with spatial computing: it's going to be commonplace to put these things on our heads and find ways to detect our brainwaves. And that's the thing that I think Neurable is doing with these specific dry sensors from Wearable Sensing. And, you know, they have about seven of them on their existing prototype frames. He said that they had started with 256, went down to 32, then 16, then got down to seven, and then potentially are going to have even fewer. And the more, the better in terms of being able to have higher fidelity, higher resolution, and be able to do more interesting things. But then there's this trade-off: the more sensors you have, the more of a pain it is for the user experience, making sure that you have it on just right. So I think they're still trying to find this sweet spot. It seems like these different types of BCIs are going to have a really big application in something like the medical field, because it does cost quite a bit to get all this extra sensor technology, and there seem to be a lot more compelling use cases and problems to be solved within the medical field by having this fusion between what's happening in your brain, your EEG, and what you're able to show within an immersive experience. I don't expect to see much of a huge consumer play, especially with the price points moving away more and more from PC VR towards things like completely mobile, untethered Oculus Quest-type systems. So I don't think that the consumer market is going to necessarily be big enough.
So it'll be interesting to see where Neurable is going; they really wanted to have this "world without limitations" motto and bring that to the masses. But we'll see where that goes. He said that they were working on operating systems and doing some of the more mundane aspects of navigating around an operating system, which to me says that they are thinking about how the BCI could become more of a fundamental user interface. There's a startup called Eyefluence that ended up getting bought by Google a number of years ago. I saw a demo at TechCrunch Disrupt and published an interview with them, but that was basically using eye tracking data to be able to move around, as more of an operating system layer. That was absorbed by Google, and I haven't seen much about it since, though it could be continuing on in some of the research into these different advanced 3DUI or BCI interfaces, but using the eyes and eye tracking technologies. So that, to me, is the most important work that I've seen in terms of the operating system layer. But I expect that something like this is going to have much more application in the medical field. So we'll see what happens with Neurable as they move forward and where they end up going. I think it's kind of a compelling concept and idea: having something that is easy to put on, able to establish a pretty good connection, hitting that sweet spot of having just enough sensors while being easy to use. Neurable seems to be right around that sweet spot. I think it's just more a matter of what specific use case, what problems they're going to be solving, and what market they're really going to be focusing on that is going to be key to what ends up happening with Neurable. So again, this interview was done over two years ago at SIGGRAPH, and I hope to catch up with them again at some point, but they're definitely a big player in this field at the intersection between neuroscience and the future of immersive technologies, as well as the advancement of BCIs, brain-computer interfaces. I'm seeing more and more talk about BCIs, and I think that Ramses is onto something when he's talking about the interface of the future. You know, when you look at personal computers and the graphical user interface and the mouse, as well as mobile phones and capacitive touch, I do think that there's going to be something that needs a radical new user interface. Maybe it is a combination of eye tracking as well as different aspects of brain-computer interfaces. But I think all of that is still yet to be fully fleshed out. We're still at the very early phases of that, and as the technology continues to improve, then we'll see where it all ends up. But I think this concept of telekinetic presence, and cultivating our own sense of making it feel like we have these psychic powers as we're in these immersive spaces, it's pretty alluring to think about the future of immersive technologies, both with VR and AR, that we're going to be using our minds to be able to interface with reality in that way. So I'm excited to see where that ends up. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast and you'd like to help out, there's a number of things you can do.
First of all, just spread the word: tell your friends, share different episodes on social media, and send them to individuals who you think might like them. Just spreading the word about the podcast helps the podcast grow; it's a major factor in how I've been able to continue to exist as an entity, just through word of mouth. Also, if you'd like to support the podcast, again, this is a listener-supported podcast, and so I do rely upon donations from listeners like yourself in order to continue to sustain this podcast, to be able to pay for my own livelihood, to be able to travel around and continue to record these interviews and document the real-time oral history of the evolution of spatial computing. And if you'd like to see that not only for yourself, but also for future generations to look back on this turning point in history, then please do become a supporting member. Just $5 a month is a great amount to give to be able to continue to sustain the work that I'm doing, and then potentially, if enough people give, to be able to expand and grow it as well. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
