#582: The Future of Invasive Neural Interfaces & Uploading Consciousness with Ramez Naam

Ramez Naam is the author of The Nexus Trilogy sci-fi novels, which explore the moral and sociological implications of technology that can directly interface with the brain. He gave the keynote at the Experiential Technology Conference in March, exploring the latest research on how these interfaces could change the way that we sleep, learn, eat, find motivation to exercise, create new habits, and broadcast and receive technologically-mediated telepathic messages. I had a chance to catch up with him after his talk, where we survey existing technologies, where the invasive technologies are headed, the philosophical and moral implications of directly transferring data into the brain, and whether or not it'll be possible to download our consciousness onto a computer.



This is a listener-supported podcast. Consider making a donation to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So on today's episode, we're going to be diving deep into the rabbit hole of direct neural interfaces and whether it's going to be possible to have a technological interface to download our consciousness or to transmit information in and out, or whether or not we're going to be able to jack into the Matrix and just teach ourselves things like Neo did in the Matrix. So I have this discussion with Ramez Naam, who is the author of the Nexus Trilogy. It's a sci-fi series that explores a lot of the technological and moral implications of this ability to directly interface with our minds through technology. So Ramez was actually giving the keynote at the Experiential Technology Conference and Expo in March, and just a couple weeks before that I had gone to the MIT Technology Conference where I saw Bryan Johnson speak. He founded this company called Kernel. And Kernel was trying to look at the issue of neurodegenerative diseases and starting there, because if people are already on this trajectory of losing the capacity of their brain, then they're much more likely to take severe actions to experiment with trying to figure out what technology can do to maybe stop or intervene in some way. And as more of these medical applications for diseases are implemented, then perhaps eventually the technology will evolve so that it goes from more of a bespoke system to address a medical issue into more of a consumer product, where everybody is able to jack into the matrix and upload their consciousness into some sort of internet cloud that allows us to achieve this level of immortality, which is sort of like this transhumanist dream that a lot of people, I think, are imagining is possible.
And then on February 21st, MIT announced that they actually have these tiny fibers that are opening up new windows into the brain, interfaces that allow genetic, chemical, optical, and electrical inputs and outputs in and out of the brain, so that they're coming up with newer and newer ways of interfacing with the brain. So this is a technological roadmap that is happening. And the question is, you know, how far is it going to go, and what are the other moral and ethical implications? And is it even philosophically possible to capture information and knowledge outside of direct experience in our bodies and our emotions and our ability to have agency and interact? And so we're gonna be talking about all these issues and technologies that make it possible to directly communicate with our brains through technology. So that's what we'll be covering on today's episode of the Voices of VR podcast. So this interview with Ramez happened at the Experiential Technology Conference and Expo on Tuesday, March 14th, 2017 in San Francisco, California. So with that, let's go ahead and dive right in.

[00:03:06.042] Ramez Naam: I'm Ramez Naam. I wrote a series of books called The Nexus Trilogy, and it's about having technology in your brain that lets us get data in and out. And if you and I both had it in our brains, Kent, we'd get sort of a telepathic connection. And while that's sci-fi, there's real stuff happening like that. We have 200,000 people with cochlear implants that give them hearing by directly stimulating their auditory nerve. We have people that are blind that have implants in their brain that let them see. People that are paralyzed have implants in the motor cortex of their brain that let them control robotics or cursors on screens. So we have shown proof of principle that we can get digital data in and out of the brain, and even between one person's brain and another. So that's what I'm really excited about.

[00:03:56.309] Kent Bye: Yeah, so I've been looking at virtual reality for the past three years, and there's some people that say, OK, well, our perceptual system is good enough. We should just be able to go directly through our existing eyes, ears, and our senses and be able to directly stimulate that. And then there's this other thread that I hear coming up, which is sort of like this direct neural injection of being able to kind of hack directly into the brain matrix style. And so I know that we're kind of at this crossing point where a lot of this technology is being used for medical purposes, and then once that's proved out, perhaps move into cognitive enhancement. So it feels like kind of a 50-year timescale that we're talking about, like, things that are happening today kind of projecting out into the future. So I'm just curious to hear some of your thoughts about, you know, what we can already do today with input and output with indirect and direct input into the brain.

[00:04:48.192] Ramez Naam: Well, it's just way easier if you don't have to put something inside the brain. So for the near future, for the next 10, 20 years, it's going to be stuff outside the brain. Unless you're paralyzed or blind or deaf or have a memory problem because of a traumatic brain injury; patients will be the ones that get the direct stuff. But I agree, our senses are amazing already. And so with VR, I'm incredibly excited as we get VR hardware that shrinks, that gets untethered, that gets low power, that gets high resolution. Like, I've got a Vive. I love my Vive. But the reality is, it doesn't have enough pixels. Maybe it needs twice as many pixels in each direction, 4x total. Maybe it needs four times as many pixels in each direction, 16x total. And it needs to weigh about one-fifth of what it weighs right now, and it needs to be untethered. So you're talking about, you know, a decade of development in VR until we really get to something like the glasses that Geordi wears in Star Trek, not for the blind, but glasses that give you a full-on VR experience. When we get there, I think it is world-changing. I think it is the new default medium for entertainment, certainly for gaming, for education, for training. But the invasive stuff that goes in the brain can do things that VR can't, right? In the lab, we have rats that have brain damage, and we're able to restore their full function. More than that, we're able to do things like record their memories and play them back to them a year later. We're able to transfer memories from one rat to another rat. They both have chips in their brains; one of them runs a maze, the second one doesn't. But because data flows from one chip to the other, it's like an "I know kung fu" moment: the second rat can still run the maze that it's never seen. In a different experiment, this was done with two rats in labs that are thousands of miles apart.
Because once the data is digital, we can send it anywhere. So that sort of stuff offers potential beyond VR to directly share our thoughts and memories, to share knowledge, share skills, share our emotions directly from brain to brain. But it's still the realm of sci-fi, primarily because brain surgery is hard, and it's dangerous, and we don't want it. So we've got to make the hardware easier. I said maybe we've got a decade of work on VR to make it really mass-consumer. We've got a few decades of work on direct brain implants to get to the point where you don't need to break open the skull. Like in my sci-fi, I make it easy: you swallow a silvery vial, and these nanoparticles get into your brain and self-assemble into little machines that attach to your neurons. Well, that's nice. That's a nice hand wave. To get to something like that, we don't yet know what work we have to do.
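The resolution math here scales quadratically: n times the pixels in each direction means n² times the total pixels to render. A quick illustrative Python sketch, where the 1080x1200-per-eye baseline is an assumption roughly matching a first-generation Vive panel:

```python
def total_pixels(width: int, height: int, per_axis_scale: int) -> int:
    """Total pixel count after scaling each axis by per_axis_scale."""
    return (width * per_axis_scale) * (height * per_axis_scale)

# Hypothetical baseline: roughly one eye of a first-generation Vive.
BASE_W, BASE_H = 1080, 1200
base = total_pixels(BASE_W, BASE_H, 1)

for scale in (2, 4):
    factor = total_pixels(BASE_W, BASE_H, scale) // base
    print(f"{scale}x pixels per axis -> {factor}x total pixels")
```

Doubling each axis gives 4x the pixels, quadrupling gives 16x, which is why "more pixels in each direction" is such a steep rendering demand.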

[00:07:28.936] Kent Bye: Okay, so let's talk a bit about the specifics of the direct types of brain interfaces that we're seeing already. I know that there's been a lot of news, like MIT announcing being able to basically have input and output into the brain on multiple different levels, genetically and chemically, and neural meshes. Maybe you could lay out the technological roadmap that you see, of what's already being proved out in the lab today and how you see that evolving over the next 50 years.

[00:07:58.238] Ramez Naam: Yeah, so what's proven out today is we have brain reading and brain stimulation via electrodes in the brain. So today, if you want one of these interfaces, let's say you're paralyzed and you want to control a robot arm, they open up a part of your skull and they put in, you know, a few dozen or a hundred electrodes that penetrate your brain. They're very, very small. The whole chip is like two millimeters across, and the electrodes are, you know, a tenth of a millimeter long or something. But it's still causing some damage. What's on the horizon are things that are less invasive and more powerful. DARPA has a plan, they've got a solicitation out to make implants that go underneath the skull but don't penetrate the brain, that can talk to a million neurons at a time instead of a hundred. So that would be a gigantic leap in fidelity, and that would let us really transmit full-fidelity sight and sound into the brain. We have Elon Musk, who wants to make a neural lace, and he's got something probably going on in stealth mode that we'll hear about later this summer. That's based on some science fiction concepts, but also based on real work at Harvard. Instead of cutting open the skull in mice, they've used a needle that still penetrates the skull, but it's an injection that injects what they call a neural mesh. It's a mesh of electrodes. It's all balled up in a roll inside the syringe, but it gets into the brain and it unfurls across the surface of the brain. There's another project like that at UPenn that uses a silk substrate. The electrodes are soft, they're flexible, and the silk biodegrades and sort of melts onto the surface of the brain. So again, they have to put an injection in, but it's a lot easier than full-blown brain surgery. So that's the type of thing people are working on now. It's not going to make it non-invasive. Non-invasive means you can put on your headphones, right?
But it can make it less invasive, where it's an injection instead of a surgery. And that'll open up the set of people who are willing to do it.

[00:09:57.996] Kent Bye: So right now, I think there's a lot of technologies here at the Experiential Technology Conference that are non-invasive, which, you know, can only get a certain level of fidelity. The more electrodes you put on your head, the more specific the information you can get. Can you talk about the differences you see between what you can achieve with state-of-the-art EEG and what these next-generation neural meshes are going to be able to do?

[00:10:21.024] Ramez Naam: Yeah. So with things like EEG or tDCS, transcranial direct current stimulation, you can do some pretty cool stuff. You know, people can steer wheelchairs with EEG. EEG picks up electrical signals from inside your brain, but from outside the skull. With tDCS, it looks like we can speed up learning a little bit, but there's a limit. Like, you can steer a wheelchair because it really just has turn right, turn left, forward, back. Pretty simple. You can't control a robot arm if you're paralyzed with EEG. And people that have tried to use EEG, again just a headset that you wear, to do things like game control, they've really struggled. There's just only so much data, because it's outside the skull; the signals are noisy, they're interfered with by the muscles in your head and so on. When you talk about processing inside the brain, you're talking about being able to control a third or fourth limb as fluidly as you control your current limbs. We're talking about being able to broadcast full sensory information of all five senses directly into the brain with as high fidelity as we have with our senses today. Touch, smell, taste. We're talking about being able to affect brain functions like sleep. Imagine being able to dial up sleep: you open up your sleep app and you say, I want seven hours and 30 minutes of sleep, and to wake up refreshed at this time, and it just works. You don't wake up in the middle of the night. Changing hunger, changing motivation to exercise, even stimulating your brain to instruct your body to change in certain ways. Broadcasting your thoughts. Our visual cortex is also where our visual imagination is. So imagine that you could just imagine a scene in your brain and it would just appear on a screen or be transmitted to my brain. That's the potential of really having electronics inside the brain. So it's very, very hard. It's challenging. It's dangerous. We're a long way off.
But when we get there, we're talking about a breakthrough.

[00:12:22.895] Kent Bye: Yeah, and it seems like, from some of the studies that I've seen, they're trying to maybe do cognitive enhancement. So they're trying to teach a specific task, and the things that they're able to measure right now seem like they could be related to motor skills. Maybe you could talk a bit about the types of learning that you could do in terms of cognitive enhancement, what they've shown in mice, but also how that might translate to humans, and what kinds of applications and things you could do in the brain.

[00:12:48.593] Ramez Naam: Well, so far, even with technology outside of the brain, with a headset that stimulates your skull a little bit, it looks like we can speed up learning rates maybe 10%, let's say. With stuff inside your brain, the cognitive enhancement can go dramatically beyond that. Imagine, you know, the scene in The Matrix when Keanu says, I know kung fu, or when he says, I need to be able to fly this sort of helicopter, and it downloads. That's pretty outlandish, but it's not completely outlandish. That's the kind of thing we can imagine doing if we have high-fidelity electronics inside your brain. Now, we have not cracked the code of that, but the way to crack the code is to get the data on what's happening in millions or hundreds of millions of specific neurons when someone thinks about a certain thing or remembers a certain thing. And then we could talk about, you know, the way your phone or Wikipedia are extensions of your memory, an exo-self, except you still have to have some interface to use them. We could talk about that being internal, in your brain. You just think about something and you know it, something you had never known before. So it's sort of mind-blowing, and it challenges even what it means to be human, but that's the potential.

[00:14:02.011] Kent Bye: And some of the demos that you showed in your presentation showed a kind of cooperative gameplay, with one person watching a video game and sending signals through EEG, almost like a telepathic communication, but through technology, and the other person kind of controlling the game. Maybe you could talk a bit about what you're able to do now in this kind of multiplayer co-op of connecting brains together.

[00:14:24.038] Ramez Naam: Yeah, so this is at the University of Washington, UW. Two professors, Rajesh Rao and Andrea Stocco, have this setup. They're in separate rooms across campus, a mile apart. Rajesh Rao, one professor, can see the screen of the game, but he has no controller, except he has this EEG skullcap that picks up electrical signals from his scalp. And across campus, his friend Andrea Stocco can't see the screen. But he has the fire button for the same player. And he has this magnetic stimulator on his skull. And so when the first guy sees a bad guy, he thinks, shoot. And that's transmitted across campus. And the magnetic stimulator creates a small electrical current through the part of Andrea Stocco's brain that controls his index finger. And his finger twitches, and he hits the fire button, and he fires. And it's like Andrea Stocco's finger has become an extension of the other guy, Rajesh Rao's, brain. And is it completely unconscious? Yeah, it's unconscious. The second guy, the guy that shoots, doesn't know that he's about to shoot. He just observes that his finger has twitched.

[00:15:33.771] Kent Bye: So this brings up all sorts of different questions. When I was at the MIT tech conference, someone asked, OK, we're dealing with hackers and a hostile internet environment. So what does it mean if we have these neural laces in our brain? Does that open up new attack vectors for people to start to control us, and control us in a way that we're not even conscious of?

[00:15:52.263] Ramez Naam: Come on, what could possibly go wrong? Yeah, so a lot of my sci-fi is about some of these implications. What are the backdoors? What if the NSA wants a backdoor or uses zero-days to get access to your brain? Or criminals, or spam in your brain, or the occasional blue screen of death in your brain? So there's a lot, a lot, a lot of issues.

[00:16:16.749] Kent Bye: So yeah, there's plenty of dystopian futures, and I know that you're also painting a very optimistic take. I do want to take a step back and look at the philosophical implications of this, because when I look at what's happening in VR, what I see is this reconnection to the subjectivity of the inner way that we construct our reality, whether it's our perceptual system and the way that we take all the inputs and construct a picture of what's happening. And then on top of that, we have our whole emotional, affective realm, which I think also has a dimension of encoding memories based upon peak emotional experiences. And so you have this whole concept of embodied cognition, which means that we don't just think with our brains and our minds, but we're actually using our entire body to process and understand and make sense of the world and to construct our realities internally within our minds. Given that, it seems like each person is going to have their own little blueprint and architecture of the memories of their lives. And so it seems like a bit of a philosophical challenge to come up with a generalized neural coding that could abstract knowledge and information in a way that could be directly injected into people.

[00:17:23.858] Ramez Naam: Yeah, it is a challenge. You have to calibrate to the specific brain, right? The way that you encode even the concept of VR might be a different neural encoding in your brain than it is in mine. But it probably has a fair degree of similarity. So in this study with rats, for instance, both rats have implants in their hippocampus, the part of the brain involved in memory. And they have one rat run a maze, and the data flows to the other rat, and they're not doing any calibration, they're not doing any translation, they're not doing anything specific to each rat's brain. Same electrode array, same format of the data exactly. The second rat, you put it in the maze, and it runs it as if it had run it before. So there will be things that are unique to each of us, and the technology will have to adapt to each of us. And so I imagine sort of a calibration phase where, you know, you get walked through: think of a mountain, think of a fish, imagine this. And the technology watches and sees what's happening in your brain to learn that intermediary language. But even so, even without that, there is a fair degree of commonality. Not perfect, but a fair degree of commonality in how we encode things.
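The calibration phase Naam describes can be caricatured as learning a per-person translation table: prompt both people with the same concepts, record each brain's responses, and map one encoding to the other. Here is a deliberately toy Python sketch of that idea; the prompt list, the "code" tuples, and the exact-lookup translation are all invented placeholders, not real neural data or a real BCI algorithm:

```python
# Toy sketch of a "calibration phase": both users imagine the same
# prompts, and we learn a lookup from brain A's codes to brain B's.
PROMPTS = ["mountain", "fish", "home"]

# Hypothetical recorded encodings: each brain represents the same
# concept with a different (made-up) code vector.
brain_a = {"mountain": (0.9, 0.1), "fish": (0.2, 0.8), "home": (0.5, 0.5)}
brain_b = {"mountain": (0.1, 0.7), "fish": (0.6, 0.3), "home": (0.4, 0.9)}

def calibrate(codes_a, codes_b, prompts):
    """Build a translation table from A's code for each concept to B's."""
    return {codes_a[p]: codes_b[p] for p in prompts}

def translate(table, code_from_a):
    """Map a code observed in brain A to the equivalent code for brain B."""
    return table[code_from_a]

table = calibrate(brain_a, brain_b, PROMPTS)
# A's code for "fish" translates into B's code for the same concept.
print(translate(table, (0.2, 0.8)))
```

A real system would have to learn a continuous mapping over noisy signals rather than an exact lookup; the point is only that shared prompts give you paired examples of the "intermediary language" to learn from.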

[00:18:29.321] Kent Bye: I see. And for me, this brings up all sorts of transhumanist dreams of being able to capture our consciousness, record our memories, and store them in a certain way. Do you see this also potentially leading towards a technological roadmap of being able to externalize our consciousness into technology?

[00:18:50.332] Ramez Naam: I mean, we have this notion of uploading. Can I take my entire brain and move it to a digital substrate, move it into the cloud, and not just as a backup copy, but actually live in the cloud? Nothing in physics or chemistry or biology tells us that that's impossible. Everything says that philosophically it should be possible, but it's devilishly hard. We don't know what we don't know. Just the other day, we had a new finding about how neurons talk to one another, that says that more parts of the neuron, these structures called dendrites, are much more actively involved in communicating from neuron to neuron than we thought. And that by itself might raise the estimate of how much computing it takes to simulate the brain by a factor of 100. Right? So we just don't know how far away we are from that. I don't think implants in the brain will get us all the way there. Today, there's no technology on the horizon that would allow you to upload your consciousness without your brain being sliced up very, very, very finely. So that's even further out. And I write about that in my sci-fi too. It's super fun. But that's even further out. And I don't want to be the first test subject. Let's just say that.

[00:20:02.699] Kent Bye: Well, I don't either. When it comes down to it, I don't think that I'm going to be first in line to do these neural injections, just because there are so many unknowns. And so, you mentioned this very briefly, but do you see a point at which humanity may split into people who decide to become one with machines and people who take a more naturalistic, holistic, Luddite approach?

[00:20:27.053] Ramez Naam: You know, in sci-fi, we've often had this notion that technology will divide people, and it's often the rich versus the poor, right? We'll have the Morlocks versus the Eloi. Only the rich will be able to afford this super cool new technology, and they will get further and further ahead and leave the poor behind. That's a great story trope. I love it. It's probably not right. If you look at the picture I showed today, it was Gordon Gekko from Wall Street. His cell phone cost $5,000, its charge lasted for half an hour, and it had shitty reception. And that was one of the richest men on earth. And now you have 6 billion cell phones on earth. In Kenya, in Sub-Saharan Africa, you have 85% cell phone penetration. In India, we expect half a billion people to have smartphones by next year. So because technology plunges in price exponentially, it actually becomes ubiquitous, and it empowers the poor even more than the rich. It levels the playing field. So will we diverge? Maybe. Maybe some people will choose not to. But I think that's about as significant or as likely as a divergence between those who are willing to use cell phones and those who aren't.

[00:21:41.303] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and neural augmentation and what it might be able to enable?

[00:21:51.649] Ramez Naam: So I think we're a species that communicates. That's what's special about us. That's why, without any claws or armor or fangs, or even protective fur, and with weak muscles, we thrive. Not just because we're problem solvers, that's nice, but because we can communicate and coordinate. That's what is special about us. From the first stories told around the fire, to the first cave paintings 37,000 years ago, to the first stone tablets, to the printing press and being able to transmit stories and ideas across time and space, to radio, TV, the internet, VR, we want to connect. And I think that connection matters, despite what we see in the world today, despite Donald Trump. In the last decade, marriage equality happened in the U.S. We've legalized pot in a lot of parts of the world. If you ask people, are you willing to sit down with a person of another race and have a meal with them, that's at an all-time high in the United States. And I think that is because more communication breeds more empathy. One of the most moving VR experiences I've ever had was a VR film about Syrian refugees at a Syrian refugee camp. And I feel privileged to have had that. And I can't wait for the day when 7 or 8 billion people all have VR and we can live each other's lives a little bit, see through each other's eyes, and connect more as just human beings.

[00:23:14.345] Kent Bye: Awesome. Well, thank you so much. Thank you. So that was Ramez Naam. He's the author of the Nexus Trilogy, which is a series of sci-fi novels exploring the technological and ethical implications of being able to directly interface with our brains with technology. So I have a number of different takeaways about this interview. First of all, the whole idea of invasive neural interfaces really creeps me out, I gotta be honest. I am not gonna be first in line to inject things into my skull. It's something that is both scary and somewhat disgusting to me, and I have to take a look at that. I think the thing that bothers me is that there's this assumption that what it means to be human is just a neurological encoding of data and information that's stored within the context of our brains and memories. And perhaps that is a set of metaphysical assumptions of materialistic reductionism, and I feel like there's something missing. And I guess it's an open question right now. Right now, nobody really knows what consciousness is. And when Ramez says that there's nothing within physics, chemistry, or biology that would disprove the possibility of being able to upload your consciousness into some sort of internet, that in some ways is technically correct, given the metaphysical assumption that consciousness is emergent from our neurology, that our consciousness is just a mere artifact of all the data being stored in our brains and our bodies and our life experiences, and that it's calling upon that. Now, if the metaphysical assumption is something like panpsychism or idealism, then you get into this other realm where consciousness could actually be fundamental. It could be a whole field that is below physics, or it could be universal, such that it's embedded into every single photon.
Instead of our brain being a repository of all the data, it would be more akin to a TV antenna, such that our experience is kind of flowing through us, and maybe consciousness is some sort of field that transcends the structures of space-time. It would be kind of like trying to upload a TV show by breaking apart your TV set, looking at all the circuitry, and trying to find the correlations between the images that you're seeing on the TV screen and the pixels that are being sent to it through a signal that is transcendent to the TV. The TV is just a vessel for the signal to pass through. And it may turn out that consciousness is like this, that consciousness is a stream of stuff that's coming through us. So that is more of a metaphysical assumption. So I just wanted to put that out there: there may not be something within physics, chemistry, or biology, but there are certainly philosophical arguments around the nature of consciousness. And it is yet to be determined whether or not consciousness is emergent from our neurology. It may actually turn out to be correct. Daniel Dennett and Sam Harris and all the physicalists and determinists may turn out to be 100% correct. And we will be able to take our consciousness, create a copy of our brains, and upload it into a computer. And we'll be able to be immortal. For me, I feel like there's a part of the human experience that involves being embodied in the world, having emotions, being able to express agency, and being able to interact and participate within the context and creation of our experiences.
And it's those experiences, within the context of either literal space-time or some sort of simulated space-time within virtual or augmented reality, such that you need the full context of emotion and body and agency and everything to actually have a degree of consciousness and awareness that is then encoded into the different dimensions of memory, which include a what, a where, and a when. So there's a part of memory that is connected to the context of a place and time in which it happened. And so is it possible to hack into the brain and just put in the data of the what, without that context of the when or the where? Is it possible to have memories outside of the context of a place or a time? As people are trying to figure out how to do these types of neural encodings and inject data into our minds, these are the types of questions that are going to have to be answered. And what would it mean to have a Wikipedia article uploaded into your brain, and to have a collective context under which this is a baseline of knowledge and information that everybody has? And then what happens when that information changes and evolves and grows? Because science is not static. It's continually growing and evolving and changing. So what would it mean to take a snapshot of Wikipedia in 2017, upload it into your brain, and have that be your reality for the rest of your life? I mean, there are so many things that change and grow and evolve. So there's a bit of technical debt: how do you actually update and maintain that information so it matches what is changing, growing, and evolving with what is known at the time? And what about situated knowledges, which is this concept that different people, in different positions of power, are able to interpret information in completely different ways?
And so is it possible to escape the category schemas that come from our direct experiences? That goes back to this idea from Kant, who starts to talk about these fundamental primary archetypes and metaphors in our minds that come from our direct experience, such that we can't have information and knowledge in our brains absent those embodied experiences. Those embodied experiences come from our perceptual intelligence, our manipulative intelligence, our emotional intelligence, our social intelligence, and our mental intelligence, all combined together. So there's this idea that you could just reduce that down to code and bits and bytes, upload it into people's brains, and call it a day. And that idea is just ridiculous to me, because I don't wanna be immortal and have my consciousness on a computer and feel like that would be the same as an embodied experience. There's something about the phenomenological experience of being a human that includes me being born in a certain place and time, living through a culture growing up when I did, and then being here now with the context of all that I've experienced throughout my life. So going back to this idea of situated knowledge, it's this idea that no one person has the claim to what truth is. You actually have to look at a matrix of everybody's position, where they're at in space, but also their position of power and privilege and all their unconscious biases and all their perspectives, each taking a sampling of reality through the lens of their own reality.
And, if anything, there's a certain beauty in us having these different perspectives and ideas, in that we can start to form ideas about what reality is while always continuing to challenge our unconscious assumptions and to come into dialogue and conversation about a co-construction of reality. So I think these conversations actually bring up a lot of really important philosophical and technological questions, because it may actually prove out that consciousness is indeed emergent from our neurology and that we will indeed be able to upload our consciousness onto computers. And that opens a whole other line of moral and ethical questions about what that means and how we actually handle it. My suspicion is that there's something primary to the embodied experience of being human in a specific place and time, and that each of us is unique in our life experiences and has unique gifts to give to the world. And even though I'm skeptical of and don't like this transhumanist concept of uploading consciousness into a computer, I support exploring it, because I think it's actually going to address a lot of these questions about our deeper metaphysical and philosophical assumptions. It may prove to be impossible, and if it is impossible, then maybe that points to a new conceptualization of what the nature of consciousness actually is. Consciousness may prove to be fundamental, or it may prove to be universal, or it could be an emergent property. Each of these is an open question. And I think this line of thinking causes us to think deeply about what is real and what is possible. The other thing is, I think that artificial intelligence and virtual reality technologies are each addressing different dimensions of the human experience.
The more we study artificial intelligence, the more we learn about the nature of consciousness and about what it actually means to have intelligence. And by studying immersive technologies like virtual and augmented reality, we start to learn much more about the nature of human experience. Taking the experiential perspective, one of the lessons of virtual reality is that you can have a virtual reality experience that feels like a full experience, one that you have real memories of. So we know that that is actually possible. Is it going to be possible to take just the mental presence and mental intelligence, bypass all the other dimensions of our agency and embodiment and emotions, and shortcut directly into the brain to transfer learning and knowledge? I don't know. I'm skeptical. I believe that you need a holistic approach, that you have to look at the whole system and define clearly what intelligence is and what it experiences: a broad mix of mental, social, manipulative, perceptual, and emotional intelligence, and many other dimensions of intelligence besides. But all of those dimensions of intelligence are also the same dimensions of presence, where you have mental and social presence, active presence, embodied and environmental presence, as well as emotional presence. And then there's the question of really coming into that space-time dimension of being in a place and a time, and how you actually encode memories in that way. So, like I said, there's a lot that I'm skeptical about with direct neural interfaces, but I actually think they bring up a lot of really fascinating ethical and moral and philosophical questions that are worth exploring.
And so I'm supportive of continuing to look at diseases and seeing how you could use something like cochlear implants, or other technologies, to allow people who are deaf to hear or people who are blind to see. And as we continue pushing the edge of what's possible with the technology, then we'll start to see how far we can take it, and start to learn about some fundamental aspects of our consciousness as well as intelligence and experience. So that's all that I have for today. I just wanted to thank you for taking the time to listen to this podcast. And if you enjoy the Voices of VR podcast and enjoy these types of deep dives into these types of topics, then I encourage you to become a member of the Patreon, since this is a listener-supported podcast. I rely upon your gracious support to be able to continue to bring you this type of coverage. So you can become a member today at patreon.com slash Voices of VR. Thanks for listening.
