Back in 2015, neuroscientist David Eagleman gave a TED talk about the potential to expand and create new senses. He showed off a haptic vest prototype that translates audio input into patterns across an array of 32 vibrating motors, feeding the sound directly into the body. The signals reach the brain and create a neural input that is indistinguishable from what the cochlea would produce, meaning that it’s possible to turn the torso into an ear. The principle of sensory substitution shows that the brain doesn’t care where the data comes from, as long as it’s structured in the right format and correlated to feedback within the environment.
I had a chance to catch up with Eagleman at the Experiential Technology Conference to talk about Neosensory’s VEST (Versatile Extra-Sensory Transducer), the hard problem of consciousness and how reality is constructed in the mind, expanding and creating new senses, invasive neural interfaces to the brain from Kernel, the philosophical implications of simulation theory, and his metaphors for how he understands the relationship between the mind and the body.
LISTEN TO THE VOICES OF VR PODCAST
Eagleman co-founded a company called Neosensory that is creating this vibratory VEST, and they’re reaching out to developers to see what kinds of applications it could have. He’s particularly curious about whether it’s possible for humans to create entirely new senses by feeding the body streams of otherwise imperceptible environmental data, or perhaps even abstracted data from the stock market.
Eagleman says that the brain is not great at handling redundant data, and so people who are deaf learn to use the VEST much faster than someone who can already hear. Learning to understand data from the VEST happens entirely on an unconscious level, which can be objectively verified by measuring consistent improvements across many repetitions. Eagleman hypothesizes that it is possible to create new senses, and that expanding our biological capabilities with technology will expand the range of human sensory experience.
There are a number of philosophical questions about the nature of consciousness and the philosophy of mind, as well as ethical questions about the extent to which this should be used. Just because we can create new senses, should we? What are the emotional and mental health tradeoffs of feeding mentally abstracted data directly into our bodies? There are also a lot of potential benefits: could this be used to feed in emotional and biometric data from other people, so that we can cultivate a deeper sense of empathy and connection with other consenting adults?
Sensory replacement and sensory addition are among the most profound implications of virtual reality technologies, and Eagleman makes the point that the extent of our experiential reality is constrained by our biological limitations. Using this type of technology from Neosensory to expand our range of sensory experiences changes and evolves the dynamic range of human experience.
Here’s Eagleman’s TED talk from March 2015 on the potential to create new senses:
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR podcast. So on today's episode, I have a pretty mind-blowing discussion with David Eagleman talking about sensory substitution and sensory addition. So David's created this vest with Neosensory that is able to basically allow deaf people to start to hear. It translates sound into 32 different vibratory motors, and you're able to turn your torso into an ear. But David is really curious about whether or not it's possible to actually expand our senses and to be able to create new levels of perception. So we'll be talking about adding and expanding sensory perception on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. The Voices of VR podcast started as a passion project, but now it's my livelihood. And so if you're enjoying the content on the Voices of VR podcast, then consider it a service to you and the wider community and send me a tip. Just a couple of dollars a month makes a huge difference, especially if everybody contributes. So donate today at patreon.com slash Voices of VR. So this interview with David happened at the Experiential Technology Conference that was happening from March 14th to 15th in San Francisco, California. So with that, let's go ahead and dive right in.
[00:01:36.670] David Eagleman: So I'm David Eagleman, I'm a neuroscientist, and I do research, but recently I spun a company off from my lab called Neosensory, and what we've built is, for example, this vest that I'm wearing now, which has 32 vibratory motors on it. So it's just like a little buzz in your cell phone, but you imagine you have all these all over your torso. And we can convert any kind of data stream into patterns of vibration that you feel. So what this allows us to do, among other things, is, for example, cure deafness. So people who are deaf, we capture sound from the environment, and we turn it into these patterns of vibration. And they can come to understand the spoken world that way. It's kind of like if you think about a blind person using Braille, passing their finger over these little bumps, and they laugh and they cry as they're reading some novel. It's the same thing. The idea is that you're sending the information to the brain, but via an unusual channel. In this case, you're sending the information through the torso and up the spinal cord to the brain. And so what has been very interesting to me is not just using that kind of sensory substitution, but using what's called sensory addition, which is, can we now feed in other kinds of data streams, like stock market data, or weather data, or Twitter data, or 10 other projects that we're doing right now? And one of the things that we're doing is enhancing the sensory experience of VR. So the idea is that, you know, many VR games have awesome, incredible visual and auditory input, but there's no feeling to it. And it really makes a difference when you feel patterns all over your body. So that's what I'm doing with Neosensory.
[00:03:15.565] Kent Bye: Yeah, maybe you could talk a bit about the research that really catalyzed this. What was the seed that made you sort of say that this is such a compelling idea to spin it off into this company? What was some of the early findings that you found?
[00:03:27.296] David Eagleman: Well, I've been very interested in this idea of sensory substitution, which is, can you feed information to the brain via a channel that it's not used to? And the general story is that the brain doesn't know and it doesn't care where it gets the data from. It just cares about the structure of the data and whether that is related meaningfully to something it can do in the world. And so, let's call this in the realm of hypothesis, but I have become more and more convinced that this is the case. Actually, my next book is on that topic. And so what that's led me to think about is things like eyes and ears and nose and mouth and so on, things that we really think are fundamental. I now at this point think that they're just peripheral plug-and-play devices that have evolved from this complicated road of evolution that we've come down. And there's nothing particularly special about them. They're just ways of transferring data, like electromagnetic radiation or air compression waves or whatever, into something that we can use. But when you look across the animal kingdom, you see lots and lots of peripheral detectors. And even the eye has been invented 14 times in evolution, in different ways, different kinds of eyes. So anyway, what this suggests is that the brain is a general-purpose compute device, and it is simply taking whatever data it can get and figuring out how it can be most useful. And that's what got me interested in this question of, could I build a vest and cure deafness? That's, I guess, what got me started down that road. And then the reason I spun it off as a company is that I submitted some grants on this to the NIH and the NSF, and both those grants got rejected on the grounds that it was not incremental enough, meaning granting agencies like things that go one little step at a time. But what happened is that right around the time I got those rejection letters, I was invited to go to TED and give a talk. I gave an 18-minute talk, and then every VC that I'd ever cared about in the Valley said, we want to fund this. And so it was a real eye-opener for me as a scientist. I'd spent my whole life, you know, in a lab writing grants. And I just felt like, wow, there's this other end of the spectrum where you can get funded for things big time to make something that's really a leap. So, you know, by that point, we had plenty of data, we'd been testing this on people who are deaf, and, you know, this obviously totally works. And we have about 10 other projects that we're doing with this right now.
[00:05:53.376] Kent Bye: Right. So I look at audio waves all the time; I do a podcast. How are you translating speech into some sort of pattern in your body? And as you are wearing this around, what has been your direct experience of what is happening to your brain?
[00:06:09.479] David Eagleman: Yeah, so the very simplest algorithm that we do is we capture the sound on a cell phone, and we bust it up into different frequency bins. So this bin here is really low frequencies, and this next bin is slightly higher frequencies, and so on. And each motor just represents the amount of energy in that bin. So that happens to be exactly what your cochlea is doing. Cochlea is this part in your inner ear that's taking the sound waves, and different frequencies get sent to the brain along different lines. And so we're doing exactly that. We're turning the torso into a cochlea. And that's how it works. So my experience, whenever I zip up the vest, for the first second, I'm like, ooh, wow, there's this really interesting vibratory pattern. But it almost immediately goes into the background. I don't even notice it's there after just a moment. Just as a quick side note, the brain is not very good at learning things that are redundant. So because my ears work fine, I'm a very slow learner of the patterns of the vest. But if you are deaf, then you're not getting that information, then people learn very quickly. So we test people day one, day two, day three, and their performance just goes up and up each day.
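To make the frequency-binning mapping Eagleman describes a bit more concrete, here is a minimal sketch in Python of how a sound-to-vibration pipeline like this could work. It assumes a 32-motor array, a 16 kHz microphone stream, and log-spaced frequency bins roughly covering the speech range; the bin edges, normalization, and 0-255 motor output are illustrative assumptions, not Neosensory's actual implementation.

```python
import numpy as np

def sound_frame_to_motor_levels(frame, sample_rate=16000, num_motors=32):
    """Map one short audio frame to vibration intensities for 32 motors.

    A rough sketch of the cochlea-like mapping described above: split the
    signal into frequency bins from low to high and drive each motor with
    the energy in its bin. Bin spacing, normalization, and the 0-255 output
    range are illustrative assumptions, not the actual product algorithm.
    """
    # Windowed magnitude spectrum of the frame (e.g. ~50 ms of audio)
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    # Log-spaced bin edges roughly covering the range of speech
    edges = np.logspace(np.log10(100.0), np.log10(8000.0), num_motors + 1)

    levels = np.zeros(num_motors)
    for i in range(num_motors):
        in_bin = (freqs >= edges[i]) & (freqs < edges[i + 1])
        levels[i] = spectrum[in_bin].sum()  # energy assigned to motor i

    # Log-compress and normalize so quieter sounds still register
    levels = np.log1p(levels)
    if levels.max() > 0:
        levels = levels / levels.max()
    return (levels * 255).astype(np.uint8)  # one drive level per motor


# Example: a 50 ms frame containing a 440 Hz tone mostly activates
# the motors assigned to the low-frequency bins.
t = np.arange(0, 0.05, 1.0 / 16000)
print(sound_frame_to_motor_levels(np.sin(2 * np.pi * 440 * t)))
```

Each value in the returned array would then set one motor's vibration intensity for that frame, so a continuous stream of frames produces the moving patterns of vibration described in the interview.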
[00:07:12.517] Kent Bye: Yeah, well the thing I find really fascinating about this is that the long technological trajectory of VR is to hack all the senses in different ways. So we're putting these headsets on our eyes, we have specialized audio, we're trying to do different things with haptics. But what you're suggesting is that you could start to perhaps just completely replace certain senses. However, because you already have those senses, then what do you do with that? Can you cultivate new senses? Or how do you actually see this really playing out for people who already have all their senses? What can you even do with unlocking this new pathway into the brain?
[00:07:49.513] David Eagleman: So this is really my goal: to create new senses. And I have a hypothesis that, let me phrase this as a question, which is, why is it that vision feels totally different to you than hearing, than smell, than taste? Like you would never confuse one with the other. You would never see something and think, oh, I just smelled something. The reason I ask that question is because when you look in the brain, it's all the same stuff. If I poke an electrode into a neuron, I'm hearing pop, pop, pop, pop, pop. But I can't tell you if it's a visual, or an auditory, or somatosensory neuron, or what. So I've been thinking about this for years. Why are our qualia, in other words, our internal experiences of these things, so totally different, given that vision and hearing and touch are all made out of the same stuff? And I think now that it's about the structure of the data. So vision, you have two two-dimensional sheets of the eyes, and you're feeding information that way. Hearing is a one-dimensional signal, and so on. Touch is very high dimensional. Anyway, details aside, these all have very different structures to them, and I think that's why they feel like a different experience. The reason I bring this up is because if I now give you a completely new sense, feed in some new structure, like, let's say, stock market data, will you come to have a new qualia? In other words, will it feel like, OK, well, that's not hearing or vision or touch or taste, but it's this other thing that, of course, we wouldn't have a word for in our language because no one's ever experienced it. And language is a shared communication thing. So you'd have to make up a word, but I wouldn't even understand what you mean by that. In the same way that if you try to explain vision to a blind person, that person will never get what you mean, no matter how good an explainer you are, because they've just never experienced it. And that's what it'll be like if we can create new senses.
[00:09:30.292] Kent Bye: Yeah, and I've seen this graph of a pyramid where at the top you have language and abstraction, as you go down you get into photos and audio, and at the very bottom you have direct experience. And so there's a lot of our direct experience that is happening at an unconscious level, beyond, you know, the language processing and all that stuff. So the thing that I hear in what you're talking about is essentially: can you do inputs at that direct sensory level, beyond any language and higher-level metaphoric abstractions, and be able to discern through your unconscious processing some sort of qualitative or subjective insight into the world?
[00:10:11.855] David Eagleman: Yeah, I mean, that's exactly it. It's that your brain is used to getting these streams of data, and it just figures out how it's correlated with other senses and with what it can do. Like, can I reach out and change that or whatever? So as you feed in completely new data streams, first of all, yes, the learning is totally unconscious. We know that. Because just as an example, I mentioned day one, day two, day three, we measure people's performance. It goes up and up and up. But that is the signature of unconscious learning, as opposed to something that you learn consciously, where you have a sudden leap, this aha moment, this eureka moment where you say, oh, I get what's going on. People never have that experience with the vest. Instead, they just get better and better at it, because the patterns are way too complicated to ever put into words or have a conscious understanding of. And exactly as you flagged, the conscious mind is the smallest bit of what's going on in your brain anyway. I mean, almost entirely all the hard work is being taken care of unconsciously at a level that you can't see and you don't have any acquaintance with, really.
[00:11:10.273] Kent Bye: I'm really curious to hear your thoughts about this trajectory of invasive neural interfaces. I know there's Kernel, for example, where Bryan Johnson is trying to figure out if there is a way to embed chips into the brain and have a direct in and out of the brain, so essentially finding this neural code. I mean, you've been talking about data architectures and whether you can start to mimic them, and as I hear that I'm thinking, okay, on the long scale of 50 years from now we may figure out a way to do neural laces or other invasive neural interfaces, and are there going to be ways to crack that neural code to directly interface with the brain? And I feel like that's in some contradiction to what I'm seeing in the embodied cognition strand, of being able to use your entire body, and maybe the emotional, affective parts of your being, as you're taking in your full sensory experience with your entire body. It's all kind of being meshed together. So I have questions as to whether or not it's even going to be possible to crack that neural code and do a direct neural injection.
[00:12:11.788] David Eagleman: So I would say the truth is probably in between. So as far as something like Kernel goes, I'm a scientific advisor for Kernel, by the way. And so this is an area I've been following very closely for a while, this issue of how are we going to crack the neural code? How can we actually tell what's going on in 80 billion neurons all at once? They're all firing off. I mean, the wrong way to think about neuroscience, of course, is to say, oh, well, the hippocampus does this, and then it talks to the cortex that does that, and blah, blah, blah. The reason that's wrong is because everything's happening at once, all the neurotransmitters are going off at once, everything is moving in this way that I just feel like we're such idiots in neuroscience right now in terms of having any understanding of what is actually going on. And so the way we do it, of course, is we break things down into little tiny pieces of problems, and we say, ah, we have a solution to this little piece of the problem, but we're missing the big picture. So what a company like Kernel is doing, which I think is really terrific, is asking, how can we measure from lots of things at once? And I can tell you chips in the brain is not the way to go. And Bryan is pursuing some things in that direction just temporarily, but that's not any of our vision for where this actually goes. I probably can't say a lot more than that, except to say that using genetics or using nanorobotics is a better way to get a report from every single neuron, or at least some adequate fraction, so that you can say, ah, that is the code of what's going on. Now back to your question. The issue is, does the rest of the body have anything to do with it? The answer is a little bit, but not a lot. It's like a city where you've got the greater metropolitan area, that's the brain. And then you've got the suburbs, and that's the rest of the body. There's some information coming from that, but I don't think that's the important information. Why do I think that? It's because you can injure parts of your body. You can get parts of your body removed. You can get your gallbladder removed, your appendix removed, all kinds of things, an artificial heart. It doesn't really change you. But you damage the smallest part of your brain, and you are completely changed. It might affect your language, your ability to see color, your ability to recognize music, to name fuzzy animals, to do a hundred other things that we see in the clinic every day when people get even very small damage to their brain. And that's how we know that the brain is the densest representation of who you are.
[00:14:29.158] Kent Bye: And so in terms of the field of study of embodied cognition, how do you see that fitting in? Because I've been hearing a lot of excitement about the idea that we don't just think in our brain, but we think with our entire body, and also potentially we have situated knowledges where our environment could also be impacting the way that we think. So I'm just curious, from a neuroscience perspective, how do you see embodied cognition fitting into that?
[00:14:50.703] David Eagleman: If you want to understand a city, a metropolitan area, you might need to understand a little something about what's happening in the suburbs. It's just, it's not the most important part. If you look at the banking and the groceries and the this and the that and the gas and all that, really you're getting most of your juice out of the main part of the city. And it is the case that, you know, the city is embodied in a wider state. So that's why I say the truth is somewhere in between, which is it's hard to ignore the suburbs, but I don't think they're the most important part.
[00:15:19.337] Kent Bye: Great. And so for you, what are some of the biggest open questions that are driving your research forward?
[00:15:25.426] David Eagleman: I mean, the thing I've always been interested in from the beginning is, how does the brain construct reality? Because it's so weird that your brain is locked in silence and darkness in the vault of your skull, and all that it ever experiences are electrochemical signals that, as I said, look all the same. Every neuron's just doing this stuff. And yet you have this impression that we're standing here in this hall, and there's colors all around you, and you're seeing stuff, and you're hearing stuff, and you're smelling and tasting, and all of that is a construction of the brain. I mean, obviously there's no smell and taste in the world. These are qualities that your brain assigns to things, and it assigns them in a way that's evolutionarily important. This is why fecal matter smells bad to you, but, you know, a fresh apple smells good. It's just because that is what is useful to you. It's just molecules of particular shapes that bond to receptors in your nose. So anyway, that's always been my main interest: how does the brain construct reality? And the way I got into sensory substitution is, you know, how could we do it differently if we didn't accept exactly the machinery that we were given, but instead tried to feed something in a different way?
[00:16:33.137] Kent Bye: Just on that line of thinking, have you thought about simulation theory, and whether or not we're living in a simulation if this is all constructed? I see Google is working on quantum computing, and they're talking about creating a 49-qubit quantum computer by the end of the year, which could potentially be simulating every single atom state in the universe. And so you start to look at the future trajectory of virtual reality as well as quantum computing, and the technological roadmap suggests that if we're not already living in a simulation, we are going to create a simulation that other entities are going to be living in, and they're going to be asking that same question of whether or not they live in a simulation. So I'm just curious if you have any thoughts on simulation theory, and, looking at perception, how you resolve what reality is.
[00:17:17.077] David Eagleman: Yeah. I mean, this is a very old question. And Descartes, for example, addressed this when he said, how do I know that I'm not a brain in a vat, and there are scientists who are poking and prodding and probing me such that I think that I'm listening to your voice and I'm doing whatever I'm doing? And, you know, he concluded that there's no way to know. And that answer has not changed in modern philosophy. There's no way to know. I mean, when we sit around with some beers or something, we talk about, you know, could you run around in a circle and burn out the transistor of the simulation so that you could... But really we're stuck here, it seems. What's interesting to me is, you know, you and I are here at this X-Tech Expo and there's all these great VR demos going on, and jeez, they're so compelling, because we are so used to believing whatever comes in with our senses. So for example, there's this one where you go up this 500-foot building on this pulley thing, and then you're asked to step off the pulley. And when I heard about it, I thought, it's going to be easy, I'm going to do that, piece of cake. But it was unbelievably hard to do. And it's because we've never in our entire evolutionary history up to this moment had to deal with the situation of, oh, all of your senses are very clearly telling you this, but there's this other dimension that you're actually in that's not this dimension. So it's this completely new thing that the brain has to try to deal with, to try to figure out, like, oh, it looks like I'm going to fall 500 feet, but I'm actually not, in this other parallel universe that my body is in.
[00:18:54.642] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:19:03.092] David Eagleman: My interest lately is in, I'm doing a lot of work in my laboratory about empathy, issues of empathy, and essentially, just to summarize this in one sentence, is that we care about people who are like us in some way, and if somebody is on the other team or looks different or whatever, we care less about them in terms of our brain's response. So for example, we have an experiment where you're in a scanner and you see a bunch of hands, and one of the hands gets selected at random by the computer and stabbed with a syringe needle. and you have areas in your brain that light up because even though it's not your hand getting stabbed, you are having a pain response. That is empathy. You're standing in the other person's shoes and feeling what that would feel like. But if I put a one-word label on the hand, so I label all the hands, Christian, Jewish, Muslim, Atheist, Hindu, Scientologist, now you see, do-do-do-do, the hand gets picked and stabbed. Depending on your in-group and your out-group, your brain just doesn't care as much if it's an out-group. My interest in VR is to think about, can I experience what it is like to be someone else? And would that change something about our empathic response? In other words, let's just say that you and I were having a meeting in VR space, but I looked at myself in a mirror and I'm a black woman and you are a Pakistani boy or something. And that's just how we have the meeting with each other or whatever. And let's say every time you went to have a virtual meeting, you were just a different gender and from someplace in the world and you would see yourself that way. I just wonder if that would make us more worldly. So that's my interest lately.
[00:20:38.944] Kent Bye: Awesome. Well, thank you so much.
[00:20:40.245] David Eagleman: Great. Thank you.
[00:20:41.866] Kent Bye: So that was David Eagleman. He's a neuroscientist as well as one of the founders of Neosensory. So I have a number of different takeaways from this interview. First of all, I have been thinking about this conversation a lot since I conducted it. Just the idea that you could feed data into your body, and as long as it's correlated to something that you're observing or perceiving, or you're getting some sort of feedback loop, you're able to slowly learn what this information is trying to tell you. So you could start to put new input streams into your body and essentially expand your sense of reality, and all of this is essentially happening on an unconscious level. You're not able to create any sort of metaphoric descriptions; there aren't even words to really describe it. So we're talking about an embodied sense of presence that transcends language. And this haptic vest is showing that if you can start to plug directly into your body and feed it data, then the brain can eventually figure it out, as long as it's able to correlate it to whatever you're perceiving in your direct experience through your other senses. So David did this TED talk where he talks about the umwelt, which is basically our sensory experience of our surrounding world, and he makes the point that our experience of reality is constrained by our biology. Whatever sensory input we're receiving from the world is how we're constructing our reality. In that TED talk, he talks about these experiments where they put sensors on blind people's tongues: the device records the surrounding environment and sends it through the tongue, and people are able to take all that data and start to essentially see and navigate physical spaces. This sensory substitution research started back in 1969, and it's been able to allow people to see through their tongue or hear through their torso. So the brain is very plastic. And David starts to think of the senses as kind of like plug-and-play peripherals, where it doesn't really matter where the data is coming from, as long as it's getting to the brain in a structured format that the brain can actually unpack and make sense of. So I think this interview brings up all sorts of really interesting questions about what is even possible to put into your body and have the brain figure out. And then the follow-on question is, just because you can put all this data into your body and have the brain figure it out, should we be doing it? What are the ethical implications of it? In a world and a context in which we're already somewhat dissociated from the natural world and have this series of ecological and ethical crises, should we be focusing on using the senses that we already have to be connected, emotionally grounded human beings in relationship to each other? Or are we going to start to put stock market data into our bodies so that we can have further levels of mental abstraction being processed through our bodies and our brains? And is that going to further this dissociative trajectory of becoming more and more disconnected from our natural world and from each other, if we're putting further levels of mental abstraction into our bodies? To me, I think that's a little bit of the wrong direction.
But I'm much more interested in putting data into our bodies that's going to make us more physically grounded to the Earth, more connected to the Earth, or more emotionally connected to each other. What would it be like to be able to take someone else's biometric data and feed it directly into your body so you can actually build more empathy with them? Now, in a lot of ways, this is the trajectory of where this is going, but I think that as we start to talk about sensory substitution and sensory addition, you can get this dissociative transhumanist dream about what would be possible if you could feed all sorts of abstractions into your body. I just wanted to call out that there's a bit of a lack of an ethical framework drawing a boundary as to what we should and should not be doing here. The other thing that has really stuck with me from this interview is the metaphor David uses to describe what's happening in the brain and the body. The way he described it is that the brain is kind of like the city and the body is just kind of a suburb. I actually want to challenge that metaphor, because I think we don't actually know. It could potentially be reversed. And here are a couple of reasons why I think that. First of all, David says that the conscious mind is the smallest part of the brain, and that almost all the hard work is happening on an unconscious level. And I think there is a huge amount of unconscious processing happening through our emotions as well as our body, and that this is a huge data stream that could potentially be used to describe our qualia, the quality of our experience, because that quality is coming from our body and all of our sensory systems. David even said that, you know, neuroscientists are such idiots in terms of understanding what's actually happening on a holistic level, and that the best way they can really approach this is to break things into little pieces and find solutions to those problems, but they're really missing the big picture. And I think this is the hard problem of consciousness: there's no real scientific model to describe how our reality is being constructed in our brains. From the brain's perspective, it's just a lot of neurochemicals firing, but there's no real way that neuroscientists, or really anybody, can at this point describe qualia and how they're really formed. But I would go out on a limb and say that I think it actually has more to do with our body and our emotions than it does with the brain. Really, it's a holistic system, but in David's metaphor, the brain is the city and the body is merely a suburb. I would say that it's possible they're on equal footing, or maybe it's even reversed, such that the brain is just the tip of the iceberg and the whole body is actually where most of the real action is happening.
And so the thing that I think is interesting is that as we start to do sensory substitution experiments with people who don't have the primary senses, and also start to experiment with expanding our senses into things that no human has had the capability of sensing before, feeding data through the body until we eventually figure it out, then what are the philosophical implications? Is that going to give us some insights into the philosophy of mind and how our reality is constructed? And as we expand our levels of experience beyond what used to be limited by the constraints of our biology, as we do this man and machine fusion and expand the sense of what experience is even possible, then what does that mean for where we go and how we evolve? Now, the other thing that I just wanted to bring in here is this reductionistic mindset that I think neuroscientists still have to operate in, which is to break things into their component parts and to do science on what can be controlled with inputs and outputs. And that is the kind of mindset that is leading to Kernel and some of their potential roadmap of having these direct and potentially invasive neural interfaces into the brain. If we're trying to write code to inject data into our brain without going through our bodies, then what does that mean in terms of where we're going and how we're evolving? To me, these invasive neural interfaces kind of remind me of genetically modified organisms and the potential downfalls of messing with the evolutionary code of nature, such that there are unintended consequences: if you tweak something, then a small change that you make in the genetics of a crop could actually wipe out bees, for example, or start to change the overall ecosystem. So it's not that you're just impacting this crop and improving it; there are potential side effects that impact the overall ecosystem. And I think of that as a metaphor: as you start to do these direct, invasive neural interfaces with the brain, what does it mean that you're bypassing the body's normal mechanisms of how it gets data through the body and the emotions? And would you be able to start to encode memories just by writing code, or does it need to really go through your direct experience? So to me, it's an open question whether this neural code and direct invasive neural interfaces are going to have any legs, or whether the better path is something like these virtual reality technologies or this Neosensory vest, where you're able to use the existing processes of the body and our existing perceptual system, along with all the latest virtual reality and augmented reality technologies, to kind of hijack the senses and trick the brain. The specific example that David mentions at the Experiential Technology Conference was a collaboration between 2-Bit Circus and Transcendent Studios called The Ledge, where you're on a motion platform, you go up 500 feet, and you're expected to step off of this window washer and kind of plunge down 500 feet. Now, I looked at it and I was like, okay, that's not going to be a problem, I've done these types of things before. You know, in Superhot, they have a scene like that.
But when they added a motion platform that was kind of moving back and forth, it was a lot harder for me to step over that edge, because everything that was happening in my body was telling me, you know what, this is a really bad idea: you should not step over this edge and jump off the side of this building. I basically had to break presence in order to do it, and I was able to, but it sort of ruined the illusion. This is what David is saying: with these virtual reality technologies, for the first time our body is having to deal with a situation where we have a direct sensory experience but also this parallel reality, the actual real world, and we have to balance those two. These dual realities are happening at the same time. So how is our brain going to evolve to handle the direct sensory experience of something that's happening, while keeping it cordoned off into some sort of incepted reality, where we recognize that we're still present in the real world? I think most people in VR kind of already have this dual awareness. But the trajectory of VR, and I guess the goal of VR, is to cultivate that sense of presence: that sense of emotional presence, that sense of embodied presence, the sense of social and mental presence, as well as the sense of active presence. You have all these different dimensions of presence that make you feel like you're actually grounded in a virtual experience. And in The Ledge experience, all these different dimensions of presence were so highly activated that it did actually make it really difficult to step over the edge and jump off the side of the building. And I think that is a direct experience of the importance of the body when it comes to our overall construction of reality and really feeling deeply present, like the quality of the experience is that it's real and that it's happening. And that is the goal of VR. So I just really wanted to push back a little bit on David's metaphor that the body and emotions are merely a suburb, a little bit important, but not that much compared to what's happening in the brain. And to his point that if you have damage to the brain, that's more significant than damage to other parts of the body, that's true and correct. But when you're talking about direct experience, I think the body has much more to say about the construction of reality than we necessarily even understand at this point. I suspect that the body and emotions are going to be a huge part of really understanding the nature of consciousness as we move into this post-Cartesian mindset, moving beyond just breaking things into individual pieces and starting to move into this experiential design that we can do within virtual reality, so that we can have a holistic experience, maybe start to tweak different variables, and really understand what's happening in this process of constructing reality within our minds and understanding the nature of consciousness. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and become a donor. Just a few dollars a month makes a huge difference. So you can donate today at patreon.com slash Voices of VR. Thanks for listening.