Sophia Batchelor is a recent neuroscience graduate from UC Berkeley who focused on immersive technologies and neuroethics. We talk about the power of XR for creating new memories, the moral implications of experiential design when XR can be so salient, the bioethics implications of XR, including the types of sensitive information that can be extrapolated from biometric data, and the need for XR R&D teams to have more ethicists on staff. We also talk about the emerging field of brain-computer interfaces, and the privacy implications of being able to read someone's thoughts. Batchelor talks about some of the privacy architecture decisions of BCI start-up Neurosity, where she announced after AWE that she has taken a job as the neuroscientist in residence.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So carrying on in my series on XR ethics and privacy, today's episode features Sophia Batchelor, who's a recent graduate coming out of Berkeley. She had an undergrad in psychology and then went on to do interdisciplinary studies within computer science, neuroscience, and bioethics. She had just graduated, and I think she actually went to go work with Neurosity, which is a brain-computer interface company that she mentions in passing in this interview. So Sophia's at this cross-section of neuroscience and ethics, looking at the different ethical implications of immersive technologies. And so she had reached out and wanted to talk about some of these ethical issues, especially as I was getting prepared for my talk at Augmented World Expo, trying to map out all of the different ethical and moral dilemmas that I had been gathering from conversations like this one. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Sophia happened on Wednesday, May 29th, 2019 at the Augmented World Expo in Santa Clara, California. So, with that, let's go ahead and dive right in.
[00:01:57.021] Kent Bye: Awesome. Well, you are like the perfect person that I want to talk to right now, because I'm about to give a talk here in like 48 hours about the ethical and moral dilemmas of mixed reality, and specifically trying to, as comprehensively as I can, map out all the different moral dilemmas that I have seen emerging within this new revolution of spatial computing. So from a neuroscience perspective, you're able to get down to the very granular, to-the-metal parts of our perception and what you can do to be able to capture our inattentive attention, to be able to direct attention, and really get all sorts of really intimate information about us by observing these different biometric markers. And so I'm just curious to hear, first of all, whether you were able to come to any sort of conclusion or framework.
[00:02:46.815] Sophia Batchelor: So to go through kind of the different studies I've done, one is I tracked how we actually acquire memories in virtual reality. Enterprise is using it a lot, we talk about learning and education, it's fantastic. But I didn't see any numbers there, and, you know, there is just a lack of really good controlled research. So I was like, right, at what rate do we acquire memories? And I focused on skill memories, which are implicit, and that's procedural skills. So the difference in memory, to kind of zoom out a little bit, is that you're either aware of some memories, those are declarative, and implicit memories are the ones you're unaware of. So they're how you sign your name, they're how you drive, they're things that are beyond our conscious recollection. They're skills, like learning how to ride a bike. And what I found was that VR was better. Full stop.
[00:03:30.819] Kent Bye: And better on all dimensions of memory. So anything you want to learn and have memory about VR can make it better.
[00:03:36.777] Sophia Batchelor: Potentially, yeah. And that's what I'm starting to unpack, so I'm trying to do more research into that. So, by better, what I mean is that we acquired implicit memories, again, these are the ones we're unaware of, faster. They were better in terms of the quality of memory, better in terms of the metrics that I used. So, for one, I tracked, you know, how well you could do a ball maze game where you, like, tilt a maze and move the ball around, and people who learnt in VR had fewer errors, so they performed better, they learnt the task better. And then I also tracked at a four-week follow-up, so how salient was that over time? And the people who trained in VR were better at the task at follow-up, and consistently. I also had people in every group that were habituated to VR, so those were our Beat Saber players or our drone pilots, so they'd been spending at least 10 hours in VR every week for 15 months. And they had the same results. So there is something so uniquely different about VR that just made this type of learning better.
[00:04:40.852] Kent Bye: That's exciting. Well, I think a lot of people have been finding that, either anecdotally or finding research combined with this embodied cognition. So VR is able to activate all those things in a way that just transcends any sort of 2D frame or any other way of learning something, to actually be there and to create what I presume is pretty much the equivalent of a memory of actually doing it in some ways. Like the brain may have difficulty discerning the difference. That's at least how I think of it, I don't know, from a neuroscience perspective.
[00:05:11.253] Sophia Batchelor: So you raise a really interesting point there in the sense that VR is superior in performing against 2D screens, but all of my research actually compared VR to what I call the natural world, which I stole from David Attenborough. So you, me, standing here right now at AWE, we're in the natural world. We're not rendered, we're not virtual. And so when I say that VR was better, it was that VR was actually better than natural.
[00:05:34.691] Kent Bye: So better than reality.
[00:05:36.032] Sophia Batchelor: Better than reality. What do you make of that? It means that, kind of bringing that back to your point, we have a moral imperative, huge ethical concerns. Anecdotally we hear about people playing horror games, they're like, oh you know, I was scared to go in my bathroom with the lights off like a month afterwards. A lot of my research, when I started to delve into that, looked at PTSD and the neural mechanisms that support PTSD, both in returning veterans and also in longitudinal populations. And this is a phrase that is getting thrown around a lot at this conference, and I do get frustrated with it, but we do need to be thinking very carefully about what we are building in these virtual reality environments, especially if they are creating memories that are stronger than actual reality.
[00:06:21.669] Kent Bye: So memories that are more salient than things happening in real life. I think in some ways what comes to mind is that there's the spectrum between what you expect and the unexpected, so like order and chaos. You can have some sort of prediction about what's going to happen, and then that ends up being kind of boring. If it's everything that you expect, you only want to do that for so long. And if it's something completely chaotic, then there's like no order to it at all, so you almost can't make any sense of it. But having that sweet spot of novelty, I think there's something about having something that is just within the realm of what we expect, but if it's like slightly different, that's the sort of enjoyable part of it. And it's almost like teasing our brain and giving us this dopamine hit whenever we have these differences from what we expect, then that means it's growing and expanding. But I guess the risk is that people get super addicted to that, that they want to spend all of their time in VR rather than in reality.
[00:07:20.211] Sophia Batchelor: I think we have to kind of unpack two different things from there. One is the cognitive experience that we have and the other is the neurological experience that we have. So the neurological experience is that dopaminergic release of like, oh this was fun, like oh I got a notification, oh it's from a friend, like oh I just unlocked a new level, and that's where gamification through VR has shown so much potential, with pain management, stroke rehabilitation, kind of that gamification layer. But on the other side of things is that cognitive level. So when we learn, you know, there's cognitive dissonance, when you experience something that doesn't match your expectations, and then your schema, how you represent and how you think of something, changes and adapts. So that's the idea of, you know, young kids, when they learn that things that move are alive. Well, the sun kind of moves, does that mean the sun's alive? And no, so then they add that information to their schema, which is kind of a low-level way of explaining that same concept. And so, cognitively, if we're constantly provided with new information that shifts our schema, we do understand, interpret, and create these new models of understanding the world, which is why, when everyone talks about, oh, this interface is intuitive, or fluid, or natural, it's that we can actually learn. It's like, you know, we learn how to adapt to phones, we learn how to adapt to this, and so it's just a new way of interaction. But again, coming back to what is actually going on in the brain, that is something that is very, very poorly understood, and we're really trying to unpack it, because the way the brain fires is this all-or-nothing release, and it's called the action potential. So it's one neuron speaking to another neuron, and it's all-or-nothing. So the neuron's either on or it's off.
And what's not often understood or what is quite often misrepresented or miscommunicated is that all of your brain is always on. Otherwise, your brain's not working. And so when we're asleep, your brain is on. When we're awake, your brain is on. When you're meditating, your brain is on. And so when we talk about brain activity, it's just slightly more on in this one specific thing. But then brain areas can kind of repurpose and are in charge of multiple different tasks. So I can't look at a brain and point to one neuron and be like, that's your fifth birthday party. But I can kind of do that approximation looking at the activity, is that when you think of your fifth birthday party, there'll be a collection of neurons that suddenly want a lot more oxygen, a lot more glucose, and so I can look at that and be like, hey, this is where the memory for this specific thing is stored. So I can do that and also look at when we build these environments, what is activated? You know, when we're experiencing something in VR, and that comes back to your experience of, you know, it felt so real. What is real? What is an experience that is real? And then how real is virtual reality? And what I'm finding through the research is that it's more real.
[00:10:11.450] Kent Bye: Well, I think it brings up all these interesting philosophy of mind questions, which is, is consciousness an epiphenomenon of the brain, or is it some sort of fundamental field? Do you get into sort of the philosophical layer of down to the roots of the nature of reality? Because when we're talking about reality and what's more real, it kind of begs the question as to, well, if consciousness is sort of the primary experience, then to me, there's a lot of metaphysical open questions as to consciousness and its origin. I'm just curious as someone who's a neuroscientist looking at this, what you think of it.
[00:10:43.101] Sophia Batchelor: Oh gosh, this is something that I don't want to say keeps me up at night, because it more wakes me up in the morning, because I get excited about reading more things. For me, I like to approach everything through David Marr's kind of like three levels of analysis. You kind of have an algorithmic level, you've got a physical, and then you've kind of got a systems level. And we have a real, what's called a mapping problem within the brain, is that we can't, again, look at a firing of neurons and be like, that's consciousness. Or we can't even look at it and say, you know, that's your fifth birthday party, that is happiness, that is sadness, because we just don't understand it. So the way I try to think of it is, if our experience is a river with water running through it, the eddies that are made, you know, that kind of recursion acting on itself, that might be where consciousness arises from. But I can't say, you know, it's an open question I would love to spend my career on, but I'm doing neuroscience of VR right now. So I'll get to it eventually. And what I can say is that everything we experience influences our brains. So if I raise my arm up and down, up and down, up and down, the neurons that are in charge of that mechanical process, as well as the memory of me, like cognitive memory of me raising my arm up and down, that part of my brain is going to grow. It's going to change. So we can look at a novice pianist's brain, someone who's been learning for three years, and a master pianist's brain, and the parts of their brain that control fine motor movements, so the movement of their fingers, will look very, very different. The master pianist will have a lot more grey matter, a lot more spines, a lot more dendrites and axons, which are little units within the brain that represent, you know, like listening to music, and also maybe hearing music and playing the keys.
And then his brain or her brain will also look very different to a violinist's brain who, similar thing, listening to music, but it's a different type of thing. So everything we do and feel and see and be actually changes your brain. Only one third of our neural architecture is actually shared between people. And so it's like, where is consciousness? It might be different in my body and brain than your body and brain. And that's kind of one of the hard things about it, but also one of the wonderful questions that I get so excited about. And so if we think about that and we think about experience, and it's really hard to, because nomenclature within neuroscience is almost as hard as it is within XR. And so when we talk about experience, technically everything I am experiencing right now standing here at this conference is a memory. It can be either a sensory memory or an attentional memory, then working memory, then short-term memory, then maybe long-term memory if it's important enough that my brain decides it's going to get all the way to long-term memory. And so at each of those levels there are kind of filters in place depending on how that information is getting all the way in. So where is consciousness? Is consciousness in our experience? Is it in our memory? Is it in who we are and what we are? I know I can't say for sure, but there's a really good book by Evan Thompson that tries to unpack it. So I can totally recommend that.
[00:13:51.920] Kent Bye: Cool. Well, let's shift gears a little bit and move on to the more ethical issues. So maybe you could just recount a little bit into your own journey in terms of how you started to look at the different ethics of immersive and spatial computing through the lens of neuroscience.
[00:14:06.747] Sophia Batchelor: Okay, you asked my personal journey, so I'm going to frame it as that. So I'm from Christchurch, New Zealand, and in 2011 we had a devastating earthquake which had the biggest energy release of any recorded earthquake in history, which has left my city and my people as the largest recorded population of medically refractory PTSD, which means we don't respond to treatment. So I actually started looking at VR as a treatment for PTSD, to see if I could bring it home to my family and my friends. And then when I started looking at that, I realised we don't understand how this works, that a lot of the treatment is non-standardised. There was just no neuroscience at its core, there's nothing, there's no kind of informed sense of how we actually use this technology, just for this one specific use case. And I ran a meta-analysis of 317 different studies, and that was when I first noticed that VR seemed to be more effective than traditional psychotherapies. I was like, huh, why is this? And I think my first point is always, what is the big ethical dilemma? It's that if I want to move to New York tomorrow, which I am actually doing by the way,
[00:15:21.321] Kent Bye: So hypothetically, but really, yeah.
[00:15:22.762] Sophia Batchelor: Hypothetically. You know, purchase my plane ticket, go to the airport, get in the plane, plane goes up, plane goes down, I land in New York, great, I've moved to New York. But the flight engineers, the pilots, the air traffic controllers, you know, all of the people that went into actually getting the plane to its destination, they need to understand a lot more. Not just in case something goes wrong, but also because they're the ones in charge of making that thing. And at the moment, specifically from a neuroscience and from a research perspective, we are building these environments from the passenger seat of the plane. We are, to quote, moving fast and breaking things, and the things that we're breaking are people and our experiences and our memories, what we're actually doing and feeling. And so that is one thing that really needs to be addressed: we can't just build these environments on the same principles that we always have been. There are talks here at AWE on, you know, what's different about 3D interfaces, and like, you know, designers for 2D need to think about this when designing for 3D, but it's not just 3D. VR and AR and spatial computing provide us with a uniquely human medium. It's a different medium of interaction, and we need to understand that, not just say, oh, it's kind of the same, but more. It is different, and therefore it has different rules, and we need to understand those different rules. The next thing, again, is that the research that is coming out from academia, we need to look at and say, okay, so, like mine: if we are acquiring these memories differently in virtual reality, what environments does this account for? Is it generalizable to all VR, or is it just skill-based VR? Do you learn a type of declarative memory slightly differently? Do you learn this? Do you learn that?
And then once we have that information, we can actually develop a standard and say, all right, everything that potentially will cause harm based on this research will be standardised to this thing, whether or not that's informed consent, which is one of my other ethical points. Once we have this, it's like, okay, how do we actually protect people? What is going on within people that we might not see come out of the wash until five or ten years down the road, when it's already too late? We're currently building the future that we're going to exist in, and we need to be building it brighter. Which is not about better, it's not about infighting over who has the best terminology for this, therefore we have the biggest monopoly, it's not about the big five. It's about: we're going to be building this together because we're all going to be living in it. So we need to be working together on it. And so that will require standards, and I don't know and I don't have confidence that legislation can actually keep up with that. And it is through discussions like the one we're having right now, along with all the other discussions that we have, that we'll actually decide how we're going to define these things based on good, controlled research. Okay, so coming back to the other thing I was saying, about informed consent: what we saw with GDPR coming out is that, okay, clicking a button doesn't correlate with informed consent. People don't know what information they are actually handing over. And we're getting more and more biometric data. So I did a study that is hopefully being published later this year, that was based on a third avatar, so a neutral, basically a bystanding avatar that was not part of the main object interaction within VR. Using eye tracking, how much information could I get about that passive avatar over in the distance?
And from that, with 97% accuracy, I could get height, I could also get gender, and I could also guess intent with 72% accuracy, as in where that avatar was going to move next. And so the idea is that it's not just your data that is at risk, but also another person's. And we need to know that, and people need to be informed about that when they use this. Because that means that if someone doesn't want their likeness captured and potentially onsold or whatever it is, if someone has the right, the autonomy over their own information, they can have that ownership. And it's not just passively collected just because someone else decided that all of their data was going to be live streamed and completely transparent. So, informing that, whether it comes from a company standpoint, if the company is saying, okay, here is a hard drive with all of your data, every time we onsell something or any time someone asks to access it, you get an email. Or whether it's that your data is tagged, maybe it's stored on blockchain, maybe it's anything like that. I'd love to have those discussions, but it comes down to what data is being collected and how you know about that. So that consent is really, really important. Which again, so I mentioned that third kind of avatar. So with cameras kind of everywhere, London is one of the most CCTV-covered cities anywhere in the world. So everything you do, every move, every time you sneeze, it's all being captured somewhere. But with biometric data, what does it actually do? So there's an insurance company here, it might be Kaiser, it might be Blue Shield, that will lower your insurance rate by giving you a Fitbit, and if you reach 10,000 steps every day, you get a lower insurance rate. They now have access to all of that data, which is great. The US is a little bit more difficult than it is everywhere else in the world, but the UK and the EU have a lot more centralised healthcare.
Same with Canada, same with New Zealand, which is where I'm from, and Australia. We try and create centralised health records, which means that if you go to the ER, your local GP will know about it. And there's kind of a system where all of that information is stored and shared, which is really, really good for preventative medicine. And that is something that I really believe deserves to be in our future. But with preventative medicine, if there's an insurance company that comes on board, what else is that insurance company invested in? And this comes to the sharing of data: knowing where your data is, and what can be gained from it. Because, you know, we like to think that if I have my Facebook privacy settings on, I'm kind of protected, if I'm private on Instagram and Twitter. But not really. There is so much that, again, using statistical modeling, we can predict, we can cluster, we can know your preferences. And knowing that from a non-tech standpoint is very hard. Trying to explain to my parents why not to share this thing and to click this button is slightly more difficult. My parents are amazing, don't get me wrong. They try. But they haven't been raised in a generation where this is something we actually had to be concerned about. And that comes from, again, information being very transparent, or at least having a chain where you can access this and you can know that you are given autonomy over your biometric data. And biometric data comes back to, how do we actually classify that? Is that medical data and therefore yours, or is it something that can be onsold, the same way that you put your email into a newsletter list and then they're like, great, here are five companies that will pay however much for your email address? And so I really think that some kind of chain would be really helpful in terms of privacy.
I think coming back to the biology of things, when we're starting to wear things on our faces, on our ears, in our back pockets, from three points of reference, those being head position, your back pocket, and either one glove or two gloves, I can actually get what we call affect, or mood. So I can, you know, guess within a certain degree if you're feeling slightly more sad, if you're feeling slightly happier, that kind of thing. And again, those get into the abstract concepts which we got back to, what is experience, what is consciousness, but I can guess, are you suicidal at this moment, from three points of reference. If you are wearing AR glasses, have your phone in your back pocket and maybe a smartwatch on your wrist, and maybe you're a little bit more down lately, does that mean there's a moral imperative to send a doctor to your house? Is that not an invasion of your privacy? At what point can we actually say that you are you, and we are not creating a watched, Big Brother society? So yeah, that's something that actually does keep me up at night, because I really believe that as people, we should have rights to our own bodies, our own data, to whatever extent we can. So that's something that, as we are starting to wear things on our eyes, on our faces, I would really like to think about. Which also comes down to, back to the physiology side of things, there's been so many talks here about eye tracking. That is also another huge ethical worry, in the sense that, for marketing and advertising, are we going to be competing over air rights to certain advertisements? Is it going to become only the person who can pay the most? How much will you have to pay to get a clean UI on your AR? Is this going to create a further divide in our society in terms of socioeconomic status?
[00:24:39.055] Kent Bye: You mean like pay for privacy?
[00:24:40.380] Sophia Batchelor: Yeah, pay for privacy. And I believe that with any technology, everything has dual use. And one of the talks here said, you know, every time something comes out, it's going to be used for good and bad. And then we have to think about what can be made. But going back to the quote I said earlier, you know, move fast and break things doesn't work when the thing we're breaking could be people. So in terms of eye tracking, whenever we're shining anything into the eyes directly, well, I'm part of a project I can't say heaps about, but it's called the Oz Project, and the idea is that we can actually shine a laser in your eye, activate a single cone, and make you see a colour that doesn't exist. Really cool. Really awesome. It means that I, from a neuroscience standpoint, can go crazy over, like, what is colour? How do we perceive these things? What is light? Which is awesome for the scientist. But I'm also like, okay, I'm shining a laser in your eye, what are the long-term effects of that? We just don't have longitudinal information. And that is something that is also, in a lot of the labs that I am a part of, not being discussed. And there's only so much I can do before I sound like a broken record to everyone. I'm like, hey guys, how are we hurting people today?
[00:25:50.979] Kent Bye: Hey, let's talk about it.
[00:25:52.020] Sophia Batchelor: Yeah. Um, and that is one thing that I'm just like, there needs to be more discussions had, which is something that I do get frustrated about as well. It's like more discussions, but I'm like, hi, my name's Sophia. Come talk to me. I will have this discussion with you. Cause then both of us will go to lab meeting and I don't sound like a broken record anymore. And so we need to be thinking about it. But when I say thinking about it, I mean, hire a neuroethicist. or just hire a psychologist, hire any ethicist who has done a degree in this, who is going to be that person sitting at that table every meeting saying, these are the ethical implications I can see. Because right now there is only one company who actually has an ethics team tied to their R&D. And that, again, keeps me up at night.
[00:26:37.699] Kent Bye: What company is that, can you say?
[00:26:39.140] Sophia Batchelor: No. Cannot say.
[00:26:43.112] Kent Bye: Well, I can guess some R&D teams that don't have ethics teams.
[00:26:47.064] Sophia Batchelor: Yeah.
[00:26:48.187] Kent Bye: Since there's only one, it's probably pretty easy to guess all the ones that don't.
[00:26:52.167] Sophia Batchelor: Yeah, definitely. And that's the whole thing, is that everyone is creating this amazing, brilliant, wonderful technology. I've just talked about a lot of the ethical implications of the horror games and the fact that we are creating these environments that could potentially be harming people, causing subclinical thresholds of PTSD, based on the research that I did for my thesis. But I'm like, hey, okay, for people with learning differences, for people with ADHD, for people with this, for people with that, for individuals, for populations: what if we can teach physics in VR? I struggled with physics in high school. What if I could have learned that in VR, and if it's shown to be more effective, how much better off would I have been? How much better off would my learning populations have been? This is a technology that, like any technology, enables social mobility. And so I am excited for all of these companies that are building these incredible things and trying to build them better. But I also say, please just hire an ethicist. We're not that expensive, and we will do you a lot of good. And do everyone else a lot of good too. Yeah.
[00:27:56.215] Kent Bye: Yeah, well the thing that comes to mind is just that there are so many different ethical and moral dilemmas, and in some ways the technology, both AR, VR, and AI, is pushing past previous boundaries of interacting with technology and opening up new avenues of this unconscious, implicit data. Like, consent on the web used to be that you click a button and then you're consenting to whatever is unfolding in that experience. And you may be typing in your email. But there's also all sorts of other behaviors, and other things like tracking your mouse, that are a little bit more of the unconscious, behavioral side. But once you start to tap into the body, it seems like it's getting access to all this really super intimate information about you as an individual. And the metaphor that I give is that it's a little bit like the Rosetta Stone to your psyche, to be able to understand all these things about yourself. And what I heard from you over and over again is that you can start to tie things together and create a mosaic of information that allows you to do a little bit of sensor fusion, to take maybe one stream that may be pretty innocuous, but when you add it together with everything else, then you can start to tell all sorts of other information. And for me, what I find terrifying is just the fact that Facebook at F8 was announcing they want to put some sort of iris scan or fingerprinting on the headset, which they say is presumably to prevent other people from picking up your headset and trying to mimic being you. But the problem is that now all of a sudden they have a biometric identifier that will always know whoever is using it. They could be storing that and tying that to all of your biometric data. And once they are able to tie all this stuff together, then you're able to extrapolate all sorts of information that could be potentially personally identifiable information.
But individually, it's de-identified. And so, with the way all these things are defined, you can come up with a bit of a Bayesian probability as to who this may be. It's not black or white, not something you know for sure, but you're able to make some pretty close guesses. And it feels like it's the aggregation of all this data, storing it and saving it indefinitely, that poses this huge risk: it makes it vulnerable for people to get access to it and start to mine it. I imagine that they want to capture and hold all this data for decades so they can train their AI to understand all these things about ourselves. Talking to a behavioral neuroscientist, John Burkhart, he said the line between being able to predict behavior and being able to control behavior starts to get blurred. So they're building these really intimate models of ourselves, tracking our eyes, what we're paying attention to, with AI on top of that, trained on the decisions that we're making. It feels like they're going to get some pretty sophisticated models that can predict our behavior, and potentially even control and manipulate it towards their own ends, which to me seems the most terrifying. If we allow this data to be recorded, stored, and aggregated, it creates all these dilemmas when we talk about the ethics. There's your autonomy, your rights over your data, versus the utilitarian argument that you may hear from these companies: well, if we aggregate all this data, then we can use it for research and discover all these things for science's sake. But at the same time, it's really for their profit. They can use the science as an argument to justify it, but it's very much a utilitarian argument versus the autonomy of your own data.
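To make the "mosaic" point concrete, here's a toy Bayesian sketch. Nothing in it comes from the conversation: the candidate-pool size, the cue names, and the likelihood ratios are all made up for illustration. The point it shows is that streams which are individually weak identifiers combine multiplicatively into a confident identification.

```python
# Toy illustration: individually weak, "de-identified" signals combine
# multiplicatively into a confident identification (a naive Bayes update).

def posterior(prior, likelihood_ratios):
    """Bayesian update: posterior odds = prior odds * product of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Prior: one specific user out of 10,000 candidates.
prior = 1 / 10_000

# Hypothetical innocuous streams: gait cadence, interpupillary distance,
# typing rhythm, head-motion signature, etc. Each cue alone is weak,
# only tripling the odds that this session belongs to that user.
weak_cues = [3.0] * 9

p = posterior(prior, weak_cues)
print(f"posterior probability of identification: {p:.3f}")
```

Each cue alone leaves the user lost in the crowd (one triple of 1-in-10,000 odds is still negligible), but nine of them together push the posterior above 60%. That multiplicative stacking is the "sensor fusion" risk in miniature.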
[00:31:40.278] Sophia Batchelor: Jurassic Park was science too. And it's heartbreaking and really hard to say, but we can already do that. This is a model that we can build. It's not happening in our future. Sorry, it happened five years ago. And so we have a potential do-over with this new technology. Here at this conference, and for the people who couldn't make it, for the people who are listening, we have a chance to say: this is our line in the sand, and this is what we are defining our new future to be. And that means we do need to push back a little bit. And it's great to have discussions, you know. But my personal grievance is: we're having a meeting to discuss a meeting we want to have, and in that meeting we're going to discuss the things we want to discuss in yet another meeting. I'm like, no, just sit down in a room. No one leaves. Let's do it.
[00:32:37.557] Kent Bye: I mean, that's kind of the reaction you get if you try to bring it up: people talk around it and don't actually just have the conversation.
[00:32:45.621] Sophia Batchelor: Yeah, I think there's a lot of, oh, wouldn't it be great if, you know, oh, I'm really worried. It's like, you're worried, so what do we do about it? That's the difference: something does need to be done. And not to call out any companies in particular, but the biometric data with the privacy and the iris scanning and the fingerprints? That is shocking. And, you know, there was a small moral outrage over the social credit score in China. No matter what side of it you sit on, that is still happening. And we are not far enough away from that now. It's far easier to boil a frog by putting it in cold water and turning it up one degree at a time, and this is becoming, you know, the new norm. We're just happy. It's like, oh, there's been a privacy scandal with Facebook. Great, we were outraged for three days, maybe a week. There was greater outrage at the Ashley Madison hack than there was about Facebook's privacy and Cambridge Analytica. And like you mentioned, the devices, the information, are getting closer and closer and closer to us. The next stage is brain-computer interfaces, which is another one of my side projects. I build BCIs to fly drones; it's really fun. But it is, it's that, you know, your brain scan is one of the most intimate things that I can get of you. Not a love letter, not anything else, but a brain scan, because your brain scan is how you make decisions, how you weight decisions, what you're going to do next. I can detect from your brain what you're about to say before you say it. Great, telepathic speech, awesome. We're already doing that. There's the P300 speller, and it's fantastically used for ALS patients and for locked-in syndrome. But great, what if I'm on a crowded commuter train and I'm using Facebook's BCI?
I'm not saying that they're building one, just saying that, you know, it could be something they're working on in Redmond.
[00:34:49.191] Kent Bye: They are building one, yeah. I mean, it's in their R&D. We all know this. They announced it, they actually announced it. They said they were working on it two years ago, so yeah, they're working on it.
[00:34:57.182] Sophia Batchelor: Exactly, so it's fine. So, you know, to telepathic speech. And they actually said last year at a conference that they only need a sensor four times more sensitive to read externally off the scalp, which is really worrying. It's great, because from a researcher's perspective, I'm like, hey, how did you do that? I want to know the software. But then, from an ethical perspective, I'm just like, you're reading speech, and I don't like how you're handling my data at the moment, so what happens when you're reading my intent from my brain? And that's the whole thing: we all know that Facebook has a BCI team, so why are we not all here talking about it? Because it's just, what do we do? We have more discussions instead of sitting down and actually approaching this.
[00:35:41.489] Kent Bye: Well, I was just at a Canadian Institute for Advanced Research workshop in New York City, gathering lots of different people from around the industry, focusing on the intersection of neuroscience and VR. And there was someone who gave a talk about the current state of the art for ECOG, where you place these sensors in a little bit more invasive way. They're essentially able to decode speech with an AI-trained NLP program. It sounds very muddied when they play it out, but they're able to clean it up, and you're able to pretty clearly hear what somebody is thinking. And he was basically saying that, yeah, within the next four or five years, the technology trajectory is that we're going to be able to do this with external sensors, to essentially read your mind. And I have to tell you, there was a hushed pause in the room where people were like, shit, that's pretty scary, especially when you imagine wearing these headsets. And there are going to be errors. It's not going to be perfect. So if I'm thinking something, and Facebook is reading my mind, and it gets decoded and stored in a database somewhere, and it thinks that I said something completely different in my mind, then is that going to flag me for a thought crime? And this is where it gets to, eventually, the third-party doctrine: any information you give to a third party has no reasonable expectation of being private. So if they're storehousing this brain-thought data for the last five to ten years, just recording it and hoarding it, and then the government comes to them and says, I want to hear everything that Kent thought of over the last ten years, and all of his emotional profiles, that's essentially the reverse engineering of a brain scan.
And then that gets into the hands of a totalitarian government, or even onto the dark web, with hostile foreign nations able to access that information. To me, it just seems like a really, really bad road to go down, allowing these companies to record and store this data, with all the different ethical and moral problems of having our thoughts captured by these companies. On the other hand, I can't wait to have telepathic computing. I think it's going to be amazing. That's the other thing. It is going to be amazing, but I want to own that data. I don't want people to have access to data that they can then use to control and manipulate me.
[00:38:01.480] Sophia Batchelor: So I was working in a lab, and we actually managed to decode someone thinking Another Brick in the Wall by Pink Floyd, and you can hear the entire song from the recording. Bob Knight's lab at UC Berkeley; he's one of two people in the world doing ECOG readings. It's brilliant. Fantastic. And I was like, hey, I love this song. And it was decoded from the brain. The next thing is that there's actually a company called Neurosity over in New York, and they are building brain-computer interfaces with a real hard line on ethics, because, hey, they have an ethics person on their team. And they actually do all of the pre-processing on the chip. What a lot of brain-computer interfaces currently do is send a raw data stream from the chip, or whatever device you're wearing, to the computer, which then processes and decodes it. Neurosity does all of that on the device, so it only sends metadata, which means the streams can't be interpreted by a third party. It also means they can't be intercepted; you can't get between the device and the computer and say, hey, I actually wanted to go to this website, not that website, and lead you wrong. And you kind of brought this up: what happens when we've got BCIs in a large part of our population, say on Wall Street, in trading, and you can suddenly game the stock market because you know what people are thinking if you're gaining that data? What happens when 60% of your building wants to vote one person into the presidency, and a third party or the current government doesn't want that person in power, and so roadblocks are set up, or suddenly you can't register to vote? It's: who is accessing this data, how is it stored, how long is it getting stored for,
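The on-chip preprocessing idea can be sketched in a few lines. This is a hypothetical illustration of the architecture being described, not Neurosity's actual code; the `IntentEvent` type, the threshold, and the scoring are invented for the example. The property that matters is that only a derived label and confidence cross the device boundary, never the raw samples.

```python
# Illustrative sketch of "pre-processing on the chip": the raw neural
# samples are consumed on-device, and only derived metadata is emitted.
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class IntentEvent:
    label: str        # e.g. "select" or "rest" - a discrete, derived signal
    confidence: float  # 0..1

def on_device_decode(raw_samples, threshold=0.5) -> IntentEvent:
    """Runs on the headset. The raw samples never leave this function."""
    # Stand-in for real feature extraction + classification:
    score = min(1.0, abs(mean(raw_samples)))
    label = "select" if score >= threshold else "rest"
    return IntentEvent(label=label, confidence=score)

# Only the metadata crosses the device boundary to the host app:
event = on_device_decode([0.71, 0.66, 0.74, 0.69])
print(event.label, round(event.confidence, 2))  # the app sees this, not the EEG
```

Because the host computer only ever receives `IntentEvent` values, an eavesdropper on that link learns a stream of coarse labels rather than a raw brain recording it could re-decode for other purposes.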
and what is actually being recorded. And that comes down to, you know, I don't like siding with companies, because I'm like, hey, I'm a researcher, give me research projects to do, but I really like Apple's forward thinking in terms of privacy, where everything is stored on device. Not to get into the coding behind everything, but the way they have set up the Secure Enclave, with a one-way call system, and the way they actually encode their biometric files, is brilliant. I would be a lot happier and sleep better at night if a lot of other companies followed that path, because we can't stop the tide. There was a government official, an advisor to the Obama administration, who went down to Silicon Valley and asked a lot of senior executives there to slow down, and they laughed. Because you can't slow this down; this isn't something we're going to stop. And what I would like to see from a lot more research is: okay, we're not trying to say stop, but we are trying to guide that wave. Okay, so there's something different about the way we acquire memories in VR. Why? Is it to do with colour perception? If it is, then in all of these scenes that may cause subclinical thresholds of PTSD, that may cause a big fear response that is potentially harmful over time, do you just shift the colours? So that, again, if there's a difference between cognitive perception and what's happening neurally, can we trick the brain into going, oh no, this isn't real, without losing that cognitive, oh, I'm really in the environment? This is something that we can potentially do. And a lot of the research indicates that, yes, this is something that we could be doing and should be doing. So it's not about trying to stop a tidal wave. It's about guiding it to protect the people on the beach.
[00:41:40.922] Kent Bye: Yeah, I'm very curious to hear a bit more about this ECOG technology that's able to decode people humming in their mind and let you hear a rendition of it. So is it invasive? What is ECOG? What does it actually do?
[00:41:51.564] Sophia Batchelor: So ECOG stands for electrocorticography. So yes, it is invasive. Right now we work with populations with epilepsy, and we place electrodes directly on their brain, and that's where we get all of this really good signal. We can read songs, we can read all of these brilliant words, and when I say brilliant, I mean it is as clear as when I call my mum on the phone in New Zealand.
[00:42:20.769] Kent Bye: When I saw the video of this decoding of the ECOG, you could hear it. It was a little muddled, and they were able to clean it up with some sort of processing, but it was like putting a microphone on your brain.
[00:42:33.522] Sophia Batchelor: Exactly, it is. It's basically putting electrodes in your brain that then go through some software, and it's a microphone. So the next level up is something that is just at scalp level. There's scalp level and then extra-scalp, and that's considered EEG, which is electroencephalography. One of the main hang-ups with both ECOG and EEG is the spatial resolution: you can only get a couple of millimetres into the cortex, and remember, the brain sits massively deep in your skull. But there's also a geometric constraint, which is that our brains are really infolded. You know the joke about birds' brains? Our skulls are about as big as they can physically get, evolutionarily; if they were any bigger, we would fall over. So instead of growing outwards, because our skulls couldn't get any bigger, our brain started infolding, so it's all convoluted. That means you need neurons to line up to give a good EEG signal, because what EEG is picking up is the electrical activity of your brain. Your neurons fire in that all-or-nothing way, which is an electrochemical signal. Then there's something else called MEG, magnetoencephalography. If you're a physicist listening to this, you'll know the right-hand rule: the electrical field goes one way, and the magnetic field goes orthogonal to it. So instead of a direct up-and-down signal, you might pick up a horizontal, left-to-right signal with MEG. There's also a newer technology called fNIRS, functional near-infrared spectroscopy. I'm just going to call it fNIRS. What it does is similar to what MRIs or fMRIs do: the hemoglobin in your blood carries oxygen, and your body is full of hydrogen.
All the hydrogen protons in your body line up with the magnetic field, get tipped off their axis, and however they relax back indicates the density of the tissue, how much water there is in that specific region. fNIRS taps into a similar idea, in the sense that if a certain part of your brain is more active than another part, a bunch of blood will carry oxygen and glucose to that part of your brain to fuel respiration. So we can actually pick up how much oxygen is in a certain part of your brain at a point in time. It does that by emitting light, and however the light gets reflected back and picked up by the sensors, yay, that part of your brain is more active. So those are currently the main-ish methods that we use to measure neural activity, neurophysiology. And those technologies are reading a song off your cortex. They are advancing so rapidly, and they are finding their way into the hands of consumers now.
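As a rough illustration of what "picking up the electrical activity" turns into on the software side, here's a toy band-power calculation on a synthetic one-second EEG-like signal. The frequencies, sampling rate, and band edges are just illustrative, and real pipelines use filtering and Welch's method rather than a naive DFT, but the shape of the computation is the same: transform the voltage trace into per-frequency power and compare bands.

```python
# Toy EEG-style analysis: estimate power in a frequency band via a naive DFT.
import math

def band_power(signal, fs, lo, hi):
    """Sum of DFT power in [lo, hi] Hz. Real pipelines use Welch's method."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128                           # sampling rate in Hz
t = [i / fs for i in range(fs)]    # one second of samples
alpha = [math.sin(2 * math.pi * 10 * x) for x in t]        # 10 Hz "alpha" rhythm
beta = [0.3 * math.sin(2 * math.pi * 25 * x) for x in t]   # weaker 25 Hz component
sig = [a + b for a, b in zip(alpha, beta)]

# The dominant alpha rhythm carries more power than the beta band:
print(band_power(sig, fs, 8, 12) > band_power(sig, fs, 18, 30))  # True
```

Band-power features like these are what drowsiness detectors and simple BCI classifiers actually consume, which is why the "only a few millimetres into the cortex" resolution limit still yields useful signals.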
[00:45:24.034] Kent Bye: Well, in listening to different neuroscientists talk about these approaches of peering into the brain to see what's happening, you have these trade-offs: is it mobile, or do you have to be static? Do you have to do a scan and look at it later, or is it real-time processing? What is the spatial resolution? And is it invasive or non-invasive? The metaphor that was given to me for non-invasive real-time sensing is that you're outside of a stadium with a loud crowd, and from the outside you can know when someone scored a goal because you hear the crowd roar. Over time, you're able to get closer and closer, getting a better sense of what's happening, maybe even a drone view of that stadium. And eventually, having non-invasive real-time sensing with good enough spatial resolution, plus lots of AI and enough data to train on over time, you're able to do a lot more. That seems to be the trajectory: even though today it's the more invasive ECOG processes that can read your mind and listen to your brain like a microphone, the technology curve is heading toward all of this being real time with an EEG, potentially integrated with a VR headset?
[00:46:47.267] Sophia Batchelor: Yes. So, I never say never, but as a neuroscientist, there are huge limitations to the technology that we have, but also to brains. I mentioned a few of them: the geometric constraint, the way the infolding of your brain happens. The other one is spatial resolution. And I mentioned a while back that only one third of our neural architecture is shared, so a model trained on my brain data is going to look very different from one trained on yours. So there are some things that, while I never say never, I do not feel confident as a scientist that we will be able to do. I don't think that we'll be able to get resolution at the neuron-to-neuron level. I think that even hypercolumns might be a bit of a stretch. That said, this technology trajectory is incredible. Just while I have been studying this, fNIRS has popped up on the radar, and now I am seeing companies using it for this and that and all of these different things. When I cite my papers, a lot of the citations are from within the last three years, which generally only happens when a field is brand new. And when it's integrated with VR? I love BCIs and spatial computing. At most hackathons, I'll come and bring some of my BCI equipment too. In terms of accessibility, in terms of interaction: if we can select something and bring it closer to us, we can do that with intent. We can use BCIs to detect when drivers are getting sleepy faster than eye blinks can. Your eye blinks slow down before you know you're getting sleepy, which is incredible. It means we can stop a lot of accidents if we can integrate that with a semi-autonomous vehicle that slowly comes online as you're getting fatigued. We can save lives doing that. It also means something for accessibility: a lot of current VR controllers require two hands.
If you don't have the ability to use both your hands, how do you interact with spatial computing, with this metaverse, with this mirror world? You might be able to do it with a BCI. As a person who loves this space so much, I don't believe that we should gatekeep it to the people who can pay for it, to the people who are able-bodied and of the highest intelligence and all of this and all of that. The reason why I came into VR and AR is because I'm a neuroscientist. And initially, when I was like, hey, I don't really know how to code, but I'm really interested, can someone teach me? I was lucky enough to find people. But for the first year, I was pushed out of rooms when I asked to be invited to things. I was like, hey, can I come to this conference? And it was like, no, you're a neuroscientist, you don't deserve to be here. And then I found VR, and everyone was excited and creative, and it just gives us this medium that hasn't been seen before. You know, Michael Abrash says there hasn't been so much excitement since we built the internet. I wasn't alive back then, but I can feel the excitement, and I personally felt so confident turning away from a really good medical, surgical, neurosurgery track that I was on, because I started waking up every morning loving what I was doing. And what I was doing was learning and building and talking to artists.
I'm part of VR at Berkeley, an awesome student-led group. We've got 19 project teams, which range from building AR headsets to groups that are engineering the most incredible things. But we also have an animation group, and we have a cinema group that shoots 360 films and puts it all in VR. And I love sitting there and listening and watching and seeing what these teams have built, because it's creative. That's something that I want to continue in this space, and I really see that BCIs, when done ethically, when done with a good moral compass, can help enable us to do so much more.
[00:51:00.463] Kent Bye: Yeah, it's a trend that I'm seeing, these project-based teams. VR is in some ways this interdisciplinary melting pot, bringing together all these different disciplines, from architecture to neuroscience to game design to aesthetics to art and philosophy and storytelling. It's a blending of all the things, and for me, I get to talk to all these different people, so it's a great cross-section. But yeah, in talking to this neuroscientist who gave this five-year projection, he seemed to be pretty confident that we were on a roadmap, based upon what he is seeing, what they're working on, and the potential of where this might go. I think we have to, in some ways, assume that there may be some pretty big breakthroughs, just seeing what has happened in artificial intelligence and computer vision even within the last five to ten years. I don't think the Oculus Quest would have been possible without some of the huge breakthroughs that have happened in the realm of AI. So I feel like there are going to be equivalent breakthroughs from applying these algorithms and techniques, in addition to whatever combination of all these things comes together, whether it's a sensor fusion of EMG and EEG, or, in this case, maybe training it on ECOG just to get the information into the neural networks. Eventually, I think we have to assume that it's going to come. And how do we deal with it? Does all the processing happen on the chip? Are there approaches of differential privacy, homomorphic encryption? Whatever it ends up being, it feels like we need a lot more people like yourself looking at it through the ethical lens, to figure out how to architect this in a way that explores the potential of what's possible without creating a dystopian Big Brother nightmare scenario.
[00:52:43.280] Sophia Batchelor: Yeah, and it's not just people like me. I'm lucky in the sense that I have an accent, so generally when I talk, people listen. But it's sitting down and thinking, and when I say thinking, I mean asking: okay, what can go right with this? What can go wrong with this? When a lot of people make decisions, they'll make a pros-and-cons list. Why aren't we doing this for product development? Thinking about the pros and cons of this specific project on people, on the ethics. It's not that hard to go to a library or go online and look up an ethics textbook. There are so many resources there; it's like a willful ignorance, in a way. And again, there are so many university grads who will put up their hand and be like, hey, I will intern if you sign off on this, I want the experience, we want to do this and we want to build this. And like you said, very astutely, this is going to come, based on someone who hopefully is a lot more informed than I am about this space as a new grad. This future is here. And the more we integrate with all these different technologies, with all these different sectors, it's something that, going back to what I said earlier, we have to define. Because I don't think that policy is going to keep up. It hasn't kept up in the past, so there is no reason to believe that it will now, that suddenly the majority of the US government, your US government, is going to become experts in AI and VR and all these technologies and biometric data, and be able to put these things in place. It will come from the product team, it will come from the company itself. Once someone steps up and says, this is where I'm drawing my line, who's with me, you will start to see a greater line drawn by our entire sector, by the whole spatial computing space, by all of us. You know, the bystander effect, right?
Once someone steps forward, we all will. And so, hi, I'm talking to you, I'll step forward, we just need to do it together.
[00:54:56.653] Kent Bye: Yeah, that's great. I'm going to be right there too, talking about this here at AWE, on the moral dilemmas of mixed reality. And I will say that I do think that with AR, VR, and AI, there's a lot of stuff that isn't covered in the textbooks, and that we actually need to update our ethical frameworks and come up with a little bit more of a holistic approach. In some ways, I haven't studied a lot of those ethical frameworks, because I find that just from talking to so many people, I have cultivated some sense of my own moral intuition, trying to come up with, in some ways, a spatial representation of all these different contexts, and how a lot of these dilemmas come up when different contexts are mashing together. So for example, if the context is your biometric data, that's your body, and you want to use it for healing, then that's a great context. But if there's a company that is interfacing with you, and the technology is trying to gather and hoard that data and use it for profit, then that's an economic context that's mashing up with, and in conflict with, your personal context, the context of your medical healing. So I feel like there are ways of mapping out those contexts and seeing these different dilemmas and conflicts between them, but I haven't come across an ethical framework that does a comprehensive mapping of the entirety of human experience that allows me to make those context trade-offs.
[00:56:17.804] Sophia Batchelor: I can't say I have either. I think that is the larger issue I first started talking about: these contexts just don't exist yet; there is no precedent on which this can rest. And so I'm so excited for your panel, and I'm also really excited to hear everyone discussing it outside afterwards, because it's needed, so let's define it. It's only through panels such as yours, discussions such as this, and getting coffee and being like, hey, how is your company doing on the ethics? Not great. Well, please start now. The AR cloud team here, they just had a talk, and they had a symposium yesterday. Open AR Cloud, I think, is what they're called. They're trying to act not as a regulatory body, but to say, hey, here's all the information. Right? And it's when you have the information that you can start making informed decisions. A lot of us have our own kind of moral intuition, our own moral compass. That comes back to when I say building a brighter world: it's not about building a better world, because everyone's idea of better is going to be different from everyone else's. Your moral compass might be slightly different from mine, slightly different from your mom's, your dad's, my mum's, my dad's. So can we have a basic framework we all agree on, like the Hippocratic Oath? Do no harm, right? That is the base layer. So do we need the equivalent of a Hippocratic Oath for the technology sector? One that starts with do no harm, you know? What happened to do no evil? Because evil can be defined as different things depending on which side of the line you stand on. So, do no harm, do no wrong. It comes back to what we learnt as children, in the sense that if I am willfully causing malintent, is that ethical? Short answer: no.
It's the same with willful ignorance. Hey, I don't know how this VR horror game is going to affect someone. Oh, a paper was published two weeks ago, and I don't have time to read it? Right. There is information starting to well up, starting to be out there, and we need to be informed about it so that we can make informed choices, informed decisions, and informed policies.
[00:58:57.993] Kent Bye: Great. And finally, what do you think the ultimate potential of spatial computing is, and what it might be able to enable?
[00:59:10.187] Sophia Batchelor: This isn't the ultimate one, but I want to be able to hug my parents. I get tragically homesick, but on my current life path, I don't have a way to be home. And so I try and call my parents every week; I try and call my brother whenever he'll pick up the phone. My cats will still come running whenever they hear my voice in the background, but sometimes it is hard. I went through an injury which left me unable to walk, and my parents weren't there. And all I wanted was to be able to hug my parents. So, you know, what if my parents could have been there holographically at my hospital bed? What if my trainer could have been there as I was going through rehab, when no other person was in the room? It's a uniquely human thing that can connect us, and that is the thing I really want. You know how we talk about conference withdrawal syndrome, right? You have all this information, you talk to people, it's all exciting, and then suddenly you go back home to your nine-to-five job and it's like, where are the people? Why is my phone not going crazy? Where am I meant to be? So it's: how do we create this kinder world? No being is born free from the need for compassion. So how do we build a compassionate world that is kind and also bright?
[01:00:32.558] Kent Bye: Great. And is there anything else that's left unsaid that you'd like to say to the immersive community?
[01:00:37.802] Sophia Batchelor: Go be awesome. Keep building cool things.
[01:00:41.705] Kent Bye: Awesome. Great. Well, thank you so much for joining me today. So thank you.
[01:00:44.908] Sophia Batchelor: Thank you for reaching out. This was wonderful, and I loved hearing all your perspectives.
[01:00:51.253] Kent Bye: So that was Sophia Batchelor. She's a recent graduate focusing on neuroscience and bioethics. So I have a number of different takeaways from this interview. First of all, we were talking earlier about the research that she had found through doing a meta-analysis of all these different studies, saying that VR was more salient and more powerful than actual real life. Now, that was through the lens of these particular studies. I tend to want to believe that our life is more salient and more memorable, but I can also see how there may be certain memories and certain experiences that are so novel that they actually become deeper and more real. Trying to draw the line as to what's real or not real, or how powerful it is, is actually very difficult to quantify in any definitive way. But the main point she was trying to get across is that it's just powerful, and there's a bit of an ethical and moral obligation for people who are creating these experiences. She gave a great metaphor: she's about to move to New York City, and she's going to get on a plane as a passenger. When she's flying across the country, there are people who have a deep sense of all the aerospace engineering, all the dynamics of what it takes to fly an airplane, and then there's the pilot who actually controls it. As a passenger, she's just in the back, passively along for the ride. Her metaphor is that as VR creators, we're kind of like those passengers, without being fully aware of all the implications of what it means to be creating these immersive experiences in the first place. So she's advocating that people have a bit of a moral imperative to look at some of the research, what is known, and what some of the potential harms are.
And that's the big thing around ethics: trying to reduce the harm that's being caused. There are certainly going to be benefits, but there are also going to be harms, and you have to weigh the benefits against the harms as we're creating these immersive technologies. So we talked a lot about informed consent and biometric data, and what you can actually get out of that information. She was saying that just from three points on the body, she can start to get different aspects of your affect and your mood, which means that if somebody is gathering all that information and looking at it, then they could potentially determine whether or not you're depressed or whether or not you're becoming suicidal. And then there's a question of what the moral imperative is for businesses that have this information. Do they have some sort of obligation to send a doctor to your house to prevent you from committing suicide? Or is that just part of your own privacy? And if they're wrong, then it can feel like they've encroached on your right to do whatever you want. So you have these trade-offs between all these different disciplines and domains. And just the concept of privacy engineering and these ethical issues means that there are these different trade-offs, and there's not a perfect answer to these things. Like, what is the answer if you have information that people may be in a condition where they're going to harm themselves? What should you do? But then there's this whole other deeper context of information getting out there. Who's controlling it? Can you have a chain of custody? Can you start to audit it in some way? And there are a lot of companies that are already getting this information and making these judgments. She says that this is not something theoretical in the future; this is something that already happened, like five years ago.
And we're at the point now where we're reevaluating all this stuff and trying to draw new lines in the sand to say what's okay and what's not okay. If we've had these ethical transgressions, then what are the ways to draw up these principles so that we can recalibrate this entire industry? Especially as we're moving into immersive technologies, we're going to open the floodgates to all sorts of new information that has typically been within the domain of medical research controlled by HIPAA, or at least within a user study that's looking at these things. We're basically blurring all those different contexts, and it's time, as an industry, to take a step back, reevaluate things, and have some ethicists in the room. It was really surprising for me to hear that she had found only a handful of teams out there that had an ethicist on their R&D team. She's a recent graduate, and I'm sure she was traveling around all over the place with an opportunity to see where she wanted to work, and so this was information that she was sharing with me. She couldn't say which company actually had an ethicist on their design team. But I think there need to be a lot more ethicists within this research and development process, talking about it. I talked to Daniel Robbins at SIGGRAPH, which was after Augmented World Expo. He was on an R&D team; he wasn't a trained ethicist, but he was trying to bring these deeper ethical conversations into the larger design process for the future of these immersive technologies. There just needs to be a lot more of that, is what Sophia is advocating for, and I certainly agree with that as well. And then the final thing was the future of brain-computer interfaces. This is something that Sophia has been diving into pretty deeply.
In fact, she mentions Neurosity in this interview, and this was before Neurosity had even given an official announcement that they were an entity at all. They gave the announcement at, I think it was, the XR for Change conference in New York City, where I ran into her again. She was very concerned about this biometric data and what happens to it, and with Neurosity, they have a lot of privacy architecture where a lot of the processing happens within the context of the device itself. It's on the chip; it's not being shared with third parties, and it's not going off the device at all. So when it comes to this process of reading your thoughts, then yeah, it's a big question as to where that data is going. Is it going into a big database somewhere? Is it something that's being mined by artificial intelligence? And once you start to have somebody's brain scan, she said, that's a very intimate part of who you are and your identity. She made the comment that only about one third of our neural architectures are shared between people, so every experience we're having, every moment, is creating these new memories, whether it's just a sensory memory, an attentional memory, a working memory, a short-term memory, or whether it ends up getting into long-term memory. And then what is the nature of consciousness? Is it drawing on these memories and fusing them together to become part of the language of our experience, how we make sense of things based upon these category schemas and the ways we create buckets of categories to make sense of our experiences? And are those being referenced somehow by our consciousness and our memories? Yeah, so I think it's just interesting to see that everybody's experience is creating new memories all the time, and that your neural architecture is very unique to who you are.
And maybe, as she says, only about one third of that is going to be able to be transferred over between people. So if you start to think about the future of direct brain-to-brain communication, this would lead me to think that maybe you have to do a translation into virtual reality, where you have these embodied experiences that can communicate things and transfer information back and forth between people. That's just talking about brain-computer interface to brain-computer interface, like having telepathic communication with BCIs. Well, BCIs are reading what's happening in your brain, but being able to input information into your brain as well is a whole other thing. That's what Elon Musk and Neuralink are trying to do: actually put information into your brain. So there's ECoG, where they put electrode arrays onto the surface of your brain, and there's near-infrared spectroscopy (that's a hard word to say), and all sorts of new techniques to get more and more resolution. Down the road, it feels like it's on this roadmap to potentially do real-time external EEG or some other sensors to look into your brain and detect what you're thinking. And then you start to feed that into a brain-computer interface, and it's going to have all sorts of accessibility applications, but there are all sorts of other ways that could go horribly wrong if it gets into the wrong hands. So I'm glad that there are people out there like Sophia who are looking at the bioethics of these types of things, and I'll be very curious to see what happens with Neurosity. That's N-E-U-R-O-S-I-T-Y. And yeah, the final part is that there's this Hippocratic oath to do no harm.
And so I think that's a guiding principle for all of this that is really motivating Sophia: that there are a lot of amazing potentials for where these immersive technologies can go, especially when it comes to BCI and to accessibility, getting access for people. She doesn't want it to be that you have to pay for privacy; she wants it to be available for everyone. So yeah, there are all these different aspects of these ethical issues, whether it's the social credit score in China, where you can start to see different countries taking different approaches, and it's about trying to take a step back and reevaluate what our ethical frameworks are, to draw these clear lines in the sand. And that's a lot of what this whole journey of the last seven months has been for me: having conversations like this one with Sophia, and then trying to synthesize it, both in the keynote that I gave at Augmented World Expo, which is going to be in the next podcast, episode 836, and then the final distillation, or at least the latest iteration and distillation, with the XR Ethics Manifesto that I did at the Greenlight Strategy conference. So you can skip ahead if you want to get the distilled aspects of all these things that really try to boil it down into these deeper ethical principles. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage.
And as an independent journalist and an independent scholar, this is like a research project that I've taken on for the last seven months: wanting to set up these different talks, have these different conversations, and try to provide back to the community at least some working drafts of ethical frameworks. This was in large part catalyzed by going to the VR Privacy Summit and, at the end, us wanting to present something back to the community. But it was such a huge topic that we couldn't condense it down in a way that would provide some prescriptive ethical design guidelines and an ethical framework to present back to the community. So I kind of took it upon myself as an independent project, in collaboration with all these different conversations and talks over the last seven months. So, if you enjoy that and want to see more of this type of independent research and oral history, trying to talk about these big ethical dilemmas that are on the horizon, then please do become a member of the Patreon. You can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.