On March 18th, Facebook Reality Labs Research announced some of its research into next-generation neuromotor input devices for mixed reality applications. I paraphrased the most interesting insights from their press conference announcement, but I was still left with a lot of questions about the specific neuroscience principles underlying their ability to target individual motor neurons. I also had a lot of follow-up questions about the privacy implications of these technologies, and so thankfully I was able to follow up with Thomas Reardon, Director of Neuromotor Interfaces at Facebook Reality Labs and co-founder of CTRL-Labs, to get more context on the neuroscience foundations and privacy risks associated with these breakthrough “adaptive interfaces.”
Reardon described his journey into working on wearable computing, starting at Microsoft, where he created the Internet Explorer browser. He eventually went back to school to get his Ph.D. in neuroscience at Columbia University, and then joined with other neuroscience colleagues to start CTRL-Labs (be sure to check out my previous interview with CTRL-Labs on neural interfaces). Reardon and his team set out to override the dogma on motor unit recruitment, and they succeeded in detecting the action potentials of individual motor neurons through the combination of surface electromyography and machine learning. These wrist-based neural input devices are able to puppeteer virtual embodiments, and even cause action based on the mere intention of movement rather than actual movement. This breakthrough has the potential to revolutionize the fidelity of input so that it isn’t constrained by the mechanics of the human body: the brain and motor neurons have much more low-latency capacity and many more degrees of freedom, which may solve some of the most intractable bottlenecks for robust 3DUI input for virtual and augmented reality.
But with these orders-of-magnitude increases in new opportunities for agency comes a similar increase in the sensitivity of this data, since the network of these signals could be even more personally identifiable than DNA. There are also a lot of open questions around how the action potentials of these motor neurons, representing both the intentional and actual dimensions of movement, could be used within a sensor-fusion approach with other biometric information. Facebook Reality Labs Research has a poster at IEEE VR 2021 showing that eye gaze information can be extrapolated from head and hand pose data plus contextual information about the surrounding virtual environment. So there’s already a lot of sensor-fusion work happening towards Facebook’s goal of “contextually-aware AI,” which is not only going to be aware of the world around you, but also potentially and eventually what’s happening inside of your own body moment to moment.
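To make the sensor-fusion idea concrete, here is a toy sketch (entirely my own construction, not FRL’s actual model) of how even a simple least-squares regression can recover a gaze direction from synthetic head and hand pose features when gaze correlates with pose:

```python
import numpy as np

# Toy illustration: learn a linear map from head + hand pose features
# to a 2D gaze direction (yaw, pitch). Real systems would use far richer
# models plus scene context; this is only a conceptual sketch with
# synthetic data.
rng = np.random.default_rng(0)

n = 500
head = rng.normal(size=(n, 2))   # head yaw, pitch
hand = rng.normal(size=(n, 3))   # hand position x, y, z
# Synthetic "ground truth": gaze mostly follows the head, nudged by the hand.
gaze = 0.8 * head + 0.1 * hand[:, :2] + rng.normal(scale=0.05, size=(n, 2))

X = np.hstack([head, hand, np.ones((n, 1))])  # features plus a bias column
W, *_ = np.linalg.lstsq(X, gaze, rcond=None)  # least-squares fit

pred = X @ W
rmse = np.sqrt(np.mean((pred - gaze) ** 2))
print(f"gaze RMSE: {rmse:.3f}")
```

The point of the sketch is only that gaze becomes predictable once it is statistically entangled with other tracked signals, which is exactly why fusing “harmless” data streams raises privacy questions.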
Part of the reason why Facebook Reality Labs is making more public appearances talking about the ethics of virtual and augmented reality is that they want to get ahead of some of the ethics and policy implications of AR devices. Reardon was able to answer a lot of the questions around the identifiability of this neuromotor interface data, but it’s still an open scientific question as to exactly how that motor movement data could be combined with other information in order to extrapolate what Brittan Heller has called “biometric psychography,” a term coined to identify this new class of data.
Heller says, “Biometric psychography is a new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests. Immersive technology must capture this data to function, meaning that while biometric psychography may be relevant beyond immersive tech, it will become increasingly inescapable as immersive tech spreads. This is important because current thinking around biometrics is focused primarily on identity, but biometric psychography is the practice of using biometric data to instead identify a person’s interests.”
Heller continues on to evaluate the gaps in existing privacy law that don’t cover these emerging challenges of biometric psychography, which “most regulators and consumers incorrectly assume will be governed by existing law.” For a really comprehensive overview of the current state of U.S. privacy law, be sure to listen to my interview with Joe Jerome (or read through the annotated HTML or PDF transcript with citations). There are a lot of current debates about a pending U.S. Federal Privacy law, and I’d be really curious to hear Facebook’s current thinking on how the types of biometric and psychographic data from XR could shape the future of privacy law in the United States.
Another point that came up again and again is the context dependence of these issues around ethics and privacy. Lessig’s Pathetic Dot Theory tends to look at culture, laws, economics, and technological architecture/code as independent contexts, but I’m proposing more of a mereological structure of wholes and parts, where the cultural context drives the laws, the economy sits within the context of the laws, and then the design frameworks, app code, operating systems, and technology hardware are nested within the hierarchy of the other contexts. Because these are nested wholes and parts, there are also feedback loops where technology platforms can shift the culture.
I’ve previously covered how Alfred North Whitehead’s Process Philosophy takes a paradigm-shifting process-relational approach to some of these issues, which I think provides a deeper contextual framing for them. Whitehead helped to popularize these types of mereological structures with a lot of his mathematics and philosophy work, and this type of fractal geometry has been a really useful conceptual frame for understanding the different levels of context and how they relate to each other.
Context is a topic that comes up again and again when thinking about these ethical questions. Despite Facebook’s promotion of “contextually-aware AI,” most of their talk about context has been through the lens of your environmental context, but during their last press conference they said that the other people around you also help to shape your context. It’s not just the people: the topic of conversation also has the ability to jump between different contexts. Up to this point, Facebook has not elaborated on any foundational theoretical work for how they’re conceiving of context, contextually-aware AI, and the boundaries around it. One pointer that I’d provide is Helen Nissenbaum’s Contextual Integrity approach to privacy, which tries to map out the relationship of information flows with different stakeholders in different contexts (e.g. how you’re okay with sharing intimate medical information with a doctor and financial information with a bank teller, but not vice versa).
A lot of the deeper ethical questions around data from XR are elucidated when looking at them through the lens of context. Having access to hand motion data and the motor neuron data driving it may actually not raise that many privacy concerns on its own. However, FRL Research is able to extrapolate gaze data when that hand pose is combined with head pose and information about the environment. So in isolation it’s not as much of a problem, but it becomes one when combined within the economic context of contextually-aware AI and the potential extension of Facebook’s business model of surveillance capitalism into spatial computing. How much of all of this XR data is going to be fused and synthesized towards the end goal of biometric psychography is also a big question that could shape future discussions about XR policy.
It’s possible to see a future where these XR technologies could be abused to lower our agency and, over the long run, weaken our bodily autonomy and privacy. In order to prevent this from happening, what are the guardrails that need to be implemented from a policy perspective? What would viable enforcement of these guidelines look like? Do we need a privacy institutional review board to provide oversight and independent auditing? What is Facebook’s perspective on a potential Federal Privacy law, and how could XR shape that discussion?
So overall, I’m optimistic about the amazing benefits of neuromotor input devices like the one Facebook Reality Labs is working on as a research project, and how it has the potential to completely revolutionize 3DUI and exponentially increase our degrees of freedom in expressing our agency in user interfaces and virtual worlds. Yet I also still have outstanding concerns, since there’s a larger conversation that needs to happen with policymakers and the larger public, and Facebook needs to be more proactive in doing the conceptual and theoretical work of weighing the tradeoffs of this technology. There are always benefits and risks with any new technology, and we currently don’t have robust conceptual or ethical frameworks to navigate the complexity of some of these tradeoffs.
Update 4/1/2021: Here’s some more info on the Facebook Reality Labs symposium on Ethics of Noninvasive Neural Interfaces in collaboration with Columbia University’s NeuroRights Initiative.
— Kent Bye (Voices of VR) (@kentbye) March 31, 2021
Also here’s a human rights proposal for Neuro-Rights:
Yuste, R., Genser, J. & Herrmann, S. "It's Time for Neuro-Rights." Horizons: Journal of International Relations and Sustainable Development, no. 18, 2021, pp. 154-164. JSTOR, https://t.co/1OMsuqQaWW
— Kent Bye (Voices of VR) (@kentbye) March 31, 2021
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
23/ @FBRealityLabs Research has a poster at #IEEEVR2021 where they can estimate your eye gaze in a VR scene based on your head & hand pose.
This is just the beginning of this type of sensor fusion + ML in FB's mission to know what you're looking at for their contextually-aware AI
— Kent Bye (Voices of VR) (@kentbye) March 29, 2021
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So today, I'm really excited to share with you a conversation that I had with a Facebook Reality Labs researcher who also happened to be the co-founder of CTRL-Labs. He's a neuroscientist. And there was an announcement that happened a couple of weeks ago now, March 18th, 2021. Facebook Reality Labs had done this whole press event because they wanted to talk about the future of input when it came to augmented reality. This is something that doesn't have a technological roadmap just yet, although there have been discussions about Facebook's work on a watch, where they might start to integrate some of this EMG, myocontrol type of input into something like a smartwatch. But there are no plans that have been announced in terms of what specifically is going to be coming out of this. But the idea is that eventually we're going to have augmented reality, and we need a lot better input than what we have existing. You know, human-computer interaction using a mouse and keyboard is something that we've been doing since the late 60s. But now we're moving into this realm of spatial computing where we need higher-fidelity input. And from my assessment, these different types of neuromotor input controls are going to be a huge, huge thing for both virtual and augmented reality. It's going to unlock so much potential. However, with that power of increased agency also come lots of really intractable problems around the privacy implications: what happens to the data that are collected from it, what happens when you tie it together with all the other information that's coming from immersive technologies, and what kind of guardrails do we need to have in place. Facebook is really taking the approach right now of saying, hey, we need to have a conversation about this because this is really important.
I wanted to at least start with having this conversation with the neuroscientist to really get down into the nitty-gritty of the neuroscience that is driving a lot of these neuromotor input devices that they're working on there at Facebook Reality Labs Research. And I should also note that after I had this conversation, while attending the IEEE VR conference, there was actually a relevant paper that speaks to how Facebook Reality Labs is actively working on different types of sensor fusion. Cara Emery is a graduate student who had an internship at Facebook Reality Labs Research and was doing this project called Estimating Gaze from Head and Hand Pose in Scene Images for Open-ended Exploration in a VR Environment. So, taking into consideration what your hands are doing as well as what your head pose is, you're able to extrapolate eye gaze without even having eye-tracking technologies. They're able to start to prove out some of this type of sensor fusion, which I think is, in some ways, really exciting in terms of what's possible. On the other hand, there are a lot of larger ethical concerns, especially in the context of Facebook's drive towards their concept of contextually aware AI, which is trying to be aware of essentially everything that's happening in your environment, but also how you're in relationship to that environment around you. So I have lots more questions that I'll dig into at the end, but I wanted to just start at the baseline of taking a look at the neuroscience. And there's actually quite a lot of really mind-blowing stuff that I think is going to be unlocked here. I mean, stuff that Reardon is kind of alluding to: that they have the capability for superhuman typing speeds, both faster and more accurate than you've ever had before. And these are the types of things that are going to be unlocked, but not only that, but like
whole six-degrees-of-freedom spatial computing interfaces, lots of really amazing stuff. I really believe in the potential of these types of devices as an input device. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Thomas Reardon happened on Friday, March 26, 2021. Also, just as a bit of a heads up, there's a bit of audio feedback that I had with some of the questions I was asking. I think it was probably a combination of both Zoom's algorithm and the Portal technology, and any information that was lost was happening within the context of that conversation as well. But apologies for that, and hopefully I'll be able to work around that type of issue in the future. So with that, let's go ahead and dive right in.
[00:04:01.599] Thomas Reardon: My name is Thomas Reardon. I'm a neuroscientist by training and a software developer by past profession. I am part of this grand division at Facebook called Facebook Reality Labs that is probably best known for the Quest line of VR devices. I am now working on a set of what we'll just call next-generation experiences in the realm of AR: augmented reality and mixed reality devices, or even more broadly, what we can think of as wearable computers. And in particular, I'm leading the CTRL-Labs team, which is developing neuromotor interfaces to control these devices, to control AR, to control experiences in there, to control your watch, or not a watch, but a wristband, et cetera, and give you a new kind of user experience that's not based on how you push buttons or sling a joystick around, but on creating a much tighter connection between you and machine.
[00:04:59.141] Kent Bye: Great. Yeah. Maybe you could give me a bit more context as to your background and your journey into this realm of wearable computing.
[00:05:05.806] Thomas Reardon: Yeah, I'm kind of a cranky old tech pro who also did a detour through academia. So I'm first and foremost a hacker at heart. And by that I mean the old-school sense of hacker, the MIT hacker, software creator. I was probably writing software when I was 12 years old, actually hanging out at the lab at what back then was called the LCS, the Lab for Computer Science, I think, at MIT. And I grew up as a software professional. I started a company when I was really young and ended up at Microsoft when I was 19, and actually spent all of the 90s at Microsoft working on Windows, early versions of Windows. I had a lot of contributions to Windows 95. And as part of that process, I came up with and started the Internet Explorer project there, which at the time was just fundamentally a part of Windows. And I then served as the architect for IE and the web architect for IE through the 90s, through versions 2, 3, and 4 of IE. I went from that to another company, to Openwave, the former Phone.com, where I served as the CTO. So I'd been working around browsers for years. And sometime around 2003, I got tired of tech and threw in the towel and said, I'm going to do something totally different. And I decided to finally go to college. And I went to Columbia as an adult. And I studied classics. I studied Greek and Latin. I was just indulging my own mind, doing the opposite of tech. And I just started sampling other classes. And one of the classes I sampled while I was there, reading Cicero and my beloved Seneca, was a neuroscience class. And I just really dug the people who were in the class, like the minds that were there. The topic was awesome, but the minds that were there were pretty incredible to me: young minds that I met that were like grad students. So I kind of followed them to the lab and started volunteering in a neuroscience lab and doing experiments, and writing code, using my old skills.
And that led me into a really rich academic career in neuroscience that went for about 10 years. I did my PhD. I started at Duke and I finished my PhD at Columbia with the legendary Tom Jessell and Attila Losonczy. Tom has now passed away, but he was really the legend of motor neuroscience, of understanding all the way down to the level of molecules: how does the motor nervous system assemble itself? How do you get from all this thought out to, hey, I'm turning a muscle on and off and making my fingers move? And that was the launching point for CTRL-Labs. So, much as I loved academic research, I left Columbia with several other neuroscientists; I went with a couple of colleagues from Columbia to start this company, based on a loose idea that people have a lot more capacity at the edge of the nervous system, at the periphery, at your motor neurons, than we really thought. And that that could be used, and you could actually interface that to machines in a new and novel way. It wasn't just about how you moved, how you grabbed a mouse or typed on a keyboard, but that you could go right to the nervous system and do something novel and train a person to learn new kinds of skills they've never learned before, faster than they've ever learned them before. You could control myriad devices around you: everything, the Nest thermostat on your wall, or your laptop, or a VR device. That's where, after a couple of years, we ran into FRL and a bunch of the geniuses here working on a decade-long project to make augmented reality real. I think we had a unique perspective on how you might control it. We thought you could use neural interfaces, our neuromotor interfaces, to control everything in the world, but nothing seemed as exotic as this fully embodied, rich, immersive experience that AR promises.
So we kind of married efforts and joined up with FRL, and it's kind of been, I want to say, off to the races, but since in a startup it felt like the races, I'd say off to the marathon, maybe: combining efforts with FRL and this really pretty deep, deep investment in neuromotor interfaces combined with wearable computers.
[00:09:17.273] Kent Bye: How's that? Is that a reasonable background? Do you want to go down one of those paths? Employee number one on Internet Explorer to now creating these really sci-fi neural interfaces to completely change the paradigm for how we interact with spatial computing, you know, with some classics thrown in there as well. That's great.
[00:09:33.738] Thomas Reardon: It does feel a little bit different, but, you know, I would say at the time, something like IE was obvious. Netscape already existed and Tim Berners-Lee was already having wild success getting the W3C launched. That was easy, just in the sense that it was an easy problem to describe. Like, hey, there's this browser thing happening in the world; Microsoft, we need to do a browser. It's clear this is going to be the main way people navigate information. That was easy. And just go, go, go. I would say what we're doing now is not nearly that easy. We have a general thesis of what wearable computing could be, but no way to test it right now. We're kind of inventing as we go. And that's scary. It feels like we're all working without a net.
[00:10:17.207] Kent Bye: Yeah, well, I wanted to get a little bit more insight in terms of whether or not some of these revelations that you were coming up with, say, like the motor unit recruitment, being able to target these individual motor neurons, if that was already coming from like the academic literature or if you were going out and creating these technologies and these new platforms that was able to kind of reveal new neuroscience insights as you were playing around with using EMG neural interfaces to use it as an input device, or if this was something that was a result of some of the technology platforms that were developing, or if this was established where you just saw an opportunity to say... You're getting to the good stuff.
[00:10:55.253] Thomas Reardon: So this is like 70% really boring and 30% kind of novel and interesting. So the basic technology that we're pursuing is called surface electromyography. This is the ability for us to have sensors layered on the skin outside of your body. So it's not invasive. You don't perforate the body. And we read the electrical activity of your muscles, and from that, we reverse engineer the activity of motor neurons. The ideas about how, or the science of how, motor neurons turn muscles on and off, that was all pretty well established by the 50s, even starting in the 1920s. EMG itself, this electrical activity of the muscles, which a lot of people probably know from EKG, the electrical activity of heart muscle, but EMG is a more general version of that, has been around for a hundred-plus years. In some sense, it's kind of where neuroscience starts. It's trying to understand: why do muscles have electricity? Why do they have a voltage across the membrane? Why does that change? Why does that change lead to the contraction of the muscle? That's kind of where neuroscience started, what you might call functional neuroscience. Our insight here was that this was almost like a scientific ghetto that was just completely discarded and left behind while people worked on increasingly exotic things inside the skull. All that stuff out at the periphery, of how the neurons inside your spinal cord actually turn muscles on and off, is seen as kind of boring. We took a different approach to it that just said, this has been underleveraged, underexploited, if you will. Let's go back and look at this on scientific first principles, and rather than looking at it as a case-closed issue, let's go reopen it and see what could change. Because of 50 years of dogma, a lot of things just die scientifically because of dogma, not because of ongoing evolution or ongoing scientific research. So, you used the term motor recruitment.
So, I should just establish that, because that was the dogma; we started the company to take a shot at upsetting that dogma. So, the idea is pretty simple. Let's take your arm: for most of us, there are 14 muscles in the arm, the forearm, and those muscles control most of the movement of your hand. We call those the extrinsic muscles that control your hand. There are a few more muscles inside your hand, like the thumb's pollicis muscles, but most of the ones that matter are in the arm. So any one of those muscles receives input from a cluster of motor neurons in the spine. And the way that we've mostly thought this worked in neuroscience was that your muscles generate force, they generate contractile force. And if you can imagine it in your head, I'll do this with my hand right now, that force follows this sigmoidal function. So it's monotonic: it goes up and it hits a peak level. And at the lowest end of the force curve, certain neurons fire; we'll call them Anne and Bob and Charlie. So neuron Anne is the first one to fire, and your muscle just barely starts to twitch. And then neuron Bob starts to fire, and by that I mean it sends little electrical pulses to the muscle. And now your muscle starts to twitch more and contract more. And then neuron Charlie takes off. And in that same sequence, the exact same specific neurons always start to fire in the same order. And what that means computationally is that there's only one dimension of information in your muscle. That's pretty boring. We think of the brain as being this exotic, high-dimensional computer, this crazy 15-billion-neuron network with 100 trillion connections, and all the dimensionality that's implied by that collapses down to one dimension of contracting your muscle. Because I can't turn on neuron Charlie and then neuron Anne and go back and forth. So all that information just gets lost.
That idea that there's a sequence to neurons turning on and then climbing up that force curve to where you hit the peak force when you're really grabbing something really heavy, we call that motor recruitment. Neuron A recruits motor B. When B gets very active, it recruits neuron C. When C gets very, very active, it recruits neuron D. And we said, what could we do to overturn that dogma and to start to look at how much independence you can learn across those neurons? And that was just kind of like a napkin level idea that we sketched out. It's like, let's go pursue this and see what we can do. That's still the biggest idea we've had. we are still pursuing it. And the stuff that we think of as being years out, seven plus years out, is all based on how can you learn to control these motor neurons in a way that neuroscience had told us for the last 50 years was not possible. I.e., by the time you get out to muscles, effectively, I'm going to use air quotes here, you are dumb. You just have neurons that all collapse down and they can only do this one dimension of output. And we said, like, let's go see if we can find if there's more dimensions of output, even on one muscle basis. Within our work, and I'll kind of use these two different terms today, and these terms are a little, I'm cheating a little bit because they're not purely exclusive, but we have two kinds of control that we try to get out of this muscle sensing technology and that neural signal decode that we do. One we call myocontrol. And you can think of that as just us using that electrical signal to computationally understand what your muscle is doing as you do a normal manual task, like type on a keyboard or move a joystick. And that's very stereotyped. And we can like recreate virtually what you were doing in that mechanical action of like wiggling your fingers to type on a keyboard. We call it myocontrol because we're saying you're controlling things at the level of muscles. 
And that means that dimension I talked about, meaning each muscle is one dimension: 14 muscles, 14 dimensions. Turns out that's a lot of dimensions, and there's a bunch of neuroscience I could go into about extensors and flexors and synergist muscles, et cetera, that might collapse that dimensionality even more. In myocontrol, we don't care. All we care about is: here's a natural way, natural meaning you move naturally, you create forces across joints to control a typical device like a mouse or a keyboard, and we'll let you do that same thing without the mouse or the keyboard. It's mostly going to feel just like a mouse or a keyboard, but you don't need the physical device anymore. You could still use it; we're just not going to listen to the device. We're only going to listen to your nerve. The neat thing is you can start to quickly exceed what you can do with the physical devices. For one, you can do things like what we call the brain button. This is this fun little demo we have. We haven't really shown this publicly much, but I love this demo, where you try to push a button, an actual physical button, and the computer can tell you you're going to push the button. And you try to trick it. The point is, we can always tell you, somewhere between 100 and 150 milliseconds before the button is pressed, that you're going to press the button. And you can try to trick it and pull your finger back, et cetera. But we know what that looks like too. We know what the trick looks like. The point is, the electrical activity happens well before the actual mechanical contraction of your muscles. And the electrical activity of you stopping that push happens before you actually stop it. So we know the truth before your body actually knows the truth, meaning before your body's done the action.
And it's cool because you do these little things where you can like push a button basically faster than you've ever pushed a button before, because we can take 100 milliseconds out of the action. Now, I don't know what to do with that. It's fun and maybe we'll enter some Korean video gaming league to swap the other teams because we get 100 millisecond edge. That's not really the goal, but it's a fun artifact of, oh, we're looking at these signals at this electrical neural input level rather than the mechanical output level. The next thing it leads to is a radical reduction in error. So, when you move, you have a whole feedback loop. You have what's called proprioception, where you're actually listening to your muscles, figuring out how stretched they are. It's why when you close your eyes, you kind of know where your hands are. You know if they're above your head or not. That's because your brain is inputting what's called proprioception, which is the stretch receptors inside your muscles telling your brain how stretched they are. And that gives you a sense of where your body is in space. So, We can do things such that that error reading, error correcting process you do during normal movement when you say type on a keyboard, we can kind of melt a lot of that down to the minimum possible error correction that you need to go through to affect some outcome, like typing A, B, C, D on the keyboard. And we can continue to shrink that down, and shrink that down, and shrink that down, where the full movement matters less and less. But that electrical signature that means you're about to type an A is very distinguishable from the electrical signature of you're about to type a B. And we get so good at it that you actually don't even have to type the A. You actually make just like a tiny little, almost like micro movement towards the A, and that's the A. 
And because you don't have to go through the full movement, the full movement is where you have to do all this error correction. Like you have to mechanically adjust as you're doing the movement. the more we can take that mechanical error out of it, the more efficient you become with the way that you actually manipulate the machine. That's the goal for some of these myocontrol things that are a bit crazy, but you get really quite a bit more efficient. We talked about this last week publicly where we didn't show it. We were trying to hint that maybe there's evidence that you can type faster than you've ever typed before and more intriguingly, more accurately than you've ever typed. And hopefully those are results that we'll get to share soon.
[00:20:53.803] Kent Bye: Yeah, that's fascinating. And I could see both the neuroscience research and concepts like homuncular flexibility, which both Jeremy Bailenson and Jaron Lanier have talked about in their books and research on body mapping, coming together in being able to put a thing on your wrist and presumably isolate down to individual neurons. And you showed some demos that had six to seven degrees of control in one instance, where, I mean, if you think about it, we have like two controls, the A and B buttons on each hand, and then the joystick. But what I imagine this could potentially introduce is the equivalent of having a dozen buttons, or hundreds of buttons, that you could potentially train yourself on, not only for input, but also for these other really exotic things, like embodying an octopus, or being able to train yourself to use an extra hand and have that do typing. And so it feels like a whole weird sci-fi future that we're heading into, that this is going to produce.
[00:21:53.671] Thomas Reardon: I think it's going to be a wonderful future in which human joy increases and human agency increases. And I'm somebody who thinks agency and joy go hand in hand. You increase agency, you increase happiness. You decrease it, you decrease it. So that sense of having more control over things you haven't been able to control in the past gives you a sense of being kind of bigger than you are today, or more capable than you are today. I find that to be just conceptually intriguing and fun. And I've experienced a bit of it myself in our work, so I can tell you it's really, really fun. Well, I want to make sure that we don't go too woo-woo on it, and try to ground it in where we are today and what neuroscience there is still to do, because, oh boy, there's a lot more to go. We could be doing this for decades and still be making progress against it, and hopefully delivering things to the real world the whole way. But I don't think this is something like, oh, we're going to wrap this up in two years and be done. In some sense, this is what I'm going to work on the rest of my life, happily. So the experiences we do now are kind of focused on what I'll call legacy or translational things, like how can we make typing something better than it's ever been before? How could we give you a mouse that's more than 2D? You can imagine being in AR where it's like, boy, it would be nice to have a 6D mouse, and to have it feel as natural as your hand is today. I have, say, five digits here, and I move them. I don't have to think. I can just move them, five different digits. Let's call that five degrees of freedom. I could add a sixth degree of freedom. The point here is that your hand isn't the smart part. Your hand is, and some people get offended when I say this, your hand is just like a dumb mitten. It's just stuck there at the end of your arm. It's an end effector in the control theory sense. And you learn how to use it.
You get pretty capable with it. You're not born that way. You learn and get better at it. What is cool is you're really, really good at remapping it to other tasks, even just these five end effectors. Now, one proof of this is polydactyly, people born with extra fingers. People born with six or seven fingers don't just have two non-working fingers attached to their hand plus the five that their brain was ready for. They have full dexterous control over all six or seven fingers, as if we all would have that. The brain just wires into them, and then you create a different control space and you adjust to it, and you're off to the races. They don't experience the sixth and seventh digit as being more difficult. We have a video that I love that we show around of somebody who was born with a congenital hand deficit. Most of the fingers of their hand never developed. And with our band, they're actually able to get control of the five digits of a virtual hand in about two minutes, just by watching it on screen and doing weird things where they're like, I don't know what I'm doing right now, I'm just kind of clenching my arm or something, and it's causing the fingers to move, and you just watch it. And in this case, only with visual feedback, which is probably not the optimal feedback, you get control of it. So for instance, if I put it on you and I changed the mapping of the fingers, so that the pinky was the index finger, how long do you think it would take you to learn that? It turns out you learn it really quickly. This is this motor remapping we do constantly, because you're just trying to adjust the end effector. So think of the natural hand that you have as just something that might read out what your brain is telling the computer, or might not. But what your brain is doing is paying attention to what's happening on the computer, not what's happening in your hand. Does that make sense?
And this partly is what Jaron Lanier is saying with homuncular remapping, and part of it is a little bit different, because we're not talking about mapping like, I'm going to make my finger now control my elbow. Instead, we're saying you have more degrees of freedom embedded in your nervous system that can be exploited. Your hand is the restriction, not your brain. And I can't emphasize that strongly enough. Your hand is the restriction, not your brain. Your brain can output vastly higher degrees of freedom, high-fidelity degrees of freedom, than your hand can output.
[00:26:14.152] Kent Bye: Yeah. And I wanted to ask about something that you had said to NPR, which was that if you take like a 30-second sample of the neural input data, you could potentially identify somebody. It's like unique personally identifiable information, not only in that moment, but forever in the future. So I'm just curious how you start to think about this information in terms of the privacy implications of it.
[00:26:37.378] Thomas Reardon: This is a huge part of our conversations every day inside, and you're tilting into exactly why we started to kind of open up the lab a bit and give people a look inside the lab. So there's a couple of tiers of this. I'm going to get to the privacy implications of this, but I need to set up some of the science for it, I think, for it to make sense to the broadest number of people. So I said a lot of our work today, and what we've done with stuff, is under this regime of what we call myocontrol. What I want to do is introduce this other regime of control we call neurocontrol. So rather than myocontrol, where we're kind of just doing a more accurate, hopefully flexible way of moving your muscles and decoding that, now, instead, we're actually looking at things strictly at the level of just neural activity. We don't care about the muscle activity at all. So one of the important results of neuroscience in the 20th century is this idea of a motor map and a motor unit. And this was something that we started our company to really take advantage of, this idea that if we can get a device that can read motor units, not the big gross electrical activity of your muscles, but a single motor unit, then we could really start to talk about a true neural interface. So let me explain what a motor unit is. When you are born, you are overwired. You have a bunch of neurons in your spine, and they will be in a segment of your spine from which they will then grow out, and they will innervate a muscle. And they will do it kind of haphazardly. And your muscle is made up of lots of fibers in nice lined-up formation, and they have parts of their surface, the end plate, that neurons connect to. And any one fiber in the muscle, when you're born, actually gets connections from tons of motor neurons in your spine, hundreds all at the same time. That's the way you're born.
Every motor neuron in your spine talks promiscuously to lots of different fibers, and the fibers get that kind of promiscuous input. When you see a baby waving its arms around and acting like it's out of control, it's actually doing something crucial in development called motor babbling. And that motor babbling leads to something called motor refinement, in which all of those excess synaptic connections onto the fibers fall off. And there's this winner-take-all phenomenon where any one fiber in your muscle is contacted by only a single motor neuron. This is crucial in human development, in primate development, in most mammalian development. Crucial that we go through this motor map refinement. So now what we have is this idea of a motor unit, this connection map from that one neuron out to fibers in your muscle. And it's actually a collection of fibers. So let's call it a hundred fibers in your muscle, one neuron that's associated with them. There's another phenomenon about that connection between a motor neuron and the fibers that's different from the rest of your brain: we say it's a non-stochastic, non-probabilistic synapse, so that if the motor neuron fires, sends out an action potential, the zero or one of the nervous system, that electrical signal is guaranteed to propagate into the muscle, propagate by a chemical transmission into the muscle. The consequence of that is that when we look at the EMG, if we decompose it at a high enough signal-to-noise ratio, we can now get a view of a single neuron and its activity in your spinal cord, because of this motor map. That's very, very exciting from a neuroscience point of view, because now, from outside of the body, I'm able to listen to single neurons, and not just single neurons, single action potentials. So again, the information currency of the brain is an action potential. It is, for the most part, a zero or a one across most of the nervous system.
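The idea that each motor neuron's firing produces a distinguishable electrical signature can be illustrated with a toy template-matching classifier: each unit's action potential shows up as a stereotyped waveform, so detected spikes can be assigned to units by correlation. Real surface-EMG decomposition is far more involved (e.g. blind source separation methods); the templates, waveforms, and noise level here are invented purely for illustration.

```python
import numpy as np

# Hypothetical sketch of the core idea behind EMG decomposition: each
# motor neuron's action potential yields a stereotyped waveform (its
# motor unit action potential), so spikes can be assigned to units by
# template matching. Not a real decomposition algorithm.
rng = np.random.default_rng(1)

# Two made-up 8-sample MUAP templates.
templates = {
    "unit_A": np.array([0, 1.0, 2.0, 1.0, -1.0, -0.5, 0, 0]),
    "unit_B": np.array([0, -0.5, -1.5, 2.0, 2.0, 0.5, 0, 0]),
}

def classify_spike(snippet, templates):
    """Assign a detected spike to the unit whose template correlates best."""
    def ncorr(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda name: ncorr(snippet, templates[name]))

# Noisy observations of each template should still be assigned correctly.
noisy_a = templates["unit_A"] + 0.1 * rng.standard_normal(8)
noisy_b = templates["unit_B"] + 0.1 * rng.standard_normal(8)
print(classify_spike(noisy_a, templates))  # unit_A
print(classify_spike(noisy_b, templates))  # unit_B
```

The point the sketch captures is the one Reardon makes: because each fiber answers to exactly one neuron, the waveforms separate cleanly enough that a single neuron's activity becomes legible from outside the body.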
If I get that, it's kind of off to the races. That's manna for neuroscientists. And we do have that. And that's exciting. Now, one of the consequences of this is to realize that your wiring of this motor map, as I said, is stochastic. It's done randomly, because it just happens during that first year or so of development. And it's actually static for the rest of your life. It doesn't keep updating and changing. That's the motor map you get for the rest of your life. When you build up muscles, you're not making new fibers. You're adding protein into the fibers that you have. So that map of these muscle fibers to these neurons is set at a certain point. It's not encoded in your DNA. In fact, there's not enough information in your DNA to do it. This is orders and orders of magnitude more connection specificity than you could possibly encode in your DNA. Evolution came up with a solution to this, which is to overwire everything up front, because we don't know where to tell everything to go up front, and then let you go through this competition process that yields this final motor map. That motor map uniquely identifies you. I mean, it doesn't identify your name. It doesn't identify any demographic thing about you. It doesn't say how old you are. It doesn't say what gender you are, what race, or, who knows, what languages you speak. None of that. All it does is say, here's the outcome of a bunch of motor refinement that you went through in the first year of life. The crazy thing is that little motor map, and the electrical signal that we see as a consequence of it, is unique to you. It's probably unique to you relative to every person who's ever lived. It's unique to you relative to your genetic clone: your twin's motor map and yours look nothing alike, completely stochastically different. That is kind of intriguing.
The consequence of that is, in some idealized form, and certainly we're not there today, but in some idealized form, if we could detect the motor map on you, then every single time you put the device on in the future, we could say, oh, that's that same person, that's person 112345. We don't know who it is. We just know that's a uniquely identifiable signature of that person. Weirdly, even more identifiable than your DNA. And the weird thing is we can't really infer anything else out of it. It's almost like just this bucket of bits, almost like an encrypted number: it's unique, but it doesn't mean anything other than that it's unique. So that gets to what you asked about: if that's possible, and I think this is where you're going, what are the privacy implications of it? And frankly, yeah, the privacy implications are damned substantial to us. It's exactly why I'm here talking to you now, and why we've been engaging more publicly. It's important to note that the signals we detect are intentional signals. We are not decoding thoughts. We are not getting a signature of your thoughts, things that map to your social preferences in the world, your social actions in the world. All that we see is the result of your intent. You intend to move and control something, and then we see the result of that intention. We don't see what you thought about beforehand. We don't see what you were sad about or what you had for breakfast. It's just, what are you trying to do in the five milliseconds that we're decoding the signal and turning that into a button push on the machine? We can do all of that more accurately if we do it relative to your specific motor map. So if I can say, oh, that was neuron Charlie, not neuron Bob, and neuron Charlie is what this person trained to mean push the space key on a keyboard, all of a sudden now we have this neural interface that's based on single-neuron control. And we've actually shown this.
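The "person 112345" idea, a motor map acting as an opaque, anonymous identifier, can be sketched as a similarity match of a session's signature against previously enrolled ones: a match returns only an opaque ID, never a name or demographics. The signature vectors, threshold, and IDs below are hypothetical; no real system represents a motor map this way.

```python
import numpy as np

# Illustrative sketch (my construction, not FRL's): treat a person's
# motor-unit characteristics as an anonymous signature vector, and match
# new sessions to enrolled signatures by cosine similarity. A match
# yields only an opaque ID, carrying no name or demographic data.
rng = np.random.default_rng(2)

enrolled = {
    "person-112345": rng.standard_normal(16),
    "person-667788": rng.standard_normal(16),
}

def identify(session_signature, enrolled, threshold=0.9):
    """Return the opaque ID of the best-matching signature, or None."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(enrolled, key=lambda k: cos(session_signature, enrolled[k]))
    return best if cos(session_signature, enrolled[best]) >= threshold else None

# Same person, slightly different recording conditions: still matches.
today = enrolled["person-112345"] + 0.05 * rng.standard_normal(16)
print(identify(today, enrolled))  # person-112345
# An unrelated signature should fall below the threshold.
print(identify(rng.standard_normal(16), enrolled))
```

This also illustrates Reardon's "encrypted number" point: the returned ID is stable across sessions yet, by itself, tells you nothing else about the person.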
We showed a demo of this last week. And it really is single-neuron control, which is even more intuitive. It's invariant to underlying movement and load on the arm, et cetera. But to make that work super, super well, we have to do it based on that motor map. We want you to learn how to control that neuron, and then take off the band, take off the device, and come back tomorrow and have the same level of control as you had yesterday. And then the next day and the next day, and actually just keep getting better and better at it. You can't even help getting better at it, because that's what your brain does. Your brain gets really, really good at motor tasks and gets better and better and better at them. In some sense, it's sort of what your brain is for. It's to generate movement. It's to control your body. So I won't go longer there, but I'll let you ask questions about what the privacy implications could be and how we're trying to approach it. Right now, we're trying to have this, I think, open conversation in the world and trying to be as transparent as we can about, hey, look, we think it's kind of important that there's this signature here. And we want to work with really everybody, people in the policy domain, end users themselves directly, to say, what's the right trade-off here? What are people comfortable with? We think there's a bunch of magical things you can do with these signals. But we've got to make sure people are super comfortable with this, and that they understand exactly what's happening with these signals that are coming off of their body, and that nothing is hidden from them.
[00:35:24.698] Kent Bye: Yeah, I guess the concern that I have, just from talking to different neuroscientists who have said that the way that you move is the way that you think, so taking the concepts of embodied cognition, is that as you get more and more intimate information about that movement, combined with things like eye tracking information, or galvanic skin response, or emotional sentiment analysis, you're basically going to have this sensor fusion effect, where you have not only the virtual environment, but eventually potentially your eye gaze data, and, if it's a virtual environment, specifically what you're looking at. So having a bunch of this information, and then adding on top of that all this information about your motor neurons, could start to really get down into not just your intention, but also the context under which that intention is happening, and things could be extrapolated from there. So I guess my concern is all the potential biometrically inferred information when you start to tie all this stuff together.
[00:36:18.631] Thomas Reardon: Yeah, this is a dense topic. And I by no means want to put myself forth as an expert on whether we could sense every last thing about your body and your emotional state. That's not part of my research. In some sense, I don't care about it, but that's just because I'm passionate about motor neuroscience. I'm not a big believer that this kind of psychographic stuff that can be built up, in particular out of motor signals, really has any real legs, but I'll try to be a scientist and learn from others about it. It's not what my group does, so it's just not where we are, not what we care about. What we care about is giving people control over machines that they've never had before. That sounds exciting, and I think users will love it. The world you're talking about is just not our research program. I never talk to anybody here about how we'd combine this with galvanic skin response or things like that, things that people use as proxy signals for emotional state or affective state. What is the universe of science going to do down this path? I have no idea. Who knows? I'm not saying that makes it an irrelevant question. That's a super interesting question. I think we ought to try, as a community, to come up with guardrails against the worst possible usages of the technology. We have to do that jointly. It's not a Facebook thing. That's an Earth thing. We have to do it seriously. I think for us, our commitment is to be as transparent as possible. You just can't hide anything about this, and you just tell users, this is what's being collected from you, if you will. In our ideal world, we're not really collecting any data. This device on your wrist collects the data, and it doesn't go any further than that. It doesn't go into the cloud or somewhere else. I don't know if that's where we'll land. That would be ideal if we could do that.
I think everyone inside my research team and our partners here thinks that. We're trying to go figure out what the constraints of machine learning are that allow us to give people this control. And the more we can push to the edge versus running in the cloud, the better. Combining that with other sensor modalities, like what you just ran through, like I said, it's just not anywhere in my research umbrella, and certainly not for anybody around me, the whole research division that we're in. I don't know how to predict where it goes across the Earth. Where do you think it goes?
[00:38:44.158] Kent Bye: Well, I mean, I think all this technology has amazing potentials and also amazing risks. And I think that's the thing. I've been looking at a lot of the ethical implications, and there are going to be amazing things with accessibility, but I also do think it's inevitable that we create these, what are essentially mind-reading machines. When you tie everything together, you won't necessarily need to have EEG from the brain, but if you have enough of the contextual information tied with everything else, then I think AI will eventually get to that. So how do we cope with that? It feels like a technological inevitability that we'll get there at some point. But it's like, how do we put in the guardrails, like saying that you only do things on edge compute, as an example? If you're recording it, that's where I get additional concerns, because essentially you could go back in time and look at somebody's thoughts or intentions in a certain context. But as long as it's in real time and you're doing edge compute, I feel like that, to me, is an approach that we need to push for.
[00:39:36.995] Thomas Reardon: It's a huge part of the research mission here, to push as much as possible to the edge, to relieve the pressure against this problem of, we know this is pretty profoundly intimate data. Although, again, I want to qualify it and repeat what I said earlier: it's not social data, and it's not tied to any social data. It's just this weird thing that's clearly uniquely identifiable, but, at least for me, it doesn't raise the scary things the way that somebody reading my email would scare the crud out of me. I'd be like, no, no, no, don't do that. So yeah, I think this is a really, really dense topic. I am kind of curious, almost philosophically, about the expression mind reading. I think it's one that's easily overcharged. So, does my hand read my mind?
[00:40:28.924] Kent Bye: Well, for me, it's sort of about the intention. And, like you say, with eye tracking information you can infer someone's sexual preferences based upon what they're paying attention to. So it's that inferred information, or what Brittan Heller calls biometric psychography: looking at the biometrics, but within the context of what's happening around you, and being aware of that as well, we're able to make these inferences based upon very small amounts of data.
[00:40:53.473] Thomas Reardon: There probably are things like that. I'm going to assert that eye tracking is not a neuromotor interface, so it's something else, something different. It's an attentional signal, clearly. I don't know if it's an intentional signal as strong as what we're talking about, but what you raised is a really interesting example, because it's actually relatively cost-free. You don't need to have these exhaustive models. You just present people interesting visual information and see what they respond to with their eye gaze. That's scary. That reminds me of psych experiments in the '60s and all that stuff, and the old subliminal advertising, back when we used to think that was actually a real thing in the '60s and '70s, the psychological thriller movies and stories we used to tell back then. For the work I'm doing in neuromotor interfaces, it just seems kind of far afield. There are so many other things that can be combined together that I think are so much more privacy-concerning than the combination of our signals and those things. As you just said, boy, eye gaze by itself is already pretty potent. Eye gaze plus a very, very intentional motor action at the wrist, I'm like, maybe there's something there, but like I said, as interesting as that is to us, we don't pursue it. So there are probably a bunch of other things I'd be more concerned with. I mean, you told me about eye gaze combined with GSR, and, okay, boy, I can start to see where that might start to go wrong. So maybe at the highest level, there's this really interesting question: what does it mean when we get to this world of wearable machines and they start to really become extensions of us? Things like the iPhone are already halfway there, right? It can be frustrating, but I think for a lot of people, it's like the thing, I can't leave home without it. I've got to have my phone. It's like leaving part of your body behind.
What do we do to make that less bad, and almost privacy-enhancing? I think people have talked about what happened in the development of DNA science, when it started to really explode in the 2000s, and when we needed to start to set policies and laws about it, things like protecting the disclosure of DNA and genomic information to insurance companies, et cetera. Yeah, that's the kind of stuff we need to start doing now. From our perspective, we want to be early. We don't want to be like, hey, we shipped the product, and five years later, boy, we'd better go out there and come up with a policy for this. We want to do the policy work now, not in a decade. We want to know what promises people need, what is going to allow them to invite this kind of technology into their lives in a way they trust. It would be such a disaster to me if we did all this work, and I had this unbelievably talented team of neuroscientists, and we did all this work just to find out we screwed up all the policy stuff, and nobody can use it because they're afraid of it. I can't imagine being more sad than if that were the outcome.
[00:43:55.623] Kent Bye: Yeah, that's why a lot of folks are talking about it, because we're trying to figure out what that balance is. We don't want to stifle innovation, but at the same time, we want to make sure that we're not creating a roadmap to dystopia, where we have all this information that is too much information. We've already crossed this threshold with all these other mobile devices, but it's different once we start getting down into detecting individual neurons firing. The metaphor that I have from talking to neuroscientists is that we used to think about the mind and the brain, but really what I've heard time and time again from neuroscientists is that the whole body is like the brain, through embodied cognition. And so I've had some neuroscientists say the way that you move is the way that you think. And so being able to tie all these things back is the long road where my personal concerns come in. It may not be there now, but it feels like an inevitability that eventually we'll be able to extrapolate that. And if it is true that we're on that roadmap, then what do we do from a policy perspective? And I think there's also what Thomas Metzinger calls the pacing gap, which is that the pace of technological innovation goes so fast that it's hard for the conceptual frameworks and the policy to even keep up. So I think it's part of the larger discussion that Facebook is starting to have, but also, for me as a journalist, it's sort of like, what are the ways in which we are able to strike that balance with the guardrails that we put on this stuff, to say, you know, this is going to be too much? We could say, for example, that this is only going to be an edge compute device: it's real-time processing, it's not being recorded and stored forever. And that's just one example.
But what those specific examples might be, to be able to both have ethical design that can explore the full potential, but also put on some guardrails, to the point where this technological roadmap doesn't go down the path of using this information to create environments that take the worst aspects of information warfare down to an experiential level.
[00:45:46.045] Thomas Reardon: Well, let me ask you, do you feel like there's such a thing as values-neutral technology, or do you think any technology has an inherent bias for good or bias for bad?
[00:45:56.687] Kent Bye: I think it's up to the culture. I mean, I think of it as a very logical set of nested contexts: at the highest level is the culture, then the next level is the laws and the economy and the experiential design, and then from there, the actual code of the experience, the operating system, and the hardware. So you have these nested sets of context. And that's a feedback loop, where the culture is shaping everything, but the technology can also feed back into everything else. So there's both positives and negatives, and I think that's why these ethical issues never have a clear answer: it's because it's often different contexts where you have to weigh the benefits against the costs, to say, we're willing to take on these risks, given the benefits that we have here.
[00:46:35.688] Thomas Reardon: Yeah. I think this, call it a feedback loop, this cultural feedback loop, yeah, I've actually had a thesis around this about the development of the internet. It kind of overwhelmed us before we actually had the social norms to use it properly, and sometimes the wheels can come off. And when I think about this technology, I'm like, how do we avoid that kind of chaos, where all this unintended damage happens? I don't know any other way to do it other than to say, let's talk about the policy up front. Let's talk about the goals, and how can you be radically transparent about what you're doing with it? I'm proud of the fact that we're out there doing these neuroethics conferences. We're big sponsors of them. A giant team of people within my lab is engaged on it. We are starting to get some interesting pointers and guidance, and it feels like it's started to congeal a little bit. There's a little danger in this: sometimes you just don't fully know until it's actually out in the world. What we're trying to do is optimize. Don't wait till 10 years after it's out there to start having those policy discussions. Do it now, at the simple level of humans talking about it, and then ultimately at the level of real policy and laws. And we are not just open to that kind of thing, we're almost desperate for those levels of conversation. Like I said, I'm a researcher. I care about this stuff doing maximum good in the world. That's my obsession. So anything I can do to wire that up culturally in the right way, that increases the maximum social good, is the thing I'm going to do. That's it. This whole technology is being created, invented, to maximize human agency, to maximize any one given person's sense of control.
So it would be unbelievably tragic if people's digestion of the technology, usage of the technology, leads them to conclude, wow, this is reducing my agency. That's the opposite consequence. So I think we're doing about as much as we can. We got more to go to kind of increase the odds that this really does do like maximum positive social outcomes.
[00:48:51.070] Kent Bye: Yeah, just two questions. I'm curious about haptics, because you have talked about that a little bit, and this concept of sensory substitution. You're showing some stuff with getting some level of texture, but I don't know if it gets down to the detail of, like, the frequencies at a fingertip, and whether you're able, from a haptic perspective, to simulate the nuances of touch by vibrating on your wrist. You know, whether the experience gets to the point where it kind of tricks your brain enough that you almost can't distinguish it. Or at least, I've never had a sensory substitution experience yet, to know what it's like to swap out my hearing and use my torso as my ear. But that's kind of the concept: as long as it signals into your brain, then you should be okay. So I'm just wondering the extent of what it means to start to play with haptics.
[00:49:35.913] Thomas Reardon: I think there's the work that we're doing around haptic rendering, and there's work that folks like the neuroscientist David Eagleman are doing, which is all super interesting. Some of this is sensory substitution. Some of it is not substitution, but call it novel means of building up higher-level interpretations of the sensory stimuli, not lower levels. So that's the, you know, I'm going to type braille on your torso and see if you can read it. That's kind of the David Eagleman work. I would say this is my personal view versus an FRL view, because I don't run the team doing all this great haptics work here. My personal view is I'm not all that wired up for the synesthetic version of this, which is like, oh boy, I did something on mechanoreceptors on my wrist and that turned into a taste sensation, exotic forms of sensory substitution, true synesthesia. What I'm more interested in is, can I trick you enough, as it were, with this sensory stimulus that your reaction to it is no different than to the real thing? That's it. And maybe this is that kind of William James point of view, that how your body reacts to it is what matters, that your emotions are the manifestations that exist in your body. In that same way here, on the sensory input side, if the reaction to it is the exact same as to the pure stimulus, if you will, that's a pretty good outcome. And that means that basically we can use these other stimuli to help you learn new skills. But I don't know about the full synesthetic endpoint that some people talk about.
[00:51:07.897] Kent Bye: Right. Great. And finally, what do you think the ultimate potential of virtual reality might be, and what might it be able to enable?
[00:51:19.487] Thomas Reardon: Virtual reality is one weird thing. I'll talk about wearable computing and what I think it means to have the ultimate personalized computing experience. I think what we want is machines that feel properly like extensions of our will, that act on our behalf, and that present information to us based on our willful decision-making. The closer and closer we can get to machines that are well controlled by humans and feel like a productive extension of ourselves, the better. I think that could be a magnificent outcome for all of us. Some of that is better embodiment in things like VR, immersive embodiment. Some of it is not necessarily that, and is more like the sensory substitution zone you just brought up. But what I really think about is machines that, to be a little bit crazy about it, do our will. Machines that act on our behalf and exclusively on our behalf. I think the biggest problem we face is on the input side, the human output side, not the human input side. Your nervous system's bit rate is pretty good on the input side and pretty bad on the output side. So things like VR and AR, just from the rendering, display, and auditory input perspective, represent a way of trying to engage more and more of something you have a ton of capacity for, which is input. The output side is a huge problem and a massively different problem. So when we think about the outcomes of this, to be selfish about my neuromotor and output perspective, I think we will look back on this era in 20 years as the one that changed the equation between people and machines and put people back in control of machines. And I actually think that's not what happened over the last 20 to 40 years. I think we let the machines increasingly dominate us while we got less and less control over them. So: more control for us, less control for the machines.
[00:53:18.099] Kent Bye: Awesome. Well, I just wanted to thank you for taking the time to be able to talk about a little bit of both the neuroscience as well as the privacy implications of all this. I'm really excited to see how this goes. And yeah, just thanks for taking the time to sit down and talk about all this stuff with me today. So thanks.
[00:53:32.066] Thomas Reardon: Thanks so much, Kent. Thanks for your time today.
[00:53:34.668] Kent Bye: So that was Thomas Reardon. He's the Director of Neuromotor Interfaces at Facebook Reality Labs and one of the founders of CTRL-Labs, which was acquired by Facebook in September of 2019. I have a number of different takeaways from this interview. First of all, I'm really excited about the potential for these types of neuromotor interfaces and what they're going to unlock in terms of superhuman input control: being able to tap into individual motor neurons and shave off 100 to 150 milliseconds of latency. I think it's going to open up all sorts of really interesting higher-bandwidth communication, everything from pressing buttons to extending and puppeteering different embodiments. And I think Reardon is right that these types of devices are going to increase the bandwidth of the expression of our agency. Now, one thing I noticed in this conversation is that there may be a bit of compartmentalization happening within Facebook Reality Labs Research. Thomas said that he's personally not aware of any research programs trying to tie this motor neuron information specifically into other initiatives. But this general process of sensor fusion is already happening, with publications that showed up at the IEEE VR conference just a few days after we had this conversation on Friday. Facebook Reality Labs is actively working on extrapolating your eye gaze: taking the hand pose and the head pose, being aware of what's happening in the virtual reality context, feeding all of that in alongside baseline eye tracking data, and then seeing whether they can match that baseline gaze information based on what's happening with the hands, the head, and the context they've created.
The goal is to infer gaze data, and eye gaze is already a lot of really sensitive information, which we actually talked about here. So it is happening. And just looking at the co-authors, there's Marina Zanoli, a program manager at Facebook AI, as well as Sachin Talathi, a research science manager at Facebook Reality Labs. So this, I think, is generally an intersection between artificial intelligence and what's happening at Facebook Reality Labs with all the immersive technologies. There's actually quite a lot of innovation that could come from artificial intelligence and machine learning algorithms that take this movement data and translate it into other aspects of what's happening in the body. For me, it's a technological inevitability that whatever Thomas Reardon is working on is eventually going to fit into this larger ecosystem of contextually aware AI and this type of sensor fusion, producing what is referred to as biometrically inferred information, or what Brittan Heller has called biometric psychography. Now, Reardon says he's a bit skeptical of biometric psychography, comparing it to subliminal advertising as something that generated a lot of fear at the time but proved to be unfounded pseudoscience. I don't think that's true of biometric psychography, because it's already happening: eye-tracking information can already be used to extrapolate all sorts of intimate information, including sexual preferences, and it goes on and on in terms of what kind of biometrically inferred information you can get from somebody. Again, this is what Brittan Heller calls biometric psychography: someone's interests, their attention, what they value. All of that information is not always personally identifiable, but it's certainly intimate information.
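To make the sensor-fusion idea concrete, here is a minimal, purely illustrative sketch in Python: it fits an ordinary least-squares model that predicts gaze direction from head pose and hand pose alone, using synthetic data. The features, the linear model, and all numbers are my own assumptions for illustration; this is not Facebook Reality Labs' actual method, model, or data.

```python
# Hypothetical sketch: infer eye gaze from head + hand pose without an
# eye tracker. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_frames(n):
    """Fake motion-capture frames: head yaw/pitch plus a hand offset.

    We synthesize "ground truth" gaze as mostly head-driven with a small
    hand bias, mimicking the intuition that people look near where they
    reach. (These coefficients are invented, not measured.)"""
    head = rng.normal(0.0, 0.3, size=(n, 2))   # head yaw, pitch (radians)
    hand = rng.normal(0.0, 0.2, size=(n, 2))   # hand offset in the view plane
    noise = rng.normal(0.0, 0.05, size=(n, 2))
    gaze = 0.8 * head + 0.3 * hand + noise     # gaze yaw, pitch (radians)
    features = np.hstack([head, hand])         # shape (n, 4)
    return features, gaze

def fit_gaze_model(features, gaze):
    """Ordinary least squares: gaze ~ features @ W."""
    W, *_ = np.linalg.lstsq(features, gaze, rcond=None)
    return W                                   # shape (4, 2)

# Train on "recorded" frames where baseline eye tracking supplied labels...
X_train, y_train = make_synthetic_frames(2000)
W = fit_gaze_model(X_train, y_train)

# ...then predict gaze on new frames from pose alone.
X_test, y_test = make_synthetic_frames(500)
pred = X_test @ W
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
print(f"gaze RMSE without any eye tracker: {rmse:.3f} rad")
```

The point of the toy model is the privacy observation in the episode: once gaze correlates with other body signals, a system that never sees your eyes can still estimate where you are looking, and the same logic extends to any biometric signal that co-varies with behavior.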
I think it's probably safer, rather than disbelieving it, to be in a state of suspended judgment, or even to believe the opposite: that it's a technological inevitability that we're eventually going to figure out how to tie all of this together. Then it becomes an issue of thinking about the larger context in which all these technologies are being tied together, and what guardrails we, from a consumer perspective, need to figure out. The thing I realized is that when you only look at this through the narrowest context of the technologist and input and output, you miss that a lot of the really big ethical issues arise in the context of a platform with an economic business model of, in Facebook's case, surveillance capitalism, where they want to gather and record as much of this data as they possibly can. And there are a lot of questions around whether they're going to continue with this business model. In the legal research paper that Brittan Heller wrote, "Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law," a big point she makes is that a lot of our ideas about privacy are tied back to identity and personally identifiable information. But the real privacy risk of this type of technology is all the other biometrically inferred information that comes from you, and what happens to that specific information. So there are a lot of gaps in existing law. And in some sense, Facebook is saying, yeah, we want to have a policy discussion. But from my point of view, it's: okay, let's see the policy recommendations you're making. Then there's the idea of a privacy IRB, an independent institutional review board for privacy that would have full access to everything in order to audit it. That's just one idea for the kind of organization that would need to be in place to provide some oversight.
But I don't hear Facebook saying, hey, let's have a privacy IRB, let's invite outside experts. They're waiting for other people to figure out what's going to be imposed on them. There are a lot of ways in which Facebook will say, we're going to have RFPs, and they'll engage outside policy experts like the XR Association or the Information Technology and Innovation Foundation. So there are a lot of ways in which the details of some of those policy discussions are handled by people other than Facebook themselves. And I do think there's an element where Facebook can't be both the creator of the technology and the author of all the rules for how it's going to be regulated; in some sense, it isn't their job to come up with all of those boundaries. But they need to be involved in the conversation, especially when these technologies aren't widely available to independent researchers or security researchers, or generally to help with the conceptual frameworks for making sense of all this. I've been disappointed by how reactive Facebook has been about all of this, rather than proactively putting forth some of the conceptual frames, engaging with philosophers of privacy to go beyond the privacy laws that were written back in 1973, and thinking in a forward-looking way about what privacy means and how we start to define it. If they want to treat privacy as a human right, then great; there's a lot of work that Dr. Anita Allen has been doing on that and the GDPR. There's Helen Nissenbaum's contextual integrity theory of privacy. There's Adam Moore, who looks at it more akin to copyright law, where you can license out different aspects of your data.
And what would it mean to have a bit more control and ownership of your data? There's not only the technological architecture but also the larger conceptual frames and policies to consider. The conversation right now is about the pacing gap: Facebook is blazing forward with things that are blowing people's minds, while people have no real idea how to even relate to them. There's this question of how not to stifle innovation while still having guardrails in place, and I think that's a tricky issue; there aren't a lot of good models for how it's going to play out. It's not enough to put out an RFP here or there, or to say, let's have a policy discussion, when you look at how broken a lot of the political oversight already is for existing technologies. As we move forward, what needs to be in place to really ensure we're protecting the end user and the consumer? I mentioned during this podcast the concept of a mereological set of nested contexts. Mereology is the study of wholes and parts. In Alfred North Whitehead's process philosophy, he was trying to come up with new mathematical metaphors, looking at the nature of reality in terms of organisms. It's a paradigm shift towards a relational ontology: looking at how things are related to each other rather than treating them as static objects in isolation. That's a big paradigm shift that I keep coming back to. Whitehead was pushing this mereological approach as a scale-free way of looking at things: whether you're looking at quantum mechanics or galaxies, it's all entities with different relationships to each other.
That's a mathematical metaphor for a mereological structure that runs from the smallest to the largest aspects. And I think there's that same dimension of a mereological nested structure here. At the highest level is the culture; then the laws that are set by that culture and its context; then the economics that happen within the context of those laws and that culture. So there are these nested subsets, and then you get down to an individual human experience of some sort of mediated technology, which has code and experiential design behind it, but also the operating system and the hardware itself. The technology is at the lowest level, but it feeds into all of the additional nested contexts: the culture, the laws, the economics, the design guidelines and ethical frameworks agreed to as a society, but also the specific experiential design of each program, the operating system it runs on, and the underlying technology. At each of these layers there are different interfaces. Thomas is at the lowest level, doing fundamental, foundational research, but that research is happening within the context of a company whose explicitly expressed goal is to use this sensor fusion approach for contextually aware AI: to be completely aware of your entire context, including what is happening inside your body. This is something they've already said they want to work on and are actively pursuing. So he's a research scientist who wants to create neuromotor input devices that bring us to a whole new scale of agency, but the real ethical and moral dilemmas arise when that happens within the economic context of this company.
So I would like to see much more policy detail, but also a real dig into these ethical and privacy frameworks. I think there need to be new philosophical foundations. That's part of what happened before with all these unintended consequences that weren't the immediate thing anybody was thinking about: all of a sudden we have problems that aggregated over a long period of time and accelerated into the issues we see today, like information warfare, disinformation, filter bubbles, and algorithmic bias and the injustices that come from it. All of these things happen at the largest cultural scale, and you don't really know about them until you have the network effects of massive numbers of people using a technology and the emergent behaviors that come out of that. But the thing they do have control over is their business models and what's happening within their own bubble. Generally, he's right that this is not just a Facebook problem; this is a Facebook and Earth problem, and the community needs to come together to start to close this pacing gap and to have more of these conversations. Hopefully it will go beyond just talking to the press, to engaging at a level of transparency and discussing some of these options: not only the conceptual frameworks for understanding the technology, but also the guardrails that need to be in place to protect users. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations in order to continue to bring you this coverage.
So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.