On March 18th, Facebook Reality Labs Research announced some of their research into next-generation neuromotor input devices for mixed reality applications. I paraphrased the most interesting insights from their press conference announcement, but I was still left with a lot of questions about the specific neuroscience principles underlying their ability to target individual motor neurons. I also had a lot of follow-up questions about the privacy implications of these technologies, and so thankfully I was able to follow up with Thomas Reardon, Director of Neuromotor Interfaces at Facebook Reality Labs and co-founder of CTRL-Labs, to get more context on the neuroscience foundations and privacy risks associated with these breakthrough “adaptive interfaces.”

Reardon described his journey into wearable computing, which started at Microsoft, where he created the Internet Explorer browser. He eventually went back to school to get his Ph.D. in neuroscience at Columbia University, and then joined with other neuroscience colleagues to found CTRL-Labs (be sure to check out my previous interview with CTRL-Labs on neural interfaces). Reardon and his team set out to overturn the dogma on motor unit recruitment, and they succeeded in detecting the action potentials of individual motor neurons through the combination of surface-level electromyography and machine learning. These wrist-based neural input devices are able to puppeteer virtual embodiments, and can even trigger action from the mere intention of movement rather than the movement itself. This breakthrough could decouple input fidelity from the constraints of the human body: the brain and motor neurons offer far more low-latency capacity and higher degrees of freedom, which may solve some of the most intractable bottlenecks for robust 3DUI input in virtual and augmented reality.
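To make the signal-processing concept a bit more concrete, here’s a minimal sketch of the kind of pipeline that single motor neuron detection implies: bandpass-filter a raw surface EMG channel into the typical EMG band, then flag threshold-crossing spikes. To be clear, this is my own toy illustration and not CTRL-Labs’ actual decoder, which relies on much more sophisticated machine learning and multi-electrode source separation to attribute spikes to individual motor units.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(emg, fs, lo=20.0, hi=450.0, order=4):
    """Bandpass-filter a raw surface EMG channel into the typical EMG band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def detect_spikes(emg, k=5.0):
    """Flag samples whose magnitude exceeds k robust standard deviations.

    A real decoder would go further and cluster spikes by waveform shape
    across many electrodes to assign them to individual motor units.
    """
    sigma = np.median(np.abs(emg)) / 0.6745  # robust noise estimate
    crossings = np.abs(emg) > k * sigma
    # Keep only rising edges so each spike is counted once.
    return np.flatnonzero(crossings & ~np.roll(crossings, 1))

# Toy usage: one second of synthetic noise at 2 kHz with injected spikes.
fs = 2000
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, fs)
signal[[400, 900, 1500]] += 20.0  # stand-ins for motor unit action potentials
print("detected spike indices:", detect_spikes(bandpass(signal, fs)))
```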

But with orders of magnitude more opportunities for agency comes a similar increase in the sensitivity of this data, since the network of these signals could be even more personally identifiable than DNA. There are also a lot of open questions about how the action potentials of these motor neurons, representing both the intentional and actual dimensions of movement, could be used within a sensor-fusion approach with other biometric information. Facebook Reality Labs Research has a poster at IEEE VR 2021 that extrapolates eye gaze information from head and hand pose data and contextual information about the surrounding virtual environment. So there’s already a lot of sensor fusion work happening towards Facebook’s goal of “contextually-aware AI,” which is not only going to be aware of the world around you, but also potentially and eventually what’s happening inside of your own body moment to moment.
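To illustrate why this kind of fusion is plausible, here’s a hedged sketch of the general idea behind that poster: if gaze statistically tracks head orientation and hand position, then even a simple regression can recover a usable gaze estimate from pose data alone. The features, the synthetic data, and the Ridge regressor here are all my own assumptions for illustration, not FRL’s actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical frames: head pose (yaw, pitch, roll) plus hand position
# (x, y, z); targets are gaze yaw and pitch.
rng = np.random.default_rng(1)
poses = rng.normal(size=(1000, 6))
# Assume gaze loosely follows head yaw/pitch plus where the hand is.
gaze = poses[:, :2] + 0.3 * poses[:, 3:5] + rng.normal(0, 0.1, (1000, 2))

model = Ridge(alpha=1.0).fit(poses[:800], gaze[:800])
print("held-out R^2:", model.score(poses[800:], gaze[800:]))
```

The privacy takeaway is that even this crude linear model recovers most of the gaze variance in the toy data; a model that also encodes the surrounding virtual environment, as the poster describes, would presumably do better still.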

Part of the reason why Facebook Reality Labs is making more public appearances talking about the ethics of virtual and augmented reality is that they want to get ahead of some of the ethics and policy implications of AR devices. Reardon was able to answer a lot of my questions around the identifiability of this neuromotor interface data, but it’s still an open scientific question exactly how that motor movement data could be combined with other information to extrapolate what Brittan Heller has called “biometric psychography,” a term that tries to name this new class of data.

Heller says, “Biometric psychography is a new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests. Immersive technology must capture this data to function, meaning that while biometric psychography may be relevant beyond immersive tech, it will become increasingly inescapable as immersive tech spreads. This is important because current thinking around biometrics is focused primarily on identity, but biometric psychography is the practice of using biometric data to instead identify a person’s interests.”

Heller goes on to evaluate the gaps in existing privacy law that don’t cover these emerging challenges of biometric psychography, which “most regulators and consumers incorrectly assume will be governed by existing law.” For a really comprehensive overview of the current state of U.S. privacy law, be sure to listen to my interview with Joe Jerome (or read through the annotated HTML or PDF transcript with citations). There are a lot of current debates about a pending U.S. Federal Privacy law, and I’d be really curious to hear Facebook’s current thinking on how the types of biometric and psychographic data from XR could shape the future of privacy law in the United States.

Another point that came up again and again is the context dependence of these issues around ethics and privacy. Lessig’s Pathetic Dot Theory tends to treat culture, laws, economics, and technological architecture/code as independent contexts, but I’m proposing more of a mereological structure of wholes and parts, where the cultural context drives the laws, the economy sits within the context of the laws, and the design frameworks, app code, operating systems, and technology hardware are nested within the hierarchy of the other contexts. Because these are nested wholes and parts, there are also feedback loops where technology platforms can shift the culture.

I’ve previously covered how Alfred North Whitehead’s Process Philosophy takes a paradigm-shifting process-relational approach to some of these issues, which I think provides a deeper contextual framing for them. Whitehead helped to popularize these types of mereological structures through his mathematics and philosophy work, and this type of fractal geometry has been a really useful conceptual frame for understanding the different levels of context and how they’re related to each other.

Context is a topic that comes up again and again when thinking through these ethical questions. Despite Facebook’s promotion of “contextually-aware AI,” most of their talk about context has been through the lens of your environmental context, though during their last press conference they said that the other people around you also help to shape your context. And it’s not just the people: the topic of conversation can also jump between different contexts. Up to this point Facebook has not elaborated on any foundational theoretical work for how they’re conceiving of context, contextually-aware AI, and the boundaries around it. One pointer I’d provide is Helen Nissenbaum’s Contextual Integrity approach to privacy, which tries to map out how information flows relate to different stakeholders in different contexts (e.g. how you’re okay with sharing intimate medical information with a doctor and financial information with a bank teller, but not vice versa).
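As a rough illustration of how Contextual Integrity can be operationalized (this encoding is my own toy formulation, not Nissenbaum’s formalism or anything Facebook has published), information-flow norms can be expressed as tuples of sender, recipient, information type, and context, with any flow that matches no norm flagged as a potential violation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowNorm:
    sender: str
    recipient: str
    info_type: str
    context: str

# Hypothetical norms: medical details may flow to a doctor in a clinical
# context, and financial details to a teller in a banking context.
NORMS = {
    FlowNorm("patient", "doctor", "medical", "clinic"),
    FlowNorm("customer", "teller", "financial", "bank"),
}

def flow_permitted(sender, recipient, info_type, context):
    """A flow preserves contextual integrity only if some norm matches it."""
    return FlowNorm(sender, recipient, info_type, context) in NORMS

print(flow_permitted("patient", "doctor", "medical", "clinic"))  # True
print(flow_permitted("patient", "teller", "medical", "bank"))    # False
```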

A lot of the deeper ethical questions around data from XR are elucidated when looking at them through the lens of context. Having access to hand motion data and the motor neuron data driving it may not, on its own, raise many privacy concerns. However, FRL Research is able to extrapolate gaze data when that hand pose is combined with head pose and information about the environment. So what is benign in isolation becomes a problem when it’s combined within the economic context of contextually-aware AI and the potential extension of Facebook’s surveillance-capitalism business model into spatial computing. How much of this XR data will be fused and synthesized towards the end goal of biometric psychography is another big question that could shape future discussions about XR policy.

It’s possible to see a future where these XR technologies are abused to lower our agency and, over the long run, weaken our bodily autonomy and privacy. What are the policy guardrails that need to be implemented to prevent this from happening? What would viable enforcement of those guidelines look like? Do we need a privacy institutional review board to provide oversight and independent auditing? What is Facebook’s perspective on a potential Federal Privacy law, and how could XR shape that discussion?

So overall, I’m optimistic about the amazing benefits of neuromotor input devices like the one Facebook Reality Labs is working on as a research project, and how it has the potential to completely revolutionize 3DUI and exponentially increase the degrees of freedom for expressing our agency in user interfaces and virtual worlds. Yet I still have outstanding concerns, since there’s a larger conversation that needs to happen with policymakers and the larger public, and Facebook needs to be more proactive in doing the conceptual and theoretical work of weighing the tradeoffs of this technology. There are always benefits and risks with any new technology, and we currently don’t have robust conceptual or ethical frameworks for navigating the complexity of some of these tradeoffs.

This public conversation is just starting to get under way, and I’m glad to have had the opportunity to explore some of the lower-level neuroscience mechanics of neuromotor interfaces and some of their associated privacy risks. But I’m also left feeling like some of the more challenging ethical and privacy discussions are going to happen at a higher level, within the business and economic context of how all of this biometric XR data will end up being captured, analyzed, and stored over time. At the end of the day, how these data are used and for what purpose is beyond the control of foundational researchers like Reardon, as these types of business decisions are made further up the food chain. Reardon expressed his personal preference that these data not be mined and recorded, and so at the research level there’s a lot of work underway to see whether they can do real-time processing on edge compute devices instead. But again, Facebook has not committed to any specific business model, and they’re currently leaving everything on the table in terms of what data are recorded and how they choose to use them. If a use isn’t already covered in their current privacy policy, it would just be a matter of updating the policy to declare it.
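To make that edge-processing alternative concrete, here’s a hedged sketch of the architecture Reardon’s comment points toward: decode the raw neuromotor signal on-device and let only discrete, low-dimensional intent events leave the wrist. Every name and threshold here is hypothetical; Facebook has not described any such API.

```python
from typing import Iterator

def decode_on_device(raw_emg_frames: Iterator[list[float]]) -> Iterator[str]:
    """Hypothetical on-device decoder: raw EMG never leaves this function.

    Only discrete intent events (e.g. "pinch") are yielded, so nothing
    upstream can mine or store the underlying neuromotor signal.
    """
    for frame in raw_emg_frames:
        energy = sum(x * x for x in frame) / len(frame)
        if energy > 1.0:  # toy threshold standing in for a real model
            yield "pinch"
        # The raw frame is dropped here rather than transmitted or stored.

# Toy usage: two quiet frames and one active frame.
frames = [[0.1, 0.2], [0.0, 0.1], [1.5, 2.0]]
for event in decode_on_device(iter(frames)):
    print("send upstream:", event)  # only the event string crosses the wire
```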

Facebook does not have a great history of living up to their privacy promises, and they still need to embody a lot more action over time before I or the rest of the XR community can have more trust and faith in the alignment between what they’re saying and how they’re embodying those principles in action. The good news is that I’m seeing a lot more embodied action in both their public statements and my own interactions with them, both in this interview with Reardon and in my interview with privacy policy manager Nathan White back in October 2020. Is that enough for me yet? No. There’s still a long way to go, such as seeing details on any of their policy ideas or a better accountability process that provides some checks and balances over time. This XR tech represents some amazing potential and some terrifying risks, and the broader XR community has a responsibility to brainstorm what some of the policy guardrails might look like in order to nudge us down the more protopian path.

Update 4/1/2021: Here’s some more info on the Facebook Reality Labs symposium on Ethics of Noninvasive Neural Interfaces in collaboration with Columbia University’s NeuroRights Initiative.

Also here’s a human rights proposal for Neuro-Rights:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality
