I participated in a Facebook press event on Tuesday, March 16th that featured some of Facebook’s Human-Computer Interaction research on AR Neural Inputs, Haptics, Contextually-Aware AI, & Intelligent Clicks. It was an on-the-record event for print quotes; however, I was not given permission to use any direct audio quotes, so I paraphrase, summarize, and analyze the announcements through the lens of XR technology, ethics, and privacy.

I’m generally a big fan of these types of neural inputs because, as CTRL-labs neuroscientist Dan Wetmore told me in 2019, these EMG sensors are able to target individual motor neurons that can be used to control virtual embodiment. They even showed videos of people training themselves to control individual motor neurons without visibly moving anything in their bodies. There are a lot of really exciting neural input and haptic innovations on the horizon that will lay the foundation for a pretty significant human-computer interaction paradigm shift from 2D to 3D.
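To make that concrete, here’s a minimal sketch of how a wrist-based EMG “click” might be detected: rectify and smooth one channel of the raw signal into an envelope, then treat a threshold crossing as an intentional motor-unit activation. This is purely illustrative; the channel layout, filtering, and thresholds are my own assumptions, not Facebook’s or CTRL-labs’ actual pipeline.

```python
import numpy as np

def emg_envelope(samples: np.ndarray, window: int = 50) -> np.ndarray:
    """Rectify one raw EMG channel and smooth it with a moving average.

    `samples` is a 1-D array of readings from a single wristband channel;
    a real pipeline would band-pass filter before rectifying.
    """
    rectified = np.abs(samples - samples.mean())  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_click(envelope: np.ndarray, threshold: float = 0.5) -> bool:
    """Treat a threshold crossing on the smoothed envelope as an
    intentional motor-unit activation, i.e. a 'click', even if the
    finger never visibly moves."""
    return bool((envelope > threshold).any())

# Illustrative usage with synthetic data standing in for sensor input.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 1000)
signal[400:420] += 2.0  # a brief burst of motor-neuron activity
print(detect_click(emg_envelope(signal)))  # True
```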

The biggest thing that gives me pause is that these neural inputs are currently being paired with Facebook’s vision of “contextually-aware AI,” which is presumably an always-on AI assistant that is constantly capturing & modeling your current context. This is so their “Intelligent Click” process can extrapolate your intentions through inference, aiming to give you the right interface, within the right context, at the right time.
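Facebook didn’t detail how Intelligent Click works internally, but the publicly described flow (model the current context, infer a ranked set of likely intentions, then act only on a single low-friction confirmation) might be sketched like this. Everything below, the Context fields, the hand-written scoring rules, the 0.5 cutoff, is a hypothetical illustration of the inference step, not their actual system:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A toy stand-in for the always-on assistant's world model."""
    location: str
    time_of_day: str
    nearby_devices: list[str]

def rank_intentions(ctx: Context) -> list[tuple[str, float]]:
    """Score candidate actions against the modeled context.
    A real system would use learned models; these hand-written
    rules just illustrate the inference step."""
    scores = {
        "start_morning_playlist": 0.0,
        "show_recipe": 0.0,
    }
    if ctx.time_of_day == "morning" and "speaker" in ctx.nearby_devices:
        scores["start_morning_playlist"] += 0.8
    if ctx.location == "kitchen":
        scores["show_recipe"] += 0.6
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def intelligent_click(ctx: Context, confirmed: bool) -> str | None:
    """Surface the top inferred intention and act only on a single
    low-friction confirmation (e.g., the EMG 'click' above)."""
    intent, score = rank_intentions(ctx)[0]
    return intent if confirmed and score > 0.5 else None

ctx = Context("kitchen", "morning", ["speaker"])
print(intelligent_click(ctx, confirmed=True))  # start_morning_playlist
```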

I don’t think Facebook has really thought through how users would opt in or out of specific contexts, how third-party bystanders could revoke their consent and opt out, or whether there’s even any opt-in process. When I asked how Facebook plans to handle consent for bystanders to either opt in or opt out, they pointed me to an external RFP soliciting feedback from the outside community on how to handle this. I hear a lot of rhetoric from Facebook about how being in charge of the platform allows them to “bake in privacy, security, and safety” from the beginning, which implies that they’d be taking a privacy-first architectural approach. Yet at the same time, when asked how they plan on handling bystander consent or an opt-out option for their always-on & omnipresent contextually-aware AI assistant, they’re outsourcing these privacy architectures to third parties via their RFP process, which already closed for submissions in October 2020.

They have also been mentioning their four Responsible Innovation principles announced at Facebook Connect 2020: #1 Never surprise people, #2 Provide controls that matter, #3 Consider everyone, and #4 Put people first. My interpretation is that these are stack ranked, because there’s language elsewhere indicating that “#3 Consider everyone” specifically refers to non-users and bystanders of their technology (as well as underrepresented minorities). Part of why I say this is that other passages seem to indicate that the people in “#4 Put people first” are actually Facebook’s community of hardware and software users: “#4 Put people first: We strive to do what’s right for our community, individuals, and our business. When faced with tradeoffs, we prioritize what’s best for our community.”

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here are some research prototype videos that Facebook has released:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

