#991: Critiquing Facebook’s Responsible Innovation Principles & Project Aria through the lens of Anthropology & Tech Ethics

Facebook’s Project Aria announcement in September at Facebook Connect raised a number of ethical questions among anthropologists and tech ethicists. Journalist Laurence Dodds described it on Twitter by saying, “Facebook will send ‘hundreds’ of employees out into public spaces recording everything they see in order to research privacy risks of AR glasses.” During the Facebook Connect keynote, Head of Facebook Reality Labs Andrew Bosworth described Project Aria as a prototype research device worn by Facebook employees and contractors that would be “recording audio, video, eye tracking, and location data” as part of “egocentric data capture.” In the Project Aria launch video, Director of Research Science at Facebook Reality Labs Research Richard Newcombe said that “starting in September, a few hundred Facebook workers will be wearing Aria on campus and in public spaces to help us collect data to uncover the underlying technical and ethical questions, and start to look at answers to those.”

The idea of Facebook workers wearing always-on AR devices recording egocentric video and audio data streams across private and public spaces in order to research the ethical and privacy implications raised a lot of red flags from social science researchers. Anthropologist Dr. Sally A. Applin wrote a Twitter thread explaining “Why this is very, very bad.” And tech ethicist Dr. Catherine Flick said, “And yet Facebook has a head of responsible innovation. Who is featured in an independent publication about responsible tech talking about ethics at Facebook. Just mindboggling. Does this guy actually know anything about ethics or social impact of tech? Or is it just lip service?” The two researchers connected via Twitter and agreed to collaborate on a paper over the course of six months, and the result is a 15,000-word peer-reviewed paper titled “Facebook’s Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons,” which was published in the latest issue of the Journal of Responsible Technology.

Applin & Flick deconstruct the ethics of Project Aria based upon Facebook’s own four Responsible Innovation Principles, which were announced by Boz in the same Facebook Connect keynote after the Project Aria launch video. Those principles are #1) Don’t surprise people. #2) Provide controls that matter. #3) Consider everyone. And #4) Put people first. In their paper, Applin & Flick conclude that:

Facebook’s Project Aria has incomplete and conflicting Principles of Responsible Innovation. It violates its own principles of Responsible Innovation, and uses these to “ethics wash” what appears to be a technological and social colonization of the Commons. Facebook enables itself to avoid responsibility and accountability for the hard questions about its practices, including its approach to informed consent. Additionally, Facebook’s Responsible Innovation Principles are written from a technocentric perspective, which precludes Facebook from cessation of the project should ethical issues arise. We argue that the ethical issues that have already arisen should be basis enough to stop development—even for “research”. Therefore, we conclude that the Facebook Responsible Innovation Principles are irresponsible and as such, insufficient to enable the development of Project Aria as an ethical technology.

I reached out to Applin & Flick to invite them onto the Voices of VR podcast to give a bit more context on their analysis through their anthropological and technology ethics lenses. Sally Applin is an anthropologist studying the cultural adoption of emerging technologies through the lens of anthropology and her multi-dimensional social communications theory called PolySocial Reality. She’s a Research Fellow at the HRAF Advanced Research Centres (EU) and the Centre for Social Anthropology and Computing (CSAC) in Canterbury, and a Research Associate at the Human Relations Area Files (HRAF) at Yale University. Catherine Flick is a Reader (equivalent to associate professor) at the Centre for Computing and Social Responsibility at De Montfort University in the United Kingdom.

We deconstruct Facebook’s Responsible Innovation Principles in the context of technology ethics and other responsible innovation best practices, and we critically analyze how quickly those principles break down even when applied to the Project Aria research project itself. Facebook has been invoking its responsible innovation principles whenever ethical questions come up, but as we discuss in this podcast, these principles are not clear, coherent, or robust enough to provide useful insight into some of the most basic aspects of bystander privacy and consent for augmented reality. Applin & Flick offer a much more comprehensive breakdown in their paper at https://doi.org/10.1016/j.jrt.2021.100010, and this conversation should serve as an overview and primer for how to critically evaluate Facebook’s responsible innovation principles.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality