Facebook’s Project Aria announcement at Facebook Connect in September raised a number of ethical questions among anthropologists and technology ethicists. Journalist Lawrence Dodds described it on Twitter by saying, “Facebook will send ‘hundreds’ of employees out into public spaces recording everything they see in order to research privacy risks of AR glasses.” During the Facebook Connect keynote, Head of Facebook Reality Labs Andrew Bosworth described Project Aria as a research device worn by Facebook employees and contractors that would be “recording audio, video, eye tracking, and location data” as a form of “egocentric data capture.” In the Project Aria launch video, Director of Research Science at Facebook Reality Labs Research Richard Newcombe said that “starting in September, a few hundred Facebook workers will be wearing Aria on campus and in public spaces to help us collect data to uncover the underlying technical and ethical questions, and start to look at answers to those.”
The idea of Facebook workers wearing always-on AR devices recording egocentric video and audio data streams across private and public spaces in order to research the ethical and privacy implications raised a lot of red flags from social science researchers. Anthropologist Dr. Sally A. Applin wrote a Twitter thread explaining “Why this is very, very bad.” And tech ethicist Dr. Catherine Flick said, “And yet Facebook has a head of responsible innovation. Who is featured in an independent publication about responsible tech talking about ethics at Facebook. Just mindboggling. Does this guy actually know anything about ethics or social impact of tech? Or is it just lip service?” The two researchers connected via Twitter and agreed to collaborate on a paper over the course of six months, and the result is a 15,000-word, peer-reviewed paper titled “Facebook’s Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons” that was published in the latest issue of the Journal of Responsible Technology.
Applin & Flick deconstruct the ethics of Project Aria based upon Facebook’s own four Responsible Innovation Principles, which were announced by Boz in the same Facebook Connect keynote after the Project Aria launch video. Those principles are: #1) Don’t surprise people. #2) Provide controls that matter. #3) Consider everyone. And #4) Put People First. In their paper, Applin & Flick conclude that
Facebook’s Project Aria has incomplete and conflicting Principles of Responsible Innovation. It violates its own principles of Responsible Innovation, and uses these to “ethics wash” what appears to be a technological and social colonization of the Commons. Facebook enables itself to avoid responsibility and accountability for the hard questions about its practices, including its approach to informed consent. Additionally, Facebook’s Responsible Innovation Principles are written from a technocentric perspective, which precludes Facebook from cessation of the project should ethical issues arise. We argue that the ethical issues that have already arisen should be basis enough to stop development—even for “research”. Therefore, we conclude that the Facebook Responsible Innovation Principles are irresponsible and as such, insufficient to enable the development of Project Aria as an ethical technology.
I reached out to Applin & Flick to invite them onto the Voices of VR podcast to give a bit more context on their analysis through their anthropological & technology ethics lenses. Sally Applin is an anthropologist looking at the cultural adoption of emerging technologies through the lenses of anthropology and her multi-dimensional social communications theory called PolySocial Reality. She’s a Research Fellow at the HRAF Advanced Research Centres (EU) and the Centre for Social Anthropology and Computing (CSAC) in Canterbury, and a Research Associate at the Human Relations Area Files (HRAF) at Yale University. Catherine Flick is a Reader (roughly equivalent to an associate professor) at the Centre for Computing and Social Responsibility at De Montfort University in the United Kingdom.
We deconstruct Facebook’s Responsible Innovation Principles in the context of technology ethics and other responsible innovation best practices, and critically analyze how quickly these principles break down even when looking at the Project Aria research project alone. Facebook has been talking about their responsible innovation principles whenever ethical questions come up, but as we discuss in this podcast, these principles are not really clear, coherent, or robust enough to provide useful insight into some of the most basic aspects of bystander privacy and consent for augmented reality. Applin & Flick have a much more comprehensive breakdown in their paper at https://doi.org/10.1016/j.jrt.2021.100010, and this conversation should serve as an overview and primer for how to critically evaluate Facebook’s responsible innovation principles.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So on today's episode, we're going to be unpacking Facebook's responsible innovation principles through the lens of their Project ARIA, which was announced September 16th, 2020 at Facebook Connect, where they're saying, hey, we have these glasses, they're just a research study, but we need to kind of figure out the ethical implications. And so what we're going to do is put these AR glasses onto our employees and send them out into public to walk around. And we'll figure out some of the ethical implications by iterating. I'm actually going to play a segment of how they announced this research using their employees to figure out the technological and ethical implications by sending them out into public and seeing how they react by having these devices on and recording all this egocentric data capture.
[00:00:56.400] Facebook Employee: Now these glasses are a precursor to working AR. It doesn't display the information inside the lens, it's not for sale, and it's not a prototype. It's a research device that will help us understand how to build the software and hardware necessary for real working AR glasses. Starting in September, some specially trained Facebook employees and contractors will be wearing the glasses in real world conditions, indoors and outdoors. Their sensor platform will capture video and audio, eye tracking and location data, all to help us answer some of the questions that we need to ask before we release AR glasses to the general public.
[00:01:33.838] Facebook Employee: Starting in September, a few hundred Facebook workers will be wearing ARIA on campuses and in public spaces to help us collect data to uncover the underlying technical and ethical questions and start to look at answers to those.
[00:01:51.011] Kent Bye: So some anthropologists and tech ethicists saw this and said, hey, we can already see there's going to be some issues here and some problems. And at the same time, as Facebook was announcing this Project ARIA, they were also announcing a set of their responsible innovation principles, which essentially are don't surprise people, provide controls that matter, consider everyone, and put people first. I'm going to play this segment where Boz talks about their responsible innovation principles.
[00:02:16.863] Facebook Employee: More generally though, all of this is about operationalizing a set of principles for responsible innovation that guide our work in the lab. At Facebook, principles are not just a list of nice things. They describe trade-offs, things that we do even when the opposite might benefit us somehow. So, for example, when we're building, we should be transparent about how and when data is collected and used over time so that people are not surprised. We will build simple controls that are easy to understand and clear about the implications of a choice. And we build for all people of all backgrounds, including people who aren't using our products at all, but may be affected by them. We think about this a lot in the context of Project ARIA. And we strive to do what's right for our community, individuals, and our business. But when faced with these trade-offs, we prioritize our community.
[00:03:07.643] Kent Bye: So this is at least their framework that they're using. Whenever any sort of ethical questions come up, they refer to this. And there hasn't been a lot of really critical analysis, really deconstructing if this is a good framework, if it's robust enough, if there's conflicts, or how they actually navigate some of these different things, and what kind of transparency or oversight do they have in any of this. And so there's two people, Sally Applin and Catherine Flick, who saw the initial announcement on Twitter and then they kind of dug into it and they actually wrote a whole paper called Facebook's Project ARIA indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the commons. So just this past week on Tuesday, there was a data dialogue on responsible innovation of immersive technologies that XRSI's Kavya Pearlman was at and was tweeting about it. And I was wondering if these two researchers, Applin and Flick, were involved in that discussion, and they were not invited and they weren't there. And so I decided to reach out to them to be able to unpack their article that they wrote and to not only talk about Project ARIA and some of the concerns around bystanders and the commons and what happens when you have technology companies that are potentially seizing certain aspects of the commons and what does that mean, but also to look at these responsible innovation principles through the lens of how does this fit into the context of other tech ethics and other responsible innovation principles and deconstructing each of these different principles point by point. And so that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Sally Applin and Catherine Flick happened on Wednesday, April 28th, 2021. So with that, let's go ahead and dive right in.
[00:04:39.997] Sally Applin: I'm Sally Applin. I am an anthropologist. I came to anthropology through user experience design. My background is varied: I have a conceptual art degree. I went to NYU's ITP program in terms of new technology and society as that was kind of getting started as the web was growing. I've worked at big companies like Apple and lots of little startups. And my doctoral work was really studying makers of new technology and how that technology expands and is adopted by bigger companies and by the public or not.
[00:05:11.805] Catherine Flick: I'm Catherine Flick. I'm a Reader in Computing and Social Responsibility, which is kind of like an associate professor, at De Montfort University in the UK. I've come through to technology ethics by a very roundabout route, doing a Bachelor of Science that majored in computer science and history and philosophy of science as my undergraduate. Then I did a PhD specifically on informed consent in ICT, back when I studied end user license agreements and how they definitely are not informed consent. And then I went on to do a bunch of postdocs, mostly on ethical governance of technologies. I did one on online child protection, looking at the ethics of natural language processing for identification of adults pretending to be children in children's chat channels. I've also done some projects on what's called responsible research and innovation, where we're looking at, from a European perspective, integrating ethics into the innovation cycle. We've worked with small companies, with big companies. One of my current projects is working with very large companies like Atos and Telefonica, Siemens, Ericsson, Infineon on integrating ethics and responsible innovation in large companies. I also have a side interest in video games specifically, and I look at a lot of ethics and responsible innovation in video games. I'm currently doing some work with colleagues at the University of York on monetization, ethical monetization of video games. That's where I'm at and what I'm up to.
[00:06:46.975] Kent Bye: Yeah, so Sally, you've been involved in this virtual augmented reality, anthropology, ethics space for a while. Maybe you could give a little bit more context as to your entry point into this wider virtual and augmented reality space.
[00:06:58.432] Sally Applin: Oh, sure. So part of it had to do with some of the work I was doing with Dr. Michael Fisher at Kent, who was my advisor. My research started actually in steampunk. When I was looking at my dissertation, I started to look at steampunks. And steampunks were interested in lots of little fringe new technologies. And that kind of led me down a path through understanding where these groups met and what they were doing. And at the same time, we were working on a theory of communication, which shows up in the paper, polysocial reality, we were looking at an umbrella of all kinds of communications messaging, and whether it's synchronous or asynchronous, and how messages end up in cooperation or not, depending on how they're received. And looking at communication and looking at the new technology in terms of mixed reality, it was sort of easy to foresee a lot of clogging of messages, a lot of mixed messages, a lot of trouble with how we were going to process all of that. And that became how I was contributing to the augmented reality community in terms of giving some talks at AWE on how to think about designing for multiple messages and dipping in and out of various realities. Simultaneously, I kind of followed the research and worked with people trying to understand, we were all kind of like 2010, 2011, we're all sort of trying to understand, well, what is reality really? Are these, is this alternate AR, is it an alternate reality? Or is it just part of our one world, which is how we see it and which is in the paper. And in doing so, trying to understand within that how we spend our time, how we focus our attention and how we kind of adapt and cope. But mostly what my focus is and how I really spend my thrust in terms of the consulting I do to industry and various companies is to pay attention to what is around the technology that's going to be released into the world, because I don't think companies are really thinking so much ahead about the impact or how they're going to be put into the world and how people are going to have to kind of adjust to that. And that's something that I've spent time writing about for Fast Company, for Motherboard, and academic publications, trying to get a communication going with the builders of technology so that they'll be more thoughtful about how to bring in people like me to help them understand the world. And so that I'm not against AR, I just want it to be useful in a way that won't harm us and will actually help us. And there's ways to do that. It's just, they have to think about it. And those cycles aren't in industry yet, which is why these ethics principles of Facebook are so disappointing.
[00:09:35.205] Kent Bye: Sally, you mentioned in this paper this concept of polysocial reality. I know there's the mixed reality spectrum from Milgram and lots of different XR extended reality, mixed reality, virtual augmented reality, as well as the bifurcation of the virtual and the physical worlds. There was a contrast between the hybrid spaces or the digital dualities versus the polysocial spaces, which is trying to see this as a singular world, not as an artificial bifurcation of the virtual and the real. So I'm wondering if you could describe what polysocial reality is?
[00:10:05.943] Sally Applin: Sure. We shortened it to Poser because it's a little easier to write all the time. But polysocial reality is a conceptual model that explains the umbrella of human communication. So it's human-human communication, human-machine communication, machine-machine communication, machine-animal communication, animal-animal communication. It doesn't matter. It's just the umbrella of all communication. So if you visualize a dimensional space and within it, you can see all the connections of all the messages that we're sending. Like you and I are talking in synchronous time, so mostly synchronous time. I mean, it's modulated through this apparatus in the internet, but mostly we see each other and we respond as close to synchronous time as we can. And then you take other messages where somebody sends you all the emails that have been piling up since we've been talking in your inbox, or all the tweets you haven't responded to, or all your texts that are going. Those are all asynchronous messages. We have synchronous messages and we have asynchronous messages. Within those, there's sometimes missed delivery. There's missed intention in synchronous communication. Even when we're looking at each other and talking, we may miss meaning. But we also really miss meaning if we miss messages. And with the automated apparatus that manages a lot of our messages, we can miss messages, they end up in spam, we misaddress them, and so forth. Ultimately, our goal is human cooperation and collaboration, because without that, we'll die. We rely on each other for cooperation to be able to do things. It's how we do things in the world. And to do that, we have to communicate. So if we're not getting messages effectively, or we're missing messages, or messages get dropped due to both automation errors or human errors in terms of missing things or being distracted, we don't actually create that cooperation that we want. So what polysocial reality is, is it's just the umbrella of all the messaging. It's like bearing in mind that messages are synchronous, they're asynchronous, they're between machines, they're between people. There's just constantly this environment of messages kind of percolating and populating our world. And we use that as a reference in the paper to talk about things like ARIA. When you have ARIA, and it's taking video messages, and it's taking audio messages, and while someone's using it, there's messages coming by, and then that means they can't reply to other messages. And you take the aggregate of all of those, you can start to see that there may be issues with cooperation and collaboration as people miss communications or synchronize in ways that are unexpected. So ultimately what it does, it provides a theoretical construct to understand this kind of super messaging world we've built. And there's always been this kind of messaging. It's just that it was slower. It used to be letters or telegrams, or there's always been different forms of message communication. It's just that it's happening so quickly now, and so much of it is moderated by machines, that there's a propensity for missing things. And when you miss communication or you handle things too asynchronously, so if the message structure becomes too asynchronous or too rigid in how it's divided, meaning can change. And when meaning changes, cooperation changes or goes away. And that's also a problem because we need to cooperate.
[00:13:27.318] Kent Bye: Yeah. So at Facebook Connect, which would have been the Oculus Connect 7, although they kind of rebranded it to Facebook Connect 1, and that happened in September 2020, Facebook rolled out their responsible innovation principles along with Project ARIA, which I think catalyzed the paper that you wrote that was really deconstructing not only the responsible innovation principles, but also specifically the Project ARIA that they were proposing, in terms of just as a praxis of what is the most ethical way to research these different things. And so maybe it's worth going back to this 2014 emotional contagion study. Catherine, I know that you've talked about this in terms of, you know, this is the famous study that Facebook did with Princeton University involving like 700,000 people without real informed consent, kind of doing social experimentation on them. And in this paper, you talk about how it catalyzed an entire journal responding to the ethical problems that were happening within this type of social science research using these social networks. So maybe you could give a little bit more context as to your entry point into this larger discussion around consent and some of the social science research.
[00:14:33.080] Catherine Flick: Yeah. So, I mean, I did my PhD on informed consent and end-user license agreements. It was back in around 2005, 2006-ish, but it was finished around 2009, which was when I finally graduated. But even then I could already see that the thing was shifting away from end-user license agreements to things like privacy policies, terms of service. We were seeing those pop up more and more on websites. Of course, then there was a bunch of legislation that came out and we ended up with things like all the cookie policies, which you're constantly having to say yes or no to. Really, these ideas of consent are just so poorly done and so poorly implemented. The idea of consent in this space comes very much from a medical consent background. In medical consent, you are generally sitting down with somebody who is actually explaining to you that they're going to cut open your body and they're going to pull this bit out or stitch this bit back together or whatever it is that they're going to do to you. You're able to sit there and ask questions about what exactly are they going to do, how are they going to do it, etc. So you can have any concerns mitigated at that point. Then you can decide whether you want to go ahead with the medication or the surgery or whatever it is that you're going to be consenting to. The problem, of course, with technology is that we don't have those one-to-one conversations with the websites that are taking our data. We can't sit down and ask them, well, what exactly are you going to be doing with this in a way that's easy to understand and then integrate into our decision whether or not to continue with that website. Of course, technology has a whole lot of other pressures as well. You have obviously things like Facebook, for example, have a lot of peer pressure where if all your friends are on Facebook, you probably want to be on Facebook too. Therefore, there's a much higher likelihood that you're going to say yes to their privacy policy or their terms of service, even if you don't necessarily agree with all of the points, or you don't particularly like what they're doing with your data. Also, the other issue that crops up a lot is that every website is doing this. I mean, at the moment, the big thing, certainly in Europe, I don't know if you actually have these pop-ups in America, it might just be in Europe, but we get these massive pop-ups now that basically say, here are all the things we're going to do with your data. By default, they're all checked kind of yes, and then you have to go down into the details and uncheck them all if you don't want that. The thing is, all of that stuff is there for you to make a decision and quite an informed decision. But if you're doing that day after day after day after day, there's this idea of this kind of numbness that comes into your decision making and you just kind of tune out any of the particular differences between the different websites, and you're just like, oh, well, they're all just as bad as each other, or I kind of trust Microsoft to be okay, or maybe I trust Facebook, which I certainly don't, but maybe there are people out there who think, oh, yeah, well, they're a big enough company, surely if they're doing something bad, they'll get pulled up for it, or they'll get in trouble, or, you know, so many other people wouldn't have said yes, if there was some problem here, right? So there's a whole load of different pressures.
And this is where I think, you know, it crosses over. There's a lot of hubris with which Facebook conducts its business because it takes advantage of these naive consent decisions from users and draws a lot more out of those than it really ought to. It'll assume that users actually do know what Facebook is doing with their data, when they don't. And so they bury things like the use of research, for example, deep within their privacy policies. Or in this case, I think it was even added later. It wasn't even added at the beginning. So there's a lot of arrogance with which Facebook conducts their business because they think that they can kind of move fast and break things. Then they can get away with just mopping it up at the end when they finally get pulled up for it. We've seen that again and again and again with Facebook. In the Facebook study, the emotional manipulation study, they were essentially manipulating people's timelines in order to show them happy posts or sad posts and then see what types of posts people posted. The people who saw sad posts tended to post more sad posts afterwards. Essentially, there was quite a lot of potential that they would be causing people to become more depressed or to trigger certain depressive moments. It could have had quite significant medical outcomes, and that's a real problem. But they just kind of did it, and then they kind of went, oh, well, it was all a study, and then they tried to brush it under the carpet, and they tried to blame the university, and all this, that, and the other. And who was actually responsible? Oh, well, it can't have been us. This is just how we do things, sort of thing. And so that kind of set the scene for pretty much everything else that they've done since. And actually the biggest uptick I saw in people reading my paper about informed consent and Facebook, which obviously there was no informed consent from these people, was after the Cambridge Analytica scandal, where basically now it seems to be this is the cornerstone of Facebook's unethical practice in terms of consent and data usage or data manipulation within Facebook. So it doesn't actually surprise me at all that they're continuing to do this, despite the fact that they've been pulled up multiple times and that they've been criticized for lack of proper procedure. And I mean, we'll get onto what they actually put in place, but I don't think that's sufficient either. Let's just say.
[00:20:02.233] Sally Applin: I have something I wanted to add, too, that in needing to click boxes and approve things and go through the motions which seem like they're granting permissions or not, there's really not a lot of power for the individuals doing this for any website. So what happens is that they're agreeing or setting some sort of preference, but then on the back end, it's really, really hard to figure out how to undo that or change that. These users were sort of asked to administer in some ways, but were not given any power to administer. So we're kind of quasi-admins without any actual real way to manage in terms of control.
[00:20:41.886] Catherine Flick: I mean, theoretically, this is what the European GDPR was supposed to solve. So I can technically as a European, well, not a European resident anymore because of Brexit, but actually technically we're still using that law. So I can still technically write to any company and ask them to withdraw any data of mine and produce any copies of data that they have. And in fact, there's a fellow, Michael Veale, on Twitter. He's a UK academic and he has quite successfully done freedom of information type requests under the GDPR for the data that he owns within other companies. But places like Facebook are particularly poor at actually returning those successfully to him because they say there's operational, you know, it's too difficult for them to do so or whatever, which is like there's some thresholds and that. But yeah, it really cocked up Facebook's back end a bit, I think, when the EU really started to enforce that. And it's caused them quite a bit of trouble, I think. But it's not the silver bullet either, because there's still a lot of problems with that, as you say.
[00:21:41.163] Sally Applin: And also, if you think about how many websites you go to in a day, or a week or whatever, there's not enough kind of human hours to do that. And that's where Poser kicks in in terms of, theoretically, there's so many messages and only so much time. So you're getting some cleanup in some pieces of your overall umbrella of communication structure, but other pieces just remain vulnerable. There's no way to really seal privacy or consent really in that way, because it's just information overload too much.
[00:22:10.848] Kent Bye: Yeah, well, that's probably a good segue to talking about some of the responsible innovation principles that Facebook has put forth. It's probably worth just mentioning what they put out back in September of 2020. Their four principles are number one, never surprise people. Number two, provide controls that matter. Number three is consider everyone. And then number four is put people first, which is contradictory in different ways. So we can dig into each of these, but maybe we should set a larger context. At Facebook Connect, they announced this Project ARIA, where they said they're going to be putting these augmented reality glasses onto employees. And they're going to be doing egocentric data capture. So walking around, presumably recording everything around them, and then going out in public and doing the social science experiment with not only the employees, but also the public at large. I'm curious how you each heard about this idea and this project, and then what catalyzed you to deconstruct why this might be problematic, breaking down each of these different responsible innovation principles that Facebook put out.
[00:23:08.880] Sally Applin: So the Twitter feed was where I got the news of this Facebook Project ARIA coming out. And I just saw so many red flags in it, and I was very concerned about what was going to happen with it. And I wrote a pretty long thread about, OK, here's what Facebook announced. Here's what I know about this particular type of technology in the commons. And these are the things I think are going to happen. And this is why it's a terrible idea. And prominent people from Facebook Reality Labs started commenting on my posts. And they were commenting in ways saying, oh, you're making assumptions and it's not going to do that. And I guess I kind of know better from my research. And so it sufficiently inspired me like, well, if you're this way with my tweets, I need to write a paper about this because I think that there's probably a lot more to unpack and really discuss in length about why I feel this way about these things and to explain it very, very carefully. So I got fired up enough to want to write a paper about it. And simultaneously, Catherine, I'm not quite sure how we connected, but we both were like, Do you want to write something? Yeah, let's write something. And we realized that we had just great compatible skills to address this particular project announcement that just meshed really well. It was a very delightful paper to write. It was really fun and I'm really proud of it. Catherine, how about you? How did you
[00:24:34.933] Catherine Flick: I think it came by on Twitter for me too, and I thought, oh God, another Facebook thing that I'm going to get mad about. Since I wrote that paper, the original consent paper, I've had a hashtag, why you should delete Facebook or something like that. I've just been totting up the reasons. Every time something new comes out that makes me very pleased that I have deleted my Facebook, I tend to write something snide on Twitter about it. I mean, it's never really had any traction, I think, but it makes me feel a bit better. I think this is really one of them where I saw Sally getting mad about it at the same time I got mad about it. To be honest with you, my best papers are things that I get mad about. I think there's a lot of drive when you're really angry about something and you want to explain why you're really angry about it, that it doesn't involve you just kind of shouting into the void. I'm a big fan of David and Goliath type battles, where you're one or two little academics throwing stones at a giant company. I think that there's a lot to be said for just the lack of critical reflection of these companies, or if it does exist, the fact that the companies themselves just squash it massively. I think there's a lot to be said for being outside looking in. I like the position that I have, especially as an academic. I've got a lot of freedom to write about this sort of stuff and really like to use this position to be able to fight back in some ways that I can to try to hold these companies to account, especially from an ethical perspective. That was my drive really in this. I mean, I knew Sally vaguely through the ACM because we're both on the Committee on Professional Ethics, but I only really knew her as a name and a vague occasional email kind of thing. So I thought, well, you know, I liked what she'd been writing about on Twitter and I thought she might be a really good person just to help focus the anger in a particular direction. And yeah, I mean, obviously we didn't remain angry. We just got very academic about it in true academic style. And yeah, that's where this paper kind of came from.
[00:26:38.255] Kent Bye: I wanted to maybe start, before we dive into each of the different principles, start with this concept of the commons. You had mentioned the code of ethics of the ACM, which talks about not seizing different aspects of the commons. And because of augmented reality technologies, we're going to potentially have people wearing these out in public. And when you go out in public, you can take a photo of somebody and if it's in the public space, you can have access to it. But if you're a private company, do you still have the right to be able to start to just go out and do mass data collection and surveillance? And this is essentially what we're going to have is these devices that are going to be able to potentially become these surveillance machines as we walk out in these public spaces. And so it is potentially changing our relationship to the public commons and these larger sociological dynamics around there. So maybe you could start with how you conceive of the commons from that anthropological and ethical perspective.
[00:27:26.285] Sally Applin: So that question is answered really thoroughly in the paper, but not everybody's reading it because they're listening to a podcast. I didn't mean to be flippant. It's just that that's kind of the thrust of what we wrote about. The commons has different meanings for different cultures and different places. So that's, again, another problem with tech companies deploying anything, in that they think that they're deploying something that's going to fit culturally everywhere, and it doesn't, but they don't hang around long enough to know. And part of the reason how they justify this is they talk about scaling. Well, when you start to get into culture and cultural meaning and different definitions for different cultures, that becomes a scaling problem. And technology companies, particularly big ones like Facebook or Apple or Amazon, they don't like scaling problems that they can't solve. It's very frustrating for them. So cultural meaning of what the commons are, and what they mean to various cultures, becomes an issue with something like Project ARIA, in that Americans may see the commons in certain ways, and there are certain laws that cover our particular commons in terms of, yes, you can photograph in public spaces, but that may not be true worldwide. So how do you deploy a project like ARIA, which are augmented reality glasses that are going to be recording, and comply with various cultural customs or various laws in multiple countries worldwide? And when you have a company like Facebook, which really has a charter of getting everyone on Facebook, and everyone means globally everyone, it becomes a really interesting technology issue as well as a social one. We wrote the paper with a focus on the US because that's where they said their study was. But something I think is very important to talk about is that in the original proposal that we read in the fall, Project ARIA was a study. But there were clues that it wasn't going to be a study. And those were things we alluded to in the paper, that Luxottica was forming a partnership with another company. They set up an entire new company to make these kinds of glasses. That shows an intent that goes beyond a study. Facebook had said, oh, we're not going to do facial recognition. This is just a pilot. And then earlier this year, like just about a month ago, Facebook announced that they were considering facial recognition for Project ARIA. So you had the company that had formed in response to this Facebook partnership, getting the mechanisms ready for manufacturing. So there's an interesting dynamic about intent and disclosure in terms of what we're being told versus what they're planning. And the clues of that were put in the paper. That all affects the commons, right? There's a difference between it's still bad to put a pilot in the public without giving them the ability to comment on it or withdraw from it in a way where they're not part of the experiment, which we write about at great length. But the other piece of that is that when you start to deploy something globally, with franchises, and Luxottica has franchises worldwide, it becomes like overnight, you could put these things out everywhere. And that's a significant issue as a monocultural device being deployed in polycultures around the world. And that adaptation and the laws around that, I don't think they thought it through.
[00:30:42.528] Catherine Flick: I guess for me in terms of the commons, one of the big things within the tech sphere, there's this idea of the commons, like there's been lots of interest in open source. A lot of these big companies started out with people who were quite brought up in open source and Linux and things like that. And this sort of idea of the commons was a very early internet goal, really. I mean, even Tim Berners-Lee was talking about how the World Wide Web was supposed to be a democratizing event and that it should open things up to be a commons itself. But we've seen over the years that actually the internet and the web and that has actually closed in and closed in and pulled everything into big, giant companies rather than being this nice kind of democratization. So the shift in terms of the understanding of the commons from a technological perspective has changed. And I think that this is actually where they kind of see perhaps there's a bit of that seeping into Facebook's understanding of what the commons in a physical realm might be. So, I mean, we see Wikipedia and Archive of Our Own are probably some of the biggest still remaining commonses, I guess, on the internet, but they're really struggling for funding and they have a lot of the big scaling problems that you might imagine that they would have. And so it kind of gets watered down, this idea of the fact that the internet was really supposed to be a democratizing thing and open to everyone to create things that other people would be able to see and everyone be on the same level. My suspicion is that what Facebook would like to do is to kind of monopolize the commons in the way that they have online, offline, and in the physical realm as well. I think this is one of the reasons why in the ACM Code of Ethics, we really wanted to make sure that there were things in place that protected that. And the wording got a little bit watered down because of the fact that most people don't understand what the commons actually is. So we talked about more like public ownership and things that are in the public domain, but that can still be interpreted to be physical locations and physical places as well as technological or data or whatever it is. I think it's a complicated thing to get your head around, but if you think about it in terms of the context in which it's public, it's open, it should be available to everyone who's interested in being a part of it, then that's a bit of an easier way to kind of think about it, I think.
[00:33:09.347] Sally Applin: And it's supported by public resources. So tax money in municipalities support the maintenance of the systems within our communities that we all share. and public spaces that are supported by that funding become the commons for us as well. And the internet, funded by the government initially, also started out as a public-funded military thing. The military is not necessarily a commons in terms of there's access problems that gatekeep us from the public, but we fund them for, ostensibly, the public good. But we pay taxes, and that contributes to public maintenance of public areas. And then you have a private company. We talk about this in the paper. You have private companies. So it's not just Facebook doing this, right? There's going to be, there's always lots of, I mean, I live in Silicon Valley. There's always lots of stuff kind of popping up, privacy invading, surveillance-y tech experiments popping up all the time in various neighborhoods. And the intersection of those, you know, how do we own our right to public space when it's being subsumed by not just something like Facebook, but lots of different companies, scanning and cruising and photographing and running experiments, mostly without clearance.
[00:34:32.625] Kent Bye: Yeah, well, I know that Facebook, as I've been listening to them talk, whenever any sort of ethical questions come up, they will now point to these four responsible innovation principles that they've come up with, which in my perspective are not really that robust to be able to really handle some of the complexity of some of the issues that they're going to be facing. And also they seem to kind of divert the attention away from digging into critical analysis. And so what I appreciate about the paper that you published, called Facebook's Project ARIA indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons, is that you're going through and deconstructing what is wrong with each of these principles in the context of Project ARIA especially. But before we dive into each of those, you could sort of contrast them to other responsible innovation principles, because it sounds like there's been a number of different efforts from the European Union and the European Commission to be able to do responsible innovation. And responsible innovation is not something that Facebook invented or created. It's been around for a while. And so how do these four responsible innovation principles kind of compare to what else has been out there in terms of responsible innovation or just general code of ethics principles?
[00:35:39.552] Catherine Flick: This is definitely up my alley, so I'll jump in here. But feel free, Sally, to jump in when you like. So technology ethics has been around since the 1950s. It's come through computer ethics, which was kind of the original term for it. And we've got some big people like Norbert Wiener, who's one of the more famous original computer ethicists. I could give you a whole history lesson on the history of computer ethics, but we probably don't have enough time to do that. Fast forward to the 1990s, where it's moving into... Originally, it was a lot of software engineers who were just worried about what they were creating and putting out into the public. Mostly what they were worried about were things like security. They were worried about privacy, but not to the same degree that we are now, but basic data privacy. They were worried about things like using software for weapons, but a lot of it was actually around security and actually physical security, which is interesting now because we don't really seem to worry too much about physical security in computer ethics anymore anyway. In around the 1990s, early 2000s, it started to shift and there were a lot more philosophers that started to get more interested in this. I did my PhD at the Centre for Applied Philosophy and Public Ethics in Canberra, in Australia, back then; it doesn't exist anymore, unfortunately. But my PhD supervisor, John Weckert, he was a philosopher. He was very much coming from a classic approach to information and he was mostly concerned about the ways that we work with information and security and things like that. I guess more recently, there's people like Luciano Floridi. He's a professor at Oxford University and his big thing is about information ethics. There's a bunch of different strands. He's a philosopher, but there's always been a bit of a tension between the people who've come through from a software engineering perspective, who are very much about professional ethics, about professionalism, about things like security, privacy, that sort of thing, and then the people who've come through from the philosophy side, who are more about what is computing, what is information, what is data and all that sort of thing, right? So fast forward to, I don't know, maybe about 10, 15 years ago, and they're all sort of starting to really talk to each other a lot more than they did before. In fact, I'm the current chair of a steering committee for a computer ethics conference called ETHICOMP, which I think was the original computer ethics conference, where we very specifically wanted philosophers and technologists and actual industry people to come together to talk about ethics because we thought it was really important. Obviously, I wasn't involved back in the 1990s when it started, but it's still going, and now there's lots and lots of computer ethics conferences and things related to that. But basically, this has all kind of come together to the point where, because of the shift in lives to being online and the impact of the internet, there's a lot more measurable impact of technology on people's lives. And so there's a lot more, obviously, raised levels of concern about that technology. So the European Commission, I really only know mostly about the European perspective because that's where I've kind of come through, but the European Commission was quite concerned about this in some of their research.
They were worried that what they didn't want was for the research they fund to be used for problematic purposes. So they didn't want to fund another Manhattan Project basically or similar. And so they started to look very much at what they originally called science with and for society. I'm probably getting my history wrong here, but this is where I kind of came into it in the late 2000s. And there was this strand where they were looking at governance and how does the commission actually do ethical governance of the technology projects that they're doing? So there were lots of things like guidelines for technology projects that came out of that. And then that morphed into the next round of funding projects where there was a whole strand on science with and for society, where it looked at things like citizen science and co-creation and participative designs and things. Then feeding into this, you have the Danish Board of Technology, which was really big on technology assessment. Their big thing was, how do we look at the impact of technologies. There's a whole series of technology assessment methods they came up with over the years. That's fed into this European perspective as well. There's also things like some of the ELSI movements from the US and science and technology studies stuff that's fed into it as well. Obviously, we have researchers come from all over the world to work in Europe, so you have these different perspectives, which is really nice. And really what we ended up with, in the last round of Horizon 2020 funding, was what was called Responsible Research and Innovation. And this particular European, he's not a commissioner, but he works for the European Commission in terms of running these big project funding things, René von Schomberg, he came up with a definition of responsible research and innovation from the commission's perspective, which was about science with and for society, and he set out a series of things that it needs to do. It needs to be reflexive. It needs to anticipate the issues that might happen. It was not so much specifically about technology, but any science should be doing this. So from that, basically, there spawned a whole bunch of projects looking at how do we actually do this in practice? Because the Europeans tend to be a little bit more on the philosophical side, and then there's these STS and ELSI kind of approaches from the States, which tend to be a little bit more on the practical side. You can almost see this divide in the attendees of our conference, where the philosophers mostly come from the European universities and a lot of the practical people who originally were software developers or whatever come from the States. So it's an interesting kind of mix of things. And so this is where we're at in the moment, where we're still looking at how do you do this in practice. The European Commission came up with these pillars that they really wanted all of their projects to focus on through this lens of responsible research and innovation. They included things like open science, public engagement. I mean, there's a whole bunch of them, the seven keys. They're quite vague. And there's one that's just called gender, which I really have problems with. And there's none about the environmental impact of anything, which I also have problems with. And there's a whole bunch of issues with various different conceptions of responsible research and innovation.
But really, I think the core of it is the main thing, which is that if you're creating technology, you need to think about what the impacts might be. You need to anticipate what the impacts might be. You need to reflect on what your implicit biases are, what your prejudices are, where you're coming from, what your motivation is, what your values are. You need to engage with people who are going to be impacted, and maybe even those who may not be directly impacted but only impacted through some sort of knock-on effect, or people in the commons perhaps. And then you need to set in place frameworks and institutional structures so that you can actually achieve what you want to do. So using things like codes of ethics, or there's a bunch of responsible innovation frameworks out there now, especially things for AI and stuff like that, which you'd employ to do that and to help you with that. And so, yeah, so it's kind of moved a lot around, but I think it's important to just detail some of that historical approach about how there are these two kind of quite traditionally opposed groups, the philosophers and the software engineers, who often never the twain shall meet. But in this case, they really have come together and done some pretty amazing work, I think.
[00:43:17.720] Sally Applin: So you wanted to jump in, right? Yeah, I think I want to jump in a little bit. So my dissertation was on Silicon Valley. So I spent eight years here doing fieldwork and trying to understand the folks here. I also worked in these companies on various engineering teams, software teams, in anything from 3D graphics group to communications technology. So I spend a lot of time around the engineering side, and I am an applied anthropologist, and then I take the work that I do and try to apply it to products that ship in the world. So that's just a caveat of where I'm coming from. I think, Catherine, speaking to the divisions, there's also something that's endemic in engineering culture, which is something that we're trying really hard to broaden and educate, which is this DIY ethos, making it yourself, being able to engineer your way out of any problem. There's also a huge reliance on science fiction for inspirational material and ideation. So people that have these engineering capabilities see things that are conceived and created by science fiction authors or filmmakers, and are inspired to try to make that a reality in the present world. And they're going from a different kind of passion. And I mean, that's how we got the cell phone from Star Trek, from the communicator. The inventor of the cell phone liked the communicator and worked on it at Motorola and worked on building a cell phone. There's nothing wrong with having an inspiration and working to make it, but if you put that technology in a culture where we're not 3,000 years in the future or 300 years in the future, the technology doesn't work the same as it does in the fictionalized environment. Because a fictionalized environment is a linear narrative. There doesn't have to be adaptation. There doesn't have to be adjustment. You can create the characters to respond and engage in the way one wants to. And that's not actually how people really function and work. So there's a kind of a third element between engineers or people that come from a more pragmatic sense of like, okay, how do I make this work? And philosophers and this other kind of element of entertainment inspired reality, and how that is going to function in society. And that's how I see Project ARIA. There's something that inspired these kinds of AR devices and inspired these engineers to build them. And if they have a culture of, I can do it myself, I don't need any extra help, I don't need philosophy help, it's going to work. It worked on TV. I mean, they don't literally say that, but I think that there's kind of an unbridled optimism about that technology. So I think that that piece also kind of overlaps with some of the other ideas Catherine was talking about. And that shapes the culture of technology and the culture of technology creation.
[00:46:07.663] Catherine Flick: Yeah. And I think you're absolutely right because it's also not just philosophers, right? I mean, one of the big things about responsible research and innovation is that you need to get a diverse group of people together coming from different perspectives. So don't just get philosophers in the room because that's not going to help you very much. Get anthropologists, get people who are experts in whatever it is that you're putting out there. Maybe people who are experts in traffic management, if you're going into the commons, because maybe your AR glasses people are going to hold traffic up or something like that. But get a whole bunch of different people in the room so that you can understand the context into which you're putting your technology, as well as the potential issues. A lot of the pushback that I've certainly had over many, many, many years, and this hasn't really changed much until AI came along and started getting everybody worried, was that tech people were just not interested in ethics. They saw it as a roadblock. They see it as, oh well, you'll stop us from doing the thing that we want to do. It's never been about stopping anyone. It's always been about asking questions about how you can do it better. Sometimes there is the question of, why do we need this? Even that is a question of, how do we do it better? Because it may be that if you're creating something that nobody needs or nobody wants, you're wasting a whole lot of money, right? So maybe you could be doing something else that would be better spending of that money. The biggest myth about ethics is that it's this roadblock. I think it would help if people were more reflective about what they were doing, but I think a lot of it is because ethics is often brought in too late. When I say ethics is brought in, this is often how it happens. Someone goes, oh crap, I need someone to come in. We're dealing with children and I need someone to come and advise us on how do we do this properly because we don't want to get into trouble with legal or parents or whatever. Often, it's just one person in the company. If there's any difference to be made, it has to be senior management who suddenly goes, oh, yeah, this is a bit of a problem. Maybe we should do something about this. Certainly, in all the companies that I've ever worked with, the ethics embedding has had to come from the top. If the CEO is not interested, it's never going to happen. This is one of the problems: it's brought in at a point at which development has gone so far that it's a massive loss if they don't push out their MVP or their product that they've been promising to everyone or that the shareholders are waiting for. They're not going to want to stop it if there's some really bad ethical problem with that project. That's why they see it like this. Sally, you're jumping up and down in your chair.
[00:48:40.296] Sally Applin: No, I'm excited about your point. So the pattern too is, you know, I've been in the industry for a really long time, and I started out in user experience design, and back then it was called human factors, human interface design. And exactly what we're seeing with ethics is what we used to see with HI. It's exactly the same people, exactly the same arguments. The only thing that's changed is people have kind of come around to user experience design as being something that they need. But now with ethics and social science influence, humanities influence, it's the same argument that we used to have trying to get HI, UX, whatever, considered. And the thing that is also really interesting is that there's an expectation that the designers, the user experience designers, can hold this load. So if it's not engineering, then the human factors people, you know, the expectation is that they can obviously kind of hold this load of not just figuring out controls and interaction and temporality and all the things that user experience designers are trained to do, but also that they're going to understand ethics, human cultural systems, human social systems, social dynamics, all these things that we're kind of trained to think about. And the designers just aren't trained to do that. But the expectation in companies is that the squishy stuff is going to get handled by that particular function. And I think that's another piece of this that takes education and explanation. Facebook, for Project ARIA, is probably well-staffed with designers and HI people that probably worked on these ethical guidelines or are planning this project. And yet the result of that is that we just see some trouble with the framework that they've developed.
[00:50:22.100] Catherine Flick: I think it's even worse than that because, I mean, they obviously came up with these. I mean, who knows at what point they came up with these? Because I suspect that ARIA has been a long time coming, right? So Google Glass was however many years ago, and I'm sure that's just been nestling in the back of their heads for a while, right? So the fact is that even conceptually, they will have already put a lot of work into ARIA before they've suddenly gone, oh crap. You know, we know that Google Glass flopped terribly because people had real issues with the implementation and the Glassholes and all that sort of thing. We don't want that to happen, so we need to come up with something that shows that we're doing it properly. It's even worse than that. This is where this sort of ethics washing starts, because they're like, well, we know we're going to do something and we know we're going to do it almost exactly the same way as these previous people, but we want to make it look like we're not. Let's come up with some ethics stuff. Everyone's thinking about ethics now because ethics is actually pretty cool. AI caused an ethics boom. Up until about four or five years ago, nobody was knocking on my door asking me to come and talk about ethics, and now I get everybody asking me. It's all AI ethics. I've actually tried to step a little bit away from that, but you can't help but be sucked in by this massive juggernaut that is AI ethics. They've realized that people actually are concerned about this and they need to do something about it. Of course, they come up with these completely vague, wishy-washy, very aspirational principles, kind of, you know, let's not surprise people and let's be people-focused. And putting people first has to be just the most hilarious thing I've ever heard Facebook say, because we all know who rules Facebook, and it's definitely not the people. Is it the people of Facebook? Well, no, it's the profit, I would say, actually, first, then the people of Facebook, because they don't even treat their own people properly.
[00:52:14.282] Sally Applin: Okay, it's Mark and Sheryl.
[00:52:16.648] Catherine Flick: Yeah, all right. That'll work. Yeah, I'll let you cut it off there. Yeah.
[00:52:20.708] Kent Bye: Well, I want to dive into the actual principles. Just to reflect back what you were saying, Catherine, there's a sense of anticipate, reflect, engage, and act. And so that's more of a general kind of strategy. And I think that on the scale from philosophical to pragmatic, Facebook is usually much more on the pragmatic side. So they're much more into, like, what are the things that we could use from our engineering perspective? And so the first couple of them are much more around disclosure, making sure they never surprise people. I think their intention at least was that there's going to be all this data from XR technologies, potentially even being able to read your mind. So if Facebook is reading your mind, then is that something that people would be surprised by, that all of a sudden Facebook has their thoughts? And so maybe they disclose it, but then even if people know about it, is their awareness that Facebook is reading their mind enough? Now, in terms of Project ARIA, surprising people is, hey, you're doing a public research project. What's the consent? Is there forced consent here for people? Do they have the technology to be able to opt out or opt in, and what about the bystanders? And so maybe you could talk about how the never surprise people principle kind of starts to break down, even just by looking at Project ARIA.
[00:53:30.507] Catherine Flick: I think one of the biggest things is the fact that they just don't define what people are. Which people? Is it people who are using Project ARIA? Is it the people who are within 10-metre range of Project ARIA? Is it the people who are running the services behind it? Is it the third-party companies that are no doubt wanting to interact with this at some point? Which people? If you're going to say something like, never surprise people, you're never going to be able to abide by that. You've hamstrung yourself before you're even out of the gate, because you're always going to surprise people. In fact, the more important thing is what do you do about that? It's not so much about the not doing it as the how do you actually effectively communicate with people. Here I am doing it too, but I'm much more specific in my people, which are the people who are likely to be surprised by it. They talk about transparency, where they have a history of being completely opaque. And even within the ARIA project, they're just completely opaque about how it's going to work, what it's going to do, et cetera, et cetera. There's lots of aspirational bloody blahs and cool visual graphics of fancy equipment, but how is it actually going to work in terms of collecting all this data, processing all this data, packaging it and chucking it back out onto these glasses, right? And then they talk about things like trade-offs, and it's like, okay, well, if you're going to be talking about the trade-offs that you've made, maybe you should be even stepping back a little bit to start with and actually talking about how did you come to these decisions in the first place? Who was involved? Did you get the people that are likely to be surprised sitting in the room with you, actually having any power over the decisions that you were making? I mean, there's just so much to unpack just in this first one. I'm only up to like the first sentence here, right? Like, it's just... oh, Sally.
[00:55:19.714] Sally Applin: So the fact that they kind of dropped this on the public and the press as an announcement was a surprise. So never surprise people by declaring you're going to subsume the commons for a research project was pretty surprising. So out of the gate, with the product announcement, they were violating their principle. And there's also the definition of surprise, right? It's not just who are the people, but what does surprise mean? What are the boundaries of surprise? Is it shock? Is it delight? It's just not defined enough. I think what they mean is... well, I don't know what they mean.
[00:55:55.583] Kent Bye: Well, part of my take on the first one, never surprise people, as I read through it, is that in talking to Joe Jerome, there's the fair information practice principles that were created back in 1973. So all the privacy policy with the FTC requires is that a company has to disclose, saying, hey, we're going to be doing this. And as long as they disclose it, then they can pretty much do whatever they want. So there's that disclosure. And I think that's kind of how to unpack this never surprise people: at least in Facebook's sense, it's describing what they are going to be doing. And then if people agree to those terms of service or privacy policies, then they're presumably not surprised, but we can get into how that perhaps breaks down when they start to do things like Project ARIA. There's a bit of ambiguity when it comes to, like, they actually have different versions of the fourth principle, the put people first, where they say something between the community and our community, and whether they're talking about their users versus the bystanders. Because they put out a request for proposals during Facebook Connect One, asking people to come in and fill the gap in terms of the bystanders, like how to deal with bystanders and the non-users, because they didn't necessarily have a comprehensive framework that included what to do with bystanders. So whenever I asked about the bystander of contextually aware AI, how do you have people consent or not consent, they're like, oh, well, we just had an RFP for researchers to be able to come in and help us figure out the bystander impact. And so in some ways, the bystanders are kind of like, are those the people? Or are those your community? Or is it the wider community? And are they in a stacked order? So is it consider everyone, and then put people first as number four, right? So what does that mean? If put people first is the fourth thing, then how do these things relate to each other? It's a little confusing. Well, anyway, we're sort of jumping ahead here in terms of unpacking things. But when I read the never surprise people, it's a little bit like they're talking about their users. They're saying, our users of our technology, don't surprise them. Don't do too much that's going to be surprising, where all of a sudden you realize that all of your thoughts have been recorded for the last decade and you didn't know that that was happening.
[00:57:55.358] Catherine Flick: I think the important thing also with that first one is they really push a lot of work onto the people. So the users of the technology are the ones that have to be responsible, the ones that have to use it safely. You know, they're just throwing away their responsibility and their requirement to actually take some responsibility for how their users actually use this and how their users deal with the technology. It's quite negligent in many ways, because it's very nice for them. They can say, oh yeah, we want people to use our technology safely and responsibly, but then they don't talk about what they will do if people don't. This bystander thing is very important. The fact that they're only coming out after they've announced the technology to then start thinking about, oh, how might this technology then impact people who don't use it, or who are not interested in using it, or who are within a certain radius of it. If you're creating technology responsibly, this should be at the forefront. You should not be creating prototypes before you've thought about these problems. You should be thinking about these problems before you put money into this sort of thing. I mean, it's nice to have a little technical puzzle solving, right? It's a fan favorite of engineers everywhere. They love to solve little puzzles. How do we make this work? How do we get this more efficient? How do we bring cameras to tiny spaces or whatever, right? How do we make it like the movie? Yeah, how do we make it like the movie? Exactly. But some of these things just don't belong in a society where, A, not everybody can access them; B, not everybody wants to access them; C, the requirements for the data and things like that outweigh the rights of the people in the vicinity to conduct their daily lives and things like that. It's just irresponsible from the outset.
[00:59:45.954] Sally Applin: I would also add with bystanders that Facebook's own community are bystanders, because ARIA is not just going to draw from the environment, right? It's also likely going to draw from the data that is inherent to Facebook. And not all of those people are going to be participating in ARIA or have ARIA glasses, yet their participation varies depending on what people with ARIA are going to be doing. There's this kind of halfway in for the people that are posting and contributing to Facebook in general, as not fully being part of Project ARIA, but also being used as fodder for data for that project. And that's also something to consider: the role of the people that are creating the data that feeds the project.
[01:00:33.058] Catherine Flick: Also, the surprising thing might be if you're walking around in public one day and all of a sudden there's somebody looking at you with one of these glasses and you realize, oh my gosh, that must be that Project ARIA thing. Or you might not actually know at all. They say things like, oh, we're going to put them in highly visible clothing or whatever. We've gotten used to this to a certain degree with Google Maps cars driving around. There's been a whole lot of hullabaloo about whether Google Maps can go down certain roads and things like that. There was huge privacy stuff about that that you can look up, back when they were initially mapping everything. But that's a large physical object with these giant things on top that are quite obvious. Somebody walking around wearing a pair of glasses, even if they're the most whiz-bang, massively huge... I mean, I'm from Australia, so Australians would understand the Dame Edna Everage glasses. You don't see them if you're not looking straight at them. You may not then understand what it is that you're looking at. Facebook is saying things like, oh, you can go up and scan a lanyard that has a QR code or something on it. Certainly, that's what they were saying it would be like. Who wants to go up to someone that's wearing an all-singing, all-dancing data collection device, to then get quite close to them in order to scan a QR code or to type a URL into their phone or something like that? Not surprising people isn't just about the actual technology itself. It's about the positioning of that technology within the public space as well. This is why they can't ignore the bystanders, and why the bystanders really should have been point number one that they really looked at, because it's irresponsible for them to not have thought about that.
[01:02:18.467] Sally Applin: It's almost as though they're looking at their community as the commons and not the commons as the commons. And there are two points we make in the paper which I think are relevant to bring up here. One is that these uniform lanyard QR code things pretty much dehumanize that Facebook employee into the equivalent of a Google Maps pin. They just become a point. And they're a point representing everything that these glasses mean. And that could endanger them as well, but it also is very dehumanizing to them. The second point is that with this partnership with Luxottica and Ray-Ban, the glasses are going to change. So with Google Glass, there's this othering that happens because the glasses that people were wearing really looked different and space age, and they differentiated people from each other. Whereas the glasses manufacturers that they have partnered with really focus on style, but also are going to be making glasses that we've seen before. So there'll be this blending in, which will make it much harder to delineate. And then the other point about collecting data and viewing data through the glasses, which we used in the paper, I'm trying to remember the name... But Ayers Rock in Australia, the indigenous people that own that rock have stopped physical tours, and Google has mapped that. And they've actually petitioned and won to stop any release of the data of the view of that rock. So even though it wouldn't leave footprints or cause any physical damage, it would cause a cultural imposition on the indigenous people that own that property that it's special to. And as a result, there's data that's not available to be used. So with this particular kind of technology of mapping and viewing, there have to also be other cultural places that shouldn't be recorded or played back. And that doesn't seem to show up in those principles at all either.
[01:04:15.030] Kent Bye: Yeah. In terms of the provide controls that matter, again, they seem to be focusing on their own users, but yet as we're talking about the bystanders, you know, what kind of controls do the bystanders have? I think that was the point that you're bringing up. Yeah. So I was wondering if you had any other thoughts on the providing controls that matter in terms of how that breaks down?
[01:04:35.249] Sally Applin: Well, matter is also something that, again, they haven't defined as a term. What does matter mean? Who does it matter to? Do the controls matter to Facebook? Do they matter to people? Do they matter to the people that are not engaged? Do they matter to the people in the commons? They don't define it.
[01:04:51.500] Catherine Flick: And the other big thing is that they say they won't offer controls for everything. And so they're specifically putting boundaries on that, which are very much up to Facebook to control. So it's not about, we will discuss with stakeholders what controls are necessary or required or optional or that people want. They just say, we're not going to do it. We're not going to give controls for everything. And so you can immediately see Facebook walling off the requirements for Facebook's moneymaking or partnerships with Luxottica or whatever it is that they want to protect; they will put that behind that wall. You're not going to be able to control it. You've seen this through Facebook's history. They've been trying to scrape as much information as they can about everybody, not just their users, and they're pushing back against these new Apple privacy controls that have just come out. There's a whole bunch of things that they really, really want that they know that we don't want to give them. By saying things like, we're not going to offer controls for everything, that gives them the leeway to put behind that wall the things that they want to keep control over, which may not be the same things that a user would want them to be able to access. I feel this is quite reprehensible, because they're using these responsible innovation principles as saying, look, here's how we're going to be responsibly doing this, and then still giving themselves space to essentially exploit the people who are using this, or who are not even using it but just happen to be in the vicinity of it. But then they can say, oh, well, we told you, because it's in the responsible innovation principles, right? And that to me is just such an antithesis of what responsible innovation should be about. And I, yeah, I find this one particularly egregious.
[01:06:32.433] Sally Applin: Almost as if they designed them for arguing in court, not necessarily for enacting good policy for people. It's nebulous enough. It feels kind of weaselly in terms of how the wording is, and that doesn't feel ethical or right for the people. And also, if you're considering everyone and then you're limiting controls, you're not really considering everyone. So they violate a principle, which surprises people, which violates another principle.
[01:07:00.571] Kent Bye: Well, there are certain ways in which these are interconnected and related, and it's hard to know whether they're stack ranked or what, because there's consider everyone and put people first. I'm wondering if we could unpack those two together, if we can, in terms of how you make sense of those. In some sense, consider everyone is like diversity and inclusion, trying to make sure that you're not bringing harm to underrepresented or marginalized communities, but put people first then creates a utilitarian argument saying, okay, we're going to take whatever's going to be the best for either their community or the wider community. It's a little unclear, but how do you make sense of these two, and what issues do you see in terms of consider everyone and put people first?
[01:07:38.158] Sally Applin: In considering everyone, they're not, because not everyone's going to be able to afford these glasses. Not everyone is a visual information processor. Some people have eye issues or have challenges with recognizing red-green color. Their control lights are red and green, so they're not considering everyone, because not everyone's going to be able to have them. Out of the gate, there's a problem with that. And I think they're thinking about considering everyone within their user community, which is different. And certainly they aren't considering everyone by not considering the bystanders, as we talked about, not considering the people in the commons, not considering the local laws, not considering property owners. I mean, they're not considering everyone at all. And the people, it seems to me, that they're putting first is Facebook. And that's all I'll say about that.
[01:08:33.342] Catherine Flick: I think words are really important. So when I was involved in rewriting the ACM's Code of Ethics, we agonized over words. We agonized for hours over one sentence that might be slightly in conflict with another sentence that was elsewhere in the code. I mean, on the one hand, I want to say that even some of my first-year students would do a better job than this in terms of making it internally consistent. But then I actually think that that's even kind of a backhanded compliment, because anyone can see that these are not internally consistent. It's just so poorly thought out in terms of the wording of it. And perhaps it's what Sally was saying earlier about it being sort of smoke and mirrors in some ways, right? But it might just be more for the legal argument that, yes, we did something and here it is, and here you are showing that this was the thing, right? Because they talk about prioritization of what's best for most people in the community in principle four, put people first. This most people thing is, as you said, a utilitarian argument and is in much contradiction with responsible innovation approaches, which really should be problematizing that. It shouldn't be about, we're going to make a decision in this way. It should be about, how do we come to these decisions? Let's bring people together to discuss what the issues are. Let's contextualize it. Let's do this. But this is very much a top-down, we're going to take this particular approach, which is very much against the classic responsible innovation ideal, really, which is all I can compare it to here. And this really just further solidifies Facebook's approaches. I think they're also looking to make themselves feel a little bit better about what they've been doing for so long by saying, well, if we look at it from this perspective, it is actually quite ethical because we're using utilitarian theory or something like that. Even though utilitarian theory has been widely decried in these sorts of situations. I mean, you don't have to read a huge amount of ethics to know some of the problems with utilitarian approaches, especially when you're trying to bring in technology that everyone, quote unquote, is supposed to be using. If you're not going to actually prioritize everyone, if you're only going to prioritize most people, you're not going to be prioritizing everyone. And therefore, who's going to be left out? It's going to be the people that get left out all the time, the edge cases. And this is a classic issue with utilitarianism: it's kind of like the people in the too-hard basket get left in the too-hard basket. And so then there's a further digital divide that's enabled. It just further entrenches existing prejudice or institutional bias or lack of infrastructure or whatever it is that has caused these people to be in the too-hard basket to start with. And so this really is just so internally inconsistent with putting people first that I have to throw my hands up and just wonder how they actually... I'd really like to have seen when they were all sitting down, I assume there was a group of people sitting down, talking about what are we going to do with this? How are we going to write some responsible innovation principles? Because we've got to do it because, I don't know, someone higher up has said that we have to or something. I'm sure that's how it went, pretty much.
I bet someone just kind of doodled a few ideas on the back of a napkin and then maybe they discussed it or whatever, but they never really sat down to give it... like, this is really serious stuff, and I really feel quite disappointed that they didn't take it more seriously. Because if they had taken it more seriously, they would have at least made it internally consistent. And that's the least you can do for this sort of thing. I mean, I could disagree with the actual content or whatever, but at least make it internally consistent. Yeah, it takes effort to make these things. And I think some people think perhaps that ethics is just, you know, anyone can do ethics. It's just about being good, or about stopping bad things from happening, or what might people expect us to write or whatever, but it's not as easy as that. And you have to be really careful, because words mean things. And sometimes when you write words, you leave out a whole bunch of people that are really important to include. And so, yeah, disappointed.
[01:12:48.625] Sally Applin: I don't think they thought about it. And I don't think that they... I mean, words matter, but also consideration of others and understanding how things work over time matters. And I think the whole announcement, deployment, consideration, framing, development, ethics charters altogether, this is going to be a huge problem. And the thing is that Facebook's not the only one making this. So these are Facebook's ethical guidelines, but that's not everybody that's going to be building these. There are going to be 3D printers and startups and all kinds of individuals developing these and participating in the data sphere with them in new ways. And those people don't necessarily have the same kind of accountability or obligation that larger companies do, or will kind of slip through in various ways. So we're going to see thousands of little startups worldwide trying this stuff out, inspired by what Facebook's doing. We'll see a really uneven deployment of all of these kinds of things in the commons. And that's going to be a real adjustment for everyone. And that creates a different kind of concern and a different kind of impact for people as well.
[01:14:02.807] Kent Bye: I wanted to bring the conversation to an end here. We've been covering a lot of different stuff for a while. In your conclusion, you put forth some suggestions that I think are actually quite useful in terms of the need for transparency, the need for effective oversight for this responsible innovation, as well as the willingness to halt the development if there are too many of these ethical issues. And as you talk about the intent, you know, things just seem to be already happening, and is this just public relations ethics washing to be able to point to, to say, we're doing this responsibly, when really it's already kind of in motion without much feedback or public comment or oversight in any meaningful fashion, or any transparency? So maybe you could wrap things up here with some of the suggestions you're providing and some other final thoughts.
[01:14:48.838] Sally Applin: I think one of the suggestions we made is not to do it. Honestly, we don't see a good way forward for this in the way that they're framing it at the moment. And that was very clear in the paper. And by the way, I wanted to point out the paper is open access, so anyone can read it and they don't have to pay. They can read it online, they can download it. But trying stuff out and putting it in the world at a large enough scale... you know, there's, again, that serious partner development going on with Luxottica, which is an intent to do something big and global. And it can happen really fast because of the franchises, because of the infrastructure that's already in place with the partner. Technology companies don't do glasses very well. We've seen various attempts, and they're all kind of horrible. But Luxottica has some real power in that realm. It's a very, very smart alliance, but it also is going to give them leverage to be able to deploy really quickly worldwide. And I don't think the globe is really ready or prepared for that kind of thing happening overnight. And it can happen. But my conclusion after going through this practice of writing a very long paper on the thoughts around all of these ethics principles and what they're intending to do is that it's going to change the world irreparably, and we're not ready. And we haven't thought about it. And I think that it is irresponsible to do this now, or potentially ever, without much more consideration from way more people in the community of all ages, genders, expressions, ethnicities, cultures, like everything, governments. It's just not... I don't see a way that this could be good. I just don't.
[01:16:39.859] Catherine Flick: I guess for me, really, I think it's important to be able to say, stop a project. But I think it's also important to learn lessons from these sorts of things. And I think really, for me, the big things are... I mean, it's not like Facebook's ever going to read this and go and change their company. But in my ideal world, if Facebook were to read this and think, actually, yeah, they've got some points, and want to do some things a bit better, they really need to work on their transparency in terms of how they do all these big projects. And it's not just about ARIA, but they have a lot of other weird stuff going on where it's not entirely clear how that hooks into Facebook's goals, into Facebook's overall strategy. I mean, I'm sure investors get a lot of information, but maybe that's not quite so broadly deployed. Facebook is a huge company with a lot of money and a lot of resources. Perhaps if they spent some time and energy and resources on developing meaningful consent mechanisms, developing meaningful opt-out methods, developing meaningful understandings of their algorithms and things like that, that would be a really good step towards transparency, and actual trustable transparency. The thing that companies need from their users, to a large extent, for the most part, is trust. Facebook manages to get away with it because they don't necessarily need the trust of their users. They just need everybody else to be using it too, right? So they rely a lot more on peer pressure. But ultimately, if there's a massive enough breakdown of trust, people will move away from it. What that line is, I don't know, but there is a line somewhere at some point. And the other way to build that sort of trust is to have an effective oversight mechanism. So they have this oversight board, theoretically, which basically seems to be pretty toothless. I really like the efforts of the Real Facebook Oversight Board, this external group of academics and people who are interested in actually holding Facebook to account. I really like that they've gone and done that, because that's what it should be. It should be a group like that. And there are companies that do that, right? So I've worked with some companies in my responsible innovation things that have completely independent ethics boards. They have open minutes of the meetings. The ethics boards are free to call out the company for doing problematic things. And the company is required by various statutes in the company, and, I don't know, I'm not a business person, but they're required to listen to these people, right? Or come up with really good reasons as to why they're not listening. I mean, this is such a big company, and they have a lot to lose if they lose everything, right? And so I think they can spend some time and resources on building up trust and building up transparency mechanisms and solving some of these really difficult problems like consent mechanisms that are meaningful. I don't think they can, but they can have a go, right? They can certainly make it a lot easier than it is now for Facebook users, and they need to stop doing things that greedily suck up shadowy data in shadowy ways that people find creepy. And that includes things like ARIA, because ARIA is just going to be creepy, right? I mean, Google Glass was creepy. ARIA is going to be creepy. And I just think that that's probably... Yeah, I mean, that's a slightly positive spin on it, but I don't think it's likely it will happen. But yeah.
[01:20:05.780] Sally Applin: I think there's a huge difference, though. And the difference is that this is a new medium paradigm. It's not something we're used to. We've had phones. We're used to phones. We've had laptops. We're used to laptops or desktops. It's pushing mobility in another way. It's changing how we... you know, the face is really a special space in terms of how we interpret faces, what they mean for our identity. Still, in our culture, very few people tattoo their face. There's just something sacrosanct about our heads. And we're also changing from dipping in and out of mobile to constantly being on and constantly collecting. And that's a real big shift. And I don't think even optimism from these large companies is enough, because it's going to shift culture immeasurably. It may already be on the books, it may already be happening. But I think a larger cooperative assessment amongst not just the big players, but society, makes this very, very critical and important. And for me, that's why this project is different.
[01:21:22.428] Kent Bye: Awesome. Is there anything else that's left unsaid that either one of you would like to say?
[01:21:26.677] Sally Applin: I think there is something that is important to say that I didn't say today. We live and work in groups, and most of the processing of understanding this technology is still focused through the lens of the individual. And those ethics, even though they use words like people as a plural, are still framing and thinking about this as an individual experience in the world, not a collective one. And a collective one means it has to fit within the group, in the commons of social groups, of our individual selves, our families, our friends, extending to people that are part of our group but not in our group, you know, the people that work at the store or the library, or whoever we see in the world, which has changed with the pandemic. But humans are group focused, and without cooperation and collaboration in a group environment, we actually will die. If you think about just being able to have this podcast, you know, the people that made the computers and the software and the microphones, like, to get us to the point where we can actually have this conversation required massive amounts of human cooperation that took thousands of people. So to think about things or rights from an individual perspective just doesn't account for how we actually are in the world. And I think that that's an important consideration for these conversations and these kinds of ethics, because they don't seem to be addressing how interrelated we are.
[01:23:01.242] Catherine Flick: I think also something I wanted to add, actually, because you just vaguely mentioned it then and it just reminded me: these principles are not something that a developer could go away with and say, okay, I'm going to write some code today; how am I going to apply these responsible innovation principles to the code that I'm writing? Whereas if you look at something like the ACM's Code of Ethics, it's very much geared around how do I make the best decisions possible in my day-to-day activities as a programmer, or as a systems administrator, or a manager, or whatever it is. Once again, these principles really are not for Facebook. They're for other people to read about and have, well, maybe have some understanding about what Facebook would like to think that they're doing. But in order to maximize transparency, particularly, they really need to show what are the values, what are the principles, what are the guidelines for their day-to-day programmers, their systems admins, any decision makers, because it's all about decision making. How do I make the best decisions in my day-to-day work? That should be the paramount foundation for any of these sorts of principles, right? And so it'd be really good to see some sort of internal decision-making guidance from an ethics perspective or responsible innovation perspective, which I suspect they maybe have in small ways, in some ways. And certainly there will be ACM members within Facebook. And so they will actually have to abide by the ACM's Code of Ethics. As we state in the paper, if these people are actively contributing to some of the ARIA project, they could be in violation of the ACM Code of Ethics, because these principles don't deal with potential harms. They don't deal with diversity and inclusion. They don't provide meaningful opt-outs and things like that. These people really should be thinking about and questioning the work that they do, which then really should have some sort of route further back up the line in Facebook. And if Facebook is a decent and responsible company, like they'd like you to think they are, they should have robust internal methods for dealing with ethical concerns from employees. But I don't think they do, somehow.
[01:25:17.607] Sally Applin: I'd also like to add that I wrote the paper because I wanted to explain to others, anybody else, whether Facebook or not Facebook, like, you know what, you're putting stuff into society, and there's a lot of stuff around the thing you're making that you aren't thinking about, and I'd like to take you through all of those things. How private companies have started to encroach on municipal relationships, encroaching on the commons; the structure; one theory of community, or poly-sociality; one theory of communication structure about how your messages are creating cooperation or not; what those things mean; what it means to layer electronic communication onto people walking through an environment, whether their attention's distracted. There are a lot of pieces that are going to make the world that we're in. And I don't think that the people developing the technologies, or even purchasing them, have really had the exposure to think about a broader argument about how all these things are working together or not, and how historically this has been creeping. It was absolutely no surprise to me that Facebook this year was saying, oh, you know, maybe with ARIA we're going to put in facial recognition. If they do that, that changes the entire discussion on ethics. That brings in way more law, privacy, surveillance concerns, identity, tracking, things that we've touched on, but at much stronger and deeper levels. And I just wanted to put out a paper that really explored what was going on, so that at this point in time, before all this stuff happens, we have an ethics perspective of what the company says they're doing, what they're doing, but also all the stuff around these glasses, this production, whatever it's going to be. So that we've marked in time: look, this is where we're at, and this is what's happening, and now you're going to put something in, and here's how things might change.
[01:27:23.616] Kent Bye: Awesome. Well, Sally and Catherine, I just wanted to thank you so much for paying attention to all these things and for feeling the emotion to be catalyzed to write all this stuff down and to explain it in such a clear way. Because I think that that's a big part of the conversation as we move forward, especially as Facebook is presumably in this time where they're really trying to take in this feedback, and who knows how this may feed into them changing some of these principles or incorporating all this stuff. I think you're pointing out a lot of the stuff that's missing here from a responsible innovation perspective, from what has already been happening. You know, there are no citations from any of this that reference any of the prior work here. They're just kind of making this up from scratch. So the lack of connection to any of the established traditions and best practices and oversight and all this stuff is obviously all very concerning. And I'd refer people to go check out your article to get more context and information. And yeah, just thanks for writing it up and for joining me today to be able to unpack it a little bit more. So thank you.
[01:28:15.048] Sally Applin: Thank you. It was a real pleasure.
[01:28:16.389] Catherine Flick: Yeah, it has been great. Yeah, really, really, really great conversation.
[01:28:19.611] Sally Applin: Thank you. I want to add one thing. If you're driving, listening to this, maybe you can't get to the podcast website. Look for Applin and Flick and Facebook and you will see our paper in the Journal of Responsible Technology.
[01:28:33.911] Kent Bye: Awesome. Thank you so much. So that was Sally Applin. She's an anthropologist looking at the cultural adoption of emerging technologies and the communication implications of those through the lens of her PolySocial Reality theory. And then Catherine Flick, she's a reader, aka an associate professor, at the Centre for Computing and Social Responsibility at De Montfort University in the United Kingdom. So I have a number of different takeaways about this interview. First of all, I just really appreciate being able to set these responsible innovation principles in the context of these other developments in responsible innovation. There seem to be other approaches that have been around for a while, you know, just anticipate, reflect, engage, and act as a framework that seems to be a lot more robust to be able to start to deal with a lot of these issues as they come up. I would love to hear more information and context from Facebook in terms of how these got developed. They seem to be very much influenced by the lens of what's going to be useful for Facebook's business. The very first one is don't surprise people, which is all about a recasting of the fair information practice principles, which are essentially around informed consent and disclosure of different stuff that they're doing. And that's how all of the privacy laws are written: to make sure that the users are informed as to what's happening. Now, how much is that informed consent when it's a bunch of forms that people aren't necessarily reading? That's where Sally's PolySocial Reality comes in, in terms of the different types of information that are expected, and this numbness that we have from signing all these different privacy policies and terms of service. How much do we really read all those and all these different things that we're signing? How much is that really truly informed consent? And also, surprising people as a metric, is that really the best way? Are there other ways to have either transparency or accountability? If you really don't want to surprise people, then what kind of methods of transparency, of having oversight and building trust and having independent researchers be able to come in and comment on this, are there, rather than this don't surprise people and the very nature of how they announced to the world that, hey, we're just going to start to do this kind of research project out in public? There are all these implications that we talked about with the commons, and what is the relationship of these private companies going into these places, and where's the ability for the public to be able to comment on this? If it is a public research project, then there should be some sort of mechanism to be able to have the public respond to it. So there's this, hey, we're going to do this, and we've already decided, and this is already happening. And so that within itself is very surprising in terms of just how they announced it. So whether or not that's the best metric or how to really live into that, I think there are lots of different questions around that. So that's just the first principle of don't surprise people. The second one of controls that matter, you know, obviously Facebook is going to be recording lots of different information, but there doesn't seem to be any way to kind of opt out of certain information that you may not want them to be recording.
So whether that's your eye gaze information that's extrapolated from your head or hand pose, or what you're paying attention to, all sorts of different specifics, people aren't really given the option to be able to click a checkbox and say, hey, don't track what I'm doing or don't pay attention to this or that. There's this interesting thing that I see Facebook doing, which is that they have this business model foundation of surveillance capitalism. That's where they still make the majority of their money. And as they move into this augmented and virtual reality, are they going to still port this existing business model in? And if you listen closely to what they say, it sounds like they're kind of on that roadmap to do that. Part of the argument that I hear them saying is, like, Boz was on the Wired Gadget Lab podcast and he was saying something along the lines of, most tech policy is hurting people who are poor, and that they're not really considering how to create accessible technologies. Because Facebook's business model is based upon mortgaging your privacy to be able to subsidize all their different features, they're using that as an ethical argument, saying, hey, we're going to make this technology widely accessible to as many people as we can. There are these certain trade-offs that are already within the first principle, saying, we're just going to tell you what we're going to do. But when you look at something like the right to mental privacy, it's a human right from a neuroethics perspective, but that's nowhere mentioned within any of this responsible innovation framework. It doesn't really fit into any of these existing things, because it's not a right that they're necessarily optimizing for. If you listen to Boz on the Gadget Lab podcast, he says Apple's really optimizing for the privacy, and Facebook optimizes for other things. In some ways, they're optimizing for making this technology as accessible as possible, i.e. mortgaging your privacy to be able to subsidize all these different technologies. And is that a trade-off that we have any say over? Would we rather pay more money to be able to have access to some of these things? Right now, we don't have much competition for standalone VR, but if Apple does come into the game with super, super expensive devices, then will people who are really concerned with privacy and how things all go be willing to pay all that extra money? I think that's yet to be seen. I think this is some of what's playing out. But even if you look at the first two principles, they're kind of around a business model that isn't challenging surveillance capitalism. The controls that matter, what you can and cannot control, there are going to be limits to that. As Catherine was saying, there are going to be certain ways in which that is going to be funneling into monetization of the data that are being collected here. Potentially. If you listen closely and read between the lines, that's kind of where I see things going. There's nothing in the privacy policy right now that would prevent them from monetizing data in that way. So for the other two principles of consider everyone and put people first, obviously there are conflicts between those and inconsistencies. And, you know, for me, I'm less concerned about the inconsistencies, because I think actually some of these do have trade-offs. You can't serve all of the audiences equally all at the same time. You do have to make some sort of trade-offs.
I mean, Facebook in general has been optimizing by going for the mass audience first. They have to actually build up a viable product before they can start to really say, okay, now that we actually have traction with the market, how do we start to really expand out in terms of accessibility concerns? Because for people who are deaf or blind, there are certain aspects of the immersive technologies that the first iterations can't be optimized for. So if anything, if they're stack ranked, then, you know, they're putting their main community first, and then they're optimizing for these other aspects of the bystanders and everybody else. Now, if there is a stacked order and No. 3 is consider everyone, then theoretically the bystanders should be more important than the people that are actually using their technology. But they launched this research without really even having a full plan for the bystanders, and they're basically outsourcing this to researchers; at Facebook Connect, they put out an RFP saying, hey, we're offering up to $1 million of different grants and research grants to be able to help us figure out the bystander impact of consider everyone. Clearly, if that was their priority, they would have had a much more robust framework for actually how to deal with a lot of these things. But I think in some ways they just legitimately don't know how to deal with that yet, and so that's certainly a big concern. Anytime they're asked about that, then they'll point to these RFPs or these responsible innovation principles, but these principles don't actually provide any answers for how to do that yet. They're just high-level guidelines. Catherine was talking about the larger context of the responsible innovation principles. There are other strategies that are already out there in terms of anticipate, reflect, engage, and act. These are established principles that are happening within responsible innovation, that are happening in the context of the European Commission or these other efforts and initiatives. There are no citations or footnotes or any references for how these came about. Are they born out of a specific tradition? They don't seem to be. They seem to be very tailored for what's going to fit within the context of Facebook's business model and the context of the technologies that they're building. It's a little unclear as to when they came about, how they got developed, and how they're actually applied on a case-by-case basis. There are going to be a ton of different issues when it comes to the ethical and moral dilemmas of mixed reality. This is something I've covered in both my XR Ethics Manifesto as well as a talk on the main stage at Augmented World Expo talking about the ethical and moral dilemmas of mixed reality. So just looking at the variety of different issues around algorithmic bias, or who owns the right to augment certain spaces, different aspects of the commons and how the commons are used, and privacy and the right to mental privacy and your agency, and are some of these different technologies undermining different aspects of agency? I mean, there are lots of different, really tricky ethical and moral dilemmas that I don't think this framework is really robust enough to cover. I'd love to see a little bit more of, like, how do these continue to develop?
Because as they stand now, they're not really all that informative in terms of how they're going to actually help to make decisions from an engineering perspective. Sally was saying, well, maybe these are just kind of meant for public consumption, and that they're almost designed to be argued in a court, to show that they thought about them or that they had some sort of guidelines of what their guidance would be, to justify some of the decisions that they may or may not have already made. I think there's a general concern as to this concept of ethics washing, which is just that they have these principles there, but they're not really being used from an engineering perspective. That's where the ACM's code of ethics is a little bit more detailed in terms of how, from an engineering perspective, you're going to be able to live into those different codes of ethics. If you're a member of the ACM, are you in conflict with the code of ethics if you're actually working on this project and you're doing things that are not necessarily fully considering all the harms? And so just this last one of put people first: when it talks about either their business, individuals, or their community, they're going to prioritize their community. So in other words, their community of users, their user base of people who are using this technology, is who they're going to put first. And so that within itself creates a bifurcation as to what put people first means. It sounds like that's all people, but the people they're actually talking about are their users. And so there's language here that isn't necessarily clear. If you just say put people first, that's a lot different than saying we're going to put our community first, which is actually literally what Boz said here. I'm actually going to play this segment where Boz talks about put people first: "We strive to do what's right for our community, individuals, and our business, but when faced with these trade-offs, we prioritize our community." So yeah, he says we're going to put our community first. So if you look at put people first, they're talking about their community, their community of users. And then the one before that is consider everyone. Is that everyone that is already using the technology, or also those not using the technology, the bystanders?
[01:38:41.707] Facebook Employee: And we build for all people of all backgrounds, including people who aren't using our products at all, but may be affected by them. We think about this a lot in the context of Project ARIA.
[01:38:51.472] Kent Bye: So when they talk about consider everyone, they're talking about the non-users and everybody else that's outside of the ecosystem of the technology. So, you know, I think some takeaways just generally: these responsible innovation principles have a lot of problems with them. They're unclear. We don't know where they came from. We don't know how they were developed. We don't know how they're going to be applied. And there are larger issues, I think, in terms of transparency and accountability. Is there a willingness to kind of halt all the development if different issues come up? What kind of internal reporting is there? And if there is oversight, what kind of independent oversight would there be? The existing oversight board, does it have any remit to be able to look at some of these different issues? Or is its mandate only to look at the existing content moderation issues of all the other Facebook systems that are out there already? So there's the Real Facebook Oversight Board, which is a group of independent scholars that are kind of trying to implement what an actual oversight board would look like. So I think to really kind of live into this don't surprise people, you know, what are the transparency and accountability mechanisms here? And how can the public be in more of a dialogue for how this all starts to play out? Once it reaches a scale, then some of these different algorithms and the way the technology is deployed have the ability to really shift all sorts of different aspects of the culture. That's a big reason why Sally was talking about this PolySocial Reality theory. It's a communication theory that's really looking at all these different messages and how you start to look at pluralism and the heterogeneity of lots of different systems and contexts and how those all get blended together. How do you manage that when you have these technologies that are a little bit more monolithic and homogeneous going into these poly-social, poly-cultural, and pluralistic contexts with many different meanings and different normative standards across lots of different cultural contexts? Taking a technocratic approach doesn't necessarily work all that well. And the other aspect is the utilitarianism and just the problems of utilitarianism. I mean, from an ethical perspective, there are lots of issues that are being brought up here. And there's a lack of critical dialogue within the larger XR industry; we haven't really dug in and unpacked a lot of these different things. So I'm really happy to see that Applin and Flick were able to put together this paper. Again, if you haven't read it, there's lots of other information that we weren't able to really dive into here. I mean, there's only so much detail you can go into in a conversation when someone has written a 15,000-word article that really digs into a lot of these different things with other references and pointers. I think we gave a good overview, but there are certainly a lot more details to be looked at. I'd recommend people check it out. It's called Facebook's Project ARIA indicates problems for responsible innovation when broadly deploying AR and other pervasive technologies in the commons. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon.
This is a listener-supported podcast and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.