There have been a number of documentaries at Sundance over the past three years that have taken a critical lens to the social impacts of technology, including The Cleaners (2018), The Great Hack (2019), The Social Dilemma (2020), and Coded Bias (2020). Karim Amer was the director of The Great Hack, and he returned to Sundance this year to continue exploring the dynamics of surveillance capitalism in a virtual reality piece called Persuasion Machines, co-created with immersive media artist & architect Guvenc Ozel.
Persuasion Machines takes an architectural approach to telling the story of surveillance capitalism by exploring three different layers of space. The first layer is a projection mapping of a grid on the floor that gives you a sense of the space you’re about to enter. Then as you enter into VR, there’s a photorealistic, millennial living room of the future filled with all of the latest gadgets. You reach the final layer by walking through different portals, which reveal a visual depiction of the world as it would be seen by the immersive gadgets of the future. You’re metaphorically cutting through the matrix to see abstract representations of data that show what all of these gadgets actually are: “persuasion machines.”
I had a chance to speak with co-creator Ozel about their architectural approach to design, and their strategy of slowly deviating from the affordances of digital vernacular in order to create a deeper sense of plausibility and environmental presence.
While I absolutely love the deeper intention & experiential design of Persuasion Machines, there is one creative decision that I strongly disagree with. Persuasion Machines is a piece that critiques the tools of surveillance capitalism, yet it also uses some of those very same dark patterns. The team was livestreaming attendees’ sessions to YouTube as they went through the VR experience, without making it explicitly clear to the audience. I didn’t find out about it until I was in the middle of my interview with Ozel, and needless to say I wasn’t too happy about it.
Here are the links to the archives of the 3 days of livestreams of audience members watching Persuasion Machines at Sundance.
- Sunday, January 26th (2h 57m)
- Monday, January 27th (36m, 5h 43m, 5h 10m, & 5h 50m)
- Tuesday, January 28th (8h 59m)
The Persuasion Machines team didn’t go out of their way to disclose that the livestream was happening, and in fact were hoping to trick people into providing consent by presenting a long terms of service form that they later skewer in their piece. I honestly don’t remember signing a digital release form. It’s entirely possible that I did sign it, and I just don’t remember because I was in such a rush to see the experience. If I did sign it, then I definitely didn’t do a close reading of the release form, which says,
I acknowledge that The Othrs, LLC might be currently filming, photographing, video and audio taping and telecasting scenes at this location for inclusion in television programs to be initially exhibited on digitally via live stream.
IF YOU DO NOT WISH TO BE FILMED, PHOTOGRAPHED, VIDEOTAPED OR AUDIO TAPED, OR TO APPEAR ON TELEVISION, PLEASE LEAVE THIS LOCATION DURING OUR FILMING, VIDEOTAPING AND TELECASTING.
I trusted that the creators had my best interests in mind, and that they wouldn’t be adopting the very same dark patterns that they’re aiming to critique. I certainly don’t recall anyone verbally disclosing that I was entering into a space that was going to be livestreaming all of my moves. There are only archives from Sunday, Monday, and Tuesday, so it’s also possible that they weren’t streaming yet on that previous Thursday, and didn’t have the journalists sign the digital release form.
The quality of the stream is indeed very low, and there may be a perceived safety in thinking that all of the data in the videos is anonymous and private. Yet there are moments when attendees are clearly identifiable, and there are also many surprising ways of re-identifying the data: people recognizing themselves, their friends, family, or acquaintances, as well as techniques like gait detection and other artificial intelligence systems that aggregate data from other sources.
But even if there isn’t any personally identifiable information on these streams, it’s the deeper principle of not providing human-readable, contextually clear, informed consent that is the most bothersome. It’s the same broken consent transaction that the companies themselves are using, and so my overall experience of their execution felt more like they were replicating and amplifying the problem.
If the intention of this was to be provocative and get the audience angry, then it certainly achieved that with me. However, I don’t think the reveal of the stealth livestream was all that clear, as most people I tell this to were not aware of the livestreaming. It sparked a dynamic debate with Ozel during our interview, and I quickly conceded after realizing I couldn’t recall whether I had signed my consent away or not.
Deconstructive Criticism versus Constructive Solutions
If I were to try to summarize an argument to the Persuasion Machines creators Ozel and Amer, then I’d say that creators have a moral responsibility to not just replicate dark patterns, but to actively try to construct solutions that help to solve some of these issues around privacy. There’s a role for creating cautionary tales and provocative demonstrations within an artistic context in order to make a larger point, but there’s also a role for exploring creative solutions and implementing best practices.
As Helen Nissenbaum explains, there has already been a movement towards “comprehensible privacy policies, more usable choice architectures, opt-in and not opt-out, tiered consent, and more effective notice.” She cites Ryan Calo’s work on this in his papers Code, Nudge, or Notice? and Against Notice Skepticism in Privacy (and Elsewhere).
It wasn’t made clear to me, many of the other attendees, or even the organizers of the Sundance New Frontier that Persuasion Machines was actively livestreaming attendees without their full, informed consent. What does informed consent look like in the future? Should privacy be considered like a copyright licensing contract that we maintain control over, as Adam Moore argues? Or should privacy be considered a human right that we can’t choose to yield, as Anita Allen argues?
Helen Nissenbaum says that currently there’s an escape clause in all privacy protections: you can do anything that you want as long as you get someone’s consent. But operationalizing choice through unread and unreadable terms of service that are too complicated to fully comprehend is not the way to go. Do we need to build an institutional backdrop of contextual integrity that protects privacy in certain contexts? Nissenbaum says there can be room for choice, but that choice shouldn’t be able to trump some of our fundamental rights to privacy and protections. There are still a lot of open questions in the philosophy of privacy, and I have some references down below to dig into more details of these debates.
But another point that I should make is that currently our rights to privacy are normative standards that change based upon our collective behaviors. The way in which a “reasonable expectation of privacy” is defined at any moment is determined by what the culture is doing. As more and more companies and artists start to broadcast representations onto the Internet, this actually starts to erode our rights to privacy. The third-party doctrine also means that any data collected by a third party has no reasonable expectation of privacy, which means that all broadcast data is available to the US government without needing a warrant.
I’m not sure if Ozel and Amer were aware of these deeper privacy dynamics, but in hindsight I’m glad that I had the experience because it catalyzed a lot of deeper research for me into the philosophical foundations of ethics and privacy with immersive technologies. I went down quite a rabbit hole of references and deeper constructive conversations about this topic that I’m including down below, along with some of my previous work on ethics and privacy in XR.
Resources to Get More Context on Ethics & Privacy in XR
Here are some pointers to some of the previous work that I’ve done on ethics and privacy in mixed reality.
- An aggregation of 23 Voices of VR interviews from 2015-2018 on Privacy
- A series of 13 Voices of VR interviews, talks, panels on ethics and privacy from 2019
- My main stage presentation from Augmented World Expo on The Ethical & Moral Dilemmas of Mixed Reality
- My XR Ethics Manifesto + slides of an ethical design framework for experiential design.
Here are more academic references on the ethics and philosophy of privacy in relation to technology:
- DeCew, Judith, Privacy. Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), Stanford University, 18 Jan. 2018, Retrieved from https://plato.stanford.edu/entries/privacy/ on March 10, 2020.
- Allen, Anita L., The Philosophy of Privacy and Digital Life (2019 Eastern Division Presidential Address), Proceedings and Addresses of the American Philosophical Association, vol. 93, 2019, pp. 21-38.
- Allen, Anita L., Synthesis and Satisfaction: How Philosophy Scholarship Matters, 20 Theoretical Inquiries in Law 343 (2019). Retrieved from https://www7.tau.ac.il/ojs/index.php/til/article/view/1618 on March 10, 2020.
- [University of Pennsylvania Carey Law School]. (2015, June 17). Privacy Conference: Law, Ethics, and Philosophy of End User Responsibility for Privacy [Video File]. Recorded on April 24, 2015. Retrieved from https://www.youtube.com/watch?v=8WIB_2isRxw on March 10, 2020. 2015 CTIC Privacy Conference Website
- A. Barth, A. Datta, J. C. Mitchell and H. Nissenbaum, Privacy and contextual integrity: framework and applications, 2006 IEEE Symposium on Security and Privacy (S&P’06), Berkeley/Oakland, CA, 2006, pp. 184. Retrieved from https://www.andrew.cmu.edu/user/danupam/bdmn-oakland06.pdf on March 10, 2020.
- Nissenbaum, Helen, Contextual Integrity Up and Down the Data Food Chain, 20 Theoretical Inquiries in Law 221 (2019). Retrieved from https://www7.tau.ac.il/ojs/index.php/til/article/view/1614 on March 10, 2020.
- Calo, Ryan, Against Notice Skepticism in Privacy (and Elsewhere), 87 Notre Dame L. Rev. 1027 (2013). Retrieved from https://www.law.upenn.edu/live/files/4442-ssrn-id1790144pdf on March 10, 2020.
- Calo, Ryan, Code, Nudge, or Notice?, 99 Iowa L. Rev. 773 (2014). Retrieved from https://www.law.upenn.edu/live/files/4443-ssrn-id2217013pdf on March 10, 2020.
- Barocas, Solon, and Helen Nissenbaum. Big Data’s End Run around Procedural Privacy Protections. Communications of the ACM, vol. 57, no. 11, 2014, pp. 31–33., doi:10.1145/2668897. Retrieved from https://nissenbaum.tech.cornell.edu/papers/Big%20Datas%20End%20Run%20Around%20Procedural%20Protections.pdf on March 10, 2020.
- Hildebrandt, Mireille. Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law, vol. 20, no. 1, 2019, pp. 83–121., doi:10.1515/til-2019-0004. Retrieved from https://www7.tau.ac.il/ojs/index.php/til/article/view/1622 on March 10, 2020.
- McQuillan, Dan. Data Science as Machinic Neoplatonism. Philosophy of Technology, 31, 253–272 (2018). Retrieved from https://link.springer.com/article/10.1007/s13347-017-0273-3 on March 10, 2020.
Hopefully I’ll be able to synthesize a lot of these thoughts in the course of more interviews and conversations here soon.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.