In her Contextual Integrity theory of privacy, Helen Nissenbaum defines privacy as “appropriate flows of information,” where appropriateness is defined by the context and its contextual informational norms. Contextual Integrity is a paradigm shift away from the Fair Information Practice Principles, which emphasize a model of privacy focused on the control of personal information. Framing privacy through the lens of control has led to a notice-and-consent model of privacy, which many have argued fails to actually protect our privacy. Rather than defining privacy as the control of private information, Contextual Integrity focuses on appropriate flows of information relative to the stakeholders within a specific context who are trying to achieve a common purpose or goal.
Nissenbaum’s Contextual Integrity theory of privacy reflects the context-dependent nature of privacy. She was inspired by social theories like Michael Walzer’s Spheres of Justice and Pierre Bourdieu’s field theory, as well as what others refer to as domains. These are ways of breaking society up into distinct spheres that have their “own contextual information norms” for how data are exchanged to satisfy shared intentions. Nissenbaum resists choosing a specific social theory of context, but has offered the following definition of context in one of her presentations.
Contexts – differentiated social spheres defined by important purposes, goals, and values, characterized by distinctive ontologies, roles and practices (e.g. healthcare, education, family); and norms, including informational norms — implicit or explicit rules of info flow.
Nissenbaum emphasized that informational exchanges should help to achieve the mutual purposes, goals, and values of that specific context. As an example, you would be more than willing to provide medical data to your doctor for the purpose of evaluating your health and helping you heal from an ailment. But you may be more cautious in providing that same medical information to Facebook, especially if it was unclear how they intended to use it. The intention and purpose of these contexts help to shape the underlying informational norms that help people understand what information is okay to share given the larger context of that exchange.
Nissenbaum specifies five main parameters of these contextual informational norms that can be used to form rules for determining whether or not privacy has been preserved.
(Image credit: Cornell)
The first three parameters are the actors in the data exchange: the sender, the recipient, and the subject whom the information is about.
Then there are the different types of information being exchanged, whether physiological data, biographical data, medical information, financial information, etc.
Finally, the transmission principles under which an exchange occurs span a wide range: with informed consent, bought or sold, coerced, through asymmetrical adhesion contracts or reciprocal exchange, via a warrant, stolen or surreptitiously acquired, with notice, or as required by law.
Notice and consent is embedded within the framework of Contextual Integrity through the transmission principle, which includes informed consent and adhesion contracts. But Nissenbaum says that in order for contextual integrity to be preserved, all five of these parameters need to be properly specified.
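To make the five-parameter structure concrete, here is a minimal sketch in Python of a norm as a five-parameter tuple and an informational flow checked against the established norms of a context. This is my own illustration, not Nissenbaum’s formalism; the field names and example norms are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch: one informational norm (or one actual flow) as a
# five-parameter tuple. Field names are hypothetical, not Nissenbaum's.
@dataclass(frozen=True)
class InformationFlow:
    sender: str
    recipient: str
    subject: str
    information_type: str
    transmission_principle: str

def conforms(flow: InformationFlow, norms: set[InformationFlow]) -> bool:
    """A flow preserves contextual integrity only if it matches an
    established norm of the context on all five parameters."""
    return flow in norms

# A norm of the healthcare context: patients share their own medical
# data with their doctor under informed consent.
healthcare_norms = {
    InformationFlow("patient", "doctor", "patient",
                    "medical", "informed consent"),
}

# Matches the norm on all five parameters.
assert conforms(
    InformationFlow("patient", "doctor", "patient",
                    "medical", "informed consent"),
    healthcare_norms)

# Same information type sold to an advertiser: no matching norm.
assert not conforms(
    InformationFlow("patient", "advertiser", "patient",
                    "medical", "sold"),
    healthcare_norms)
```

The point of the sketch is that changing any single parameter — the recipient, the information type, or the transmission principle — can flip an appropriate flow into a violation, which is why all five parameters need to be specified.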
The final step within the Contextual Integrity theory of privacy is to evaluate whether or not the informational norm is “legitimate, worth defending, and morally justifiable.” This includes looking at the stakeholders and analyzing who may be harmed and who is benefitting from any informational exchange; looking at whether the exchange diminishes any political or human rights principles, like freedom of speech; and finally evaluating how the information exchange serves the contextual domain’s function, purpose, and value.
Overall, Nissenbaum’s Contextual Integrity theory of privacy provides a really robust definition of privacy and a foundation to build upon. She collaborated with some computer scientists who formalized “some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information.” They were able to formally represent some of the information flows in legislation like HIPAA, COPPA, and GLBA in their logical language, meaning it could be possible to create computer programs that enforce compliance.
Even though Contextual Integrity is very promising as a replacement for existing privacy legislation, there are some potential limitations. The theory leans heavily upon normative standards within given contexts, but my observation with immersive XR technologies is that XR not only blurs and blends existing contexts, but is also creating new precedents of information flows that go above and beyond existing norms. Here’s my state of XR privacy talk from the AR/VR Association’s Global Summit that provides more context on these new physiological information flows and the still relatively underdeveloped normative standards around what data are going to be made available to consumer technology companies and what might be possible to do with this data:
The blending and blurring of contexts could lead to a context collapse. As these consumer devices become able to capture and record medical-grade data, these neuro-technologies and XR technologies are combining the informational norms of the medical context with those of the consumer technology context. This has already been happening over the years with different kinds of physiological tracking, but XR and neuro-technologies will start to create new ethical and moral dilemmas with the types of intimate information that will be made available. Here’s a list of physiological and biometric sensors that could be integrated within XR within the next 5-20 years:
It is also not entirely clear how Contextual Integrity would handle the implications of inferred data, like what Ellysse Dick refers to as “computed data” or what Brittan Heller refers to as “biometric psychographic data.” A lot of really intimate information can be inferred and extrapolated about your likes and dislikes, context-based preferences, and even your essential character, personality, and identity from observed behaviors combined with physiological reactions, given full situational awareness of what you’re looking at and engaged with inside of a virtual environment (or, with AR, via eye tracking plus egocentric data capture). Behavioral neuroscientist John Burkhardt details some of these biometric data streams, and the unethical threshold between observing and controlling behaviors.
Here’s a map of all of the types of inferred information that can come from image-based eye tracking, from Kröger et al.’s article “What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking”:
It is unclear to me how Nissenbaum’s Contextual Integrity theory of privacy might account for this type of computed data, biometric psychographic data, or inferred data. It doesn’t seem like inferred data fits under the category of information types. Perhaps there needs to be a new category for inferred data that could be extrapolated from data that’s transmitted or aggregated. It’s also worth looking at the taxonomy of data types from Ellysse Dick’s work on bystander privacy in AR.
It’s also unclear whether this inferred data would need to be properly disclosed, or whether the data subject has any ownership rights over it. The ownership or provenance of the data also isn’t fully specified between the sender and recipient, especially if there are multiple stakeholders involved. It’s also unclear to me how the intended use of the data is properly communicated within Contextual Integrity, and whether or not this is already covered within the “subject” portion of the actors.
Some of these open questions could be answered when evaluating whether or not an informational norm is “legitimate, worth defending, and morally justifiable.” Because VR, AR, and neuro-technologies are so new, these exchanges of information are also very new. Research neuroscientists like Rafael Yuste have identified some fundamental Neuro-Rights of mental privacy, agency, and identity that are potentially threatened by the types of neurological and physiological data that will soon be made available within the context of consumer neuro-tech devices like brain-computer interfaces, watches with EMG sensors, or other physiological sensors that will be integrated into headsets like the Project Galea collaboration between OpenBCI, Valve Software, and Tobii eye tracking.
I had a chance to talk with Nissenbaum to get an overview of her Contextual Integrity theory of privacy, but we had limited time to dig into some of the aspects of mental privacy. She said that the neuro-rights of agency and identity are significant ethical and moral questions that are beyond the scope of what her privacy framework is able to properly evaluate or address. She’s also generally skeptical about treating privacy as a human right, because you’ll ultimately need a legal definition that allows you to exercise that legal right, and if it’s still contextualized within the paradigm of notice and consent, then we’ll be “stuck with this theory of privacy that loads all of the decision onto the least capable person, which is the data subject.”
She implied that even if there are additional human rights laws at the international level, as long as there is a consent loophole like there is within the existing adhesion contract frameworks of notice and consent, then they’re not going to ensure that you can exercise that right to privacy. This likely means that US tech policy folks may need to use Contextual Integrity as a baseline for forming new state or federal privacy laws, which is where privacy law could be enforced and the rights to privacy asserted. There’s still value in having human rights laws shape regional laws, but there are not many options for enforcing the right to privacy through the mechanisms of international law.
Finally, I had a chance to show Nissenbaum my taxonomy of contexts that I’ve been using in the context of my XR Ethics Manifesto, which maps out the landscape of ethical issues within XR. I first presented this taxonomy of the domains of human experience at the SVVR conference on April 28, 2016, as I was trying to categorize the range of answers about the ultimate potential of VR into different industry verticals or contexts.
We didn’t have time to fully unpack all of the similarities to some of her examples of contexts, but she said that it’s in the spirit of other social theorists who have started to map out their own theories for how to make sense of society. I’m actually skeptical that it’s possible to come up with a comprehensive or complete mapping of all human contexts. But I’m sharing it here in case it might shed any additional insights into how something like Nissenbaum’s Contextual Integrity theory of privacy could be translated into a Federal Privacy Law framework, and whether it’s possible or feasible to comprehensively map out the purposes, goals, and values of each of these contexts, their appropriate information flows, and their stakeholders.
It’s probably an impossible task to create a comprehensive map of all contexts, but it could provide some helpful insights into the nature of VR and AR. I think context is a key feature of XR. With augmented reality, you’re able to add additional contextual information on top of the center of gravity of an existing context. And with virtual reality, you’re able to do a complete context shift from one context to another. Having a theoretical framework for context may also help develop things like the “contextually-aware AI” that Facebook Reality Labs is currently working on. And if VR has the capability to represent simulations of each of these contexts, then a taxonomy of contexts could provoke thought experiments about how existing contextual informational norms might be translated and expressed within virtual reality as they’re blended and blurred with the emerging contextual norms of XR.
I’m really impressed with the theory of Contextual Integrity, since there are many intuitive aspects of the contextual nature of privacy that are articulated through Nissenbaum’s framework. It also helps to elaborate how Facebook’s notice and consent-based approach to privacy in XR does not fully live up to their Responsible Innovation principles of “Don’t Surprise People” or “Provide Controls that Matter.” Not only do we not have full agency over information flows (since they don’t matter enough for Facebook to provide controls over them), but there’s no way to verify whether or not the information flows are morally justifiable, since there isn’t any transparency or accountability around these information flows and all of the intentions and purposes Facebook has for the data that are made available to them.
Finally, I’d recommend folks check out Nissenbaum’s 2010 book Privacy in Context: Technology, Policy, and the Integrity of Social Life for a longer elaboration of her theory. I also found these two lectures on Contextual Integrity particularly helpful in my preparation for my interview with her on the Voices of VR podcast: Crypto 2019 (August 21, 2019) and the University of Washington Distinguished Speaker Series at the Department of Human Centered Design & Engineering (March 5, 2021). And here’s the video where I first discovered Nissenbaum’s Contextual Integrity theory of privacy, where she was in conversation with philosophers of privacy Dr. Anita Allen and Dr. Adam Moore at a Privacy Conference: Law, Ethics, and Philosophy of End User Responsibility for Privacy, recorded on April 24, 2015 at the University of Pennsylvania Carey Law School.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.