#988: Defining “Biometric Psychography” to Fill Gaps in Privacy Law to Cover XR Data: Brittan Heller’s Human Rights Perspectives

Brittan Heller is a human rights lawyer who recently published a paper pointing out significant gaps in privacy law: existing statutes do not cover the types of physiological and biometric data that will be available from virtual and augmented reality. Existing laws around biometrics are tightly connected to identity, but she argues that there is an entirely new class of data available from XR that she’s calling “biometric psychography,” which she describes as a “new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests.”

Her paper, published in the Vanderbilt Journal of Entertainment and Technology Law, is titled “Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law.” She points out that “biometric data” is actually defined pretty narrowly in most state laws, tightly connected to identity and personally-identifiable information. She says,

Under Illinois state law, a “biometric identifier” is a bodily imprint or attribute that can be used to uniquely distinguish an individual, defined in the statute as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” 224 Exclusions from the definition of biometric identifier are “writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color” and biological material or information collected in a health care setting. 225

The types of biometric data coming from immersive technologies are more like data that used to be collected only within a health care setting. One of her citations is a 2017 Voices of VR podcast interview I did with behavioral neuroscientist John Burkhardt on “Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior,” which lists some of the types of biometric psychographic data that will be made available to XR technologists. Heller says in her paper,

What type of information would be included as part of biometric psychographics? One part is biological info that may be classified as biometric information or biometric identifiers. 176 Looking to immersive technology, the following are biometric tracking techniques: (1) eye tracking and pupil response; 177 (2) facial scans; 178 (3) galvanic skin response; 179 (4) electroencephalography (EEG); 180 (5) electromyography (EMG); 181 and (6) electrocardiography (ECG). 182 These measurements tell much more than they may indicate on the surface. For example, facial tracking can be used to predict how and when a user experiences emotional feelings. 183 It can trace indications of the seven emotions that are highly correlated with certain muscle movements in the face: anger, surprise, fear, joy, sadness, contempt, and disgust. 184 EEG shows brain waves, which can reveal states of mind. 185 EEG can also indicate one’s cognitive load. 186 How aversive or repetitive is a particular task? How challenging is a particular cognitive task? 187 Galvanic skin response shows how intensely a user may feel an emotion, like anxiety or stress, and is used in lie detector tests. 188 EMG senses how tense the user’s muscles are and can detect involuntary micro-expressions, which is useful in detecting whether or not people are telling the truth since telling a lie would require faking involuntary reactions. 189 ECG can similarly indicate truthfulness, by seeing if one’s pulse or blood pressure increases in response to a stimulus. 190

While it’s still unclear whether these data streams will end up having personally-identifiable signatures that are only detectable by machine learning, the larger issue is that once these physiological data streams are fused together, whoever holds them will be able to extrapolate a lot of psychographic information about our “likes, dislikes, preferences, and interests.”
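To make the fusion concern more concrete, here is a minimal, purely hypothetical Python sketch of how a handful of the signals from Heller’s list could be combined into a crude “interest” profile. Every name, weight, and threshold below is invented for illustration; a real system would run machine-learned models over raw sensor streams. The point is that none of these signals identifies you on its own, yet fused together they start to reveal what you care about.

```python
# Hypothetical illustration only: signal names, weights, and thresholds are
# invented for this sketch, not taken from any real XR platform.
from dataclasses import dataclass

@dataclass
class BiometricSample:
    """One fused snapshot of physiological signals while content is on screen."""
    facial_valence: float    # -1.0 (negative affect) .. 1.0 (positive), from face tracking
    gsr_arousal: float       # 0.0 .. 1.0, normalized galvanic skin response (intensity)
    heart_rate_delta: float  # beats-per-minute change from the user's resting baseline
    gaze_dwell_s: float      # seconds of eye-tracked fixation on the content

def interest_score(sample: BiometricSample) -> float:
    """Fuse signals into a crude 0..1 'interest' estimate for the viewed content."""
    # Arousal (GSR, heart rate) says *how strongly* the user is reacting;
    # valence (facial expression) says whether the reaction is positive;
    # gaze dwell says whether attention was actually on the content.
    arousal = 0.6 * sample.gsr_arousal + 0.4 * min(abs(sample.heart_rate_delta) / 20.0, 1.0)
    attention = min(sample.gaze_dwell_s / 5.0, 1.0)        # saturate after 5 s of dwell
    valence_weight = (sample.facial_valence + 1.0) / 2.0   # map -1..1 onto 0..1
    return arousal * attention * valence_weight

# Example: the same headset data, attached to content labels, becomes a profile
# of "likes, dislikes, preferences, and interests" without ever identifying the user.
observations = {
    "political_ad": BiometricSample(-0.7, 0.8, 12.0, 4.2),
    "sports_clip":  BiometricSample(0.9, 0.6, 8.0, 5.0),
}
profile = {label: round(interest_score(s), 2) for label, s in observations.items()}
print(profile)  # {'political_ad': 0.09, 'sports_clip': 0.49}
```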

Currently, there are no legal protections setting limits on what private companies or third-party developers can do with this data. There are a lot of open questions around the limits of what we consent to share, but also around the degree to which access to all of this data might put users in a position where their neuro-rights of agency, identity, or mental privacy are undermined by whoever holds it.

Heller is a human rights lawyer whom I previously interviewed in July 2019 about how she’s been applying human rights frameworks to curtail harassment and hate speech in virtual spaces. Now she’s looking at how human rights frameworks and agreements may help set a baseline of rights that is consensus-based, in the sense that there is no legal enforcement mechanism. She cited the “UN Guiding Principles on Business and Human Rights” as an example of a framework used to combine a human rights lens with company business practices around the world. Here’s a European Parliament policy study of the UN Guiding Principles on Business and Human Rights that gives a graphical overview:

[Image: UN Guiding Principles on Business and Human Rights overview]

One of the biggest open issues to be resolved is how the concept of “biometric psychography” gets enshrined into some sort of federal or state privacy law so that it becomes legally binding on these companies. Heller talked about a hierarchy of laws: international law sits at a higher, more abstract level that isn’t always legally binding within national, regional, or state jurisdictions. She said that citing international law in a US court is often not going to be a winning strategy.

[Image: hierarchy of legal contexts]

Another way to look at this issue is as a nested set of contexts: cultural norms; international, national, regional, and city laws; and the economic layer of business relationships. Even though Article 12 of the UN’s Universal Declaration of Human Rights says, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks,” there are contextual dimensions of privacy where individuals can enter into Terms of Service and Privacy Policy agreements with these businesses, consenting for companies to hold privileged information that could be used to undermine our sense of mental privacy and agency.

[Image: nested hierarchy of contexts]

Ultimately, the United States may need to implement a federal privacy law that sets up guardrails for what companies can and cannot do with the types of biometric psychographic data that come from XR. I previously discussed the history and larger context of US privacy law with privacy lawyer Joe Jerome, who explained that even though there is a lot of bipartisan consensus on the need for some sort of federal privacy law, there are still partisan disagreements on a number of issues. A lot of US privacy legislation is being passed at the state level, which the International Association of Privacy Professionals is tracking here.

Heller’s paper is a great first step in explaining some of the types of biometric psychographic data that are made available by XR technologies, but it’s still an open question whether laws should be implemented at the federal or state level to set up guardrails for how this data is used and in what contexts. I’m a fan of Helen Nissenbaum’s contextual integrity approach to privacy as a framework for differentiating contexts and information flows, but I have not seen a generalized approach that maps out the range of different contexts and feeds that back into a privacy framework or privacy law. Heller suggested to me that creating a consensus-driven ethical framework that businesses consent to could be a first step, even if there is no real accountability or enforcement.

Another community that is starting to have these conversations is neuroscientists interested in neuroethics and neuro-rights. There is an upcoming, free Symposium on the Ethics of Noninvasive Neural Interfaces on May 26th, hosted by the Columbia Neuro-Rights Initiative and co-organized by Facebook Reality Labs.

Columbia’s Rafael Yuste is one of the co-authors of the paper “It’s Time for Neuro-Rights,” published in Horizons: Journal of International Relations and Sustainable Development. The authors also take a human rights approach, defining fundamental rights to agency, identity, mental privacy, fair access to mental augmentation, and protection from algorithmic bias. But again, the real challenge is how these higher-level rights in international law get implemented in a way that has a direct impact on the companies delivering these neural technologies. How will these rights be negotiated from context to context (especially for consumer technologies that themselves span a wide range of contexts)? What should the limits be on who has access to biometric psychographic data from non-invasive neurotechnologies like XR? And should there be limits on what they’re able to do with this data?

I have a lot more questions than answers, but Heller’s definition of “biometric psychography” will hopefully move these discussions around privacy beyond personally-identifiable information and identity, and toward how this data provides benefits and risks to our agency, identity, and mental privacy. Figuring out how to conceptualize, comprehend, and weigh all of these tradeoffs is one of the more challenging aspects of XR ethics, and something we still need to collectively figure out as a community. It’s going to require a lot of interdisciplinary collaboration between immersive technology creators, neuroscientists, human rights and privacy lawyers, ethicists and philosophers, and many other producers and consumers of XR technologies.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Update April 20th: On April 11th, I posted this visualization of the relational dynamics that we covered in this discussion:
[Image: Preserving Mental Privacy]

Here is a simplified version of this graphic that helps to visualize the relational dynamics for how human rights and ethical design principles fit into technology policy and the ethics of technology design.
[Image: ecosystem visualization]

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality