
Container is a hyperreal, 180-degree spatial art installation that explores modern-day slavery. Visual artist and visual anthropologist Meghna Singh and documentary filmmaker Simon Wood use the shipping container as a consistent piece of architecture across space and time, creating a spatial metaphor that viscerally connects the products shipped in these containers with the oppression and exploited human labor that remains invisible to consumers.

Container is a provocative and stylized piece of immersive storytelling that has created some visceral scenes of slavery that are deeply lodged in my memory. It pushes forward the grammar of immersive storytelling by combining art installation, history, theater, and 180-degree video to create a sort of poetic spatial anthropology that makes associative connections in an embodied and dreamlike fashion. The piece is designed to prompt the audience to reflect on how we may be unwittingly participating in systems of modern-day slavery, and the artists hope to take it to film festivals around the world and to create shipping container installations and showings in port cities involved in the slave trade.

The piece is situated within the context of the port city of Cape Town, South Africa, but it contains no spoken words, which makes it generalizable to a global context.

I had a chance to talk remotely with co-directors Meghna Singh and Simon Wood about their four-year journey of producing this piece during its World Premiere at the Venice Film Festival.

Container is one of the more evocative pieces of 180- or 360-degree video I’ve seen this year, and it is currently available at Venice VR Expanded until September 19th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


Kevin Mack’s Anandala is an awe-inspiring piece of VR-native, generative, abstract art with some really compelling experiments in interactive, embodied NPC entities he calls “blorts.” There are 130 of these blorts spread throughout a massive series of never-ending tunnels split into different zones. I’ve had some of the most compelling and playful interactions with these blorts, going beyond anything else I’ve experienced in VR. The blorts feel alive as they exhibit a broad range of responsive behaviors that I interpret as curious, playful, contextually aware, interactive, and non-aggressively embodied.

While not driven by any machine learning or neural architecture, the heuristic-based AI behind the blorts takes in enough external inputs to make it very difficult to predict what they’re going to do. This includes reacting to your position, gestures, movements, and musical sonifications, as well as what is happening in the surrounding context over time, as each blort maintains a short-term memory of your actions and its own changing state. Mack said that there are a number of ways that he can tune each blort so that each has its own unique set of behaviors, personality, and character.

Each blort also has a unique underlying topology, but it is constantly shapeshifting via a vertex shader that responds to a number of environmental inputs. Each blort also has an interactive, dynamic shader texture that’s like a psychedelic abstraction of fluid dynamics. It’s a variation of the shader on the walls of the never-ending tunnels, and the combination of these generative inputs produces an infinite source of perpetual novelty that’s both viscerally stimulating and feels like a boundless source of awe as it consistently subverts my ability to predict what’s going to happen next.

Interestingly enough, even though Anandala is viscerally stimulating and cognitively engaging, it still manages to have an overall calming and hypnotic quality, with a sort of visual entrainment, a transcendental Buddhist soundtrack, and the induction of a flow state driven by curiosity and poetic interpretation of the blorts’ embodied behaviors, like a divinatory reading of tea leaves to try to discern what these alien intelligence metaphors are doing and why. It’s hard not to anthropomorphize these blorts, talk to them, and use them as a blank slate to project ourselves onto. Mack shares a series of his own interactions and conversations from developing the piece over the last couple of years.

Anandala is the spiritual successor to Blortasia, which features a similar underlying architecture and environmental shaders, along with a slightly refined flying mechanic that enables intuitive locomotion to explore the space. Again, the biggest changes in Anandala are how sophisticated and complex the blorts have become, as well as some Easter egg behaviors and locations to discover.

I had a chance to interview Mack after Anandala’s World Premiere at the Venice VR Expanded festival, which runs until September 19th. I highly recommend checking it out if you have Venice accreditation, or you can get temporary access to it via Viveport Infinity until the end of the festival.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


GOLIATH: PLAYING WITH REALITY is an interactive VR story that explores the experience of schizophrenia and psychosis through the story of a Twitch streamer named GoliathGames. It’s a really strong piece of immersive storytelling that balances the interactivity of gaming metaphors in service of the story, with great pacing and brilliant onboarding and offboarding voiced by Tilda Swinton.

I had a chance to talk remotely with Barry Murphy, director at Anagram, and May Abdalla, co-founder of Anagram, while they were at Venice VR Expanded. We unpack the art and experiential design direction, their background research on how best to represent psychosis, and the evolution of the piece since its Tribeca World Premiere.

GOLIATH launches on Oculus for free on Thursday, September 9th, and is currently in competition at the Venice Film Festival.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


Kusunda is a spatial documentary about an indigenous language in Nepal that is on the brink of going dormant. It’s an incredibly powerful story that uses photogrammetry, Tilt Brush interpretations of oral history, and interactive natural language processing to help teach the interactor a number of Kusunda vocabulary terms. It’s a production that had to adapt to COVID-19 by innovating other metaphoric and spatial ways of telling this story.

I had a chance to talk with NowHere Media co-founders Gayatri Parameswaran and Felix Gaedtke after their Tribeca World Premiere in June 2021. We explore their journey producing this piece, the special considerations of telling this story spatially, and the deep listening involved in producing a piece like this.

Kusunda won the Tribeca Storyscapes Grand Jury Prize, and is currently featured out of competition in the Venice Film Festival’s VR Expanded selection until September 19th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The Venice VR Expanded portion of the Venice Film Festival opens today and runs from September 1-19, 2021. I had a chance to talk to co-curators Liz Rosenthal and Michel Reilhac to get a sneak peek of the 37 immersive storytelling projects: 24 projects in Competition, 11 in the Best Of section, 1 in the Biennale College Cinema VR section, and 1 Special Event Out of Competition.

There are also 34 VRChat worlds being featured in a VRChat world gallery, accessible via portal doors within a public instance of the Venice VR Expanded 2021 hub world in VRChat. There will be events in the private instance of this world, and you can get access to those events, as well as all of the experiences, with a 100€ accreditation fee that includes download codes for the projects, which are hosted on a combination of Viveport and Oculus.

We talk about the evolution of the Venice VR Expanded selection into its fifth edition in 2021, as well as some of the financing opportunities made available through the Venice Production Bridge and the special Biennale College Cinema VR program.

I’m looking forward to digging into this year’s selection, as there are always a lot of amazing innovations in immersive storytelling each and every year.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


Episode #1000 of the Voices of VR podcast is a special, three-hour retrospective featuring over 100 of the best answers I’ve received about the ultimate potential of VR over the past seven years. I hope that it serves as a primer on the scope and breadth of different XR applications across many different contexts and domains of human experience, as described by a diverse range of subject matter experts who are on the frontiers of innovation within spatial computing. The episode starts by describing the power of presence and immersion, and then elaborates on the underlying neuroscience foundations of perception and embodied cognition that explain why Extended Reality (XR) technologies like Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) present unique affordances that go above and beyond what previous communication mediums can deliver. I believe XR represents a new computing paradigm that will move from flat 2D interfaces through windowed portals to volumetric experiences and worlds that are more embodied, immersive, participatory, social, and spatial.

There’s been a lot of recent buzz and hype about the potential and possibilities of the “metaverse” ever since Mark Zuckerberg told Casey Newton on July 22 that as Facebook is “building out the next set of computing platforms, like virtual and augmented reality, to give people that sense of presence,” and that if they do this well enough then “I think over the next five years or so, in this next chapter of our company, I think we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.” What does Facebook mean by the metaverse? Is VR just a fad? Or is this type of rhetoric overblown hyperbole?

After conducting over 1,600 oral history interviews over the past seven years, I can say that there are some compelling reasons why I think VR and AR are on the cusp of becoming a reality. How the metaverse will continue to unfold, whether it’s closed or open and interoperable, and what it will ultimately look like is still up for debate, as it’s still being created. But what I can say is that there’s been a ton of foundational work done on these immersive technologies since the mid-1960s, starting with the military and continuing through academic research, then the first wave of enterprise VR in the early 90s, and now the second wave of consumer VR that was catalyzed in August 2012 with the Oculus Kickstarter.

Virtual Reality was tracked as an emerging technology on Gartner’s Hype Cycle up until 2017, but since 2018 it has no longer been tracked as an emerging technology, as it has graduated through the plateau of productivity into an established market. Virtual Reality is here, and it’s actually happening this time around. This podcast episode takes a 10% sample of my first 1000 episodes of the Voices of VR podcast to show a wide range of immediate, near-term, and potential future applications across a wide variety of contexts.

The first Oculus Developer Kits (the Oculus DK1) shipped in March 2013, and I bought my DK1 on January 1, 2014. I attended the first professional VR conference, the Silicon Valley Virtual Reality Conference, on May 19 and 20, 2014, where the excitement about the potential of this new medium was palpable. I ended up recording 44 interviews over those two days because I wanted to capture what felt like a historic moment within the community of early adopters and innovators who would prove to become key figures in the development and continued evolution of what’s possible within the VR medium. I feel like I’ve been in a collaborative conversation with the broader VR community over the past seven years, helping to document the full range of applications, but also tapping into the more philosophical, ethical, and future-dreaming potentials for where this could all go.

This episode is broken up into loose sections that cover XR applications and potential futures spanning everything from medicine, neuroscience & behavioral research, early education, training, hanging out with friends in Social VR, connecting to family, recording memories, travel, wayfinding, location-based entertainment, embodied AI & virtual assistants, telepresence & remote work, productivity, new spatial computing paradigms, interdisciplinary collaborations, new human-computer interaction paradigms, the metaverse, open standards like WebXR and OpenXR, empathy, immersive storytelling, journalism, documentary, porn, gaming, ethical considerations around addiction & escapism, trolling and harassment, risks to privacy via surveillance capitalism, risks to Neuro Rights of Agency and autonomy, identity, psychological implications of avatar embodiment, accessibility, diversity and inclusion, the philosophical implications of our first-person perspective, situated knowledges, pluralism, filter bubbles of reality and worldviews, activism, ecological and relational awareness, speculative design and worldbuilding, future dreaming, indigenous futurism, latent human potentials, expanding perception, consciousness hacking and transformative practices, cooperative models of AI development, simulation theory, virtual goods and resources, environmental sustainability, more equitable access to experiences, limits of simulation vs reality, being able to see reality in a new way, and being in right relationship to ourselves, to each other, to the planet, and to all aspects of reality.

Thanks for coming along on this journey so far, and please consider supporting this work via the Voices of VR Patreon. I wouldn’t have been able to come this far without the support of the community, and I have over 600 other unpublished interviews with many more insights and reflections on the ultimate potential of VR that I look forward to digging into. More detailed show notes can be found below:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Links to episodes referenced in the podcast:

Introduction

Neuroscience Foundations

Medicine

Early Education, Training, & Behavioral Research

Hanging Out with Friends in Social VR

Connecting to Family & Recording Memories

Travel

Augmented Reality, Wayfinding, Location-Based Entertainment, Embodied AI & Virtual Assistants

Telepresence & Remote Work, Productivity, New Spatial Computing Paradigms, Interdisciplinary Collaborations, & New Human-Computer Interaction Paradigms

The Metaverse, Open Standards like WebXR and OpenXR

Empathy, Immersive Storytelling, Journalism, & Documentary

Is it Porn or Gaming that’s Driving VR Innovation?

Gaming

Ethical Considerations Including: Addiction & Escapism, Trolling & Harassment, Risks to Privacy via Surveillance Capitalism, & Risks to Neuro-Rights of Agency & Autonomy

Identity, Self-Expression, Psychological Implications of Avatar Embodiment

Accessibility, Diversity & Inclusion, & Sense of Self

The Philosophical Implications of XR, Limits to First-person Perspective, Situated Knowledges, Pluralism, & Filter Bubbles of Reality & Worldviews

Consciousness Transformation, Activism, & Relational Awareness

Speculative Design, Worldbuilding, Future Dreaming, & Indigenous Futurism

Latent Human Potentials, Expanding Perception, Consciousness Hacking and Transformative Practices, & Cooperative Models of AI Development

Simulation Theory

Virtual Goods and Resources, Environmental Sustainability, More Equitable Access To Experiences, & Limits of Simulation vs Reality

Being Able to See Reality in a New Way, and Being In Right Relationship to Ourselves, to Each Other, to the Planet, and to All Aspects of Reality.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The Electronic Frontier Foundation is a non-profit that has been defending civil liberties and digital rights for over 30 years, since it was first announced on July 10, 1990. My first conversation with the EFF was with Dave Maass at the VR Privacy Summit in November 2018, when consumer VR was still in a nascent phase and a lower priority relative to other civil liberty threats at the time. But over the past year, the EFF has started to investigate the civil liberties and privacy implications of XR technologies, especially as VR has been gaining more commercial momentum. The EFF published the article “If Privacy Dies in VR, It Dies in Real Life” on August 25, 2020, a week after Facebook announced that Facebook accounts would be required to use Oculus VR headsets. After Facebook’s Project Aria AR prototype was announced at Facebook Connect, the EFF published “Augmented Reality Must Have Augmented Privacy” a month later, on October 16, 2020. Their latest article, published on June 2, 2021, is “Your Avatar is You, However You See Yourself, and You Should Control Your Experience.”

The EFF helped to organize a RightsCon session on June 10, 2021 titled “As AR/VR becomes a reality, it needs a human rights framework.” Avi Bar-Zeev and I helped to contextualize some of the issues with VR for the human rights lawyers and activists attending RightsCon, and then there were four different breakout groups for half the session, brainstorming different human rights, digital rights, and civil liberties across different contexts, including Law Enforcement Access to Potential Evidence & Surveillance, Biometric Inferences and Mental Privacy, Avatars and Virtual Agents, and Content Moderation.

I wanted to gather representatives from the EFF the day after the RightsCon session to debrief on insights into what a human rights framework might look like for XR from the perspectives of law, tech policy, tech architecture, and grassroots organizing.

  • Kurt Opsahl – Deputy Executive Director & Counsel, Electronic Frontier Foundation
  • Katitza Rodriguez – Policy Director for Global Privacy, EFF
  • Jon Callas – Director of Technology Projects
  • Rory Mir – Grassroots Organizer

We talk about human rights insights into XR technologies, along with the provocation of what it might look like to have more civil liberty rights within XR. We discuss the first UN General Assembly resolution on “The right to privacy in the digital age,” and how global policy perspectives provide new insights into international data protection regulations and treaties, resolutions passed by the UN General Assembly and the UN Human Rights Council, and actions taken by the UN Special Rapporteur on the Right to Privacy. We also cover some of the US domestic privacy laws concerning the Fourth Amendment, the third-party doctrine, different philosophical approaches to privacy, the invisible nature of privacy harms, and whether it’s possible to build off of laws like the Video Privacy Protection Act, since it contains some of the strongest US protections for privacy. We talk about whether or not existing notice-and-consent models provide truly informed consent that’s freely given. We touch on a number of tech projects like Privacy Badger and the EFF’s first VR project, Spot the Surveillance, which tech companies have your back in protecting your privacy, and how to get involved in your local EFF chapter at eff.org/fight.

How do we ensure that we have just as many, if not more, human rights in virtual reality? It’s going to take looking at existing human rights frameworks, and listening to the human rights advocates from around the world who are tracking the intersection of digital rights and human rights across different contexts. The EFF is helping to lead some vital discussions on the topic, and I was happy to participate in their RightsCon session and do this debriefing the day after. Below is a rough timeline of UN resolutions and reports from the UN General Assembly and the UN Human Rights Council related to the right to privacy.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Timeline of United Nations Resolutions Related to Privacy

10 December 1948: Universal Declaration of Human Rights, General Assembly resolution 217A [A/RES/217(III)]

Article 12
“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

16 December 1966: The International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, UN General Assembly Resolution 2200A (XXI) [A/RES/2200(XXI)].

Article 17 of the International Covenant on Civil and Political Rights
“1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
2. Everyone has the right to the protection of the law against such interference or attacks.”

4 December 1986: UN General Assembly Resolution 41/128 [A/RES/41/128]: Declaration on the Right to Development

1 September 1988: Sub-Commission on Prevention of Discrimination and Protection of Minorities 1988/29 [E/CN.4/1989/3, E/CN.4/Sub.2/1988/45]: Guidelines on the Use of Computerized Personal Files

28 September 1988: 43/40 Supplement No. 40 [E/CN.4/1989/86]: Report of the Human Rights Committee

Right to privacy
69. With reference to that issue, members of the Committee requested details on protection against arbitrary and unlawful interference with privacy, family, home, and correspondence, particularly with regard to postal and telephone communications. It was also asked whether evidence obtained in violation of the right to privacy could be used in the courts and, if so, whether such instances had occurred and what the reaction of the court had been, whether authorities other than judges could order a house to be searched and under what circumstances and whether wire-tapping was authorized by law.

6 March 1989: Commission on Human Rights resolution 1989/43 [E/CN.4/1989/86]: Guidelines on the use of computerized personal files

24 May 1989: UN Economic & Social Council 1989/78 [E/RES/1989/78]: Guidelines on the use of computerized personal data files

15 December 1989: General Assembly Resolution 44/132 [A/RES/44/132]: Guidelines for the regulation of computerized personal data files

14 December 1990: General Assembly Resolution 45/95 [A/RES/45/95]: Guidelines for the regulation of computerized personal data files

13 October 1993: The Vienna Declaration and Programme of Action, UN General Assembly A/CONF.157/24(PartI): Vienna Declaration and Programme of Action via the UN World Conference on Human Rights Vienna, 14-25 June 1993

7 January 1994: UN General Assembly Resolution 48/141 [A/RES/48/141]: High Commissioner for the promotion and protection of all human rights

24 February 2006: UN General Assembly Draft Resolution 60/L.48 [A/60/L.48] Draft resolution submitted by the President of the General Assembly, Human Rights Council

“Assume the role and responsibilities of the Commission on Human Rights relating to the work of the Office of the United Nations High Commissioner for Human Rights, as decided by the General Assembly in its resolution 48/141 of 20 December 1993;”

15 March 2006: UN General Assembly Resolution 60/251 [A/RES/60/251]: Human Rights Council. The Human Rights Council assumes “the role and responsibilities of the Commission on Human Rights”

22 June 2006: UN Human Rights Council 1/1 [A/HRC/1/1]: Adoption of the Agenda and Organization of Work, Note by the Secretary-General

16 May 2007: UN Human Rights Council 5/1 [A/HRC/5/1]: Provisional Agenda, Note by the Secretary-General. “To consider in particular the institution-building process” for Human Rights.

27 April 2007: UN Human Rights Council 5/2 [ A/HRC/5/2]: Implementation of General Assembly Resolution 60/251 of 15 March 2006 entitled “Human Rights Council”

28 March 2008: UN Human Rights Council Resolution 7/36 [7/36]: Mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

30 April 2009: UN Human Rights Council 11/4 [A/HRC/11/4]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

2 October 2009: UN Human Rights Council Resolution 12/16 [A/HRC/RES/12/16]: Promotion and protection of all human rights, civil, political, economic, social and cultural rights, including the right to development: Freedom of opinion and expression

8 April 2011: UN Human Rights Council Resolution 16/4 [A/HRC/RES/16/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

16 May 2011: UN Human Rights Council 17/27 [A/HRC/17/27]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

12 September 2011: International Covenant on Civil and Political Rights CCPR/C/GC/34 [CCPR/C/GC/34]: General comment No. 34, Article 19: Freedoms of opinion and expression, General remark

17 April 2013: UN Human Rights Council 23/40 [A/HRC/23/40]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

“The present report, submitted in accordance with Human Rights Council resolution 16/4 [of 8 April 2011], analyses the implications of States’ surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression. While considering the impact of significant technological advances in communications, the report underlines the urgent need to further study new modalities of surveillance and to revise national laws regulating these practices in line with human rights standards.”

21 March 2011: UN Human Rights Council 17/31 [A/HRC/17/31]: Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, John Ruggie

HRC/17/31 Annexes (p. 6-27): Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework.

10 August 2011: UN General Assembly 66/290 [A/66/290]: Promotion and protection of the right to freedom of opinion and expression

10 December 2013: UN General Assembly [A/68/456/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee*

“At its 2nd plenary meeting, on 20 September 2013, the General Assembly, on the recommendation of the General Committee, decided to include in the agenda of its sixty-eighth session, under the item entitled “Promotion and protection of human rights”, the sub-item entitled “Human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms” and to allocate it to the Third Committee.”

26 November 2013: UN General Assembly [A/C.3/68/SR.23]: Third Committee, Summary record of the 23rd meeting, Held at Headquarters, New York, on Wednesday, 23 October 2013, at 10 a.m.

“Ms. Almeida Watanabe Patriota (Brazil) expressed concern that some countries had still not accepted the universal periodic review. In the case of country-specific human rights resolutions, negotiations should be more transparent. Her delegation commended the impartial work of the commission of inquiry on the Syrian Arab Republic. In view of the recent revelations of violations of the fundamental right to privacy, she asked what the international community could do to help to enforce that right and whether, in the High Commissioner’s opinion, the absence of Internet privacy guarantees might undermine freedom of expression. She would also appreciate further comments on what Member States could do to help others realize that securing basic human rights for all vulnerable groups, including lesbian, gay, bisexual and transgender people, did not represent an emphasis on any one group.”

1 November 2013: UN General Assembly [A/C.3/68/L.45]: Brazil and Germany: draft resolution, The right to privacy in the digital age [Would ultimately pass as A/RES/68/167 on 18 December 2013: The right to privacy in the digital age]

10 December 2013: UN General Assembly [A/68/456/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee

“At the 43rd meeting, on 7 November, the representatives of Brazil and Germany made statements and on behalf of Austria, Bolivia (Plurinational State of), Brazil, the Democratic People’s Republic of Korea, Ecuador, France, Germany, Indonesia, Liechtenstein, Peru, Switzerland and Uruguay, jointly introduced a draft resolution, entitled “The right to privacy in the digital age” (), which read…”

10 December 2013: UN General Assembly 68/456/Add.2 [68/456/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee**. Rapporteur: Ms. Adriana Murillo Ruin

18 December 2013: First General Assembly Resolution 68/167 [A/RES/68/167]: The right to privacy in the digital age

“Welcoming the report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression [A/HRC/23/40] submitted to the Human Rights Council at its twenty-third session, on the implications of State surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression,”

30 June 2014: UN Human Rights Council 27/37 [A/HRC/27/37]: The right to privacy in the digital age, Report of the Office of the United Nations High Commissioner for Human Rights

18 December 2014: UN General Assembly Resolutions 69/166 [A/RES/69/166]: The right to privacy in the digital age

26 March 2015: UN Human Rights Council Resolution 28/16 [A/HRC/RES/28/16]: The right to privacy in the digital age

“Decides to appoint, for a period of three years, a special rapporteur on the right to privacy, whose tasks will include… to submit an annual report to the Human Rights Council and to the General Assembly, starting at the thirty-first session and the seventy-first session respectively;”

22 May 2015: UN Human Rights Council 29/32 [A/HRC/29/32]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye

21 October 2015: UN General Assembly Resolution 70/1 [A/RES/70/1] Transforming our world: the 2030 Agenda for Sustainable Development

16 December 2015: UN General Assembly Resolution 70/125 [A/RES/70/125] Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society

16 December 2015: Press release for General Assembly of the United Nations, “WSIS+10 Outcome: Countries Adopt New Plan to Utilize Internet and Information Technologies in Implementation of New Sustainable Development Agenda

1 February 2016: UN General Assembly Resolution 70/125 [A/RES/70/125]. Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society

1 July 2016: UN Human Rights Council 32/13 [A/HRC/RES/32/13]: The promotion, protection and enjoyment of human rights on the Internet

15 December 2016 (publication date) VIDEO: Professor Cannataci: UN Special Rapporteur on the Right to Privacy

Five Priorities of UN Special Rapporteur on the Right to Privacy: A better understanding of privacy, security & surveillance, Big Data & Open Data, Health Data, & Personal Data Held by Corporations

19 December 2016: UN General Assembly Resolutions 71/199 [A/RES/71/199]: The right to privacy in the digital age

23 March 2017: UN Human Rights Council Resolution 34/7 [A/HRC/RES/34/7]: The right to privacy in the digital age

17 to 28 June 2017: Visit to the United States of America Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci [See report A/HRC/46/37/Add.4 published on 20 January 2021]

6 September 2017: UN Human Rights Council 34/60 [A/HRC/34/60]: Report of the Special Rapporteur on the right to privacy

“In his report, prepared pursuant to Human Rights Council resolution 28/16, the Special Rapporteur on the right to privacy focuses on governmental surveillance activities from a national and international perspective. The Special Rapporteur elaborates on the characteristics of the international legal framework and the interpretation thereof. He also describes recent developments and trends, how these can be studied and how they interact with the enjoyment of the right to privacy and other interconnected human rights. Consequently, he outlines first approaches to a more privacy-friendly oversight of government surveillance. In conclusion, the Special Rapporteur reports on his activities in the period covered by his report.”

19 October 2017: UN General Assembly 72/540 [A/72/540]: Right to privacy, Note by the Secretary-General, Special Rapporteur of the Human Rights Council on the right to privacy, Joseph A. Cannataci, submitted in accordance with Human Rights Council resolution 28/16

6 April 2018: UN Human Rights Council Resolution 37/2 [A/HRC/RES/37/2] The right to privacy in the digital age

5 July 2018: UN Human Rights Council 38/7 [A/HRC/RES/38/7]: Promotion, protection and enjoyment of human rights on the Internet

13 July 2018: UN Human Rights Council 38/35/Add.5 [A/HRC/38/35/Add.5]: Encryption and Anonymity follow-up report

3 August 2018: UN Human Rights Council 39/29 [A/HRC/39/29]: The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights

“The present report is submitted pursuant to resolution 34/7, in which the Human Rights Council requested the High Commissioner for Human Rights to prepare a report identifying and clarifying principles, standards and best practices regarding the promotion and protection of the right to privacy in the digital age, including the responsibility of business enterprises in this regard, and present it to the Human Rights Council at its thirty-ninth session.”

29 August 2018: UN General Assembly 73/348 [A/73/348]: Promotion and protection of the right to freedom of opinion and expression

17 October 2018: UN General Assembly 73/438 [A/73/438]: Right to privacy, Note by the Secretary-General

29 November 2018: UN General Assembly 73/589 [A/73/589]: Promotion and protection of human rights. Report of the Third Committee

1 December 2018: UN General Assembly 73/589/Add.1 [A/73/589/Add.1]: Promotion and protection of human rights: implementation of human rights instruments, Report of the Third Committee

3 December 2018: UN General Assembly 73/589/Add.4 [A/73/589/Add.4]: Promotion and protection of human rights: comprehensive implementation of and follow-up to the Vienna Declaration and Programme of Action, Report of the Third Committee

4 December 2018: UN General Assembly 73/589/Add.2 [A/73/589/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee

6 December 2018: UN General Assembly 73/589/Add.3 [A/73/589/Add.3]: Promotion and protection of human rights: human rights situations and reports of special rapporteurs and representatives, Report of the Third Committee

17 December 2018: UN General Assembly Resolutions 73/179 [A/RES/73/179]: The right to privacy in the digital age

13 February 2019: Annex 4: Privacy Metrics – Consultation Draft, “Metrics for Privacy – A Starting Point”, Special Rapporteur on the right to privacy, Professor Joseph A. Cannataci.

11 July 2019: UN Human Rights Council 41/12 [A/HRC/RES/41/12]: The rights to freedom of peaceful assembly and of association

“Requests the Special Rapporteur to continue to report annually to the Human Rights Council and the General Assembly;”

5 August 2019: UN General Assembly 74/277 [A/74/277] Right to privacy, Note by the Secretary-General (Part II. Health-related data)

26 September 2019: UN Human Rights Council Resolution 42/15 [A/HRC/RES/42/15]: The right to privacy in the digital age

In Human Rights Council Resolution 42/15, “Paragraph 10 of the resolution requested the United Nations High Commissioner for Human Rights ‘to organize, before the forty-fourth session of the Human Rights Council, an expert seminar to discuss how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy [and] to prepare a thematic report on the issue’.”

16 October 2019: UN Human Rights Council 40/63 [A/HRC/40/63]: Right to privacy: Report of the Special Rapporteur on the right to privacy

24 March 2020: UN Human Rights Council 43/52 [A/HRC/43/52]: Report of the Special Rapporteur on the right to privacy

23 April 2020: UN Human Rights Council 44/49 [A/HRC/44/49]: Disease pandemics and the freedom of opinion and expression: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

13 May 2020: UN Human Rights Council 44/50 [A/HRC/44/50]: Ten years protecting civic space worldwide: Report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association

27-28 May 2020, [Expert Seminar Report – Right to Privacy] Report of the proceedings of the online expert seminar with the purpose of identifying how artificial intelligence, including profiling, automated decision-making and machine learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy

18 June 2020: UN Human Rights Council 44/57 [A/HRC/44/57]: Racial discrimination and emerging digital technologies: a human rights analysis, Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance

19 June 2020: UN Human Rights Council 43/4 [A/HRC/RES/43/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

24 June 2020: UN Human Rights Council 44/24 [A/HRC/44/24]: Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests, Report of the United Nations High Commissioner for Human Rights

20 July 2020: UN General Assembly 75/184 [A/75/184]: Rights to freedom of peaceful assembly and of association

27 July 2020: UN General Assembly 75/147 [A/75/147]: Right to privacy, Note by the Secretary-General

28 July 2020: UN General Assembly 75/261 [A/75/261]: Promotion and protection of the right to freedom of opinion and expression, Note by the Secretary-General

28 August 2020: UN General Assembly 75/329 [A/75/329]: Contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Note by the Secretariat

16 December 2020: UN General Assembly Resolutions 75/176 [A/RES/75/176]: The right to privacy in the digital age

20 January 2021: UN Human Rights Council 46/37/Add.4 [A/HRC/46/37/Add.4]: Visit to the United States of America, Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci

“The Special Rapporteur on the right to privacy, Joseph A. Cannataci, carried out an official visit to the United States of America between 17 and 28 June 2017. While praising several strengths of the United States system, the Special Rapporteur observed the risks resulting from fragmentation caused by organic growth and the misplaced confidence that certain conventions would be respected by the Executive. He recommends a gradual overhaul of privacy law, with a special focus on simplification, and an increase in both safeguards and remedies. He especially recommends that United States law should be reformed further to entrench the powers of existing and new oversight authorities while bringing safeguards and remedies for foreign intelligence up to the same standard as for domestic intelligence…

The present report was finalized in autumn 2020, after evaluating the preliminary results of the country visit in meetings held during the visit, which took place from 17 to 28 June 2017, and cross-checking them with follow-up research and developments to date. The benchmarks used in the present report include the privacy metrics document released by the Special Rapporteur.”

17 February 2021: UN Human Rights Council 46/37/Add.8 [A/HRC/46/37/Add.8]: Report of the Special Rapporteur on the right to privacy on his visit to the United States of America, Comments by the State

U.S. comments on the framework of the Special Rapporteur’s analysis:
The United States notes that, throughout his report, the Special Rapporteur (UNSRP) assumes that “necessity and proportionality” and related European Union (EU) law data protection standards reflect current international law. In the view of the United States, this assumption is incorrect. Instead, the applicable international human rights law for evaluating U.S. privacy practices is the International Covenant on Civil and Political Rights (ICCPR). Article 17 of that instrument provides that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence.” This provision does not impose a requirement of proportionality or necessity on a State Party’s interference with privacy; instead, it imposes an obligation to refrain from arbitrary or unlawful interference with privacy. That is the obligation that the United States implements through its domestic legal framework. While certain elements of U.S. domestic law may use the words “necessary” or “proportionate” in relation to privacy, the relevant inquiry here is how the United States implements its obligations under Article 17 of the ICCPR.

23 March 2021: UN Human Rights Council Resolution 46/16 [A/HRC/RES/46/16]: Mandate of the Special Rapporteur on the right to privacy

“Recognizing the increasing impact of new and emerging technologies, such as those developed in the fields of surveillance, artificial intelligence, automated decision-making and machine-learning, and of profiling, tracking and biometrics, including facial recognition, without proper safeguards, on the enjoyment of the right to privacy and other human rights,”

18 May 2021: UN Human Rights Council 47/61 [A/HRC/47/61]: The right to privacy in the digital age, Note by the Secretariat

“In its resolution 42/15 on the right to privacy in the digital age, the Human Rights Council requested the United Nations High Commissioner for Human Rights to prepare a thematic report on how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy, and to submit it to the Council at its forty-fifth session.”

UN Digital Library Search “Right to Privacy in the Digital Age”

UN Digital Library Search “Right to Privacy”

Timeline of ICT and Internet governance developments [Really detailed timeline and history]

List of UN Special Rapporteurs

Video: History of the UN Human Rights Council

28 January 1981: Details of Treaty No.108: Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data

31 December 2020: Guide on Article 8 of the European Convention on Human Rights: Right to respect for private and family life, home and correspondence

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Contextual-Integrity-Nissenbaum

In her Contextual Integrity theory of privacy, Helen Nissenbaum defines privacy as “appropriate flows of information,” where appropriateness is determined by the context and its contextual informational norms. Contextual Integrity is a paradigm shift away from the Fair Information Practice Principles, which emphasize a model of privacy focused on the control of personal information. Framing privacy through the lens of control has led to a notice & consent model, which many have argued fails to actually protect our privacy. Rather than defining privacy as the control of private information, Contextual Integrity focuses on appropriate flows of information relative to the stakeholders within a specific context who are trying to achieve a common purpose or goal.

contextual-integrity-context

Nissenbaum’s Contextual Integrity theory of privacy reflects the context-dependent nature of privacy. She was inspired by social theories like Michael Walzer’s Spheres of Justice and Pierre Bourdieu’s field theory, as well as what others refer to as domains. These are ways of breaking society into distinct spheres, each with its “own contextual information norms” for how data are exchanged to satisfy shared intentions. Nissenbaum resists choosing a specific social theory of context, but has offered the following definition of context in one of her presentations.

Contexts – differentiated social spheres defined by important purposes, goals, and values, characterized by distinctive ontologies, roles and practices (e.g. healthcare, education, family); and norms, including informational norms — implicit or explicit rules of info flow.

Nissenbaum emphasizes that informational exchanges should help achieve the mutual purposes, goals, and values of their specific context. As an example, you would be more than willing to provide medical data to your doctor for the purpose of evaluating your health and helping you heal from an ailment. But you may be more cautious about providing that same medical information to Facebook, especially if it was unclear how they intended to use it. The intention and purpose of these contexts shape the underlying informational norms that help people understand what information is okay to share given the larger context of that exchange.

Nissenbaum specifies five main parameters of these contextual informational norms that can be used to form rules for determining whether or not privacy has been preserved.

contextual-integrity-transmission-principles
(Image credit: Cornell)

The first three are the stakeholder actors: the sender, the recipient, and the subject (the person the information is about).

Then there are the different types of information being exchanged, whether physiological data, biographical data, medical information, financial information, etc.

Finally, the transmission principles describe the conditions under which an exchange takes place: through informed consent, buying or selling, coercion, asymmetrical adhesion contracts or reciprocal exchange, via a warrant, stolen or surreptitiously acquired, with notice, or as required by law.

contextual-integrity-parameters

Notice and consent is embedded within the framework of Contextual Integrity through the transmission principle, which includes informed consent and adhesion contracts. But Nissenbaum says that for contextual integrity to be preserved, all five of these parameters need to be properly specified.

contextual-integrity-justifiability

The final step within the Contextual Integrity theory of privacy is to evaluate whether the informational norm is “legitimate, worth defending, and morally justifiable.” This includes looking at the stakeholders and analyzing who may be harmed by and who benefits from any informational exchange; checking whether it diminishes any political or human rights principles, such as freedom of speech; and finally evaluating how the information exchange serves the contextual domain’s function, purpose, and value.

Overall, Nissenbaum’s Contextual Integrity theory of privacy provides a really robust definition of privacy and a foundation to build upon. She collaborated with computer scientists who formalized “some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information.” They were able to formally represent some of the information flows in legislation like HIPAA, COPPA, and GLBA in their logical language, meaning it could be possible to create computer programs that enforce compliance.
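To make the five-parameter idea concrete, here is a minimal, hypothetical Python sketch of how a contextual norm (sender role, recipient role, subject, information type, transmission principle) could be represented and mechanically checked. This is my own illustrative toy, not the actual first-order logic framework from the research, and the `Flow`/`Norm` classes and the healthcare norm are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # actor transmitting the information
    recipient: str   # actor receiving it
    subject: str     # person the information is about
    info_type: str   # e.g. "medical", "financial"
    principle: str   # e.g. "informed_consent", "sold", "warrant"

@dataclass(frozen=True)
class Norm:
    """A permissive informational norm for one context."""
    context: str
    sender_role: str
    recipient_role: str
    info_type: str
    allowed_principles: frozenset

def complies(flow: Flow, context: str, roles: dict, norms: list) -> bool:
    """Contextual integrity is preserved only if some norm in the
    active context matches all five parameters of the flow."""
    return any(
        n.context == context
        and roles.get(flow.sender) == n.sender_role
        and roles.get(flow.recipient) == n.recipient_role
        and flow.info_type == n.info_type
        and flow.principle in n.allowed_principles
        for n in norms
    )

# Illustrative norm: in healthcare, a patient may share their own
# medical data with a physician under informed consent.
norms = [Norm("healthcare", "patient", "physician", "medical",
              frozenset({"informed_consent"}))]
roles = {"alice": "patient", "dr_bob": "physician", "ad_network": "broker"}

ok = complies(Flow("alice", "dr_bob", "alice", "medical",
                   "informed_consent"), "healthcare", roles, norms)   # True
bad = complies(Flow("alice", "ad_network", "alice", "medical",
                    "sold"), "healthcare", roles, norms)              # False
```

The second flow fails even though the same data and the same sender are involved, because the recipient role and transmission principle violate the healthcare context's norm — which is exactly the intuition behind checking all five parameters rather than consent alone.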

Even though Contextual Integrity is very promising as a replacement for existing privacy legislation, there are some potential limitations. The theory leans heavily upon normative standards within given contexts, but my observation with immersive XR technologies is that XR not only blurs and blends existing contexts, but is also creating new precedents of information flows that go beyond existing norms. Here’s my state of XR privacy talk from the AR/VR Association’s Global Summit, which provides more context on these new physiological information flows and the still relatively underdeveloped normative standards around what data will be made available to consumer technology companies and what might be possible to do with those data:

The blending and blurring of contexts could lead to context collapse. As consumer devices become able to capture and record medical-grade data, these neuro-technologies and XR technologies are combining the informational norms of the medical context with those of the consumer technology context. This has already been happening over the years with different forms of physiological tracking, but XR and neuro-technologies will create new ethical and moral dilemmas with the types of intimate information that will be made available. Here’s a list of physiological and biometric sensors that could be integrated within XR within the next 5-20 years:

physiological-and-biometric-data

It is also not entirely clear how Contextual Integrity would handle the implications of inferred data, like what Ellysse Dick refers to as “computed data” or what Brittan Heller calls “biometric psychographic data.” A lot of really intimate information can be inferred and extrapolated about your likes and dislikes, context-based preferences, and more essential character, personality, and identity from observed behaviors combined with physiological reactions and the full situational awareness of what you’re looking at and engaging with inside a virtual environment (or with eye tracking plus egocentric data capture within AR). Behavioral neuroscientist John Burkhardt details some of these biometric data streams, and the unethical threshold between observing and controlling behaviors.

Here’s a map of all of the types of inferred information that can come from image-based eye tracking, from Kröger et al.’s article “What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking”:

kroger-eyetracking

It is unclear to me how Nissenbaum’s Contextual Integrity theory of privacy might account for this type of computed, biometric psychographic, or inferred data. Inferred data doesn’t seem to fit under the existing category of information types. Perhaps there needs to be a new category for data that can be extrapolated from data that’s transmitted or aggregated. It’s also worth looking at the taxonomy of data types from Ellysse Dick’s work on bystander privacy in AR.

It’s also unclear whether this inferred data would need to be properly disclosed, or whether the data subject has any ownership rights over it. The ownership or provenance of the data also isn’t fully specified within the sender and recipient parameters, especially if there are multiple stakeholders involved. And it’s unclear to me how the intended use of the data is communicated within contextual integrity, and whether this is already covered within the “subject” portion of the actors.

Some of these open questions could be answered when evaluating whether an informational norm is “legitimate, worth defending, and morally justifiable.” Because VR, AR, and neuro-technologies are so new, these exchanges of information are also very new. Research neuroscientists like Rafael Yuste have identified fundamental neuro-rights of mental privacy, agency, and identity that are potentially threatened by the types of neurological and physiological data that will soon be made available through consumer neuro-tech devices like brain-computer interfaces, watches with EMG sensors, or other physiological sensors that will be integrated into headsets, like the Project Galea collaboration between OpenBCI, Valve Software, and Tobii eye tracking.

I had a chance to talk with Nissenbaum to get an overview of her Contextual Integrity theory of privacy, but we had limited time to dig into some of the aspects of mental privacy. She said that the neuro-rights of agency and identity are significant ethical and moral questions that are beyond the scope of what her privacy framework is able to properly evaluate or address. She’s also generally skeptical about treating privacy as a human right, because you’ll ultimately need a legal definition that allows you to exercise that legal right, and if it’s still contextualized within the paradigm of notice and consent, then we’ll be “stuck with this theory of privacy that loads all of the decision onto the least capable person, which is the data subject.”

She implied that even if there are additional human rights laws at the international level, as long as there is a consent loophole like the one within the existing adhesion contract frameworks of notice and consent, then having the right to privacy on paper doesn’t ensure you can exercise it. This likely means that US tech policy folks may need to use Contextual Integrity as a baseline for new state or federal privacy laws, which is where privacy law could be enforced and the right to privacy asserted. There’s still value in having human rights laws shape regional laws, but there are not many options for enforcing the right to privacy through the mechanisms of international law.

ethics-contexts
Finally, I had a chance to show Nissenbaum my taxonomy of contexts that I’ve been using in my XR Ethics Manifesto, which maps out the landscape of ethical issues within XR. I first presented this taxonomy of the domains of human experience at the SVVR conference on April 28, 2016, as I was trying to categorize the range of answers about the ultimate potential of VR into different industry verticals or contexts.

We didn’t have time to fully unpack all of the similarities to some of her examples of contexts, but she said that it’s in the spirit of other social theorists who have started to map out their own theories for how to make sense of society. I’m actually skeptical that it’s possible to come up with a comprehensive or complete mapping of all human contexts. But I’m sharing it here in case it might shed any additional insight into how something like Nissenbaum’s Contextual Integrity theory of privacy could be translated into a federal privacy law framework, and whether it’s possible or feasible to comprehensively map out the purposes, goals, and values of each of these contexts, their appropriate information flows, and their stakeholders.

It’s probably an impossible task to create a comprehensive map of all contexts, but the attempt could provide some helpful insights into the nature of VR and AR. I think context is a key feature of XR: with augmented reality, you’re able to add contextual information on top of the center of gravity of an existing context, and with virtual reality, you’re able to do a complete shift from one context to another. Having a theoretical framework for context may also help develop things like the “contextually-aware AI” that Facebook Reality Labs is currently working on. And if VR has the capability to represent simulations of each of these contexts, then a taxonomy of contexts could provoke thought experiments on how existing contextual informational norms might be translated and expressed within virtual reality as they’re blended and blurred with the emerging contextual norms of XR.

I’m really impressed with the theory of Contextual Integrity, since many intuitive aspects of the contextual nature of privacy are articulated through Nissenbaum’s framework. But it also helps to elaborate how Facebook’s notice and consent-based approach to privacy in XR does not fully live up to their Responsible Innovation principles of “Don’t Surprise People” or “Provide Controls that Matter.” Not only do we not have full agency over information flows (since they don’t matter enough for Facebook to provide those controls), but there’s no way to verify whether the information flows are morally justifiable, since there isn’t any transparency or accountability around these flows or around all of the intentions and purposes Facebook has for the data that are made available to them.

Finally, I’d recommend folks check out Nissenbaum’s 2010 book Privacy in Context: Technology, Policy, and the Integrity of Social Life for a longer elaboration of her theory. I also found two of her lectures on Contextual Integrity particularly helpful in preparing for my interview with her on the Voices of VR podcast: Crypto 2019 (August 21, 2019) and the University of Washington Distinguished Speaker Series at the Department of Human Centered Design & Engineering (March 5, 2021). And here’s the video where I first discovered Nissenbaum’s Contextual Integrity theory of privacy, where she was in conversation with philosophers of privacy Dr. Anita Allen and Dr. Adam Moore at a privacy conference, Law, Ethics, and Philosophy of End User Responsibility for Privacy, recorded on April 24, 2015 at the University of Pennsylvania Carey Law School.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST



Ellysse-Dick
Ellysse Dick is a policy analyst who has written 10 technology policy publications about VR & AR over the last year for The Information Technology & Innovation Foundation. The ITIF is a non-partisan, non-profit tech policy think tank that says its mission is to “advance innovation,” believes “disruptive innovation almost always leads to economic and social progress,” has a “considered faith in markets and businesses of all sizes,” and believes in “deftly tailoring laws and regulations to achieve their intended purposes in a rapidly evolving economy.” In other words, they lean towards libertarian ideals of limited government, aiming to avoid reactionary technology policy that might stifle technological innovation.

While the ITIF is an independent organization, their tech policy positions strongly align with the types of arguments I’d expect to hear from Facebook themselves. The ITIF lists Facebook as a financial supporter, and Facebook has listed the ITIF as part of Facebook’s Political Engagement. But Facebook also says, “we do not always agree with every policy or position that individual organizations or their leadership take. Therefore, our membership, work with organizations, or event support should not be viewed as an endorsement of any particular organization or policy.” And Dick says that she maintains editorial independence over the tech policy research she’s doing within VR and AR. All that said, there’s likely a lot of alignment between ITIF’s published tech policy positions and the implicit and often undeclared policy positions of Facebook.

Ellysse Dick has written about XR privacy issues in these three publications:

One really interesting insight Dick had in her December 4th piece on Augmented Reality and bystander privacy is that there are already a lot of social norms or legal precedents when it comes to the different types of data collection. Here’s the taxonomy of data collection that she lays out:

  • Continuous data collection (non-stop & persistent recording)
  • Bystander data collection (relational dynamics of recording other people)
  • Portable data collection (the mobile ease of recording anywhere)
  • Inconspicuous data collection (notification & consent norms around capturing video or spatial context)
  • Rich data collection (geographic context & situational awareness)
  • Aggregate data collection (combining information from third-party sources)
  • Public data exposure (associating public data with individuals in a real-time context)

Dick says that the combination of the real-time, portable, aggregate, and persistent nature of data recording may create a new context requiring either new social norms or new laws.

I wanted to talk with Dick about her take on XR privacy, why she sees the need for a US federal privacy law, some of the concerns around government surveillance and the Third-Party Doctrine, and how biometrically-inferred data should be a key part of the broader discussion about a comprehensive approach to privacy. She calls this data “computed data,” while Brittan Heller refers to it as “biometric psychographic data.”

Dick is not as concerned about the near-term risks of making inferences from physiological or biometric data in XR, and she cautions against a “privacy panic” that catalyzes reactionary technology policy and leads to technologies being banned. I guess I’m on the side of having a reasonable amount of privacy panic, considering that technology policy analyst Adam Kovacevich has estimated the odds of a federal privacy law passing at 0-10% for the more controversial sticking points, or around 60% if the Democrats compromise on the private right of action clause.

Dick says that the ITIF follows the innovation principle, which is to not overregulate in advance for harms that may or may not happen. Creating laws too early has the potential to stifle innovation, to not have the intended effect, or to quickly go out of date. Dick recommends soft law, self-regulation, and trade organizations as the first step until policy gaps can be identified more clearly. The end result is that the most likely and default position is to take no pre-emptive action on these privacy concerns around XR, which will likely leave us trying to reel things back once they’ve gone too far.

Dick seems to have a lot of faith that companies will not go too far with tracking our data and serving ads that could lead towards significant behavioral modification, but for me the more pragmatic assumption is that companies like Facebook will continue to aggregate as much data as possible in trying to track our attention and behaviors, creating an asymmetry of power when it comes to delivering targeted advertising.

Overall, the ITIF takes a fairly conservative approach to new technology policy, suggesting that we either wait and see or rely upon self-regulation and consensual approaches. Dick and I had a spirited debate on the topic of XR Privacy, and in the end we agree on the need for a new U.S. Federal Privacy Law. I think we’d also agree that we need the right amount of urgency to make it a public policy priority without sparking a reactionary panic that produces a technology policy banning certain immersive technologies.

And I believe it’s still up for debate how much privacy panic we should collectively have on this issue, especially considering that there is no way to verify the appropriate flows of information given the broad mandate that Terms of Service & Privacy Policy adhesion contracts give to Facebook for how they can use the data they capture.

In the next episode, I’ll be diving into how philosopher Helen Nissenbaum defines privacy as appropriate information flows within a given context in her Contextual Integrity theory of privacy. She also argues that the notice-and-consent model of privacy is broken, and her contextual integrity approach may provide more viable and robust solutions for ensuring users have more transparency on how their data are being used.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

OpenBCI’s Project Galea was originally announced on November 19, 2020 as a “hardware and software platform that merges next-generation biometrics with mixed reality.” OpenBCI has been collaborating with MIT Ph.D. student Guillermo Bernal in integrating PhysioHMD’s design, which includes EOG, EMG, EDA, and PPG sensors in addition to 10 EEG channels and eye tracking, into a single headset. On January 24th, Valve CEO Gabe Newell told New Zealand’s 1 NEWS that Valve was “working on an open source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities.” 1 NEWS reported that Valve was collaborating with OpenBCI. Then on February 4th, 2021, Tobii announced that it was “engaging in research collaboration with Valve and OpenBCI by incorporating Tobii’s eye tracking technology with elements of Valve’s Index hardware to produce developer units for the recently announced Galea Beta Program.” The Project Galea dev kits are expected to ship sometime in 2022, and Newell told 1 NEWS, “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake.”

I first interviewed OpenBCI co-founder and CEO Conor Russomanno at Rothenberg Ventures’ Founder’s Day on May 16, 2016, the day before the 2016 Neurogaming Conference. It was also shortly after the Silicon Valley Virtual Reality Conference on April 27-29, 2016, which is where I first really started covering the topic of privacy in VR, after an UploadVR article on Facebook & VR privacy caught the attention of Senator Al Franken, who wrote Oculus a letter. I first spoke to Russomanno about some of the privacy implications of neuro-technologies back in 2016, and the ethical implications of neuro-tech have only grown as the capabilities of physiological measurement devices, and what can be inferred from them, have increased.

I recently heard Russomanno speak about Project Galea on May 26th at the Non-Invasive Interfaces: Ethical Considerations symposium co-sponsored by the Columbia Neuro-Rights Initiative and Facebook Reality Labs.

He also discussed some of the ethical and privacy implications of neuro-technologies, and he got into an interesting debate with Rafael Yuste, who I interviewed about Neuro-Rights in episode #994. They were debating whether technologies that measure physiological data should be classified as medical devices capturing medical data. Russomanno doesn’t believe the hardware should be regulated by the FDA under medical regulations, since under that regime a project like OpenBCI would likely never be able to exist as it does today, but he’s open to the possibility of giving special treatment to the data. Ultimately, Russomanno hopes that someday consumers will have more ownership and control over the data captured by these devices, but there’s a long way to get there from where we are right now.

I had a chance to talk with Russomanno on June 4th about the evolution of OpenBCI into Project Galea, a little bit about how their collaboration with Valve and Tobii came about, what types of insights they’re able to gather from these different physiological and biometric sensors, the value of combining different sensory modalities, and the potential of closed-feedback-loop immersive systems that can help track and modulate different aspects of your brain, mind, and ultimately consciousness. We also talk about some of the potential healing, quantified-self, and consciousness-hacking applications, as well as the risks of how these technologies could undermine our rights to mental privacy and our agency. There are still more open questions than answers right now, but the open hardware approach by OpenBCI has seeded quite a lot of experimentation and research evaluation by major XR companies across the industry.

I’ll be releasing a series of interviews on Neuro-Rights and the ethical implications of XR & neuro-technologies, starting with OpenBCI, then hearing about the technology policy research papers written by the Information Technology & Innovation Foundation’s Ellysse Dick, talking with philosopher Helen Nissenbaum, founder of the Contextual Integrity theory of privacy, and then hearing from four representatives of the Electronic Frontier Foundation talking about privacy from a human rights perspective and reporting back from RightsCon. Also be sure to check out my recent conversations with Rafael Yuste on Neuro-Rights, Brittan Heller on biometric psychography, as well as with Joe Jerome for a historical primer on the history of privacy law.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

After hearing about all of the sensors that OpenBCI’s Project Galea was integrating, I did an audit of the different physiological and biometric sensors:

Also, here’s a state of XR privacy talk that I gave at the AR/VR Association Global Summit that provides an overview of some of the biggest issues on privacy with the intersection between XR and neuro-technologies.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality