eff-human-rights
The Electronic Frontier Foundation is a non-profit that has been defending civil liberties & digital rights for over 30 years, since it was founded on July 10, 1990. My first conversation with the EFF was with Dave Maass at the VR Privacy Summit in November 2018, when consumer VR was still in a nascent phase and a lower priority relative to other civil liberty threats at the time. But over the past year, the EFF has been starting to investigate the civil liberties and privacy implications of XR technologies, especially as VR has been gaining more commercial momentum. The EFF published “If Privacy Dies in VR, It Dies in Real Life” on August 25, 2020, a week after Facebook announced that Facebook accounts would be required to use Oculus VR headsets. After Facebook’s Project Aria AR prototype was announced at Facebook Connect, the EFF published “Augmented Reality Must Have Augmented Privacy” a month later on October 16, 2020. Their latest article, “Your Avatar is You, However You See Yourself, and You Should Control Your Experience,” was published on June 2, 2021.

The EFF helped to organize a RightsCon session on June 10, 2021 titled “As AR/VR becomes a reality, it needs a human rights framework.” Avi Bar-Zeev and I helped to contextualize some of the issues with VR for the human rights lawyers and activists attending RightsCon, and then there were four different breakout groups for half of the session, brainstorming different human rights, digital rights, and civil liberties issues across different contexts, including Law Enforcement Access to Potential Evidence & Surveillance, Biometric Inferences and Mental Privacy, Avatars and Virtual Agents, and Content Moderation.

I wanted to gather representatives from the EFF the day after the RightsCon session to debrief on insights into what a human rights framework might look like for XR from the perspectives of law, tech policy, tech architecture, and grassroots organizing.

  • Kurt Opsahl – Deputy Executive Director & Counsel, EFF
  • Katitza Rodriguez – Policy Director for Global Privacy, EFF
  • Jon Callas – Director of Technology Projects, EFF
  • Rory Mir – Grassroots Organizer, EFF

We talk about human rights insights into XR technologies, along with the provocation of what it might look like to have even more civil liberties within XR. We talk about the first UN General Assembly resolution on “The right to privacy in the digital age,” and how global policy perspectives provide new insights through international data protection regulations and treaties, resolutions passed by the UN General Assembly and the UN Human Rights Council, and actions taken by the UN Special Rapporteur on the Right to Privacy. We also cover some of the US domestic privacy laws concerning the Fourth Amendment, the third-party doctrine, different philosophical approaches to privacy, the invisible nature of privacy harms, and whether it’s possible to build off of laws like the Video Privacy Protection Act, since it contains some of the strongest US protections for privacy. We talk about whether existing notice & consent models provide truly informed consent that’s freely given. We touch on a number of tech projects like Privacy Badger and the EFF’s first VR project Spot the Surveillance, which tech companies have your back in protecting your privacy, and how to get involved in your local EFF chapter at eff.org/fight.

How do we ensure that we have just as many, if not more, human rights in virtual reality? It’s going to take looking at existing human rights frameworks, and listening to the human rights advocates from around the world who are tracking the intersection of digital rights and human rights across different contexts. The EFF is helping to lead some vital discussions on the topic, and I was happy to be able to participate in their RightsCon session and do this debriefing the day after. Below is a rough timeline of UN resolutions and reports from the UN General Assembly and the UN Human Rights Council related to the right to privacy.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Timeline of United Nations Resolutions Related to Privacy

10 December 1948: Universal Declaration of Human Rights, General Assembly resolution 217A [A/RES/217(III)]

Article 12
“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

16 December 1966: The International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, UN General Assembly Resolution 2200A (XXI) [A/RES/2200(XXI)].

Article 17 of the International Covenant on Civil and Political Rights
“1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
2. Everyone has the right to the protection of the law against such interference or attacks.”

4 December 1986: UN General Assembly Resolution 41/128 [A/RES/41/128]: Declaration on the Right to Development

1 September 1988: Sub-Commission on Prevention of Discrimination and Protection of Minorities 1988/29 [E/CN.4/1989/3, E/CN.4/Sub.2/1988/45]: Guidelines on the Use of Computerized Personal Files

28 September 1988: 43/40 Supplement No. 40 [A/43/40]: Report of the Human Rights Committee

Right to privacy
69. With reference to that issue, members of the Committee requested details on protection against arbitrary and unlawful interference with privacy, family, home, and correspondence, particularly with regard to postal and telephone communications. It was also asked whether evidence obtained in violation of the right to privacy could be used in the courts and, if so, whether such instances had occurred and what the reaction of the court had been, whether authorities other than judges could order a house to be searched and under what circumstances and whether wire-tapping was authorized by law.

6 March 1989: Commission on Human Rights resolution 1989/43 [E/CN.4/1989/86]: Guidelines on the use of computerized personal files

24 May 1989: UN Economic & Social Council 1989/78 [E/RES/1989/78]: Guidelines on the use of computerized personal data files

15 December 1989: General Assembly Resolution 44/132 [A/RES/44/132]: Guidelines for the regulation of computerized personal data files

14 December 1990: General Assembly Resolution 45/95 [A/RES/45/95]: Guidelines for the regulation of computerized personal data files

13 October 1993: The Vienna Declaration and Programme of Action, UN General Assembly A/CONF.157/24 (Part I): Vienna Declaration and Programme of Action, adopted by the UN World Conference on Human Rights, Vienna, 14-25 June 1993

7 January 1994: UN General Assembly Resolution 48/141 [A/RES/48/141]: High Commissioner for the promotion and protection of all human rights

24 February 2006: UN General Assembly Draft Resolution 60/L.48 [A/60/L.48] Draft resolution submitted by the President of the General Assembly, Human Rights Council

“Assume the role and responsibilities of the Commission on Human Rights relating to the work of the Office of the United Nations High Commissioner for Human Rights, as decided by the General Assembly in its resolution 48/141 of 20 December 1993;”

15 March 2006: UN General Assembly Resolution 60/251 [A/RES/60/251]: Human Rights Council. The Human Rights Council assumes “the role and responsibilities of the Commission on Human Rights”

22 June 2006: UN Human Rights Council 1/1 [A/HRC/1/1]: Adoption of the Agenda and Organization of Work, Note by the Secretary-General

27 April 2007: UN Human Rights Council 5/2 [A/HRC/5/2]: Implementation of General Assembly Resolution 60/251 of 15 March 2006 entitled “Human Rights Council”

16 May 2007: UN Human Rights Council 5/1 [A/HRC/5/1]: Provisional Agenda, Note by the Secretary-General. “To consider in particular the institution-building process” for Human Rights.

28 March 2008: UN Human Rights Council Resolution 7/36 [7/36]: Mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

30 April 2009: UN Human Rights Council 11/4 [A/HRC/11/4]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

2 October 2009: UN Human Rights Council Resolution 12/16 [A/HRC/RES/12/16]: Promotion and protection of all human rights, civil, political, economic, social and cultural rights, including the right to development: Freedom of opinion and expression

21 March 2011: UN Human Rights Council 17/31 [A/HRC/17/31]: Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, John Ruggie

A/HRC/17/31 Annexes (pp. 6-27): Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework

8 April 2011: UN Human Rights Council Resolution 16/4 [A/HRC/RES/16/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

16 May 2011: UN Human Rights Council 17/27 [A/HRC/17/27]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

10 August 2011: UN General Assembly 66/290 [A/66/290]: Promotion and protection of the right to freedom of opinion and expression

12 September 2011: UN Human Rights Committee CCPR/C/GC/34 [CCPR/C/GC/34]: General comment No. 34, Article 19: Freedoms of opinion and expression

17 April 2013: UN Human Rights Council 23/40 [A/HRC/23/40]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

“The present report, submitted in accordance with Human Rights Council resolution 16/4 [of 8 April 2011], analyses the implications of States’ surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression. While considering the impact of significant technological advances in communications, the report underlines the urgent need to further study new modalities of surveillance and to revise national laws regulating these practices in line with human rights standards.”

1 November 2013: UN General Assembly [A/C.3/68/L.45]: Brazil and Germany: draft resolution, The right to privacy in the digital age [would ultimately pass as A/RES/68/167 on 18 December 2013]

26 November 2013: UN General Assembly [A/C.3/68/SR.23]: Third Committee, Summary record of the 23rd meeting, held at Headquarters, New York, on Wednesday, 23 October 2013, at 10 a.m.

“Ms. Almeida Watanabe Patriota (Brazil) expressed concern that some countries had still not accepted the universal periodic review. In the case of country-specific human rights resolutions, negotiations should be more transparent. Her delegation commended the impartial work of the commission of inquiry on the Syrian Arab Republic. In view of the recent revelations of violations of the fundamental right to privacy, she asked what the international community could do to help to enforce that right and whether, in the High Commissioner’s opinion, the absence of Internet privacy guarantees might undermine freedom of expression. She would also appreciate further comments on what Member States could do to help others realize that securing basic human rights for all vulnerable groups, including lesbian, gay, bisexual and transgender people, did not represent an emphasis on any one group.”

10 December 2013: UN General Assembly [A/68/456/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee. Rapporteur: Ms. Adriana Murillo Ruin

“At its 2nd plenary meeting, on 20 September 2013, the General Assembly, on the recommendation of the General Committee, decided to include in the agenda of its sixty-eighth session, under the item entitled “Promotion and protection of human rights”, the sub-item entitled “Human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms” and to allocate it to the Third Committee.”

“At the 43rd meeting, on 7 November, the representatives of Brazil and Germany made statements and on behalf of Austria, Bolivia (Plurinational State of), Brazil, the Democratic People’s Republic of Korea, Ecuador, France, Germany, Indonesia, Liechtenstein, Peru, Switzerland and Uruguay, jointly introduced a draft resolution, entitled “The right to privacy in the digital age” [A/C.3/68/L.45], which read…”

18 December 2013: First General Assembly Resolution 68/167 [A/RES/68/167]: The right to privacy in the digital age

“Welcoming the report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression [A/HRC/23/40] submitted to the Human Rights Council at its twenty-third session, on the implications of State surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression,”

30 June 2014: UN Human Rights Council 27/37 [A/HRC/27/37]: The right to privacy in the digital age, Report of the Office of the United Nations High Commissioner for Human Rights

18 December 2014: UN General Assembly Resolution 69/166 [A/RES/69/166]: The right to privacy in the digital age

26 March 2015: UN Human Rights Council Resolution 28/16 [A/HRC/RES/28/16]: The right to privacy in the digital age

“Decides to appoint, for a period of three years, a special rapporteur on the right to privacy, whose tasks will include… to submit an annual report to the Human Rights Council and to the General Assembly, starting at the thirty-first session and the seventy-first session respectively;”

22 May 2015: UN Human Rights Council 29/32 [A/HRC/29/32]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye

21 October 2015: UN General Assembly Resolution 70/1 [A/RES/70/1] Transforming our world: the 2030 Agenda for Sustainable Development

16 December 2015: UN General Assembly Resolution 70/125 [A/RES/70/125] Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society

16 December 2015: Press release for the General Assembly of the United Nations, “WSIS+10 Outcome: Countries Adopt New Plan to Utilize Internet and Information Technologies in Implementation of New Sustainable Development Agenda”

1 July 2016: UN Human Rights Council 32/13 [A/HRC/RES/32/13]: The promotion, protection and enjoyment of human rights on the Internet

15 December 2016 (publication date) VIDEO: Professor Cannataci: UN Special Rapporteur on the Right to Privacy

Five Priorities of the UN Special Rapporteur on the Right to Privacy: a better understanding of privacy; security & surveillance; Big Data & Open Data; Health Data; and Personal Data Held by Corporations

19 December 2016: UN General Assembly Resolution 71/199 [A/RES/71/199]: The right to privacy in the digital age

23 March 2017: UN Human Rights Council Resolution 34/7 [A/HRC/RES/34/7]: The right to privacy in the digital age

17 to 28 June 2017: Visit to the United States of America by the Special Rapporteur on the right to privacy, Joseph A. Cannataci [see report A/HRC/46/37/Add.4, published on 20 January 2021]

6 September 2017: UN Human Rights Council 34/60 [A/HRC/34/60]: Report of the Special Rapporteur on the right to privacy

“In his report, prepared pursuant to Human Rights Council resolution 28/16, the Special Rapporteur on the right to privacy focuses on governmental surveillance activities from a national and international perspective. The Special Rapporteur elaborates on the characteristics of the international legal framework and the interpretation thereof. He also describes recent developments and trends, how these can be studied and how they interact with the enjoyment of the right to privacy and other interconnected human rights. Consequently, he outlines first approaches to a more privacy-friendly oversight of government surveillance. In conclusion, the Special Rapporteur reports on his activities in the period covered by his report.”

19 October 2017: UN General Assembly 72/540 [A/72/540]: Right to privacy, Note by the Secretary-General, Special Rapporteur of the Human Rights Council on the right to privacy, Joseph A. Cannataci, submitted in accordance with Human Rights Council resolution 28/16

6 April 2018: UN Human Rights Council Resolution 37/2 [A/HRC/RES/37/2] The right to privacy in the digital age

5 July 2018: UN Human Rights Council 38/7 [A/HRC/RES/38/7]: Promotion, protection and enjoyment of human rights on the Internet

13 July 2018: UN Human Rights Council 38/35/Add.5 [A/HRC/38/35/Add.5]: Encryption and Anonymity follow-up report

3 August 2018: UN Human Rights Council 39/29 [A/HRC/39/29]: The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights

“The present report is submitted pursuant to resolution 34/7, in which the Human Rights Council requested the High Commissioner for Human Rights to prepare a report identifying and clarifying principles, standards and best practices regarding the promotion and protection of the right to privacy in the digital age, including the responsibility of business enterprises in this regard, and present it to the Human Rights Council at its thirty-ninth session.”

29 August 2018: UN General Assembly 73/348 [A/73/348]: Promotion and protection of the right to freedom of opinion and expression

17 October 2018: UN General Assembly 73/438 [A/73/438]: Right to privacy, Note by the Secretary-General

29 November 2018: UN General Assembly 73/589 [A/73/589]: Promotion and protection of human rights. Report of the Third Committee

1 December 2018: UN General Assembly 73/589/Add.1 [A/73/589/Add.1]: Promotion and protection of human rights: implementation of human rights instruments, Report of the Third Committee

3 December 2018: UN General Assembly 73/589/Add.4 [A/73/589/Add.4]: Promotion and protection of human rights: comprehensive implementation of and follow-up to the Vienna Declaration and Programme of Action, Report of the Third Committee

4 December 2018: UN General Assembly 73/589/Add.2 [A/73/589/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee

6 December 2018: UN General Assembly 73/589/Add.3 [A/73/589/Add.3]: Promotion and protection of human rights: human rights situations and reports of special rapporteurs and representatives, Report of the Third Committee

17 December 2018: UN General Assembly Resolution 73/179 [A/RES/73/179]: The right to privacy in the digital age

13 February 2019: Annex 4: Privacy Metrics – Consultation Draft, “Metrics for Privacy – A Starting Point”, Special Rapporteur on the right to privacy, Professor Joseph A. Cannataci

11 July 2019: UN Human Rights Council 41/12 [A/HRC/RES/41/12]: The rights to freedom of peaceful assembly and of association

“Requests the Special Rapporteur to continue to report annually to the Human Rights Council and the General Assembly;”

5 August 2019: UN General Assembly 74/277 [A/74/277] Right to privacy, Note by the Secretary-General (Part II. Health-related data)

26 September 2019: UN Human Rights Council Resolution 42/15 [A/HRC/RES/42/15]: The right to privacy in the digital age

In Human Rights Council Resolution 42/15, “Paragraph 10 of the resolution requested the United Nations High Commissioner for Human Rights “to organize, before the forty-fourth session of the Human Rights Council, an expert seminar to discuss how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy [and] to prepare a thematic report on the issue”.

16 October 2019: UN Human Rights Council 40/63 [A/HRC/40/63]: Right to privacy: Report of the Special Rapporteur on the right to privacy

24 March 2020: UN Human Rights Council 43/52 [A/HRC/43/52]: Report of the Special Rapporteur on the right to privacy

23 April 2020: UN Human Rights Council 44/49 [A/HRC/44/49]: Disease pandemics and the freedom of opinion and expression: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

13 May 2020: UN Human Rights Council 44/50 [A/HRC/44/50]: Ten years protecting civic space worldwide: Report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association

27-28 May 2020: [Expert Seminar Report – Right to Privacy]: Report of the proceedings of the online expert seminar with the purpose of identifying how artificial intelligence, including profiling, automated decision-making and machine learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy

18 June 2020: UN Human Rights Council 44/57 [A/HRC/44/57]: Racial discrimination and emerging digital technologies: a human rights analysis, Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance

19 June 2020: UN Human Rights Council 43/4 [A/HRC/RES/43/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

24 June 2020: UN Human Rights Council 44/24 [A/HRC/44/24]: Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests, Report of the United Nations High Commissioner for Human Rights

20 July 2020: UN General Assembly 75/184 [A/75/184]: Rights to freedom of peaceful assembly and of association

27 July 2020: UN General Assembly 75/147 [A/75/147]: Right to privacy, Note by the Secretary-General

28 July 2020: UN General Assembly 75/261 [A/75/261]: Promotion and protection of the right to freedom of opinion and expression, Note by the Secretary-General

28 August 2020: UN General Assembly 75/329 [A/75/329]: Contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Note by the Secretariat

16 December 2020: UN General Assembly Resolution 75/176 [A/RES/75/176]: The right to privacy in the digital age

20 January 2021: UN Human Rights Council A/HRC/46/37/Add.4 [A/HRC/46/37/Add.4]: Visit to the United States of America Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci

“The Special Rapporteur on the right to privacy, Joseph A. Cannataci, carried out an official visit to the United States of America between 17 and 28 June 2017. While praising several strengths of the United States system, the Special Rapporteur observed the risks resulting from fragmentation caused by organic growth and the misplaced confidence that certain conventions would be respected by the Executive. He recommends a gradual overhaul of privacy law, with a special focus on simplification, and an increase in both safeguards and remedies. He especially recommends that United States law should be reformed further to entrench the powers of existing and new oversight authorities while bringing safeguards and remedies for foreign intelligence up to the same standard as for domestic intelligence…

The present report was finalized in autumn 2020, after evaluating the preliminary results of the country visit in meetings held during the visit, which took place from 17 to 28 June 2017, and cross-checking them with follow-up research and developments to date. The benchmarks used in the present report include the privacy metrics document released by the Special Rapporteur.”

17 February 2021: UN Human Rights Council 46/37/Add.8 [A/HRC/46/37/Add.8]: Report of the Special Rapporteur on the right to privacy on his visit to the United States of America, Comments by the State

U.S. comments on the framework of the Special Rapporteur’s analysis:
The United States notes that, throughout his report, the Special Rapporteur (UNSRP) assumes that “necessity and proportionality” and related European Union (EU) law data protection standards reflect current international law. In the view of the United States, this assumption is incorrect. Instead, the applicable international human rights law for evaluating U.S. privacy practices is the International Covenant on Civil and Political Rights (ICCPR). Article 17 of that instrument provides that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence.” This provision does not impose a requirement of proportionality or necessity on a State Party’s interference with privacy; instead, it imposes an obligation to refrain from arbitrary or unlawful interference with privacy. That is the obligation that the United States implements through its domestic legal framework. While certain elements of U.S. domestic law may use the words “necessary” or “proportionate” in relation to privacy, the relevant inquiry here is how the United States implements its obligations under Article 17 of the ICCPR.

23 March 2021: UN Human Rights Council Resolution 46/16 [A/HRC/RES/46/16]: Mandate of the Special Rapporteur on the right to privacy

“Recognizing the increasing impact of new and emerging technologies, such as those developed in the fields of surveillance, artificial intelligence, automated decision-making and machine-learning, and of profiling, tracking and biometrics, including facial recognition, without proper safeguards, on the enjoyment of the right to privacy and other human rights,”

18 May 2021: UN Human Rights Council 47/61 [A/HRC/47/61]: The right to privacy in the digital age, Note by the Secretariat

“In its resolution 42/15 on the right to privacy in the digital age, the Human Rights Council requested the United Nations High Commissioner for Human Rights to prepare a thematic report on how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy, and to submit it to the Council at its forty-fifth session.”

UN Digital Library Search: “Right to Privacy in the Digital Age”

UN Digital Library Search: “Right to Privacy”

Timeline of ICT and Internet governance developments [Really detailed timeline and history]

List of UN Special Rapporteurs

Video: History of the UN Human Rights Council

28 January 1981: Details of Treaty No.108: Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data

31 December 2020: Guide on Article 8 of the European Convention on Human Rights: Right to respect for private and family life, home and correspondence

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Contextual-Integrity-Nissenbaum

In her Contextual Integrity theory of privacy, Helen Nissenbaum defines privacy as “appropriate flows of information,” where appropriateness is defined by the context and its contextual informational norms. Contextual Integrity is a paradigm shift away from the Fair Information Practice Principles, which emphasize a model of privacy focused on the control of personal information. Framing privacy through the lens of control has led to a notice & consent model of privacy, which many have argued fails to actually protect our privacy. Rather than defining privacy as the control of private information, Contextual Integrity focuses on appropriate flows of information relative to the stakeholders within a specific context who are trying to achieve a common purpose or goal.

contextual-integrity-context

Nissenbaum’s Contextual Integrity theory of privacy reflects the context-dependent nature of privacy. She was inspired by social theories like Michael Walzer’s Spheres of Justice and Pierre Bourdieu’s field theory, as well as what others refer to as domains. These are ways of breaking up society into distinct spheres that have their “own contextual information norms” for how data are exchanged to satisfy shared intentions. Nissenbaum resists choosing a specific social theory of context, but has offered the following definition of context in one of her presentations:

Contexts – differentiated social spheres defined by important purposes, goals, and values, characterized by distinctive ontologies, roles and practices (e.g. healthcare, education, family); and norms, including informational norms — implicit or explicit rules of info flow.

Nissenbaum emphasizes that informational exchanges should help achieve the mutual purposes, goals, and values of a specific context. As an example, you would be more than willing to provide medical data to your doctor for the purpose of evaluating your health and helping you heal from an ailment. But you may be more cautious about providing that same medical information to Facebook, especially if it was unclear how they intended to use it. The intention and purpose of these contexts help to shape the underlying informational norms that help people understand what information is okay to share given the larger context of that exchange.

Nissenbaum specifies five main parameters of these contextual informational norms that can be used to form rules for determining whether or not privacy has been preserved.

contextual-integrity-transmission-principles
(Image credit: Cornell)

The first three parameters are the actors: the sender, the recipient, and the subject of the information being exchanged.

Then there is the type of information being exchanged, whether it’s physiological data, biographical data, medical information, financial information, etc.

Finally, the transmission principle describes the conditions under which the exchange happens, spanning a range that includes informed consent, buying or selling, coercion, asymmetrical adhesion contracts or reciprocal exchange, via a warrant, stolen or surreptitiously acquired, with notice, or as required by law.

contextual-integrity-parameters

Notice and consent is embedded within the framework of Contextual Integrity through the transmission principle, which includes informed consent and adhesion contracts. But Nissenbaum says that in order for contextual integrity to be preserved, all five of these parameters need to be properly specified.
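
To make these parameters concrete, here is a minimal sketch in Python (my own illustration, not Nissenbaum’s formal notation) that models a contextual informational norm as a fully specified five-tuple and treats a flow as privacy-preserving only when it matches a norm on all five parameters:

```python
# A toy model of the five contextual-integrity parameters.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str                  # actor transmitting the information
    recipient: str               # actor receiving the information
    subject: str                 # person the information is about
    information_type: str        # e.g. "medical", "financial", "gaze"
    transmission_principle: str  # e.g. "informed_consent", "warrant", "sale"

# Norms are the flows a context has deemed appropriate. Every parameter
# is specified; a wildcard would leave the norm under-specified.
HEALTHCARE_NORMS = {
    Flow("patient", "doctor", "patient", "medical", "informed_consent"),
}

def preserves_contextual_integrity(flow: Flow, norms: set[Flow]) -> bool:
    """A flow preserves contextual integrity only if it matches a norm
    on all five parameters at once."""
    return flow in norms

# Sharing medical data with your doctor matches the healthcare norm...
assert preserves_contextual_integrity(
    Flow("patient", "doctor", "patient", "medical", "informed_consent"),
    HEALTHCARE_NORMS)
# ...but the same information type flowing to an ad platform under an
# adhesion contract does not.
assert not preserves_contextual_integrity(
    Flow("patient", "ad_platform", "patient", "medical", "adhesion_contract"),
    HEALTHCARE_NORMS)
```

The point of the toy example is that dropping any one parameter, say the transmission principle, leaves the norm under-specified, which is exactly the requirement above.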

contextual-integrity-justifiability

The final step within the Contextual Integrity theory of privacy is to evaluate whether or not an informational norm is “legitimate, worth defending, and morally justifiable.” This includes looking at the stakeholders and analyzing who may be harmed and who benefits from an informational exchange, then seeing whether it diminishes any political or human rights principles such as freedom of speech, and finally evaluating how the information exchange helps to serve the contextual domain’s function, purpose, and value.

Overall, Nissenbaum’s Contextual Integrity theory of privacy provides a really robust definition of privacy and a foundation to build upon. She collaborated with computer scientists who formalized “some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information.” They were able to formally represent some of the information flows in legislation like HIPAA, COPPA, and GLBA in their logical language, which means it could be possible to create computer programs that check compliance.
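
In that spirit, here is a toy sketch of what machine-checkable compliance could look like: one HIPAA-flavored rule (the rule, field names, and events are hypothetical illustrations, not taken from the actual formalization or from the statute) evaluated over a trace of flows, where permission can depend on events earlier in the trace:

```python
# Compliance checking over a trace of events, in the spirit of encoding
# information-flow rules in a logical language.
def hipaa_like_rule(trace: list[dict], i: int) -> bool:
    """Event i is permitted if it is a provider disclosure for treatment,
    or a third-party disclosure preceded by the subject's opt-in."""
    e = trace[i]
    if e["recipient_role"] == "provider" and e["purpose"] == "treatment":
        return True
    # Past-tense condition: did the subject opt in earlier in the trace?
    opted_in = any(p["action"] == "opt_in" and p["subject"] == e["subject"]
                   for p in trace[:i])
    return e["recipient_role"] == "third_party" and opted_in

trace = [
    {"action": "opt_in", "subject": "alice"},
    {"action": "disclose", "subject": "alice",
     "recipient_role": "third_party", "purpose": "research"},
]
assert hipaa_like_rule(trace, 1)  # the opt-in at step 0 licenses step 1
```

Conditions that reference the past, like “the subject previously opted in,” are exactly the kind of thing a logical treatment of norms is designed to express and check automatically.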

Even though Contextual Integrity is very promising as a foundation for replacing existing privacy legislation, there are some potential limitations. The theory leans heavily upon normative standards within given contexts, but my observation with immersive XR technologies is that XR not only blurs and blends existing contexts, but is also creating new precedents of information flows that go above and beyond existing norms. Here’s my state of XR privacy talk from the AR/VR Association’s Global Summit that provides more context on these new physiological information flows and the still relatively underdeveloped normative standards around what data are going to be made available to consumer technology companies and what might be possible to do with this data:

The blending and blurring of contexts could lead to a context collapse. As consumer devices become able to capture and record medical-grade data, neuro-technologies and XR technologies are combining the informational norms of the medical context with those of the consumer technology context. This has already been happening over the years with different kinds of physiological tracking, but XR and neuro-technologies will start to create new ethical and moral dilemmas with the types of intimate information that will be made available. Here’s a list of physiological and biometric sensors that could be integrated within XR within the next 5-20 years:

physiological-and-biometric-data

It is also not entirely clear how Contextual Integrity would handle the implications of inferred data, like what Ellysse Dick refers to as “computed data” or what Brittan Heller refers to as “biometric psychographic data.” A lot of really intimate information can be inferred and extrapolated about your likes and dislikes, context-based preferences, and more essential character, personality, and identity from observed behaviors combined with physiological reactions, given the full situational awareness of what you’re looking at and engaging with inside of a virtual environment (or with eye tracking plus egocentric data capture within AR). Behavioral neuroscientist John Burkhardt details some of these biometric data streams, and the unethical threshold between observing and controlling behaviors.

Here’s a map of all of the types of inferred information that can come from image-based eye tracking, from Kröger et al.’s article “What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking.”

kroger-eyetracking
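
To make that inference pipeline less abstract, here is a simplified sketch of how raw gaze samples become derived features that downstream inference models consume; the feature set and the 30°/s fixation threshold are illustrative choices of mine, not Kröger et al.’s methods:

```python
import math
import statistics

def gaze_features(samples):
    """Derive basic gaze features from a non-empty stream of samples,
    where each sample is (timestamp_s, x_deg, y_deg, pupil_diameter_mm)."""
    ts = [s[0] for s in samples]
    pupil = [s[3] for s in samples]
    # Velocity-threshold fixation detection: gaze moving slower than
    # ~30 deg/s is treated as part of a fixation.
    fixation_samples = 0
    for (t0, x0, y0, _), (t1, x1, y1, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0
        if velocity < 30.0:
            fixation_samples += 1
    return {
        # Pupil dilation relative to baseline is commonly read as a
        # proxy for arousal and cognitive load.
        "pupil_change_pct": 100 * (pupil[-1] - pupil[0]) / pupil[0],
        "mean_pupil_mm": statistics.fmean(pupil),
        "fixation_ratio": fixation_samples / max(len(samples) - 1, 1),
        "duration_s": ts[-1] - ts[0],
    }
```

Combine features like these with full knowledge of what was on screen at each timestamp, and you get the raw material for the inferences in Kröger et al.’s map.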

It is unclear to me how Nissenbaum’s Contextual Integrity theory of privacy might account for this type of computed data, biometric psychographic data, or inferred data. It doesn’t seem like inferred data fits under the category of information types. Perhaps there needs to be a new category for inferred data that could be extrapolated from data that’s transmitted or aggregated. It’s also worth looking at the taxonomy of data types from Ellysse Dick’s work on bystander privacy in AR.

It’s also unclear whether or not this would need to be properly disclosed or whether the data subject has any ownership rights over this inferred data. The ownership or provenance of the data also isn’t fully specified within the sender and recipient of the data, especially if there are multiple stakeholders involved. It’s also unclear to me how the intended use of the data is properly communicated within contextual integrity, and whether or not this is already covered within the “subject” portion of actors.

Some of these open questions could be answered when evaluating whether or not an informational norm is “legitimate, worth defending, and morally justifiable.” Because VR, AR, and neuro-technologies are so new, these exchanges of information are also very new. Research neuroscientists like Rafael Yuste have identified some fundamental neuro-rights of mental privacy, agency, and identity that are potentially threatened by the types of neurological and physiological data that will soon be made available via consumer neuro-tech devices: brain-computer interfaces, watches with EMG sensors, and other physiological sensors that will be integrated into headsets, like the Project Galea collaboration between OpenBCI, Valve Software, and Tobii.

I had a chance to talk with Nissenbaum to get an overview of her Contextual Integrity theory of privacy, but we had limited time to dig into some of the aspects of mental privacy. She said that the neuro-rights of agency and identity are significant ethical and moral questions that are beyond the scope of what her privacy framework is able to properly evaluate or address. She’s also generally skeptical about treating privacy as a human right, because you’ll ultimately need a legal definition that allows you to exercise that legal right, and if it’s still contextualized within the paradigm of notice and consent, then we’ll be “stuck with this theory of privacy that loads all of the decision onto the least capable person, which is the data subject.”

She implied that even if there are additional human rights laws at the international level, as long as there is a consent loophole like the one within the existing adhesion contract frameworks of notice and consent, it’s not going to ensure that you can exercise that right to privacy. This likely means that US tech policy folks may need to use Contextual Integrity as a baseline for forming new state or federal privacy laws, which is where privacy law could be enforced and the right to privacy asserted. There’s still value in having human rights laws shape regional laws, but there are not many options for enforcing the right to privacy through the mechanisms of international law.

ethics-contexts
Finally, I had a chance to show Nissenbaum the taxonomy of contexts that I’ve been using in my XR Ethics Manifesto, which maps out the landscape of ethical issues within XR. I first presented this taxonomy of the domains of human experience at the SVVR conference on April 28, 2016, as I was trying to categorize the range of answers about the ultimate potential of VR into different industry verticals or contexts.

We didn’t have time to fully unpack all of the similarities to some of her examples of contexts, but she said that it’s in the spirit of other social theorists who have started to map out their own theories for how to make sense of society. I’m actually skeptical that it’s possible to come up with a comprehensive or complete mapping of all human contexts. But I’m sharing it here in case it might shed any additional insights into how something like Nissenbaum’s Contextual Integrity theory of privacy could be translated into a Federal Privacy Law framework, and whether it’s possible or feasible to comprehensively map out the purposes, goals, and values of each of these contexts, the appropriate information flows, and the stakeholders for each of them.

It’s probably an impossible task to create a comprehensive map of all contexts, but it could provide some helpful insights into the nature of VR and AR. I think context is a key feature of XR. With augmented reality, you’re able to add additional contextual information on top of the center of gravity of an existing context. And with virtual reality, you’re able to do a complete context shift from one context to another. Having a theoretical framework for context may also help develop things like the “contextually-aware AI” that Facebook Reality Labs is currently working on. And if VR has the capability to represent simulations of each of these contexts, then a taxonomy of contexts could provoke thought experiments on how existing contextual informational norms might be translated and expressed within virtual reality as they’re blended and blurred with the emerging contextual norms of XR.

I’m really impressed with the theory of Contextual Integrity, since many intuitive aspects of the contextual nature of privacy are articulated through Nissenbaum’s framework. But it also helps to elaborate how Facebook’s notice and consent-based approach to privacy in XR does not fully live into their Responsible Innovation principles of “Don’t Surprise People” or “Provide Controls that Matter.” Not only do we not have full agency over information flows (since they don’t matter enough for Facebook to provide those controls), but there’s no way to verify whether the information flows are morally justifiable, since there isn’t any transparency or accountability around these information flows and all of the intentions and purposes Facebook has for the data that are made available to them.

Finally, I’d recommend folks check out Nissenbaum’s 2010 book Privacy in Context: Technology, Policy, and the Integrity of Social Life for a longer elaboration of her theory. I also found these two lectures on Contextual Integrity particularly helpful in preparing for my interview with her on the Voices of VR podcast: Crypto 2019 (August 21, 2019) and the University of Washington Distinguished Speaker Series at the Department of Human Centered Design & Engineering (March 5, 2021). And here’s the video where I first discovered Nissenbaum’s Contextual Integrity theory of privacy, where she was in conversation with philosophers of privacy Dr. Anita Allen and Dr. Adam Moore at the “Privacy Conference: Law, Ethics, and Philosophy of End User Responsibility for Privacy,” recorded on April 24, 2015 at the University of Pennsylvania Carey Law School.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

itif-banner

Ellysse-Dick
Ellysse Dick is a policy analyst who has written 10 technology policy publications about VR & AR over the last year for the Information Technology & Innovation Foundation. The ITIF is a non-partisan, non-profit tech policy think tank that says its mission is to “advance innovation,” believes “disruptive innovation almost always leads to economic and social progress,” has a “considered faith in markets and businesses of all sizes,” and believes in “deftly tailoring laws and regulations to achieve their intended purposes in a rapidly evolving economy.” In other words, they lean towards libertarian ideals of limited government in order to avoid reactionary technology policy that might stifle technological innovation.

While the ITIF is an independent organization, their tech policy positions have a strong alignment with the types of arguments I’d expect to hear from Facebook themselves. The ITIF lists Facebook as a financial supporter, and Facebook has listed the ITIF as part of Facebook’s Political Engagement. But Facebook also says, “we do not always agree with every policy or position that individual organizations or their leadership take. Therefore, our membership, work with organizations, or event support should not be viewed as an endorsement of any particular organization or policy.” And Dick says that she maintains editorial independence over the tech policy research that she’s doing within VR and AR. All that said, there’s likely a lot of alignment between ITIF’s published tech policy positions and the implicit and often undeclared policy positions of Facebook.

Ellysse Dick has written about XR privacy issues in these three publications:

One really interesting insight Dick had in her December 4th piece on Augmented Reality and bystander privacy is that there are already a lot of social norms or legal precedents when it comes to the different types of data collection. Here’s the taxonomy of data collection that she lays out:

  • Continuous data collection (non-stop & persistent recording)
  • Bystander data collection (relational dynamics of recording other people)
  • Portable data collection (the mobile & portable ease of recording anywhere)
  • Inconspicuous data collection (notification & consent norms around capturing video or spatial context)
  • Rich data collection (geographic context & situational awareness)
  • Aggregate data collection (combining information from third-party sources)
  • Public data exposure (associating public data with individuals in a real-time context)

Dick says that it’s the combination of the real-time, portable, aggregate, and persistent nature of data recording that may create a new context requiring either new social norms or new laws.
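
As a rough illustration of how these categories compose (my own framing, not code from Dick’s publications), the taxonomy can be modeled as a set of flags, where what makes AR capture feel like a new context is that a single session plausibly triggers most of them at once:

```python
from enum import Flag, auto

class Collection(Flag):
    CONTINUOUS = auto()       # non-stop & persistent recording
    BYSTANDER = auto()        # records other people
    PORTABLE = auto()         # recording can happen anywhere
    INCONSPICUOUS = auto()    # no obvious notice to those recorded
    RICH = auto()             # geographic & situational context
    AGGREGATE = auto()        # joined with third-party sources
    PUBLIC_EXPOSURE = auto()  # links public data to people in real time

# A hypothetical always-on AR headset worn down a public street:
ar_session = (Collection.CONTINUOUS | Collection.BYSTANDER |
              Collection.PORTABLE | Collection.INCONSPICUOUS |
              Collection.RICH)

assert Collection.BYSTANDER in ar_session  # bystanders are swept in
```

Each individual flag has existing norms or precedents behind it (CCTV, camcorders, smartphones); it is the combination that lacks settled social norms or law.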

I wanted to talk with Dick about her take on XR privacy, why she sees the need for a US Federal Privacy Law, some of the concerns around government surveillance and the third-party doctrine, and how biometrically-inferred data should be a key part of the broader discussion about a comprehensive approach to privacy. She calls this data “computed data,” while Brittan Heller refers to it as “biometric psychographic data.”

Dick is not as concerned about the near-term risks of making inferences from physiological or biometric data from XR, and she cautions us against a “privacy panic” that catalyzes a reactionary technology policy that leads to technologies being banned. I guess I’m on the other side, having a reasonable amount of privacy panic, considering that technology policy analyst Adam Kovacevich has estimated the odds of a Federal Privacy Law passing at 0-10% for the more controversial sticking points, or around 60% if the Democrats compromise on the private right of action clause.

Dick says that the ITIF follows the innovation principle, which is to not overregulate in advance for harms that may or may not happen. Creating laws too early has the potential to stifle innovation, to not have the intended consequences, or to quickly go out of date. Dick recommends soft laws, self-regulation, and trade organizations as the first step until policy gaps can be more clearly identified. The end result is that the most likely and default position is to take no pre-emptive action regarding these privacy concerns around XR, which will likely result in us trying to reel things back once they’ve gone too far.

Dick seems to have a lot of faith that companies will not go too far with tracking our data and targeting ads in ways that could lead to significant behavioral modification, but to me the more pragmatic view is that companies like Facebook will continue to aggregate as much data as possible to track our attention and behaviors, creating an asymmetry of power when it comes to delivering targeted advertising.

Overall, the ITIF generally takes a pretty conservative approach to new technology policy, suggesting that we either wait and see or rely upon self-regulation and consensual approaches. Dick and I had a spirited debate on the topic of XR privacy, and in the end we agree on the need for a new US Federal Privacy Law. I think we’d also agree that we need the right amount of urgency to make it a public policy priority without triggering a reactionary panic that leads to a technology policy that bans certain immersive technologies.

And I believe that it’s still up for debate how much privacy panic we should collectively have on this issue, especially considering that there is no way to verify the appropriate flows of information given the broad mandate that Terms of Service & Privacy Policy adhesion contracts give to Facebook for how it can use the data it captures.

In the next episode, I’ll be diving into how philosopher Helen Nissenbaum defines privacy as appropriate information flows within a given context in her Contextual Integrity theory of privacy. She also argues that the notice and consent model of privacy is broken, and her contextual integrity approach may provide some more viable and robust solutions for ensuring users have more transparency into how their data are being used.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

project-galea
conor-russomanno
OpenBCI’s Project Galea was originally announced on November 19, 2020 as a “hardware and software platform that merges next-generation biometrics with mixed reality.” OpenBCI has been collaborating with MIT Ph.D. student Guillermo Bernal to integrate PhysioHMD’s design, which includes EOG, EMG, EDA, and PPG sensors in addition to 10 EEG channels and eye tracking, into a single headset. On January 24th, 2021, Valve CEO Gabe Newell told New Zealand’s 1 NEWS that Valve was “working on an open source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities,” and 1 NEWS reported that Valve was collaborating with OpenBCI. Then on February 4th, 2021, Tobii announced that it was “engaging in research collaboration with Valve and OpenBCI by incorporating Tobii’s eye tracking technology with elements of Valve’s Index hardware to produce developer units for the recently announced Galea Beta Program.” The Project Galea dev kits are expected to ship sometime in 2022, and Newell told 1 NEWS, “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake.”
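
For a sense of what developer access to this kind of multimodal device looks like, here is a minimal sketch using the open-source BrainFlow library commonly used with OpenBCI boards. It streams from BrainFlow’s built-in synthetic board, so it runs without any hardware; I’m not assuming that Galea itself is exposed through this exact interface:

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

board_id = BoardIds.SYNTHETIC_BOARD.value   # stand-in for real hardware
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(5)                     # accumulate ~5 seconds of samples
data = board.get_board_data()     # 2-D array: one row per channel
board.stop_stream()
board.release_session()

# Multimodal devices expose their sensor streams as channel groups.
eeg_rows = BoardShim.get_eeg_channels(board_id)
rate_hz = BoardShim.get_sampling_rate(board_id)
print(f"{len(eeg_rows)} EEG channels at {rate_hz} Hz; "
      f"{data.shape[1]} samples captured")
```

The same session/stream pattern applies whichever physical board is behind it, which is part of why an open hardware-plus-software stack has seeded so much experimentation.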

I first interviewed OpenBCI co-founder and CEO Conor Russomanno at Rothenberg Ventures’ Founder’s Day on May 16, 2016, the day before the 2016 Neurogaming Conference. It was also just after the Silicon Valley Virtual Reality Conference (April 27-29, 2016), which is where I first really started covering the topic of privacy in VR, after an UploadVR article on Facebook & VR privacy caught the attention of Senator Al Franken, who wrote Oculus a letter. I first spoke to Russomanno about some of the privacy implications of neuro-technologies back in 2016, and the ethical implications of neuro-tech have only grown as the capabilities of physiological measurement devices, and what can be inferred from them, have increased.

I recently heard Russomanno speak about Project Galea on May 26th at the Non-Invasive Neural Interfaces: Ethical Considerations symposium co-sponsored by the Columbia Neuro-Rights Initiative and Facebook Reality Labs.

He also discussed some of the ethical and privacy implications of neuro-technologies, and he got into an interesting debate with Rafael Yuste, who I interviewed about neuro-rights in episode #994. They were debating whether or not technologies that are able to measure physiological data should be classified as medical devices that are capturing medical data. Russomanno doesn’t believe the hardware should be regulated by the FDA under medical regulations, since it’s likely that a project like OpenBCI would never have been able to exist as it does today, but he’s also open to the possibility of giving special treatment to the data. Ultimately, Russomanno hopes that someday consumers will have more ownership and control over the data that are captured by these devices, but there’s a long way to get there from where we are right now.

I had a chance to talk with Russomanno on June 4th about the evolution of OpenBCI into Project Galea, a little bit about how their collaboration with Valve and Tobii came about, what types of insights they’re able to gather from these different physiological and biometric measurement sensors, the value of combining different sensory modalities together, and the potential of closed feedback-loop immersive systems that are able to help track and modulate different aspects of your brain, mind, and ultimately consciousness. We also talk about some of the potential healing, quantified self, and consciousness hacking applications, but also the risks that these technologies could undermine our rights to mental privacy and agency. There are still more open questions than answers right now, but the open hardware approach by OpenBCI has been able to seed quite a lot of experimentation and research evaluation by major XR companies across the industry.

I’ll be releasing a series of interviews on neuro-rights and the ethical implications of XR & neuro-technologies, starting with OpenBCI, then hearing about the technology policy research papers written by the Information Technology & Innovation Foundation’s Ellysse Dick, then talking with philosopher Helen Nissenbaum, founder of the Contextual Integrity theory of privacy, and finally four representatives from the Electronic Frontier Foundation talking about privacy from a human rights perspective and reporting back from RightsCon. Also be sure to check out my recent conversations with Rafael Yuste on neuro-rights, Brittan Heller on biometric psychography, as well as with Joe Jerome for a primer on the history of privacy law.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

After hearing about all of the sensors that OpenBCI’s Project Galea was integrating, I did an audit of the different physiological and biometric sensors:

Also, here’s a state of XR privacy talk that I gave at the AR/VR Association Global Summit that provides an overview of some of the biggest issues on privacy with the intersection between XR and neuro-technologies.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

full_Breonna_s-Garden_1920x1080

Breonna’s Garden is an augmented reality experience created by Lady PheOnix in collaboration with Ju’Niyah Palmer to honor the life of her sister, Breonna Taylor, and it premiered at the 2021 Tribeca Film Festival. I found it to be a profoundly moving experience, and I lived into its intention of connecting to the tender parts of myself while listening to the recorded memories from Taylor’s family and friends. The iOS app for Breonna’s Garden is available to try out here.

lady-phoenix
I had a chance to talk with the creator Lady PheOnix about her journey in creating this project, the process of collaborating with Breonna Taylor’s family, and the underlying intentions and invitations that she has embedded into this piece — including an opportunity to record your own memories of loved ones you may have lost. Lady PheOnix referred to this piece as a sort of healing balm where you can be tender, battered, and bruised, and I certainly was able to experience that in this powerful piece.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Neuro-Rights

yuste
Neuroscientist Rafael Yuste started Columbia University’s Neuro-Rights Initiative to promote an ethical framework for preserving a set of human rights within neuro-technologies. He co-authored a 2017 Nature paper titled “Four ethical priorities for neurotechnologies and AI” after creating the “Morningside Group” of over 20 neuroscientists who were concerned about the potential ethical harms caused by neuro-technologies.

Another neuro-right was added in the latest neuro-rights paper, titled “It’s Time for Neuro-Rights.” This brings the list up to five: the right to identity, the right to agency, the right to mental privacy, the right to equitable access to neural augmentation, and the right to be free from algorithmic bias. In the end, Yuste hopes to gain momentum within the United Nations to add these fundamental neuro-rights to the Universal Declaration of Human Rights, which could then put pressure on regional legislators to change their laws to stay in compliance with these neural rights.

On May 26th, there was a day-long Non-Invasive Neural Interfaces: Ethical Considerations symposium featuring cutting-edge neuroscientists working to decode the brain, EMG specialists, and companies working on commercial-grade neuro-technologies. The gathering was sponsored by the Columbia Neuro-Rights Initiative as well as by Facebook Reality Labs, as both sponsors wanted to bring scientists and ethicists together to debate the ethical and privacy implications of these neuro-technologies.

I did some extensive coverage of the Non-Invasive Neural Interfaces: Ethical Considerations event within this Twitter thread here:

Part of the concern about these neuro-technologies is that there is already a large amount of data from the brain that can be decoded, and this is only going to increase over time. Yuste also brought up that there are existing methods to stimulate the brain in ways that could violate our right to agency. Whether it’s reading from or writing to our brains, Yuste says that we can’t be walking around with the metaphoric hoods of our brains opened up for any outside actor to measure or stimulate.

In the end, there was a lot more science shared at the Non-Invasive Neural Interfaces gathering than meaty ethical debate. There was not enough diversity of speaker backgrounds to hold a true Multi-Stakeholder Immersion gathering that included perspectives from privacy advocates, philosophers, or privacy lawyers. Part of what makes the topic of preserving mental privacy so challenging is that it requires a multi-disciplinary approach representing a critical mass of stakeholders with competing interests in order to have robust debates on all of the risks and benefits across different contexts. Dealing with the complexity of these emerging technologies also requires some potentially new conceptual frameworks around the philosophy of privacy, such as Dr. Helen Nissenbaum’s theory of Contextual Integrity or Dr. Anita Allen’s approach of treating privacy as a human right (see my talk for more context on this).

There was some interesting resistance to one of Yuste’s proposed strategies for preserving our right to mental privacy: treating non-invasive neural interfaces as medical devices and their data as medical data. This would regulate data that could be used to decode what’s happening within the body, but it would also limit how the variety of different brain stimulation devices could be used.

Neuro-tech start-ups like OpenBCI and Kernel resisted this suggested classification and regulation of neuro-tech as medical devices, since their companies probably wouldn’t be where they are today had they been subject to additional medical regulations. But Yuste argues that the use of neural data could have profound impacts on the integrity of our bodies, and so there is a compelling argument that it’s a type of sensitive data that is most analogous to medical data.

After listening to Yuste at the “Non-Invasive Neural Interfaces: Ethical Considerations” conference, I reached out to have him on the Voices of VR podcast so that he could elaborate on the current state of the art in the neuroscience of neuro-tech, what he sees as the most viable strategy for protecting our right to mental privacy, why looking at these issues through the lens of human rights is so compelling, where the future of neuro-rights is headed, and why he’s so excited about the revolutionary and humanistic potential of neuro-technologies to help us understand our brains and ourselves better.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s my 22-minute talk on “State of Privacy in XR & Neuro-Tech: Conceptual Frames” presented at the VR/AR Global Summit on June 2, 2021.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

HTC Vive Focus 3 & Vive Pro 2
HTC announced two new enterprise-focused VR headsets at their Vivecon event on Tuesday, May 11th. The Vive Focus 3 is a standalone VR HMD with an impressive 2,448 × 2,448 per-eye resolution, a 90Hz refresh rate, a 120° FoV, new controllers, and a swappable battery, priced at $1,300. The new Vive Pro 2 VR HMD was also announced with the same 2,448 × 2,448 (6.0MP) per-eye resolution, but with a 120Hz refresh rate, dual-element Fresnel lenses, a 120° diagonal FoV, an $800 price, and a June 3 release. HTC also announced a number of new Vive Business software offerings, including the Vive Business App Store, Vive Business Training, Vive Business Streaming, & Vive Business Device Management.

I had a chance to talk with Alvin Wang Graylin, the China President at HTC, about HTC’s two new VR headsets, the launch of Vive Business, and more context on their Vive XR Ecosystem, new Vive Trackers, the Facial Tracker, and the trends of virtual idols & VTubers, including their new virtual spokesperson named VEE.

Here’s my Twitter thread from Vivecon and Virtual Vive Ecosystem Conference:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Immersive Arcade
The second volume of the Immersive Arcade Showcase, featuring four immersive stories from the United Kingdom, launches today as DLC within the Museum of Other Realities. The theme of the second volume is Memories & Dreams, and it features Vestige, Limbo, Lucid, & Somnai. This showcase will be running for the next 8 weeks, and is produced by Digital Catapult in collaboration with Kaleidoscope’s Immersive Production Studio, UK Research & Innovation, and Audience of the Future.

I had a chance to talk with Jessica Driscoll, Head of Immersive Technologies at Digital Catapult, about the first and second showcases, as well as more context on this government-funded digital technology innovation centre, which is “accelerating the adoption of new and emerging technologies to drive regional, national and international growth for UK businesses across the economy.” Their CreativeXR program is a technology accelerator for the arts and culture industries, which funds a lot of VR & AR stories, but there are other initiatives at Digital Catapult around IoT, AI, and 5G that have a lot of overlap with the companies working on XR projects. Digital Catapult’s Immersive Arcade program has also produced a timeline of immersive art & story projects from the past 20 years that were produced in the UK.

It’s great to see this type of funding and support from the United Kingdom for the immersive industry, and I have definitely been seeing the impact of the projects that they’re funding, as many of them have appeared throughout the film festival circuit since the CreativeXR program started in 2017. This three-volume retrospective series of Immersive Arcade is a great opportunity to see some of the immersive stories that have come out of the UK over the past 5 years within the context of the Museum of Other Realities, which has created some really impressive immersive installations and transportative worlds to help set the context for each of these pieces.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Responsible Innovation Principles Critique
Facebook’s Project Aria announcement at Facebook Connect in September raised a number of different ethical questions among anthropologists and technology ethicists. Journalist Lawrence Dodds described it on Twitter by saying, “Facebook will send ‘hundreds’ of employees out into public spaces recording everything they see in order to research privacy risks of AR glasses.” During the Facebook Connect keynote, Head of Facebook Reality Labs Andrew Bosworth described Project Aria as a prototype research device worn by Facebook employees and contractors that would be “recording audio, video, eye tracking, and location data” as part of “egocentric data capture.” In the Project Aria launch video, Director of Research Science at Facebook Reality Labs Research Richard Newcombe said that “starting in September, a few hundred Facebook workers will be wearing Aria on campus and in public spaces to help us collect data to uncover the underlying technical and ethical questions, and start to look at answers to those.”

The idea of Facebook workers wearing always-on AR devices recording egocentric video and audio data streams across private and public spaces in order to research the ethical and privacy implications raised a lot of red flags from social science researchers. Anthropologist Dr. Sally A. Applin wrote a Twitter thread explaining “Why this is very, very bad.” And tech ethicist Dr. Catherine Flick said, “And yet Facebook has a head of responsible innovation. Who is featured in an independent publication about responsible tech talking about ethics at Facebook. Just mindboggling. Does this guy actually know anything about ethics or social impact of tech? Or is it just lip service?” The two researchers connected via Twitter and agreed to collaborate on a paper over the course of six months, and the result is a 15,000-word peer-reviewed paper titled “Facebook’s Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons,” which was published in the latest issue of the Journal of Responsible Technology.

Applin & Flick deconstruct the ethics of Project Aria based upon Facebook’s own four Responsible Innovation Principles, which were announced by Boz in the same Facebook Connect keynote after the Project Aria launch video. Those principles are: #1) Don’t surprise people. #2) Provide controls that matter. #3) Consider everyone. And #4) Put People First. In their paper, Applin & Flick conclude that

Facebook’s Project Aria has incomplete and conflicting Principles of Responsible Innovation. It violates its own principles of Responsible Innovation, and uses these to “ethics wash” what appears to be a technological and social colonization of the Commons. Facebook enables itself to avoid responsibility and accountability for the hard questions about its practices, including its approach to informed consent. Additionally, Facebook’s Responsible Innovation Principles are written from a technocentric perspective, which precludes Facebook from cessation of the project should ethical issues arise. We argue that the ethical issues that have already arisen should be basis enough to stop development—even for “research”. Therefore, we conclude that the Facebook Responsible Innovation Principles are irresponsible and as such, insufficient to enable the development of Project Aria as an ethical technology.

I reached out to Applin & Flick to come onto the Voices of VR podcast to give a bit more context on their analysis through their anthropological & technology ethics lenses. Sally Applin is an anthropologist looking at the cultural adoption of emerging technologies through the lens of anthropology and her social multi-dimensional communications theory called PolySocial Reality. She’s a Research Fellow at the HRAF Advanced Research Centres (EU), Canterbury, Centre for Social Anthropology and Computing (CSAC), and a Research Associate at the Human Relations Area Files (HRAF), Yale University. Catherine Flick is a Reader (akin to an associate professor) at the Centre for Computing and Social Responsibility at De Montfort University in the United Kingdom.

We deconstruct Facebook’s Responsible Innovation Principles in the context of technology ethics and other responsible innovation best practices, and critically analyze how quickly these principles break down even when applied to the Project Aria research project itself. Facebook has been invoking its responsible innovation principles whenever ethical questions come up, but as we discuss in this podcast, these principles are not clear, coherent, or robust enough to provide useful insight into some of the most basic aspects of bystander privacy and consent for augmented reality. Applin & Flick have a much more comprehensive breakdown in their paper at https://doi.org/10.1016/j.jrt.2021.100010, and this conversation should help give an overview and primer for how to critically evaluate Facebook’s responsible innovation principles.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Tribeca 2021
The 29 immersive experiences that are a part of the Tribeca Immersive 2021 line-up were announced on Tuesday, April 29. There will be 11 Virtual Arcade experiences available starting on June 9 within the Museum of Other Realities, 5 Storyscape experiences only available in-person at Tribeca, and 13 outdoor screenings (some of which will also be available remotely). I got the run-down of the Storyscapes & highlights from the outdoor screenings from chief curator Loren Hammonds, as well as more context about the first major film festival to host IRL gatherings since the pandemic turned everything remote in March 2020. The 17 New Images Paris experiences in competition were also announced today, and they will be showing next to the Tribeca Virtual Arcade within the MOR.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality