#999: The EFF on XR Privacy & How AR/VR Needs a Human Rights Framework + Timeline of UN Resolutions on Privacy

The Electronic Frontier Foundation is a non-profit that has been defending civil liberties & digital rights for over 30 years, since it was founded on July 10, 1990. My first conversation with the EFF was with Dave Maass at the VR Privacy Summit in November 2018, when consumer VR was still in a nascent phase and a lower priority relative to other civil liberty threats. But over the past year, the EFF has started to investigate the civil liberties and privacy implications of XR technologies, especially as VR has been gaining commercial momentum. The EFF published “If Privacy Dies in VR, It Dies in Real Life” on August 25, 2020, a week after Facebook announced that Facebook accounts would be required to use Oculus VR headsets. After Facebook’s Project Aria AR prototype was announced at Facebook Connect, the EFF published “Augmented Reality Must Have Augmented Privacy” on October 16, 2020. Their latest article, published on June 2, 2021, is “Your Avatar is You, However You See Yourself, and You Should Control Your Experience.”

The EFF helped to organize a RightsCon session on June 10, 2021 titled “As AR/VR becomes a reality, it needs a human rights framework.” Avi Bar-Zeev and I helped to contextualize some of the issues with VR for the human rights lawyers and activists attending RightsCon, and then there were four different breakout groups for half the session, brainstorming different human rights, digital rights, and civil liberties across different contexts, including Law Enforcement Access to Potential Evidence & Surveillance, Biometric Inferences and Mental Privacy, Avatars and Virtual Agents, and Content Moderation.

I wanted to gather representatives from the EFF the day after the RightsCon session to debrief on insights into what a human rights framework might look like for XR from the perspectives of law, tech policy, tech architecture, and grassroots organizing.

  • Kurt Opsahl – Deputy Executive Director & General Counsel for the Electronic Frontier Foundation
  • Katitza Rodriguez – Policy Director for Global Privacy for EFF
  • Jon Callas – Director of Technology Projects
  • Rory Mir – Grassroots Advocacy Organizer

We talk about human rights insights into XR technologies, along with the provocation of what it might look like to have more civil liberty rights within XR. We talk about the first UN General Assembly resolution on “The right to privacy in the digital age,” and how a global policy perspective provides new insights through international data protection regulations and treaties, resolutions passed by the UN General Assembly and the UN Human Rights Council, and actions taken by the UN Special Rapporteur on the Right to Privacy. We also cover some of the US domestic privacy laws concerning the Fourth Amendment, the third-party doctrine, different philosophical approaches to privacy, the invisible nature of privacy harms, and whether it’s possible to build off of laws like the Video Privacy Protection Act, since it contains some of the strongest US protections for privacy. We talk about whether or not existing notice & consent models provide truly informed consent that’s freely given. We touch on a number of tech projects like Privacy Badger and the EFF’s first VR project Spot the Surveillance, which tech companies have your back in protecting your privacy, and how to get involved in your local EFF chapter at eff.org/fight.

How do we ensure that we have just as many, if not more, human rights in virtual reality? It’s going to take looking at existing human rights frameworks, and listening to the human rights advocates from around the world who are tracking the intersection of digital rights and human rights across different contexts. The EFF is helping to lead some vital discussions on the topic, and I was happy to be able to participate in their RightsCon session and do this debriefing the day after. Below is a rough timeline of UN resolutions and reports from the UN General Assembly and the UN Human Rights Council related to the right to privacy.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Timeline of United Nations Resolutions Related to Privacy

10 December 1948: Universal Declaration of Human Rights, General Assembly resolution 217A [A/RES/217(III)]

Article 12
“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

16 December 1966: The International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, UN General Assembly Resolution 2200A (XXI) [A/RES/2200(XXI)].

Article 17 of the International Covenant on Civil and Political Rights
“1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
2. Everyone has the right to the protection of the law against such interference or attacks.”

4 December 1986: UN General Assembly Resolution 41/128 [A/RES/41/128]: Declaration on the Right to Development

1 September 1988: Sub-Commission on Prevention of Discrimination and Protection of Minorities 1988/29 [E/CN.4/1989/3, E/CN.4/Sub.2/1988/45]: Guidelines on the Use of Computerized Personal Files

28 September 1988: 43/40 Supplement No. 40 [A/43/40]: Report of the Human Rights Committee

Right to privacy
69. With reference to that issue, members of the Committee requested details on protection against arbitrary and unlawful interference with privacy, family, home, and correspondence, particularly with regard to postal and telephone communications. It was also asked whether evidence obtained in violation of the right to privacy could be used in the courts and, if so, whether such instances had occurred and what the reaction of the court had been, whether authorities other than judges could order a house to be searched and under what circumstances and whether wire-tapping was authorized by law.

6 March 1989: Commission on Human Rights resolution 1989/43 [E/CN.4/1989/86]: Guidelines on the use of computerized personal files

24 May 1989: UN Economic & Social Council 1989/78 [E/RES/1989/78]: Guidelines on the use of computerized personal data files

15 December 1989: General Assembly Resolution 44/132 [A/RES/44/132]: Guidelines for the regulation of computerized personal data files

14 December 1990: General Assembly Resolution 45/95 [A/RES/45/95]: Guidelines for the regulation of computerized personal data files

13 October 1993: UN General Assembly [A/CONF.157/24 (Part I)]: Vienna Declaration and Programme of Action, adopted at the UN World Conference on Human Rights, Vienna, 14-25 June 1993

7 January 1994: UN General Assembly Resolution 48/141 [A/RES/48/141]: High Commissioner for the promotion and protection of all human rights

24 February 2006: UN General Assembly Draft Resolution 60/L.48 [A/60/L.48]: Draft resolution submitted by the President of the General Assembly, Human Rights Council

“Assume the role and responsibilities of the Commission on Human Rights relating to the work of the Office of the United Nations High Commissioner for Human Rights, as decided by the General Assembly in its resolution 48/141 of 20 December 1993;”

15 March 2006: UN General Assembly Resolution 60/251 [A/RES/60/251]: Human Rights Council. The Human Rights Council assumes “the role and responsibilities of the Commission on Human Rights”

22 June 2006: UN Human Rights Council 1/1 [A/HRC/1/1]: Adoption of the Agenda and Organization of Work, Note by the Secretary-General

27 April 2007: UN Human Rights Council 5/2 [A/HRC/5/2]: Implementation of General Assembly Resolution 60/251 of 15 March 2006 entitled “Human Rights Council”

16 May 2007: UN Human Rights Council 5/1 [A/HRC/5/1]: Provisional Agenda, Note by the Secretary-General. “To consider in particular the institution-building process” for Human Rights.

28 March 2008: UN Human Rights Council Resolution 7/36 [A/HRC/RES/7/36]: Mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

30 April 2009: UN Human Rights Council 11/4 [A/HRC/11/4]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

2 October 2009: UN Human Rights Council Resolution 12/16 [A/HRC/RES/12/16]: Promotion and protection of all human rights, civil, political, economic, social and cultural rights, including the right to development: Freedom of opinion and expression

21 March 2011: UN Human Rights Council 17/31 [A/HRC/17/31]: Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, John Ruggie

A/HRC/17/31 Annexes (p. 6-27): Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework

8 April 2011: UN Human Rights Council Resolution 16/4 [A/HRC/RES/16/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

16 May 2011: UN Human Rights Council 17/27 [A/HRC/17/27]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

10 August 2011: UN General Assembly 66/290 [A/66/290]: Promotion and protection of the right to freedom of opinion and expression

12 September 2011: International Covenant on Civil and Political Rights [CCPR/C/GC/34]: General comment No. 34, Article 19: Freedoms of opinion and expression

17 April 2013: UN Human Rights Council 23/40 [A/HRC/23/40]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

“The present report, submitted in accordance with Human Rights Council resolution 16/4 [of 8 April 2011], analyses the implications of States’ surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression. While considering the impact of significant technological advances in communications, the report underlines the urgent need to further study new modalities of surveillance and to revise national laws regulating these practices in line with human rights standards.”

1 November 2013: UN General Assembly [A/C.3/68/L.45]: Brazil and Germany: draft resolution, The right to privacy in the digital age [would ultimately pass as A/RES/68/167 on 18 December 2013: The right to privacy in the digital age]

26 November 2013: UN General Assembly [A/C.3/68/SR.23]: Third Committee, Summary record of the 23rd meeting, held at Headquarters, New York, on Wednesday, 23 October 2013, at 10 a.m.

“Ms. Almeida Watanabe Patriota (Brazil) expressed concern that some countries had still not accepted the universal periodic review. In the case of country-specific human rights resolutions, negotiations should be more transparent. Her delegation commended the impartial work of the commission of inquiry on the Syrian Arab Republic. In view of the recent revelations of violations of the fundamental right to privacy, she asked what the international community could do to help to enforce that right and whether, in the High Commissioner’s opinion, the absence of Internet privacy guarantees might undermine freedom of expression. She would also appreciate further comments on what Member States could do to help others realize that securing basic human rights for all vulnerable groups, including lesbian, gay, bisexual and transgender people, did not represent an emphasis on any one group.”

10 December 2013: UN General Assembly [A/68/456/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee. Rapporteur: Ms. Adriana Murillo Ruin

“At its 2nd plenary meeting, on 20 September 2013, the General Assembly, on the recommendation of the General Committee, decided to include in the agenda of its sixty-eighth session, under the item entitled “Promotion and protection of human rights”, the sub-item entitled “Human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms” and to allocate it to the Third Committee.”

“At the 43rd meeting, on 7 November, the representatives of Brazil and Germany made statements and, on behalf of Austria, Bolivia (Plurinational State of), Brazil, the Democratic People’s Republic of Korea, Ecuador, France, Germany, Indonesia, Liechtenstein, Peru, Switzerland and Uruguay, jointly introduced a draft resolution, entitled “The right to privacy in the digital age”, which read…”

18 December 2013: UN General Assembly Resolution 68/167 [A/RES/68/167]: The right to privacy in the digital age (the first General Assembly resolution on the topic)

“Welcoming the report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression [A/HRC/23/40] submitted to the Human Rights Council at its twenty-third session, on the implications of State surveillance of communications on the exercise of the human rights to privacy and to freedom of opinion and expression,”

30 June 2014: UN Human Rights Council 27/37 [A/HRC/27/37]: The right to privacy in the digital age, Report of the Office of the United Nations High Commissioner for Human Rights

18 December 2014: UN General Assembly Resolution 69/166 [A/RES/69/166]: The right to privacy in the digital age

26 March 2015: UN Human Rights Council Resolution 28/16 [A/HRC/RES/28/16]: The right to privacy in the digital age

“Decides to appoint, for a period of three years, a special rapporteur on the right to privacy, whose tasks will include… to submit an annual report to the Human Rights Council and to the General Assembly, starting at the thirty-first session and the seventy-first session respectively;”

22 May 2015: UN Human Rights Council 29/32 [A/HRC/29/32]: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye

21 October 2015: UN General Assembly Resolution 70/1 [A/RES/70/1]: Transforming our world: the 2030 Agenda for Sustainable Development

16 December 2015: UN General Assembly Resolution 70/125 [A/RES/70/125]: Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society [document issued 1 February 2016]

16 December 2015: Press release for the General Assembly of the United Nations, “WSIS+10 Outcome: Countries Adopt New Plan to Utilize Internet and Information Technologies in Implementation of New Sustainable Development Agenda”

1 July 2016: UN Human Rights Council 32/13 [A/HRC/RES/32/13]: The promotion, protection and enjoyment of human rights on the Internet

15 December 2016 (publication date): VIDEO: Professor Cannataci: UN Special Rapporteur on the Right to Privacy

Five priorities of the UN Special Rapporteur on the Right to Privacy: a better understanding of privacy; security & surveillance; Big Data & Open Data; Health Data; and Personal Data Held by Corporations

19 December 2016: UN General Assembly Resolution 71/199 [A/RES/71/199]: The right to privacy in the digital age

23 March 2017: UN Human Rights Council Resolution 34/7 [A/HRC/RES/34/7]: The right to privacy in the digital age

17 to 28 June 2017: Visit to the United States of America by the Special Rapporteur on the right to privacy, Joseph A. Cannataci [see report A/HRC/46/37/Add.4, published on 20 January 2021]

6 September 2017: UN Human Rights Council 34/60 [A/HRC/34/60]: Report of the Special Rapporteur on the right to privacy

“In his report, prepared pursuant to Human Rights Council resolution 28/16, the Special Rapporteur on the right to privacy focuses on governmental surveillance activities from a national and international perspective. The Special Rapporteur elaborates on the characteristics of the international legal framework and the interpretation thereof. He also describes recent developments and trends, how these can be studied and how they interact with the enjoyment of the right to privacy and other interconnected human rights. Consequently, he outlines first approaches to a more privacy-friendly oversight of government surveillance. In conclusion, the Special Rapporteur reports on his activities in the period covered by his report.”

19 October 2017: UN General Assembly 72/540 [A/72/540]: Right to privacy, Note by the Secretary-General, Special Rapporteur of the Human Rights Council on the right to privacy, Joseph A. Cannataci, submitted in accordance with Human Rights Council resolution 28/16

6 April 2018: UN Human Rights Council Resolution 37/2 [A/HRC/RES/37/2] The right to privacy in the digital age

5 July 2018: UN Human Rights Council Resolution 38/7 [A/HRC/RES/38/7]: Promotion, protection and enjoyment of human rights on the Internet

13 July 2018: UN Human Rights Council 38/35/Add.5 [A/HRC/38/35/Add.5]: Encryption and Anonymity follow-up report

3 August 2018: UN Human Rights Council 39/29 [A/HRC/39/29]: The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights

“The present report is submitted pursuant to resolution 34/7, in which the Human Rights Council requested the High Commissioner for Human Rights to prepare a report identifying and clarifying principles, standards and best practices regarding the promotion and protection of the right to privacy in the digital age, including the responsibility of business enterprises in this regard, and present it to the Human Rights Council at its thirty-ninth session.”

29 August 2018: UN General Assembly 73/348 [A/73/348]: Promotion and protection of the right to freedom of opinion and expression

17 October 2018: UN General Assembly 73/438 [A/73/438]: Right to privacy, Note by the Secretary-General

29 November 2018: UN General Assembly 73/589 [A/73/589]: Promotion and protection of human rights. Report of the Third Committee

1 December 2018: UN General Assembly 73/589/Add.1 [A/73/589/Add.1]: Promotion and protection of human rights: implementation of human rights instruments, Report of the Third Committee

3 December 2018: UN General Assembly 73/589/Add.4 [A/73/589/Add.4]: Promotion and protection of human rights: comprehensive implementation of and follow-up to the Vienna Declaration and Programme of Action, Report of the Third Committee

4 December 2018: UN General Assembly 73/589/Add.2 [A/73/589/Add.2]: Promotion and protection of human rights: human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms, Report of the Third Committee

6 December 2018: UN General Assembly 73/589/Add.3 [A/73/589/Add.3]: Promotion and protection of human rights: human rights situations and reports of special rapporteurs and representatives, Report of the Third Committee

17 December 2018: UN General Assembly Resolution 73/179 [A/RES/73/179]: The right to privacy in the digital age

13 February 2019: Annex 4: Privacy Metrics – Consultation Draft, “Metrics for Privacy – A Starting Point“, Special Rapporteur on the right to privacy, Professor Joseph A. Cannataci.

11 July 2019: UN Human Rights Council 41/12 [A/HRC/RES/41/12]: The rights to freedom of peaceful assembly and of association

“Requests the Special Rapporteur to continue to report annually to the Human Rights Council and the General Assembly;”

5 August 2019: UN General Assembly 74/277 [A/74/277]: Right to privacy, Note by the Secretary-General (Part II. Health-related data)

26 September 2019: UN Human Rights Council Resolution 42/15 [A/HRC/RES/42/15]: The right to privacy in the digital age

In Human Rights Council Resolution 42/15, paragraph 10 requested the United Nations High Commissioner for Human Rights “to organize, before the forty-fourth session of the Human Rights Council, an expert seminar to discuss how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy [and] to prepare a thematic report on the issue”.

16 October 2019: UN Human Rights Council 40/63 [A/HRC/40/63]: Right to privacy: Report of the Special Rapporteur on the right to privacy

24 March 2020: UN Human Rights Council 43/52 [A/HRC/43/52]: Report of the Special Rapporteur on the right to privacy

23 April 2020: UN Human Rights Council 44/49 [A/HRC/44/49]: Disease pandemics and the freedom of opinion and expression, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

13 May 2020: UN Human Rights Council 44/50 [A/HRC/44/50]: Ten years protecting civic space worldwide, Report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association

27-28 May 2020: [Expert Seminar Report – Right to Privacy]: Report of the proceedings of the online expert seminar with the purpose of identifying how artificial intelligence, including profiling, automated decision-making and machine learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy

18 June 2020: UN Human Rights Council 44/57 [A/HRC/44/57]: Racial discrimination and emerging digital technologies: a human rights analysis, Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance

19 June 2020: UN Human Rights Council Resolution 43/4 [A/HRC/RES/43/4]: Freedom of opinion and expression: mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

24 June 2020: UN Human Rights Council 44/24 [A/HRC/44/24]: Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests, Report of the United Nations High Commissioner for Human Rights

20 July 2020: UN General Assembly 75/184 [A/75/184]: Rights to freedom of peaceful assembly and of association

27 July 2020: UN General Assembly 75/147 [A/75/147]: Right to privacy, Note by the Secretary-General

28 July 2020: UN General Assembly 75/261 [A/75/261]: Promotion and protection of the right to freedom of opinion and expression, Note by the Secretary-General

28 August 2020: UN General Assembly 75/329 [A/75/329]: Contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Note by the Secretariat

16 December 2020: UN General Assembly Resolution 75/176 [A/RES/75/176]: The right to privacy in the digital age

20 January 2021: UN Human Rights Council A/HRC/46/37/Add.4 [A/HRC/46/37/Add.4]: Visit to the United States of America Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci

“The Special Rapporteur on the right to privacy, Joseph A. Cannataci, carried out an official visit to the United States of America between 17 and 28 June 2017. While praising several strengths of the United States system, the Special Rapporteur observed the risks resulting from fragmentation caused by organic growth and the misplaced confidence that certain conventions would be respected by the Executive. He recommends a gradual overhaul of privacy law, with a special focus on simplification, and an increase in both safeguards and remedies. He especially recommends that United States law should be reformed further to entrench the powers of existing and new oversight authorities while bringing safeguards and remedies for foreign intelligence up to the same standard as for domestic intelligence…

The present report was finalized in autumn 2020, after evaluating the preliminary results of the country visit in meetings held during the visit, which took place from 17 to 28 June 2017, and cross-checking them with follow-up research and developments to date. The benchmarks used in the present report include the privacy metrics document released by the Special Rapporteur.”

17 February 2021: UN Human Rights Council 46/37/Add.8 [A/HRC/46/37/Add.8]: Report of the Special Rapporteur on the right to privacy on his visit to the United States of America, Comments by the State

U.S. comments on the framework of the Special Rapporteur’s analysis:
The United States notes that, throughout his report, the Special Rapporteur (UNSRP) assumes that “necessity and proportionality” and related European Union (EU) law data protection standards reflect current international law. In the view of the United States, this assumption is incorrect. Instead, the applicable international human rights law for evaluating U.S. privacy practices is the International Covenant on Civil and Political Rights (ICCPR). Article 17 of that instrument provides that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence.” This provision does not impose a requirement of proportionality or necessity on a State Party’s interference with privacy; instead, it imposes an obligation to refrain from arbitrary or unlawful interference with privacy. That is the obligation that the United States implements through its domestic legal framework. While certain elements of U.S. domestic law may use the words “necessary” or “proportionate” in relation to privacy, the relevant inquiry here is how the United States implements its obligations under Article 17 of the ICCPR.

23 March 2021: UN Human Rights Council Resolution 46/16 [A/HRC/RES/46/16]: Mandate of the Special Rapporteur on the right to privacy

“Recognizing the increasing impact of new and emerging technologies, such as those developed in the fields of surveillance, artificial intelligence, automated decision-making and machine-learning, and of profiling, tracking and biometrics, including facial recognition, without proper safeguards, on the enjoyment of the right to privacy and other human rights,”

18 May 2021: UN Human Rights Council 47/61 [A/HRC/47/61]: The right to privacy in the digital age, Note by the Secretariat

“In its resolution 42/15 on the right to privacy in the digital age, the Human Rights Council requested the United Nations High Commissioner for Human Rights to prepare a thematic report on how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy, and to submit it to the Council at its forty-fifth session.”

UN Digital Library Search: “Right to Privacy in the Digital Age”

UN Digital Library Search: “Right to Privacy”

Timeline of ICT and Internet governance developments [Really detailed timeline and history]

List of UN Special Rapporteurs

Video: History of the UN Human Rights Council

28 January 1981: Details of Treaty No.108: Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data

31 December 2020: Guide on Article 8 of the European Convention on Human Rights: Right to respect for private and family life, home and correspondence

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to The Voices of VR Podcast. So the Electronic Frontier Foundation is a nonprofit that's been looking at different civil liberties and digital rights online for the last 30 years, since 1990. And there was recently a RightsCon conference that brings together human rights activists and lawyers to be able to look at the intersection between digital technologies and human rights. And EFF organized a whole session called, As AR and VR Becomes a Reality, It Needs a Human Rights Framework. So they invited me to help introduce virtual reality to the community. And then we had different breakout sessions talking about different topics. And so I wanted to get together with the organizers from the EFF the day after and do a little bit of a debriefing, but also look at this issue of tech privacy, civil liberties, and digital rights from the lens of law, policy, technology, and culture. And there were different representatives from each of those different aspects that are able to cover each of those different angles. So that's what we're covering on today's episode of the Voices of VR Podcast. So this interview with Kurt, Katitza, Jon, and Rory happened on Friday, June 11th, 2021. So with that, let's go ahead and dive right in.

[00:01:17.548] Kurt Opsahl: Hey, everybody, I'm Kurt Opsahl. I'm the Deputy Executive Director and General Counsel with the Electronic Frontier Foundation, where all of us work. And we've been trying to fight for civil liberties and digital rights online. EFF has been around for, actually, just past our 30th year. So as the internet has evolved and the web developed, we've been there trying to look out for the user and make sure that as we go into the future, we're making a future that we would want to live in. And part of that future is going to be VR and AR technologies. So we've been starting to look at how does a civil rights and civil liberties framework work, how do human rights work, in the VR space?

[00:01:54.287] Katitza Rodriguez: Hola, my name is Katitza Rodriguez. I am Policy Director for Global Privacy for EFF. We are currently working on a lot of privacy and surveillance issues, but in particular, we are digging into the privacy aspects of AR/VR. Thank you so much for the invitation.

[00:02:12.975] Jon Callas: I'm Jon Callas. I'm Director of Technology Projects at EFF. My group does all of the products that we produce, like Privacy Badger and Certbot and the Atlas of Surveillance, and we also provide technologist support to the lawyers and activists.

[00:02:31.807] Rory Mir: And I'm Rory Mir. I use they/them pronouns, and I'm the grassroots advocacy organizer at the EFF, part of the organizing team. So I'm lucky enough to meet up with different groups that are involved in local organizing across the US. And my interest in VR was really inspired by some of those groups last summer, at the beginning of the pandemic, starting to hold their local organizing events in VR. And yeah, that's where my road started with this topic.

[00:02:58.865] Kent Bye: Okay, yeah, great. I know I was at the VR Privacy Summit back in November of 2018. I ran into Dave Maass from the EFF, and he was really a VR enthusiast and wanted to get more people from EFF involved. But at that point in 2018, it was still very nascent. And I think within the last year or so, there's been a little bit more commercial momentum. And I'm really glad to see that there's been a number of different articles that EFF has been starting to publish about virtual reality and privacy, starting after Facebook announced that they're going to be consolidating all the user accounts into one Facebook user account. The EFF, about a week later, published the first article on VR and privacy, and then Project Aria, looking at all the implications there, and then starting to reflect upon bystanders and privacy in an end-of-the-year article, and then recently avatars. And just this past week, there was RightsCon, which was starting to look at a human rights framework for looking at privacy. And this has also crossed my radar in terms of Rafael Yuste and the NeuroRights Initiative trying to bring forth a human rights approach, because privacy is a little bit of a mess within the United States, and we can sort of dig into why that is. But I'm very curious to hear the entry point of this journey, because there's both the EFF talking about civil liberties, and there's this human rights approach, which I think is emerging and interesting, and maybe it's putting pressure on US law. But maybe you could jump in and contextualize this a little bit in terms of seeing that this has crossed over to the point where the EFF has now decided to start to talk about some of these different issues of where these immersive technologies are going.

[00:04:30.578] Kurt Opsahl: Yes, I mean, I think we've long been interested in applying human rights online. And I mean, human rights is a great way of phrasing things to look at a wide swath of the rights that we care about, expression, privacy, autonomy. And so applying this to the VR space seemed like a natural fit, especially for a conference like RightsCon, which is about human rights and about bringing together a lot of people who are human rights defenders from around the globe. I think for a lot of them, they may not have had much experience in VR. We had a fair number who had tried it. We asked at the beginning of the session who had done so. So in part, this was introducing the topic or having people think about it, and in part trying to just get the conversation started so we can start applying these longstanding principles to a new space.

[00:05:15.412] Katitza Rodriguez: Yeah, so my work, as I was mentioning, is focused on global privacy issues. So as part of our work, we have done a comparative analysis of many constitutions around the world. And privacy is a fundamental right in many of those constitutions. Also, the United Nations, a few years ago, issued a resolution about the right to privacy in the digital age, the first ever General Assembly resolution in the area. So for us, what we do is we already have existing principles that are embedded in the constitutions, in decisions of the Human Rights Council, in the UN system. And what we are trying to do is to make sure we explain how to apply those existing principles in this new context. So we are not trying to rewrite the law, and we're not trying to get something adopted; we're just trying to explain how you apply these principles in this context. As we get deeper and deeper into the research, we'll know whether there's something more that needs to be done or not. And that's more or less what we do. And yes, the concept of privacy in many countries is different. There are, just as an example, more than 100 countries with data protection laws, laws that rule the collection, use, and disclosure of personal data. The United States is not one of those. So when you go to a conference like RightsCon, you will see a lot of lawyers, you know, who will defend their constitutions because they have data protections, either in the constitution itself, or because the judges, the constitutional tribunals, have acknowledged that right.

[00:07:01.167] Rory Mir: Yeah, and I just wanted to quickly add, you mentioned Dave Maass, and EFF's first foray into VR was a project called Spot the Surveillance, which folks can check out at EFF.org slash spot. And yeah, I think the kind of initial approach was seeing it as an educational tool, that you can be on a street corner virtually and see different surveillance technology and learn more about it in that context. And as someone who has personally run a workshop with that experience, it's a really cool tool, especially because at that time VR headsets were few and far between outside of Google Cardboard. So it's really an interesting switch, I think, within the organization, that VR is not just an expensive plaything. It is becoming a more mainstream tool of communication, and it's implicating all these other issues at the EFF that we care about, from platform censorship to free expression and police surveillance as well.

[00:07:54.029] Katitza Rodriguez: And also internationally, we did a workshop where we brought Spot the Surveillance to Brazil to do a demo. It still is hard to get the headsets, you know, and bring them to other countries, but it was a fun experience.

[00:08:10.243] Jon Callas: And for me, I was doing VR things earlier in my career. I'm known mostly for working on encryption, but I was working on what we were calling at the time social virtual reality, which kind of diverged into harder VR and social media. And at that time when I was working on it, there was no SSL. And so I started getting into encryption because people said, I don't want to do this when just anybody could look at what I'm doing. And then I took a side detour from VR into encryption because I was interested in security and privacy, and I'm now coming back into it.

[00:08:54.500] Kent Bye: Well, I've been doing a lot of interviews about privacy and these different issues. Everyone from Helen Nissenbaum, who is the theorist of the contextual integrity theory of privacy, who I did an interview with this past week. There was the NeuroRights Initiative at Columbia University; they did a whole conference on non-invasive neural interfaces and ethical considerations, co-sponsored by Facebook Reality Labs, and I did a chat with Rafael Yuste about neurorights. And previously I did an interview with the director of neuromotor interfaces, Thomas Reardon, talking about EMG and being able to isolate down to individual motor neurons; OpenBCI, talking about what's happening there; and Ellysse Dick from the ITIF. And so there's all these different perspectives, and I'd classify them into three buckets, at least that I see. One is that Dr. Anita Allen takes very much a human rights approach, saying that we should treat privacy like an organ that we shouldn't be able to buy and sell and trade. There's the libertarian approach, which is that our privacy is just data associated with us that should be treated like a property right that we can buy, sell, or trade. And I think that's probably the one that we have right now, with these adhesion contracts in terms of service and privacy policies, where you're kind of exchanging it as a good that you own for other services. And then contextual integrity theory is kind of in the middle. Nissenbaum is really focusing on the appropriate flow of information based upon the context. And it's all at that point about defining what's appropriate, what's not appropriate, what's the context, and whether those need to be put into some sort of laws. But she's a little bit more skeptical about the human rights approach. And what she posed to me as a provocation was: well, if it is a human right, then how do you assert that right? Do you assert that right as an individual? Do you have to assert that right through law? I guess that's the question I put out there. I'm curious to hear your take on whether any one of these philosophies of privacy really makes sense in terms of how to approach privacy. And then also, if it is a right, how do you assert it?

[00:10:42.270] Jon Callas: I think that they both have a really good point and that they both have problems. Nissenbaum accurately says: if it's a right, how do we assert it? But if you say it's like an organ, you can't sell it, then that is something of an answer to her question. And if it is something that you can sell, on the libertarian side, this brings up problems of: so what's it worth? And that is difficult, because you immediately get into things where one person's privacy might be worth more than somebody else's privacy, and the fact that information about someone is worth more the more you have of it. If you take, for example, Metcalfe's Law, which is that there are things that are worth more with, like, the square of the number of things that you have: if you know one thing about me, it might be worth a penny, but if you know 10 things about me, it's worth a dollar. And that has a different set of problems, because now all of a sudden privacy becomes so expensive that people can't collect any information at all very quickly.
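
Jon's penny-to-dollar jump is exactly what a quadratic, Metcalfe-style scaling predicts. As a rough sketch (the quadratic exponent is an illustrative assumption, not a measured one):

```latex
V(n) \propto n^2
\quad\Longrightarrow\quad
\frac{V(10)}{V(1)} = \frac{10^2}{1^2} = 100,
\qquad \$0.01 \times 100 = \$1.00
```

That superlinear growth is why an aggregated profile is worth far more than the sum of its individual data points, and why pricing privacy per-datum breaks down so quickly.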

[00:11:57.836] Kurt Opsahl: I'd also want to add that there's another way of looking at privacy and enforcement, which is privacy against the government, or privacy against a legal process, where, you know, you're not buying and selling it, but it is still being transferred against your wishes. And this is definitely an area in which human rights can play a role, and the law can play a role. Human rights are also, as Katitza was saying, reflected in constitutions. So you can challenge government actions that would violate those rights, and take that to court.

[00:12:26.490] Katitza Rodriguez: Yeah, I haven't read Helen in a long time, but I remember she questioned the concept of consent: that we shouldn't just control the flow of personal data, we should provide for the appropriate flow of the information itself. For me, privacy is a human right. Now, from an international human rights law perspective, there is no absolute right. Rights have limitations. And so there are permissible limitations, and there are ways where you can limit the right. Those rights can get in tension with other rights, like freedom of expression, and then you have to do a balancing test to measure both. So I don't see them as absolute rights. Well, not all of them: the right to life doesn't have an exception, but freedom of expression and privacy do have limits, and you have to balance them. That's my answer to that question.

[00:13:22.056] Kent Bye: Yeah, well, I guess the thing that I struggle with is the path forward. You know, as people are trying to think about this as an issue, there's been debate around a U.S. federal privacy law. However, the last assessment I saw, the probability of it actually passing was somewhere between zero and 10 percent, based upon all the other things that are happening with the government relative to these big tech companies, with antitrust maybe superseding some of these different privacy issues right now. We had a lot of interest when it came to Cambridge Analytica, but there's other harms that are happening at the scale of big tech and its relationships to society, such that privacy isn't at the front of the legislative agenda, let's just say. But with these XR technologies, virtual and augmented reality, in terms of the potential threat, it's certainly a huge one. Now, I guess this is where you get into the debate of: do you try to legislate to prevent harms, versus just let it unfold and see what the harms are, and then try to do something later after you get a better sense of what is happening at scale? We're still at a small scale with this technology, but Thomas Metzinger is a philosopher who's talked about this technology pacing gap, which is that the evolution of the technology is moving so fast that we don't have our conceptual frames to be able to really understand the technology and what it is, but also its relationship to society and how to act at a collective scale to be able to rein in some of these different things. So I guess I'm struggling with, like, what's the path forward here in terms of where should we be putting our attention and focus? Should it be at the UN and the generalized human rights principle level? Or are there legislative dimensions? Are there technological architectures? Or is it all of the above?

[00:14:56.415] Katitza Rodriguez: I think it's all of the above. On one hand, laws are important to be able to enforce the rights. One of the things that I feel very strongly about is the right to control your own information. The data protection laws provide a framework to guide companies on how this data should flow. If you disclose your data for one purpose, it shouldn't be used for another purpose. So if I share my data to use a specific experience, I don't want it to be used for another purpose without my informed consent. We can have discussions about what informed consent is and when it's done the right way. But it's not about privacy harms; I shouldn't have to prove a harm to be able to exercise my rights. First, because privacy violations are invisible. So for instance, say I'm going to a job interview, and the potential employer has a whole file about, you know, my political affiliations, or whether I have some disability or something. He will just not give me the job. But I wouldn't know that it's because there was a breach of privacy. Of course, right now we have laws that protect us against discrimination. But the point that I'm trying to make is that violations of privacy are sometimes invisible. And so in that way, if you have to prove harm, you won't be able to exercise your rights. That's why I strongly disagree with the harm approach.

[00:16:33.451] Kurt Opsahl: So I just want to add to that: all of the above, absolutely agree. It's a multifaceted approach, and I think that's a little bit of what EFF tries to do. You know, we have lawyers who try to work through the legal system and court cases to set precedent and enforce rights. We have technologists who build technologies that will protect rights. We have activists who try to get support through and develop the legislative process. So all of these things need to work in conjunction. In fact, they're mutually supporting, in that if you have a law but then the government or the private parties don't obey the law, then you could have technology as a backstop to that. You can have policy to help understand and create those laws, and also provide a backstop for the laws to make it less likely that someone would break them. So all these things need to work in conjunction. I would also point out that we are not starting from scratch. The GDPR has a lot to say about data, and it applies to VR as much as anything else. In the United States, there are a small number of state-level privacy laws, in California, Virginia, and most recently Colorado, that have something to say about privacy. And there are also privacy rights out there. I mean, we don't have a data protection statute like GDPR, but there are some privacy rights that are created by federal statute. For example, the strongest privacy law on the books in the United States is the Video Privacy Protection Act. You have a tremendous amount of protection for what videos you watch. It was written in the era of videotapes being rented at Blockbuster, so it has had some challenges being reinterpreted: how does the language fit with new technologies? But it does speak about audiovisual things, which the VR context has a lot of. So some of the work is actually taking things that already exist and helping courts, legislatures, and companies trying to make VR understand how these existing laws apply to the new products and technologies.

[00:18:33.788] Rory Mir: I also want to jump in real quick, as the grassroots person, on community defense against these harms. I think it really speaks to how a lot of our issues at the EFF intersect, that open source initiatives, for example, do help user autonomy and limit what data is shared and how data is handled. And ultimately there is this collective countermeasure to push back on these potential harms, which is just as important, and it can lead to wider, holistic changes down the line, as organizers push back on things like police surveillance and can get things like face recognition banned. So I think folks banding together to create the way they think the technology should be can be just as effective as these kinds of prescriptive or reactive approaches.

[00:19:21.013] Kurt Opsahl: I mean, just to add a little bit to what Rory just said, one of the things that the EFF works on is CCOPS ordinances, which give local control over what police surveillance is used in their community. And so if a police department said, we want to start using AR glasses to look up people's faces with facial recognition, or license plates through ALPR, and walk around the streets with these glasses, they would have to go through the local city council and have a period in which the community could come there and give comments to the council. And I think that can actually be a very powerful check on some of the possible misuses of the technology in a sort of Orwellian sense.

[00:19:59.968] Jon Callas: And from the tech end, tech is extraordinarily powerful, because when we are able to build something, it's just there. For example, we are in an encrypted chat room. The discussion of whether or not we should be able to talk privately with no one else listening in is irrelevant: we are talking privately. And similarly, some of the things that we do: there are lots of web trackers out there, so we make Privacy Badger, and Privacy Badger blocks these trackers. It's not a matter of what happens if this were to occur; no, we're actually going out and doing this. So in the case where we can apply tech with activism, with legal things, we get to build something that changes the way that things are on the ground. But it does have its limits. I mean, you know, one of the reasons why I left building things in industry to be at EFF is that I believe that we are at a point now where policy is more powerful than tech. And so I want to be part of policy rather than just going around and building things, but I can still go and build things that drive policy.
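
To make “we're actually going out and doing this” concrete, here is a minimal sketch of the cross-site heuristic Privacy Badger documents: a third-party domain observed tracking you on three or more different sites gets blocked. The class and method names here are hypothetical, for illustration only; this is not EFF's actual implementation.

```python
from collections import defaultdict

# Privacy Badger's documented rule of thumb: block a third party once it
# appears to track you across three or more different sites.
BLOCK_THRESHOLD = 3

class TrackerHeuristic:
    """Toy model of heuristic tracker blocking (illustrative, not EFF's code)."""

    def __init__(self):
        # third-party domain -> set of first-party sites where it set tracking state
        self.sightings = defaultdict(set)

    def observe(self, first_party: str, third_party: str, sets_cookie: bool) -> None:
        # Only count requests that actually carry tracking state (e.g. cookies).
        if sets_cookie and third_party != first_party:
            self.sightings[third_party].add(first_party)

    def should_block(self, third_party: str) -> bool:
        # Block once the same third party has been seen tracking on enough sites.
        return len(self.sightings[third_party]) >= BLOCK_THRESHOLD

h = TrackerHeuristic()
for site in ("news.example", "shop.example", "blog.example"):
    h.observe(site, "ads.tracker.example", sets_cookie=True)
print(h.should_block("ads.tracker.example"))  # True
```

The design point is that the protection is self-executing: no consent dialog or court order is needed once the heuristic fires in your own browser.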

[00:21:21.318] Katitza Rodriguez: And from the international level, or at the global level, we don't need another treaty on data protection. There is already a treaty on data protection: Convention 108 from the Council of Europe. It's like a type of GDPR, but global. States can ratify the treaty. The United States hasn't signed it, of course. So there is no need for new principles on data protection; they already exist. What we need is to enforce them. There are more than 100 countries with these laws, and the enforcement is what is the problem for many groups.

[00:21:54.770] Kent Bye: So one of the things that I guess comes up when I'm looking at some of this is this biometric data or physiological data, from which you can infer different aspects of someone's personality, character, likes, dislikes, preferences. And a lot of the privacy law, at least that I see so far, especially in terms of biometrics, is very connected to identity. And so unless it's personally identifiable, it's sort of de-identified and it has a whole other class. But there's all this intimate data that I think is going to be made available. Brittan Heller took a look at this and, as a human rights lawyer, said this actually needs to be defined as biometric psychography, a whole new class of data that is going to be coming out of this. And I'd say there's a paradigm shift happening, away from just static identity into more contextually aware information: contextually aware AI being aware of everything that's happening in your virtual environment, either in AR or VR, with egocentric data capture, just capturing everything, but trying to correlate it with what is happening within your body, your physiological responses: galvanic skin response, eye tracking data, EMG, EOG, skin temperature. All this stuff is going to be fused together. It's questionable as to whether or not we'll be able to mathematically model someone's consciousness, but I think there will be models of that information that can be good enough to take actions on. As long as it's making actions that are close enough, then whether it's actually true or not, a model that is perceived to be good enough can be dangerous in terms of our risks to mental privacy. So that's what I see: we have this whole new era where even if we use approaches like homomorphic encryption or federated learning, which are, again, trying to keep someone's individual identity from being revealed amongst a cluster of information, or trying to do processing on protected data, as long as someone has access to the raw data and is able to do some sort of analysis, then I feel like there's a certain amount of this information that has to be used for the VR to work. And because of that, you have all these risks of all these inferences that are going to be made, and there doesn't seem to be, at least as far as I can tell, a good legal framework to be able to counter some of these things that are happening with neurotechnologies and XR, with augmented and virtual reality.

[00:24:08.183] Jon Callas: Yeah, I mean, from my viewpoint, and I'll take it from a tech plus policy thing: there's a lot of data that VR systems are going to be collecting and using for very good purposes. I mean, for example, if you have eye tracking, you can do things like make the display much better, because you can draw in detail the thing that is in someone's foveal vision, and blur, or just not draw as well, the things that are outside of there. But that's also really close to what your thoughts are. And I would say that it is actually evil to move that data off of the display. If that display is a thing that is my display, it should be defending my interests. The policy aspect should be enforced by the hardware, and you just can't get that data. That data is not pushed off, because it's very sensitive to me as a person. And thus the tech of the VR should be saying: you just can't get that.
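
As a sketch of the hardware-enforced policy Jon is describing, assuming a hypothetical device API (no real headset SDK is referenced here): the pipeline consumes raw gaze samples on device and exposes only the derived foveal region the renderer needs, with deliberately no accessor for the raw data.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GazeSample:
    x: float  # normalized display coordinates in [0, 1]
    y: float

class OnDeviceGazePipeline:
    """Hypothetical sketch: raw gaze stays on device; only the derived
    foveal region (what the renderer needs) is ever exposed."""

    def __init__(self, foveal_radius: float = 0.1):
        self._latest: Optional[GazeSample] = None  # private, never serialized
        self._foveal_radius = foveal_radius

    def ingest(self, sample: GazeSample) -> None:
        # Raw samples are consumed and overwritten in place; there is
        # deliberately no method that returns them.
        self._latest = sample

    def foveal_region(self) -> Tuple[float, float, float]:
        # The one output: where to render at full detail (center + radius).
        s = self._latest or GazeSample(0.5, 0.5)
        return (s.x, s.y, self._foveal_radius)

pipeline = OnDeviceGazePipeline()
pipeline.ingest(GazeSample(0.62, 0.40))
print(pipeline.foveal_region())  # (0.62, 0.4, 0.1): all the compositor gets
```

The policy point is the shape of the interface: whether enforcement comes from hardware, firmware, or law, the display works for you when the sensitive signal never has an exit path.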

[00:25:21.045] Kurt Opsahl: One of the things that technology can really help with is to have it work for you, and not against you, as an overall policy concept: to try to create a world in which, when you have personal technology like an AR or VR device, the information that it is using is used for your ends and your benefit, and not working against you. And a lot of that can be handled by having it be on device, or if it is off device, having strict rules and security parameters enforced by policy and by hardware or encryption, to limit how it can be used outside of what is necessary to help you. The other point I wanted to raise, with the inferences, is that it's actually scary in multiple directions. One is if it actually is very good at figuring out what's on your mind, by looking at your eye tracking and your galvanic response and so on, to figure out what's in your head. But the other thing is: what if it's bad at that? And then people are still treating that as valuable information and making judgments based on the technology. Because there is a lot of technology that purports to be biometrically based inference and such that is pseudoscience. Lie detectors for decades have been a variety of this, with not very good scientific backing. We've seen with facial recognition that they often are making mistakes. There's actually a lot of bias built into the results, you know, looking at different response rates. People have been arrested because they've been mistakenly identified by facial recognition, where someone said, oh, this is a technology, I should rely upon it. And the danger from all of this information being used is not just that they successfully figure out what's in your head and manipulate you with that, but also that they unsuccessfully do so and then try and manipulate you or do something else, you know, based on that information.

[00:27:09.147] Kent Bye: Yeah. So I guess one of the concerns that I have is this whole notice and consent model, and the ways in which these adhesion contracts that we're signing sign over all these rights, when it's not really reasonable for anybody to read or understand them. Is there an alternative to this? Like, how do we get out of this existing adhesion contract, terms of service, privacy policy system, where it feels like a form of digital colonization? You kind of trick people to sign the terms of service, and then you kind of seize ownership of all this data. And at that point, you have no control or audit trail to be able to know where your data is going or how it's being used. And with the contextual integrity theory of privacy, you know, the appropriate flow of information: well, there's no way to really know whether it's appropriate or not once it's out of your control. And so what's a framework to actually own our own data and to get out of this terms of service adhesion contract nightmare that we're in?

[00:27:58.881] Kurt Opsahl: First of all, on owning your data: part of privacy is a right to be left alone, so the ownership model of data has some problems, which we were talking about just a bit ago. But on the general thing, I think one of the concepts that's really useful here is not just raw consent, but informed consent, voluntary consent. If you have a contract of adhesion, and at the end of it someone says okay, and they didn't know what they were saying okay to, then that's not informed consent. And we've seen some of the data protection laws going down and trying to define what makes something informed consent, what is going to be good enough. Also having things like consents have to be separable: you can't say you have to consent to all of these things if they are in fact separable. There are ways of trying to address that, but ultimately, yeah, it's very important that after whatever transaction you have, the person that has agreed to something knows what they have agreed to, and we've lost that. We have taken a contractual model that was built, at least in the legal system, on the notion of two individuals meeting, perhaps with lawyers advising them, writing down the terms of their deal. Sometimes it's a wet signature, and it's a very formal thing, and a rare thing. Someone 100 years ago, 200 years ago, might never write a contract in their entire lives, or have a written contract, at least. They may have handshake deals and things like that, which are semi-contracts. But now you are probably agreeing to several hundred contracts of adhesion a day, or at least under the if-you-use-this-site-you-agree model. And I kind of wish I remembered what the numbers were. Someone did a study of how long it would take to read all the terms and conditions of the number of websites people typically visit in a day, and it turned out to be longer than a day. You literally couldn't read all of the contracts and continue to behave like most people behave. So it is a totally broken system. We need to figure out how to address that, whether it's about AR and VR issues or, for that matter, just visiting a website.

[00:30:05.497] Katitza Rodriguez: Yeah, only to share a little bit of the model outside the United States. So we have notice and consent in the United States, but in the European Union, or in Latin America, or many other countries in Asia or even Africa, when they have data protection laws, the terms of service have to comply with the data protection law. So you have consent, but not only consent. You also have to have purpose limitation, meaning they're clear that they're using the data for this purpose and not another one. It also, like in the GDPR, explains what consent means: it has to be not only informed, but also freely given. If I'm forced to sign something and I cannot withdraw my consent, then that's not freely given. And right now there is a lot of litigation in the European Union trying to enforce these terms of service, where you are tricked to click. If there are dark patterns, you don't only have data protection laws; in many other countries, you have consumer laws that also protect you against dark patterns. And so it's not just a notice-and-consent model. It's that the terms of service have to comply with these principles, integrity and security among others.

[00:31:21.280] Rory Mir: A more conceptual point, and I'll let the lawyers jump in if I say anything too off. But there's also the issue that with these services, it's all-or-nothing consent. You either consent to everything the service wants to do with your data, or you can't use it at all and you're locked out. So I think having it be more modular, and also the fewer uses of data, the less there is to consent to, kind of naturally shrinks the contract. And I think another issue is interoperability and this kind of user freedom to go the way of least data collection. To use Facebook as an example: if I'm not comfortable with how Facebook uses my data, having a choice to use another platform which does respect my data, but still be able to contact people, and not weaponize the network effect to enforce these kinds of absurd contracts.

[00:32:10.382] Jon Callas: And from a privacy engineering standpoint, one of the things that the people that I work with talk about is transparency, consent, and control. And you can hear that in everything that Kurt and Katitza and Rory have been saying. You have to know what you were consenting to, and that's transparency. You have to affirmatively say, yes, I want to do that. And then you should be able to say, hey, wait a minute, and that's control. And you can't always have perfect transparency, consent, or control. But the more of those that you have, the better things are. Because it is indeed true that if I let you know where I am, I might have control to say, I don't want you to know where I am anymore, but it's very hard for me to revoke that you knew where I was. And so this is somewhat idealized and needs to be backed up in policy and other things. But this is the idea: you build your systems so that the app, the device itself, enforces these human rights principles.
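
To make the separable-consent and transparency-consent-control ideas from the last few exchanges concrete, here is a minimal sketch in Python. It is not any real platform's API; the purpose names and the ConsentLedger class are hypothetical, just to show how deny-by-default, per-purpose grants, and revocation could look in code.

```python
# A sketch of "transparency, consent and control" with separable consent:
# every data use is a named purpose with a plain-language description
# (transparency), granted one at a time (consent), and revocable at any
# time (control). All names here are hypothetical.

from enum import Enum

class Purpose(Enum):
    FOVEATED_RENDERING = "use eye tracking to sharpen what you look at"
    SOCIAL_PRESENCE = "share your avatar's gaze direction with friends"
    VENDOR_ANALYTICS = "send aggregate usage statistics to the vendor"

class ConsentLedger:
    def __init__(self) -> None:
        self._granted: set = set()  # deny by default: nothing pre-consented

    def describe(self, purpose: Purpose) -> str:
        return purpose.value  # transparency: plain-language description

    def grant(self, purpose: Purpose) -> None:
        self._granted.add(purpose)  # consent: one purpose at a time, unbundled

    def revoke(self, purpose: Purpose) -> None:
        self._granted.discard(purpose)  # control: withdrawable at any time

    def allowed(self, purpose: Purpose) -> bool:
        return purpose in self._granted

ledger = ConsentLedger()
ledger.grant(Purpose.FOVEATED_RENDERING)             # opt in to one purpose...
assert not ledger.allowed(Purpose.VENDOR_ANALYTICS)  # ...others stay denied
ledger.revoke(Purpose.FOVEATED_RENDERING)            # consent can be withdrawn
```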

[00:33:23.958] Katitza Rodriguez: Yeah, and I think this is a really important point, Jon. I agree with what you're saying. I think there are many technical things to do. Data minimization: you don't need to collect all the data. And it's important that engineers, when they design their products, have this concept from the start. The data protection principles should guide the development process of a product. So, do I really need this data, and for what purposes? Sometimes it's not possible; they say that it is complicated. Well, then there has to be transparency, and more debate about the inferences and the type of processing that they are doing, and giving users more control over this data. So it's not only about the principles of the law, but also implementing those principles at the moment of the development and design of any product, any software, any device.
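
Data minimization as Katitza describes it can be enforced at design time by making every collected field declare the purpose it serves, so that "collect everything" is simply not expressible. A minimal sketch follows, assuming a dictionary-shaped telemetry sample; the field names and purposes are hypothetical.

```python
# A sketch of design-time data minimization: fields without a declared
# purpose are never stored. Field names and purposes are hypothetical.

RETAINED_FIELDS = {
    # field -> the purpose it serves; anything absent is simply dropped
    "ipd_mm": "render stereo correctly for this user's eyes",
    "play_area_bounds": "keep the user from walking into walls",
}

def minimize(raw_sample: dict) -> dict:
    """Keep only the fields that have a declared purpose."""
    return {k: v for k, v in raw_sample.items() if k in RETAINED_FIELDS}

sample = {
    "ipd_mm": 63.5,
    "play_area_bounds": [[0.0, 0.0], [2.0, 3.0]],
    "gaze_history": [(0.4, 0.6), (0.5, 0.5)],  # sensitive, no purpose: dropped
    "heart_rate_bpm": 88,                      # sensitive, no purpose: dropped
}
stored = minimize(sample)
assert "gaze_history" not in stored and "heart_rate_bpm" not in stored
```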

[00:34:14.985] Kent Bye: Yeah, and we talked very briefly earlier about the differences between, say, corporate surveillance and government surveillance, and how a lot of times privacy concerns will get bifurcated into focusing on one or the other. But I did want to bring up the third-party doctrine aspect of the Fourth Amendment, meaning that anytime you give any data to any third party and it's recorded on a server, it has no reasonable expectation of remaining private. According to the government, all of cyberspace is essentially public, with maybe some exceptions after the recent Carpenter case. And maybe it's not just a total blanket rule that anytime you give data to a third party, the government can get access. But more or less, I think a lot of the stuff that you do hand over is fairly easy for the government to get access to without a warrant. And I feel like that is what we're facing here in the United States, but also, you know, internationally, thinking about the types of harms that could happen when this type of intimate information gets into the hands of an abusive government. But I'm curious to get a little bit of what's happening with this fight that I know the EFF has been involved with in terms of the third-party doctrine, whether you see some sort of legislative pathway to something different, or if we're kind of stuck with this situation where, if anybody records something, then all of a sudden it's functionally available to the government if they really want it.

[00:35:22.423] Kurt Opsahl: You're absolutely right to mention Carpenter here, because I think that what we have seen is that the legal landscape is shifting, and shifting away from the third-party doctrine. Carpenter opened the door to questioning some of the roots of the third-party doctrine. And, to be quite frank, the roots of the third-party doctrine are about very different circumstances than we are facing today, and the court was recognizing that throughout the opinion. One of the early cases on the third-party doctrine involved one person who was phoning another to make an obscene, harassing phone call, and they looked at some information held by the phone company about that. And it has been tried to be used by the government to say, well, it is just a blanket rule: if it's with a third party, forget about it, you don't have to think about it. And the court essentially said, no, we're not. And the other case that is very relevant to this is called Kyllo, which was about using new technologies to look inside a house, in that case with thermal imaging. And the Supreme Court was not comfortable with allowing this notion of your private house to be sort of done away with by a new technology which allowed you to look inside through thermal imaging. So I think that if these constitutional principles are applied to VR, and I actually wrote up a post right along these lines, you actually do need a warrant if somebody wants to go inside your virtual house, wants to come and get inside a Fourth Amendment protected space. Now, we'll have to wait for a while, perhaps, to see a case that's actually on this point, but I think that these principles can and should apply to new forms of technology. And as the court was recognizing, we don't want to give up the core values that were part of the foundation of this country to a principle that overcomes the Constitution. Because if you have a third-party doctrine and everybody's information is constantly being held by a third party, then the Fourth Amendment becomes meaningless, and that's not what the framers of the Constitution intended, and I think that's not where we need to go as a society.

[00:37:27.191] Kent Bye: Yeah. I wish that I'd hear more statements from the companies about this issue. It feels like they're just not really doing much about it.

[00:37:34.616] Kurt Opsahl: I mean, I hope to see that as well, because one of the projects that we worked on at EFF for many years, starting maybe 12 years ago, was Who Has Your Back?, where we would give star ratings to companies for whether they had their users' backs. And one of the things that was a pressure point in the early framing of that was whether or not they would require a warrant before they gave up email, because there was an interesting question as to whether or not the Fourth Amendment applied to requiring a warrant to get email, with email being held by a third party. And there was a statute about it, the Electronic Communications Privacy Act, that was written in an era long before webmail and such. The notion was that if you used email in 1986, when they were crafting this, you would go to a site, download it to your computer, and then it would be on your computer. So there would be just a brief moment when it would be on the site. And it said you didn't need a warrant to get it after six months, because what the legislature was thinking is that if you hadn't collected your mail in six months, you'd basically abandoned this mail account. I mean, I think that shouldn't mean you abandon your Fourth Amendment rights, but that was the logic. And then came things like Gmail, where you keep your email for years, for your entire life, and this logic just didn't apply. And there was a circuit split, where some courts in the U.S. were saying you did need to get a warrant to get at emails and others were saying you did not. And the DOJ's strong position was that you didn't need a warrant. And so we put a lot of pressure on, and a lot of the companies eventually agreed and published in their privacy policies or their law enforcement response guides that they would not give up email without a warrant, which would at a minimum force the government to go to court to say that they needed to get it without a warrant, which is probably harder than actually getting a warrant. So it was a really strong protection that came about through us, but also through public pressure, other people trying to convince the companies, and the employees in the companies saying, why aren't we doing this? And so I would hope that for any companies who listen to this podcast, if you're listening right now, go talk to your legal and policy people and say that you think they should require a warrant before giving up private information that's held in the system.

[00:39:53.709] Jon Callas: A thing that I'll add to that is that when we talk about all of these things, because all rights and all virtues are in tension with each other, you also need to think about the world that we're in versus the world that we want to live in. And particularly with VR, we are constructing a world that we want to live in. So that means that the people who are building these really ought to be thinking about some of these issues, and then putting technology and their own policies in place, like declaring, I'm going to require a warrant, that say, I am building a world that I believe people will want to live in, and it will have these good characteristics that are better than the real world. And that will actually help VR as a technology. If VR can convincingly say, you have more rights in VR than in real life, then that will encourage people to do things in VR that they wouldn't have done otherwise.

[00:40:57.922] Kent Bye: I love that. Just as we start to wind down, I have a couple more questions. Because we just had this whole RightsCon session and everybody that's on this podcast was there, I gave a brief little introduction, just trying to set a context for the gathering to get people up to speed as to what VR technologies are doing, what's happening with all this physiological and biometric data. I was in the biometric inference and mental privacy session. There was a whole avatars and virtual agents session, law enforcement access to potential evidence and surveillance, and then finally a whole session on content moderation. I'd be curious to hear some takeaways from each of you in terms of that session and what you got out of that that may be feeding into the next steps of where this is going for either your own personal work or overall with what the EFF is doing.

[00:41:41.894] Katitza Rodriguez: Well, at EFF we are starting to dig into the technology. We have been researching topic by topic, and we hope to keep doing the research. The last article we published was on avatars, how to control your avatar, and we have been researching, you know, come back with a warrant for your virtual house or your virtual home. We hope to keep doing that. We hope also to keep building the community. I really enjoyed the RightsCon session. I loved that we met a lot of very interesting people, people with a strong background in human rights who had never heard about AR or VR, or had never worn a headset, together with people who have a lot of experience. There were a few with 30 years of experience in AR and VR, you know. But when you put them together, they just start to talk, and they want to keep chatting, but we ran out of time. So we hope we can bring this community together in the future.

[00:42:40.519] Kurt Opsahl: Yeah, I very much agree. I think it was great to get people in the community together. There was far more to talk about than we could accomplish in an hour, even though a half hour of that was four different groups, so, you know, collectively it was closer to three hours. But nevertheless, it was a good starting point in the conversation: for some, being introduced to the topic for the first time; others sharing what they already knew, different perspectives. We had people who came from a wide variety of human rights and technology backgrounds. But this is a part of it: as new technology comes about, you start to look at and apply some of these long-standing principles to try and find this better, brighter world that we might be able to make with AR and VR, trying to get people who know about some of the principles to know about the technology, think of ways that it comes together, and also bring new perspectives that can help inform how to look at interesting tech policy questions that come out of it. So I was very pleased with the session. It was a great group of people, and glad we did it. And I think there's more conversations to come.

[00:43:41.836] Katitza Rodriguez: Yeah, and I loved also that it was very diverse. We had people from Latin America, from Europe, from Asia, and the United States. And I enjoyed that debate among technologists and lawyers and activists. I think one takeaway from the meeting was there are so many interesting questions, and many are passionate about the technology. We really want to make it the world we want to live in, as Jon and Kurt have been saying. And I think there's a lot of passion and a lot of interest, and a lot of very technical and difficult questions to solve. So it's not easy technology; it's very complex, and it's dealing with very sensitive data. At the same time, you need the data to have an amazing experience, at least some of them. And so it makes it even more challenging. So from the EFF perspective, I hope we can keep digging into the topics as we go.

[00:44:41.270] Jon Callas: Yeah, for me, the best part of it was to be able to have the start of these conversations, find out what people were doing and to be pulled into other things. For example, I heard about a tech group that is doing VR ethics. And so I immediately sent an email and said, I'd like to join. And I hadn't known about that. So I am now getting involved with other policy things with that. So it was really good to find out about that.

[00:45:14.234] Katitza Rodriguez: The IEEE that can run.

[00:45:18.577] Rory Mir: And I just wanted to echo what Katitza mentioned: having this wide background of perspectives on the issue was really interesting. I was in the avatar group, and I think that was a really interesting one, because identity is at the center of a lot of these issues, and being able to hear from a wider group of people. I mean, I learned that the term avatar itself is problematic, being culturally appropriative of Hinduism. So kind of brainstorming new terms for that was an interesting aspect. But also, myself, as someone who's non-binary, being impacted by this kind of static nature of these virtual representations and virtual personas. So yeah, I think more events, hearing from more backgrounds and more diversity, is really essential moving forward.

[00:46:07.382] Kent Bye: And so the final question that I usually like to ask people is about the ultimate potential of where all this is going. And we mentioned this idea that we'd have more human rights, which is very provocative in terms of being able to actually have more freedoms. I did an interview with Sarah Downey a while ago, where she said VR is either going to be the world's worst dystopic surveillance technology, or it's going to be one of the last bastions of privacy, given all the stuff that's happening with facial recognition and the biometrics of our bodies that we can't change. VR may be a safe place for people to go to and have anonymity, or to get outside of your existing identities. But also, at RightsCon, I'd say that there was a sense that a lot of the human rights work is looking at violations, at how things can go horribly wrong on the more dystopic side: if we do nothing, this is where we could be heading. So for this last question, you can pick which one you're more temperamentally drawn towards, the more dystopic vision that we need to avoid or the more utopic vision that we would love to live into. But what do you see as kind of the ultimate, or the worst, potential of this technology?

[00:47:10.658] Kurt Opsahl: Well, I'll give that a shot. I mean, I try to be an optimist about these things and try to get to, you know, a brighter future. But I think it's not that we are going to get there if we don't do any work, right? We have to do the work. We have to do the things to keep us away from the dystopias, whether the dystopia is made by those with malicious intent or, more commonly, by people just not thinking about the consequences, having more of what can we do instead of what should we do. And I think this reflects a little bit, if you look back on the last, say, 20 years or more of the internet. When you look at the 90s, when the internet was just coming into the major public consciousness, there were a lot of utopic visions of it, and I really appreciated those. There were also warnings. If you were reading any cyberpunk fiction at the time, you were seeing lots of warnings about what might come. Unfortunately, some of those warnings seemed to come about. It makes you wonder if some people that read those same novels were like, I should do that, as opposed to, never let that happen. But one of the things we're trying to do, I think, with AR and VR is get in there while it's still a nascent technology, with that experience that we've had of other technologies becoming widespread and the effects they've had on people and society, to try and embed some of these better notions in as early as possible and have them go into the hardware. Just an example, for things like privacy and security on the internet: for a lot of the early days of it, there was no security, things were very open, and trying to put security and privacy on top of email has been an extreme challenge because it was never engineered with those sorts of things in mind. If we engineer VR technologies from the beginning to be thinking about these concerns, then we absolutely can have an ability to take advantage of the promises that they offer, like being able to interact in an immersive environment with people thousands of miles away, representing yourself as you would like to represent, doing things like flying, being able to exceed the bounds of reality, without giving up the privacy and security that you want to have. This would be a dreamy world. But if we just say, let people do whatever comes to mind and try out disruptive technologies that are trying to maximize profits through data abuse, well, then we're going to head more towards the dystopia. So that's why we have to look at this, be vigilant, be proactive, try and get lots of support, work on the legal, the policy, and the technical sides to make it so this technology grows in a manner heading towards the light.

[00:49:56.248] Jon Callas: So I'll say, why not both? There's an aphorism that an optimist is never pleasantly surprised, and I'm usually pleasantly surprised. So this is sort of the security person's fate: things often aren't as bad as you were afraid they were going to be. As a quick anecdote, while we were doing development on the first VR system that I worked on, while we were in alpha test, Snow Crash came out. One of the things about Snow Crash is that human malware gets transmitted through VR. The security part of me went, this is utterly ridiculous, why did they permit this to happen? And yet, the mechanisms where you could hand someone something that was a virtual object and have it work conveniently, I had to admit, were really nice. And so I went back and redesigned things based upon Snow Crash, where it had slightly less explicit security in order to be convenient. We learned from that novel even while we were in alpha test.

[00:51:07.120] Rory Mir: I'll throw out there, I'm super optimistic about the hackers going forward. I think hacker spaces and the creators always are impressive in what they can do with this technology or any new technology. What I'm worried about is the major players sculpting the landscape to serve themselves. That's why it's so important for folks to shine more light on these issues and to start getting mobilized now while we can still have an early impact and preserve our privacy moving forward.

[00:51:36.886] Katitza Rodriguez: I really would like to see ways in which I have most of the data on device. I think that one of my biggest concerns is law enforcement or intelligence agencies getting hold of how I think. You know, how I think, my patterns, or how I would react to things. All those inferences that can come from the combination of our body data. I think that we need a space like you have with privileged conversations between, you know, a lawyer and the client. I think that we need to have that protected in the space of AR and VR, so we can enjoy the experiences and get immersed in that technology without having to worry that, because the data exists, the government can access it. And I think it's also important, in the design of the product, to make it difficult for law enforcement to access. If you have data in the cloud, it's easier to access the data. If you have it on device, you have to go to the device to access the data. If you rewrite it every time, then you don't have that data. And so I hope that those who have the power and the capacity to design the hardware and the applications have these considerations in mind. Because the technology could become very powerful for law enforcement, for authorities. And so while I'm very positive and excited about the technology, I also have this concern about the potential dangers of a dystopian future.
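
Katitza's "if you rewrite it every time, then you don't have that data" is, in engineering terms, an ephemeral buffer: tracking data held only in RAM in a fixed-size window that continuously overwrites itself, so there is no stored history to subpoena. A minimal sketch under that assumption, with hypothetical names:

```python
# A sketch of ephemeral on-device storage: a ring buffer of body-tracking
# frames that overwrites itself, never touching disk or the cloud.

import collections

class EphemeralTrackingBuffer:
    def __init__(self, frames: int = 90) -> None:
        # Keep roughly one second at 90 fps; older frames are overwritten,
        # not archived, so they cannot be recovered later by anyone.
        self._buf = collections.deque(maxlen=frames)

    def record(self, pose: dict) -> None:
        self._buf.append(pose)  # the oldest frame silently falls off the end

    def latest(self):
        return self._buf[-1] if self._buf else None

    def __len__(self) -> int:
        return len(self._buf)

buffer = EphemeralTrackingBuffer()
for t in range(1000):
    buffer.record({"t": t, "head_pitch": 0.1})
# Only the last 90 frames exist anywhere; the other 910 are gone for good.
assert len(buffer) == 90
```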

[00:53:11.955] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:53:17.093] Kurt Opsahl: Well, one thing, which is for those in this community who are in a position of designing hardware, software, and environments for this: just take a moment to think about these principles when you're designing. And I think not only is it good for the user and good for the world to have this brighter future that we're imagining, but I think it also will be good for you as the designer, because people will be more comfortable adopting this technology if they feel comfortable with its privacy, its security, and the human rights aspects of it. It will make it a technology that people are comfortable with, and that'll be good for the world. And I look a little bit at the example of Google Glass, which came out as a very early form of AR, and it essentially failed in part because it was too creepy, and people were very uncomfortable seeing someone enter into a space with a Google Glass on. They were made fun of, shamed, and these are the sorts of things that can work against the adoption of this technology. And a way to address that is to get ahead of it and make it so that people are comfortable if they see someone else with AR glasses: they're not worried about being caught up in it unawares, and they're comfortable wearing it because they know that the information it's gathering about you, whether it's VR or AR, is in a safe space. So these important human rights principles are not just good for the world; if you're a designer working on it, they're good for you and good for making a successful product.

[00:54:35.965] Katitza Rodriguez: Just to the community: I hope we can organize future events, and I truly would like to meet more people from the AR and VR community, especially technologists. The technology is not easy. It's new; many products are not even available. And so, trying to understand the technology, the law, the policy, first you need to understand the technology. So meeting more technologists and hackers and people who are really passionate about design, and getting together with them and the EFF, would be great. So that's my ask. I hope we can come out to future events and get together with your community, Kent.

[00:55:14.141] Rory Mir: And speaking of getting together with your community, I encourage folks to check out eff.org/fight, because in addition to EFF's work, we have this Electronic Frontier Alliance of groups that might be closer to you than San Francisco. And we love connecting those communities and inviting new communities into the alliance.

[00:55:33.760] Jon Callas: I would say that this is an opportunity for people who are building and designing things: to take a lot of what they want to do to make the world they want to see, they can literally go and do it. You can literally go and say, I want to live in a world that is more respectful to a lot more people, and then make the decisions whereby that is built into things that are kind of like the laws of physics. You can do this, and it can really be a better world. And that's the opportunity that is ahead. It's going to end up being, how much do we deliver on things? Are we ourselves going to deliver on the checks that our principles want to write?

[00:56:23.532] Kent Bye: Awesome. Well, Kurt, Katitza, Jon, and Rory, thank you so much for joining me here on the podcast today, and for all the work that you're doing at the EFF. It's so important to have these lawyers and tech policy folks and hardware designers and technologists working at all these different layers, the technological architectures, the laws, the cultural dynamics. All these things are really important for creating a future that we all want to live into. And so it's such important work, and thanks so much for being on the front lines of a lot of this as well. And yeah, just thanks for taking the time to unpack a little bit here on the podcast. So thanks.

[00:56:56.453] Kurt Opsahl: Thanks for having me on the program.

[00:56:57.834] Katitza Rodriguez: Thank you, Kent, for all your time and your kindness in helping us with the topic.

[00:57:04.781] Jon Callas: Yes. Thank you very much.

[00:57:06.675] Kent Bye: So that was Kurt Opsahl, the Deputy Executive Director and Counsel for the Electronic Frontier Foundation; Katitza Rodriguez, the Policy Director for Global Privacy for the EFF; Jon Callas, the Director of Technology Projects; and Rory Mir, the Grassroots Organizer. So I have a number of different takeaways from this interview. First of all, there was a really balanced approach, looking from the perspective of law, tech policy, the technological architecture, and grassroots organizing in the culture. So lots of different perspectives on these different issues. And the EFF is really trying to take a multifaceted approach to addressing these issues, from making Privacy Badger, to working on tech policy, to working from these international law perspectives. Some takeaways that I got were from listening to Katitza Rodriguez talk about the global policy perspective: how there are these different data protection regulations, everything from the GDPR to each of the individual constitutions around the world. Some companies have to follow the local laws in terms of all these different data protections. The GDPR has more of an omnibus, one-stop-shop approach, whereas in the United States it's very fragmented. There are also data protection treaties that the United States has not signed, like Convention 108 from the Council of Europe, which was passed on the 28th of January of 1981. One of the really striking things that Jon Callas said was: what if, in virtual reality, you actually have more rights than you have in reality? That's a pretty interesting idea, that you would actually have more human rights, civil rights, and civil liberties, you know, that you'd be able to perhaps express yourself more freely or be able to travel more. There are potentials in which we could have even more rights when it comes to virtual and augmented reality. However, Kurt Opsahl was talking about how email wasn't designed with privacy in mind at first, and so they had to try to tack it on later, and it was a lot harder, once it was already implemented, to try to add all these different layers of privacy and security. And just the same, I think we're in this space with virtual and augmented reality where we're not necessarily architecting every single core level to have the most private or secure communications that we possibly can. There are lots of ways in which people can listen in or track what we're doing. I think that's the type of thing the EFF is trying to look at: what are all the different ways that we can start to address that? So, you know, there are ways in which the human rights framework, and I talked to Brittan Heller about this and talked to Rafael Yuste, is trying to go to the United Nations and these international organizations and, in some ways, pass these higher-level principles: here are some universal human rights, the right to agency, the right to mental privacy, the right to identity, the right to be free from algorithmic bias, and the right to fair and equitable access to these technologies. Those are the five neurorights that Rafael Yuste and the Morningside Group put together. And there are discussions that are happening at places like RightsCon, but also within the United Nations itself.
You know, I did a big deep dive into the history and evolution of privacy at the United Nations, going all the way back to the Universal Declaration of Human Rights, which was passed on the 10th of December of 1948. In Article 12, it says: no one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence, nor to attacks upon his honor and reputation; everyone has the right to the protection of the law against such interference or attacks. So that's in the UN Declaration of Human Rights from 1948. And then that theme evolves over the years and gets repeated in different contexts. In 1966, there were a number of different covenants passed, including the International Covenant on Civil and Political Rights. Article 17: no one shall be subjected to arbitrary or unlawful interference with his privacy, family, home, or correspondence, nor to unlawful attacks on his honor and reputation; and everyone has the right to the protection of the law against such interference or attacks. I guess that was one of the other things in my conversation with the EFF: there's surveillance from corporations and companies, and then there's government surveillance. And sometimes the EFF will take positions where they don't want the government to have access to facial recognition because of all the potential abuses, but they see that with private companies there's not as large a risk of the kind of harm where someone could be misidentified and put in jail for, like, 30 years. That's a pretty severe way in which some of the bias within these systems could affect someone's life and their human rights. So then there was the UN World Conference on Human Rights in Vienna, from the 14th to the 25th of June of 1993. And then 2006 was the start of the UN Human Rights Council, shifting over from the Commission on Human Rights. There were a lot of different complaints and issues with that in terms of it not really being effective, and they just wanted to reorganize it, so they did a little bit of a reboot in June of 2006. And out of that, they eventually had a number of different special rapporteurs assigned to look at specific issues, starting with the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. And it's out of that, out of some of the different reports that were made, that they started to see that in order to have freedom of speech, you really need to have the sanctity of your privacy of communications, you know, not to have government surveillance tracking everything that you're saying. And it's from that that more discussions kicked up around privacy needing its own special rapporteur. The representative from Brazil on the Human Rights Council, Ms. Alameda Watanabe Patriota, in view of recent revelations of violations of the fundamental right to privacy, asked what the international community could do to help enforce the right, and whether, in the High Commissioner's opinion, the absence of internet privacy guarantees might undermine freedom of expression. She would also appreciate further comments on what member states could do in order to ensure that securing basic human rights for all vulnerable groups, including lesbian, gay, bisexual, and transgender people, did not represent an emphasis on any one group.
And so it sounds like that discussion of the different human rights violations and surveillance happened at the United Nations on Wednesday, the 23rd of October of 2013. And from that, the first General Assembly resolution passed a few months later, on the 18th of December of 2013: the right to privacy in the digital age. The UN General Assembly has actually passed one of these resolutions on the right to privacy in the digital age in 2013, 2014, 2016, 2018, and 2020. And there are also a number of Human Rights Council resolutions on the right to privacy in the digital age, in 2015, 2017, 2018, and then 2019. So overall, as I've read through all these different United Nations documents, as one resolution is passed, it says, OK, given this, given this, given that, and it traces out the whole evolution of the issue. And by jumping through a lot of these different resolutions, I was able to come up with a bit of a rough timeline of this specific issue of privacy and how it's evolved over time. I was just really curious, after Katitza had mentioned it, to tie together the evolution of this topic and how it's changed over time, and also that there's a whole Special Rapporteur for privacy. In Human Rights Council Resolution 28/16, so the 28th session and the 16th resolution, they passed the right to privacy in the digital age. That was on April 1st of 2015. And in there, that's when they created the Special Rapporteur on the right to privacy. In July 2015, Professor Joseph Cannataci was appointed as the first Special Rapporteur on the right to privacy. As a Special Rapporteur, he was kind of the liaison going around to these different governments and having different conversations; he actually made a trip to the United States in 2017. It took a number of years for the report to come out, but it was kind of interesting to read through it. So you have this situation where there are these human rights laws that are passed, and then these higher-level principles, and then you have some interactions from the international law into the regional laws, trying to either critique or improve different ways of upping the different layers of privacy. I guess as I dug into all this, I was really trying to figure out: is this a viable path for these international organizations to interface with the United States? It's one step removed from directly changing the laws here in the United States, but I do think it's still important to think about it from that international perspective, especially through how this could all go horribly wrong. Because if you don't think about some of these things, in the United States context it may not be an issue, but when you go into these other countries, that's when you start to have these bigger human rights violations. And so how do you ensure, from both the tech policy as well as the technological architecture, and then also just from the ways in which people are using the technology and being in the same rooms with each other, how are you going to moderate? Once this is at scale, what are the things that you need to start to think about? I think for most people in the tech industry, they are hoping that someone will figure all this out and it will all work out in the end. But I think that organizations like the EFF are really on the front lines of protecting consumer rights and civil liberties and digital rights.
So I think, you know, when you look at surveillance capitalism and the ads, it is quite an issue in terms of what the future of our mental privacy is. And it's not going to just be a, you know, trust-them-to-self-regulate situation. That's where we're going to start, but I think that ultimately we're going to need some other levels of technology policy, or human rights like the right to mental privacy, because as these technologies move forward, it's going to get more and more concerning in terms of how much stuff they could potentially start to decode from our brains. And, you know, Kurt was basically saying it's not just the stuff that they do get right; it's also the stuff that they get wrong, in terms of misattributing whatever you're thinking. So that's yet another issue: we can't give 100% credibility to everything that they're doing, because there's going to be a little bit of loss in the signal when you are trying to metaphorically and literally read someone's mind. Ultimately, there is a role for passing some of these things, but like Katitza says, we also need strong enforcement of all the principles that are already on the books. So like I said, there's lots of different things here. I'll try to add lots of different breadcrumbs in the show notes to be able to dig into more information, and also into the last four episodes and previous episodes as well, kind of doing a little bit of a wrap-up and a survey of some of these different issues. I think it's probably one of the deepest open questions in terms of the future of this technology, and it's going to only continue to be a topic of discussion, to keep the conversation going and to think of some constructive, viable solutions for how to actually protect these rights that we want to have. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
