Breonna’s Garden is an augmented reality experience created by Lady PheOnix in collaboration with Ju’Niyah Palmer to honor the life of Palmer’s sister, Breonna Taylor, and it premiered at the 2021 Tribeca Film Festival. I found it to be a profoundly moving experience, and I lived into the invitation to connect to the tender parts of myself while listening to the recorded memories shared by Taylor’s family and friends. The iOS app for Breonna’s Garden is available to try out here.

I had a chance to talk with the creator Lady PheOnix about her journey in creating this project, the process of collaborating with Breonna Taylor’s family, and the underlying intentions and invitations that she has embedded into this piece, including an opportunity to record your own memories of loved ones that you may have lost. Lady PheOnix referred to this piece as a sort of healing balm where you can be tender, battered, and bruised, and I certainly was able to experience that in this powerful piece.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


Neuroscience researcher Rafael Yuste started Columbia University’s Neuro-Rights Initiative to promote an ethical framework to preserve a set of human rights within neuro-technologies. He co-authored a 2017 Nature paper titled “Four ethical priorities for neurotechnologies and AI” after creating the “Morningside Group” of over 20 neuroscientists who were also concerned about the potential ethical harms caused by neuro-technologies.

Another neuro-right was added in the latest neuro-rights paper, titled “It’s Time for Neuro-Rights.” This brings the list up to the right to identity, the right to agency, the right to mental privacy, the right to equitable access to neural augmentation, and the right to be free from algorithmic bias. In the end, Yuste hopes to gain momentum within the United Nations to add these fundamental neuro-rights to the Universal Declaration of Human Rights, which could then put pressure on regional legislators to change their laws to stay in compliance with these neural rights.

On May 26th, there was a day-long symposium, Non-Invasive Neural Interfaces: Ethical Considerations, featuring cutting-edge neuroscientists working to decode the brain, EMG specialists, and other companies working on commercial-grade neuro-technologies. The gathering was sponsored by the Columbia Neuro-Rights Initiative as well as by Facebook Reality Labs, as both sponsors wanted to bring scientists and ethicists together in order to debate the ethical and privacy implications of these neuro-technologies.

I did some extensive coverage of the Non-Invasive Neural Interfaces: Ethical Considerations event within this Twitter thread here:

Part of the concern about these neuro-technologies is that there is already a large amount of data from the brain that can be decoded, and this is only going to increase over time. Yuste also brought up that there are existing methods to stimulate the brain in a way that could violate our right to agency. Whether it’s reading from or writing to our brains, Yuste says that we can’t be walking around with the metaphorical hoods of our brains opened up for any outside actor to measure or stimulate.

In the end, there was a lot more science shared at the Non-Invasive Neural Interfaces gathering than meaty ethical debate. There was not enough diversity of speaker backgrounds to hold a true multi-stakeholder gathering that included perspectives from privacy advocates, philosophers, or privacy lawyers. Part of what makes the question of how to preserve mental privacy so challenging is that it requires a multi-disciplinary approach representing a critical mass of stakeholders with differing and competing interests in order to have robust debates on all of the risks and benefits across different contexts. Dealing with the complexity of these emerging technologies also requires some potentially new conceptual frameworks around the philosophy of privacy, such as Dr. Helen Nissenbaum’s theory of Contextual Integrity or Dr. Anita Allen’s approach of treating privacy as a human right (see my talk for more context on this).

There was some interesting resistance to one of Yuste’s proposed strategies for preserving our right to mental privacy: treating these non-invasive neural interfaces as medical devices and their data as medical data. This would regulate data that could be used to decode what’s happening within the body, but it would also limit how the variety of different brain-stimulation devices could be used.

Neuro-tech start-ups like OpenBCI and Kernel resisted this suggested classification and regulation of neuro-tech as medical devices, since their companies probably wouldn’t exist in their current form had there been additional medical regulations they’d have to follow. But Yuste argues that the use of neural data could have profound impacts on the integrity of our bodies, and so there is a compelling argument that it’s a type of sensitive data most analogous to medical data.

After listening to Yuste at the Non-Invasive Neural Interfaces: Ethical Considerations conference, I reached out to invite him onto the Voices of VR podcast so that he could elaborate on the current state of the art in the neuroscience of neuro-tech, what he sees as the most viable strategy for protecting our right to mental privacy, why looking at these issues through the lens of human rights is so compelling, where the future of neuro-rights is headed, and why he’s so excited about the revolutionary and humanistic potential of neuro-technologies to help us understand our brains and ourselves better.


Here’s my 22-minute talk on “State of Privacy in XR & Neuro-Tech: Conceptual Frames” presented at the VR/AR Global Summit on June 2, 2021


HTC announced two new, enterprise-focused VR headsets at their Vivecon on Tuesday, May 11th. The Vive Focus 3 is a standalone VR HMD with an impressive 2,448 × 2,448 per-eye resolution, a 90Hz refresh rate, a 120° FoV, new controllers, and a swappable battery, priced at $1,300. The new Vive Pro 2 VR HMD was also announced with the same 2,448 × 2,448 (6.0MP) per-eye resolution, but with a 120Hz refresh rate, dual-element Fresnel lenses, and a 120° diagonal FoV, priced at $800 with a June 3 release. HTC also announced a number of new Vive Business software offerings, including the Vive Business App Store, Vive Business Training, Vive Business Streaming, & Vive Business Device Management.

I had a chance to talk with Alvin Wang Graylin, the China President at HTC, about HTC’s two new VR headsets, the launch of Vive Business, and more context on their Vive XR Ecosystem, the new Vive Trackers and Facial Tracker, and the trends of virtual idols & VTubers, including their new virtual spokesperson named VEE.

Here’s my Twitter thread from Vivecon and the Virtual Vive Ecosystem Conference:



The second volume of the Immersive Arcade Showcase, featuring four immersive stories from the United Kingdom, launches today as DLC within the Museum of Other Realities. The theme of the second volume is Memories & Dreams, and it features Vestige, Limbo, Lucid, & Somnai. This showcase will be running for the next 8 weeks, and is produced by Digital Catapult in collaboration with Kaleidoscope’s Immersive Production Studio, UK Research & Innovation, and Audience of the Future.

I had a chance to talk with Jessica Driscoll, Head of Immersive Technologies at Digital Catapult, about the first and second showcases as well as more context on this government-funded digital technology innovation centre, which is “accelerating the adoption of new and emerging technologies to drive regional, national and international growth for UK businesses across the economy.” Their CreativeXR program is a technology accelerator for the arts and culture industries, which funds a lot of VR & AR stories, but there are other initiatives at Digital Catapult around IoT, AI, and 5G that have a lot of overlap with the companies working on XR projects. Digital Catapult’s Immersive Arcade program has also produced a timeline of immersive art & story projects produced in the UK over the past 20 years.

It’s great to see this type of funding and support from the United Kingdom for the immersive industry, and I have definitely been seeing the impact of the projects they’re funding, as many of them have appeared throughout the film festival circuit since the CreativeXR program started in 2017. This three-volume retrospective series of Immersive Arcade is a great opportunity to see some of the immersive stories that have come out of the UK over the past 5 years within the context of the Museum of Other Realities, which has created some really impressive immersive installations and transportative worlds to help set the context for each of these pieces.



Facebook’s Project Aria announcement at Facebook Connect in September raised a number of different ethical questions with anthropologists and technology ethicists. Journalist Laurence Dodds described it on Twitter by saying, “Facebook will send ‘hundreds’ of employees out into public spaces recording everything they see in order to research privacy risks of AR glasses.” During the Facebook Connect keynote, Head of Facebook Reality Labs Andrew Bosworth described Project Aria as a prototype research device worn by Facebook employees and contractors that would be “recording audio, video, eye tracking, and location data” as part of “egocentric data capture.” In the Project Aria launch video, Director of Research Science at Facebook Reality Labs Research Richard Newcombe said that “starting in September, a few hundred Facebook workers will be wearing Aria on campus and in public spaces to help us collect data to uncover the underlying technical and ethical questions, and start to look at answers to those.”

The idea of Facebook workers wearing always-on AR devices recording egocentric video and audio data streams across private and public spaces in order to research the ethical and privacy implications raised a lot of red flags from social science researchers. Anthropologist Dr. Sally A. Applin wrote a Twitter thread explaining “Why this is very, very bad.” And tech ethicist Dr. Catherine Flick said, “And yet Facebook has a head of responsible innovation. Who is featured in an independent publication about responsible tech talking about ethics at Facebook. Just mindboggling. Does this guy actually know anything about ethics or social impact of tech? Or is it just lip service?” The two researchers connected via Twitter and agreed to collaborate on a paper over the course of six months, and the result is a 15,000-word peer-reviewed paper titled “Facebook’s Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons,” published in the latest issue of the Journal of Responsible Technology.

Applin & Flick deconstruct the ethics of Project Aria based upon Facebook’s own four Responsible Innovation Principles, which were announced by Boz in the same Facebook Connect keynote after the Project Aria launch video. Those principles are: #1) Don’t surprise people. #2) Provide controls that matter. #3) Consider everyone. And #4) Put people first. In their paper, Applin & Flick conclude that

Facebook’s Project Aria has incomplete and conflicting Principles of Responsible Innovation. It violates its own principles of Responsible Innovation, and uses these to “ethics wash” what appears to be a technological and social colonization of the Commons. Facebook enables itself to avoid responsibility and accountability for the hard questions about its practices, including its approach to informed consent. Additionally, Facebook’s Responsible Innovation Principles are written from a technocentric perspective, which precludes Facebook from cessation of the project should ethical issues arise. We argue that the ethical issues that have already arisen should be basis enough to stop development—even for “research”. Therefore, we conclude that the Facebook Responsible Innovation Principles are irresponsible and as such, insufficient to enable the development of Project Aria as an ethical technology.

I reached out to Applin & Flick to come onto the Voices of VR podcast to give a bit more context on their analysis through their anthropological & technology ethics lenses. Sally Applin is an anthropologist looking at the cultural adoption of emerging technologies through the lens of anthropology and her multi-dimensional social communications theory called PolySocial Reality. She’s a Research Fellow at the HRAF Advanced Research Centres (EU), Canterbury, Centre for Social Anthropology and Computing (CSAC), and a Research Associate at the Human Relations Area Files (HRAF), Yale University. Catherine Flick is a Reader (aka associate professor) at the Centre for Computing and Social Responsibility at De Montfort University, United Kingdom.

We deconstruct Facebook’s Responsible Innovation Principles in the context of technology ethics and other responsible innovation best practices, and critically analyze how quickly these principles break down even when looking at just the Project Aria research project. Facebook has been pointing to their responsible innovation principles whenever ethical questions come up, but as we discuss in this podcast, these principles are not really clear, coherent, or robust enough to provide useful insight into some of the most basic aspects of bystander privacy and consent for augmented reality. Applin & Flick have a much more comprehensive breakdown in their paper at https://doi.org/10.1016/j.jrt.2021.100010, and this conversation should serve as an overview and primer for how to critically evaluate Facebook’s responsible innovation principles.



The 29 immersive experiences that are a part of the Tribeca Immersive 2021 line-up were announced on Tuesday, April 29. There will be 11 Virtual Arcade experiences available starting on June 9 within the Museum of Other Realities, 5 Storyscapes experiences only available in person at Tribeca, and then 13 outdoor screenings (some of which will also be available remotely). I got the run-down on the Storyscapes & highlights from the outdoor screenings from chief curator Loren Hammonds, as well as more context about the first major film festival to have IRL gatherings since the pandemic turned everything remote in March 2020. The 17 New Images Paris experiences in competition were also announced today, and they will be showing next to the Tribeca Virtual Arcade within the MOR.



On April 22, 2020, Magic Leap announced layoffs of somewhere between 600 and 1,000 employees, which amounted to around a third to a half of their total staff. In December 2019, The Information reported that Magic Leap had only sold 6,000 AR headsets against a target goal of over 100,000. The full story of what exactly happened and why has yet to be fully told, and it’s stories like these that reinforce the dominant media narratives around the hype of Magic Leap.

But after listening to the stories of a number of Magic Leap employees, there are alternative stories about what the experience of working there meant to them and what the company has contributed to the overall XR industry. Portions of the $2.6 billion that Magic Leap raised were spent on salaries, helping to bootstrap an entire cohort of XR professionals who are now working at a number of Big Tech companies. Protocol reported in June of 2020 that a number of laid-off Magic Leap employees ended up at Apple (~40%), Facebook (~20%), Google (~20%), Microsoft (~15%), and Amazon (~5%). [Note that these are rough estimates based upon the proportions of the pie graph.]

Andre Elijah hosts a bi-weekly discussion on Clubhouse called “No BS Realities of AR/VR” along with co-moderators Daliso Ngoma & Azad Balabanian. Seven former Magic Leap employees participated in a retrospective discussion reflecting on their time working on the many creative and difficult challenges of trying to shape the emerging spatial computing medium of AR. Elijah got consent from each of the speakers and shared the recording with me to air as a special edition of the Voices of VR podcast, since there are a lot of oral history perspectives on their time working there, what went wrong from their perspective, and why some of them still see the overall experience as one of the most exalted times of their careers.

Here’s the list of seven former Magic Leap employees who were a part of the conversation:

  • Anastasia Devana – Audio Director (until April 2020)
  • Steve Lukas – Head of Developer Relations Engineering at Magic Leap (until April 2021)
  • Paul Reynolds – Senior Director, SDK & Applications (until May 2016)
  • Dave Shumway – Lead Audio Designer/Composer (until April 2020)
  • Jeremy Vanhoozer – VP, Creative Content (until October 2020)
  • Tim Stutts – Lead Interaction Designer (until April 2020 [Updated: April 29, 2021])
  • Joe Gabriel – Developer Relations Community & Program Manager (until April 2020)

They also share more details on the drama of trying to get The Last Light released onto the store after its premiere at SXSW was cancelled, and after Magic Leap’s pivot from entertainment to enterprise applications. There are some reflections on the unique challenges of working in an emergent medium, what their content strategy was, what it was like to work with co-founder and former CEO Rony Abovitz, as well as some of their final thoughts and reflections on their time at Magic Leap.

There were a lot of new insights and perspectives that I hadn’t heard before. While this conversation won’t fully answer all of the questions of what exactly happened at Magic Leap and why, I do think it adds a lot of valuable testimony about the early days of consumer AR through the lens of what it was like to be a part of a company that will ultimately be a part of the historical evolution of the augmented reality medium.




Brittan Heller is a human rights lawyer who recently published a paper pointing out that there are significant gaps in privacy laws, which do not cover the types of physiological and biometric data that will be available from virtual and augmented reality. Existing laws around biometrics are tightly connected to identity, but she argues that there are entirely new classes of data available from XR that she’s calling “biometric psychography,” which she says is a “new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests.”

Her paper, published in the Vanderbilt Journal of Entertainment and Technology Law, is titled “Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law.” She points out that “biometric data” is actually pretty narrowly defined in most state laws to be tightly connected to identity and personally-identifiable information. She says,

Under Illinois state law, a “biometric identifier” is a bodily imprint or attribute that can be used to uniquely distinguish an individual, defined in the statute as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” 224 Exclusions from the definition of biometric identifier are “writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color” and biological material or information collected in a health care setting. 225

The types of biometric data that will be coming from immersive technologies are more like the types of data that used to only be collected within the context of a health care setting. One of her citations is a 2017 Voices of VR podcast interview I did with behavioral neuroscientist John Burkhardt on “Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior,” which lists some of the types of biometric psychographic data that will be made available to XR technologists. Heller says in her paper,

What type of information would be included as part of biometric psychographics? One part is biological info that may be classified as biometric information or biometric identifiers. 176 Looking to immersive technology, the following are biometric tracking techniques: (1) eye tracking and pupil response; 177 (2) facial scans; 178 (3) galvanic skin response; 179 (4) electroencephalography (EEG); 180 (5) electromyography (EMG); 181 and (6) electrocardiography (ECG). 182 These measurements tell much more than they may indicate on the surface. For example, facial tracking can be used to predict how and when a user experiences emotional feelings. 183 It can trace indications of the seven emotions that are highly correlated with certain muscle movements in the face: anger, surprise, fear, joy, sadness, contempt, and disgust. 184 EEG shows brain waves, which can reveal states of mind. 185 EEG can also indicate one’s cognitive load. 186 How aversive or repetitive is a particular task? How challenging is a particular cognitive task? 187 Galvanic skin response shows how intensely a user may feel an emotion, like anxiety or stress, and is used in lie detector tests. 188 EMG senses how tense the user’s muscles are and can detect involuntary micro-expressions, which is useful in detecting whether or not people are telling the truth since telling a lie would require faking involuntary reactions. 189 ECG can similarly indicate truthfulness, by seeing if one’s pulse or blood pressure increases in response to a stimulus. 190

While it’s still unclear whether these data streams will end up having personally-identifiable signatures that are only detectable by machine learning, the larger issue here is that when these physiological data streams are fused together, it’s going to be possible to extrapolate a lot of psychographic information about our “likes, dislikes, preferences, and interests.”

Currently, there are no legal protections around this data that set any limits on what private companies or third-party developers can do with it. There are a lot of open questions around the limits of what we consent to sharing, but also around to what degree having access to all of this data might put users in a position where their neuro-rights of agency, identity, or mental privacy are undermined by whomever has access to it.

Heller is a human rights lawyer whom I previously interviewed in July 2019 on how she’s been applying human rights frameworks to curtail harassment and hate speech in virtual spaces. Now she’s looking at how human rights frameworks and agreements may be able to help set a baseline of human rights that is more consensus-based, in the sense that there’s no legal enforcement mechanism. She cited the “UN Guiding Principles on Business and Human Rights” as an example of a human rights framework that is used to combine a human rights lens with companies’ business practices around the world. Here’s a European Parliament policy study of the UN Guiding Principles on Business and Human Rights that gives a graphical overview:


One of the biggest open issues that needs to be resolved is how this concept of “biometric psychography” gets enshrined into some sort of Federal or State privacy law in order for it to be legally binding on these companies. Heller talked about a hierarchy between the laws, which is one way to look at how international law sits at a higher and more abstract level that isn’t always legally binding in national, regional, or state jurisdictions. She said that citing international law in a US court is often not going to be a winning strategy.


Another way to look at this issue is that there’s a nested set of contexts: cultural norms; international, national, regional, and city laws; and also the economic business layers. So even though Article 12 of the UN’s Universal Declaration of Human Rights says, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks,” there are contextual dimensions of privacy where individuals can enter into Terms of Service & Privacy Policy contractual agreements with these businesses, consenting for companies to have privileged information that could be used to undermine our sense of mental privacy and agency.


Ultimately, the United States may need to implement a Federal privacy law that sets up some guardrails for what companies can and cannot do with the types of biometric psychographic data that come from XR. I previously discussed the history and larger context of US privacy law with privacy lawyer Joe Jerome, who explains that even though there’s a lot of bi-partisan consensus on the need for some sort of Federal privacy law, there are still a lot of partisan disagreements on a number of issues. There is also a lot of privacy legislation being passed at the State level, which the International Association of Privacy Professionals is tracking here.

Heller’s paper is a great first step in explaining some of the types of biometric psychographic data that are made available by XR technologies, but it’s still an open question whether there should be laws implemented at the Federal or State level to set up guardrails for how this data is being used and in what contexts. I’m a fan of Helen Nissenbaum’s contextual integrity approach to privacy as a framework to help differentiate the different contexts and information flows, but I have not seen a generalized approach that maps out the range of different contexts and how this could flow back into a generalized privacy framework or privacy law. Heller suggested to me that creating a consensus-driven ethical framework that businesses consent to could be a first step, even if there is no real accountability or enforcement.

Another community that is starting to have these conversations is neuroscientists interested in neuroethics and neuro-rights. There is an upcoming, free Symposium on the Ethics of Noninvasive Neural Interfaces on May 26th, hosted by the Columbia Neuro-Rights Initiative and co-organized by Facebook Reality Labs.

Columbia’s Rafael Yuste is one of the co-authors of the paper “It’s Time for Neuro-Rights,” published in Horizons: Journal of International Relations and Sustainable Development. They are also taking a human rights approach by defining fundamental rights to agency, identity, mental privacy, fair access to mental augmentation, and protection from algorithmic bias. But again, the real challenge is how these higher-level rights at the international law or human rights level get implemented at a level that has a direct impact on the companies delivering these neural technologies. How are these rights going to be negotiated from context to context (especially within the context of consumer technologies that can themselves span a wide range of contexts)? What should the limits be on who has access to the biometric psychographic data from non-invasive neuro-technologies like XR? And should there be limits on what they’re able to do with this data?

I have a lot more questions than answers, but Heller’s definition of “biometric psychography” will hopefully start to move these discussions around privacy beyond personally-identifiable information and our identity, and toward how this data provides benefits and risks to our agency, identity, and mental privacy. Figuring out how to conceptualize, comprehend, and weigh all of these tradeoffs is one of the more challenging aspects of XR ethics, and something that we still need to collectively figure out as a community. It’s going to require a lot of interdisciplinary collaboration between immersive technology creators, neuroscientists, human rights and privacy lawyers, ethicists and philosophers, and many other producers and consumers of XR technologies.


Update April 20th: On April 11th, I posted this visualization of the relational dynamics that we covered in this discussion:

Here is a simplified version of this graphic that helps to visualize the relational dynamics for how human rights and ethical design principles fit into technology policy and the ethics of technology design.


On March 18th, Facebook Reality Labs Research announced some of their research into next-generation neuromotor input devices for mixed reality applications. I paraphrased the most interesting insights from their press conference announcement, but I was still left with a lot of questions about the specific neuroscience principles underlying their ability to target individual motor neurons. I also had a lot of follow-up questions about some of the privacy implications of these technologies, so thankfully I was able to follow up with Thomas Reardon, Director of Neuromotor Interfaces at Facebook Reality Labs and co-founder of CTRL-Labs, to get more context on the neuroscience foundations and privacy risks associated with these breakthrough “adaptive interfaces.”

Reardon described his journey into working on wearable computing, which began at Microsoft, where he created the Internet Explorer browser. He eventually went back to school to get his Ph.D. in neuroscience at Columbia University, and then joined with other neuroscience colleagues to start CTRL-Labs (be sure to check out my previous interview with CTRL-Labs on neural interfaces). Reardon and his team set out to overturn the dogma on motor unit recruitment, and they were successful in detecting the action potentials of individual motor neurons through the combination of surface-level electromyography and machine learning. These wrist-based neural input devices are able to puppeteer virtual embodiments, and can even trigger actions based on the mere intention of movement rather than actual movement. This breakthrough has the potential to revolutionize the fidelity of input beyond the constraints of the human body, since the brain and motor neurons have a lot more low-latency capacity and higher degrees of freedom, which may solve some of the most intractable bottlenecks for robust 3DUI input for virtual and augmented reality.

But with an increase of orders of magnitude in new opportunities for agency, there is also a similar increase in the sensitivity of this data, since the network of these signals could be even more personally identifiable than DNA. There are also a lot of open questions around how the action potentials of these motor neurons, representing both the intentional and actual dimensions of movement, could be used within a sensor-fusion approach with other biometric information. Facebook Reality Labs Research has a poster at IEEE VR 2021 that is able to extrapolate eye gaze information from head and hand pose data and contextual information about the surrounding virtual environment. So there’s already a lot of sensor-fusion work happening toward Facebook’s goal of “contextually-aware AI,” which is not only going to be aware of the world around you, but also potentially and eventually what’s happening inside of your own body from moment to moment.

Part of the reason why Facebook Reality Labs is making more public appearances talking about the ethics of virtual and augmented reality is that they want to get ahead of some of the ethics and policy implications of AR devices. Reardon was able to answer a lot of the questions around the identifiability of this neuromotor interface data, but it’s still an open scientific question exactly how that motor movement data could be combined with other information to extrapolate what Brittan Heller has called “biometric psychography,” a term she coined to identify this new class of data.

Heller says, “Biometric psychography is a new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests. Immersive technology must capture this data to function, meaning that while biometric psychography may be relevant beyond immersive tech, it will become increasingly inescapable as immersive tech spreads. This is important because current thinking around biometrics is focused primarily on identity, but biometric psychography is the practice of using biometric data to instead identify a person’s interests.”

Heller goes on to evaluate the gaps in existing privacy law that fail to cover these emerging challenges of biometric psychography, which “most regulators and consumers incorrectly assume will be governed by existing law.” For a really comprehensive overview of the current state of U.S. privacy law, be sure to listen to my interview with Joe Jerome (or read through the annotated HTML or PDF transcript with citations). There are a lot of current debates about a pending U.S. Federal Privacy law, and I’d be really curious to hear Facebook’s current thinking on how the types of biometric and psychographic data from XR could shape the future of privacy law in the United States.

Another point that came up again and again is the context dependence of these issues around ethics and privacy. Lessig’s Pathetic Dot Theory tends to treat culture, laws, economics, and technological architecture/code as independent contexts, but I’m proposing more of a mereological structure of wholes and parts, where the cultural context drives the laws, the economy operates within the context of the laws, and then the design frameworks, app code, operating systems, and technology hardware are nested within the hierarchy of these other contexts. Because these are nested wholes and parts, there are also feedback loops where technology platforms can result in the shifting of culture.

I’ve previously covered how Alfred North Whitehead’s Process Philosophy takes a paradigm-shifting process-relational approach to some of these issues, which I think provides a useful, deeper contextual framing for them. Whitehead helped to popularize these types of mereological structures through a lot of his mathematics and philosophy work, and this type of fractal geometry has been a really useful conceptual frame for understanding the different levels of context and how they relate to each other.

Context is a topic that comes up again and again in thinking about these ethical questions. Despite Facebook’s promotion of “contextually-aware AI,” most of their talk about context has been through the lens of your environmental context, though during their last press conference they said that the other people around you also help to shape your context. And it’s not just the people: the topic of conversation can also jump between different contexts. Up to this point Facebook has not elaborated on any of the foundational theoretical work for how they’re conceiving of context, contextually-aware AI, and the boundaries around it. One pointer that I’d provide is Helen Nissenbaum’s Contextual Integrity approach to privacy, which tries to map out how information flows relate to different stakeholders in different contexts (e.g. how you’re okay with sharing intimate medical information with a doctor and financial information with a bank teller, but not vice versa).
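To make Nissenbaum's framework concrete, here's a minimal sketch — the contexts, roles, and information types are my own illustrative examples, not Nissenbaum's formal notation — that represents information-flow norms as tuples and checks whether a given flow respects them:

```python
# Each norm: (context, sender role, recipient role, information type).
# A fuller model would also track the data subject and a transmission
# principle (e.g. consent, confidentiality), per Nissenbaum's framework.
APPROPRIATE_FLOWS = {
    ("medical", "patient", "doctor", "health_record"),
    ("banking", "customer", "teller", "account_balance"),
}

def flow_is_appropriate(context, sender, recipient, info_type):
    """A flow preserves contextual integrity only if it matches an
    established norm for the context it occurs in."""
    return (context, sender, recipient, info_type) in APPROPRIATE_FLOWS

# Health records flowing to your doctor match a norm...
ok = flow_is_appropriate("medical", "patient", "doctor", "health_record")
# ...but the same information flowing to a bank teller violates one.
violation = flow_is_appropriate("banking", "customer", "teller", "health_record")
```

The interesting question for XR is what the norm table even looks like when a headset straddles medical, social, and commercial contexts at once.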

A lot of the deeper ethical questions around data from XR are elucidated when looking at them through the lens of context. Having access to hand motion data and the motor neuron data driving it may not, by itself, raise many privacy concerns. However, FRL Research is able to extrapolate gaze data when that hand pose is combined with head pose and information about the environment. So in isolation it’s not as much of a problem, but it becomes one when combined within an economic context of contextually-aware AI and the potential extension of Facebook’s business model of surveillance capitalism into spatial computing. How much of all this XR data will be fused and synthesized toward the end goal of biometric psychography is another big question that could shape future discussions about XR policy.
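As a rough illustration of why fused signals are more revealing than any single stream — the blending heuristic and weights below are invented for this sketch, not FRL's published method — here's how head pose, hand pose, and scene context might be combined into a crude gaze estimate:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def estimate_gaze(head_forward, hand_pos, target_pos, head_weight=0.7):
    """Toy sensor fusion: blend the head's forward vector with the
    direction from the hand toward a salient scene object, on the
    heuristic that users tend to look where they reach.
    All inputs are 3D tuples; the 0.7 weight is illustrative."""
    reach_dir = normalize(tuple(t - h for t, h in zip(target_pos, hand_pos)))
    head_dir = normalize(head_forward)
    blended = tuple(head_weight * hd + (1 - head_weight) * rd
                    for hd, rd in zip(head_dir, reach_dir))
    return normalize(blended)
```

No single input here is gaze data, yet the fused output approximates it — which is exactly why evaluating each data stream in isolation understates the privacy risk.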

It’s possible to see a future where these XR technologies could be abused to lower our agency and, over the long run, weaken our bodily autonomy and privacy. In order to prevent this from happening, what guardrails need to be implemented from a policy perspective? What would viable enforcement of these guidelines look like? Do we need a privacy institutional review board to provide oversight and independent auditing? What is Facebook’s perspective on a potential Federal Privacy law, and how could XR shape that discussion?

So overall, I’m optimistic about the amazing benefits of neuromotor input devices like the one Facebook Reality Labs is working on as a research project, and how it has the potential to completely revolutionize 3DUI and exponentially increase our degrees of freedom in expressing our agency in user interfaces and virtual worlds. Yet I also still have outstanding concerns, since there’s a larger conversation that needs to happen with policymakers and the broader public, and Facebook needs to be more proactive in doing the conceptual and theoretical work of weighing the tradeoffs of this technology. There are always benefits and risks to any new technology, and we currently don’t have robust conceptual or ethical frameworks to navigate the complexity of some of these tradeoffs.

This public conversation is just getting under way, and I’m glad to have had the opportunity to explore some of the lower-level neuroscience foundations and mechanics of neuromotor interfaces and some of their associated privacy risks. But I’m also left feeling like some of the more challenging ethical and privacy discussions are going to happen at a higher level, within the business and economic context for how all of this biometric XR data will end up being captured, analyzed, and stored over time. At the end of the day, how these data are used and for what purpose is beyond the control of foundational researchers like Reardon, as these types of business decisions are made further up the food chain. Reardon expressed his personal preference that these data not be mined and recorded, and so at the research level there’s a lot of work to see whether they can do real-time processing on edge compute devices instead. But again, Facebook has not committed to any specific business model, and they’re currently leaving everything on the table in terms of what data are recorded and how they choose to use them. If something isn’t already covered in their current privacy policy, then it would just be a matter of updating the policy to declare it.

Facebook does not have a great history of living up to its privacy promises, and they still need to embody a lot more action over time before I or the rest of the XR community will have more trust and faith in the alignment between what they’re saying and how they’re embodying those principles in action. The good news is that I’m seeing a lot more embodied action in both their public statements and my own interactions with them, both in this interview with Reardon and in my interview with privacy policy manager Nathan White back in October 2020. Is that enough for me yet? No. There’s still a long way to go, such as seeing the details of any of their policy ideas or establishing a better accountability process that provides some checks and balances over time. This XR tech represents some amazing potential and terrifying risks, and the broader XR community has a responsibility to brainstorm what some of the policy guardrails might look like in order to nudge us down the more protopian path.

Update (4/1/2021): Here’s some more info on the Facebook Reality Labs symposium on Ethics of Noninvasive Neural Interfaces in collaboration with Columbia University’s NeuroRights Initiative.

Also here’s a human rights proposal for Neuro-Rights:


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Dream was a series of 10 live performances over 8 days that used motion-captured actors with virtual embodiments set within an immersive storyworld of Shakespeare’s A Midsummer Night’s Dream, powered by the Unreal Engine. This project was a research & development initiative funded by the United Kingdom’s Audience of the Future program that involved the Royal Shakespeare Company, Marshmallow Laser Feast, the Philharmonia Orchestra, and the Manchester International Festival.

They were originally going to produce a site-specific, location-based experience focused on playing with different haptic & sensory experiences for the audience members, but they had to pivot to an online performance in the midst of the pandemic. They set a goal of reaching 100,000 people with a show that had two tiers: a paid interactive experience and a free livestream of the live performance, mediated through the simulated environment and broadcast onto a 2D screen.

I had a chance to break down the evolution and journey of this project with Pippa Hill, Head of the Literary Department at the Royal Shakespeare Company, as well as Robin McNicholas, Director at Marshmallow Laser Feast and Director of Dream. We talked about the constraints and goals they were setting out to meet, which included featuring some of their R&D findings within the context of an experience. There was a lot of work in figuring out how to translate real-time motion capture into the puppeteering of virtual characters, along with some very early experiments with audience participation and limited interactivity, all with an underlying goal of making it accessible to a broad demographic ranging in age from 4 to 104 years old.

We explore some of the existential tradeoffs and design constraints that they had to navigate, but overall Hill said that there wasn’t anything left on the cutting room floor in terms of the potential for these immersive technologies to continue to shape future experiments with live theater in the context of virtual reality, augmented reality, or mixed reality. There are also lots of exciting and difficult narrative challenges in figuring out different ways for the audience to participate and interact with the story.

There are also opportunities to further explore a tiered model of participation with differing levels of interaction, along with a lot more underlying narrative structures and opportunities to exercise either individual or collective agency in how that feeds back into the unfolding of a story or experience.

In the end, there are probably more new questions than firm answers on a lot of these existential questions of interactive and immersive narratives, but the scale and positive response that Dream has received so far help to prove that there is a potential market for these types of interactive narrative and live performance experiments. There was also a 60-question survey that I filled out afterwards, so I expect even more empirical data and research insights to be digested and reported on in the future as well.


Here’s some behind-the-scenes video clips sent to me by part of the production team.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality