biometric-psychography

brittan-heller-2
Brittan Heller is a human rights lawyer who recently published a paper pointing out that there are significant gaps in privacy laws, which do not cover the types of physiological and biometric data that will be available from virtual and augmented reality. Existing laws around biometrics are tightly connected to identity, but she argues that there are entirely new classes of data available from XR that she’s calling “biometric psychography,” which she says is a “new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests.”

Her paper, published in the Vanderbilt Journal of Entertainment and Technology Law, is titled “Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law.” She points out that “biometric data” is actually pretty narrowly defined in most state laws, tightly connected to identity and personally-identifiable information. She says,

Under Illinois state law, a “biometric identifier” is a bodily imprint or attribute that can be used to uniquely distinguish an individual, defined in the statute as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” 224 Exclusions from the definition of biometric identifier are “writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color” and biological material or information collected in a health care setting. 225

The types of biometric data that will be coming from immersive technologies are more like types of data that used to only be collected within the context of a health care setting. One of her citations is a 2017 Voices of VR podcast interview I did with behavioral neuroscientist John Burkhardt on the “Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior,” which lists some of the types of biometric psychographic data that will be made available to XR technologists. Heller says in her paper,

What type of information would be included as part of biometric psychographics? One part is biological info that may be classified as biometric information or biometric identifiers. 176 Looking to immersive technology, the following are biometric tracking techniques: (1) eye tracking and pupil response; 177 (2) facial scans; 178 (3) galvanic skin response; 179 (4) electroencephalography (EEG); 180 (5) electromyography (EMG); 181 and (6) electrocardiography (ECG). 182 These measurements tell much more than they may indicate on the surface. For example, facial tracking can be used to predict how and when a user experiences emotional feelings. 183 It can trace indications of the seven emotions that are highly correlated with certain muscle movements in the face: anger, surprise, fear, joy, sadness, contempt, and disgust. 184 EEG shows brain waves, which can reveal states of mind. 185 EEG can also indicate one’s cognitive load. 186 How aversive or repetitive is a particular task? How challenging is a particular cognitive task? 187 Galvanic skin response shows how intensely a user may feel an emotion, like anxiety or stress, and is used in lie detector tests. 188 EMG senses how tense the user’s muscles are and can detect involuntary micro-expressions, which is useful in detecting whether or not people are telling the truth since telling a lie would require faking involuntary reactions. 189 ECG can similarly indicate truthfulness, by seeing if one’s pulse or blood pressure increases in response to a stimulus. 190

While it’s still unclear whether these data streams will end up having personally-identifiable signatures that are only detectable by machine learning, the larger issue here is that when these physiological data streams are fused together, it will be possible to extrapolate a lot of psychographic information about our “likes, dislikes, preferences, and interests.”

Currently, there are no legal protections around this data setting any limits on what private companies or third-party developers can do with it. There are a lot of open questions around the limits of what we consent to sharing, but also around the degree to which having access to all of this data might put users in a position where their Neuro-Rights of agency, identity, or mental privacy are undermined by whomever has access to it.

Heller is a human rights lawyer, whom I previously interviewed in July 2019 on how she’s been applying human rights frameworks to curtail harassment and hate speech in virtual spaces. Now she’s taking the approach of looking at how human rights frameworks and agreements may be able to help set a baseline of human rights that is more consensus-based, in the sense that there’s no legal enforcement mechanism. She cited the “UN Guiding Principles on Business and Human Rights” as an example of a human rights framework that is used to combine a human rights lens with company business practices around the world. Here’s a European Parliament policy study of the UN Guiding Principles on Business and Human Rights that gives a graphical overview:

un-guiding-principles-on-busness-and-human-rights

One of the biggest open issues that needs to be resolved is how this concept of “biometric psychography” gets enshrined into some sort of Federal or State privacy law in order for it to be legally binding on these companies. Heller talked about a hierarchy between the laws, and this is one way to look at the different layers: international law sits at a higher and more abstract level that isn’t always legally binding in national, regional, or state jurisdictions. She said that citing international law in a US court is often not going to be a winning strategy.

hierarchy-of-contexts

Another way to look at this issue is that there’s a nested set of contexts: cultural norms; international, national, regional, and city laws; and also the economic business layers. Article 12 of the UN’s Universal Declaration of Human Rights says, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” But there are contextual dimensions of privacy where individuals can enter into Terms of Service & Privacy Policy contractual agreements with these businesses, consenting for companies to have privileged information that could be used to undermine our sense of mental privacy and agency.

nested-hierarchy-of-contexts

Ultimately, the United States may need to implement a Federal Privacy Law that sets up some guardrails for what companies can and cannot do with the types of biometric psychographic data that come from XR. I previously discussed the history and larger context of US privacy law with privacy lawyer Joe Jerome, who explains that even though there’s a lot of bipartisan consensus on the need for some sort of Federal Privacy Law, there are still a lot of partisan disagreements on a number of issues. There is also a lot of privacy legislation being passed at the State level, which the International Association of Privacy Professionals is tracking here.

Heller’s paper is a great first step in explaining some of the types of biometric psychographic data that are made available by XR technologies, but it’s still an open question whether laws should be implemented at the Federal or State level in order to set up some guardrails for how this data is used and in what context. I’m a fan of Helen Nissenbaum’s contextual integrity approach to privacy as a framework to help differentiate the different contexts and information flows, but I have not seen a generalized approach that maps out the range of different contexts and how this could flow back into a generalized privacy framework or privacy law. Heller suggested to me that creating a consensus-driven, ethical framework that businesses consent to could be a first step, even if there is no real accountability or enforcement.

Another community that is starting to have these conversations is neuroscientists interested in Neuro Ethics and Neuro-Rights. There is an upcoming, free Symposium on the Ethics of Noninvasive Neural Interfaces on May 26th hosted at the Columbia Neuro-Rights Initiative and co-organized by Facebook Reality Labs.

Columbia’s Rafael Yuste is one of the co-authors of the paper “It’s Time for Neuro-Rights” published in Horizons: Journal of International Relations and Sustainable Development. They are also taking a human rights approach of defining some fundamental rights to agency, identity, mental privacy, fair access to mental augmentation, and protection from algorithmic bias. But again, the real challenge is how these higher level rights at the international law or human rights level get implemented at a level that has a direct impact on these companies who are delivering these neural technologies. How are these rights going to be negotiated from context to context (especially within the context of consumer technologies that within themselves can span a wide range of contexts)? What should the limits be of who has access to this biometric psychographic data from non-invasive neuro-technologies like XR? And should there be limits of what they’re able to do with this data?

I have a lot more questions than answers, but Heller’s definition of “biometric psychography” will hopefully start to move these discussions around privacy beyond personally-identifiable information and our identity, and look at how this data provides benefits and risks to our agency, identity, and mental privacy. Figuring out how to conceptualize, comprehend, and weigh all of these tradeoffs is one of the more challenging aspects of XR Ethics, and something that we still need to collectively figure out as a community. It’s going to require a lot of interdisciplinary collaboration between immersive technology creators, neuroscientists, human rights and privacy lawyers, ethicists and philosophers, and many other producers and consumers of XR technologies.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

FRL-neuromotor-interface
On March 18th, Facebook Reality Labs Research announced some of their research into next-generation neuromotor input devices for mixed reality applications. I paraphrased the most interesting insights from their press conference announcement, but I was still left with a lot of questions about the specific neuroscience principles underlying their ability to target individual motor neurons. I also had a lot of follow-up questions about some of the privacy implications of these technologies, and so thankfully I was able to follow up with Thomas Reardon, Director of Neuromotor Interfaces at Facebook Reality Labs and co-founder of CTRL-Labs, to get more context on the neuroscience foundations and privacy risks associated with these breakthrough “adaptive interfaces.”

Reardon described his journey into working on wearable computing by starting at Microsoft, where he created the Internet Explorer browser. He eventually went back to school to get his Ph.D. in neuroscience at Columbia University, and then joined with other neuroscience colleagues to start CTRL-Labs as a startup (be sure to check out my previous interview with CTRL-Labs on neural interfaces). Reardon and his team set out to override the dogma on motor unit recruitment, and they were successful in detecting the action potentials of individual motor neurons through the combination of surface-level electromyography and machine learning. These wrist-based neural input devices are able to puppeteer virtual embodiments, and even cause action based on the mere intention of movement rather than actual movement. This breakthrough has the potential to revolutionize input fidelity beyond the constraints of the human body, since the brain and motor neurons have a lot more low-latency capacity and higher degrees of freedom that may solve some of the most intractable bottlenecks for robust 3DUI input for virtual and augmented reality.

But with an orders-of-magnitude increase in new opportunities for agency, there is also a similar increase in the sensitivity of this data, since the network of these signals could be even more personally-identifiable than DNA. There are also a lot of open questions around how the action potentials of these motor neurons, representing both the intentional and actual dimensions of movement, could be used within a sensor-fusion approach with other biometric information. Facebook Reality Labs Research has a poster at IEEE VR 2021 that is able to extrapolate eye gaze information with access to head and hand pose data and contextual information about the surrounding virtual environment. So there’s already a lot of sensor-fusion work happening towards Facebook’s goal of “contextually-aware AI,” which is not only going to be aware of the world around you, but also potentially and eventually what’s happening inside of your own body moment to moment.
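As a purely illustrative sketch of why pose streams are so sensitive (the linear model, synthetic data, and feature dimensions here are all my own assumptions, not FRL’s actual method), gaze can in principle be regressed from correlated head and hand pose signals:

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical sensor-fusion sketch: predict 2D gaze angles (yaw, pitch)
# from fused head pose (6 DoF) and hand pose (6 DoF) features.
# Synthetic data stands in for real tracking streams.
rng = np.random.default_rng(0)
n_samples = 500
pose = rng.normal(size=(n_samples, 12))        # fused head + hand features
true_weights = rng.normal(size=(12, 2))        # unknown pose-to-gaze mapping
gaze = pose @ true_weights + 0.01 * rng.normal(size=(n_samples, 2))

# Least-squares fit: the "fusion" is just learning gaze from its pose correlates.
weights, *_ = lstsq(pose, gaze, rcond=None)
predicted = pose @ weights
error = np.abs(predicted - gaze).mean()
print(f"mean absolute gaze error: {error:.4f}")
```

The point of the toy model is that none of the individual inputs is gaze data, yet a simple regression recovers it almost exactly once the streams are combined.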

Part of the reason why Facebook Reality Labs is making more public appearances talking about the ethics of virtual and augmented reality is because they want to get ahead of some of the ethics and policy implications of AR devices. Reardon was able to answer a lot of the questions around the identifiability of this neuromotor interface data, but it’s still an open scientific question as to exactly how that motor movement data could be combined with other information in order to extrapolate what Brittan Heller has called “biometric psychography,” a term that tries to identify this new class of data.

Heller says, “Biometric psychography is a new concept for a novel type of bodily-centered information that can reveal intimate details about users’ likes, dislikes, preferences, and interests. Immersive technology must capture this data to function, meaning that while biometric psychography may be relevant beyond immersive tech, it will become increasingly inescapable as immersive tech spreads. This is important because current thinking around biometrics is focused primarily on identity, but biometric psychography is the practice of using biometric data to instead identify a person’s interests.”

Heller goes on to evaluate the gaps in existing privacy law that don’t cover these emerging challenges of biometric psychography, which “most regulators and consumers incorrectly assume will be governed by existing law.” For a really comprehensive overview of the current state of U.S. privacy law, be sure to listen to my interview with Joe Jerome (or read through the annotated HTML or PDF transcript with citations). There are a lot of current debates about a pending U.S. Federal Privacy law, and I’d be really curious to hear Facebook’s current thinking on how the types of biometric and psychographic data from XR could shape the future of privacy law in the United States.

nested-context-lessig
Another point that came up again and again is the context dependence of these issues around ethics and privacy. Lessig’s Pathetic Dot Theory tends to look at culture, laws, economics, and technological architecture/code as independent contexts, but I’m proposing more of a mereological structure of wholes and parts, where the cultural context drives the laws, the economy is within the context of the laws, and then the design frameworks, app code, operating systems, and technology hardware are nested within the hierarchy of the other contexts. Because these are nested wholes and parts, there are also feedback loops where technology platforms can result in the shifting of culture.

I’ve previously covered how Alfred North Whitehead’s Process Philosophy takes a paradigm-shifting, process-relational approach to some of these issues, which I think provides a deeper contextual framing for them. Whitehead helped to popularize these types of mereological structures through a lot of his mathematics and philosophy work, and this type of fractal geometry has been a really useful conceptual frame for understanding the different levels of context and how they’re related to each other.

Context is a topic that comes up again and again in thinking about these ethical questions. Despite Facebook’s promotion of “contextually-aware AI,” most of how they’ve been talking about context has been through a lens of your environmental context, but during their last press conference they said that the other people around you also help to shape your context. It’s not just the people; the topic of conversation also has the ability to jump between different contexts. Up to this point Facebook has not elaborated on any of their foundational theoretical work for how they’re conceiving of the topic of context, contextually-aware AI, and the boundaries around it. One pointer that I’d provide is Helen Nissenbaum’s Contextual Integrity approach to privacy, which tries to map out the relationship of information flows with different stakeholders in different contexts (e.g. how you’re okay with sharing intimate medical information with a doctor and financial information with a bank teller, but not vice versa).
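As a rough illustration of how the contextual-integrity idea could be operationalized in software (the norm tuples below are my own toy examples, not part of Nissenbaum’s formal framework), each information flow can be checked against context-specific norms:

```python
# Hypothetical sketch of Nissenbaum-style contextual integrity: an
# information flow is appropriate only if it matches a norm for its context.
# The norms below are illustrative examples, not a real policy.
ALLOWED_FLOWS = {
    ("patient", "doctor", "medical", "healthcare"),
    ("customer", "bank_teller", "financial", "banking"),
}

def flow_is_appropriate(sender, receiver, info_type, context):
    """Return True only when this exact flow is sanctioned by a context norm."""
    return (sender, receiver, info_type, context) in ALLOWED_FLOWS

# Sharing medical info with a doctor is fine; with a bank teller it is not.
print(flow_is_appropriate("patient", "doctor", "medical", "healthcare"))    # True
print(flow_is_appropriate("patient", "bank_teller", "medical", "banking"))  # False
```

The design point is that appropriateness is a property of the whole tuple, not of the data type alone, which is exactly what a context-blind privacy policy misses.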

A lot of the deeper ethical questions around data from XR are elucidated when looking at them through the lens of context. Having access to hand motion data and the motor neuron data driving it may actually not raise that many privacy concerns. However, FRL Research is able to extrapolate gaze data when that hand pose is combined with head pose and information about the environment. So in isolation it’s not as much of a problem, but it becomes one when it’s combined within an economic context of contextually-aware AI and the potential extension of Facebook’s business model of surveillance capitalism into spatial computing. How much of all of this XR data is going to be fused and synthesized towards the end goal of biometric psychography is also a big question that could shape future discussions about XR policy.

It’s possible to see a future where these XR technologies could be abused to lower our agency and, over the long run, weaken our bodily autonomy and privacy. In order to prevent this from happening, what are the guardrails from a policy perspective that need to be implemented? What would viable enforcement of these guidelines look like? Do we need a privacy institutional review board to provide oversight and independent auditing? What is Facebook’s perspective on a potential Federal Privacy law and how XR could shape that discussion?

So overall, I’m optimistic about the amazing benefits of neuromotor input devices like the one Facebook Reality Labs is working on as a research project, and how it has the potential to completely revolutionize 3DUI and exponentially increase our degrees of freedom in expressing our agency in user interfaces and virtual worlds. Yet I also still have outstanding concerns, since there’s a larger conversation that needs to happen with policy makers and the larger public, and Facebook needs to be more proactive in doing more of the conceptual and theoretical work about how to weigh the tradeoffs of this technology. There are always benefits and risks for any new technology, and we currently don’t have robust conceptual or ethical frameworks to be able to navigate the complexity of some of these tradeoffs.

This public conversation is just starting to get under way, and I’m glad to have had the opportunity to explore some of the lower-level neuroscience foundations of neuromotor interfaces and some of their associated privacy risks. But I’m also left feeling like some of the more challenging ethical and privacy discussions are going to be happening at a higher level, within the business and economic context for how all of this biometric XR data will end up being captured, analyzed, and stored over time. At the end of the day, how this data is used and for what purpose is beyond the control of foundational researchers like Reardon, as these types of business decisions are made further up in the food chain. Reardon expressed his personal preference that these data aren’t mined and recorded, and so at the research level there’s a lot of work to see whether they can do real-time processing on edge compute devices instead. But again, Facebook has not committed to any specific business model, and they’re currently leaving everything on the table in terms of what data is recorded and how they choose to use it. If it’s not already covered in their current privacy policy, then it’d just be a matter of updating it for them to declare.

Historically, Facebook has not lived up to its privacy promises, and they still need to embody a lot more actions over time before I or the rest of the XR community can have more trust and faith in the alignment between what they’re saying and how they’re embodying those principles in action. The good news is that I’m seeing a lot more embodied action from both their public statements and my own interactions with them, both in this interview with Reardon and in my interview with privacy policy manager Nathan White back in October 2020. Is that enough for me yet? No. There’s still a long way to go, such as seeing details on any of their policy ideas or having a better accountability process with some checks and balances over time. This XR tech represents some amazing potential and terrifying risks, and the broader XR community has a responsibility to brainstorm what some of the policy guardrails might look like in order to nudge us down the more protopian path.

Update 4/1/2021: Here’s some more info on the Facebook Reality Labs symposium on Ethics of Noninvasive Neural Interfaces in collaboration with Columbia University’s NeuroRights Initiative.

Also here’s a human rights proposal for Neuro-Rights:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

dream-rsc-mlf
Dream was a series of 10 live performances over 8 days that used motion-captured actors with virtual embodiments, set within an immersive storyworld of Shakespeare’s A Midsummer Night’s Dream powered by the Unreal Engine. This project was a research & development initiative funded by the United Kingdom’s Audience of the Future initiative, involving the Royal Shakespeare Company, Marshmallow Laser Feast, Philharmonia Orchestra, and the Manchester Film Festival.

They were originally going to produce a site-specific, location-based experience focused on playing with different haptic & sensory experiences for the audience members, but they had to make a digital pivot to an online performance in the midst of the pandemic. They set a goal of trying to reach 100,000 people with a show that had two tiers: a paid interactive experience and a free livestream of the live performance, mediated through the simulated environment and broadcast onto a 2D screen.

Robin-McNicholas2
I had a chance to break down the evolution and journey of this project with Pippa Hill, Head of the Literary Department at the Royal Shakespeare Company, as well as with Robin McNicholas, Director at Marshmallow Laser Feast and Director of Dream. We talked about adapting the constraints and goals that they were setting out with, which included featuring some of their R&D findings within the context of an experience. There was a lot of work in figuring out how to translate real-time motion capture into the puppeteering of virtual characters, along with some very early experiments in audience participation and limited interactivity, with an underlying goal of making it accessible to a broad demographic ranging in age from 4 to 104 years old.

We explore some of the existential tradeoffs and design constraints that they had to navigate, but overall Hill said that there wasn’t anything left on the cutting room floor in terms of the potential for how these immersive technologies will be able to continue to shape future experiments with live theater in the context of virtual reality, augmented reality, or mixed reality. There are also lots of exciting and difficult narrative challenges in figuring out different ways for the audience to participate and interact with the story.

There are also some opportunities to further explore a tiered model of participation with differing levels of interaction, as well as a lot more underlying narrative structures and opportunities to exercise either individual or collective agency in how that feeds back into the unfolding of a story or experience.

In the end, there are probably more new questions than firm answers on a lot of these existential questions of interactive and immersive narratives, but the scale and positive response that Dream has received so far help to prove out that there is a potential market for these types of interactive narrative and live performance experiments. There was also a 60-question survey that I filled out afterwards, and so I expect there to be even more empirical data and research insights to be digested and reported on in the future as well.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s some behind-the-scenes video clips sent to me by part of the production team.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

AR-neural-input
I participated in a Facebook press event on Tuesday, March 16th, featuring some Facebook Human-Computer Interaction research on AR Neural Inputs, Haptics, Contextually-Aware AI, & Intelligent Clicks. It was an on-the-record event for print quotes; however, I was not given permission to use any direct audio quotes, and so I try to paraphrase, summarize, and analyze the announcements through a lens of XR technology, ethics, and privacy.

I’m generally a big fan of these types of neural inputs, because as CTRL-labs neuroscientist Dan Wetmore told me in 2019, these EMG sensors are able to target individual motor neurons that can be used to control virtual embodiment. They even showed videos of people training themselves to control individual neurons without actually moving anything in their body. There’s a lot of really exciting neural input and haptic innovation on the horizon that will be laying down the foundation for a pretty significant human-computer interaction paradigm shift from 2D to 3D.

The biggest thing that gives me pause is that these neural inputs are currently being paired with Facebook’s vision of “contextually-aware AI,” presumably an always-on AI assistant that is constantly capturing & modeling your current context. This is so their “Intelligent Click” process can extrapolate your intentions through inference, aiming to give you the right interface, within the right context, at the right time.

I don’t think Facebook has really thought through how to opt in or opt out of specific contexts, how third-party bystanders can revoke their consent and opt out, or whether there’s even any opt-in process. When I asked how Facebook plans to handle consent for bystanders to either opt in or opt out, they pointed me to an external RFP to get feedback from the outside community on how to handle this. I hear a lot of rhetoric from Facebook about how the fact that they are in charge of the platform allows them to “bake in privacy, security, and safety” from the beginning, which sort of implies that they’d be taking a privacy-first architectural approach. But at the same time, when asked how they plan on handling bystander consent or an opt-out option for their always-on & omnipresent contextually-aware AI assistant, they’re outsourcing these privacy architectures to the responsibility of third parties via their RFP process, which already closed for submissions in October 2020.

They also have been mentioning their four Responsible Innovation principles announced at Facebook Connect 2020: #1 Never surprise people, #2 Provide controls that matter, #3 Consider everyone, #4 Put people first. My interpretation is that these are stack-ranked, because there’s language elsewhere indicating that “#3 Consider everyone” specifically refers to non-users and bystanders of their technology (as well as underrepresented minorities). Part of why I say this is because other passages seem to indicate that the people in “#4 Put people first” are actually Facebook’s community of hardware and software users: “#4 Put people first: We strive to do what’s right for our community, individuals, and our business. When faced with tradeoffs, we prioritize what’s best for our community.”

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s some research prototype videos that Facebook has released:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

loveseat-venice-2019
Kiira Benzing’s Loveseat was an ambitious fusion of live theater performance with VR technologies that premiered at the Venice Film Festival in 2019. The actors performed live for the audience in Venice while wearing VR headsets, as their performances were simultaneously broadcast within the virtual reality environment of High Fidelity (back when it still had VR components, before its pivot to spatial audio).

There have been a lot of other fusions of live theater with VR technologies recently, including Benzing’s follow-up live-theater-in-VR piece a year later called Finding Pandora X, which won Best VR Immersive User Experience at Venice 2020. She took a lot of the lessons learned from Loveseat and applied them to Finding Pandora X, especially the fact that when you do a live performance simultaneously in VR and IRL, you end up doubling the production needs and the staff who need to attend to both performance contexts.

I had a chance to unpack some of the other lessons learned from Benzing at Venice 2019 after one of their performances, including some of the specific acting insights from all three of the actors involved in the production: Jenn Harris, Jonathan David Martin, and Sam Kebede.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

SXSW-VR-in-VRChat
Blake-Kammerdiener
SXSW Online is happening March 16-20, and I had a chance to get a sneak peek of the SXSW Online XR world in VRChat and talk with the chief curator of the SXSW Virtual Cinema program, Blake Kammerdiener, about the program and special events that he’s been able to put together. The $399 entry price for SXSW is a bit steep if you’re only interested in the immersive storytelling program, but it also includes all of the tech conference, music conference, film conference, film festival, and music festival events in addition to the SXSW Online XR program. I’ll be attending a number of different aspects of SXSW Online next week, and talking to Kammerdiener helped me get a bit more of an idea of what to expect and where to track all of the different events, talks, meetups, and live performances.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

WebXRAwards-Polys

The Polys WebXR Awards was an awards show on February 20th, 2021, founded by Ben Irwin in collaboration with Sophia Moshasha and Julie Smithson. Irwin wanted to feature a lot of the work that happened on the immersive web in 2020, since the official WebXR spec finally shipped in the Chrome browser on December 10, 2019. The Polys WebXR Awards was a live show streamed on Twitch, with “Meta Multiverse” watch parties within Mozilla Hubs, AltSpaceVR, Engage, and Tivoli Cloud. They had an hour-long pre-awards show featuring pre-recorded red carpet interviews, and they presented 11 awards across a number of different categories.

I brought together Irwin, Moshasha, and Smithson three days after the show in order to unpack their journey in producing the event, as well as some of their highlights and takeaways in celebrating the experiences and developers who are helping to make the immersive web possible. There is not a video archive available for the show as they wanted to keep it ephemeral and in the moment, but you can see all of the nominees on their WebXR Awards website, and links to all of the nominees and winners can also be found in my Twitter thread coverage of the WebXR Awards. They are also planning on posting more clips over the next year on their WebXR Awards YouTube Channel considering they captured a lot of historically interesting interviews and conversations.

Here’s a full list of the Polys WebXR Awards winners:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s my Twitter thread from the event listing all of the winners and nominees:

UPDATE (March 11): Ricardo clarifies on Twitter that he has indeed won a few awards prior to his WebXR Lifetime Achievement Award.

https://twitter.com/mrdoob/status/1369958753823522820

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Traveling the Interstitium with Octavia Butler - Still 2
One of my favorite pieces at Sundance New Frontier 2021 this year was a series of four open web experiences created as a part of the piece Traveling the Interstitium with Octavia Butler. Born out of The Guild of Future Architects‘ Futurist Writer Room, lead artists Sophia Nahli Allison, idris brewster, Stephanie Dinkins, Ari Melenciano, and Terence Nance participated in a series of worldbuilding workshops focused on the themes and imagination of Octavia Butler’s body of science fiction work. Their original output was going to be live performances, but with COVID-19, they decided to use open web technologies to distribute their speculative design art pieces. You can see these four immersive web pieces on the website Interstitium.space/.

kamal-sinclair2
I had a chance to talk about how this project came about with Kamal Sinclair, Founding Executive Director of the Guild of Future Architects, as well as with Ari Melenciano, a creative technologist & founder of AfroTecTopia. We trace the lineage of these worldbuilding processes that take inspiration from Alex McDowell’s World Building Institute, Afrofuturist designers, Allied Media Projects, AfroTecTopia, NYU’s Interactive Telecommunications Program (ITP), Skawennati Fragnito’s Initiative for Indigenous Futures, Afrocentric Design, Processes Centered in Blackness, Janet Wong & Bill T. Jones of New York Live Arts, Future Imagination Summit 2019, as well as Octavia Butler’s body of work.

Ari-Melenciano
Sinclair and Melenciano talk about how this type of speculative worldbuilding allows Black artists to go beyond deficit-based narratives focused on trauma, and the space to step deeply into “the audacity of bold imaginations of our future” where reconciliation is possible and new potentials are released. They are cultivating a practice of creative & collaborative foresight that’s able to “liberate minds of calcified understandings” and ultimately democratize the imagination of our future through these creative, worldbuilding processes. Sinclair has become convinced of the power of radical imagination facilitated through these worldbuilding processes, because she has witnessed multiple times how these imaginal Afrofuturist visions expressed through art have come to pass when given enough resources and community members with the capacity and willingness to make it happen.

Each of the four pieces within Traveling the Interstitium with Octavia Butler has its own speculative design and take on the future, and you can experience them yourself on the Interstitium.space website.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Beyond the Breakdown - Still 1
Worldbuilding and speculative design were big themes at Sundance New Frontier 2021, and I had a chance to participate in an experience called Beyond the Breakdown that facilitated a collaborative & deliberative process of worldbuilding. Created by Tony Patrick, Lauren Lee McCarthy, and Grace Lee, it builds off some of the foundational work and processes developed by Alex McDowell and the USC World Building Institute. The core idea of worldbuilding is to design the underlying context and structures of society projected out within the context of a future time and place, and then to apply an evolutionary cultural, technological, economic, and political model in order to imagine some potential futures from a variety of different perspectives and points of view. In the end, there will hopefully be some common themes and consensus that emerge.

In order to facilitate this process, the Beyond the Breakdown collaborators created a simplified teleconference application that replicates the feeling of a group Zoom call. There are six participants on this call along with an AI assistant named Serenity that’s puppeteered by a human off screen. The goal is to project out into the future to the year 2050, and then have a group discussion that’s catalyzed by a series of prompts provided by the AI assistant. The aim is to find the underlying principles and values that are consistent today and in the future, to imagine a better potential future, and then to create a collaborative community dialogue to see where there are common interests and goals, so that individuals within a community can start to think about what types of actions they can take today in order to make these imaginal futures a reality.

I’m really excited about the power and potential of democratizing these types of community worldbuilding practices, and especially the potential of using immersive storytelling to actually build out some prototypes of these speculative futures within a virtual environment in order to start to prototype the large-scale designs, architecture, and emergent social dynamics of some of these imaginal futures. We’ll be taking a look at some specific examples of this in our next episode on Traveling the Interstitium with Octavia Butler.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Nightsss - Still 1
Nightsss is a sensual experience that’s structured around an erotic Polish-language poem by Weronika Lewandowska that uses dance and spatial metaphors in VR to create an immersive poem. She collaborated with co-director and co-screenwriter Sandra Frydrysiak, who also has a background in dance. They both are very interested in researching how the immersive experience they created impacts the neuroscience of embodiment, perception, and empathy, in collaboration with the University of Social Sciences & Humanities in Warsaw.

Lewandowska and Frydrysiak are also interested in creating immersive experiences that help the audience feel embraced, immersed, safe, intimate, and sensual, and they’re working with the Visual Narratives Lab to help do some research into directing attention and other foundational research topics for immersive storytelling. They coded Lewandowska’s poem, and used it to structure multiple layers of story that included the emotions, visuals, movement, music, interaction, and overall immersion. Poetry uses a lot of powerful visual metaphor, and so it makes sense that the translation of poetry into immersive poems will help to form the underlying affordances of the spatial language of virtual reality and immersive storytelling.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Here’s a performance of Weronika Lewandowska’s poem featured in Nightsss

Music: Fatality