The new Star Wars: Tales from the Galaxy’s Edge VR experience releases today, and I had a chance to play through it and talk with a couple of the creators from ILMxLab about their latest steps towards their vision of “storyliving.” It represents a lot of progress from their initial three chapters of Vader Immortal, but there’s certainly a long way to go to fully live into this vision of storyliving. Galaxy’s Edge has much more sophisticated gameplay with smarter AI enemies, more agency to move around during the cinematic cut scenes, more aspects of open-world exploration, and the pacing of the story elements against interactive exploration has evolved quite a bit as well.

For anyone interested in tracking the evolution of VR and immersive storytelling, there are certainly a lot of innovations here that build off of what Half-Life: Alyx was able to achieve. There are more choices for missions or challenges to accomplish as well as a few different combat gameplay options, but it’s still mostly a linear unfolding of missions and stories. The full manifestation of storyliving will need a lot more technological innovation in AI and natural language processing, underlying narrative structures with more dynamic character interactions, and ways to move beyond the constraints of the mobile compute capabilities of the Oculus Quest through the future of remote rendering and distributed compute. In the larger context of storyliving, this represents some small incremental steps on a much larger and more ambitious journey for where this is all headed in the next 10-20 years.

ILMxLab is certainly pushing the limits of quality and fidelity, as the worlds are expansive and immersive, with a compelling cast of characters and virtual performances featuring the voices of Frank Oz as Yoda and Anthony Daniels as C-3PO, which triggered some deep fanboy nostalgia in me that was honestly a bit surprising.

It’s also worth noting that Tales from the Galaxy’s Edge has a significant number of accessibility options for different types of locomotion, standing vs. seated play, torso & height customization, and a sticky grip setting so you don’t have to spend hours holding down buttons. All of this is substantial foundational work that will allow the team to release additional story chapters, told by the bartender Seezelslak (voiced by SNL’s Bobby Moynihan), that can be unlocked by gathering enough ingredients in the world.

I had a chance to catch up with ILMxLab senior producer Alyssa Finley and art director Steven Hendricks to talk about the journey from Vader Immortal to Galaxy’s Edge, how they designed around the story and compelling combat, and some of the different tradeoffs they faced along the way.

There’s some interesting use of an augmented reality-enabled watch that helps provide direction and guidance on some of your missions through a compass and destination overlay. You can also learn more about the world by scanning objects to unlock journals, and it’s also a holographic communications device through which characters can provide guidance to keep the story moving forward and offer deeper context to situate you in this place and time.

In the end, I completed all of the missions and achieved all of the challenges. I was hoping for more narrative payoff with cinematic cut scenes or a deeper story pushed forward, so some of the tasks end up being more about the journey than the final destination. Going through each of the challenges and missions encouraged me to explore and discover parts of the world that I would not have otherwise stumbled across, and it also gave me more things to do in the world after completing the main campaign. The store shows up too late in the game to do anything meaningful with the accumulation of Galactic Credits, and some of the options are more cosmetic skins than changes to any core mechanics of the gameplay. But it’s providing the foundation for future expansions into new worlds and stories that will be told across space and time in the Star Wars universe.

Overall, I was satisfied with my time spent in the experience, though if anything I was left wanting more bits of character interaction and narrative rewards for completing some of the side missions. There’s some really great sound design, soundtrack, world building, and overall polish that gives an immersive quality that makes it a joy to explore this world. And I expect a lot more storyliving iterations and innovations as ILMxLab continues to explore this intersection between immersive storytelling and interactive gameplay.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The first VR Awards show was produced by the Academy of International Extended Reality (AIXR) in 2017, and after three annual events in London, the producers were forced to hold their fourth show in 2020 virtually. After evaluating many of the different social VR platforms, AIXR decided to go with VRChat, collaborating with over 40 volunteer developers, world builders, and modelers from the community to reimagine the conceit of an awards show as an immersive, world-hopping adventure.

I thought the awards show featured some bold and ambitious experimentation that was worth unpacking with AIXR’s Chief Executive Daniel Colaianni and Event Planner Rachel Knight. They chose to optimize for the Quest, which meant that it was a solo adventure rather than a group experience. They also wanted complete focus on each world’s puzzles and interactions, and to keep the worlds lean enough to run on the Quest 1. It was split up into four chapters, with each world featuring 3-4 different 3D scenes that symbolically represented the overall category. Each of the nominees had a 2D object that was shown on a virtual TV screen, which would then fade to black and reveal the winner of the category with a higher resolution model from the actual experience.

Colaianni and Knight talk about the challenges of trying to recreate the same level of premiere experience, with meaningful networking opportunities, while also recreating the overall atmosphere of tension, suspense, and awe and wonder that can come from these types of shows. Their focus on accessibility and optimizing for the Quest meant that there were a lot of other compromises, ranging from polygon counts to limitations on featuring embedded video files.

Overall, it was really refreshing to see this type of ambitious experimentation in trying to explore new affordances of VR in this type of awards and marketing context. AIXR led a community-driven effort here, and it was through this unique collaboration with VRChat and the VRChat community that they were able to pull off such an impressive immersive fourth edition of the VR Awards show. You can see the full list of winners here.

The VR Awards will only be in VRChat for a few more days, until midnight GMT on Sunday, November 22nd. But you can check out the show in VRChat starting with the VR Awards Act 1.


Athena Demos is the Chief Cultural & Community Officer for Black Rock Creative and Black Rock City VR (BRCvr). I previously covered BRCvr’s officially-sanctioned multiverse world in AltSpaceVR for Burning Man 2020 in my conversation with BRCvr lead developer Greg Edwards. Demos was a key part of originally connecting Edwards with the Burning Man leadership in 2015 to demo his VR prototype built from 2014 photogrammetry scans of the Playa.

Demos helped to coordinate the logistics of a collaboration with Microsoft in hosting BRCvr, and she notes that there were a number of AltSpaceVR & Microsoft staffers who were also Burners, including Alex Kipman. The guiding vision of this collaboration was the 10 Principles of Burning Man: Radical Inclusion, Gifting, Decommodification, Radical Self-reliance, Radical Self-expression, Communal Effort, Civic Responsibility, Leaving No Trace, Participation, & Immediacy. Demos shares some of the fascinating backstory of the collaboration between BRCvr and Microsoft, and the complications of navigating the dynamics of decommodification and gifting.

Overall, it’s interesting to see how these guiding principles were translated from a physical, co-located festival into a virtual gathering in AltSpaceVR. These cultural principles were embedded into the worlds and the social dynamics of the gathering, and I think this made my personal experience different from what I’ve seen before. Perhaps this was due to my own relational dynamics of being connected to a critical mass of participants, which made it interesting to hop between different worlds, but there were other aspects as well. There was the ephemeral promotion of trending worlds isolated to this event, nearly 200 worlds created for the event with a consistent design aesthetic and interlinking between them, as well as my own experiences of how the “playa magic” of serendipitous collisions and moments of synchronicity played out through the course of the week. Demos also said that she had a more intense experience of these “playa magic” moments virtually, as she was a nodal connection for many of the participants, and it was easier to teleport directly to the worlds without having to navigate seven square miles of desert.

Overall, this first virtual Burning Man experience within BRCvr in AltSpaceVR was just the first iteration, and a lot of artists have been really inspired to create a virtual world for next year. With nearly 200 worlds submitted, many tried to recreate the essence of their own theme camps, but seeing what type of virtual art was possible and which types of virtual interactions were the most satisfying has provided a lot of inspiration for an entire community of people with a rich history of Burning Man culture guided by their 10 principles.

I was able to talk with Demos on Wednesday, September 9th, which was just after the final day of BRCvr. We talked about the history and evolution of BRCvr and the unique collaboration with Microsoft, as well as her fresh insights into the 10 principles of Burning Man after giving a number of talks in VR about the topic, listening to feedback, and meditating on how these principles are being implemented in VR.



John Robb has a concept of Networked Tribalism, which is a result of the combination of filter bubbles, AI algorithms, and political polarization. Robb is a military analyst who looks at the impact of technology on culture, and he started his Global Guerrillas blog back in 2003, where he was tracking what he calls the “Open Source Insurgency” in Iraq, where there were over 70 different factions all collaborating against the U.S. occupation. Open Source Insurgency morphed into Open Source Protest with Occupy Wall Street and protest movements around the world, then into Open Source Politics with the 2016 Trump campaign, and then eventually into what he calls “Networked Tribalism.” Robb now runs his Global Guerrillas Report through his Patreon.

Robb uses David Ronfeldt’s framework laid out in a 1996 paper titled “Tribes, Institutions, Markets, Networks: A Framework About Societal Evolution” that splits society into the following groups:

  • “the kinship-based tribe, as denoted by the structure of extended families, clans, and other lineage systems;”
  • “the hierarchical institution, as exemplified by the army, the (Catholic) church, and ultimately the bureaucratic state;”
  • “the competitive-exchange market, as symbolized by merchants and traders responding to forces of supply and demand;”
  • “and the collaborative network, as found today in the web-like ties among some NGOs devoted to social advocacy.”

Media theorist Marshall McLuhan talked about tribalism with Mike McManus in his last televised interview on September 19, 1977, where he predicted digitally-mediated tribalism.

In the years following McLuhan and Ronfeldt’s use of the terms tribes and tribalism, historians like Chris Lowe have pointed out some of the cultural baggage that comes with these terms, which he explores in his essay The Trouble with Tribe. But Robb is referencing McLuhan’s & Ronfeldt’s work when he talks about the “Networked Tribalism” dynamics with regard to the types of mob mentality and phase alignment he’s seeing in online behaviors.

The type of networked tribalism that Robb is looking at is happening within a deeper cultural context of political polarization. The United States election was called for Joe Biden by the Associated Press at 11:25am EST on Saturday, November 7th, after 3.5 days of counting ballots in a few key battleground states. The electoral college race was a lot closer than the overall popular vote, where Biden got over 74 million total votes while Donald Trump received over 70 million votes. For a lot of people inside and outside of the United States, it may be a bit confusing as to why this race was even as close as it was. But the level of political polarization in the United States shouldn’t be underestimated, as there seem to be two completely different filter bubbles with different sets of facts that form mutually exclusive narratives about the story of truth and reality.

There’s a deeper cultural context for this political polarization that Pew Research reported on October 5, 2017. Going back through nearly 20 years of surveys, their research found “widening differences between Republicans and Democrats on a range of measures the Center has been asking about since 1994.”

Michigan State Associate Professor Zachary Neal did a network analysis of legislation over the past 40 years, in order to document an increasing amount of political polarization. His 2020 paper in Social Networks titled A sign of the times? Weak and strong polarization in the U.S. Congress, 1973–2016 documents decreasing amounts of bi-partisan collaboration in favor of no-compromise, partisan alignment.

There are also elements within the media ecosystem that have been becoming more and more explicitly partisan in their coverage as documented by the Media Bias Chart 6.0 by Ad Fontes Media. They map out different news organizations on a spectrum of left vs right political bias as well as on a spectrum of reliability vs unreliability.

It’s within this larger cultural context that user behavior, combined with technology algorithms at Facebook, Google, YouTube, and Twitter, has made the boundaries of these filter bubbles of reality more explicit. Eli Pariser’s 2011 TED talk popularized the “filter bubble” concept, and the technology firms may be merely reflecting and amplifying our patterns of behavior that are driven by a confirmation bias to consume information that reinforces rather than challenges our assumptions about the nature of reality. It’s a lot harder to train algorithms to provide users with aspirational content that’s relevant, important, uncomfortable, and challenging, and that has a diversity of alternative points of view and perspectives.

The issue of filter bubbles has reached the level of technology policy, with Senator John Thune introducing the Filter Bubble Transparency Act, which would require technology companies to disclose how algorithms filter information on their services, and also to offer the option to turn off algorithm-driven timelines and search results in order to escape these data-driven filter bubbles.

The combination of the political polarization, filter bubbles, and AI algorithms is cultivating a deeper context for networked tribalism to thrive within our culture. Robb was the Sensemaker in Residence for a four-part series of Zoom talks at Peter Limberg’s Stoa throughout August 2020 that were posted on their YouTube Channel on October 3, 2020: PART 1: August 10, PART 2: August 17, PART 3: August 24, & PART 4: August 31.

I wanted to invite Robb onto the Voices of VR podcast because I found his “Networked Tribalism” sensemaking framework to be helpful in making sense of some of the cultural and political dynamics in the United States, and how they’re interfacing with technology policy issues around filter bubbles & the impacts of algorithmic filtering, as well as the dynamics of censorship online weighed against the role of a code of conduct and community standards in order to create online spaces that are free from abuse and harassment.

The First Amendment protects the freedom of speech in relationship to the U.S. government, but it doesn’t extend to private property or to big technology platforms, where speech is regulated by their terms of service, codes of conduct, and community guidelines. But even the First Amendment has a number of free speech exceptions, such as the “fighting words” category of speech, incitement, false statements of fact, obscenity, child pornography, threatening the President of the United States, or speech owned by others. Each technology company has to decide how it weighs the benefits of free speech against the potential harms that come from all of the unprotected classes of speech, and how it will enforce that on its platform.

Whether or not the enforcement of these codes of conduct and community guidelines is seen as political censorship or the regulation of unprotected speech depends a lot on the larger cultural and political context that’s driven by the narratives that leaders and influencers within these political factions are creating. Then what happens when the boundaries of acceptable and unacceptable behavior and speech are determined by machine learning data sets that operate at scale and are imperfect in their implementation? And then what happens when your access to virtual and augmented reality will be determined by your actions and behaviors in a media ecosystem that’s being monitored by these same black box AI algorithms?

Robb expects that the intersection of filter bubbles, AI algorithms, and political polarization will continue to accelerate and drive collective behaviors through networked tribalism, and it’s an open question to what degree technology policy and legislation will be able to rein in this larger cultural and political dynamic.

You can look at this issue through a couple of lenses, like Ronfeldt’s framework of “Tribes, Institutions, Markets, Networks” or Lawrence Lessig’s Pathetic Dot Theory of law, social norms/culture, the market, and technological architecture/code. Either way, there’s certainly a large cultural and political aspect where these affinity groups cultivate in-group dynamics through networked communication architectures that build alignment, whether through psychologically-driven confirmation bias, algorithmically-enforced filter bubbles, or an increasingly-biased media ecosystem that’s not accurately representing all sides of a story or countering misinformation and propaganda coming from political leaders. This is obviously a very complicated but also deeply relevant topic for how the intersections of culture, politics, and technology policy will continue to unfold into the 21st century.


There is a lot of sensitive data that will be captured by virtual reality devices, presenting a wide range of ethical and moral dilemmas that I’ve been covering on the Voices of VR podcast since 2016. During Facebook Connect, Facebook released their responsible innovation principles and started talking to the media about them, & Facebook CEO Mark Zuckerberg told The Verge that “One of the things that I’ve learned over the last several years is that you don’t want to wait until you have issues to be discussing how you want to address [ethical issues]. And not just internally — having a social conversation publicly about how society thinks these things should be addressed.” However, the public record shows that hardly any of these ethical discussions about XR have been happening publicly.

Most of the ethical discussions Facebook has been having have happened almost exclusively in private contexts, under non-disclosure agreements, or under Chatham House rules that preclude any public transparency or accountability. I have been asking Facebook for the past couple of years to have some privacy experts come speak about the ethical implications of biometric data, but they’ve been resistant to going on the record about some of these thornier ethical issues about the future of XR. The good news is that this seems to be changing, as I was given the opportunity to speak on the record with Nathan White, who is Facebook Reality Labs’ Privacy Policy Manager for AR & VR.

White has an impressive history of advocating for human rights and technology policy, helping to reform US surveillance law while working with Dennis Kucinich, and he was motivated to bring about change from the inside by working at Facebook, over the objections of some of his friends who thought it would be a morally-compromising position. White calls himself a privacy advocate within Facebook, and his role is to try to synthesize outside perspectives on privacy implications from a wide range of privacy advocates, academics, non-profit organizations, civil society, and generally the types of privacy & ethics discussions that are happening here on the Voices of VR podcast.

Part of White’s role is to collaborate with external organizations and experts on these issues, but most of these opportunities for outside counsel have been happening behind closed doors and under NDA. He hopes to be more engaged in these conversations in a public context, because it’s these organizations that are going to collaboratively help set some of the normative standards for what we do with data from XR, well before the government enshrines some of these boundaries within some type of legal framework. The ethical boundaries and frameworks are more likely to come first from a collaboration within the XR community, from organizations like the XR Safety Initiative, the Electronic Frontier Foundation, and other tech policy non-profit organizations.

Is that enough for me to be assured that Facebook is doing everything they can to be on the right side of XR privacy? No, not yet. We still need more mechanisms for transparency and accountability that go beyond community collaborations and listening to what the culture is saying. Privacy advocate Joe Jerome told me that the trap is that the feedback provided to a company like Facebook can feel like it’s just a “box-checking exercise” for them, so that they can say that they talked to privacy advocates. An example is Facebook Reality Labs VP Andrew Bosworth saying, “Consulting with experts across privacy, safety, and AR/VR from the very start is crucial to our product development process to ensure that we have the right frameworks as the technologies we build continue to evolve.” It’s great that experts were consulted, but there’s no transparency as to what exactly any of these privacy experts told Facebook or the degree to which any of their advice was implemented.

This is part of the reason why Jerome advocates for strong enforcement mechanisms in order to have a satisfactory level of accountability when it comes to privacy issues. In the absence of a strong oversight mandate and the ability to bring consequences, it’s hard for consumers to be assured that companies like Facebook are doing everything they can to take consumer privacy seriously.

I trust that White is going to serve as a strong voice for consumer privacy, but at the same time there’s no way for anyone on the outside to know to what degree those consumer privacy concerns are outweighed by competing business interests, or whether data is used for secondary purposes that fall within a broad range of interpretations of the open-ended and vague language of Facebook’s privacy policy. It’s also an open question what types of things Facebook needs to do in order to build trust that they’re being good stewards of XR data.

But a big takeaway from my conversation with White is that he doesn’t want to be the lone voice and sole advocate for privacy within Facebook, and that he’s interested in building more connections and relationships with other tech policy experts who are ramped up on the implications of the data from virtual and augmented reality. There’s a big role for things like XRSI’s Privacy Framework, my XR Ethics Manifesto, and XR ethics & privacy conferences and discussions. There are a lot more open questions than answers, and it’s reassuring to know that there are people within Facebook who are both listening and participating in these discussions. Given this, it’s now more important than ever to continue to work on a broad range of foundational ethics and privacy issues in the XR space.

My closing thought is that there’s still a lot more that I personally will need to see from Facebook when it comes to having more transparency and accountability that they’re moving beyond these discussions and actually putting this type of advice into action. I also have a lot more open questions about the relationship between the public and companies like Facebook, which are becoming more and more like governments. But the type of government is more like a technological dictatorship than any sort of representative democracy with established protocols for how to interact and respond to the will of the people. At the same time, I’m at least encouraged that these open dialogues are starting to happen, and I hope to continue the conversation with Facebook on many other fronts as well. Overall, it’s a move in the right direction, but I think we all need to see more evidence of how Facebook plans to take action on this front, and how exactly they plan on living into their four new responsible innovation principles.



I talk with FIVARS VR & AR story festival founder Keram Malicki-Sánchez & WebXR developer James Baicoianu about how they repurposed the open source JanusXR code in order to create a platform to deliver 360 video. It’s a pioneering effort to push the technology forward in this way, and they share details about the struggles of their journey and why they see it as important to create a sovereign platform. The festival itself has an independent, avant-garde spirit, and that’s reflected in how they’re also trying to push the technology forward to show folks what’s possible with WebXR.

You can read more thoughts that I have on the FIVARS festival in this Twitter thread.


Mária Rakušanová got her start in the VR industry in 2014 when she was a product marketer for the Gear VR at Samsung. Her job meant that she would watch a lot of immersive content that she would use to market this standalone, mobile VR headset. In 2016, she started curating immersive work for the Raindance Immersive section of the Raindance Film Festival, which has always had a distinctly independent selection. The 2020 edition of Raindance Immersive marks her fifth year of curating the best-of works from the year, and it features over 50 hours of content.

Raindance Immersive gives out 10 different awards, including best narrative experience, documentary experience, animation experience, multi-player experience, immersive game, immersive world, and Discovery World for best debut. There are also outstanding achievement awards in audio, art, and design. The immersive world category is new this year, and it features a number of different VRChat worlds.

I had a chance to catch up with Rakušanová on October 19th to get a sneak peek of the Raindance program, the opening night party on November 2nd in a custom-built VRChat world that will serve as the Raindance Immersive hub, the different master class workshops, as well as a sneak peek of the live performances that start during the second week.

If you want to catch up on a lot of good independent VR games, stories, and experiences from 2020, then this should be a great opportunity to see what you may have missed. In terms of how to access the Raindance Immersive 2020 experiences, there isn’t an accreditation fee, but you may need to either purchase some of the games from Steam/Viveport or use the free two-week Viveport trial in order to get access to the full selection. There are also six live performances during the second week that will require advance booking.


Human Interact’s Alexander Mejia has a hard time describing what exactly Starship Commander: Arcade is. For movie lovers, he says it’s a choose-your-own-adventure narrative where you’re the main character of your own movie. For gamers, he calls it a cinematic narrative with some story choices driven by language and voice. In the end, the vast adventure game was scoped down to a 12 to 15-minute sci-fi arcade experience that uses your voice to interact with a repository of nearly two hours of pre-recorded video clips of Sergeant Sarah Pearson (played by Human Interact VP of Business Development Sophie Wright) answering your questions. It’s a very unique combination that explores how you can do worldbuilding for an experience by just using your voice, and it sets the foundation for what it could be like to become a character within a cinematic, immersive experience.

Starship Commander: Arcade came out on September 10, 2020 after four years of work: from the original idea in 2016, to a prototype demo shown publicly at GDC 2017, to re-scoping it as a location-based entertainment arcade experience, and then to its eventual release on Steam. Included with the purchase of this experience is a very well-produced documentary called Passion at All Costs, in which Mejia documents his whole journey as an indie VR developer. It really captures the early days of VR, and will prove to be a valuable historical document of this time period that tells a larger story through the lens of Human Interact’s struggles and successes over the years.

There’s quite a bit of innovation and experimentation contained in this experience, and I hope that folks will check it out. Despite the fact that AI voice services have been available for quite a while, there haven’t been a lot of commercial VR releases that really explore and utilize the potential of natural language processing technology to deepen your sense of social presence and immersion within a world.

So if you’re interested in the future of interactive narrative, then I’d encourage you to listen to the first half of our podcast if you haven’t already decided to try out Starship Commander: Arcade. Go have the actual experience, check out the Passion at All Costs documentary, and then come back and listen to the last half of this podcast to get the full story. We are moving into a world where we become protagonists in our own interactive & immersive experiences, and there are a lot of key insights that Mejia and Wright have figured out here.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

stanford-study-motion-tracked-data
mark-roman-miller
Stanford University has just published an important research paper that shows how motion-tracked data in VR can be used to identify specific users. The paper, titled Personal identifiability of user tracking data during observation of 360-degree VR video, was published in Scientific Reports on October 15th with authors including Mark Roman Miller, Fernanda Herrera, Hanseul Jun, James A. Landay & Jeremy N. Bailenson.

I had a chance to catch up with Miller on October 12th to summarize their major findings, which included 95% accuracy in identifying one of 511 different participants from a 20-second sample taken from a 10-minute session of watching a 360 video and then rating their emotional reactions using the HTC Vive hand-tracked controllers. Even though participants are only watching a 360 video, the researchers have access to a 90Hz feed of 6DoF information from the head pose in addition to two 6DoF-tracked hands. From this basic motion-tracked data, they’re able to extrapolate a unique signature of someone’s body size, height, and the nuances of how they hold and use the controllers, which ends up being enough information to reliably identify someone given the right machine learning algorithm.

I talk with Miller about the experimental process and analysis, as well as some of the implications of this study. Currently, this type of motion-tracked data is typically considered de-identified, but research like this may start to reclassify motion-tracked data as personally identifiable, and potentially even as biometric data. We also talk about how specific medical information can be inferred from recordings of this motion-tracked data. There are more ways to make this type of research robust across multiple contexts over time, but it’s generally pointing to the possibility that there are some immutable characteristics that can be extrapolated and inferred from this data.
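To make the identification idea concrete, here is a toy sketch of the general approach, not the paper’s actual pipeline: the study trained machine learning classifiers over summary statistics of the head and hand streams, while this simplification uses a single invented feature (mean head height) and nearest-centroid matching on synthetic data.

```python
# Illustrative sketch only: reduce a 6DoF motion trace to summary
# features, then match a new sample against previously enrolled users.
# The single feature, the enrolled values, and the sample are synthetic.

import statistics

def featurize(trace):
    """Summarize a stream of head heights (meters) as its mean."""
    return statistics.mean(trace)

# Enrollment: a per-user feature computed from an earlier session.
enrolled = {"user_a": 1.60, "user_b": 1.75, "user_c": 1.82}

def identify(trace):
    """Match a new short sample to the closest enrolled user."""
    f = featurize(trace)
    return min(enrolled, key=lambda u: abs(enrolled[u] - f))

sample = [1.74, 1.76, 1.75, 1.73]  # synthetic head heights from a 90Hz feed
print(identify(sample))  # user_b
```

The study’s actual classifiers used many more features across head and both hands, which is what pushes accuracy as high as 95% across 511 participants; the point of the sketch is simply that a stable physical trait like height survives in the raw tracking stream.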

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

vrx-brennan-spiegel
On October 6, 2020, Dr. Brennan Spiegel released his book titled “VRx: How Virtual Therapeutics Will Revolutionize Medicine,” which is an amazing survey of different applications of what could be called virtual medicine, experiential medicine, digiceuticals, or cyberdelics. The FDA is calling it Medical XR (MXR), and they’re recognizing it as an entirely new field of medicine that needs special regulatory considerations.

Not only is Spiegel’s book a comprehensive survey of what’s happening with virtual therapeutics, but it also explores a fascinating intersection between neuroscience, psychology, clinical medicine, technology, the mind/body connection, embodied cognition, & different branches of philosophy, specifically the philosophy of mind and the mysterious nature of consciousness itself. Spiegel is able to deftly explore all of these different dimensions through George Engel’s biopsychosocial model of medicine created in 1977, and through how the World Health Organization in 1947 recognized that there are physical, emotional, and social dimensions to health. VR is helping Western medicine transcend the biomedical lens of health and healing, and start to leverage the body’s innate healing capacities that can be unlocked through digitally-mediated experiences.

Spiegel has also been deploying VR to over 3,000 patients at Cedars-Sinai Hospital in Los Angeles, California as a part of patient care and research projects. He was a co-author of a paper titled “Recommendations for Methodology of Virtual Reality Clinical Trials in Health Care by an International Working Group: Iterative Study” that aims to develop a methodological framework to “guide the design, implementation, analysis, interpretation, and communication of trials that develop and test VR treatments.”

I had a chance to talk with Spiegel about this whole new field of medicine, and what it can teach us about the underlying mechanisms of perception, health & healing, and the mysterious nature of consciousness itself.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality