webxr-design-summit

The WebXR Design Summit was a 9-hour series of talks about immersive design and experiential design, organized by Ben Erwin and his WebXR Polys Awards team on October 12th. I was the host and moderator for the day, but I also helped curate a few sessions, including a fireside chat with two immersive design professors: Doug North Cook, Program Director and Assistant Professor of Immersive Media at Chatham University, and Robin Hunicke, game designer, CEO of Funomena, Full Professor at UC Santa Cruz, and founding Director of the Art + Design: Games & Playable Media program.

I wanted to hear about some of the latest trends in how they’re teaching immersive technologies and experiential design, and a common theme was interdisciplinary collaboration. North Cook described a Design Dialogue practice that worked a bit like a game of telephone: an architect would create a blueprint, which was then turned into a 3D model, transformed into an immersive audio file, reinterpreted as a watercolor painting, and finally built into an immersive VR experience. The goal was to see what kinds of design practices translate well into other modalities and affordances, but also to escape the normal production pipelines for each medium.

The theme of interdisciplinary collaboration also showed up for Hunicke, who announced that UC Santa Cruz’s Art + Design: Games & Playable Media department is forming a new shared department with her colleagues in Theater Arts, blending game design, game production, and game art with dance, theater, performing arts, dramaturgy, and critical practice to create a shared hybrid curriculum combining the affordances of play, performance, and design. She mentioned that A.M. Darke is creating an Open Source Afro Hair Library, micha cárdenas is looking at embodiment and performance in terms of environmental challenges, Ted Warburton is looking at the combination of the environment, dance, and teleawareness, and dramaturg Michael Chemers is looking at what insights Japanese puppet theater can provide to the embodied performances of puppeteered avatars and virtual beings.

North Cook and Hunicke also talked about what insights web design and WebXR could provide to the overall design practices for immersive technologies and experiential design, touching on aspects of telepresence and the fusion of many streams of data into experiences, but also the uncertainty around the economic sustainability of the open web and what new models might support a diverse and robust ecosystem of immersive experiences.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

fivars-2021

The Festival of International Virtual and Augmented Reality Stories (FIVARS) is holding a hybrid event where the IRL component in LA starts today, October 15th, and the virtual festival starts on October 22nd. FIVARS has been curating independent and avant-garde immersive stories since 2015, and I had a chance to talk with founder, executive director, and chief curator Keram Malicki-Sánchez, who provides some highlights of the interactive and 360-degree video pieces featured at FIVARS. The interactive pieces will only be available at the West Hollywood location in LA, while all of the 360-degree videos will be available remotely.

Even if you’re not planning on attending FIVARS physically or virtually, Malicki-Sánchez shares a lot of interesting trends that he’s seeing as a curator who sifts through a lot of the early experiments and prototypes of immersive storytelling. He also eloquently describes his curation process and philosophy that is able to tap into the larger collective zeitgeist of stories that need to be shared right now, and how the immersive nature of XR provides a visceral way to deeply connect to other people’s wide range of human experiences.

Malicki-Sánchez is also a visionary when it comes to using interoperable and open WebXR immersive technologies to host both his VRTO conference and the FIVARS festival (see my previous conversation with him and JanusWeb developer James Baicoianu here). So I’m looking forward to checking out the latest updates and the upgraded visual aesthetics that leverage the latest build of Three.js when the virtual portion opens up next Friday.

You can get the latest information on programming at FIVARS.net and get tickets on Eventbrite here: https://www.eventbrite.ca/e/fivars-festival-of-intl-vr-ar-stories-fall-2021-los-angeles-tickets-122313391647

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Vive-Flow
HTC is launching the Vive Flow today, available for pre-order at $499 and shipping in early November. It’s a pair of 6DoF “immersive glasses” weighing 189 grams that are meant to be the tablet of VR. You have to bring your own power supply, so you’ll need either a battery pack, a USB port to plug into, or your phone’s battery. It’s intended to carve out a new market for VR among folks who are focused more on lifestyle applications like watching a 200-inch cinema screen on an airplane, immersive meditation apps, casual gaming, or productivity apps.

The resolution is only 1.6k per eye, and the full megapixel resolution was not initially provided [UPDATE 6:25pm Oct 14, 2021: It’s 1600×1600 per eye as shown here]. In fact, HTC is not sharing a lot of details on the specifications, since if you’re paying close attention to these specs, then you’re probably already a power user of VR wanting full embodiment, and this is not the best option for that, especially as you have to use your phone as a 3DoF input device to select options and scroll. Road to VR is reporting that it runs on a Qualcomm XR1 chip with 64GB of storage and 4GB of RAM. Another use case is streaming your phone via Miracast to the display to watch streaming services or play cloud gaming services, which is currently Android-only and requires support for HDCP 2.2. [UPDATE 6:32pm Oct 14, 2021: Android is also required for use as a 3DoF controller, as iOS is currently not supported.]

So in the absence of having all of the specs, I had a chance to talk with Shen Ye, Senior Director and Global Head of Hardware Products at HTC, about the story of the Vive Flow and where they see it fitting into the overall ecosystem. Ye said that they know where VR is going: lightweight glasses that you wear all day and that eventually replace aspects of your phone. They don’t know what the exact next step is to get to that final goal, but this is the next step they’re taking to expand their VR market. So listen in to get a bit more context on the development of the Vive Flow and how they see it fleshing out the VR ecosystem with the aim of becoming the tablet of VR.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

I did a round-up of additional context and reviews in this Twitter thread (also don’t miss Shen’s thread here, which is like the Twitter-thread version of our conversation).

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

hp-omnicept

HP’s G2 Reverb Omnicept Edition is a special version of their VR headset targeted at enterprises that includes Tobii eye tracking that can track real-time pupil dilation, eye saccades, and visual attention with eye gaze vectors, a camera for facial tracking, as well as a photoplethysmogram (PPG) sensor that can measure heart rate and pulse rate variability. Combining these physiological measurements enables certain psychological inferences, including things like cognitive load, which can be correlated to the situational and contextual dimensions of a training scenario, or used to measure real-time responses to cognitive effort.
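To make that pipeline a bit more concrete, here’s a minimal, hypothetical sketch (this is not HP’s actual SDK, whose APIs aren’t covered here, and the function name is invented for illustration) of how the inter-beat intervals detected in a raw PPG stream could be turned into heart rate and pulse rate variability estimates like the ones described above:

```python
from statistics import mean, stdev

def pulse_metrics(ibi_ms):
    """Estimate heart rate and pulse rate variability from a list of
    inter-beat intervals (in milliseconds) detected in a PPG signal."""
    # Heart rate: 60,000 ms per minute divided by the mean beat interval.
    hr_bpm = 60000.0 / mean(ibi_ms)
    # SDNN: standard deviation of the intervals, a common variability metric.
    sdnn_ms = stdev(ibi_ms)
    # RMSSD: root mean square of successive interval differences,
    # another standard short-term variability metric.
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    rmssd_ms = mean(d * d for d in diffs) ** 0.5
    return {"hr_bpm": hr_bpm, "sdnn_ms": sdnn_ms, "rmssd_ms": rmssd_ms}
```

The interesting inferences, like cognitive load, then come from combining variability metrics like these with the eye tracking and facial signals, which is the part that platforms like Omnicept model on top of the raw sensors.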

HP lent me a G2 Reverb Omnicept Edition to try out, along with access to OvationVR’s public speaking application as well as a demo of Mimbus’ Virtual Indus electrical training application, both of which had integrations of the Omnicept’s cognitive load features. In the absence of any calibration or real-time graphing features, I found it hard to correlate the calculated cognitive load numbers from my VR experiences with my personal experiences. Virtual Indus just gave a minimum and maximum range of cognitive load as well as an average number, and I was able to get my average cognitive load down on the second try of the demo experience. But I wasn’t able to figure out how to get a more granular graph of cognitive load over the course of the exercise within the VR app (although it looked theoretically possible to do within their Vulcan website). I was able to look at a graph of my cognitive load while giving an impromptu speech in OvationVR, but it showed only slight fluctuations over the course of the talk, with a peak value coming at what seemed to be a pretty arbitrary moment.

The challenge with capturing and using this type of physiological data is that it is sometimes really hard for users to see deeper patterns or draw immediate conclusions from these new streams of data, especially in the absence of any real-time biofeedback to help calibrate and orient to changes in physiology that may or may not have corresponding changes in your direct experience. I have found this to be a recurring challenge whenever I test out VR experiences that have biofeedback integrated into them. Verifying that it’s accurately calibrated and can provide data that has utility relative to a specific task is the biggest challenge and open question.

It would be nice if HP developed some apps to help users do their own QA testing on each of the sensors, with some real-time graphs to help with this calibration and orientation. Having some canonical reference implementations could also help more developers adopt these technologies, since the success of enterprise platforms like this has a lot to do with how many Independent Software Vendors (ISVs) implement these sensors in their applications.

I also had a chance to talk with Scott Rawlings, Manager of the HP G2 Reverb Omnicept Platform; Henry Wang, Product Manager for Omnicept; and Erika Siegel, an experimental psychologist, research scientist, and subject matter expert on human inferences. We talk about the current physiological sensors, what types of human inferences they enable, and how these could be used in different industry verticals including training, education, simulation, wellness, and architecture, engineering, and construction.

Overall, I get that the Omnicept is still in an early and nascent phase of development, where ISV developers are still building up the larger XR design and business infrastructure around training use cases within specific industry verticals. In addition to OvationVR and Mimbus’ Virtual Indus, Claria Product Design was mentioned as another company that is shipping support for the Omnicept.

The G2 Reverb is a Windows Mixed Reality headset that still has some quirky workflows. The inside-out tracking means a simpler setup in terms of hardware that needs to be installed, but there’s still some added complexity with its reliance on the Windows Mixed Reality Portal and how that integrates with Steam. I personally found that it was easier to get the Omnicept to work if launched from Steam first rather than from the Mixed Reality Portal, but this is also more a reflection of the state of Windows Mixed Reality devices having technical complexity and quirks that may depend on your computer. There were times when the G2’s room tracking wasn’t as solid as the external lighthouse tracking on my Index, but for the enterprise use cases I was testing, it was definitely sufficient overall.

In the end, the HP G2 Reverb Omnicept provides access to a lot of physiological data that will eventually also be coming to the consumer market. There are still many design challenges in translating the potential of these biometric sensors into pragmatic value within the context of an enterprise VR application, but with these challenges come new market opportunities for developers and companies to tap into the ultimate potential of the medium of VR. The Omnicept Edition starts at $1,249 and has been available since May 2021. You can hear more context about the development and potential applications of the Omnicept in my conversation with Rawlings, Wang, & Siegel.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

london-film-festival-expanded

The London Film Festival Expanded runs from October 6-17 with a total of 18 pieces of immersive stories and art. Nine of those pieces are available remotely via their LFF Expanse art gallery, which is available for free on Viveport (with nine interactive & 360-degree video experiences), as an Oculus Quest App Lab version of LFF Expanse 2021 (featuring five 360-degree videos), and in Mac & PC versions as well.

I had a chance to talk with Ulrich Schrauth, the XR and immersive programming lead at the British Film Institute and lead curator of the London Film Festival Expanded. Schrauth is also the founding curator of the VRHAM! festival in Hamburg, Germany, which was one of the first immersive storytelling festivals to have a virtual exhibition in the Museum of Other Realities in June 2020. We talk about his journey into XR, the process of bringing immersive stories onto virtual platforms, the latest trends in immersive storytelling from theater and fine arts, and the different site-specific performances that’ll be happening in the London Film Festival Expanded 2021 selection. A selection of 9 experiences is available remotely via the Viveport LFF Expanse 2021 app now until October 17th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s the trailer of the 18 experiences that are being featured within the London Film Festival Expanded selection.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

container

Container is a hyperreal 180-degree spatial art installation that explores modern-day slavery. Visual artist and visual anthropologist Meghna Singh and documentary filmmaker Simon Wood use the shipping container as a consistent piece of architecture across space and time, creating a spatial metaphor that viscerally connects the products shipped in these containers with the oppression and exploited human labor that’s invisible to consumers.

Container is a provocative and stylized piece of immersive storytelling that has created some visceral scenes of slavery that are deeply lodged into my memory. It pushes forward the grammar of immersive storytelling by combining art installation, history, theater, and 180-degree video to create a sort of poetic spatial anthropology that makes associative connections in an embodied and dreamlike fashion. The piece is designed to implicate the audience into reflecting on how we may be unwittingly participating in systems of modern-day slavery, and the artists hope to take it to different film festivals around the world and to create shipping container installations and showings at port cities involved in the slave trade.

The piece is situated within the context of the port city of Cape Town, South Africa, but it has no spoken words, and so it’s generalizable to a global context.

I had a chance to talk remotely with co-directors Meghna Singh and Simon Wood about their 4-year journey of producing this piece during its World Premiere at the Venice Film Festival.

Container is one of the more evocative pieces of 180 or 360-degree video I’ve seen this year, and it is currently available at the Venice VR Expanded until September 19th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

anandala

Kevin Mack’s Anandala is an awe-inspiring piece of VR-native, generative, abstract art with some really compelling experiments in interactive, embodied NPC entities he calls “blorts.” There are 130 of these blorts spread out throughout a massive series of never-ending tunnels split into different zones. I’ve had some of the most compelling and playful interactions with these blorts that go beyond anything else I’ve experienced in VR. These blorts feel alive as they exhibit a broad range of responsive behaviors that I interpret as curious, playful, contextually-aware, interactive, and non-aggressively embodied.

While not driven by any machine learning or neural architecture, the heuristic-based AI behind the blorts takes in enough external inputs to make it very difficult to predict what they’re going to do. This includes reacting to your position, gestures, movements, and musical sonifications, as well as what is happening in the surrounding context over time, as each blort maintains a short-term memory of your actions and its own changing state. Mack said that there are a number of ways that he can tune each blort so that they each have their own unique set of behaviors, personality, and character.
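Mack’s actual code isn’t public, so purely as an illustration of this kind of heuristic, memory-driven agent design, here is a hypothetical Python sketch (every name, input, and behavior below is invented for the example): an agent that reacts to the user’s distance and gesture while keeping a short rolling memory of recent inputs, so the same stimulus can produce different responses over time.

```python
from collections import deque

class HeuristicAgent:
    """Hypothetical sketch of a heuristic NPC: it reacts to user
    proximity and gesture, with a short-term memory of recent inputs
    so identical stimuli can yield different behaviors over time."""

    def __init__(self, memory_len=5):
        # Rolling short-term memory of the most recent gestures seen.
        self.memory = deque(maxlen=memory_len)

    def react(self, distance, gesture):
        self.memory.append(gesture)
        # Habituation: a gesture repeated too often becomes uninteresting.
        if gesture == "wave" and self.memory.count("wave") >= 3:
            return "ignore"
        # Close-range behavior depends on whether the gesture is threatening.
        if distance < 1.0:
            return "retreat" if gesture == "lunge" else "nuzzle"
        if gesture == "wave":
            return "approach"
        return "drift"
```

Even a toy rule set like this becomes hard to predict once the inputs are continuous and the memory keeps shifting, which hints at how the blorts can feel alive without any machine learning underneath.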

Each blort also has a unique underlying topology, but it is constantly shapeshifting via a vertex shader that responds to a number of environmental inputs. Each blort also has an interactive, dynamic shader texture that’s like a psychedelic abstraction of fluid dynamics. It’s a variation of the shader that is on the walls of the series of never-ending tunnels, and the combination of these generative inputs produces an infinite source of perpetual novelty that’s both viscerally stimulating and feels like a boundless source of awe, as it consistently subverts my ability to predict what’s going to happen next.

Interestingly enough, even though Anandala is viscerally stimulating and cognitively engaging, it still manages to have an overall calming and hypnotic quality, with a sort of visual entrainment, a transcendental Buddhist soundtrack, and the induction of a flow state driven by curiosity and the poetic interpretation of the blorts’ embodied behaviors, like a divinatory reading of tea leaves to try to discern what these alien-intelligence metaphors are doing and why. It’s hard not to anthropomorphize these blorts, talk to them, and use them as a blank slate to project ourselves onto. Mack shares a series of his own interactions and conversations from developing the piece over the last couple of years.

Anandala is the spiritual successor to Blortasia, featuring a similar underlying architecture and environmental shaders with a slightly refined flying mechanic that enables intuitive locomotion to explore the space. Again, the biggest changes in Anandala are how sophisticated and complex the blorts have become, as well as some Easter egg behaviors and locations to discover.

I had a chance to interview Mack after Anandala’s World Premiere at the Venice VR Expanded festival, which runs until September 19th. I highly recommend checking it out if you have Venice accreditation, or you can also get temporary access to it via Viveport Infinity until the end of the festival.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

goliath_-_playing_with_reality

GOLIATH: PLAYING WITH REALITY is an interactive VR story that explores the experience of schizophrenia and psychosis through the story of a Twitch streamer named GoliathGames. It’s a really strong piece of immersive storytelling, balancing the interactivity of gaming metaphors in service of the story, great pacing, and brilliant onboarding and offboarding voiced by Tilda Swinton.

I had a chance to talk remotely with Barry Murphy, Director at Anagram, and May Abdalla, Co-founder at Anagram, while they were at Venice VR Expanded. We unpack the art and experiential design direction, their background research on how to best represent psychosis, and the evolution of the piece since its Tribeca World Premiere.

GOLIATH launches on Oculus for free on Thursday, September 9th, and is currently in competition at the Venice Film Festival.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

kusunda

Kusunda is a spatial documentary about an indigenous language in Nepal that is on the brink of going dormant and falling asleep. It’s an incredibly powerful story that uses photogrammetry, Tilt Brush interpretations of oral history, and interactive natural language processing to help teach the interactor a number of Kusunda vocabulary terms. It’s a production that had to adapt to COVID-19 by innovating other metaphoric and spatial ways of telling this story.

I had a chance to talk with NowHere Media co-founders Gayatri Parameswaran and Felix Gaedtke after their Tribeca World Premiere in June 2021. We explore their journey producing this piece, the special considerations of telling this story spatially, and the deep listening involved in producing a piece like this.

Kusunda won the Tribeca Storyscapes Grand Jury prize, and is currently featured out of competition at the Venice Film Festival VR Expanded selection until September 19th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The Venice VR Expanded portion of the Venice Film Festival opens today and runs from September 1-19, 2021. I had a chance to talk to co-curators Liz Rosenthal and Michel Reilhac to get a sneak peek of the 37 immersive storytelling projects: 24 in Competition, 11 in the Best Of section, 1 in the Biennale College Cinema VR section, and 1 Special Event Out of Competition.

There are also 34 VRChat worlds being featured in a VRChat world gallery, accessible via portal doors within a public instance of the Venice VR Expanded 2021 hub world in VRChat. There will be events in a private instance of this world, and you can get access to those events, as well as all of the experiences, with a €100 accreditation fee that will get you download codes for the projects, which are hosted on a combination of Viveport and Oculus.

We talk about the evolution of the Venice VR Expanded selection into its fifth edition in 2021, as well as some of the other financing opportunities that are made available through their production bridge and their special Biennale College Cinema VR program.

I’m looking forward to digging into this year’s selection, as there’s always a lot of amazing innovations in immersive storytelling each and every year.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality