aardvark

joe-ludwig
Aardvark is an AR platform within the context of virtual reality applications. Here’s how Aardvark is described on its Steam page:

Aardvark is a new kind of web browser that allows users to bring multiple interactive, 3D “gadgets” into any SteamVR app. It extends the open platform of the web into VR and lets anyone build a gadget and share it with the community. Aardvark gadgets are inherently multi-user so it is easy to collaborate at the Aardvark layer with the people you are in a VR experience with.

Aardvark was originally announced by Joe Ludwig on March 19, 2020 and had its first, early access release on Steam on December 19, 2020. I did a Pluto VR demo in December that integrated their telepresence app with Aardvark AR gadgets and Metachromium WebXR overlays, and I got a taste of how multiple applications will start to interact with each other within a spatial computing environment.

Aardvark and Metachromium are both overlaying objects and layers on top of virtual environments, but they are taking different approaches. Metachromium uses WebXR depth maps to composite pixels on top of the existing virtual environment, while Aardvark tracks your head and hand poses and attaches declarative web app objects to these reference points or to the room center.

Ludwig says that Aardvark is effectively his white paper for why he thinks his approach could be easier to scale in the long run. Metachromium runs WebXR at framerate, which carries a lot more overhead. With Aardvark, only the native Aardvark app runs at framerate, while each gadget is a declarative web application built with the React framework that only runs JavaScript code when the user takes an action. Ludwig is skeptical that JavaScript will be able to run within the context of a 90 to 120 Hz render loop on top of pushing more and more pixels to displays in VR apps that are already pushing GPUs and CPUs to their limits, and Aardvark gadgets reflect this design philosophy.
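The split Ludwig describes can be sketched in a few lines of TypeScript. This is a minimal illustration, not the real Aardvark API; the `SceneNode` shape and the `renderGadget`/`onUserEvent` names are assumptions made for the sake of the example. The point is that the gadget is a pure function from state to a declarative scene description, so JavaScript runs per user event rather than per rendered frame:

```typescript
// Illustrative sketch only — not the real Aardvark API. The native
// compositor redraws every frame from a cached scene description, while
// gadget JavaScript runs only in response to user actions.

type Anchor = "hand/right" | "head" | "stage";

type SceneNode =
  | { kind: "transform"; parent: Anchor; children: SceneNode[] }
  | { kind: "model"; uri: string };

interface GadgetState {
  grabbed: boolean;
}

// A pure function from state to a declarative scene description,
// analogous to a React render function.
function renderGadget(state: GadgetState): SceneNode {
  return {
    kind: "transform",
    parent: state.grabbed ? "hand/right" : "stage",
    children: [{ kind: "model", uri: "models/clock.glb" }],
  };
}

let jsInvocations = 0;
let scene: SceneNode = renderGadget({ grabbed: false });

// JavaScript re-runs only on user events — not once per rendered frame.
function onUserEvent(state: GadgetState): void {
  jsInvocations++;
  scene = renderGadget(state);
}

// Simulate one second at 90 Hz: the compositor draws 90 frames from the
// cached description, but the user grabs the gadget only once, so gadget
// JavaScript executes once instead of 90 times.
for (let frame = 0; frame < 90; frame++) {
  if (frame === 30) {
    onUserEvent({ grabbed: true });
  }
  // (the native compositor would render `scene` here at framerate)
}
```

The design trade-off is that the gadget gives up per-frame control in exchange for keeping script execution out of the render loop, which is exactly why Ludwig expects it to scale better than running full WebXR content at framerate.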

I had a chance to catch up with Aardvark creator Joe Ludwig on January 12, 2021 to get some more context on Aardvark, how it started, where it’s at now, and where it’s going in the future. Ludwig is still in the early phases of getting all of the component parts in place in order to bootstrap this new platform.

communication-medium-as-process

It’s still early days in fleshing out the flywheel of this communications medium feedback loop, but the potential is pretty significant. Ludwig says that Aardvark has the potential to start to prototype the user interface design and functionality of augmented reality applications within the context of a virtual reality app.

There’s still a lot of missing information needed to fully manifest this vision, especially the lack of any equivalent of a virtual positioning system to get the X, Y, & Z coordinates of the virtual world instance and specific map and conditional states. Ludwig expects that this may eventually be provided through an OpenXR extension, but for now these AR gadgets will need to exist relative to the head or hand poses or be localized to the center of your play space.

When Aardvark was first started, Ludwig conceived of it as an overlay layer. And so it’s been surprising to him to discover that there’s been a lot of work in trying to get these spatialized gadgets to communicate with other gadgets, especially within a multiplayer context. The early experiments show the power and potential of a multiple-application AR ecosystem, but there isn’t a single killer app or utility that’s tightly focused on a specific use case or context. This leaves a lot of room for exploration and discovery starting with a backlog of ideas, but without a lot of clear direction as to what will be compelling or build momentum within a specific community.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Mark-Pesce-Augmented-Reality

Mark Pesce’s new book Augmented Reality: Unboxing Tech’s Next Big Thing was released on Friday, January 8th, 2021. It’s a lucidly-written look at the past, present, and future of augmented reality. He contextualizes AR within the history of computing and the evolution of Human-Computer Interaction, while also looking at the underlying principles of adding metadata to space, spatial computing, and what it means for a physical space to go viral.

Pesce also looks at how AR is a technology that has to be watching us, and the open ethical and technology policy questions around privacy. He says that the closer the technology is to our skin, the more it knows about us and the more capacity it has to potentially undermine our agency.

He also points out that as you change the space around people, you’re also changing them. There will be a lot of emphasis on feedback loops: consumers want specific information and context about the world around them, and aspirationally the world provides that information. Pesce describes this interaction as a combination of the space itself and how the space expresses itself by radiating out information; the people present in that space then interpret and make meaning out of that information, and in turn feed more information back into the space, changing the meaning of that space.

He is grateful for Netflix documentaries like The Social Dilemma that start to provide metaphors for some dynamics of technology companies and the role of algorithms in our lives, but this role is only going to become more important as augmented reality technologies are able to overlay context, meaning, stories, and metadata onto physical reality, which could produce a lot more physical collisions between differing perspectives in ways that were not possible in cyberspace.

I had early access to the book, and I was able to conduct an interview with Pesce back on November 19, 2020. Pesce wanted to contextualize AR within the history of computing and human-computer interaction, but also to catalyze some technology policy discussions around the privacy and inherent surveillance aspects of augmented reality. With consumer AR on the horizon in the next couple of years, there are a lot of deeper questions around how to navigate the relationship between humans and machines, and Pesce’s Augmented Reality book provides a lot of historical context that brings up some of the first discussions and writings on this topic, like Norbert Wiener’s The Human Use of Human Beings (1954) and J.C.R. Licklider’s Man-Computer Symbiosis (1960).

Pesce is able to not only contextualize the history of AR, but also give us some pointers of where the technology is heading in the future. If you’re interested in some deeper discussion and analysis of spatial computing, then this is a must-read account that’s grounded in the history and evolution of consumer augmented reality.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

metl-residents-2020

Note: This is a sponsored content post from the University of North Carolina School of the Arts

I was a mentor for the UNCSA’s Media + Emerging Technology Lab’s Immersive Storytelling Residency program in 2020, which was a 6-month program featuring a technical artist, software engineer, and writer collaborating on an immersive story called BonsaiAI. I wanted to have the first cohort come on the podcast for a retrospective, and so I invited Trent Spivey, Fernando Goyret, and Alex Moro along with METL director Ryan Schmaltz to reflect on their journey, challenges, and lessons learned from their immersive storytelling artist residency.

This was a really interesting experience for me as well in 2020 as I got more involved in providing my intuitive reactions and feedback along the way on a project that was still in development. One of the hardest problems in experiential design is matching the top-down design with the bottom-up actual experience. Films can keep a small gap between the representations of a pitch, idea, and overall story and how it ends up on the screen. But games and immersive experiences that involve user agency require more of a bottom-up approach with a lot of user testing, while still finding a way for the overall experience to have a narrative throughline from beginning to middle to end. Being able to predict how changing certain conditions within an experience affects the overall feeling of that experience is the essential and existential challenge of all experiential design. So there are a lot of reflections on that process within the context of bringing three strangers together for six months in the middle of a pandemic in order to figure some of that out.

If you’re interested in the 2021 residency, then January 31st, 2021 is the deadline for a six-month Immersive Storytelling Residency Program that runs from May 1st to November 1st, 2021. There are three different residency roles that will be collaborating over the six months including a game developer/programmer, 3D modeler/technical artist, as well as screenwriter/producer. There’s a monthly $3500 stipend, and a requirement to relocate to Winston-Salem, North Carolina for the duration of the 6-month residency. If you’re interested in applying, then be sure to also check out this conversation with METL director Ryan Schmaltz to get more context and details for the 2021 edition.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The-Devouring
The Devouring is an epic, 5-6 hour horror adventure game within VRChat that launched on August 14, 2020. It quickly went viral in the VRChat community because it was a unique, shared experience that had a vast world to explore, an evocative soundtrack, enemies that catalyzed unique reactions, a recursive map that allowed groups to split up and reconvene, and other technological innovations like late joiner capability that facilitated new social dynamics and group gameplay. Overall, it was a unique bonding experience as users had to commit to the full 5-6 hours of the experience as there was no way to save progress, and it ended up being a very satisfying and memorable journey.

The Devouring caught the attention of Raindance curator Mária Rakušanová, who created a new category of Best Immersive World so that VRChat worlds like this could compete in the Raindance Immersive Festival. I was invited to be a juror, and after going through the 5-hour experience, the jury awarded The Devouring Best Immersive World, Best Immersive Game, and runner-up for Best Multiplayer Game. We were all impressed with the novel social dynamics, gameplay innovations, and level of polish and interaction that went beyond anything else we had seen from a VRChat world.

The Devouring started off as a much smaller project for the Spookality Halloween worldbuilding contest in 2019, but the 4 Poneys Team blew through the initial deadline, and the project soon expanded in size and scope over the course of an epic, 10-month production timeline. This volunteer team includes CyanLaser, Legends, Lakuza, and Jen Davis-Wilson (aka Fionna), who all met each other through the VRChat Prefabs community of worldbuilders. The Prefabs community started on the VRChat Discord, but it was spun off into its own Discord server in order to share knowledge, keep track of free worldbuilding assets, and explore the newest worlds through community meetups.

After watching CyanLaser’s Devouring Tech Overview at the Prefabs TLX conference on December 5th, I reached out to The Devouring team to do a full retrospective of their creative journey and the launch of one of the most impressive and successful VRChat immersive experiences yet. It’s not only an epic recap of this project, but also a window into the mission-driven Prefabs worldbuilding community that’s pushing the edges of innovation and knowledge sharing within VRChat.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s the trailer for The Devouring:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

virtual-market-5
Virtual Market 5 opens today, December 18th, 2020 at 6pm PST (Dec 19th at 11am JST), and will run until January 10th at 6am PST (11pm JST). It will feature over 1,500 VR artists and creators in 30+ worlds in VRChat, featuring avatars, meshes, clothes, music, prefabs, shaders, tutorials, and all sorts of other virtual goods for VR enthusiasts and immersive creators. It’s the most impressive virtual expo of the year, with some of the most awe-inspiring worlds and virtual experiences.

The first Virtual Market (aka Vket) started in the Japanese VRChat community on August 26, 2018, and featured around 80 artists and creators. HIKKY is a Japanese company that’s been running the virtual markets, and so there’s been a distinct imprint of Japanese culture and language in Vket 1-4. But the Vket Global Team has been bringing in more international creators, and creating English language options to make it more accessible to non-native Japanese speakers.

There’s an impressive amount of innovation and creativity when it comes to worldbuilding and experiential design for the consumer experience, as the Virtual Market is run and managed with 80% volunteer work. There are a number of corporate sponsor worlds from companies around the world who are interested in experimenting with experiential marketing, but also in connecting to the bleeding edge of virtual culture within VRChat.

I talk with the Director of the Vket Global Team, LilBagel, as well as the CTO and Process Manager, Lhun, on Wednesday, December 16th after getting a 2.5-hour tour and sneak peek of some of the new worlds for Vket 5. I’m really blown away by the increased level of worldbuilding and experiential design, and the size, scope, and professional polish of the Virtual Market, which won a well-deserved AIXR award for best marketing experience in 2020 with Virtual Market 4. We explore some of the deeper context for why Vket originally came about and how it’s evolved over the years. I also get some of the history of how Japanese anime culture has become so ubiquitous in apps like VRChat.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Check out Shanie MyrsTear’s preview of Virtual Market 5

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

uncsa-immersive-storytelling

Note: This is a sponsored content post from the University of North Carolina School of the Arts

January 31st, 2021 is the deadline for a six-month Immersive Storytelling Residency Program that runs from May 1st to November 1st, 2021 as a part of the University of North Carolina School of the Arts’ Media + Emerging Technology Lab (METL). There are three different residency roles that will be collaborating over the six months including a game developer/programmer, 3D modeler/technical artist, as well as screenwriter/producer. There’s a monthly $3500 stipend, and a requirement to relocate to Winston-Salem, North Carolina for the duration of the 6-month residency.

I talk with METL Director Ryan Schmaltz to get more context on the emerging media and XR programs at METL, but also do a bit of a recap of the first artist residency program that ran from March to September of 2020. We talk about some of the lessons learned from the first cohort with the biggest shift of moving the program to span May to November (rather than March to September) in order to make it more accessible to graduating seniors.

We also talk about some of the open problems and challenges from design to implementation of an immersive storytelling project, and the opportunity to get feedback from a number of different mentors. I served as a mentor for the first cohort, and I’m planning on returning again for the next cohort to help provide feedback and guidance along the way. METL’s Immersive Storytelling Residency Program is a unique opportunity to get paid a living wage to work on an immersive project with a team for six months, and you can get more information on their website as well as more context from Schmaltz in this episode.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

pluto-vr
Pluto VR is a general-purpose VR telepresence application that hopes to provide the social presence glue for a standards-driven, multiple-application ecosystem using SteamVR on the PC. I had some new conceptual breakthroughs about the potential future of spatial computing after getting a demo of how Pluto VR works with other applications like Metachromium or Aardvark. Metachromium can run entire WebXR applications as an overlay on SteamVR applications, and Aardvark is a future-forward framework that allows for the development of augmented reality “gadgets” that run on top of virtual reality experiences.

All of these technologies are utilizing the Overlay extension of OpenXR in order to overlay augmented layers of reality on top of VR experiences, and they’re working together in a way that will facilitate emergent behaviors that break out of the normal 2D frames of our existing computing paradigm. When you run an application on mobile or on your computer, there’s usually only one application in focus at any given moment. You can context-switch between apps, or copy and paste, but our computing paradigm has all been happening within the context of these 2D frames and windows.

The promise of spatial computing is that we’ll be able to break out of this 2D frame, and create a spatial context that allows for apps to directly interact with each other. This will initially happen through lighting, opacity shifts, or occlusion between 3D objects, but eventually there will be more complicated interactions, collisions, and emergent behaviors that are discovered between these apps.

Moving from 2D to 3D will allow application developers to break out of this metaphorical frame, but this also means that app developers won’t have as much control over the precise environmental and spatial context that their application will be running in. This is a good example of where I think the more process-relational thinking of Alfred North Whitehead’s Process Philosophy has a lot to teach XR application developers to start thinking in terms of the deeper ecological context in which their spatial computing app is going to exist.

Facebook has not even made it possible to run multiple VR applications at once on either their PC or Quest platforms. It’s Valve’s SteamVR that is providing the platform on PCs for experimentation and innovation here. It’s admittedly a bit cumbersome to launch and connect each of these disparate applications together, but over time I expect the onboarding and overall user experience to improve as value is discovered in what types of augmentations these overlay layers can provide. But Pluto VR has an opportunity to become the Discord of VR by providing a persistent social graph and real-time context for social interactions that transcends any one VR application. It’s an app that you can hop into before diving into a multiplayer experience, but it’s also enabling players to stay connected during loading screens and other liminal and interstitial virtual spaces, like the Matrix home screen of SteamVR.

Pluto VR has been working with a number of open standards that will be driving innovation in XR as an open platform, including WebRTC, glTF, VRM, OpenXR, WebXR, Web Bundles, XRPackage (XRPK), as well as the Immersive Technology Media Format (ITMF) from the Immersive Digital Experience Alliance (IDEA). They also hosted the W3C workshop on XR accessibility to get more insights to help cultivate accessible standards in XR. The Pluto VR team is taking a really future-looking strategy, hoping to kickstart a lot of innovation when it comes to creating AR widgets that could be used in VR environments, and perhaps eventually be ported to proper AR applications.

Covering the emerging technologies of augmented and virtual reality since May 2014 has helped me to isolate some of the key phases in the development and evolution of a new medium.


First there’s a new emerging technology platform that enables new affordances; then artists, creators, makers, & entrepreneurs create apps and experiences that explore those affordances; then there’s a distribution channel to get these experimental pieces of content into the hands of audiences; and then audiences are able to watch the work and provide feedback to both the tech platform providers and the content creators.

The OpenXR and WebXR standards are enabling distribution channels of immersive content through apps like Metachromium and Aardvark, and the OpenXR overlay extension allows this more modular AR gadget content to be used within the context of existing VR applications running on SteamVR. Then Pluto VR connects creators directly with their audience to share their WebXR apps or Aardvark AR gadgets and get that real-time audience feedback loop. This has the potential to complete the cycle and catalyze a lot of experimentation and innovation around what types of AR apps and widgets prove to be useful within the context of these VR experiences.

Here’s a demo video that shows how a variety of WebXR applications can be launched within a shared, Pluto VR social context:

https://www.youtube.com/watch?v=uuN3KnUIJgw

I’ve had a number of interactions with the Pluto VR team over the past couple of years, and I’ve just been super impressed with their vision of where they want to take VR. They also likely have a lot of cash reserves as they’ve kept a small footprint after raising a $13.9 million series funding round announced on April 13th, 2017.

I had a chance to talk with two of Pluto VR’s co-founders, Forest Gibson and Jared Cheshier, on Friday, October 11th after getting a demo that blew my mind about the future concepts of spatial computing. We cover their journey into VR, and how Tim Sweeney’s The Future of VR & Games talk at Steam Dev Days on October 12, 2016 laid out some of his vision of how the metaverse is going to include a lot of different applications within the same game engine-like, spatial context. So much of the VR industry, mobile computing, and PC applications have been stuck inside of a 2D, windowed frame and closed context that it was really refreshing to get a small taste of where all of this is going to go. They share some of their early surprises around spatial computing, and they know that there will be so many other key insights and innovations that will be discovered with the multi-application technology stack that they’ve been able to set up. This is a very community-driven effort, and they’ll be showing off their technology and connecting to the wider VR community during Virtual Market 5 (Vket 5) in VRChat starting on December 19th.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

matt-segall-process

matt-segall
Virtual reality has the potential to catalyze a paradigm shift in our concepts about the nature of reality, and one of the most influential philosophers on my thinking has been Alfred North Whitehead. His Process Philosophy emphasizes unfolding processes and relationships as the core metaphysical grounding rather than static, concrete objects. The Stanford Encyclopedia of Philosophy entry contrasts some of the fundamental differences with the rest of Western philosophy:

Process philosophy is based on the premise that being is dynamic and that the dynamic nature of being should be the primary focus of any comprehensive philosophical account of reality and our place within it. Even though we experience our world and ourselves as continuously changing, Western metaphysics has long been obsessed with describing reality as an assembly of static individuals whose dynamic features are either taken to be mere appearances or ontologically secondary and derivative…

If we admit that the basic entities of our world are processes, we can generate better philosophical descriptions of all the kinds of entities and relationships we are committed to when we reason about our world in common sense and in science: from quantum entanglement to consciousness, from computation to feelings, from things to institutions, from organisms to societies, from traffic jams to climate change, from spacetime to beauty.

Virtual reality is all about experiences that unfold over time, and the medium is asking us to interrogate the differences between how we experience the virtual and the real. A slight shift of our metaphysical assumptions from substance metaphysics to process-relational metaphysics allows us to compare and contrast the physical and experiential dimensions of our experiences. Process philosophy opens up new conceptual frameworks to look at the world through the lens of dynamic flux, becoming, and experience as core fundamental aspects of reality, rather than treating these processes as derivative properties of static objects.

I think process philosophy makes a lot more sense when thinking about the process of experiential design. Human beings are not mathematical formulas, which means you have no idea how other people will experience your immersive piece until you playtest it. There’s an inherently agile and iterative nature to game design, software design, and experiential design, where you have to test it lots of times with lots of people. This is different from the linear, waterfall approaches of constructing physical buildings or producing films, where there are clearly demarcated phases of pre-production, production, and post-production. For Whitehead, these iterative processes aren’t just metaphoric at the human scale; he’s suggesting that these processes reveal deep insights about the fundamental nature of reality itself as having a dynamic and participatory aspect of navigating non-deterministic potentials that’s really “experience all the way down.”

The thing that I love about Whitehead is that he was a brilliant mathematician who turned to philosophy later in his life, and so he has an amazing ability to make generalizations that deconstruct the linear and hierarchical aspects of language and build more sophisticated models of reality. He replaces physics as the fundamental science with biological organisms, or more abstractly with unfolding processes that are in relationship to each other. This creates a scale-free, fractal, geometrical way of understanding reality at the full range of microscopic and macroscopic scales.

Whitehead’s thinking has also impacted a wide range of areas including ecology, theology, education, physics, biology, economics, and psychology. Some specific examples include work in quantum mechanics, new foundations for the philosophy of biology, and the psychedelic musings of Terence McKenna, and his thinking has opened up new pathways for integrating insights from Eastern philosophies, like Chinese Philosophy.

On October 31st, I attended The Cobb Institute’s 2-hour program on Process Thought at a New Threshold, which brought together an interdisciplinary group of scholars, researchers, and practitioners who summarized how Whitehead’s Process Philosophy was transforming their specific domains. One of the presenters was philosopher Matthew D. Segall, one of my favorite Whitehead scholars, who writes and shares videos on his YouTube channel.

I wanted to get a full primer on Whitehead, his journey into philosophy, as well as how his thinking could facilitate a fundamental paradigm shift that the world needs right now. My experience is that VR and AR can provide an experiential shift in how we relate to ourselves and others, but Process Philosophy brings a whole other conceptual level that has the potential to unlock a lot more radical shifts in all sorts of ways. I also think that the spatial nature of VR and AR is particularly suited to producing embodied experiences of process-relational thinking, and it can also help artists and creators have a cosmological grounding that helps them connect more deeply to their own creative process of unlocking flow states and using the medium to communicate about their experiences in new ways.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

YUR-Fit-Cix-Liv

cix-liv
On October 4th, YUR Fit co-founder Cix Liv spoke out on social media about his frustrations in collaborating with Facebook as an independent developer by saying, “they will block you from the store without reason, break your application, try to POACH YOUR CTO, then copy your app and carrot you the entire time pretending they are going to work with you.” This tweet catalyzed me to reach out to Liv to see if he’d be willing to record his testimony on the record, which we did 6 days later on Saturday, October 10, 2020. Liv was also in conversation with a reporter from Bloomberg for an article titled “Facebook Accused of Squeezing Rival Startups in Virtual Reality” that was just released today, December 3, 2020. Liv is no longer with YUR Fit because he chose to speak out against what he perceives to be the anti-competitive behaviors that he experienced with Facebook.

Liv sent out some other tweets as well, which provided the catalyst that led to the end of his time at YUR Fit. He told Bloomberg that, “he was forced out of his company after speaking out against Facebook on Twitter about a month ago. He said the venture capital fund backing his startup, Venture Reality Fund, told him that he would have to leave the company if he continued criticizing Facebook.” Bloomberg also reports that VR Fund partner Tipatat Chennavasin “denied telling Liv that he had to leave the company if he kept criticizing Facebook.” Whatever ended up happening, the recording I did with Cix Liv was when he was presumably still working at the company, but I was withholding publication until the wider story was reported and vetted by other professional news organizations.

https://twitter.com/CixLiv/status/1320475459830697984

Bloomberg also reports that there are some larger pending, anti-trust lawsuits against Facebook, and that Facebook’s practices “are now drawing the attention of the Justice Department’s antitrust division, which is talking to developers about their interactions with the company, according to two people familiar with the matter.”

Whether or not the experiences that Cix Liv describes fit into a deeper pattern among other VR developers is yet to be seen. There may be more folks who either come forward to share their stories, or perhaps they’ll share more details with the US Justice Department as a part of this larger anti-trust lawsuit. There are certainly other VR developers who have come forward, starting with BigScreenVR’s Darshan Shankar and Virtual Desktop’s Guy Godin, who have each experienced their own flavors of alleged anti-competitive behaviors and shared their stories with me here on the Voices of VR podcast over the past three months.

But there may be insights from Cix Liv’s testimony of his experiences with Facebook that could reveal some deeper intentions driving Facebook’s behavior. Matching someone’s desired intent with their observed actions and behaviors over time is never a precise science, but there could be some underlying patterns in the mechanics of how Facebook goes about “acquiring, killing, or cloning” competition as a part of its larger business practices. It may be that Facebook has been living within the letter of the law, and if there are transgressions here, then perhaps it will lead to anti-trust law reform in order to deal with the unique dynamics of how technology companies are creating entire platforms and marketplaces that end up being functional monopolies.

The Sherman Anti-Trust Act of 1890 was written over 130 years ago, and it’s quite possible that existing anti-trust legislation is not equipped to handle the unique challenges and behaviors of the major tech companies. So I think it’s really vital to pay attention to what these virtual reality developers say they’re experiencing, because there could be some bad-faith and fundamentally anti-competitive behavior that’s been functionally normalized as “industry standard” for how to run a 21st century technology firm.

As power and wealth consolidate into the hands of fewer and fewer companies, what types of technology policy and legislation need to be in place in order to foster healthy and vibrant marketplaces and ecosystems? There are more questions than answers at this point for the entire tech industry, and we should be grateful that Cix Liv was willing to sacrifice so much of his own personal technology career in order to share some of his own direct experiences that may or may not end up helping us all figure out the future of anti-trust legislation in the United States.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

inside-covid-19

Inside COVID-19 is a new 360 video series that premiered on Oculus TV on Friday, November 20, 2020, and it features Dr. Josiah Child, who manages emergency departments at five different hospitals. Dr. Child contracted COVID-19, and this film takes us to all of the different places of his quarantine and hospital rooms, and even inside of his body through immersive medical visualizations of CT scans and animations of the microscopic SARS-CoV-2 virus as it goes through its entire life cycle. The series focuses on Dr. Child’s journey of facing death, and all of the lessons and perspectives he’s gained after surviving. I found the series helpful in translating the abstractions of statistics into someone’s direct experiences and the surrounding contexts in which they happened.

Adam-Loften-Gary-Yost
The series was produced by WisdomVR Project’s Gary Yost and Adam Loften after the production on their previous series was halted due to the pandemic. They have been capturing oral history stories from elders in different communities, and they wanted to carry forward the spirit of this form of honoring elders by focusing on the story of a veteran doctor and his insights into this pandemic.

Inside COVID-19 shows the power of 360 video to immerse the viewer in a very personal journey of contracting COVID-19. VR can take us into the story in a completely new way at both microcosmic and macrocosmic scales, from the isolation of hospital rooms to 3D visualizations of MRI scans to animations of SARS-CoV-2 fusing with an endosomal membrane to floating above the earth as we reflect on our relationships to each other and the planet as a whole. In the end, Loften & Yost use the immersive medium in a unique way to tell a very powerful personal story that connects us to the deeper challenges we face as humanity that are revealed through this pandemic crisis.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality