Chomsky vs Chomsky: First Encounter is a virtual reality experience where you get to interact with a virtual representation of Noam Chomsky and ask him essentially any question you want. It's trained on a corpus of 60 years' worth of Chomsky interviews and data covering a wide range of his expertise as a linguist, philosopher, cognitive scientist, social critic, and political activist. As with all natural language processing applications, the detection and comprehension of the input speech can be hit or miss, and then there's the question of whether or not your inquiry will be matched up with a contextually relevant response that's synthesized in real-time.

So it's still early days on the path towards the dream of artificial general intelligence, and constraining the bounds of possibility within an immersive narrative can help showcase what AI can do successfully. When an inconsistent or incomplete answer came back to one of my inquiries, it was a stark reminder that I was interacting with a primitive machine that had a hard time understanding my deeper meaning. But when there was a contextually-relevant, direct response to my question, and sometimes even a joyfully novel or interesting one, it was a magical experience that increased my sense of social presence and gave me some early glimmers of a feeling that I was interacting with an intelligent entity.

However, this form of plausibility illusion is like a house of cards, and it doesn't take much for my suspension of disbelief to break, my expectation detectors to get triggered, and for me to be reminded of the limitations of AI. Perhaps part of the point is to demystify what AI can actually do, but it's still worth iterating on and incrementally increasing the capacity, accuracy, and training of their models. This was one of the unique experiences at Sundance this year where each interaction was helping to train and improve the underlying models.

I had a chance to do an interview at Sundance with lead artist Sandra Rodriguez, interactive developer Cindy Bishop, and visual designer Michael Burk to unpack the evolution of the project and their experiential design process. Combining interactivity and user agency with vignettes of immersive stories can be a challenge when you're working with a machine learning black box that makes it hard to predict how it'll react to a given input at any given time (and how even that will change over time). It's a moving target, and they shared some of the milestones they were able to achieve and what's still to come in order to add memory and context-preserving functionality in the future.

Like I said, it's still early days for these types of AI-driven narratives with virtual beings and conversational interfaces, but they're continuing to learn and get better over time, and so it's important to keep iterating, experimenting, and trying to find the right constraints and narrative contexts in order to hide some of the current limitations in comprehending input and reacting in a contextually-appropriate way. And I'm glad to see groups like the National Film Board of Canada, Schnellebuntebilder, and EyeSteelFilm continue to experiment and push forward what's possible with the technology that exists today.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s the trailer for Chomsky vs Chomsky: First Encounter

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

