#620: AR & AI Storytelling Innovations from Tender Claws’ “TendAR”

Tender Claws, the creators of the award-winning interactive VR narrative Virtual Virtual Reality, premiered a new, site-specific, interactive AR narrative experience called TendAR at Sundance’s New Frontier. It was a social augmented reality experience that paired two people holding a single cell phone and sharing the two channels of an audio stream, in which an AI fish guided the participants through a number of interactions with each other and with the surrounding environment. The participants were instructed to express different emotions in order to “feed” and “train” the AI fish. The experience used Google’s ARCore technology for the augmented reality overlays, Google’s Cloud Vision API for object detection, as well as early access to some of Google’s cutting-edge Human Sensing technology that could detect the participants’ emotional expressions.
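
To make the object-detection piece of that stack a bit more concrete, here's a minimal sketch, assuming Python and the public google-cloud-vision client, of sending a single camera frame to the Cloud Vision API for label (object) detection and for the coarse per-face emotion likelihoods the public API exposes. This is only an illustration: the analyze_frame helper and frame.jpg input are hypothetical, and TendAR's finer-grained emotion sensing used Google's unreleased Human Sensing technology inside the app rather than these public face-detection likelihoods.

```python
# Minimal sketch (not Tender Claws' actual pipeline): send one camera frame
# to the Google Cloud Vision API for object labels and coarse face-emotion
# likelihoods. Requires `pip install google-cloud-vision` and application
# default credentials. The function name and input file are hypothetical.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def analyze_frame(jpeg_bytes: bytes) -> None:
    image = vision.Image(content=jpeg_bytes)

    # Object/label detection: roughly "what is the player pointing the camera at?"
    labels = client.label_detection(image=image).label_annotations
    for label in labels[:5]:
        print(f"object: {label.description} ({label.score:.2f})")

    # Face detection returns coarse emotion likelihoods per detected face.
    # (TendAR's more nuanced emotion detection came from Google's unreleased
    # Human Sensing technology, not this public API.)
    for face in client.face_detection(image=image).face_annotations:
        print("joy:", vision.Likelihood(face.joy_likelihood).name)
        print("sorrow:", vision.Likelihood(face.sorrow_likelihood).name)
        print("anger:", vision.Likelihood(face.anger_likelihood).name)
        print("surprise:", vision.Likelihood(face.surprise_likelihood).name)

if __name__ == "__main__":
    with open("frame.jpg", "rb") as f:
        analyze_frame(f.read())
```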

Overall, TendAR was a really fun and dynamic experience that showed how the power of AR storytelling lies not only in doing interesting collaborative exercises with another person, but also in becoming aware of your immediate surroundings, context, and environment, where objects can be discovered, detected, and integrated into an interaction with a virtual character.

I had a chance to talk with Tender Claws co-founder Samantha Gorman about their approach to experiential design for an open-ended interactive AR experience, the unique affordances and challenges of augmented reality storytelling, their collaboration with the experimental theater group Piehole, the challenges of using bleeding-edge AI technologies from Google, and some of their future plans to expand this prototype into a full-fledged, three-hour, solo AR experience with a number of excursions and social performative components.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a teaser trailer for TendAR
https://www.youtube.com/watch?v=dUIZV23PQlI

Gorman said that they’re not planning on storing or saving any of the emotional recognition data on their side, and this is the first time that I’ve ever heard anything about Google’s Human Sensing group. I trust Tender Claws to be good stewards of my emotional data, and their TendAR experience shows what types of immersive narrative experiences become possible when integrating emotional detection as an interactive biofeedback mechanic. Mimicking a wide range of emotional expressions can evoke a similarly wide range of actual emotional states, and so I found that TendAR provided a really robust emotional journey that was a satisfying phenomenological experience. TendAR was also an emotionally intimate experience to share with a stranger at a conference like Sundance, but it demonstrates where AR storytelling starts to shine: creating contexts for connection and opportunities to create new patterns of meaning in your immediate surroundings.

However, the fact that Google is working on technology that can capture and potentially store emotional data of users introduces some more complicated privacy implications that are worth expanding upon. Google and Facebook are performance-based marketing companies who are driven to capture as much data about everyone in the world as possible, and VR & AR technologies introduce the opportunity to capture much more intimate data about ourselves. Biometric data and profiles of our emotional reactions could reveal unconscious patterns of behavior that could be ripe for abuse, or be used to train AI algorithms that reinforce the worst aspects of our unconscious behaviors.

I’ve had previous conversations about privacy in VR with behavioral neuroscientist John Burkhardt, who talked about the unknown ethical threshold of capturing biometric data, and how the line between advertising and thought-control starts to get blurred when you’re able to have access to biometric data that can unlock unconscious triggers that drive behavior. VC investor and privacy advocate Sarah Downey talked about how VR could become the most powerful surveillance technology ever invented, or it could become one of our last bastions of privacy if we architect systems with privacy in mind (SPOILER ALERT: Most of our current systems are not architected with privacy in mind since they’re capturing and storing as much data about us as possible). And I also talked with VR privacy philosopher Jim Preston, who told me about the problems with the surveillance-based capitalism business models of performance-based marketing companies like Google and Facebook, and how privacy in VR is complicated and that it’s going to take the entire VR community having honest conversations about it in order to figure it out.

Most people get a lot of benefit from these services, and they’re happy to trade their private data for free access to products and services. But VR & AR represent a whole new level of intimacy and detail of information that is more similar to medical data that’s protected by HIPAA regulations than it is to data that is consciously provided by the user through a keyboard. It’s been difficult for me to have an in-depth and honest conversation about privacy with Google or with Facebook/Oculus because the technological roadmap for integrating biometric data streams into VR products or advertising business models has still been in the theoretical future.

But news of Google’s Human Sensing department building products for detecting human emotions shows that these types of products are on the technological roadmap for the near future, and that it’s worth having a more in-depth and honest conversation about what types of data will be captured, what won’t be captured, what will be connected to our personal identity, and whether or not we’ll have options to opt out of data collection.

Here’s a list of open questions about privacy for virtual reality hardware and software developers that I first laid out in episode #520:

  • What information is being tracked, recorded, and permanently stored from VR technologies?
  • How will Privacy Policies be updated to account for Biometric Data?
  • Do we need to evolve the business models in order to sustain VR content creation in the long-term?
  • If not, then what are the privacy tradeoffs of using the existing ad-based revenue streams that are based upon a system of privatized surveillance that we’ve consented to over time?
  • Should biometric data be classified as medical information and protected under HIPAA?
  • What is a conceptual framework for what data should be private and what should be public?
  • What type of transparency and controls should users expect from companies?
  • Should companies be getting explicit consent for the types of biometric data that they capture, store, and tie back to our personal identities?
  • If companies are able to diagnose medical conditions from these new biometric indicators, then what is their ethical responsibility for reporting this to users?
  • What is the potential for some of this anonymized physical data to end up being personally identifiable using machine learning?
  • What controls will be made available for users to opt-out of being tracked?
  • What will be the safeguards in place to prevent the use of eye tracking cameras to personally identify people with biometric retina or iris scans?
  • Are any of our voice conversations being recorded in social VR interactions?
  • Can VR companies ensure that there are any private contexts in virtual reality where we are not being tracked and recorded? Or is recording everything the default?
  • What kinds of safeguards can be imposed to limit the tying of our virtual actions to our actual identities in order to preserve our Fourth Amendment rights?
  • How are VR application developers going to be educated and held accountable for their responsibilities around the types of sensitive, personally identifiable information that could be recorded and stored within their experiences?

The business models of virtual reality and augmented reality have yet to be fully fleshed out, and the new and powerful immersive affordances of these media suggest that new business models may be required that both work well and respect user privacy. Are we willing to continue to mortgage our privacy in exchange for access to free services? Or will new subscription models emerge within the immersive media space where we pay upfront to have access to experiences, similar to Netflix, Amazon Prime, or Spotify? There are a lot more questions than answers right now, but I hope to continue to engage VR companies in a dialogue about these privacy issues throughout 2018 and beyond.

This is a listener-supported podcast, so please consider making a donation to the Voices of VR Podcast Patreon.


Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So one of my very favorite production teams that's out there is Tender Claws. That's Samantha Gorman and Danny Cannizzaro. They originally did the Virtual Virtual Reality experience for the Daydream and I had a chance to go through it and it's absolutely my most favorite interactive story that's out there. And I have an interview that I did with them back at VRLA 2017 that'll be airing in the next couple of episodes. At Sundance this year, Tender Claws had a new interactive experience called TendAR. And so what they were doing is they were using a phone and they were using Google ARCore and some facial recognition, emotional recognition software that's prototype software from Google. And they're able to create this interactive story where you're with another person, you're holding a phone with another person. And as you go through this experience, you're holding the phone together and you each are getting like one half of the audio stream. You kind of go on this adventure of training this AI by doing different emotional expressions on your face. So it was a really interesting experience and I think really starts to show what the potential for AR storytelling is going to be and how it's going to be different than virtual reality storytelling. I had a chance to sit down with Samantha Gorman to unpack their experience, which is just kind of like a mini excerpt of a much larger three-hour narrative that they're planning. And so, TendAR should be released sometime later this year, but this was kind of like a sneak peek of one of their more social performative components of their experience. So that's what we'll be talking about on today's episode of the Voices of VR podcast. So this interview with Samantha happened on Sunday, January 21st, 2018 at the Sundance Film Festival in Park City, Utah. So with that, let's go ahead and dive right in.

[00:02:00.778] Samantha Gorman: Hi, I'm Samantha Gorman. I'm one of the co-founders of Tender Claws. And at Sundance, I'm showing TendAR, which is one of our upcoming projects. Right now, we have a site-specific version for Sundance, and it will be a full experience for ARCore and Android at the end of April. The app itself will be about a virtual pet fish AI that evolves as you feed it the emotions of you and your friends. Our version at Sundance is a sort of storytelling walkthrough that's framed in a fictional company that has an emotion recognition platform for creatives and brands to be able to decide through the emotions of its users how to make decisions about their product. And the thing that sets our technology apart is that instead of rows of servers, we have AI-trained goldfish, based on goldfish brains, that are trained to swim in different directions as they analyze the emotion data coming in from the users. And our project has users go through and train the fish AI to understand human emotion as it's giving the fish data. Yeah.

[00:03:08.197] Kent Bye: I have to say, it's a pretty great AI that's running it. I mean, it was a very fun experience, I have to say. And I think the thing that, for me, that was so interesting about this experience was that the focus on emotion. And you're kind of almost being coached to artificially generate different emotions in your face, but yet you have this kind of feedback loop such that you kind of start to feel those emotions. So as you start to mimic the facial expressions of those emotions, you start to feel them. And so you're kind of guided through an experience of doing that. So I guess one first question is, are you using facial recognition to actually detect the Paul Ekman style of how people are actually moving their face in order to detect those emotions?

[00:03:48.001] Samantha Gorman: Yeah, this project in particular is actually an interesting process because we're partnering with Google for ARCore, but also some of their advances in human sensing technology. And the actual technology that they've developed is really nuanced, but we're developing it as the technology is being coded itself. So there's a lot of back and forth and it's actually been a really cool process. In the project itself, one of the things we're trying to think about here at Sundance is what is the nuance of the human experience. How can you bin emotions? A lot of other platforms bin into five specific emotions. And yeah, so it's a lot about like the human experience and the kind of dynamics between the two people.

[00:04:36.397] Kent Bye: Yeah, I guess the other thing that's really striking is that you are doing this experience with another person. So you have like one pair of headphones and you kind of split and you're each kind of listening into one ear of this experience that's going on. And so maybe you could talk a bit about your process as you're trying to create an experience here. How do you architect an experience that is trying to have these different beats of different connection points with another person?

[00:05:01.327] Samantha Gorman: Yeah, I should also mention that we are, both this project and a project coming up, we're collaborating with the longtime collaborators, there's this experimental theater group in New York called Piehole. So the actual process of creating the script was an iterative process with a lot of writers and a theater director. where we're thinking about almost the blocking to make it all casual, but also think about how to play off the users. There's various points where the audio splits into two different tracks, things like that.

[00:05:32.229] Kent Bye: Oh, wow. So the other person was listening to something completely different than I was hearing?

[00:05:36.431] Samantha Gorman: Yes, at certain points, yeah. Oh.

[00:05:39.412] Kent Bye: I just assumed that we were all listening to the same thing. In terms of experiential design, what happens from an experiential design perspective when you start to do things like that?

[00:05:48.257] Samantha Gorman: Yeah, so one of the sections we're experimenting with that might be in the full app later, there'll be a sort of excursions you can do with your family and friends at like grocery stores and just, you know, like general areas. And one of the things we're experimenting with and towards the end of the app, there's a section where after it sort of calibrates your emotions and puts you through these sort of emotional paces, you get to a point where it tells both users to close your eyes and it gives one a different story and then the other like a slightly different emotional context and then it asks the other player to open their eyes and look at the person who's closed their eyes and imitate their emotions and try to express it to the fish.

[00:06:28.188] Kent Bye: Oh wow, I missed that part. I think I got the part where I had my eyes closed. I didn't get the part. Okay. That's really fascinating. And so I guess the other thing that was really striking about this experience was the blocking of like directly opposite somebody looking into their eye, you know, sort of looking at people or side by side. Maybe you could talk a bit about that in terms of the spatial relationships as you're going through an experience like this. You're both holding a phone, but you're actually kind of like in different configurations with each other.

[00:06:54.910] Samantha Gorman: Yeah, this project began from a different project that was a HoloLens project about intimacy and different wavelengths of what the layer of technology on the screen can provide and interface between two people as they're trying to engage with each other. So we wanted to have a mix of both kind of directional output from the app but also this organic experience of being next to a body in space and like what is the presence of another person and what is the you know the kind of feedback loop itself of like looking into someone's eyes and you know how does that like affect the feedback loop on screen as you're getting your like data.

[00:07:34.063] Kent Bye: Yeah, and maybe you could talk a bit about some of the core technologies that are underneath here, because there's a certain point where you're walking around and you're tapping on things to do some identification of objects, but also some of this other emotional recognition. Maybe you could talk a bit about some of the core technologies that are driving this experience.

[00:07:49.892] Samantha Gorman: Yeah, actually this is one of the first, the full experience will be almost like a three hour narrative AR piece and this is one of the first experiences I know about that combines a bunch of different core technologies, both ARCore and two different branches of Google's human sensing unit. One of them is the emotion recognition capabilities and the other is object recognition capabilities. And for the emotion recognition capabilities, what I was saying earlier is actually they're getting pretty advanced and it's fully nuanced. In the full API you can tell the difference between disgust and anger and indifference. Yeah, but other platforms, it's, you know, a lot more like joy, sadness, surprise, you know, the very, like, exaggerated expression. So these things are getting pretty subtle and nuanced, which is interesting.

[00:08:45.297] Kent Bye: Yeah, it's also interesting that just in Google's sort of technological roadmap, they had gone down the road of having Tango phones with depth sensor cameras facing outwards, and then the iPhone X comes out, and they have the depth sensor facing forward towards your face, and I imagine that, moving forward, we're likely going to see a lot more depth sensors focused on the face because of an experience like this that you have, both in the technology like, you know, Snapchat or in Animoji for the iPhone where you're able to start to really embody these virtual avatars and start to, you know, kind of really take on those that you really need that front-facing camera. So, I guess for this, does the depth sensor camera facing at your face give you a higher fidelity of being able to detect the emotion?

[00:09:27.580] Samantha Gorman: Yeah, it's definitely possible to have higher fidelity and that's something we're working on as the technology evolves. It's actually funny because one of the things we're doing is getting the camera texture, bringing it to Unity, and there's a lot of processes going on, and it's actually really light sensitive. So in a place like this where there's gel and yellow over the lights and it's dim, it still works well, but it's definitely something that we had to program around to up the exposure.

[00:09:54.615] Kent Bye: Now, when you have a three-hour experience, is this meant to happen continuously, like they do it all at the same time? Or can people kind of pick it up if they decide they want to sort of pause?

[00:10:06.930] Samantha Gorman: Yeah, so the app is actually a single player app, but has social components to it. And the version at Sundance we have is sort of like a mini version of, there'll also be options for like eight different excursions you can do with family and friends that are more like social performative experiences. And in the full app, it's actually, there's like a sort of narrative arc that emerges, but we're also doing a lot of generative natural language processing, so it becomes almost like the fish starts to evolve as a virtual pet, that you have to sustain it by feeding it objects in your world and emotions, and that changes its personality over time.

[00:10:44.922] Kent Bye: Wow, yeah, and going through, playing through Virtual Virtual Reality, I saw that there was a pretty overall linear narrative structure, but a lot of open world exploration. It was a really great combination of maybe providing the illusion of complete agency, but yet sort of there's different ways that you're able to drive the narrative forward. So you're able to have that sense of exploration, but still build that narrative tension by being able to control that. Maybe you could talk a bit about the architecture of the narrative, how you design, and if this is similar with TendAR, if you're going to start to do something where you blend the open world exploration with that narrative. Because I think that's a huge combination of how do you allow people agency but also tell a story.

[00:11:26.533] Samantha Gorman: Yeah, so that's something that is one of the main things we explore in all our projects, even in Pry, which was a drama project, and that was like actually live-action videos and direct live-action video. It's like you can see different story beats, but you also make sure that the interaction the user is doing, they feel like they can do at any time. So in some ways, they're in control of the edits, but you're in control of sort of the, in Pry especially, like the story or the narrative. And so I guess it's basically like figuring out what the story beats are, what the structure is, and then allowing a lot of like permutations and opportunity around that. And doing subtle things that people might not notice. For instance in Pry, like you can essentially you're in the mind of this character and you can pinch to open his eyes and you see his external world and you let go and you go back into his internal thoughts. And you can do that anytime, but we wanted certain things seen on the external world, like story beats, like a car coming down the road. So we make sure that if the person doesn't have the character's eyes open, you still hear the diegetic audio of the outside world on loop as if it's continuing and we wait until that moment when they open the eyes to hit that story beat.

[00:12:36.323] Kent Bye: Oh, wow. OK, so you're actually having that interaction to the level of kind of paying attention to the face in a way that in virtual reality, you can't really see the face or the eyes. And so I guess that's one of the unique affordances of some of the stuff that you're doing with AR is now you're able to actually take some feedback of what's happening with people's faces and then sort of detect the emotions. And maybe are you changing the narrative based upon the emotions they're showing?

[00:13:00.023] Samantha Gorman: Yeah, so Pry was actually more orchestrated and linear and more of us as directors seeding things than TendAR is going to be, our more even like open generative project where the narrative structure is like more like binnable and more like nebulous. We're like, OK, well, between this point and this point, these number of things can happen. You know, these number of things can happen. But yeah, there's definitely certain beats we want to hit because the character of the fish essentially starts out as sort of a mock AI, but it is also partially AI in that some of it is natural language processing. So it starts out as less sentient, less verbalized, and it goes more coherent over time. So what is that journey of that character is sort of something we're currently working into formatting.

[00:13:47.670] Kent Bye: Wow. And I guess, are you planning on using natural language input so you can start to talk to this character and maybe interact with it in different ways?

[00:13:55.292] Samantha Gorman: To keep things simple, especially for localization, we're not using microphone input. A lot of the way you'd speak to the character is by showing it objects in the world.

[00:14:06.133] Kent Bye: So I'm curious, now that you've done both some virtual reality projects with like Virtual Virtual Reality and now you're doing this TendAR and starting to explore storytelling in AR. Maybe you could tell me your take of like both the similarities and differences when it comes to these unique affordances of immersive storytelling between virtual reality and augmented reality.

[00:14:26.387] Samantha Gorman: Yeah, so one of the main things we try to do is think about what the medium can really offer and then like tailor the narrative to that. And it's sort of more of a like a symbiotic process, whereas we're changing the narrative, like what we want with the narrative will have to influence like how we're coding it. So in terms of the affordances of AR and VR, it's more like the affordance of working with other media, like Pry was an iPad project. Tumble, which is a game played with leaf blowers and tumbleweeds, has like a fictional history to it. But I think the difference between AR and VR is that AR is more of a sort of an outward-facing experience. Like, why are you, what's the purpose of AR? Like, why are you integrating, you know, objects into your world? And it's really like, I think, the focus on like surroundings and worlds. That has different affordances for a platform. And VR, I feel like, is more internal, introspective, and the concept of this idea of transportation, that's a more nuanced concept, but I think that holds true for VR.

[00:15:32.165] Kent Bye: Yeah, and I imagine that in the process of designing an AR experience, because it is outward facing, you have to have some philosophy or framework for being able to categorize these different objects into different things. Like if you're at home, you might have a couch and lamp and tables and things that like, you have to kind of design for that in terms of like, what's the most likely thing that's going to be there. So what's your strategy for doing that? For like trying to take an unbounded amount of possibilities and potentials and objects and start to split them up into categories and then from those sort of archetypal categories then draft a narrative and story from that.

[00:16:06.222] Samantha Gorman: Yeah, it's actually kind of funny. I'm working on that right now with the writers, like scaling everything and planning it out. I would say that some of the things that like the two-person excursions we know people will do, like, you know, they're really, we're trying to keep them really broad, like if we can tell on a map if you're near a body of water, so we can like sort of, it's essentially writers for that part are gonna have to seed like like 200 objects. And yeah, we're trying to figure out a way around that. We're working with a chatbot programmer too, you know, because we can't like write for like every single object in the world, obviously. And then in the sort of flow of the app as the fish evolves, we're going to have the fish as part of its character, maybe part generative, maybe not, ask to see specific things. Like one day it could get really obsessed with bananas. And like, you know, that could be like a recurring theme in its evolution. And we can actually seed and have different fish have different personalities and request different objects.

[00:17:07.033] Kent Bye: Wow. That's interesting. So you have a little bit more of like you being directed to go on an adventure to go find a banana or something like that.

[00:17:14.261] Samantha Gorman: Yeah. Um, that's a hundred percent true in the excursions. And we're also going to kind of see that in the, you know, the actual app as well.

[00:17:21.158] Kent Bye: Interesting. What inspiration have you taken from Pokemon Go? Because they have a process of figuring out big landmarks and then kind of driving people to go through physical space towards those landmarks, but yet they've been basically mapping those landmarks all over the entire Earth. That's a pretty big initiative to do that from, you know, their previous experiences and to the one with Pokemon Go. So how do you sort of make sense of all that space? Is that something that, you know, that level of landmark fidelity of an excursion that is either kind of localized to like what is a common objects in all the places or specific to geography so that, you know, if you're playing this game you would go to maybe Paris, France or something like that?

[00:18:01.273] Samantha Gorman: Yeah, I mean, obviously, as a small team, we can't have that fidelity. But ways we can do it is we can just know, like on a map, let's say like you're within half a mile of a giant body of water, right? Or, like one of the excursions might be at a grocery store. And that would be the most specific it could get, you know, we're trying to still make it very broad. So in the grocery store, for instance, like part of the social experience might be like showing the fish the fish section. and like, you know, what evolves from there.

[00:18:33.122] Kent Bye: Yeah. All of his distant relatives are now being eaten. That's interesting. So one of the things that I see as like the holy grail of interactive narrative is to be able to identify somebody's temperament or things that are going to be very specific to them. And I think there's different personality frameworks and other things that you start to almost tailor different experiences based upon one of these bins of personality or temperament that you may have on an individual playing them. Just curious if that has come up in starting to design TendAR, if that's going to be something where you're getting feedback from the type or the temperament of a character and whether or not they're going to be more willing to do active adventures or maybe more emotionally driven or somebody who's more thinking and solving puzzles, or somebody who just wants to do things in their body. To me, I sort of use the elements to kind of split up different higher-level archetypes of those different personalities. But I'm just curious, from your perspective, as you're doing this experiential design, if you've started to think about these temperaments for the people that are going through it, and if you're designing for that.

[00:19:32.766] Samantha Gorman: Yeah, actually the whole app is based on, um, so at a certain point you sign up for the beta test with, uh, you know, the software platform and you're introduced to your fish. And part of what you have to do to do that is an almost Voight-Kampff test that, you know, sort of resituates you in this space. And then through the course of the experience, it will sort of check back in with you. But the main way you feed your fish and keep it alive is by giving it emotions. And that changes the fish's temperament as well.

[00:20:02.957] Kent Bye: Oh, wow. Okay. Great. So what are some of the biggest open problems that you're trying to solve right now?

[00:20:09.648] Samantha Gorman: Yeah, right now, as soon as I get back from Sundance, I have to sit down and scope out the project and see what's doable. So as the project's main writing director, my particular section of it right now is figuring out what is doable in the narrative. What is the corpus of what we can possibly do and what needs to be generative, what needs to be seeded, how those things align in a way that feels natural as you're going through the experience, and really mapping it out.

[00:20:38.407] Kent Bye: Great. And finally, what do you think is the ultimate potential of virtual and augmented reality, and what it might be able to enable?

[00:20:47.910] Samantha Gorman: Yeah, I mean, I think a lot of people say this, but I think it's sort of obvious. So I've been working in virtual reality since like 2002 in like CAVE software, which was like a crazy, you know, room with back-projected stereographics. And I've seen different iterations of it. And I feel like obviously like VR and AR will merge into, you know, a cohesive system in some way where there'll be different features and affordances of both that will... Like I'm not so much sure if it's like a binary, like VR on, AR on, you know, like on and off, but I think there'll be some sort of integration.

Kent Bye: Is there anything else that's left unsaid that you'd like to say?

Samantha Gorman: I don't think so. Thank you so much for talking.

[00:21:36.446] Kent Bye: Okay, awesome. Thank you so much for joining me today. So that was Samantha Gorman. She's one of the co-founders of Tender Claws, and they were showing TendAR at the Sundance Film Festival. So I have a number of different takeaways about this interview is that, first of all, I think that, you know, of all the different experiences that were being shown at Sundance, and when you look at who is really pushing for the different cutting edge technologies, I think, for sure, both TendAR and the Frankenstein AI were both using some of the bleeding edge techniques from both artificial intelligence as well as immersive theater components, and in the case of TendAR, using augmented reality. So at both the Sundance Film Festival and the Tribeca Film Festival over the last couple of years, there haven't been a lot of augmented reality storytelling experiences. And I think that TendAR is starting to uncover some of what's really compelling about augmented reality storytelling as opposed to virtual reality storytelling. So it reminds me of the mixed reality spectrum, which originally came out in 1994 by Milgram and Kishino in a paper called A Taxonomy of Mixed Reality Visual Displays. And what they were trying to do is they were trying to set the spectrum between what the environment was. Are you in a real environment? Are you in a virtual environment? And as you start to add these augmented and virtual reality technologies, you go from the real environment, and then you're still centered within the real environment. And then you're adding different virtual components and layering it on top of that. And that's what the augmented reality really is. And then at the far extreme, you're completely enveloped within a virtual reality environment. There's no real environment that you're seeing. And then sort of the part that's in between augmented reality and virtual reality is what Milgram and Kishino called augmented virtuality, which we don't ever hear, but it's probably most of what people colloquially call mixed reality, which is that you're primarily within a VR experience and you're seeing components within the real world coming in, and you have this kind of blending of real objects that you may be having some sort of passive haptic feedback with, but you're in a virtual environment, and so you're having this interface between the real world and the virtual reality. Calling that mixed reality is a little problematic. So I think that depending on what orientation that you're coming from, the word mixed reality can mean all sorts of different things. So if you're coming from VR and you're talking about mixed reality streaming, that means you're in a VR experience, but you're overlaying with a green screen and there's mixed reality. But then, you know, with Windows, they basically took this mixed reality spectrum and they call a VR headset a mixed reality headset, which is arguably more of a VR headset rather than a mixed reality headset, because there's no actual augmentation that's being fed in. But I think in the long term, this is all going to seamlessly blend together and that Windows Mixed Reality headsets will actually have mixed reality included within them. Anyway, that was a little bit of a diversion. The main point that I'm trying to make here is that the center of gravity within an augmented reality experience is the environment. And I think within Tender Claws, they're starting to do things that you couldn't necessarily do as much within virtual reality.
Primarily, you're holding a phone and you're able to have access to someone's face and the full fidelity of their facial expression. So you can start to use some of the facial recognition and use the emotions. And I think that what they're doing with the story is starting to integrate those emotions in a really kind of fun and interesting way. Just the process of mimicking different emotional expressions on your face makes you actually start to have this biofeedback phenomenon where you actually start to feel those emotions. And so you actually start to have this kind of really rich emotional journey. And because you can also start to interact with other people within the augmented reality, which is, you know, you can interact with people within virtual reality, but it's a lot different to actually be co-present with another human being, especially at Sundance, where you're doing this, what ends up being somewhat of an intimate experience with what could be a complete stranger. And if you don't know them, then you have this sort of connection with them afterwards. Or if you do know someone, this is just sort of an opportunity to do something fun together and to have this shared experience that is not like anything else that you've experienced before. And so the other thing that comes to mind is this interview that I did with Colin Nightingale of Punchdrunk. And in Punchdrunk, they started to use like these audio tours where they would start to play with blending what is real, what is a part of this experience, where you go on this audio tour of a city, and the main thrust of the narrative that's being told to you is coming through the audio. And I think that's what TendAR is also doing, that they're really focusing on that audio storytelling, and that depending on what you're looking at, you're kind of actively editing how that story is unfolding. And so you do have this really nice blend of your agency of your exploration, and then you have different objects that are in the world, and those objects are being detected by the AI, and then as writers of these experiences, they have to figure out, well, you have an infinite number of objects. So how do you sort of break that down into different categories and then actually make an interesting story around that? And they can do different things like have you go on an excursion to a grocery store and they kind of know the different, you know, sections of a grocery store, whether there's a meat department or fruit or, you know, there's just a limited amount of things that all grocery stores have. And so they can start to do those types of things as well. But what they were able to achieve with this experience, I think, is really quite interesting and fascinating, because you have the differences of augmented and virtual reality is that sort of embodied presence, environmental presence, what I would call sort of that earth element, is that you are trying to actually integrate your surroundings in the world. And the question that Samantha says is, you know, how and why are you integrating objects into your world and into this story? The absence of actually doing that within an augmented reality experience is like, well, couldn't this just be kind of a VR experience? And I think that that was one of the things that as you're kind of walking around the experience, you're starting to tap on a light or a couch.
And it's kind of delightful when the AI is able to kind of actually respond to that and still push the story forward while you're interacting with the environment. In terms of the active presence, you are sort of taking actions based upon exploring your environment, but you're also doing that with another actual person. And I think the fact that you're able to interact with other people in real time and to see the full emotional fidelity of their expressions is also really interesting. And it was really interesting to also kind of split the agency between the two different characters. I had just assumed because, you know, whenever you normally share headphones, you just have the same audio stream. But they're doing some interesting things where they start to split off the different actions that are happening between the two different people. So this collaboration with Google, to me, is something that's really fascinating. I know that a lot of the highest people within Google have really, really loved the Virtual Virtual Reality experience that came out for the Daydream and that they actually won an award for the best VR experience back in 2017 at Google I/O. And so I think it's interesting that Google is collaborating with Tender Claws on some of these more cutting edge technologies that they're working on. And that makes total sense because I totally trust Tender Claws to be able to take some of these new possibilities and to do something really interesting with them. And I think that from a narrative perspective, what they're doing is super fascinating. And I talked to Samantha afterwards, and she said that within this experience, when it gets released, that they're not going to be capturing, recording, and storing any of this emotional data. Now, from a business side, I guess there's different concerns that I have with how much data are you willing to give over to Google when it comes to your emotional data or your biometric data. So there's this trajectory of immersive technologies that they're going to be able to capture more and more information about our unconscious processes. The more that companies like Facebook and Google are centralizing all that data into a centralized profile, then the more they're able to kind of know more about you than you know about yourself. And I think in talking to different neuroscientists, there's this kind of unknown ethical threshold of how much data is too much data. And when you start to get into your emotional data and your biometric data, then that line starts to get really blurry and fuzzy. And this is a larger conversation that I need to have both with Facebook and Google. I'd love to just sit down with them and to have this hard conversation, because there's a lot of privacy implications. There's a lot of downsides, and right now it's a trade-off between the different kind of new and exciting things that you can do with the technology versus kind of the potential dystopian possibilities of where this could go. Just as an example, the third-party doctrine says that any information that you give over to a third party is no longer considered private. So if you're willing to give your emotional data over to Google, that means the government can go to Google and say, hey, we want the emotional profile of all these people. And the potential for abuse is just really high.
And I think that there are these various different collaborations and connections between these huge companies like Google and Facebook and the government where it's kind of opaque. We don't really know the full extent of that collaboration. Just from the terms of a business standpoint, if they have more and more information about you at this degree and this fidelity, the line between advertising and thought control becomes very blurry. And this is just a conversation that I've had with a number of different neuroscientists, as well as people who are concerned about these different various privacy issues. So the fact that they're working on a human sensing group at Google, and they're potentially going to be working on technologies to capture this information, then the question becomes like, how much of this information do we want to actually give over to these companies? And what are the trade-offs of that? This is a larger conversation that kind of goes beyond what Tender Claws is doing, and I just need to have a conversation with Google and Facebook about this, and I'd love to sit down and have that conversation. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast, and if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon your gracious donations in order to continue to bring you this coverage. So, you can donate today at patreon.com slash Voices of VR. Thanks for listening.
