Visionary VR premiered their Mindshow interactive storytelling platform at VRLA, and it was quite a unique experience to record an improv acting session within the body of a virtual character and then step outside of myself to watch my performance. I’ve recorded myself with a 2D camera plenty of times before, but there’s something qualitatively different about watching my own body movements while immersed within a spatial environment.
The core mechanic of reacting to a story prompt was simple and intuitive, and the number of variations in how a scene plays out is limited only by human creativity. The initial Mindshow demo at VRLA used a simple linear capture where you could layer additional characters into a scene while your previous takes played back to you. You could develop an entire story by rapidly iterating different performances of yourself, much like a looping musician might construct a song.
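To make that layering mechanic more concrete, here is a minimal Python sketch of how a take-based recorder could be structured. Everything in it, including the PoseSample, Take, and Scene names, is a hypothetical illustration rather than anything from Mindshow itself: each take is a timestamped list of pose samples for one character, and a scene plays back every prior take while a new layer is being captured.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PoseSample:
    """One motion sample: a timestamp plus the tracked head and hand positions."""
    t: float
    head: Vec3
    left_hand: Vec3
    right_hand: Vec3

@dataclass
class Take:
    """One recorded performance of one character."""
    character: str
    samples: List[PoseSample] = field(default_factory=list)

    def sample_at(self, t: float) -> PoseSample:
        """Return the most recent sample at or before time t (hold the last pose)."""
        current = self.samples[0]
        for s in self.samples:
            if s.t > t:
                break
            current = s
        return current

@dataclass
class Scene:
    """A stack of takes layered like loops on a looper pedal: every previously
    recorded take plays back while a new character is being captured."""
    takes: List[Take] = field(default_factory=list)

    def record_layer(self, character: str, live_samples: List[PoseSample]) -> Take:
        take = Take(character, live_samples)
        self.takes.append(take)
        return take

    def playback(self, t: float) -> Dict[str, PoseSample]:
        """Poses of every previously recorded character at time t."""
        return {take.character: take.sample_at(t) for take in self.takes}

# Example: record an alien take, then query the scene while acting the next layer.
scene = Scene()
scene.record_layer("alien", [PoseSample(0.0, (0.0, 1.6, 0.0), (-0.3, 1.2, 0.2), (0.3, 1.2, 0.2))])
poses = scene.playback(0.5)
```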
But the true power of Mindshow will be in its collaborative features, where you’ll be able to communicate with your friends through the direct experience of a story rather than through abstracted, symbolic language. You could pass a scene back and forth like an asynchronous improv performance, or, once the feature is implemented, eventually interact in real time.
I had a chance to catch up with Visionary VR and VRLA co-founder Cosmo Scharf, where we talked about some of the inspiration behind Mindshow, including the Buddhist philosophy of Alan Watts and the post-symbolic, direct-experience ideas of Terence McKenna.
LISTEN TO THE VOICES OF VR PODCAST
https://www.youtube.com/watch?v=2p9Cx4iX47E
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to the Voices of VR podcast. So when I was at VRLA this year, there were a number of different announcements, but one of the ones that really stood out was from Visionary VR about their new experience called Mindshow. So Mindshow is a way for you to tell stories by actually embodying different characters and kind of doing some improv acting where you record yourself and then you're able to jump between different characters and create an entire story, an entire scene. And then after you're done recording, then you can step outside of yourself and watch whatever you created. And it's pretty mind-blowing. I think it was one of the first times that I've been able to record some of my first-person perspective embodied actions and to be able to step outside of that and witness myself. And so Cosmo Scharf is a co-founder of both VRLA as well as Mindshow. And so he was at VRLA making the announcement for Mindshow and talking about how he's been really inspired by Alan Watts as well as Terence McKenna and how he sees Mindshow as kind of like this expression of post-symbolic communication. And so that's what we'll be talking about on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. This is a paid sponsored ad by the Intel Core i7 processor. You might be asking, what's the CPU have to do with VR? Well, it processes all the game logic and multiplayer data, physics simulation and spatialized audio. It also calculates the positional tracking, which is only going to increase as more and more objects are tracked. It also runs all of your other PC apps that you may be running when you're within a virtualized desktop environment. And there's probably a lot of other things that it'll do in VR that we don't even know about yet. So Intel asked me to share my process, which is that I decided to future-proof my PC by selecting the Intel Core i7 processor. So this interview with Cosmo happened at VRLA, which took place at the Los Angeles Convention Center on August 5th and 6th. So with that, let's go ahead and dive right in.
[00:02:16.391] Cosmo Scharf: Hi, my name is Cosmo Scharf and I'm one of the co-founders of VRLA as well as another co-founder of Visionary VR. And yesterday we just revealed Mindshow, which is really awesome.
[00:02:29.840] Kent Bye: Yeah, I had a chance to try it out, but maybe you could describe what Mindshow is.
[00:02:34.781] Cosmo Scharf: Sure. So Mindshow is an entirely new way of creating stories. It allows you to create, share, and experience shows in VR. So what we mean by shows are anything from a short, bite-sized moment to a longer, more complex narrative. And you start by choosing a world to inhabit and decorating it with props. And the core of the experience is you get to embody a character and basically puppeteer them and move their body, interact with other characters, interact with props, record yourself doing that and create a story in that way. And then once you're done, you can share your show with other people
[00:03:18.178] Kent Bye: Friends in VR, and then also outside of VR, like on social media. Yeah, the thing that I was really surprised by was how immersed I got into each of the different characters, having the actual embodiment of an alien character and then the captain hero character. To start off, you kind of have some acting that you're reacting to, so you have just kind of a simple prompt, like you're scaring the other character. But then you have the chance to actually start to kind of layer, and almost like a musician would create a whole song through looping, you're kind of creating a loop of improv acting, being able to record it and play it back to yourself, modulate the voices, and give little emoticon expressions. But it was just really satisfying to then stand back from the first-person perspective and actually watch the whole scene that I just acted out from a disembodied third-person perspective.
[00:04:10.357] Cosmo Scharf: Yeah, it's a really kind of unique thing that we're doing there. I remember the first time I tried puppeteering a character and then watching it back. It was such a cool and kind of weird experience because you're watching the character basically, you know, move as you just moved. And there's this weird duality, in the sense that that's me, but it's not me, because it's a character. So what happens is, when you get to perform as a character, it's almost like wearing a costume. And what we hope that enables is kind of breaking down people's barriers towards creativity, so that people basically get over the fear of messing up or saying something stupid, and are just able to be themselves, or not themselves.
[00:04:54.338] Kent Bye: Yeah, the other thing that I really liked about it was you kind of have this almost telephone game that I could imagine starting to be played where someone records maybe one bit of dialogue and then you could record your reaction to that and then you kind of send it back and forth to each other and kind of have this emergent story based upon that.
[00:05:10.108] Cosmo Scharf: Yeah, it's a whole new kind of creative conversation, right? So it's like stories as communication, right? And we, every day we tell stories, whether we know it or not, stories are everywhere. They're often things that maybe you don't think of as stories, but this concept of, I can make something and send it to you and you can react to it or you can add it or you can change it. You can really do whatever you want. Right. So I could, record myself as an alien and then, you know, say, hey, what's up? This is Cosmo. Welcome to my planet. And then, you know, I could send it to you and you could ultimately change the voice. You could change the character. You can change what world they're in. You could change everything about it. So all of a sudden, like storytelling is not only bi-directional, but is completely malleable because it's in a game engine, right? That's the power of VR is that we're taking advantage of everything that you can do that is not possible in real life.
[00:06:09.485] Kent Bye: Now, yeah, one thing that this makes me think of is something along the lines of immersive theater. And more like a theater take than a film take, if you know what I mean. Like there's not a lot of cuts and edits. And so I'd imagine that if there's any bit of extended dialogue, then I'm just wondering if you have the ability to kind of pick up at any moment and then kind of start from there and then go. Or how are you going to deal with, how do you do a complex scene with dialogue and get the timing and pacing right?
[00:06:37.561] Cosmo Scharf: Right, so the demo you saw is, again, a first look. It's the first time we're showing it to anyone. The core of the experience is puppeteering, but certainly what we're building up towards is enabling people to create potentially entire feature-length pieces of content. And to do that, you're going to need a timeline, you're going to need to be able to have cameras, etc. It's pretty simple right now, and you'll be able to do more and more crazy things with it. The whole premise of what we're building is the two main things is simple and fun, right? Our hope is that if there's anything that you experience in the demo, whether you created a good story or not, is that you had a fun time, right? That's really what it's about.
[00:07:17.366] Kent Bye: Yeah, I think that the first time that I ever was in a VR experience and started to witness myself from a third-person perspective was in The Wave that was shown at SVVR. I was on a DJ booth and then I got kind of teleported into the audience, and it was in real time, so I was able to kind of see myself almost like in a mirror. I was getting a digital representation of myself in a real-time mirror, but this was the first time that I've actually seen a recorded version of myself played back to myself, and it was really quite compelling. I mean, it's something that I've never experienced before. It's like a first in VR to really record myself, and, you know, as a VR technician, I'm, like, trying to move my arms around and see if I can break it. And then I watch myself, and I'm doing that same sort of, like, twisting of my hands around to see what the inverse kinematics are doing. But it was something that I just had a lot of fun doing, and I just wanted to keep doing and making stuff.
[00:08:09.669] Cosmo Scharf: Yeah, the thing that's really exciting is a lot of the feedback that we're getting so far is like, wow, that was awesome. I want to keep doing more. It was just a little taste, a bit frustrating in the sense that it was kind of short. But yeah, I mean, when you talk about recording yourself and then watching it back, yeah, that's something that we're really proud of and excited to show people for the first time.
[00:08:30.896] Kent Bye: Yeah, well, just two days ago, HTC announced that they're going to have more trackers to be able to actually have more capability to track more objects, which means, in my sense, I could imagine a time where you have both the feet and knees and elbows and you can do a lot more sophisticated motion tracking. There's no facial tracking right now, so I can imagine a time where the facial tracking gets a lot better to actually do emotional expression. It's tricky, because to do full facial capture with an HMD on, there's all sorts of complications. And so to do the full motion capture, you kind of have to be outside of VR. But then when you're outside of VR, you're not embodied within the character. So I imagine these are a lot of the issues that you're looking at going forward. I can kind of see the roadmap of how the fidelity is just going to get a lot better, but there are still a lot of challenges to be able to be within VR and still do the full emotional expression that you want to do.
[00:09:26.370] Cosmo Scharf: Yeah, it's pretty simple right now. You know, the magic of it is that we're able to essentially take what was once several thousands of dollars' worth of really crazy mocap equipment that you need to go to a special place for, and that you need specialized skills for, and now anyone with a Vive and a PC can have that in their living room. And we're just taking the three inputs, your head and your hands, and extrapolating the entire rest of the character from that. So that's really cool. But yeah, you're right, the fidelity of it will continue to increase dramatically. So things like being able to totally track your face and your whole body, we could theoretically do that today, actually, with something like a Perception Neuron. That stuff's pretty cool. I was just talking to Reggie Watts about this, and he was a big fan of them. I think we could work on something like that, but it's kind of niche at the moment, and it'll be interesting to see how VR tech advances, right? Like ultimately, perhaps the Vive will come bundled with some other things you put on your body, or maybe the Rift will come bundled with extra cameras that will track the whole body as well. So, who knows?
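As a rough illustration of what extrapolating a full character from just three tracked inputs can look like, here is a small Python sketch. It is purely a guess at one common heuristic (fixed vertical offsets for the chest and hips, midpoint-style elbow placement with a downward bias), not Visionary VR's actual solver, and every name in it is hypothetical.

```python
# Hypothetical three-point body estimation: given only the head and hand
# positions (the Vive HMD and two controllers), guess where the rest of the
# body is. Real IK solvers are far more involved; this only shows the flavor
# of the extrapolation described above.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def estimate_body(head, left_hand, right_hand):
    """Return a dict of guessed joint positions from three tracked points."""
    # Assume the chest and hips hang a fixed distance below the head.
    chest = (head[0], head[1] - 0.25, head[2])
    hips = (head[0], head[1] - 0.60, head[2])

    # Shoulders: nudged from the chest toward each hand (very crude).
    left_shoulder = lerp(chest, left_hand, 0.15)
    right_shoulder = lerp(chest, right_hand, 0.15)

    def elbow(shoulder, hand):
        # Halfway between shoulder and hand, dropped slightly so the arm
        # bends downward, a heuristic that breaks when you twist your
        # wrists or raise your arms in unusual ways.
        e = midpoint(shoulder, hand)
        return (e[0], e[1] - 0.10, e[2])

    return {
        "chest": chest,
        "hips": hips,
        "left_elbow": elbow(left_shoulder, left_hand),
        "right_elbow": elbow(right_shoulder, right_hand),
    }

# Example: head at 1.7 m, hands roughly at waist height in front of the body.
pose = estimate_body((0.0, 1.7, 0.0), (-0.3, 1.0, 0.3), (0.3, 1.0, 0.3))
```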
[00:10:34.010] Kent Bye: Yeah, I can imagine how you'd want to keep it with what most people have access to, rather than creating a niche, something specialized for the industry.
[00:10:42.417] Cosmo Scharf: Yeah, exactly. That's really our goal with this, to make it accessible to anyone. That's what's super exciting: it's not just for storytellers. It's not just for people who see themselves as creative. It's not just for people who are filmmakers. Certainly it's going to apply to a lot of those people, and the writers, directors, improv people, those sorts of people are going to be the first ones who get excited about this. But really, we're trying to democratize storytelling in the world's most profound and emotional medium.
[00:11:12.140] Kent Bye: And can you describe your own experiences or anecdotes of using MindShow as a storytelling medium?
[00:11:19.718] Cosmo Scharf: Yeah, I mean, there's all sorts of crazy, like, GIFs and photos that our team posts on, like, our internal Slack. It's really cool to see, because it's like, it's just like our team kind of messing around with, like, seeing how they could break the system. And, you know, we honestly, like, you know, we've done a lot of user testing, but we don't really quite know how real users are actually going to use it. So that'll be fun to see how that works. But really, it's a platform for enabling other people's creativity. So I think we're going to be very surprised.
[00:11:46.596] Kent Bye: Have you told any stories in it yet? Constructed an actual story from beginning to end that was using the tools?
[00:11:52.109] Cosmo Scharf: Yeah, absolutely. I mean, mostly right now, it's more focused towards creating sort of like short little comedic moments that aren't necessarily narratives. And that's why we call them shows. And, you know, obviously it's very much about storytelling and enabling people to create stories, but it's a bit of a loaded term in that, like, not everyone necessarily wants to make stories, but like, as long as the experience is fun and like, you know, you can get in there with other people, like you're going to use it however you choose.
[00:12:22.217] Kent Bye: Yeah, it felt like the closest thing I've experienced to being on stage for an improv show, you know, where there's somebody there and I have to react to them, and I wasn't necessarily mentally prepared to be like, okay, now I'm on stage and this is exactly my approach to doing it. But there's a little bit of that "yes, and" type of thing, like, oh, I'm going to have to respond and go with whatever's happening here. And having something there gives me something to react to. So I can imagine a time where you kind of create these story stubs, where there's a starter line to just get the story going, and then you start reacting to it from there.
[00:12:54.794] Cosmo Scharf: Yeah, absolutely. I mean, right now it's a bit like Mad Libs in VR, right? And yeah, we're just really excited to see where it goes. It's just the beginning, but there's a lot of things that we can do with it.
[00:13:06.487] Kent Bye: Now here at VRLA, you're giving a talk introducing some of your deeper metaphysical thinking about the importance of Mindshow. And I'm just curious to hear maybe a brief little summary of some of the main points that you were trying to make there.
[00:13:19.271] Cosmo Scharf: Sure. Yeah. So, you know, I referenced Terence McKenna and Alan Watts. They've been pretty influential on how I think about VR. And I talked about that in the speech, talked about the illusion of separation and how ultimately, through VR, more and more people could realize that, in fact, we are not separate entities, or rather, that there's this sort of deeper underlying connection between all things and all people. And Terence McKenna has this brilliant article that he wrote about virtual reality in the 90s, you know, right before he died. It's kind of like an evolution of language itself. He's talking about how, in the 90s, VR could bring about this new kind of visible language. Because language right now is symbolic, right? It's referencing other things. It's not the actual thing, right? A great example is how Alan Watts talks about the concept of money. Money is not actual wealth. It's only as good as what it represents, as long as enough people believe in it, right? So with VR and Mindshow, we have potentially an opportunity to create a new kind of visible language, where my body is now the interface and how I move and how I speak directly comes out. It goes very much back to the name Mindshow, right? Being able to show your mind, taking this idea in your head, this vague thing that you feel excited about, and being able to bring it out into existence. And yeah, the possibilities are kind of endless.
[00:14:51.772] Kent Bye: Yeah, the thing that makes me think of, as you're talking about language: the left brain is the part of our brain that processes language, and the right brain is more visually based. And ever since we've had written language, we've been really dominated by the linearity of that language. I think that we're moving to a world that's going to be moving away from the left brain and more into the right brain. And it feels like kind of what you're saying is that through the process of body language, you're able to give more of a visual representation of the communication, which, when you're talking to somebody face to face, means you end up picking up on a lot of the tonality and all those sorts of subtle things that are more right-brain driven than left-brain.
[00:15:32.442] Cosmo Scharf: So to add on to what I was talking about before: you ever have a conversation with someone where you're trying to describe something? It can really be anything, like a moment in your life or a movie or a work of art, pretty much anything. And often it's frustrating, because for the thing that you're trying to describe, language and words only go so far. Now with VR and with Mindshow, you can communicate through direct experience, right? Obviously the spoken word is a part of that, but when I want to convey a certain thing, I don't have to talk about it. I just use my body and voice to create it, and you experience it exactly how I intended.
[00:16:11.478] Kent Bye: Yeah, I just did an interview with Charlie Melcher from the Future of Storytelling Summit. And the way that he told it to me was that they used to have oral traditions of storytelling. And then when written language came, Socrates actually said that that's a dead language, because it's removed from the emotion of the moment of actually being expressed from the person's mouth. And so language written down becomes a dead language when it's not actually expressed through the person who's actually telling it. So it feels like he sort of sees this as moving to living stories, and so it's actually bringing back the human spirit or human soul into that communication.
[00:16:50.295] Cosmo Scharf: Yeah, the concept of how language has evolved and how it's impacted our consciousness and our understanding of reality is really fascinating. And I think what we're doing with Mindshow is kind of taking us back, in a way, to what you were just talking about, the time before the written word, right? The written word is super useful, right? It's a social convenience, as Alan Watts describes, where if I want to talk about a bank, I don't have to be at the bank and I don't have to literally point at it. I just write it down and it represents that thing. But the problem is that, as useful as that is, and as useful as it is to be able to distribute concepts globally, people often confuse words and language for actual reality. They're not actual reality. Actual reality is direct experience. That's what Terence McKenna talks about a lot, and that's what VR is providing, so that's why it's so exciting.
[00:17:43.674] Kent Bye: I think the other thing is that storytelling with written language and in films has tended to be a bit of a singular perspective, where it's one person's idea of what the story of their experience was, whereas VR feels like a medium that's really set up to be able to have multiple different perspectives. And one of the things that Charlie Melcher said is that, you know, in oral traditions, when people told a story, people would add a little poem or have a little joke. It was this communal part of actually listening and receiving, but also adding and expanding on the story. So it's a little less about one person doing a broadcast of a story; it's more of a participatory process.
[00:18:19.869] Cosmo Scharf: Yeah, absolutely. There are so many correlations there between what we're doing and what I was talking about earlier. Storytelling right now, when you go to see a movie, and movies and stuff are great, I'm not shitting on that stuff, but it's the director's experience, right? It's them telling me. You know, I go to a movie theater, I sit down, and I watch the thing for an hour and a half. I just sit there and I watch and I take it in. Same thing with TV shows, etc. Now, because you're in VR, because you have the power of a game engine where you can do anything, you can change the physics, etc., you have this bidirectionality that hasn't existed before, and so it becomes a kind of conversation where you can change things. That's really what we're so stoked about, just to be able to enable that kind of concept.
[00:19:02.703] Kent Bye: Right now in this initial demo, it's sort of an asynchronous type of recording where someone may record something, send it to you. But I can imagine a time where you really take this to the next level where it's actually live and synchronous. And I imagine one big issue may be latency over the network and other things. So I'm just curious if you've started to look at trying to do actual live, real-time recording with multiple actors.
[00:19:25.148] Cosmo Scharf: Yeah, so I can't go too much into the roadmap right now, but certainly multiplayer and live stuff is really compelling.
[00:19:32.713] Kent Bye: Yeah. To me, to get to that level of actually achieving that communal storytelling, it seems like the next logical step. So for you, what is the next step? What are the things that you can announce for what happens next?
[00:19:43.660] Cosmo Scharf: Sure. So we're showing it for the first time here, and we're going to be launching in closed alpha this year. And you can go to our website, mindshow.com, to sign up for early access.
[00:19:56.161] Kent Bye: So we're here at VRLA. And I remember talking to you back in May of 2014. And I don't know if you had even done your first VRLA yet, or if you had just started. And it's grown a lot since May 2014. So maybe just say a few words about how far you've come.
[00:20:13.033] Cosmo Scharf: Yeah, it's really crazy. You know, I think we had done maybe one or two or three events back then, I can't remember, but it's pretty insane. I mean, when we first started the first meetup, we were excited about it, but you don't really have a vision for what it might be. And so here we are at the LA Convention Center: 6,000 people, 133 exhibitors, in the biggest space we've ever had. It's completely packed in all the theaters and the Expo Hall. The lines are freaking long as hell. And we've grown with the VR industry, right? There's so much excitement and hype around this right now, and we're really fortunate because we just happened to start it at the right time and with the right people. And we've put our lives into this. How much goes into putting this on, as well as Visionary VR, is crazy, and all of my team members would agree, but we're just really fortunate to be able to create a platform for other people to show what they've been working on, and for the general public to come check out the future of tech and entertainment and life itself. You know, there are all sorts of other VR events out there, but the cool thing about what we're doing in particular, I think, is that really anyone can come, right? It's not just for developers. And I've heard so many stories from people who are like, yeah, I came to the last VRLA and it got me into VR. And it's like, oh my God, yeah, we're actually having some kind of impact. I don't know how big really, it's kind of hard to measure that. But it's awesome, and a lot of responsibility, to see what we've been able to achieve, and I'm just looking forward to what the next show will be like and the future in general.
[00:21:54.607] Kent Bye: Just from going to a lot of different conferences, I'll say that being here in LA, it's really the entertainment capital, so you've got a lot of focus on actual content. And then, it being an expo with affordable prices for the consumer, you're able to put 6,000 people through to try a lot of experiences that would otherwise cost them hundreds or thousands of dollars in conference fees and travel to even experience.
[00:22:16.645] Cosmo Scharf: Yeah, we're not doing this to be rich, right? We're not doing this for the money. We started this group because we wanted to see who else was interested in VR, and it just turned into this crazy thing. And really, we just want to make VR successful because we believe it's so awesome. And every time you show it to someone else, they hopefully agree.
[00:22:41.654] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality, and what it might be able to enable?
[00:22:48.200] Cosmo Scharf: Sure. So I think a lot of that has to do with my talk, actually, like, I think the ultimate potential is for it to help dissolve boundaries between people and reveal the illusion of separation, right? I think a lot of the problems that we have in our world, whether they're social, economic, environmental, etc. is because we've completely disconnected from our environment and from other people and we see ourselves as completely separate from each other when in fact there's really this underlying connective core between everyone and everything. Once we have technology that allows us to experience really anything we want, when we're able to teleport, when we're able to become, you know, when we have other bodies, when we can fly, like when we can bounce back and forth between all kinds of crazy virtual worlds, like we'll understand that there's more than today's kind of experience of reality, that connectivity runs much deeper than like you would imagine.
[00:23:49.454] Kent Bye: Okay, great. Thank you so much. Yeah, thank you. And so that was Cosmo Scharf. He's the co-founder of VRLA, as well as the co-founder of Visionary VR, where they were debuting Mindshow for the first time at VRLA. So I have a number of different takeaways from this interview. First of all, I had a lot of fun with this experience. It's one of the ones that I really wanted to play a lot more with, because there's just something super compelling about being able to actually watch yourself in VR. And I think that there's a bit of a feeling of, like, I suck, I'm not an actor, and I haven't had a lot of experience with improv. And so I kind of found myself within the scene, and then somebody is acting at me. I think that these prompts actually make a huge difference. And so I could imagine a time where there's a whole set of different kinds of writing prompts, but in this case acting prompts, where you're in this specific scene and scenario and there's somebody who's acting really scared and frightened, and so you just kind of step into that role and you naturally feel like, okay, well, in this situation I would be trying to scare this guy. But I think that going back and forth with your friends is going to be pretty fun as well, and like Cosmo said, it's kind of like Mad Libs for VR, where you're filling in the different blanks, and so I could see you going back and forth and sending these different shows or moments back and forth to each other. But I wanted to get back to this being able to be in the first-person perspective and then stepping out and watching yourself in third person. So the first time I really experienced that within VR, I think, was within The Wave when they were showing their demo at the Silicon Valley Virtual Reality Conference. You're on stage and you're DJing, and you're kind of operating all this DJ equipment, and when you teleport into the crowd and watch yourself, then as soon as you move your hands you start to see, like, oh, that's my body, and I'm going to move around. It's real-time and dynamic, and so it's a little bit like being in a virtual reality mirror. Now, this was a little bit different, because you record yourself and then you step out and you're able to watch what you look like, and I think this was really super compelling because it's starting to cultivate this sense of witnessing and self-reflective consciousness, so that you're able to actually observe and watch yourself in a way. And I think the fidelity is going to get better and better, because right now it's a little clunky to do emotional expression by pushing a button and selecting an emoji. I think that's a bit of a stopgap solution to deal with the fact that we don't have really great facial tracking natively integrated with a lot of this VR technology yet. But I think that will eventually come, I'm hoping, because pushing buttons to show emotion is a bit of an abstraction, and I think we're going to want a lot higher fidelity and control eventually. But I think that Mindshow's approach is to just stick with what's available to consumers right now and to really push the limits of what you can do with just the Vive and these hand-tracked controllers. And hopefully being able to track a lot more points is going to help out a lot, because with just the hands you can do some inverse kinematics, but it kind of breaks down at some point.
You really kind of need more points, like on your elbows and knees and feet, and I think once that all gets into place and you're able to fully track your body accurately, then the extra tracking points on your hands and feet will give you this full invocation of the virtual body ownership illusion. I have to say that I was pretty surprised to feel how different it was to actually step into this alien monster body versus the male protagonist hero character. Just by embodying those different avatars, you start to be inspired in how you react to these different situations. It really is like putting on a costume and starting to play the part. But the reason why the virtual body ownership illusion is important is because if you really feel like you're embodied in this character, and then you step outside of yourself and watch yourself, it'll be a little surreal, because you will have a memory of being that entity, and it'll feel like time traveling. And so I think that there is a very subtle time travel illusion that's being evoked here. This is an illusion that was studied more deliberately by Mel Slater in an experiment where he had people operate a machine where you're essentially deciding whether a number of people walking through an art gallery go to the first or second floor. It's pretty simple and mundane, and pretty arbitrary, as to when you decide to let people up to the different floors. But what happens eventually is that there's this gunman that comes in and shoots everybody on one of the floors that you chose. And so it's a bit of a moral dilemma in retrospect, where you start to really question some of the decisions that you made without really even thinking about it. Well, the interesting thing is that they put people back into this experience like a week or two later, and because they had invoked the virtual body ownership illusion, people reported that they felt like they were time traveling. And they actually gave them the capability to intervene in their previous actions. And when I was talking to Mel Slater about this, I asked, well, what does it feel like, this time travel illusion? And he was basically like, well, it's an illusion. It's something that you can't quite fully describe, but it just kind of felt like you were time traveling. And I think that with experiences like Mindshow, it's kind of like the first time that I'd really directly experienced this time travel phenomenon. But in this case, it's only just a few minutes right after you've recorded it. But I can imagine collaborating and recording scenes with other people, and then coming back and watching it, and being able to really just watch these kinds of nonverbal or explicitly verbal behaviors that you may have never been able to be aware of before. And so I think there's a really fascinating dimension there that is going to emerge with this type of communication. But I wanted to call out another thing that Cosmo was trying to say, and it came out really clearly when he was talking to the crowd on Saturday at VRLA. He really went into his inspirations from Alan Watts and Terence McKenna, talking a lot about some of their ideas and how he saw virtual reality as a technology that would start to break down the barriers and boundaries of separation between people.
But not only that, there were also some of Terence McKenna's ideas about words as abstractions. A word is not an actual reality; it's more of a symbolic representation. And from Terence McKenna's perspective, the only thing that's real is actually your direct experience. And so I think that's kind of the spirit of what Mindshow is trying to create: instead of trying to communicate through words, they're trying to actually create a direct experience of those words by creating entire scenes within virtual reality. Instead of talking about a bank, maybe you're at the bank and you're able to have a direct experience of the bank within virtual reality. And so I kind of see this as part of this larger shift from the information age into the experiential age, where before, the written word was very linear, left-brained, abstracted, symbolic communication, and with Mindshow and other virtual reality technologies that are going to be coming in the future, and just the fact that you can have motion-tracked controls and be able to give people a simulacrum of direct experience that's mediated through the technology, I think that we really are moving into this experiential age where we're able to communicate to each other through these stories and metaphors and anecdotes, and to be able to share a sense of an experience of something with other people. Now, there's something a lot different about recording something and watching it on a 2D screen. It's way different than actually being immersed in it, and I think that's part of the reason why it's so different for me to be able to actually act out within one of these characters and then stand outside of myself and watch myself: because it's a spatial medium, and I'm able to actually watch myself and how I move in a new way that I wasn't able to witness before. And I can watch myself and identify, like, yeah, that's definitely me. I made those exact movements. I know what it looked like from the first-person perspective, and yeah, that kind of matches what I guess it should look like from the third person. So at the moment, Mindshow is pretty asynchronous, and I think being able to do live interactive improv types of scenarios is going to really take this program to the next level. I mean, I think that the next logical step may be to be able to do that asynchronously and to be able to edit. But one of the things that's really challenging with editing, especially as you're moving around, is that you have this animation that happens where you're dynamically moving around, and if you cut out, like, 10 seconds, then essentially the animation is going to glitch, and so you have to really pick the right edit points. It's something that I've faced before in trying to create a crossover experience, trying to create dialogue that was happening asynchronously and then edit it in post-production, but it's not so easy to be able to cut it down. So I think to really get the timing right, you almost need to have live actors acting at the same time. Or maybe you have the capability to be able to, in a direct linear sequence, pick the moment where you're really picking up from and move into the next scenes. I think it'll be interesting to see how they handle some of those timeline capabilities and how they really allow you to build out a full scene with other people in this kind of distributed asynchronous way.
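One common way animation tools soften the kind of glitch Kent describes here is to crossfade the recorded motion across the cut rather than splicing it hard. The following is a small, purely hypothetical Python sketch of that idea; it does not reflect how Mindshow actually handles edits, and all names in it are made up for illustration.

```python
# Hypothetical crossfade across an edit point in a recorded motion track.
# A track is a list of (time, pose) pairs where each pose is a flat tuple of
# joint coordinates. We remove the span [cut_start, cut_end) and blend poses
# for `fade` seconds after the cut so the motion doesn't pop.

def blend(pose_a, pose_b, t):
    """Linear blend between two flat pose tuples, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(pose_a, pose_b))

def cut_with_crossfade(track, cut_start, cut_end, fade=0.25):
    before = [(t, p) for t, p in track if t < cut_start]
    after = [(t - (cut_end - cut_start), p) for t, p in track if t >= cut_end]
    if not before or not after:
        return before + after

    last_time, last_pose = before[-1]      # sample just before the cut
    edited = list(before)
    for t, p in after:
        dt = t - last_time
        if dt < fade:
            # Ease from the pre-cut pose into the post-cut motion.
            edited.append((t, blend(last_pose, p, dt / fade)))
        else:
            edited.append((t, p))
    return edited

# Example: a one-joint track sampled every 0.1 s; cut out the span [0.5, 1.5).
track = [(i * 0.1, (float(i),)) for i in range(30)]
edited = cut_with_crossfade(track, cut_start=0.5, cut_end=1.5)
```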
And I'm really looking forward to being able to actually dive into these situations and play and create these improv stories that are happening. I think for me it was really starting to tap into a part of my brain that hasn't been really well exercised which is really this stepping into a scene and you really are needing to act and so some people just aren't not going to be very good at acting and so I did get some feedback from people who just kind of felt like they had a bit of a stage fright or weren't able to really give a convincing acting performance for themselves. You know, I think that people already have an issue with recording themselves on video or on audio and then to be able to actually record yourself in VR. I think that it's going to be well suited for some people, but certainly not everybody is going to want to be recording themselves in this way. But I think that one of the things that was really fun was to be able to actually listen to myself afterwards because they were doing some sort of voice modulation. And so I think one of the things that you look at Snapchat is that there's some filters where if you don't really feel like you want to show your face to the world, there's certainly a lot of different Snapchat filters that can occlude or block your face or beautify you in some ways. And I think that just the same, there's going to be a similar effect where ordinarily, if you may not be willing to document yourself on video or camera, then people may be a little bit more willing to jump into a digital avatar and be able to act things out. So it could actually go the other way as well, where it makes it more comfortable for them to be able to do these types of acting scenarios. So that's all that I had for today. I wanted to thank you for listening. And please do spread the word, tell your friends, and leave a review on iTunes. And if you'd like to support the podcast, then become a donor at patreon.com slash Voices of VR.