#850 DocLab: Using Theater to Explore AI & the Future of CAPTCHA with “Artificial: Room One”

Alexander Devriendt is an immersive theater creator with the company Ontroerend Goed, and he was presenting an interactive piece about artificial intelligence called Artificial: Room One. It is an early-iteration prototype that starts to explore the evolution and future of the CAPTCHA tests that we take in order to prove our humanity. When originally tasked with creating a project around AI, Devriendt was surprised to find that AI was both a lot more limited than he had expected and, at the same time, capable of technological leaps he didn’t think were possible. It was this gap that he wanted to explore in his piece, while also using the experience as a mirror for us to reflect upon our own humanity, and how we would go about trying to prove it to other people using the affordances of our technologically-mediated modes of communication.

I was a part of the first batch of users to go through the experience on the night that it opened, and had an opportunity to reflect upon the experiential design process with Devriendt, and then dig a bit deeper into some of the philosophical reflections about artificial intelligence and what it means to be human.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So continuing on in my series of looking at some of the immersive storytelling and narrative innovations coming out of the IDFA DocLab in Amsterdam, today's interview is with Alexander Devriendt. He's a theater maker, and he was tasked with this project by Caspar Sonnen, who said, do you think that you could replace your theater actors with AI? And so that got him down this whole path of looking at where AI is at, and kind of coming into this interface between what's the difference between humans and AI through the CAPTCHA test, which is an acronym that actually stands for Completely Automated Public Turing test to tell Computers and Humans Apart. So he started to dig into this phenomenon of the Turing test, and then was surprised to see, on one hand, how he was expecting AI to be a lot more powerful than it was. But at the same time, there were certain aspects where AI had been much more advanced than he was expecting. And so he played with this idea of creating this whole immersive theater type of piece where you go into this dark room. It's at the IDFA DocLab. You walk down through this park into this building, and there's this computer there. And you start to do this whole one-on-one interaction for about 12 minutes with the computer. And you're asked this series of different questions to essentially prove that you're human, and then get into this deeper dialogue into how you would try to prove your humanity given the continued evolution of these different types of CAPTCHA tests. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Alexander happened on Friday, November 22nd, 2019 at the IDFA DocLab in Amsterdam, Netherlands. So with that, let's go ahead and dive right in.

[00:01:53.220] Alexander Devriendt: I'm Alexander Devriendt. I'm a theater maker. And I have a company, I'm the artistic director and founder of Ontroerend Goed, which is a theatrical company that's been around for, I think, over 20 years. And we're known for several things, but one of them is that we're known for the immersive theater shows that we made.

[00:02:14.836] Kent Bye: So maybe you could give me a bit more context as to your background and your journey into making these immersive pieces.

[00:02:21.484] Alexander Devriendt: There's always a long story trying to make it short, so I'll try to do that. But why do you make the work you make? I've never studied theatre. I studied English literature and Dutch literature, and we had a poetry company. And I think what happened is we were looking for ways outside of the theatre thinking schools, or outside of peers telling you what to do, thinking of ways to talk about an experience or to share an experience. And I remember at a certain point, one of my friends and I said, okay, what if we changed everything from a normal theater show? Like, normally you're sitting there with a lot of people, let's go individual. Normally you see, let's blindfold the visitor. Normally you go together, let's put you in one by one. And normally you're immobile, so let's put you in a wheelchair. And we started from that, from an experience where you're just alone in the dark, surrounded by senses and smells. And that show was called The Smile Off Your Face. And I think it's basically that. How do you think outside of this theatre box where there's an audience and a room? How do you make theatre use its unique abilities, its strengths? Because some see it as a weakness, like you're tied to a physical space, but what if you make it into one of its strengths? Because what can theatre do that no other medium can do?

[00:03:45.508] Kent Bye: And so we're here at the IDFA DocLab 2019, and I just had a chance to see Artificial: Room One, which just launched tonight here at the DocLab. And so maybe you could tell me a bit about how this project came about and what you were trying to do with it.

[00:04:00.757] Alexander Devriendt: Well, it's very much a prototype of an idea, the kernel of an idea, but we wanted to test this first phase. That's why it's called Room One. And the idea for it was, it was Caspar here from IDFA trying to find links between theater makers and other new media and trying to find possibilities, and he put us into contact with the National Theatre, New Storytelling I think the department is called. And so we started thinking about the show, and the first idea was, can we remake a show with artificial intelligence? And it started the conversation from there, but more and more it grew for me into one of the most, I think, inspiring things that comes out of AI, and it's old, but the Turing test is still a very interesting thing for me. It speaks easily to your mind, because sometimes artificial intelligence can be so... The only thing that I always hear about artificial intelligence is, is it good or is it bad? And that's like a very boring discussion. Like, I don't know. We don't know. It's there and it's gonna be there. So for me, these new technologies are not so interesting in a way of making predictions about the future; the only thing that is interesting about them is what they tell us about ourselves. So I wanted to change the Turing test into: what if the computer doesn't have to prove that it's human, but a human has to prove that they are human? Can you do that? And trying to find the first difficulties of that, because some people think the answer is easy, but the more you dive into it, the harder the answer is.

[00:05:40.733] Kent Bye: Yeah, in computer speak, it's called a CAPTCHA, which actually is an acronym for something about proving that you're human. I forget what it actually stands for. It's a long one. Yeah, but you see it a lot with "I'm not a robot," or the Google image test where you have to choose things that are connected to each other, and there's always this bit of ambiguity where you're trying to teach the computer vision things based upon your answers, and so they're taking some sort of statistical approach here to be able to actually test. So they're kind of serving two purposes, which is to prove that you're human by doing a task that's very difficult for robots to do, but in the same process, you are teaching that robot to do that. So you're, in fact, continuing to refine the AI algorithms to be able to get more and more sophisticated. So I know that A Jester's Tale, that happened last year at Sundance, was using the same concept of, like, what does it mean in the future to do this CAPTCHA, where you have these volumetric AIs who are asking you to sacrifice your life in order to prove that you're human, but it goes off the rails to the point where it really starts to beg the question of the ethics of how far we go into having us prove that we're human. And so this whole experience that I just went through was going through a lot of these different moral dilemmas or ethical questions that you're asking me in this CAPTCHA, and a lot about categorizing things. How do I categorize things, and how do people categorize them, and how is language used, but also how would most humans answer those questions? Or, like, really re-contextualizing these questions, which in an artistic context I start to think about a lot more. But then there's always the game of, like, what do you expect to answer in a way that the computer can understand?
So I found myself going through this experience where I know what I would think about stuff, but then there's like, oh, in order to pass this CAPTCHA, then you have to answer in a certain way that they would expect. So you'd have to kind of answer on multiple layers here, which I thought was interesting. And yeah, maybe you could talk about this experience in terms of story and arc, how you were using these interactions and taking people through this thread, and how you started to think about what type of journey you wanted people to go on.

[00:07:59.099] Alexander Devriendt: But the first thread that I followed, and that it started with for me, was the history of CAPTCHA tests. Because it used to be, remember, with the lines and the words that were a bit blurry, and then you have to type them in. And at a certain point they realized that humans were worse at it than computers, because we trained them so much that at a certain point the computer is just better at it. And as you were saying, with the choose-the-storefronts or where-are-the-traffic-lights and all that, suddenly you start to realize it's all traffic, because they're basically just feeding their self-driving cars; they need more information there. And then I followed more and more into that, and now there are patents for CAPTCHA tests that a computer can't help but answer correctly, but a human will always make a mistake. So we've gone from being more intelligent, or better at certain tasks, to worse at some, so proving our humanity by being worse at something. And there's a part in the show where you also follow your cursor. That's now the newest CAPTCHA test: the test is not the CAPTCHA test itself, it's how you got there before that, that they're tracking, because a computer would never make random or strange gestures to get there. So I was putting all this into it. Yeah, for me, it was a fascinating journey to read about, like, wow. But the problem was, you know a lot about these topics, but I also know a lot of people who don't know where it's at. Like Google Duplex. When I talk about it here at IDFA, everybody knows about Google Duplex. None of my friends have ever heard of it. So it's also trying to show how much it is already capable of, and at the same time also throwing it back. So that was a bit the journey, trying to make it encompassing both for the ones from inside and also the ones who don't know the story yet.
But I think the history of, maybe to summarize, the history of CAPTCHA tests, where at the end you have to prove your fallacy? Is that a word? I don't know. It's for me almost fascinating. Is that the last resort to prove that we're human, that we're fallible? I love it. Beautiful also as an idea.
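The cursor-tracking CAPTCHAs Devriendt describes score the trajectory your pointer took before you ever click, since scripted pointers tend to jump in implausibly straight lines while human cursors wander and overshoot. Here is a minimal sketch of that idea in Python; the straightness feature and the threshold are invented for illustration, not taken from any real CAPTCHA patent:

```python
import math

def path_metrics(points):
    """Compute simple trajectory features from a list of (x, y) cursor samples."""
    # Total distance actually travelled along the sampled path.
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    # Straight-line distance from first to last sample.
    direct = math.dist(points[0], points[-1])
    # Ratio near 1.0 means a near-perfect straight line (bot-like);
    # human cursors typically wander, overshoot, and correct.
    straightness = direct / path_len if path_len else 1.0
    return path_len, straightness

def looks_scripted(points, threshold=0.98):
    """Flag a trajectory as suspicious if it is almost perfectly straight."""
    _, straightness = path_metrics(points)
    return straightness >= threshold

# A scripted bot gliding in a perfect line vs. a meandering human path.
bot = [(0, 0), (25, 25), (50, 50), (75, 75), (100, 100)]
human = [(0, 0), (18, 31), (40, 22), (55, 60), (71, 48), (100, 100)]

print(looks_scripted(bot))    # straight line, flagged
print(looks_scripted(human))  # wandering path, passes
```

A real system would combine many such behavioral features (timing, acceleration, micro-jitter) rather than a single ratio, but the principle is the one Devriendt names: the test is the path, not the click.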

[00:10:11.965] Kent Bye: Yeah, well I found myself... there's a certain section where it tells me that I've been silent for 7 minutes and 37 seconds. I was using the computer and using the normal affordances of human-computer interaction: there's just a mouse there and I'm clicking on things, and so I'm not necessarily thinking about trying to speak to this computer, because it's a computer, and I know the approximate level of AI, and how these huge companies like Google and Microsoft and Amazon have very well-trained natural language processing to be able to speak and understand. And so, you know, when it starts to ask me to have this dialogue, then I'm constantly, again, having this meta layer of, like, here's the real answer, and then here's the test to be able to see, well, how good is this AI and how reactive is it? And so I'm sort of probing it to see how reactive it is, just to understand what I'm dealing with. But I also thought, well, this is an art installation. They could have somebody listening to me and then doing a Wizard of Oz and giving me very specific responses. So I didn't know whether or not it was actual AI or a person. I got the sense that it was probably either pretty decent AI or just a person that's there, like, typing in text to be able to respond to me.

[00:11:32.264] Alexander Devriendt: But for me the question at the end is also the question that the computer could have towards you: are you human, and are you talking to a human or a computer? That is basically also the center of the Turing test. There's always a human playing human, and there's always a computer trying to be human. There's always a recipient. So for me the doubt, your doubt, is part of the experience. But look, I'll just have to say, like an AI, and we all are, I'm still learning. So that last bit that you mentioned now, how can I turn this doubt? How can I use this as a strength? Like for instance, was it you who was saying that you were resisting, or giving up, or was it the person before you?

[00:12:16.240] Kent Bye: Yeah, I gave up, because there was a certain part where the computer was like, you have 60 seconds to prove that you're human. And I'm like, well, I give up. I was like, what's the point? Like, what does it matter? You know, like, I'm not going to be able to prove it to you. Like, I don't know what I would say to be able to prove to this CAPTCHA device that I'm human. So I kind of gave up.

[00:12:33.148] Alexander Devriendt: Well, and I loved it when you said that, because that's what I want to investigate here and build further upon, because maybe that's the most human thing to do at that point. Maybe a computer is incapable of giving up. And like I said, that fallacy, we always saw it as, yeah, as a fallacy, but maybe it's also a beautiful strength, and there's also poetry in giving something up. I don't know, but I think all these things are only interesting in a way because we're selfish that way, and that's also okay. What can it teach us about ourselves? I don't want to make a show about AI, but what can AI teach us to be better humans, or more conscious about what we are as humans, just as with animals? What do animals teach us about being human? How similar are we? Some people say we're just also algorithms without a free will, so maybe it's not bad to realize that. So for me, that journey and that part is the interesting journey that I want to build further upon.

[00:13:31.071] Kent Bye: Yeah, when you say fallacy, I think maybe fallible, like having imperfections. I don't know if that's a better word for what you're meaning. Like you're trying to get at the imperfections of humans, the ways in which we're fallible. Fallacy is more like telling lies, or something that's untrue. So fallible is more being imperfect. Yeah.

[00:13:49.448] Alexander Devriendt: I just like the word fallacy more. But okay, you're right. No, it's fallible. It's indeed that one. The incompleteness, the things you can't do, not being perfect, the imperfections are maybe the things that make us human and make us also nice. For instance, one of my favorite movies, Eternal Sunshine of the Spotless Mind, touches upon that topic in a very beautiful way. If the journey would be perfect, would you do it? No. But okay, maybe I'm going too far here, but like I said, what an AI can teach us about ourselves is for me the most interesting thing about making a show that deals with that.

[00:14:27.494] Kent Bye: Yeah, and so what has been the reaction so far? I know it just started tonight. And so I know that people are just starting to see it. So what have been some of the reactions you've gotten so far?

[00:14:36.063] Alexander Devriendt: You're the first one I talked to. So I'm very curious. I'm very curious. No, and we record at the end. We record some of the... So I'm also curious to see what people's responses will be. And like I said, what do people really think? Because that's the thing with an AI. If I ask anybody, what's the biggest difference between a computer and you, there is always something that they will think is the difference. Whether it's love or the ability to do nothing or blood. Somebody said, we have blood, I can drink, you can't drink. There's always this thing where you can prove to a human, no, that doesn't necessarily make you human. But then afterwards, there's always this little thing that a human is like, yeah, but you can't do that. And whether it's something they really can put into words or just an abstract idea of something, I love it when we always say, yeah, yeah, you can say that, but no, still. It's like the idea of free will. Every time they show that if you really think this is true, there is no free will, somebody will say, yeah, I understand, but still. And that's what I like about it. So I want to have a show where I take the ground beneath your feet and you always think, yeah, but I have a ground to fall on, because this is what makes me unique. And if I've taken away the ground enough, maybe the last bit you stand on, you'll be like, hmm, maybe I won't be able to keep on standing on this one. That is a nice way to be, I think.

[00:16:04.230] Kent Bye: I think one of the things that makes humans human is the ability to resolve paradoxes and ambiguity. Because there's certain ways in which, when you're interacting with other people, you're able to collapse a complexity to the point where you're able to resolve what the ambiguity is. But computers have different ways of solving that problem, by having a massive amount of data to be able to handle those paradoxes. Like, in computer vision, they'll say, well, we think it's like 70% this and 20% this, and so there's still a level of that ambiguity. But I just feel like the human capacity to be able to handle that paradox and ambiguity... it feels like there's something about common sense and common sense reasoning, and different problems with AI, that I keep getting back to: this ability to resolve ambiguity through context.
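The "70% this and 20% this" kind of answer Bye mentions is literally how most classifiers report their output: raw scores are normalized into a probability distribution with a softmax, rather than collapsed into one resolved answer. A minimal sketch, with labels and scores invented for illustration:

```python
import math

def softmax(scores):
    """Normalize raw classifier scores into a probability distribution."""
    # Subtract the max score for numerical stability before exponentiating.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from an image classifier (made up for this example).
labels = ["dog", "cat", "fox"]
scores = [2.0, 0.8, 0.1]
probs = softmax(scores)
for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")
```

The model never "decides" it saw a dog; it hands back the whole distribution, and the ambiguity Bye describes is simply left in the numbers for downstream code (or a human) to resolve.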

[00:16:55.099] Alexander Devriendt: I'm pretty confident that in 10 years' time that's solved. Like, at the speed... Like, I saw this YouTube video about the most human human. That's a person who... It's from, I think, 2008, like only 10 years ago. And he is the one who always goes through the Turing test. He's the human. So when there's somebody who's testing a computer, he plays the human. And he's very good at being a very human human. And I love it. But he showed a picture to an audience. He did a TED talk or a keynote. And he showed a picture of a chess problem and a picture of a dog. And he asked the audience that night, what do you think a computer has the most difficulty with? And people still answered solving the chess problem, whereas they thought a computer would easily recognize the dog. Look, the idea that the computer now easily does these simple tasks of recognizing a dog or recognizing people's faces, 10 years ago we didn't even believe that it would happen. So what you're saying about ambiguity, I think it's, how do you call that, the last thing you're holding on to, but it will easily dissolve the closer we get to it.

[00:18:08.424] Kent Bye: I find myself trying to prove to you the difference between humans and computers. But I think these are some of the hard problems of artificial intelligence: common sense reasoning, storytelling. There are certain things that computers actually have a hard time with; creating games is another area. So there's an issue of AI Magazine, which came out around 2016 or so, that went into the Turing tests of the next 50 years, kind of looking at the frontier problems. Because the Turing test as we understand it is relatively simple in terms of the actual AI problems, and so within the AI community they've had to think a lot more deeply about what the next-generation Turing tests are. And so being able to resolve ambiguous pronouns within a sentence, what you're actually referring to, is very easy for humans to do. But that same level of resolving linguistic ambiguity, in terms of what the pronouns are referring to, goes back to this deeper context, something that we have from a whole life that we've lived. And so it also gets back to common sense reasoning and different stuff like that: storytelling and developing games. So there are certain frontier aspects within AI where, you know, eventually we'll get there. But the advantage of using interactive experiences or stories is that you suspend your disbelief by entering into the story world, and they're able to control the context in a very specific way. So you're able to cheat what the AI can actually do relative to what is called artificial general intelligence: being able to just speak openly and naturally about anything, and having a generalized intelligence that is able to respond to you.
We're quite a long way away from that AGI, but you're able to bootstrap what is possible with the technology today within the context of these immersive experiences and games, to allow us to suspend our disbelief and have experiences that give us the experience of having an AI, even if the AI isn't actually that good yet.
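One concrete family of next-generation Turing tests of the kind Bye alludes to is the Winograd schema: a pair of sentences where changing a single word flips which noun an ambiguous pronoun refers to, so resolving it requires world knowledge rather than grammar. The sketch below uses the classic trophy-and-suitcase pair, with a deliberately naive "nearest noun" resolver (invented here for illustration) to show why surface heuristics get exactly one of the pair wrong:

```python
def nearest_noun_heuristic(sentence, candidates):
    """Naive resolver: pick the candidate noun mentioned closest before 'it'."""
    pronoun_at = sentence.index(" it ")
    # rfind gives the last occurrence of each candidate before the pronoun;
    # the candidate with the largest position is the nearest preceding noun.
    return max(candidates, key=lambda noun: sentence.rfind(noun, 0, pronoun_at))

# Classic Winograd schema: one word flips the pronoun's referent.
pairs = [
    ("The trophy didn't fit in the suitcase because it was too big.", "trophy"),
    ("The trophy didn't fit in the suitcase because it was too small.", "suitcase"),
]
for sentence, gold in pairs:
    guess = nearest_noun_heuristic(sentence, ["trophy", "suitcase"])
    print(f"{sentence}\n  heuristic says: {guess}, correct answer: {gold}")
```

The heuristic answers "suitcase" both times, because the two sentences are identical up to the pronoun; a human flips the answer on "big" versus "small" using knowledge of how containers work, which is exactly the contextual resolution Bye describes.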

[00:20:16.304] Alexander Devriendt: But that's also the difficulty I had with, for instance, movies like Her or Ex Machina. The fictional idea of AI will always be more intelligent, and that becomes the public opinion about AI. For instance, Her, the movie from Spike Jonze, that is what people think AI is, because the fiction is so strong that we forget about the reality of it. And what you're now referring to, the difficulty with pronouns and all that, I understand what you're saying, but being able to even name the problem... For me, the interesting thing is not, will the AI solve that or not? I think it will eventually. But what is my humanity if you would say, like, it will never be solved, and that's now the hardest case? So being ambiguous is the most human thing now, that proves that I'm human? I think it's only now. I think when it has been solved, for me it's interesting what comes next. And what comes next we can't predict, or it will at a certain point be indistinguishable. But at the same time you can do the same with animals. We're more and more realizing that we're not so different. It's only recent that... So we've learned more about ourselves by realizing how close animals are to us. And couldn't you do the same with AI? The closer it comes, the more you realize who you are? For me that is it, if I'm zooming out and not focusing on specifics.

[00:21:49.032] Kent Bye: Yeah, and one of the things that I really enjoyed about your experience was that I would be presented with a specific question, and then you're kind of making an argument through a series of different interactions. You're just like, well, how different are you from a computer? Are you a computer? And you're like, obviously no. But then you're like, well, comparing a human to a computer, which one needs energy? Well, they both need energy. And so there's a series of questions where it's like, oh, well, they both need this, they both need this. And then you sort of go through, and then you're like, oh, wow, am I just a computer? Until it got to a certain point where it started to be more about, I forget what it was, but it was one where I was like, clearly this is a human thing and this is a computer thing, where it did start to diverge. And it wasn't just that I was indistinguishable from a computer, which is kind of the argument. But it was fun to go through that line of questioning and then answer a certain way that was very extreme, but then having to deal with the ambiguity and the paradox by saying, well, actually, both are true here. And you're kind of stepping through. And I liked that process of revealing the ambiguities within even just, like, saying, is this food? And you have a lamb that's an animal, and you're like, well, it's not dead, so it's still an animal, but is it food because we eat it? So these questions start to blur the lines of the categories that we have, and really start to question what we believe, while being critical at the same time. So having this ability to have that doubt and critique, but also being credulous and able to actually believe. And I feel like belief is something that feels like a human quality as well, but also the mixing of the belief and the doubt, and how we navigate what we believe and what we think is true.

[00:23:31.893] Alexander Devriendt: Yeah, and at the same time, you're teaching that to a computer. By you saying that and having your opinion on that, you're also teaching these emotions, or this ambiguity, to a computer. So for me, that is also, like, when you have indeed the chicken wings, like, is that pain? But again, the difficulty is, you are somebody who's thinking about it, but for some people, thinking about energy, do we both need it? I think the person after you just pushed the energy all the way to herself, like, wait, wait, wait, you haven't thought this through yet. So it's finding a balance between touching upon something and making you realize, but for somebody who already thought a lot about it, how can I make it still interesting for you? So I'm glad you're touching upon these things that we put underneath. For instance, does this person exist? We ask it at a certain point. We show these pictures. None of them exist. But you have to point out that they're human to prove that you're human. And then you have the voice. And then I ask you, does your face say something about you being human? So I'm also trying to mix these things. It's that journey of teaching something to a human, and in teaching something, learning something about yourself. I think that is something that, for instance, the best teachers have. And I think, for instance, I just became a parent, so that is also something that is happening. And I think that's a strong way of being human, and it's one of our strengths. So while we're teaching things to computers, we learn some things about ourselves. And I think that's maybe, again, we are having this talk because you talked to a computer, and I like that. I like that it does that, and it's selfish, because it's only about humans. But that's okay. Worrying about climate change is also a selfish thing, because the planet doesn't care. We just can't continue.

[00:25:27.087] Kent Bye: Well, as I take a step back and look at this larger discussion about AI and humans, I think, for me, the answer gets back to consciousness and what the nature of human consciousness is. And it's a bit of an open question as to whether or not we will ever have machines that have their own consciousness. I suspect that there's something about biological organisms, that if you're able to have something that is living and breathing and alive in a certain way, like biological computing, like what is the future of these synthetic beings, constructed either through nanotechnologies or using organic things that actually have life, and then you have artificial intelligence within that, then does that have a consciousness? I feel like there's something about silicon that only has a representation of consciousness, that we're able to project our consciousness onto and it's able to reflect it back to us like a mirror. I'm more skeptical as to whether or not we're going to have conscious AI, or AI that has its own phenomenal experience. But at the same time, philosophically, this is a question where, can I prove that I have consciousness to anybody other than myself? And I think the answer is, no, I actually can't. And so when I was doing this experience, it's like, well, I know that I have my own phenomenal experience, but I can't prove that to anybody outside of me, because this is the only thing that I know is my ground truth. And there are people who philosophically are in the realm of eliminative materialism who would say that even your consciousness is an illusion, that it's just the neurons firing, and that if there's a naturalistic world that is the base reality, that there's nothing beyond space and time, then there's only physical objects and physical stuff, and your consciousness is an illusion. Another perspective would say, well, actually, maybe consciousness is the fundamental foundation of reality.
That's the foundation of what reality even is, is consciousness, and all the physical reality is on top of something that is fundamental to all of these other things. And so that question is a philosophical question, but when you interrogate this question deep enough, you get down to: where does consciousness sit? Is it emergent from the physical properties of reality, or is it a fundamental fabric of reality that goes beyond space and time, or more of a panpsychist, Tononi-style aggregation of consciousness? So this question is a philosophical one. I don't know if AI is going to ever be able to solve it. Even if we do create these intelligent beings, then we will never know whether or not they have consciousness. We've got to get into the reverse of the Turing test, which is: does this AI have a phenomenal experience? Have we been able to generate consciousness within that? So I feel like, as we have these CAPTCHAs, then we have this reverse as we look at the future of AI. We may never be able to answer that question, because the only entity that will be able to answer it will be the thing that's constructed itself. And then, is it just repeating back what it's learned from what humans say about the nature of consciousness? So that's where I get to when I think about this experience and start to unpack it down to its core level. It's like, where is consciousness, and what is the nature of consciousness? And will we ever be able to create AI that has its own phenomenal experience?

[00:28:33.782] Alexander Devriendt: And we probably will, and what will it teach us about us then? What makes us then human and still different from that? Because I always believe there will be a difference. And we will always learn more about ourselves by realizing this difference. And indeed, we're also limited in our thinking: when the clock was invented, the mind was a clock. When computer software was there, it's software and you have hardware. Every new technology makes you look at everything in a different way, and now the whole computer is like a quantum computer, or your mind is like... We only have the tools that we invented to talk about ourselves, or to mirror ourselves, or to frame reality by words and by these representations. But I'm also still lingering on, again, it's close to the free will. At a certain point, I saw this funny little video about a guy showing little cars and explaining a little bit how artificial intelligence worked. And I can't remember the exact phrasing, but at a certain point, at the end, you saw a car driving, and he showed the whole process. And he said: realize that the AI doesn't know it is a car, and doesn't know distances, and doesn't know anything. It's just an input. And the most successful variant is now what I showed to you. And I was like, yeah, but that's me. Like, I'm also just that. I'm the most successful variant of genes and electricity and neurons and whatever, making me walk here and be successful and just exist.

[00:30:16.696] Kent Bye: Yeah, and that's what I really enjoyed about the takeaway from this experience as we're having this conversation: maybe the most important thing that we can get from these technologies is that they're a mirror to ourselves, and we're able to learn more about what it means to be human as we create these technologies and compare ourselves to them and then interrogate them and have these types of conversations. Because, you know, with the advent of the immersive technologies, with virtual reality, augmented reality, with AI and cryptocurrency, all these exponential technologies, quantum computing and how that starts to play into all this as well, you start to blend all these things together, and it's going to start to allow us to create replications of these entities. Jaron Lanier, from virtual reality, suspects that it's possible that when we're in a virtual reality experience, there will always be some part of our mind that knows that it's a simulation, and that maybe we're going to always be able to improve our perceptual systems so that we'll always tell the difference between the virtual and the real. I think it's a very provocative idea, just the fact that with the evolution of these immersive technologies we'll just continually be blazing new neural pathways in our brain and be able to recognize the deep, subtle patterns, and that we may be fooled by these deepfakes and these synthetic voices now, but what if we are able to cultivate our perceptions to be able to actually hear the difference? You know, like even with Auto-Tune, people can start to subtly hear when things have been autotuned. They sort of cultivate a certain hearing for it and appreciate the fractal noise and the little imperfections of things. And when that gets erased out, maybe we're in this process where it's a temporary aspect.
And then the alternative is that, well, we will be able to trick ourselves into believing that these immersive realities are completely real, and we'll be able to just suspend our disbelief like we do going into a movie, but we will not be able to tell the difference between the virtual and the real. So that, I think, is a bit of a thought experiment I keep coming back to: maybe these technologies are just improving our perceptual capacities, and as a collective humanity, we'll just be able to tune into more subtle aspects of the nature of reality.

[00:32:22.779] Alexander Devriendt: And you basically could say, like, indeed, instead of saying the technologies just give us new insights into what it is to be human, they also change what it means to be human.

[00:32:35.090] Kent Bye: Well, yeah, just to wrap things up and final question. So for you, what do you think the ultimate potential of these immersive technologies might be and what they might be able to enable?

[00:32:46.905] Alexander Devriendt: For me as a theatre maker, when I look back at my own medium, let's compare it with painting. My father was a painter, and so I always have a heart for it. But when photography was invented, suddenly the whole idea of painting had to reinvent itself. Cubism started, and all these abstract paintings started. Art just blew up, and I think these new technologies sometimes can make you blow up and suddenly discover all new possibilities of your own medium, and in that way have new ways of maybe telling the same stories. Maybe. But by telling them in a different way, from a different perspective, from a different medium, there are new insights to be gained every time. And that's... I don't like the newness of technologies. I like to use something that's already there, like it's almost invisible, but you just use it for a new story, because otherwise it's about the gimmick of the thing. But I like how it opens up your mind to what's possible, and how it's capable of giving you new insights into ancient problems. Awesome, well, thanks so much for this project and for this conversation.

[00:34:01.306] Kent Bye: So, thank you.

[00:34:01.966] Alexander Devriendt: Thank you. Thank you.

[00:34:03.006] Kent Bye: Thank you. So that was Alexander Devriendt. He's a theater maker and the artistic director and co-founder of the theater company called Ontroerend Goed. So I have a number of different takeaways about this interview. First of all, the overall context is that you kind of walk through this dark park into this room. There's a computer there. You're isolated. You sit down, and you start to have this whole interaction with the computer. And you're essentially taking a quiz, a series of different CAPTCHA tasks. So it's very much stimulating your sense of mental and social presence. You're trying to solve these different puzzles, but you're also trying to understand the meta-narrative of what the story might be. And the story is essentially trying to, you know, make this argument as you're taking this CAPTCHA test. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. And so you find yourself there trying to convince this computer that you're human. And at some point, you start to actually talk and speak and see these different arguments that have been made. For me, I'm fairly familiar with some of the different advances in AI technologies. And so there are things in there like these completely constructed and generated faces, and it's asking me whether or not those people are real. You have the different voices from Google Duplex, and it's kind of testing that balance. And it gets pretty philosophical at some point, because you're sitting there, and I found myself talking to the computer, trying to convince it that I was human. And like, what kind of arguments would you make? If you were asked that, if you were to have some sort of interaction online and you were trying to prove your humanity, then what would you do?
So for me, I sort of had a meta level of just going, well, what's the point? I kind of gave up, but at the same time, I didn't completely give up. It's not like I got up and walked out of the experience because I was so frustrated. It was just like, ah, not really trusting that I was going to be able to actually do it, or that there was even an end case where I could do it. There's a certain amount of doubt, because as I'm doing this experience, the different types of reactions that I'm getting make it feel like it's either some really, really good AI, or they're just behind the scenes doing a chat. And I suspect that's what they were doing, because this is an interactive experience. And so at the heart, it feels like you're trying to exercise your mental capacities to cross this threshold boundary, to prove the essence of your humanity. That's the fundamental character of the experience. And, you know, the thing that I'm taking away is this experience and exercise of trying to really articulate the essence of my humanity. For me, it made me think about, well, what arguments do I make? And whenever you start to do this, you have a certain amount of context, like, okay, what is the certain amount of understanding that you're going to assume? You know, are you going to start to dive deep into the philosophy of consciousness with an AI? It's not usually a good approach, because if it is actually an AI, then there are certain things that you're kind of assuming that it can and cannot understand. And so I found myself evaluating different things at many different levels.
You know, if you take these different types of tests, or find yourself in this situation, or project it out into the future, it starts to get really interesting to see, okay, if this does continue to happen where we need to do these types of stress tests to find out whether or not we're interacting with an actual human or a robot, then what is the continual evolution of the CAPTCHA test? And as the Turing tests continue to grow and evolve, like AI Magazine looking at these next-generation AI tests, then is that going to be a further evolution of us needing to continue to find these different tests to prove our humanity? And, you know, he said that, actually, if you do the test too perfectly, then it may not think that you're human. There's a certain amount of being fallible that is maybe a part of being human. And also it was interesting to think about, well, what if giving up is the ultimate human thing that you could do in a test like this, and maybe a computer would never give up and would just keep on going and going.
And so, yeah, to me it was an interesting experience, and it feels like still just a very early prototype. They're still figuring out what it means to take this level of doubt that I might have through these experiences, or to unpack these different interactions and these different tests, and to either make it dynamic and change based upon how people are doing, or to really choose what type of message they're trying to give. Because in some sense, the underlying mission is to try to undermine the ground truth that we think that we're standing on. By going through the series of these different tasks and these interactions, it's kind of like a series of different arguments that are being made. As you're answering these questions, you're going through these different choices that you're going to have to make, and you kind of see that there's a deeper intention, deeper points that are being made, and that there's a fundamental attempt to undermine the grounding that we're standing on, and to perhaps make us a little bit more skeptical or aware, or a little bit more self-reflexive about this relationship between man and machine. As this continues to evolve, as he suspects it will, we're going to have this mirror that is also continuing to evolve, and we're going to continue to look at ourselves and to understand the difference between what it means to be human and what these machine learning and AI algorithms can do. Anyway, it's a fascinating experience, and I look forward to seeing how this continues to evolve, and also just to see how people with a background in poetry and theater are starting to address these different types of issues and to try to get to the core essence of what it means to be human. So that's all that I have for today. And I just wanted to thank you for listening to the Voices of VR podcast.
And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon listeners like yourself to become a member of the podcast. And this could be your own sort of Turing test: if you join up to the Patreon, then I will trust that you're human. So just $5 a month is a great amount, and it allows me to continue to do this type of coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
