In Event of Moon Disaster uses AI deepfake and speech synthesis technology to produce a Nixon speech that never happened. On July 18, 1969, Bill Safire wrote a contingency speech for President Richard Nixon to read in the event that something went wrong with the Apollo 11 mission and astronauts Neil Armstrong and Buzz Aldrin were left stranded on the moon to die.
Immersive audio artists Francesca Panetta and Halsey Burgund were captivated by the speech and wanted to use the latest AI technologies to bring it to life. They wanted to raise awareness of how these new technologies fit within a long tradition and spectrum of misinformation and disinformation tools, and so they collaborated with Canny AI on the visuals and Respeecher on the speech-to-speech synthesis of Nixon's voice, training the model on thousands of clips recorded by a voice actor.
Watching the piece at IDFA DocLab was surreal: they recreated a 1960s living room, complete with an authentic television from that era, on which you watch footage of a manufactured moon crash followed by a completely fabricated, synthetically created Nixon speech that never actually happened.
I had a chance to talk with the artists Panetta & Burgund about their design process, their deeper intention behind the work, the philosophical implications of being able to modulate truth and reality, and the ethical implications of deepfake and synthetic speech AI technologies.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s a brief excerpt of the In the Event of a Moon Disaster Nixon speech.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR Podcast. So I'm going to be starting a new series based on my coverage of the IDFA DocLab. That's the International Documentary Festival in Amsterdam, and the DocLab is the new media program started by Caspar Sonnen in 2007, looking at ways of doing interactive media and immersive media using virtual reality, augmented reality, artificial intelligence, and video projection. It's really one of the most innovative, experimental festivals out there, featuring artists and their latest work. A lot of other programmers go to that festival and start to program some of the pieces that are there. The Collider, for example, was at the IDFA DocLab last year in 2018 and then showed at Tribeca. So it's a place for artists to come together and show the latest in media that's exploring different aspects of reality. Documentary, as Grierson defined it, is the creative treatment of actuality. So in this first episode I'm going to be diving into this very interesting piece called In Event of Moon Disaster by Francesca Panetta and Halsey Burgund. They manufactured an entire speech that never existed using deepfake technologies with AI. They wanted to show the dangers and potentials of what could be done with the technology that's out there. They took a look at this speech that was written in case of a disaster on the Apollo 11 moon landing: if there had been some sort of disaster and the astronauts weren't able to come back, there was actually a speech written for Nixon to read, and that speech was never given. So they took that speech and wanted to actually present it, and because they are immersive artists they created this entire installation, using old televisions and all this retro equipment to build up an entire set there at the IDFA DocLab. So you sit down in this 1960s living room, you push the button, and you see this whole sequence of Apollo 11 leading into this kind of imaginal, recreated, deepfake speech that never actually happened. So I had a chance to talk to the creators of this piece to unpack it a little bit more, talk about the ethical implications and their educational impulse for why they did it. So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Francesca and Halsey happened on Friday, November 22nd, 2019 at the IDFA DocLab in Amsterdam, Netherlands. So with that, let's go ahead and dive right in.
[00:02:35.247] Francesca Panetta: I'm Francesca Panetta. I'm a creative director in a new center at MIT, the Center for Advanced Virtuality, working in all kinds of different technologies.
[00:02:47.030] Halsey Burgund: And I'm Halsey Burgund. I'm a sound artist and technologist. And I don't work for anybody other than myself right now. And we're here at IDFA in Amsterdam as co-directors for a piece of ours called In Event of Moon Disaster.
[00:03:02.401] Kent Bye: So maybe you could give me a bit more context for how this project came about.
[00:03:06.712] Francesca Panetta: Sure, yeah. The starting point is a speech that was written for Richard Nixon if the moon landing, the Apollo 11 moon landing, had gone badly and if the astronauts hadn't been able to make their way back to Earth. And so his speechwriter, Bill Safire, wrote this very moving speech for him to read on television if this had happened. And so we found this speech and decided that we would use the latest artificial intelligence technologies to have Richard Nixon actually reading this on TV as an installation here at IDFA. And so that is just one part of the larger installation here, but it's the kind of core element.
[00:03:47.964] Kent Bye: And where did you come into the project then?
[00:03:50.776] Halsey Burgund: Well, Fran and I were brainstorming from the start about how to bring this speech back to life and we had a whole bunch of different ideas and figured that once we hit on the deep fake idea, we were very excited about bringing this speech back to life. The speech is incredibly beautiful, very moving not only about the individual astronauts who were, this speech was designed to be delivered as if they were still alive on the moon but not able to leave the moon, so they were going to die. So it's this beautiful sort of elegy to their lives, but also to the notion of exploration, of human exploration of new worlds and looking up at the sky and seeing possibility and all of that. So it was this amazingly beautiful speech and we were like, wow, it's sort of, of course we're glad the tragedy didn't happen, but at the same time, there was sort of this sense of sadness that the speech never got to be delivered. So here we go, we step in and we bring Nixon back and allow him to quote unquote deliver the speech and yeah.
[00:04:48.154] Francesca Panetta: When you say, how did we expand that idea: because we work in immersive technologies, we didn't want just to make this and put it up on YouTube. Most people who are working in deepfakes, or trying to use this technology, are making stuff, putting it up on YouTube, and that's it. But Halsey and I work in the context of film festivals and installations and museums and galleries. And so the idea of just making a one-and-a-half-minute speech and slapping it onto YouTube wasn't necessarily the approach that we would take. So right from the beginning, we were like, we want people to walk into a 1960s living room, and we want people to have the TV there and feel like they're stepping back to July 1969. And so what we have here at IDFA, and we worked with the people here really closely for the last few months, is something designed to really feel like you are in that era. We also made a newspaper which has a whole load of additional information about deepfake technologies: what deepfakes are, how they're made, how we made this piece, the difficulties of deepfakes, the problems around democracy, and also the work that's being done in terms of detecting deepfakes. So we wanted something that was really much wider than just a deepfake slapped up on YouTube. The film actually starts with the Apollo 11 mission taking off and going into space. The astronauts are doing absolutely fine. And then the Eagle actually kind of crashes onto the moon. It cuts to black. And then Richard Nixon delivers his speech. And I guess that's the way that we would approach something, to try and find this kind of quite rich story world that people can go into.
[00:06:30.890] Kent Bye: Yeah, I was really struck by being able to sit down in this installation, and I really did feel like I was time traveling back, because there's the phone and the TV and all the art and the commercials on TV. And so you have a little box with a red button; you push the button and then it launches into the whole launch sequence, kind of like if you were watching the launch of Apollo. Of course that happened over a number of days, so it's time compressed, digested down into an edited version covering the launch and the crash of Apollo 11, and then it gets into the speech. And when I was listening to the speech, I was struck that if I were to just take a step back, not knowing anything about it, it's super convincing, both the video as well as the audio. And as I'm watching it, I know it's a deepfake, and so I'm trying to hear it. I guess there's a quality, a richness in the voice, that I would expect in real audio, and then there's a certain hollowness that's computer generated. But that's maybe just because I listen to so much audio as an audio engineer, editing all the time, that sometimes I can notice what happens when you do an audio filter and a lot of the richness of a voice gets taken away. So that was the only thing I could really notice, but that could also be from old recordings that have already filtered that out anyway. And so if I were to just look at it, I may not have known at all that this was completely manufactured. So maybe you could talk a bit about the process for how you actually pulled this off, to create this illusion of a speech that didn't actually happen. How do you recreate that?
[00:08:00.795] Halsey Burgund: Sure, yeah, there's many parts to this, and as Fran alluded to, it was a lot more difficult than the popular conception might be, with some apps where you can just upload a photo of yourself and we'll slap it on these famous movie scenes and make it seem like you're in them yourself. Those are fun and whatnot, but in this situation we wanted to really create what you experienced, which is as real a situation as possible, using the most recent technology. So we worked with two companies: one company to do the visuals, the video dialogue replacement, which is a company called Canny AI, and they're out of Israel; and then we worked with a speech synthesis company called Respeecher, who builds synthetic voices. The voice process turned out to be much more in-depth, and I can explain that a little bit, and you can handle the visual stuff. So basically, the way to create a synthetic voice is an AI deep learning process that they use. I don't know what goes on inside that black box, but there's a lot of training that needs to happen. You need to train the system to produce Nixon's voice, and then you actually go ahead and produce the speech you want. The method that Respeecher uses is called speech-to-speech. What that means is you give an audio file into this model of what you want the model to say, what you want Nixon to say in our case, and then the model will output the same performance as the input but with a different voice. So it kind of maps the spectrograms, essentially. But in order to do that, we need to produce the model, and producing the model requires a ton of training. And as you alluded to before, you need audio recordings in order to train this. The audio recordings we have are from Nixon delivering speeches back in that era, and they're not of the greatest quality, and that causes some challenges. But essentially, we had to get three hours of Nixon delivering speeches in the same kind of performance that we want, a speech to the camera, which is the type of performance we want. And then Respeecher broke that down into tiny little snippets, one to three second snippets, divided on pauses or whatever in the speeches. And then what we had to do was get an actor to deliver those speeches in the same manner that they were delivered by Nixon. We would sit there at a computer, press play, Nixon would say one to three seconds of something, and then we would press record and the actor would record that same clip, not trying to be Nixon. It's very important not to try to impersonate Nixon. It's important to capture the performance of the way Nixon spoke and the arc of the pitches and whatnot, but not try to impersonate. We did that thousands of times, 2,000 clips, I think. Our poor actor, Lewis D. Wheeler, who was amazing, stuck with it, and we created this parallel set of data: one recording of Nixon, one correlating recording of our actor. That all went into the AI black box, which we don't really understand, and then that enables us to take another clip of Lewis speaking, delivering the speech, and output the same speech, same performative qualities, but delivered in the voice of Nixon.
So it was a very in-depth process, and we learned a whole lot during it. We went with the speech-to-speech approach because of the way it enables you to keep and retain that performative aspect of it. If you go text-to-speech, then some of the performative aspects are handled differently, so we didn't do that. That might have been just as good, we don't know, but this is the approach we took. The visuals, Fran can explain; those were maybe a little less in-depth.
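To make the parallel-corpus idea concrete before Francesca turns to the visuals, here is a minimal, hypothetical sketch of the kind of data preparation Halsey describes: splitting archival Nixon audio at pauses into one-to-three-second snippets, each of which a voice actor then re-records, yielding (Nixon, actor) pairs for training a speech-to-speech voice conversion model. It uses the librosa and soundfile Python libraries and made-up file names; it is an illustration, not Respeecher's actual tooling.

```python
# Hypothetical sketch: split archival audio at pauses into 1-3 second snippets
# that a voice actor can re-record one by one, producing a parallel corpus for
# speech-to-speech voice conversion. File names are made up for illustration.
import librosa
import soundfile as sf

def split_into_snippets(path, out_prefix, top_db=30.0, min_len_s=1.0, max_len_s=3.0):
    """Split a recording on silences and keep snippets roughly 1-3 seconds long."""
    y, sr = librosa.load(path, sr=None)          # load audio at its native sample rate
    snippet_paths = []
    for i, (start, end) in enumerate(librosa.effects.split(y, top_db=top_db)):
        duration = (end - start) / sr
        if min_len_s <= duration <= max_len_s:   # keep only snippet-sized segments
            out_path = f"{out_prefix}_{i:04d}.wav"
            sf.write(out_path, y[start:end], sr)
            snippet_paths.append(out_path)
    return snippet_paths

# Hypothetical usage: each Nixon snippet is paired with the actor's re-recording
# of the same snippet; the list of pairs is what a voice-conversion trainer consumes.
nixon_snippets = split_into_snippets("nixon_archival_speeches.wav", "nixon")
pairs = [(n, n.replace("nixon", "actor")) for n in nixon_snippets]
```

The training step itself is the black box Halsey mentions; this sketch only covers the corpus-building stage he walks through.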
[00:11:21.862] Francesca Panetta: Yeah, the visuals for us were much, much easier and much quicker. We used our same actor, Lewis D. Wheeler, and we filmed him reading the speech, the contingency speech, and a couple of minutes more. And really, that was it. We sent that off to Omer at Canny AI, and within a couple of days he had delivered us back the speech with Lewis's voice on it, but with Nixon mouthing the words. We did have some limitations. Omer told us that we couldn't have any visual movement in the original source material. We had set our hearts on using the resignation speech of Nixon because it was definitely the most emotive visually. It really looked like he cared about what he was talking about; he looked visibly moved, whereas the Vietnam speeches didn't have the same amount of expression on his face. So we really liked this resignation speech, and it had this wonderful setting at the beginning with the flags on either side, and then it zooms in really close on his face. And Omer said no, we can't have any movement at all, you need a locked frame, you can do this in post-production. The other thing he said is that every movement of the body, of the face, blinks, moves of the eyebrows, page turns, all of that will be kept. The only thing that's going to change is the movement of the lips. And so we were basically trying to line up a speech that was never read with another speech, so that every gesture of Nixon aligned with this speech. Maybe he's moving his hands or looking animated when in our speech he's resting between paragraphs, and so on the timeline we're shifting things up and down trying to make it align, and it's kind of impossible really, because there are so many sentences and paragraph stops in the way that we talk that it's never going to absolutely align. In the end, Omer did manage to use that beginning bit of the resignation speech. I don't know what was stopping him to begin with and what he magically did within his different AI system, but he sent us back the file and it was the beginning of the resignation speech with this great zoom-in. So we did end up with what we wanted in the end.
[00:13:47.631] Kent Bye: I know with projects like this, there's always the process of producing the sausage and going through the mechanics of the creative process, but when was the moment when you were able to actually stop and watch it and see the impact of this completely fabricated speech that you were able to create, and what was that like?
[00:14:06.362] Halsey Burgund: Wow. I think there were several moments for me, but really the main moment was actually when I arrived here at IDFA. I had seen the speech, of course, many, many times before; I'd watched the video hundreds of times as we were editing it. But when I got here, we had the entire set. Like you were saying before, it kind of feels like you're stepping back in time into this other era. Critically, I hadn't seen it being delivered on an actual old television before; I'd always seen it as video on a computer screen, and that makes a huge, huge difference. We have this TV that was somehow still working from 1966 or something like that, which the IDFA folks procured for us, and they're running this digital source obviously through various converters and getting it into that old TV. There's a smoothness and a sort of analog beauty to it that is very different, but to me makes the whole thing that much more convincing. So when I saw it here for the first time, sitting on the couch with the blast-off button in front of me and the newspaper sitting on the coffee table and the flight manual that we printed out, the actual flight manual from the Apollo 11 trip, and the copies of the speech and all the context, I felt like, wow, people might actually believe this. We don't want them to believe it forever. We want them to believe it and then realize, oh, it's not true, and this is how we did it, and this is how it's fake. But we want them to say, oh my gosh, this is a situation where I could be fooled. I could be fooled by the technology that's out there, and I better be a little more careful about it.
[00:15:39.603] Francesca Panetta: I've been showing various prototypes to people all the way through this, which is something that I always do and think is really important. And one of the people I showed it to said, oh, so actually, Nixon did record this then, just in case this happened. And I was like, wow. Oh, my god. Even though I've described this project and said this is about misinformation and deep fakes, they still really believed it. So there's a few bits of feedback I've got where I thought, this is pretty impressive.
[00:16:07.954] Kent Bye: Well, having worked at the Guardian newspaper, you used to be in the news business. And so now that you're at MIT, you're able to explore with these different creative projects and look at these different applications. But I'm just curious about some of your deeper thoughts on what is truth, what is reality as we move forward with this technology, and what can we trust out there unless we see it ourselves?
[00:16:31.052] Francesca Panetta: Yeah, I mean, the question of what is reality is something that philosophers have been discussing for thousands of years, and something we've been thinking about a lot, and that is not an easy, quick question. And I think that the idea of crusading for absolute truth and absolute reality is not what we're trying to do, or even think exists. You know, I think truth is socially constructed, and it's also nuanced. Even in journalism, all the work that we do is always going to have an element of perspective to it. So I don't think we're trying to say you must know the truth, you must separate the truth from falsity, because I think that's a little bit too black and white. But I think that there are techniques of very conscious deception being used at the moment, and we're trying to point to people who are knowingly deceiving you by using technologies such as deepfake technology, but also other types of misinformation. So, I mean, our project is about deepfakes, but it's about misinformation in general. We have used all kinds of techniques. We reversed the Eagle footage, so instead of taking off from the moon, it crash-lands. We sped it up, we edited it, we used all kinds of audio editing techniques to make it sound like there was a disaster happening. We want people to be looking at the range of techniques of deception that people knowingly use and to be aware of them. I think that's quite specific, but saying we want you to know the absolute truth is probably not within the realms of this project.
[00:18:16.342] Halsey Burgund: Yeah, I second everything Fran said. I mean, there is this sort of continuum of misinformation, continuum of ability for people to often maliciously try to influence others by falsely representing what happened. And deep fakes and lots of AI, machine learning technologies sort of expand this continuum well beyond where it was before, beyond Photoshop, beyond some of the audio editing techniques that Fran was talking about. And this project hopefully shows that, you know, not only is this continuum expanded by this new technology, but don't forget about the fact that we have all these other things that are, I mean, simply slowing stuff down, simply slowing a video down. You know, the whole slow Pelosi thing was just slowing Nancy Pelosi down slightly, made her seem drunk, and that was used to significant effect. And that is something that's been available for decades as far as that technique goes. You don't need AI, you don't need machine learning to try to deceive. And we're hoping to not freak people out, to not have people leave our experience saying, oh my god, I can't believe anything. Because that is, to a large extent, what some people want. They want you to not believe anything and not trust yourself. But we want to have sort of a healthy amount of skepticism where when you're consuming your newsfeed, you're thinking, well, could this have been manipulated? Maybe so, maybe not. What's the source? I should think about this a little more. So take a few more steps before just intaking and believing everything.
[00:19:42.199] Francesca Panetta: I had a really interesting conversation with Ethan Zuckerman, who's at the Media Lab at MIT, around being so critical of all of the media around you that you lose any sense of reality at all. In our conversation, he was saying it kind of destabilizes the whole of reality. So he said that, for him, the most worrying thing about deepfake technology was not the actual media itself, but the idea that everything you see could be false, could be fake, and that if the public becomes so nervous that everything they're seeing is unreal, then you've got no kind of stability, you've got no ground, and it means that leaders can come in and question everything. You see the CNN report: how do you know it's real? How do you know anything's real? And there's the power, and the potential abuse of power, that comes from that. So it's very interesting trying to figure out how you deal with this skepticism, but also keep a kind of trust in what we're seeing around us as well. And, you know, to go back to my old role within journalism, the idea of respected companies and respected bodies that you can trust is probably going to be one of the most important things going forward in terms of how we deal with the information around us. What is the context we're seeing it in? Who has posted it? What's the information around it? And a lot of the work that's being done around Harvard at the moment is about making sure that the kind of metadata and information around a piece of media is there, so that you're not necessarily needing to scrutinize the media itself, but the environment in which it sits.
[00:21:16.972] Kent Bye: Yeah, I was just watching a clip of a philosopher talking about the Socratic method, and how embedded within the Socratic method is the combination of skepticism but also belief and trust. And so how to be critical and skeptical, but you can't always be critical and skeptical, because sometimes you need to believe, you need to say that there is a ground of truth, and there's a bit of that dialectic. Even in the criminal justice system that we have, you're innocent until proven guilty, and then you have the prosecution and the defense, and you're able to have this process to weigh that information. But it feels like as we move into this new era where anybody can modulate a version of truth that is deceptive in some ways, I see two main areas of concern. One is around identity and the representation of your identity, especially within virtual environments, because if you can deepfake or synthesize someone's voice, then you can start to emulate people in these virtual worlds. And so do you use blockchain technology, something like self-sovereign identity, to have a cryptographic key that could verify identities? Facebook has its own ways of verifying identity, as does Twitter. So there are ways of authenticating that whatever information is coming out is actually from that person. But the other issue I see is governments either changing or modulating or memory-holing, shifting the historical record of what actually happened. I just think about going into the National Archives, and the film and whatever was there used to be like, oh yeah, this is what happened. But if a government wants to go back and delete and change the history now with these digital records, then is the only way to really tell if something's been changed with AI to use another AI? Then you have these AI battles over which is the better-trained model, and both of them are black boxes to a certain extent. So if the only way to tell if something is fake is with another model that has to be trained, it feels like a bit of a cat-and-mouse game: which black box do you trust more? At that point, it feels like unless you see it yourself you can't be sure, though even eyewitness testimony can be problematic. So I feel like we're in this era of: how do we navigate a world where we need to have a level of consensus reality, but yet there are so many tools out there to shift and control and manipulate it? How do we navigate that?
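Since Kent raises cryptographic verification of identity and of the historical record, a minimal sketch may help show the basic mechanism he is gesturing at: a creator signs a hash of a piece of media with a private key, and anyone holding the matching public key can later check whether the bytes have been altered. This is only an illustration of digital signatures in general, written with the third-party Python cryptography package, not a specific self-sovereign identity system or any platform's verification scheme.

```python
# Hypothetical sketch: sign a hash of some media so its origin and integrity can
# be checked later. Illustrates the general idea of cryptographic verification,
# not any particular identity standard. Requires the `cryptography` package.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_media(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the media and sign the digest with the creator's private key."""
    return private_key.sign(hashlib.sha256(media).digest())

def verify_media(media: bytes, signature: bytes, public_key) -> bool:
    """Recompute the digest and check it against the claimed creator's public key."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: the bytes stand in for a video file's contents.
key = Ed25519PrivateKey.generate()
media = b"original broadcast footage"
sig = sign_media(media, key)
print(verify_media(media, sig, key.public_key()))                 # True: unchanged
print(verify_media(media + b" (edited)", sig, key.public_key()))  # False: altered
```

Any later edit to the media, deepfake or otherwise, breaks the signature; the harder problems the conversation goes on to raise are social ones, such as who holds the keys and whether audiences bother to check.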
[00:23:38.552] Halsey Burgund: I clearly am not going to be able to actually answer that question, because it's a very thorny one, and it's a very good point that you make. I think that this sort of shared sense of accepted truth is something that's extremely important for us all to be able to grab onto one way or another. And it feels to me like right now we're in this time where there is this rather significant technological shift that is kind of an earthquake: the ground of truth that we've been standing on is kind of shaking, and we're not really sure of it anymore because of this new technology. And I tend to be somewhat optimistic generally in my outlook. I hope that as society gets more used to these technologies, and more used to the ability that AI and machine learning and all these sorts of things have to manipulate stuff, we will self-inoculate to a certain extent. I mean, people were freaking out about Photoshop when it came out in the beginning, saying, how are we ever going to believe a photo anymore? And that was a good point back then, and I think we're at the point now where we can detect and we can understand, and we know that some photos are believable and some are not. I'm hoping that these new technologies are similar, and we're going to go through a bit of the same cycle, and that earthquake, that trembling, is going to settle down a little bit. I realize that might be a naive outlook, because I worry too, but I'm really hoping that is the case.
[00:25:01.835] Francesca Panetta: Yeah, and I think our project is trying to highlight, as you're saying, the potential and the breadth of uses these kinds of technologies can be put to. So all of the deepfakes that we've seen have been very current, quite comic, with lots of celebrities used. We've heard a lot about the use in pornography as well, but our piece is trying to say, yes, we could be rewriting history as well. You talked about identity, but we've been thinking a lot about history and rewriting history. And on your question about what we can technically do to stop that: we've also been talking to a lot of scientists at MIT and elsewhere about exactly this kind of cat-and-mouse game. But most people, I think, accept that it needs to be a multi-pronged approach, and there's not just one solution. So it's not just the AIs that are going to be able to kill the AIs, and media awareness is not enough on its own. There's law, there's technical regulation. So there are many prongs that are going to have to be figured out in real time, and there needs to be a dialogue between them as well. But it doesn't seem that there's a very obvious single solution at all.
[00:26:13.882] Kent Bye: Well, the thing that I worry about is that with art projects like this, I can see that there's a real function, but the fact that there are companies out there whose profession it now is to generate these raises all these ethical questions for me, like: what is the ethical line between what is going to be produced and what is not going to be produced? There's a sort of thesis that the technology is neutral, but at the same time there are all of these dual uses of these technologies, both for creative expression, to educate, and to offer a bit of a cautionary tale, which I see this is, but then there's the whole other realm of Black Mirror scenarios where you're creating these self-fulfilling prophecies, where now it's inspiring people to go out and create their own version. So I guess there's a bit of double-sidedness to a piece like this, where it could be a cautionary tale, but it could also be a blueprint and a roadmap for people to get inspired and then start to do that. Especially if the technology is out there and there are businesses around it, then what kinds of projects are the legitimate uses versus the ethically questionable uses of it?
[00:27:17.962] Halsey Burgund: Yeah, I mean, we have had lots of discussions with the companies that we've worked with on this, and they are all very, very concerned about the ethical issues and potential bad uses of the technology that they are developing. They have business models which are designed to be legitimate: Hollywood might want to dub a film, and instead of just dubbing the audio, they will actually change the way somebody's mouth moves in order to reflect that audio better, and then a film can be viewed in many different languages without the oddness of the current dubbing technique. That's a perfectly legitimate use of the video dialogue replacement technology. As far as creating new voices, we talked to one company called VocaliD, which is a synthetic voice production company, and their main purpose is to create voices for people who are no longer able to speak, for either psychological or physical reasons. They have some kind of medical issue where either their voice degrades over time, and they know that's going to happen, so they capture their own voice, keep it in a model, and can speak with it in the future, or somebody might donate a voice to somebody else to use, so that they're not stuck with the sort of canned set of artificial voices that exist, you know, the Siris and the Alexas of the world. There are some really wonderful, legitimate, and great uses of this technology. That said, these companies all know that the initial place people are going to go is: oh gosh, you're trying to trick people, you're trying to do something nefarious. And they're very concerned about it. They're creating consortiums, groups of these synthetic media companies, to try to battle that, to try to come up with some set of rules or standards that they will abide by. With our project, it was, I think, somewhat easier from their perspective, ethically, because not only are we couching everything in this larger experience of an educational project, but also the voice and the video we're dealing with are from somebody from 50 years ago. This Nixon is no longer alive. He was a very public figure, so it's not like we're trying to do something actively behind someone's back to make them look bad, etc. So there were somewhat fewer concerns because of the actual nature of our project. But generally speaking, these companies are very concerned, because their businesses are going to fall apart if they're viewed as purveying a service that can only be used for bad purposes. So there's a lot of effort in that regard.
[00:29:35.297] Francesca Panetta: And as artists as well, we are thinking about these questions for really every technology that comes through. I feel like I sat on endless panels when I did a lot of VR around the ethics of VR: around how it's edited, how open and transparent the process is, the problems around isolating people even further. And we see the same here. There are questions within each technology that emerges, and I think for journalists and artists to not use them and say, well, this is just not for us, and leave it to other companies to do what they want with it, is really problematic. I think we need to really actively engage with the ethical concerns and be part of that dialogue and figure out what is okay, what is not, how do you come up with a manual. I remember in VR we talked about this a lot: what is a manual for ethical editing, how should we describe in the credits afterwards, like the New York Times has done a great job of saying, this is how we made this, these are the techniques we used. And I think that just shying away from that and saying, you know what, this just isn't for us because we know where this is going, is a bad idea, because then you're just not part of that dialogue. So being part of a media awareness campaign, which is what this project is, education and awareness, as well as actively being part of the community that is defining what those ethical guidelines are, is really important.
[00:30:56.642] Kent Bye: Yeah and Fran you said that you've been talking to different philosophers about epistemology and truth and you know I went to the American Philosophical Association Eastern meeting last January and talked to a number of different philosophers about epistemology and truth but also philosophy of technology which you know has been in some ways trying to fuse together a lot of different aspects of different philosophies but it's not like science, it's more about design practices, and so there's no optimal trade-off when you're creating something. There's all these different trade-offs. So it feels like the philosophy of technology is getting into a lot of these fusion of lots of other branches of philosophy. But with the epistemology, this is something that has been discussed for, like you said, thousands of years. And so what kind of insights have you been able to draw from the philosophical perspective to be able to kind of feed back into this project?
[00:31:45.468] Francesca Panetta: Well, what I would like to see is the more abstract concepts engaged with in the media and tech world when we're talking about reality and truth, because I feel like it's very black and white at the moment. So I think when we're discussing this within the context of the arts and media and tech, we should be thinking: is there absolute reality, and if there's not, what does that mean for the messaging that we're giving out? But we should also try to be practical. The philosopher I was talking to, who's at UCL and deals a lot with the practical implications in the real world and how they intersect with philosophy, was saying there are just some practical things with fake news and misinformation that we should be thinking about, and we shouldn't abstract this so much that we just say there is no reality, because that's also not helpful. So I would like to see a real dialogue, and not just the philosophers telling us, okay, this is epistemology. What does an engaged debate look like between media, art, tech, and philosophy, and how can we be working together to discuss these concepts, what we should be making, and how we should be dealing with the media around us?
[00:33:00.205] Kent Bye: Great. And as you move forward, I'm wondering what are either some of the biggest open questions you're trying to answer, or open problems you're trying to solve.
[00:33:11.068] Halsey Burgund: From a practical standpoint, moving forward will be a second phase of this project, which is supported by the Mozilla Foundation and which will be a sort of web version of the project. We're still working on the design, so we're not sure exactly what form it's going to take, but it will be an online version of the experience, with the film and the deepfake and the sort of context-setting around that, but then we will kind of flip it around and get into the behind-the-scenes aspects. So we'll be including a lot of the articles that we have in the newspaper right now that talk about how we made it, and we're going to be conducting a lot more interviews and doing a lot more exploration of the implications of deepfake technology: how you can detect deepfakes, whether it's human beings detecting things or other AIs, you know, the battle of the AIs detecting various manipulations and whatnot. So we're really hoping that this project turns into a very significant body of information to educate the public, and educate everybody, on what the capabilities of this technology are right now and what we can do to inoculate, or help prevent, society from totally falling apart, which is something that sometimes it feels like this technology might lead to, which is a little disturbing. But, Fran, I don't know if you have any additional comments on that.
[00:34:28.138] Francesca Panetta: What I'm thinking a lot about with how our project moves forward, and also how these kinds of projects live in the wider landscape, is how they can provide wider context and more nuance. You know, we were just talking about that with regards to philosophy, but a lot of the academics I've talked to have said that this kind of reporting on deepfakes is pretty sensational, in a way. It's like, ah, deepfakes are going to kill us. That's not what this project is trying to say. So we're trying to find a wider context of misinformation, the history of misinformation and how this fits into it, and to see it, as we talked about earlier, as a continuum: how does this fit into all the other techniques in a history of misinformation that has gone on for centuries? That's what I would like to provide, rather than just adding to the already enormous amount of articles and material out there purely about the fear of deepfakes.
[00:35:25.449] Kent Bye: Great. And finally, what do you each think is the ultimate potential of immersive technologies and what they might be able to enable?
[00:35:35.987] Halsey Burgund: Oh god, do I have to go first on this one? Immersive is such a broad word. I come from the world of audio, so I think much more about immersing yourself through your ears than through your eyes. I do a lot of audio AR projects, as does Fran. I think of future scenarios where, as we wander around the physical world, there are various levels, various stations that you can tune into, of additional information that can be fed into your ears, that can augment your experience, give you a variety of different types of experiences, transport you into different worlds, all while remaining pinned to this world. And there's something to me that's very exciting about that. And again, I'm biased because I'm an audio guy, but I think that audio has a particularly immersive way of doing that. I think when some of the spatializing technology gets better, and we're really able to pinpoint sound objects in the real world, so you can spin your head and feel like the object is staying in the same place physically, then I think we'll be able to create a sort of extension of reality that can hopefully be used for immersive experiences, taking us to different places, learning different skills, and having all sorts of different experiences that widen what we're able to do right now. So I don't have a specific philosophy on where immersive is going. I try to stay on top of where the tech is, and I try to use some aesthetic and artistic thinking, as Fran and I have been doing with this project, to try to extend and push some of these technologies in directions that maybe they weren't initially designed for, but that they can hopefully benefit from.
[00:37:19.042] Francesca Panetta: Yeah, I am really keen to embrace the super high-tech and the low-tech at the same time, and not just do what's the most complicated. Sound is also my background, and so I feel that fully immersive sound pieces, whether using audio AR or in site-specific places, can be as powerful as really complicated rigs using lots of headgear and room tracking and all that kind of thing. So I, for my own practice, want to be really open-minded about considering what immersive means and not just use gadgets for the sake of it. I'm really pleased to see LBE stuff happening at the moment, because I'm a big fan of site-specific work, using real physical environments. Maybe that comes from doing so much audio AR stuff in the past as well. I really like the physical space we live in, and trying to make it creative and imaginative by layering other things on top of it is what I love to do, but it doesn't need to involve the most complicated technologies. So maybe we'll see some of that coming out, but I'm quite keen to see us interacting with each other and with real places in the most creative ways we can.
[00:38:38.190] Halsey Burgund: Yeah, just the real world is a pretty immersive place. So why not take advantage of that?
[00:38:44.292] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:38:48.993] Halsey Burgund: Thank you. We love being a part of this community.
[00:38:51.654] Francesca Panetta: Yeah, same.
[00:38:54.955] Kent Bye: Awesome. Great. Well, thank you so much for joining me today on the podcast. So thank you.
[00:38:58.396] Francesca Panetta: Thanks very much, Kent. Thanks, Kent.
[00:39:00.927] Kent Bye: So that was Francesca Panetta, the creative director at a new center at MIT called the Center for Advanced Virtuality, as well as Halsey Burgund. He's a sound artist and technologist, and they were co-directors of the piece that was at the IDFA DocLab called In Event of Moon Disaster. So I have a number of different takeaways from this interview. First of all, sitting down and actually watching the piece, you watch something like this and you're just like, oh my God, what is going to happen to society once this gets into the wrong hands? And so I think part of the deeper intention for why they created this is to educate people about the potential of this type of technology, and to see it in the larger context of a long history of misinformation and disinformation. I really appreciated the point when Halsey talked about Photoshop. He said, you know, when Photoshop came out, people were really having this moral panic, like, how are we ever going to trust any photograph ever again? And we've been able to live in a society where there are photoshopped photos out there, but usually we do a pretty good job of detecting what is not real. Maybe there's stuff out there that we haven't been able to determine is fake, but there's a larger context of misinformation and disinformation, and as long as people have the will to deceive, they're going to use whatever technology is available. The other part that I think was interesting was to hear about the different companies that are doing this, both Canny AI out of Israel and Respeecher, that are having to think about these ethical issues: what does it mean to be in a business where people could use your services to produce things in the service of the powers that be, to put out misinformation and disinformation? And so they themselves are trying to figure out what those codes of ethics might be, and to find the legitimate uses for this technology, like VocaliD, which is an AI company trying to give people a voice: if they can no longer speak, they want something a little more unique than just the typical synthesized voices that are out there. I also appreciated what Francesca was saying about how there needs to be a broader discussion and a dialogue between philosophers, media creators, and artists, because there are all these different issues of truth and what is reality. And there is this deeper impulse happening right now for us to not be so sure about what the absolute truth is and to embrace the plurality of many different perspectives and the ways these different truths are socially constructed. And so I think part of what Francesca was trying to say is that maybe we should loosen our grip on what absolute truth and absolute reality might be.
And we can't be at one extreme or the other: you can't be credulous and believe everything, and you can't go around not believing anything, because that undermines the ground of truth. And honestly, we're kind of at that point in our culture right now. It feels like the ground of truth is being eroded, and as we start to introduce more of these technologies, artificial intelligence, virtuality, augmented reality, where people are going to be overlaying different layers of reality on top of reality, then I think the ground truth of what consensus reality is, is going to continue to morph and shift. We already have all these different filter bubbles. And so for the philosopher that I was pointing to, I watched this five-minute clip on the Philosophy Overdose channel on YouTube from Agnes Callard, and she talks about how you can't simultaneously pursue all truths and avoid all falsehoods. Although they seem like they might be the same thing, she's arguing that they're actually complete opposites; you can't do both at the same time. You actually need a bit of a dialectic process, where some people are arguing for the truth and other people are arguing for skepticism and doubt. You actually need to have a collaboration between those, because there's a push and pull that happens through what Agnes is pointing to, the Socratic dialogue, or what Hegel described as the dialectic process, where there are opposing positions that each hold some part of the truth, but maybe only a partial truth, and they can't see the entire truth unless they're engaged in that dialectic and that push and pull. And so moving to one extreme or the other, being completely skeptical or completely credulous, is not very useful when you need that balance. And so there are different functions of that in our society, but there are also just checks and balances. Even the way the criminal justice system here in the United States works is that you're basically innocent until proven guilty, and you have the prosecution and the defense, and you have that same type of approach of having each of them in collaboration in the pursuit of justice: one cannot achieve justice without that dialectic process, because you can't avoid all falsehoods and pursue all truths at the same time, because they're incompatible with each other. So I think keeping that dialectic process in mind is important, because this is the broader context of the philosophical grounding for how we negotiate this at a collective level, something we're reevaluating as we try to find new ways of determining what the heck is going on in a world where there is so much information and so many people who are not telling the truth. And so hopefully things like podcasts, having conversations, and being able to dive into details that go beyond the sensationalist headlines around deepfakes can help, and deepfake technology can actually be a catalyst for us to be a little bit more aware. I keep going back to this thing that Jaron Lanier said, which is that he suspects that we're going to always be able to determine what's virtual and what's real. And I think this is an open philosophical question; I don't know, actually, if that's going to be true or not.
But if it is true, what that means is that we're going to continue to refine our perceptual input to notice the subtle nuances of different things. So whether it's being able to detect if something's been autotuned, or photoshopped, or generated with fake voices or completely fake generated content, maybe we'll get better at discerning that. Or if not, then we have to renegotiate how to live in a world where that's possible. And going back to the virtual reality connection here, especially when we talk about identity and your identity projection: what does it mean to be in a virtual environment when somebody may have a number of different clips of people out there, and it will be very easy to try to fool other folks into believing that this person is actually them? So things like self-sovereign identity, being able to do some sort of cryptographic check as a verification of your identity in a decentralized world, we need something like that. Or there could be things like verified accounts on Twitter and Facebook; that's one way that identity verification has been handled. But as we move into a decentralized world, we'll need to find technologies like self-sovereign identity and other methods to verify identity. And in terms of the news, and figuring out whether information has been changed or not, being able to memory-hole different parts of the digital record, I think that's also a huge threat. And I think they're trying to bring a general awareness of what is out there so that people have a little more capacity to critique and a little more literacy, so they're not fooled by deepfakes, and so there's some proof system that provides another check and balance to see if there are manipulated things in the system, and what a larger ecosystem way of handling it might be. So that's all that I have for today, and I'm super excited to dive into this series from the DocLab. I think the DocLab is doing some of the most innovative work that's out there, a lot of experimental things that are really on the cutting edge of VR, immersive art, immersive storytelling, interactive storytelling, as well as artificial intelligence. So I'm excited to dive into the 17 interviews that I did and the roughly 10 hours' worth of interview content that's going to be here in this series. So if you enjoy the Voices of VR podcast and you've been thinking about joining, now is a great time to join. At the end of the year, I really need to get my Patreon up to probably around $3,000 to $4,000 a month, and I'm at about $1,500 right now, so I need to double or triple my Patreon just to be sustainable and to continue to grow and sustain the work that I'm doing already. So if you enjoy the podcast, then please spread the word, tell your friends, but also consider becoming a Patreon member. Five to ten dollars a month is a great amount to give, and it allows me to continue to do this type of coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.