The next episode in the IEEE XR Ethics series deconstructs the paper “Who Owns Our Second Lives: Virtual Clones and the Right to Your Identity,” featuring lead author Thommy Eriksson (Researcher and Teacher at Chalmers University of Technology in Sweden) and contributor Mathana (a Berlin-based Tech Ethicist and Interplanetary Philosopher, and Executive Committee Vice Chair of the IEEE Global Initiative on the Ethics of Extended Reality).
The essence of this paper is exploring the thought experiment where “the risk is that everyone will be able to (virtually) do anything to anyone.” Who owns your digital identity? And what rights do you have around your image and likeness being recreated within the context of 3D spaces? It also explores the philosophical discussions around identity, the relational and contextual nature of identity, the limits of AI in trying to model our identities, and the corruption of memories via digital reconstructions.
We also explore the full spectrum of digital representation, which spans from the utterly mundane to a speculative, science fiction future. On one end of this spectrum is the still image and video recording, and then it adds more and more layers of representation until it gets truly uncanny: Avatar (3D Model), Avatar with Replication of Behavior and Motion Patterns, Avatar with Full-Human Interaction, and Actually Self-Aware Virtual Clone. Of course, the more far-term variations are still in the realm of sci-fi potential, but it’s a thought experiment that opens the mind to many potentials and possibilities that may need to be designed for.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
🎉THREAD of an 8-part Voices of VR podcast series on XR Ethics covering the white papers produced by @IEEESA's Global Initiative on the Ethics of Extended Reality.
Video Overview pic.twitter.com/0WArX9jGFL
— Kent Bye (Voices of VR) (@kentbye) June 6, 2022
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my series on XR ethics in collaboration with the IEEE Global Initiative on the Ethics of Extended Reality, today's episode is about virtual identity. It's a paper called Who Owns Our Second Lives? Virtual Clones and the Right to Your Identity. It's by Thommy Eriksson and Mathana, and they're diving into these questions around our virtual representations within the context of these virtual worlds, as we go from taking photographs to 3D reconstructions, and then on up into photorealistic reconstructions that are adding different behaviors on top of these avatar representations. What happens when you start to have these virtual representations of yourself that are happening in these virtual spaces? So lots of different implications there. There's also other implications in terms of all the data that's being read from our bodies and what's happening in terms of modeling our identity in these digital twins, more from a neural rights perspective. This conversation is more focused on the virtual representation, but that's certainly a part of it in terms of the information that's coming off of our bodies and the different privacy implications that are happening there. So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Thommy and with Mathana happened on Wednesday, May 18th, 2022. So with that, let's go ahead and dive right in.
[00:01:26.503] Thommy Eriksson: My name is Thommy Eriksson. I am a researcher and teacher at the Chalmers University of Technology in Sweden. And I do research on a lot of different topics. I guess the common denominator is digital media, and I'm very much concerned with teaching, ICT, and learning, and also visualization of different sorts. And over the last few, maybe five, years, I have been focusing more and more on virtual reality, in basically two areas: using VR for remote collaboration and using VR for teaching.
[00:02:14.032] Mathana: And my name is Mathana. I am a Berlin-based tech ethicist and interplanetary philosopher. My research is quite broad, ranging across topics around the ethics of extended reality. I also work as technical lead for the machine learning explainability framework Explainer.ai, and I'm also quite interested in some of the emerging issues around philosophical approaches to human-robot interaction. And so, yeah, my research, in a nutshell, is looking at the sociocultural impact of emerging technologies, and thinking about how we can start asking questions now, if not to reach consensus, then at least to use our perspective at this very interesting point in time to start kind of mapping out some of the potential ways that emerging technologies could shape social dynamics. I guess the big question is: what does it mean to be human in the 21st century?
[00:03:19.085] Kent Bye: Okay. And yeah, there's a recent IEEE XR paper that you both were involved in authoring and contributing to: Who Owns Our Second Lives? Virtual Clones and the Right to Your Identity. So the right to identity, I've come across this in terms of privacy, but there's also avatar representation. And so maybe we can set a bit of a baseline here for what identity is. I know there's an introduction section that gets pretty philosophical in the sense of all the different thoughts on this, but maybe we can just try to set at least a baseline for the context that we're talking about today for this paper. And I'd love to hear each of your thoughts on how you start to think about and wrap your mind around this concept of identity.
[00:03:58.777] Thommy Eriksson: Yes, I can start. Just as you say, Kent, it is a very complex topic, and you can have either very philosophical or very pragmatic views on it. I think in the text we have a quite pragmatic view on what identity is. In my view, at least, it is what makes a person unique, as it is expressed physically, in the physical appearance of a person, but also in the behavior of a person, as well as the memory of the person. So basically, it is a collection of the physical appearance, the behavior, and the memory of a person, and of course things like emotional state, disposition, how people react to different circumstances, and so on. What in that makes the person unique? And as we highlight in the text, we primarily have an external view: what is it that makes a person unique for others? But also, of course, we have self-identity: what is it that makes me unique for myself? So I guess that if you really try to be very pragmatic, the answer to "who am I?", that is the identity.
[00:05:28.479] Mathana: And as Thommy said, in the paper we also talk a little about the point of view. I think this is really where, particularly in VR, but in the extended reality framework generally, there's this juxtaposition and interplay between what Thommy was saying: who we are, who am I, but also who am I perceived to be? And I think this is something that we'll get into a little bit, but we break it down even in our first section around what identity is: the original individual, the virtual clone, the avatar. And so I think it's interesting, as we talk about in this paper, that even the "who am I" as a person, quote unquote, behind the screen can have different modalities of presentation. And each of these can also be seen by an external observer as a different entity, or as different sorts of layers of me. And so it does get theoretical and philosophical, but even in our introductory section we're putting out this kind of large tapestry of the interplay between the "who am I" and the "who am I to whom."
[00:06:34.485] Kent Bye: Yeah, I wanted to add a few thoughts on this, because I think it actually comes up a lot in the context of privacy and also in the context of neural rights, as well as in the context of your self-representation within these virtual platforms. And so for me, I've seen a lot of discussions of personally identifiable information, that's information that would identify you uniquely. And then I see this move towards what Brittan Heller calls biometric psychography, which is the reactions and what's happening and radiating from your body, the biometric and physiological reactions from moment to moment, that's maybe connected more to the context of what's happening, such that if you understand what the context is, then you might be able to extrapolate people's values, interests, and other aspects of the essential character of who they are based upon the biometric information that's being radiated off of their body. So there's been a lot of emphasis on personally identifiable information. And then this other realm of biometric psychography could be everything from your intentions, your actions, your behaviors, your emotional affect, your micro expressions, facial expressions. Then you've got other things like your mental privacy, what's happening inside of your head, which starts to get into the neural rights: the right to mental privacy, meaning that no one is tracing what's happening inside of your head; the right to identity; and then the right to agency, because the whole end game in the context of surveillance capitalism is capturing all this stuff to be able to nudge your behaviors. So understanding who you are and modeling that with a digital twin of yourself that these companies then use as a model to serve advertising to you. But there's also aspects of your cognitive load, what's happening inside of your mind.
There's the intimacy of your thoughts and your relations of who you're talking to and your social graph. And then on top of that, you have the other aspects of your body and your physiological reactions and eye gaze and intention. So all of that stuff, when you add it all together, starts to kind of map out the identity. And I think in a lot of ways this is the frontier of the battles that we're going to be facing within XR: to what degree all that information is available, who has it, how they're trying to map our identity, and how much control we have over that identity. And I feel like there's a lot of things that haven't been defined legally, which means that with all of this new data that's going to be made available, it's the frontier of the next wave of surveillance capitalism. So I really feel like as we're having this discussion, it's important to keep in mind the threats to the privacy components, but also the self-representation, which I think we'll get into as well. So I'd love to hear any thoughts about that, because I feel like we're kind of dancing around a lot of the larger context for why this discussion is really important.
[00:09:05.437] Mathana: Thommy, do you mind if I jump in first here? So I think that what you're describing is happening. We saw there was a new report on Android and iPhone about the number of requests and pings per day that large ad companies are able to gain. Most people don't know that we are in kind of this constant surveillance, whether it's location tracking or hidden-pixel sorts of cookies and trackers on websites, but hundreds of times a day there are these silent signals and beacons going back to data centers, whether it's then being aggregated by third-party data brokers and all of this. And there is this sort of, I guess we could call it, shadow profile. But I think what is also important to remember is that a lot of these companies have, if you will, kind of drunk their own Kool-Aid. And to say that they're going to use these tools because they exist, whether it's the optimization of their tools, whether it's higher engagement rates, or to have their own self-selected metrics to increase their own internal KPIs, I would caution people that are working in industry to take these sorts of things with a grain of salt, because there are the limits of modeling. And I think that's something that kind of gets left out of the conversation sometimes: that right now the modeling that's going on with these large troves of data is actually first propagated on a small group of people, oftentimes a homogeneous group of people. A lot of times people from the United States. English is their first language. They have a similar cultural background. As we scale these systems up, these models actually start to fall apart. The micro-expressions that are able to be gleaned from people in the street actually don't mean what they did for the original sample when these models were being trained.
And I think this is something that gets left out, because more and more we're going to see biometrics and psychometrics having this input from headsets and things like this, from haptic interfaces and from motion tracking. And a lot of times the models are built on a small sample size of early adopters of these technologies, and then the companies just say, oh, this is how people are. And so I think it's also important to realize that it's not just our identity that's kind of being scraped from the tools that we use, but it's also being scraped into a system that has pre-existing models and assumptions about anybody in this system and what their micro actions mean. And so I think that a lot of times these complex algorithmic systems actually come down to simple things like A/B testing, right? Simple regression. These neural networks running both machine learning and reinforcement learning and all these fancy sorts of things actually come down to just which of the ads that you got served you clicked on, or what had the highest prevalence rate. So I just want to say that there are these things. And I think that the conversation around neural rights and the sort of 360-degree protection of our quote-unquote identity is super important, not just because everybody's trying to ascertain our identity, but also because the systems that they're trying to ascertain identity into are fundamentally flawed.
[00:12:09.862] Kent Bye: Yeah, there's always going to be limits in terms of anything you try to model. There's going to be things that are outside of that, that are not being represented. So if you have a limited sample size, then it's going to be biased towards that. So I don't think it's ever going to be possible to completely eliminate that bias unless you are able to get data from everybody on the earth, and even if you did, there'd still maybe be decisions that are made algorithmically that are preferencing one group or another. So yeah, there's a lot of discussion within the AI community around this type of algorithmic bias, and the Algorithmic Justice League and the documentary Coded Bias, I think, start to dig into that a little bit, but as we move forward, I think it's going to be more and more important. And the other thing I just want to reiterate is how personally identifiable information is a static part of our identity, while biometric psychography ends up being a little bit more relational in its context. And so the context that you're in ends up being a big part of how we start to understand our identity and who we are: in the context of a romantic relationship, who we are in the context of home, who we are in the context of our professional work. And so given the context we are in, there's certain aspects of ourselves and our personality that get expressed as well. And trying to relationally map that, I think, is going to be another aspect of the future of what is referred to in the industry as contextually aware AI. So given all the different contexts that you'll be able to track, then who are you in that context? But you said at the end of this first section that there's the original individual, the virtual clone, and the avatar, which I think is also important to point out, in terms of, in some ways we've been talking about the core essence of yourself as maybe your identity.
There's also the presentation that you are presenting, either through a one-to-one clone of your identity or an avatar representation. So in the context of VR, it seems like there's these different steps and levels of the original individual, the virtual clone, and the avatar. And yeah, Thommy, I'd love to hear any other thoughts as to why you think that's an important framing as we move forward in this discussion.
[00:14:05.901] Thommy Eriksson: Yes, because I think the important reason why we used virtual clone as a concept is that when we talk about an avatar, an avatar in a virtual reality environment is basically your own representation of yourself. You usually have control over your avatar. You can choose if you want to present as a crocodile or a human or a plant or whatever. That is fine. But what we wanted to highlight with the virtual clone is that there you have a replica of yourself that you risk losing control over. And when we say replica, we talk about replication of the appearance of a person, but also the behavior of a person. And eventually, if artificial intelligence succeeds in that, you could even have a virtual clone that has the memory of a person, and that is possible to separate from the person itself. And of course, we have real-life examples of it already, like virtual characters that replace an actor in a feature film. There are a few cases, as many probably know, where dead actors have been revived, so to speak, with a virtual clone. Or this famous example of, I think it was a South Korean child, that was replicated as a virtual clone in virtual reality, and then the parents got to meet this child. And the thing with that is that you lose control, or risk losing control, over that representation of yourself, either while you are alive, but also when you are dead. Because of course, this virtual clone can exist after you have disappeared. And then, of course, you lose control over it even more.
[00:16:11.123] Kent Bye: Yeah. And so in this third section on the aspects of identity that can be virtually cloned, you go through the visual fidelity of the physical appearance, which, again, imagine someone taking a picture of you, but then doing the photogrammetry and creating your image and likeness, and then representing you within a virtual space. Within VR, people have more stylized representations of themselves, but it could also be more photorealistic, so I think there's a spectrum there in terms of to what degree you are replicating your representation of yourself within these virtual spaces. But I can imagine a future where it would start to cross over ethical lines: if I walked into a space and I saw something that looked exactly like me, that was moving like me. With the behavior and motion patterns, you talk about the body, the gesticulations, the walking patterns, the facial expressions, and the eye gaze, because all the virtual technology is in some ways trying to capture representations of that self. What happens if that's recorded and then played back to other people when I'm not there? And then at some point you go into the dialogue, the speech patterns, the memory, and the emotional states, and you have a whole other affective dimension of yourself. And I imagine there might be useful use cases. I know that Mel Slater has already done these things where you step outside of yourself and talk to yourself, or talk to a Freud or some Jungian stand-in of a therapist, where you do therapy with yourself by switching perspectives within the virtual space. And I imagine it might be interesting to have a reflection of yourself in that way. That might be a legitimate use case where it could help you understand yourself a little bit more, if you're able to step outside of yourself and really observe yourself.
But I also feel like there's a lot of ways in which that could start to be abused, in terms of capturing all this information about yourself, and then who has control over that? And then it starts to do actions that are misaligned with what you would actually be doing. So there's a representation of your identity being expressed in a way that is maybe out of integrity with what you consider to be who you are as an individual, and there's a mismatch between you and your virtual clone that's off doing stuff and mimicking you. And how do you set the barriers around that? Yeah, I'd love to hear any other reflections on this section on the implications of virtual clones.
[00:18:20.089] Thommy Eriksson: Yes, and I think that one key aspect of it is the interactivity, because it is one thing to develop a virtual clone of an actor in a feature film or so. There are ethical considerations with that also, but after all, it isn't so different from having a video recording, and then someone passes away and you have this video recording of a dead person. So maybe it is not so complex. But when you add interactivity and you can actually interact with this replica of the person, then it becomes something else. I think there is one key phrase in the text where, if you really think about what it means, you can quite quickly go to dark places, so to speak. Because let's say that in the future, anyone would be able to do anything virtually with anyone. If you consider that concept, that with any person that has lived or lives, you would be able, or could be able, to do anything with that person. If you think of that, you can, as I said, quite quickly go to quite dark places, when it comes to romantic relationships, sex, but also abuse, violence; there are quite scary scenarios there. And so far, it is basically science fiction. But science fiction has a very good way of pinpointing or calling out potential ethical problems. And I don't think we can say, yeah, that's just science fiction, because there is the potential for it to become a reality in 10 or 20 years or so.
[00:20:11.288] Kent Bye: Yeah, there's certainly a number of Black Mirror episodes that come to mind that are already delving into these areas that we're talking about. But Mathana, I know that you had some thoughts as well.
[00:20:22.965] Mathana: One of the outstanding issues here, or one of our first recommendations, is: let's talk about it. And I think it's really important to have these conversations and this dialogue, because they're interesting, and these are kind of un-demarcated domains, because the rules don't necessarily apply yet. And so it's really important to create discourse and dialogue where we're able to map out in structured ways the boundaries, and even start thinking about what the relevant ethics are, what the sorts of things are that are more than just AI ethics or more than just multimedia ethics. Interactivity does bring another layer. But I think one of the important things is, whereas there might be IP protections on the replication of video, there might be legal requirements on the transmission of certain sorts of images, exploitation imagery, and these sorts of things. There are laws against it. There is regulatory oversight around these things. But actually, in the world of VR, there's very little regulatory oversight, particularly in a global context. And I think it's important because harm can exist anywhere. We take a very top-level view of this, but I think it's important for us to just highlight, when we're having these conversations, that because this is a global phenomenon, harm can occur anywhere. With exploitation or abuse, it doesn't matter where the person is. If they find their likeness, their digital clone, inside of a VR or XR system without their consent, I think we can say that this can cause harm. Maybe not everybody agrees with this, and says, oh, well, it's just a digital image, it's not real harm. But I think for most people that have maybe had revenge porn or other sorts of exploitation imagery online, there is an assault on dignity, I would probably call it.
And because this can happen everywhere, it's very difficult to think that all the countries in the world are going to pass similar legislation protecting people inside of virtual worlds. And so we have to step back and say, well, where are the gatekeepers of protection? And a lot of times, going to the Web3 interface and the emerging metaverse, with a lot of still-unwritten rules, even inside of proprietary platforms like Meta's Horizon Worlds and such, we're already seeing abuse occur. They're doing little band-aid fixes like putting up little perimeters and personal bubbles. But at the same time, you go in as a female-presenting character and you're swarmed by a group of male-presenting characters that are just harass-y. This is not only impacting your ability, as your likeness, your digital clone, to have the same sort of experience as others in these virtual worlds. It's also bending the lines of where content moderation comes in. To this point, Andrew Bosworth was actually asked about this very thing in an interview with the BBC recently, I think, and said, oh, well, we don't want active AI surveillance; people don't want that. And so we have to kind of learn these lessons as they come. But it's really interesting to think about how much other sorts of image hashing and text and word recognition take place on Meta's other properties like Facebook. And so these companies, I think, now also have a burden. With limited regulatory oversight on a global level, what layer of the stack is going to be responsible for policing some of these sorts of things? And whether that's what we could call portrait privacy, somebody taking your image without consent, but also, how do profiles and digital clones and avatars get cycled out after someone passes away, let's say?
And we do talk about that a little more, but I just wanted to raise this point now as a defense of why ethics, and the kind of normalization of ethics, is super important: because there are not a lot of other top-level actors in the room that have stepped up to actually put in safeguards and protections for people in these systems and platforms.
[00:24:11.612] Kent Bye: Yeah, and the issue of identity and privacy spans across many other domains. And there is another IEEE XR paper that goes into much greater detail in terms of social harassment and the implications of how to create safe online spaces. And so I have a discussion with Jessica Outlaw and Michelle Cortese, the co-authors of that paper, that is a whole deep dive within itself. But to take it back to the identity aspects, there's a section here where you talk about the differences between the biometric and the psychometric. You know, there's different contexts and uses of identity. One could be through identification and authentication, to say that this is actually you. And so there's different ways of using the stuff that's radiating from your body, whether it's your DNA, your retinal scans, or your fingerprints, all these things that you can't change about your body, but that could be captured in certain ways and represented in these virtual forms. And so the relationship between identity and authentication, I think, is an important aspect to think about here, especially when you think about, if you do have a photorealistic representation of yourself, is there a way for you to either cryptographically sign that or have control over that? What's to prevent other people from ripping that avatar and then embodying your avatar? I know that Meta has been showing these codec avatars that are super photorealistic, which goes into this whole representation of yourself. And then there's the whole psychometric, which is, as you say, the attempted objective measurement of personality traits, skills, knowledge, attitudes, and mental disorders in order to document, evaluate, and categorize individuals, especially in the context of advertising, where you're trying to categorize people.
But there can be other attempts to categorize people, such as for medical conditions, which is a whole other realm of diagnosing, with the medical privacy implications of that. So there's the advertising context for psychometrics, but there's also mental health implications, more in the medical context, that are also privacy related as well. So, yeah, as you start to differentiate the two realms of the biometric and the psychographic, I'm curious to hear any other thoughts expanding on why those differentiations are important.
[00:26:10.273] Mathana: I mean, these maybe aren't the only two ways to break down or parse identity, but I think there's a very important, interesting interplay here. You know, even biometrics we could break down between kind of hard biometrics and soft biometrics. And I think it gets interesting as a sort of continuum of identity, in some ways, in my mind. We could say the soft biometrics are maybe the typing patterns on a keyboard, through which maybe somebody could almost have authentication, or whether it's looking into things like early-onset Parkinson's or something like this, where actually, through something as simple as typing, we can have these kinds of large data sets and start modeling what the common traits or characteristics of these soft biometrics are. But then we get into the psychometrics. And I think this is also where, as I talked about earlier, the modeling comes in, and I find it kind of interesting and a little worrisome as well: are people going to start making assumptions, or start even modeling these things? This is where privacy and identity kind of interplay. But, you know, if I choose a character, not even a digital clone, but like an avatar, is somebody going to make an assumption, whether it's targeted advertising or somebody looking to parse my mental state of the day, because I chose one character over another character, or I chose a character that does not match the gender of the shadow profile that a company has built for me? And is this going to lead to a thought of, oh, does this person have gender dysphoria or something, you know? And on top of that, are there going to be motives and incentives in place around this, based on my character selection, or things like the analysis of my vocal patterns, looking at the peaks and maybe drawing out some conclusions about my mental state: oh, this person is angry.
And I think what I'm really worried about, in some ways, is that if we look at this whole continuity of experience, particularly inside of surveilled systems, if you will, in which everything that we're doing is being recorded, this information sits on a server somewhere, and as new tools come online, they're going to be able to retroactively apply some of these models to our past experience, then build out modeling for our future experience, right? And so I worry that as more and more data is collected around us and preserved, what was at one point just biometric data eventually becomes psychometric data, because new techniques are being applied to it.
[00:28:32.900] Kent Bye: Any more thoughts on that, Thommy?
[00:28:34.780] Thommy Eriksson: Yes, I'm thinking about the fidelity of these representations of ourselves, at least when it comes to the scenario that we describe in the text, with a virtual clone that can be used in different ways. One thing that is interesting is that it doesn't really matter how good the fidelity is, because you have ethical problems anyway. Just to make it really a thought experiment: let's say that when I pass away, someone makes a virtual clone based on video recordings, photographs, and what I have been writing online. So someone makes this virtual clone and then sells it or rents it out to my kids. If this virtual clone is very similar to me, basically perfect fidelity compared to myself, then that could be distressing for my kids, because they are interacting with a virtual clone that really feels as if it is me, and we don't know what that will do to the persons that are experiencing it. Maybe it is just great, maybe it is a good way to keep the deceased ones living, or maybe it will really be traumatic for many people. And on the other side, let's say that this virtual clone is not good in its fidelity. It doesn't really look like me. It behaves in a strange way. It gives strange answers. Let's say, for example, that I don't like football at all, but this virtual clone all of a sudden starts talking about football. And my kids realize, well, is this me or not? And what happens if you mix up the clone with the actual memory you have of a person? Is there a potential danger that the memory of a person can actually be disrupted or changed in a way that you don't want? Because maybe you want to keep the memory of the actual person instead of replacing it with the experience of a low-fidelity virtual clone of the person.
[00:30:48.002] Kent Bye: Yeah, it reminds me of Agnes Callard when she talks about the Socratic method of knowledge, how there's a process of believing truths and avoiding falsehoods, and how it's a dialectical process where any process of knowledge production involves taking a leap of faith and believing something is true, but there also has to be the more skeptical take in terms of avoiding the falsehoods so you don't believe something that isn't true. There's a dialectical process there, the process of science and peer review, a communal process of knowledge production that's more relational, rather than something that is declared to be the truth. I feel like that's the challenge with algorithms: they make those assertions about what the truth is without having a check and balance for what may or may not be true. It reminds me of Shoshana Zuboff, who did a whole book on surveillance capitalism, where she talks about this instrumentation as we move into this new realm of surveillance capitalism, how it's a new paradigm. And Cory Doctorow did a whole rebuttal to surveillance capitalism saying, hey, all of these algorithms are just complete snake oil. These companies don't know what they're doing. They just make up numbers. They exaggerate the numbers. We shouldn't believe that they're able to do this type of mind reading, and it's dangerous for us to live in a world where we expect these companies can model us to a degree of fidelity that they can't actually achieve, which means that their models have a lot of error. There are a lot of gaps. And I think Avi Bar-Zeev's response to that is that you can think of these environments as casinos, where the house has the advantage: they're trying to at least have the odds in their favor so that they're able to make a profit, and with any of these algorithms, they're able to shift the odds into their favor.
Even more so, the more information they have, the more they're able to do that. There's the old saying in the advertising industry: 50% of my budget is going to waste, but I just don't know which 50% it is. Meaning there's a lot of uncertainty as to what is actually working and what's not. They just know that they dump a lot of money into stuff that isn't working, but there's enough that is working that it ends up being profitable in the end. And so if they can, like a casino, just turn the odds a little bit with all this extra information... For me, that's a little bit of my response to the global skepticism of, like, we can never know anything because we can never perfectly model things. I think when we talk about these issues, it's a lot more like Bayesian probability, in terms of the model being good enough. But yet I think back to Mathana's point, that good enough could be benefiting some people and disadvantaging other people. And so how to navigate all of that, I think, is a larger discussion in terms of this algorithmic justice. As we talk about these realms, there are going to be imperfections in these models and there are going to be intrusions, but it may be close enough, and there may be benefits but also harms coexisting as we go into all these realms of how our identity is being used within these virtual spaces.
[00:33:35.463] Mathana: Just to add one more component to the fidelity that Tommy and you both have mentioned. I think a lot of times we think about fidelity as being very in-the-snapshot, in the moment, you know, what is the fidelity of me as I'm interacting in real time? But, at least currently, it takes a lot of computational power to run these real-time dynamic things, and a lot of them are cloud-based. And so I also worry that identity is going to be gatekept, these kinds of digital clones. What happens when I have to pay a subscription service, you know, a monthly service, to be able to access a loved one's virtual clone? And what does that do? Maybe I can get the free version, but, as Tommy mentioned earlier, it likes football a little too much, yeah? Because some club is, you know, subsidizing some of that cloud architecture. And so I also think it's important to think about fidelity and long-term access. If this is going to be something that enters on a societal level, what does it mean for the long-term access and long-term fidelity of these systems? Both in the continuity of access, but also, what happens if one day the company your loved one's digital clone is hosted on gets acquired, or they're no longer supporting that version of the virtual clone? And so I think it's also important to think about the long-term implications as far as continuity of access, but also long-term fidelity beyond just the visual representation: almost a metaphysical ability to have this archival and legacy access.
[00:34:55.333] Kent Bye: Yeah, maybe it's a good time to move into this fourth section, which is how an identity can be stolen and reconstructed as a virtual clone. There are kind of like six different phases that we talk about with these different metaphors. The first is a still image, like taking a photo of somebody, then the video recording, and then we start to move into an avatar, a 3D model of a virtual clone, just modeling your identity and physical representation. And then you start to add onto that avatar the replication of your behaviors and motion patterns, and so it starts to get into a little bit more of the creepy and uncanny area as you start to have a virtual representation of yourself. And then you have the avatar with full human interaction, which is all the other emotions, and with the Codec Avatars, which Meta is working on in terms of the photorealistic reconstruction of your identity. And on top of that, we talked about earlier the memories and the emotional patterns and the other aspects that may have been captured and replicated in this virtual clone. And then you have the self-aware virtual clone, which, you know, gets into AGI and consciousness, and to what degree we can ever capture and model that gets into the deeper philosophy of consciousness, and questions as to whether or not that will actually ever be possible, but it's kind of a more autonomous representation of yourself that's running around in these virtual spaces. So I'd love to hear some reflections on these open questions that come up, and also on this progression through these different phases, from photo to video to avatar, layering on different layers of behaviors and motion patterns, but also the fully interactive dimension and then the self-aware speculative future that we're headed into.
[00:36:24.908] Thommy Eriksson: Yes, I guess one observation is that the different levels that you described there, Kent, are quite broad, actually, because the most simple levels, we already have. We have photographs, we have video, and so on. We have even started to have the technical possibility of fake video of a person, for example, with deepfake technology and so on. At the same time, the upper levels there, where the virtual clone is indistinguishable from the original person, that is still something that feels very far off, and I'm not really sure we all agree on whether it is eventually technically possible. At least from my perspective, we will reach that possibility eventually, I think so, but no one really knows. In that spectrum that you talked about, we basically have one end that is almost mundane, kind of everyday life, and the other end of the spectrum is science fiction, and most people aren't really aware of it, or maybe don't believe that it will be technically possible. And I think, to connect with what you said before, Mathana, that it's important to talk about it. I think that this span makes it a little bit difficult to talk about, because some things, like deepfakes, okay, we have that, and most people maybe don't have much of an attitude towards that. And the science fiction stuff, the Black Mirror stuff, is a little bit far-fetched, so people don't care so much about that either. So I think that is an issue with talking about it, that we are on this continuum and we just see a small slice of it right now. Another thing that makes it difficult to talk about is that some of the worst-case scenarios are a bit sensitive to talk about, because they involve things like romantic relationships, sex, death, and so on. And it's easy to talk about surveillance capitalism and Facebook and Meta and social media and so on, because that is quite abstract.
But when we start to talk about what one person could do to another person, then it becomes more... To be honest, I think that we kind of start digging into a quite dark part of our own imagination, and that makes it difficult to talk about.
[00:39:08.590] Mathana: Maybe I'll just add a couple of almost metaphysical or philosophical questions or thoughts on this. We've created an ontology or taxonomy of different modalities of digital representation, looking from a contextual media analysis, going from the static photograph all the way to the self-aware virtual clone, this kind of progression. This, again, is what we were talking about at the beginning: the identity of the thing itself from a media analysis point of view. What is this thing that we're talking about? We can describe this thing as a photograph. We can describe this as a theoretical, you know, self-aware virtual clone. But I think there's also this important part of how I, as the observer, see it. As we're talking about deepfakes, part of the thing is that people have talked about a lack of digital media literacy, not being able to parse a real video image from a deepfake, and how this blurs the epistemic perception of people. I think it is also interesting to think about this question: with the character, the digital avatar or seemingly virtual clone that I'm interacting with, to what degree, on a philosophical and metaphysical level, does it matter what the agent behind it is, whether it's a real person or an algorithm, you know, a virtually aware clone? And I think this is also something interesting to think about, because it turns the mirror from the thing that we're analyzing back to the observer. And, you know, what are the ethics of that? It kind of puts some other things on the table. You know, say the digital avatar is a representation that looks like one of my in-game play friends. Is it important to know when this character is just on autopilot, let's say if they are at stage four, the avatar with replication of behavior and motion patterns?
What are the ethical prescriptions for letting me know who, slash, what I'm interacting with? And I think this is going to be important as well. Should there be a badge that says this person is on autopilot, they're actually not here? Is that going to change the way that I interact with this character? So I think it's also interesting to think about, eventually, if we're in the AR space as well, and a digitally rendered police officer comes up on the AR screen across my glasses and says, you, stop, empty your pockets. To what degree is power and authority transferred through this sort of digital twinness? You know, is a digitally replicated, algorithmically generated police officer still imbued with the same authority of the state as a police officer IRL? And so I think this is going to get interesting, not just on the question of what the locus of analysis is, but also on the transparency around me knowing what this sort of rendered identity is operating under. And then it gets even more complicated as we throw power structures into the mix and start asking to what degree authority can be transferred into the very notion of digital clones.
[00:42:23.246] Kent Bye: Yeah, I feel like as we dive deeper into this topic, there are just these rabbit holes that go into book-length discussions, and maybe even unresolved philosophical questions that start to come up. You end this section with a series of questions that I just want to read through, calling back to what you were saying, Tommy, about this spectrum between the mundane that we already have, where people take photos of each other all the time and don't even think about it, all the way up to the science fiction, potential Black Mirror future that is so far out that people aren't even seriously considering it. I remember I did an interview with a behavioral neuroscientist about how, with all this biometric data, we know there's going to be some sort of line or ethical threshold that we're going to cross over, and about trying to map out the cartography and topology of what that line is and how to define it. Because it gets to the essence of the ethics around all of this: we know if we go too far, it's going to be creepy, a very dystopian world that we don't want to live in. But we're in this mundane world, so how do we know when we've crossed over from the mundane into this extraordinary realm, a transgression of what these thresholds are? So as I read through these questions, we're not going to have time to answer any of them really in depth, but as a reflection of this as an issue, I just want to read them and hear any other thoughts you have. So there are these questions of: when does replication of identity become unethical or problematic, and why at that point? Where lies the essence of an individual human identity: in the level of complexity, the amount of interactivity, or the level of fidelity to the original individual? What is the essence of a person? Which, again, is a really deep philosophical question.
At what level of fidelity do ethical concerns become an issue? How should the level of fidelity be quantified and agreed upon? And in what ways does the construction method have an impact on the ethical concerns? Is there a difference between a volumetric capture and a manually modeled and textured avatar, like there is with a photograph of a person? We didn't mention it earlier, but, yeah, there are a lot of photographic volumetric capture techniques that are actual documentary captures. So just the same as with a photo and a video, there's a kind of video recording of you versus something that's generated from a computer, which I think that question gets into a little bit. Are there differences between those, ethically? And then the final questions you have are: is there reasoning for the differentiation between public figures and people who are more private, in terms of the virtual clones? We kind of have a cultural understanding about what it means to be a public figure and what's allowed as a natural part of cultural discourse and critique around different power structures in that sense. Is it comparable to the journalistic code of ethics? And can it differ depending on whether someone's a public figure or not? So I think that maps out a lot of questions. I'd love to hear any other thoughts you have as you're mapping out the landscape of the different potential questions that all of this starts to bring up.
[00:45:05.435] Thommy Eriksson: Yes, I think especially the last two are interesting to reflect on, the construction method and public figures versus more private individuals, because we already have real-life situations that we can consider. If we think about how people react to and think about replications of themselves nowadays, with the technology that we do have, I think it is clear that different people have very different ideas about it. Let's say that I walk around in the street and take photographs of people. There are some people who wouldn't mind at all and wouldn't say anything, but there are also people who would be very offended and think that it was an invasion of privacy. And then another thought experiment: let's say that I walk around, or sit on a park bench, and make a drawing of a person. There you have another set of reactions. Maybe most people would feel that, yeah, okay, that's okay, but I'm sure there are some people who would feel it a little bit creepy or awkward that I'm making a drawing of them, and so on. So it's kind of interesting that we have real-life situations that at least give some indication of how we would feel if the replications were even more realistic, and we also had interactivity with these replications. And I think it is quite likely that people would react very differently, that you will have a very broad spectrum. Some people will just think, it's just a clone, it isn't me, so do whatever you want with this replication of me, while others would find it very offensive.
[00:46:53.952] Mathana: Tommy brings up this interesting distinction between a photograph and a drawing in a public space. And I think there's something interesting about this as well, because a lot of these systems we're talking about also have capitalistic and monetary incentives behind them. So it's one thing, perhaps, if somebody is there sketching for their personal sketchbook, but people might feel differently if they went into a gallery and saw a sketch of themselves for sale. And so I think there's also this question about what the fundamental rights and protections are. You know, it's maybe possible to copyright an image of yourself, and if you are a public figure, in some cases, in some places like New York, there already are statutes called personality rights or persona rights that maybe give you some inherent protection. But these, again, are not universal. And to me, one of the big questions here, just to go along with what Tommy had just said and the questions that we ask, is: what is the onus and burden on individuals to protect themselves, so that their likeness and image are not taken without consent and, worse, potentially monetized?
[00:47:55.875] Kent Bye: Nice. Well, we have three more sections to get through here to kind of wrap things up. You know, section five is the reasons why people are deciding to do this. One is because they can; you talk about some misdirected benevolent reasons and some directly malevolent reasons, and then the first-person, second-person, and third-person perspectives. So this seems to be an exploration of the different contexts in which this is beneficial, of why we are doing this. I mean, obviously, when people are going into VR, they want to have some sort of expression of themselves that captures their essence, but I'd love to hear some of the reflections on why we are doing this and what the implications are, in terms of how the culture is moving into this post-pandemic realm of these virtual spaces and then embodying ourselves, with the physical embodiment and the virtual body ownership illusion of taking on these identities. There seem to be compelling reasons for it. So I'd love to hear any reflections that you were going into in this section.
[00:48:49.495] Thommy Eriksson: Yes, I think all three are equally relevant, actually, because when you think about how technology can be used in a bad way, it is easy to think that, yeah, it is directly malevolent behavior or usage of technology. But I think it is also quite common, in a very broad sense, that the people, the engineers and designers who develop technology, have a good purpose, but it doesn't end up very good anyway. I think we have many examples of that. And also the first alternative, that we do it just because we can: that might sound superficial, and it's kind of easy to think, well, that doesn't happen. But at least from my perspective as a technology skeptic, I definitely think that especially researchers, but also companies, are very often doing things just because they can. And that is not a good reason to create technology. That is something that I believe very firmly.
[00:50:04.260] Kent Bye: Yeah. And I think you're starting to explore different aspects of crossing over that ethical line, in terms of, like, virtual pornography, or is it okay to be cosplaying as a celebrity? Or what if a journalist is interviewing a dead person who is no longer around, but there's a recreation of the essence of that person? To what degree is that model an accurate representation of who that person was? Is it more on the side of truth, or is it more in the realm of fiction that's harmful in some ways? So trying to map out what those harms and benefits are, I think, is a part of this. But also, in this era of deepfakes, being able to reconstruct things, to have people say things that they never would have said, or to have representations of people in situations and contexts that they would never actually be in. So I think there are a lot of ways that, as we move forward and are immersed in these virtual spaces, where there's only a small differentiation between the virtual and the physical IRL reality, the ability to manipulate and control this, I think, is going to get really quite weird. So yeah, sex and violence and all these other aspects that you're kind of digging into and unpacking a little bit more. Well, as we move on to the last couple of sections here, the dead or alive section, maybe it's worth just briefly mentioning that there are a lot of cases where actors have passed away and then you have digital reconstructions. And there have already been applications within the context of VR where maybe someone passed away early, and then they take whatever media artifacts exist and try to reconstruct that person's identity.
So after they've passed, you have this virtual immortality dimension. But again, maybe you could explore some of the other ethical issues that you pointed out here in this section on dead or alive.
[00:51:43.498] Thommy Eriksson: I can return to the question of the memory of, in this case, a deceased person. And I can take a small detour and tell you about another VR project that I've been doing for a couple of years. Very shortly: it's a reconstruction of a vintage cinema here in my hometown. The original idea came about because it was one of the cinemas I had my first moviegoing experiences in, and it was also an architecturally interesting building. So I came up with the idea to reconstruct this cinema in VR and then be able to watch a movie in it, a little bit like Bigscreen or similar applications. Now I've had four different student groups doing this as student projects, and I have been watching a movie in this reconstructed cinema. One thing is very clear, at least for me, and I do this as a kind of phenomenological study, where I try to observe what happens with me emotionally. My initial hypothesis was that seeing this cinema reconstructed would enhance the existing memory. But in my experience it actually did the opposite. I don't remember the original experience at all now, or only very vaguely, much more vaguely than before. So basically what happened is that the reconstruction of the experience overwrote the existing memory of the experience. What that means, at least from my perspective, is that a reconstruction can replace the memory of the original experience. And if we transfer that to the potential opportunity to reconstruct deceased persons or relatives, well, then maybe we have a risk that the reconstruction can actually deform, or even take away, the actual memory of the person. And I'm not really sure that is what we want.
[00:53:57.338] Kent Bye: I've definitely had a similar experience of, say, going into Google Earth VR and visiting places that I've lived. There's a really low-poly reconstruction, but maybe it's enough of the architecture of the place to remind me of it. And I have experienced that pollution or fragmentation of my existing memories based upon the virtual reconstruction. So yeah, when you start to talk about loved ones and reconstructing them after they've died, then you're playing with fire, in the sense of risking destroying your authentic memories of that person, replacing them with something that, coming back to the limitations of the algorithms, is based on what you've been able to model, with things that are kind of off. So yeah, I don't know if you have any other thoughts on that section.
[00:54:35.228] Mathana: So there's one section, and I thank Tommy for this. You know, this whole paper came out of an interesting discussion that had been going on for a while, and Tommy really led the development of the paper. And I was going to say, there are some really great lines, and one line really stood out to me as I was rereading, which says a dead person themselves loses control over their identity. I think this is a small thing that gets missed sometimes, even more so in the idea of digital mimicry, especially being able to have speculative memories put into an overlay over the digital clone. In this same section, we also bring in the historical context and say that throughout history the dead have been preserved in different ways, whether mummified or celebrated, prayed to, in certain cases seen as deities, ancestral worship as well, all these sorts of things. And so it's interesting, as we're having this conversation from an often Western, logic-centric perspective, this sort of life-ends-at-death sort of thing. If we look anthropologically through the world, this is actually not the view that most people have taken, for most of human history, when it comes to the end of life. I think the different cultural contexts are also important to remember here. Confucianism and other philosophies have respect for one's elders, but also, as I mentioned, for ancestors, and praying to them, days of the dead, and these sorts of things. So I think there are also some interesting ways to think about how, outside of a Western logic context, a very hyper-scientific, hyper-rational, realist perspective, some of these tools can also be used as ways to reconnect with those who have come before us.
But I think it's important also to make sure that it's not a pay-to-play sort of scenario, in which only those with access to capital are able to enter into these sorts of almost metaphysical relationships with the dead. And so, as we're thinking about the ethics of it, I think it's also important to not get pigeonholed in just the contemporary 2020s hyper-Western, logic-based system that says, oh, here's the nice little box for life and death, and to be able to open up our ethics to a more global perspective here.
[00:56:56.463] Kent Bye: Yeah, I think there are really important points there, especially when we go back to the discussion about the spectrum from the mundane of taking photos and videos all the way up to the reconstructions. I mean, there are some cultures that don't even want to have their photographs taken, because there are beliefs around the capturing of the essence or the soul, which I think replicates here: when you have a representation of something, it takes away from the lived, dynamic experience of that person. So yeah, I think it's worth bringing in those alternative perspectives to see where those lines are. Depending on the culture and what they're comfortable with, their threshold is in a different location. So it's important to reiterate that. In the last two sections, starting with the risk assessment, you look at the present day, the near term, and then kind of the far future, in terms of the existing landscape of threats that we have, the transgressions that may happen around identity, and then what to do about it in terms of recommendations. So I'd love for you to maybe briefly map out how you see this progressing. It kind of goes back to the spectrum from the mundane to the science fiction: as we move into the future of these technologies, and we have more capabilities of modeling and capturing different aspects of our identity, and the potential of AGI and self-aware expressions of our identities, then what are the implications, and what should we be aware of in terms of our identity? How do we start to understand a mapping of the harms that are possible, which can help inform us as we go into the last section, the recommendations of what to actually do in terms of policies and other things?
So I'd love to hear a little bit of reflection on the risks, and then the recommendations.
[00:58:33.310] Thommy Eriksson: In the text, we mention a lot of different risks, and we have been talking about them: sexual abuse, violence towards virtual clones, and also the disruption of memory, as we have been discussing, and so on. But I think it is hard to predict what will happen with technology. I think that is an important aspect of technology, that we don't really know what it will be used for. So instead of pinpointing, yeah, this is the risk, this is the risk, I would like to highlight the quote that I mentioned before: if you imagine a future where everyone will be able to (virtually) do anything to anyone, and from that you dig into your dark imagination, then you can come up with pretty scary scenarios, I think. And as I've said before, I really appreciate how science fiction is very good at doing this. I think that even if some of the ideas that we discuss in the text feel new, very much of it has been discussed or reflected on for decades, really, in science fiction, and also by academics, of course. So I think my answer is: use your worst imagination.
[01:00:06.843] Kent Bye: Yeah, that's probably a good place to leave that. And you go into a little bit more detail in that section, exploring some of the other contexts as well. But yeah, as we move into the future, if you can do anything to anyone virtually, then it doesn't take too much imagination to see how that could start to go really wrong. Okay, so the last section that I wanted to cover here is the recommendations. We already heard from Mathana about recommendation number one, which is let's talk about it, which is essentially what we've been embodying throughout the course of this conversation. And Tommy, I'd love to hear you go through recommendation number two, establishing a body right, as well as the last recommendation, recommendation three, establishing an identity donor card, and give some reflections on those other two recommendations in this eighth section here.
[01:00:49.597] Thommy Eriksson: Yes. As you say, the second recommendation that we have is something that we call both a person right and a body right. The idea is to suggest, basically, a law. I'm not really sure who would design such a law, but even if it doesn't come to legal legislation, then at least when people are developing technology and VR experiences and applications and so on, we can have it as a rule of thumb. The idea is to do something that works a little bit like copyright. When you have created some material or a piece of art or something like that, then you have the rights to it. And you have two rights: you have the right to use it, to sell it, and so on, but you also have the right to be acknowledged as the creator. So that's copyright in very short. Our idea is to suggest, as I said, a similar right for a person. I am very much in favor of finding an easy-to-remember, easy-to-understand, catchy phrase. I prefer the term body right, but there is a problem with that, of course, and that is that it focuses on the body, it focuses on the physical appearance, and it's easy to misunderstand that it should also include behavior and memory and so on. But of course, the advantage of a term like body right is that it's very easy to understand that it's something similar to copyright, so you have an intuitive understanding of what body right could be.
[01:02:40.818] Kent Bye: And yeah, there's the idea of creating an identity donor card. So maybe you could go into: what does it mean to donate your identity?
[01:02:48.664] Thommy Eriksson: Yes, and it's a little bit similar there. As we've been talking about previously, I think that sometimes we can look at what we already have and base our new ideas or new ethical limitations on that, because after all, we might already have something that is close to a solution. And that was our thinking when it comes to the donation card, or the identity donor card: to kind of piggyback on how a donor card works. In a donor card, you specify whether it is allowed to use your physical body, and in what context, and what parts, and so on. And if you don't have a donor card, I'm not really into the legal matters here, but I assume that in most countries it is then assumed that you don't allow donation. Basically, the idea is to devise an identity donor card in a similar way, but for your identity. You specify: yes, it is OK to create a dialogue system that my relatives can communicate with, but I don't want it to be used by commercial companies, for example. So just as you specify which parts of your body may be used, you can specify which parts of your identity you accept that others use, and also who can use them. And in a similar way, if you don't have this card, then it is assumed that your identity is not allowed to be used in any way. And this is kind of focused on when you pass away, obviously.
[01:04:35.915] Kent Bye: Yeah, some of my own reactions to the future of identity and how to wrap your mind around it. Because I do think there is going to be the larger context of surveillance capitalism and the neurorights of the right to identity, the right to mental privacy, and the right to agency. With the right to identity, and how to define the limits of identity and how it's used in the context of surveillance capitalism in concert with our mental privacy and our right to agency, there's the risk of modeling our identity to the level where you start to understand people maybe better than they understand themselves, and to start to nudge their behaviors and undermine their ability to take intentional action. So I think that's the larger context we talked about earlier. But in terms of the recommendations, we're in a realm where a lot of identity has been defined as personally identifiable information, and there needs to be what Brittan Heller has defined as biometric psychography: starting to at least legally define some of these biometrics and physiological data and the implications of how they're tied back to identity as an individual. Maybe that's not personally identifiable, but it's revealing parts of identity amongst different contexts, and the relational, contextual dimensions of our identity that aren't so immutable in terms of who we are, like our fingerprints and our DNA. So I feel like we're moving into this realm and we need to have some legal frameworks to do that, but also to have a larger context for how to live into a world where there's surveillance capitalism, knowing that there may be these ethical thresholds that we're starting to transcend.
I think you start to map out some of those different dimensions within this paper, but it's worth calling out, in terms of recommendations, some of the other things that are still yet to be done, from a legal perspective but also a tech policy perspective, in how to start to address this. And I think Brittan Heller's concept of biometric psychography is a good start. There are lots of other discussions in the AI realm and other places as well. But before we start to wrap up, I'd love to hear some of your thoughts, Mathana.
[01:06:23.590] Mathana: I'm unfortunately going to have to take off in a moment. And before I talk about this, I just want to thank Kent for taking this time and for having this conversation. I think it's super important. Thommy, thank you again, as I said, for leading this chapter's development. I hope that people who are watching this come back to the paper. They've gotten an idea about what we put together here, and it can also be a benchmark for others to start thinking about these issues, both ontologically and taxonomically. And again, my name is Mathana, and it's been a pleasure speaking with you all. The one thing I'll say that sums up and puts both the risk assessment and the recommendations into one final thought is that we need to start thinking about this as an intergenerational issue. We talked in some ways about the right of memory, but there's also the right to be remembered as well. What are the ways in which I'm able to control the way that I'm remembered? And if I'm remembering through these systems, what sort of transparency do I have around what exactly I am remembering? So there's this dynamic between memory and visual representation. And finally, what is the chain of custody over identity and memory going to look like? To me, my final point, taking these two together, is: how can we create systems that place the lowest burden on individuals to actually have control over this chain of custody and this sort of right to memory and remembering? It's going to be important to not make that bar so high that people have to opt into systems.
And so maybe one day we will see a system in which we're going to need a living will that covers our end-of-life decisions in a digitally rendered way. But that's going to put a lot of burden on individuals. And this is going to be a new thing inside of a new societal structure that a lot of people might not even know exists, or know that they need to do. So, yeah, those are my final thoughts on that. And I'm not all pessimistic, but I think there are things that we now need to think about across the ethics field, academia, industry, the regulatory side, and also for policymakers. I think that right now is a very exciting time. And I'm glad that people like Kent have dedicated their lives to having conversations with such a wide range of individuals who are working and thinking about these things, and also the work that you've done, Kent, over the last couple of years to help us structure the way that these papers come out. So thank you for your work here. And Thommy, again, thank you for your insight and for leading the development of this paper. With that, I'll leave you all and let Thommy take over the rest. It's been a pleasure, and thank you both for the opportunity and privilege to be here with you.
[01:09:02.978] Kent Bye: Awesome. Thanks so much, Mathana. One quick question before you go: I don't know if you wanted to answer the question of the ultimate potential of VR and what it might be able to enable. If you have any quick thoughts on that, or if you have to go, that's fine too.
[01:09:16.223] Mathana: I think that, as I mentioned at the top, I am an interplanetary philosopher, and a lot of the work that I do is also thinking about human inhabitation of space. I think one of the great potentials might actually be in the future, when there are enclosed environments, whether because of climate change or catastrophe, or because humanity eventually becomes a multi-planetary or even interstellar species. For long voyages, one of the ways we are actually going to be able to use VR and other sorts of digitally mediated interfaces is to onboard people into new ethical paradigms. And particularly, if we think about maybe taking family, ancestral legacy, and socio-cultural context out of it, being able to have a kind of standardized onboarding process into being human. I think there are some very interesting long-term things there. To me, if we're thinking about the far future, it's being able to integrate this for long voyages, for space travel, but also using these tools as a singular sort of education. I use the word onboarding, but I think that's what it is. If we entertain the thought that one day machines are responsible for education and these sorts of things, that could actually be a common ground that enables this new sort of epistemic knowledge transferal. And in that sense, yeah, I think we have to think about using all the tools at our disposal as we're thinking about both the mid- and far-future implications.
[01:10:38.997] Kent Bye: Awesome. Well, thank you so much, Mathana. I'll let you go on to the rest of your day. Thanks for taking the time to share some of your thoughts.
[01:10:45.681] Mathana: Rocking. Thank you both again.
[01:10:48.323] Kent Bye: Great. So I guess, Thommy, I'd love to hear: what do you think is the ultimate potential of virtual reality, and what might it be able to enable?
[01:10:56.228] Thommy Eriksson: Well, that is, of course, a very good question. And I love all the answers that you have been getting in the previous interviews. My answer comes from my view on technology. For me, technology is a tool, and it is primarily a tool to solve problems that we have. It can, of course, be a tool for just enjoyment, like in computer games and so on, but you can phrase that as a problem also: I want enjoyment. What is the solution? A computer game. So if we see technology as a tool, I would say that the ultimate potential of VR is to be the ultimate tool. And computers are, in a way, already the ultimate tool, because one of the great benefits of a computer is that it is truly a multipurpose machine. Historically, we have constructed machines that are specific; they do one thing. A car drives you from A to B. A hammer hits a nail and drives it into the wall, and so on. But the computer is the very versatile, ultimate machine. And maybe one way to see VR is that it is computers 2.0, an even more ultimate machine. What I mean by that is that there are so many things we can do with it: remote collaboration, teaching, enjoyment, design, planning, and so on. And I very much like one of the quotes from William Gibson, I think from one of his short stories: the street finds its own uses for things. I think that goes for computers, and I think it goes for virtual reality. It will be used for so many things. So it is the ultimate tool.
[01:12:59.278] Kent Bye: Awesome. Well, Thommy, thanks so much for taking the time to write up this paper. I think this is a deep topic that can be difficult to really pin down in terms of the philosophical implications of how to even define identity. I think privacy falls into similar traps sometimes, where you try to define something and there's no good way to pin it down. I appreciate the very pragmatic approach of stepping through both some of the privacy implications and the implications for our identity, how it's represented along the spectrum from the mundane into the science fiction future that we're running into. At some point we're going to start to get into the creepy or unethical areas where it's going too far, depending on how we're doing it, what the context is, whether or not we're consenting to it, and the power dynamics around it. I think there are a lot of moving parts that are still yet to be determined, but this is a great start at catalyzing the initial conversations around this topic. So thanks so much for joining me today to unpack it all. Thank you very much. So that was Thommy Eriksson. He's a researcher and teacher at Chalmers University in Sweden, working on both media and VR for remote collaboration, as well as Mathana. They're a Berlin-based tech ethicist and interplanetary philosopher. Mathana has also been a part of the executive committee as the vice chair, helping to attend all the different monthly meetings and the plenaries and everything else. I've been working alongside Mathana on this whole IEEE effort for the last couple of years now. I have a number of takeaways about this interview.
First of all, when I think about this concept of virtual representation, I think it's a really handy model to go from the utterly mundane, from a still image and a video recording, on into things closer and closer to these science fiction futures and potentials: starting with an avatar as a 3D model, then an avatar replicating different aspects of your behaviors and your motion patterns, then an avatar with full human interaction, and then eventually getting to this actually self-aware virtual clone. Now, I'm a little bit more skeptical that we're ever going to actually get to the point of being able to recreate people within artificial intelligence. But it is likely that these companies will be able to record a whole ton of information about us and then make these clones of us through machine learning and artificial intelligence. It's not going to be the essence of us, although there may be enough of the essence of us to start to trick and fool other people. There's this larger question of whether it may actually be possible for computing to get that sophisticated, so that we'd have no ability to differentiate between a clone and a person. Again, I'm a little more skeptical that that's going to be the future, but it's a possibility. One of the things that Mathana was bringing up is that there are a lot of limitations when it comes to these AI models. There are always going to be things that are incomplete, and the models are going to preference certain populations: if a model is trained on a certain population, it may not work with other populations. That's a lot of the work that's been done in AI ethics and machine learning explainability, understanding the limitations of artificial intelligence. It's a line of reasoning that I've also seen argued in a broader sense by Cory Doctorow, who wrote How to Destroy Surveillance Capitalism as a reaction to Shoshana Zuboff's book on surveillance capitalism.
I like Avi Bar-Zeev's opening statements that he made a year ago now at RightsCon, where he compared these ad companies to casinos, where you just need to have favorable odds. As long as you're able to tip the odds in your favor just a little bit, then you can still be wildly profitable. It doesn't have to be exactly precise; it just has to be close enough to drive certain decisions. But what's interesting is that as you start to move forward into having different virtual representations of yourself, think about what's already possible with deepfakes: being able to take someone's voice and recreate it. What happens when you start to have somebody who mimics your identity and your personality within the context of these immersive worlds? How do you establish your rights there? One of the things that Thommy said is that one way to summarize the risk is that everyone will be able to virtually do anything to anyone, and this must be taken into serious consideration. Yeah, I'll leave it up to your imagination to see how that could go horribly, horribly wrong. At what point is that going to start to cross ethical thresholds? How do you start to establish control of your body rights? There's another issue there that they talk about in terms of reconstructing people after they've died. You can actually start to pollute or corrupt the memories of people that you actually remember: when you create a simulation of someone, that starts to supplant the memory of their being. What are the implications of that? You're going into these different realms at your own risk, maybe overriding existing memories that you have of people. I think there are a lot of transhumanist elements happening in the context of this conversation about being able to capture the essence of your memories and pass it on to other people.
I think that's, for me, a little bit more speculative in terms of what's going to be technically feasible in the future. For me, at least, there are probably a lot more near-term concerns around the digital twin aspect of your identity, what these companies are doing in terms of mapping your identity, and how to start to put a firm boundary around that. A lot of this comes down to the definitions around the difference between identifiable data, that is, personally identifiable data, versus de-identified data. Right now, there are no restrictions on de-identified data, because the companies are essentially saying that it doesn't tie back to your personal identity. Identity ends up being a key legal concept when it comes to the different classes of data being defined: there are certain types of data that are revealing of your identity, and you have to take special care with those. A lot of the different privacy laws here in the United States make these differentiations between identified data and de-identified data. The challenge with XR data is that a lot of that de-identified data may actually be identifiable through the right machine learning algorithm. But for me, the bigger issue is that a lot of that data is feeding into these psychographic profiles, what Brittan Heller calls biometric psychography. All that de-identified data is being correlated to your identity and then used to map out all your likes and preferences, and a lot of the things that they're talking about here in terms of your physical movements, your behaviors, your memories, dispositions, emotions, and the expression of your essential character. All those things are being tracked and monitored, either as hard biometrics or soft biometrics, or for this psychographic profiling. Then what happens to all that data is also a big question, as is what rights we have around different degrees of our identity being modeled.
That, to me, is the big takeaway in terms of where it fits into the neurorights: the right to mental privacy, the right to identity, and the right to intentional action and free will. But in terms of this conversation, your virtual representation of yourself is going to be a big part of the discussion they're having here. It is going to be an issue in terms of whether you have ways to verify, or to revoke, people's rights to represent you. Within the context of these social media platforms, there are verified avatars that get a little blue checkmark, and that is at least some attempt at establishing which identities are actually authenticated. But if these technologies become pervasive, then is there an underlying verification process to ensure that you're actually talking to the person and not some mimicked version of them? There is this ethical boundary between what we're already doing with the utterly mundane and where we are moving, more and more, into this science fiction future. I think it is useful to plant a flag in the ground, saying this is what we can at least see as a distant horizon for what the science fiction, artificial general intelligence future might live into. But it's worth thinking about what happens when someone starts to not only record all your information, not just to sell you ads, but to try to mimic you as a personality, creating a deepfake of you that gets represented out into the world. So what are the ways that we can have autonomy over that? So anyway, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR.
Thanks for listening.