#1182: Recreating Philosophical Moral Dilemmas in VR, the Gamer’s Dilemma, & Virtual Ethics

Andrew Kissel has been recreating moral dilemmas like the Trolley Problem in VR at his Virginia Philosophy Reality Lab at Old Dominion University. I had a chance to talk about his work in moral philosophy, catalyzed by Morgan Luck's paper "The gamer's dilemma: An analysis of the arguments for the moral distinction between virtual murder and virtual paedophilia," which tries to define the ethical threshold between different types of virtual wrongdoing. We may have an intuition for why virtual murder in video games is morally justifiable while virtual paedophilia is not, but we explore how ethical frameworks like Consequentialism, Virtue Ethics, and Deontology break down this problem. Kissel invited me to give a keynote talk at an Exploring the Humanities through VR Workshop held on December 10, 2021, where I presented on "Process Philosophy & VR: The Foundations of Experiential Design." We debate process-relational metaphysics vs substance metaphysics in the last part of this interview, and I'd recommend checking out my conversations with Whitehead scholar Matt Segall here and here as well as with Grant Maxwell for more of a deep dive on the nuances of a process-relational perspective and why I think it's so useful for thinking about VR.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that's looking at the future of spatial computing and the ethical and moral dilemmas of XR. You can support the podcast at patreon.com slash voices of VR. Continuing on my series of looking at some of the ethical and moral dilemmas of XR, today's interview is with Dr. Andrew Kissel, who is an assistant professor in the Department of Philosophy and Religious Studies at Old Dominion University in Norfolk, Virginia. He runs a VR lab called the Old Dominion University's Virginia Philosophy Reality Lab. They've been recreating moral dilemmas within XR with the trolley problem, and I had a chance to check out their VR experience: you go in and a train is headed toward either one person or five people, and you have to decide whether or not to pull the lever to kill one instead of five, plus other variations, like pushing people off the bridge. So it's using these virtual reality technologies to explore some of these different moral dilemmas. In my previous interview with Erick Ramirez, we talk about using virtual reality to explore these different types of moral judgments. This is a conversation that I actually did like a year and a half ago, back on December 14th, 2021. This was like four days after I gave a whole talk on process philosophy and the foundations of experiential design; they gave me the option to talk about whatever I wanted, and I chose to talk specifically about process philosophy. And so at the end of this conversation, we dig into some of the different Western intellectual tradition perspectives of substance metaphysics versus process-relational metaphysics. In my next interview, I'll be talking to the Whitehead scholar Matt Segall about his latest book to kind of dig into more of those metaphysical discussions.
But in this conversation, we're talking about applying different types of ethical systems and trying to evaluate things like the gamer's dilemma. Are there different contextual dimensions that make it different, and why is it different? We also look at the major approaches to ethics, whether it's a consequentialist approach, virtue ethics, or a deontological approach, for looking at some of these different issues. So that's what we'll be talking about on today's episode of the Voices of VR podcast. So this interview with Andrew Kissel happened on Tuesday, December 14th, 2021. So with that, let's go ahead and dive right in.

[00:02:19.087] Andrew Kissel: My name is Andrew Kissel. I'm an assistant professor in the Department of Philosophy and Religious Studies at Old Dominion University. And my exploration into VR has been a weird and sordid tale. No, I actually got into VR initially through a philosophy and video games course. I was a graduate student at Ohio State, and right as I was getting ready to leave, they said, hey, they're putting together an interdisciplinary game studies major and want to know if the philosophy department has any interest. I'd been thinking about some of these things because I was a hobbyist in video games in the background. So I threw together this syllabus real fast, but never got the chance to teach the course. So when I came to Old Dominion University, I said, like, hey, I've got this course that I haven't had the chance to teach yet. And I added a VR component there because I had just gotten the PlayStation VR headset, and I was excited to share it with my students. But as a professor, you have to do research at the same time that you teach. And I found myself asking questions about moral decision-making while playing virtual reality video games. There's a problem that was getting traction at the time called the gamer's dilemma that I can go into if we want. So I started publishing on when, how, and if it's ever appropriate to make moral judgments on the basis of purely virtual actions, and to what extent it depends on real-world consequences, or if strictly virtual domains are appropriately considered moral domains as well. And that then allowed me to start talking to some other people who are developing virtual reality experiences, which has led to my most recent project, which has been an effort to create philosophical thought experiments in virtual reality in order to gather data, study it, and try to get some clearer understanding about how people make moral decisions.
So over the past year, I've been working on a National Endowment for the Humanities grant to create trolley problem thought experiments in virtual reality and have people run through that. So we're just coming to an end on that project. So.

[00:04:21.367] Kent Bye: Okay. Nice. So maybe you could talk a little bit more about your journey into VR proper and the first time that you came into it. And then obviously you've got this background in philosophy, and, you know, how those two have continued to intersect within your own journey, and how you're bringing that philosophical orientation into the work that you do within VR.

[00:04:41.436] Andrew Kissel: Yeah. So, I mean, my first experience with virtual reality, I was with my in-laws at a mall, and there was some Microsoft setup for virtual reality. And I started geeking out because I'd seen enough. I want to try this. I want to try this. So my first experience in VR was in front of my now wife's parents, flailing about doing one of those cliff climbing things. And I mean, I feel like most people, their first experience with VR, you kind of get that moment of presence when you first put the headset on and sort of everything around you disappears. I was just extremely struck by it. And I thought to myself, like, I need to get one of these for myself. But as a professor working at a small university, there's not a whole lot of extra cash laying around. So I actually got advice from a colleague that said, like, hey, if you've got expensive hobbies, if you start doing research on them, you can use your research money to pay for your toys, which I thought was a funny piece of advice. You know, because now I run into that dilemma where they say, if you do what you love, you never work a day in your life, but you also never get a day off, because every time that you're playing something, you're thinking about your work. But as I was starting to explore these virtual reality experiences, I saw many of the sort of tropes of video games trying to replicate themselves in the virtual space. You know, we have had years and years and years of perfecting and fine-tuning first-person shooters. And so when I was seeing gamers move into this virtual reality space, it was like, well, let's just transfer all those lessons over as well. So now you can do the sort of killing that you've been doing all along in video games, but in virtual reality. And I was struck by the way it was affecting me differently when I was playing these games in virtual reality, right?
It felt much more like I was the one doing these things, rather than there being that separation between me, the button, and the character I was playing as. And I came across, I mentioned it briefly before, the gamer's dilemma, which, I'm not sure, are you familiar with the gamer's dilemma, Kent?

[00:06:40.349] Kent Bye: No. What is the gamer's dilemma, especially for others who may not know either?

[00:06:43.374] Andrew Kissel: Yeah. So in the sort of philosophical circles I'm running in, it's this dilemma that this guy raised, and I should have looked up all these names beforehand because I'm not going to remember any of them. But he says, look, here's what people do when they play violent video games, right? They say to themselves, this isn't real. They say to themselves, all the studies suggest that playing violent video games does not make you a more violent person and doesn't make you more likely to engage in real-world violence. And we say it's not real. It's just pretend. And so violent video games are sort of a dime a dozen now. It's sort of a dominant cultural theme. You know, as I was saying, first-person shooters are like a predominant way of interacting with virtual systems through games, to the point where you look at the 90s and there's these, you know, moral panics around Grand Theft Auto and Mortal Kombat and Doom, to today, where, you know, Battlefield, Halo Infinite, Fortnite, you know, it's just expected that, like, killing a virtual individual is just part of what you're doing when you play video games. So we've largely sort of come to terms with violence in virtual settings. But then you say, well, what about other morally problematic content? How about pedophilia, right? You say, is it okay to engage in sex with an underage child in a virtual setting only? And the majority response you get, just from sort of informal polling, is like, no, that's totally gross and disgusting and no one should ever do that. And this is backed up by sort of community standards. You know, Steam as a game storefront pulls games with pedophilic content, but they don't pull games with violent content. The worry, though, is that the kinds of explanations you gave for why it's perfectly acceptable to engage in violent video games are going to be there for pedophilic content as well.
You can say it's not going to make me more likely to engage in pedophilia in the real world. It's just pretend; no one's being harmed, right? And so then this is the dilemma. It looks like if you enjoy playing violent video games, you either need to say that there's some explanatory difference that makes it OK to play violent games but not pedophilic video games, or you need to change your attitudes towards one of these contents. Either you have to admit that playing violent video games is just as bad, in the same way that playing pedophilic games is, or you have to say that playing pedophilic games is just as acceptable as playing violent video games. And so this sort of dilemma has been getting a lot of traction. People have been trying to discuss, like, how can we break this dilemma? And the general attempt has been to say, like, there's some way in which the content is presented that makes it the case that violent video games are not tapping into your actual moral faculties in the same way that pedophilic video games are. But then this just got me thinking, like, at what point do we think we're actually engaging our moral faculties in video games? And when are we just playing pretend? And what does it even mean to play just for pretend? And I think virtual reality adds this whole extra dimension to it, because when we see the kinds of reactions you get to compelling virtual reality experiences, the claim that this is just pretend is harder and harder to maintain, because at the very least, you're reacting moment to moment in a way that seems sort of true to life. When I'm near the edge of a cliff in virtual reality, I get vertigo in a way that I don't when I'm watching the edge of a cliff on a TV screen, right? So, yeah, much of my recent thinking and research has been to try and think about to what extent are the behaviors I engage in in purely virtual contexts appropriate for moral evaluation?
Should I go the traditional violence route and say it's just pretend, I shouldn't make any moral evaluations of this because no one's getting hurt? Or should I go the more pedophilia route, where it's like, at least in some contexts, it looks like we might want to make some moral evaluations on the basis of what you're doing?

[00:10:35.564] Kent Bye: Yeah, I mean, this really gets into a lot of different issues in terms of the virtual and the real. And the other concept that comes up a lot as I hear you speaking is the context, you know, the context-dependent nature of, like, is your behavior only dependent upon this virtual context, or is it a part of your essential character? Or is it maybe creating an experience that is embodied in a way that is starting to mirror behaviors or giving you experiences? You know, there's this argument that has been in the video game realm for a long, long time that, oh, there's no research or indication that these different types of violent behaviors have any connection to the real world. But I remember the first time I shot and killed somebody in VR on a PlayStation VR. It was probably like 2015 at the SVVR conference. It was a demo from PSVR, and, you know, I was ducking behind a desk, and I got up and I shot somebody, and it impacted me at the time. And certainly since then, there's been a lot of different experiences that I've had where I've almost become numb to that visceral experience. But I do think that the quality of presence that someone has, the way that Mel Slater would talk about it from a philosophical perspective, is the place illusion and the plausibility illusion, meaning that, you know, there's an illusion that we're in this place and that place is real and that everything in that world is plausible. But, you know, Slater's approach is that it's all still an illusion, that there's still a part of us that knows that it's still a simulation, and that there's a part of us that is not actually at these places. And then Chalmers has the whole virtual realism argument where he starts to say, is this virtual object just as real as other objects? Or is this experience just as real as these non-mediated experiences?
And so, yeah, I think there's a lot of discussions here in terms of not only trying to talk about the ontological status of these virtual experiences, whether or not they're real, but in terms of moral judgment, my take is that if you're in an environment where you have relationships with other people within, like, say, a social VR context, then the behaviors in that environment may be more exhibitive of moral behavior than something where you're just in a virtual simulation without other people. However, I think it also depends on how many virtual experiences you've been in, because I've been in thousands of experiences. So I'm a bit numb, in terms of, like, oh, I want to see what this experience is like, what it feels like to kill somebody in VR, because I know I'm in VR, and someone else may have different behavior. So I guess as you start to look at these issues, where do you even begin to address them? You've created a virtual experience yourself, but I guess how do you break down these problems to approach them in a way that can maybe isolate some of these various different issues?

[00:13:19.284] Andrew Kissel: Yeah. So I think that's a great question. One of the first places that I like to start is a sort of, you know, very basic philosophical distinction between, like, consequentialist approaches to ethics versus deontological approaches to ethics versus virtue ethics, as a sort of three-piece breakdown. So the consequentialist ethicist is going to come in and say, like, well, when we're making moral judgments and moral evaluations, the first thing that we want to look at is actions. And in particular, we want to look at the consequences or the outcomes of the actions. So this is going to be things like, you know, to what extent does this action promote harm? To what extent does this action promote happiness? And when you bring that sort of attitude towards the ethics of virtual reality, it's very easy to make the kinds of arguments that you say: virtual reality is just pretend; it's not harming anybody. Because, you say, look to the consequences. In a single-player game where you're alone in your headset, the NPCs don't have feelings, so you're not harming them. And so as long as you're not coming out of this experience having learned new violent tendencies or new attitudes that you're then going to carry out to the rest of the world, then there's no reason to make a negative moral evaluation, right? And that's, I think, the way that the conversation largely goes, particularly when we're relying on empirical data, right? It's: look at the consequences, who is harmed, who is benefited. I always think that this is a little bit of a bummer to, like, limit ourselves in this way, because I think if we're going to start to make the arguments like, well, the evidence doesn't show that people are more likely to engage in real-world violence, and we think that virtual reality isn't going to increase negative outcomes, then I think that we also have to say that it's not likely to increase positive outcomes.
And that just seems clearly wrong to me. I can go into this virtual reality setting, I can have this life-altering experience, and it's going to change the way that I view the world once I'm out of the headset, right? If for no other reason than because I've got new concepts, with which I'm evaluating the world, that I learned in virtual reality. So the idea that, like, from the start, there can never be any consequences for me is sort of short-sighted, right? That said, I do find the evidence compelling that, like, violent crime is not likely to increase in correlation with the extent to which you play violent video games. So we're going to have to be a little bit more nuanced in what we mean by the negative consequences on sort of this approach. So then we might stick with this consequentialist approach, but then think in terms of, well, not, like, are you going to go out and be a violent adult, but sort of, like, what world are you endorsing when you engage in this kind of virtual behavior? So it might not necessarily be the case that if I go and play a first-person shooter in VR, I'm going to go out and become a shooter myself. But I am endorsing a worldview that sees violence as a form of entertainment and fun rather than another form. And we might think about the potential downstream effects of this kind of attitude. So it's still consequentialist, but it's less about, like, physical harm and more about the way that we want to carve up the universe as we're interacting with it, right? And so Stephanie Patridge is one philosopher that discusses this: you know, we might talk about, like, a person sort of being hardened by engaging incessantly in this way, even if it doesn't translate to literal violence. And that might still be something that we want to avoid or something we should be concerned about.
But if we put all of these sort of consequentialist approaches aside, the other route that I like to go that I think is interesting is to think in terms of, like, virtue ethics, where we think when we're making moral evaluations, it's not just evaluations of actions, but we sometimes make evaluations of people, right? I say of myself, this is the kind of person that I want to be, and I strive to be that kind of person. Part of being a good person, or a person on the path to trying to live a good life, is going to involve performing the right actions, but it's also going to involve cultivating the right kinds of virtues, right? Caring for others, being courageous, being honest, that sort of thing, right? And when we engage in virtual reality, particularly if the virtual setting is compelling and I'm engaging my actual moral character, then it might be the case that at least sometimes this is a revealing of my character. It's not that it's a downstream consequence that it's causing me to be violent, but it's revealing something about myself: that I take certain kinds of pleasures in certain kinds of content, right? And so to that extent, we might think that virtual reality settings are good places for moral evaluation, not necessarily of actions, but of persons and the kinds of things that they like to do in those settings, right? And that's, I think, a very different way of thinking about the kinds of evaluations that we're doing. But then it raises this exact question that you've brought up, which is, if I've been through thousands and thousands of VR experiences, I start to think to myself, well, I want to see what happens when I shoot the guy this way, or I want to see if I can fling him in a catapult over top of there, right? And we don't necessarily think that this is indicative of our character. It's us messing around, it's us playing.
And so the question then becomes, which of these actions that we're performing in virtual contexts are indicative of who we are as persons, and which of them are sort of us trying on new clothes, trying out temporary roles just for the fun of it? And I think, unfortunately, the way that much of the conversation has gone has been to reduce it back to the consequentialist claims again: we say, well, if it's not something I would do in real life, then it's clearly not something that's indicative of my character, right? So as long as it's just things that I do limited to virtual reality, then it's not part of my, quote unquote, true self, capital T, capital S, right? But in my mind, really, all this is saying is, so long as I wouldn't do it in real life, it's just pretend. And in that sense, it's very much making similar claims as the consequentialist approach: as long as it's not causing you to go out and do bad things, it's not indicative of your character. And I don't think that this is the best approach for thinking about what's going on in virtual contexts. And I think this because it doesn't really track how we make moral evaluations of people generally in life, right? Sometimes we take on roles in our lives, and what we do and how we view ourselves is based on viewing ourselves through this role. So for example, I am a father. I have a two-year-old son and a wife. I am also a philosophy professor. I'm also a person who plays video games and enjoys virtual reality. And I can view my own self and life understanding in terms of those various roles. And they place different demands on me, right? So what is required of me as a father is making time for my family and my children. And what is required of me as a teacher is making time for students. And sometimes those two demands conflict. And so how I view myself is this complex web of all these roles that I take on in different contexts. And sometimes these roles conflict.
And I guess the view that I've been trying to push in the last couple of years is that we should view at least some of the roles that we adopt in virtual reality as being among this complex web of roles we adopt in order to create the very understanding of self that we live with every day. So if I try for a weekend engaging in a virtual murder spree, and I find that's not for me, that's okay. But if I start to do it every single weekend, and I start to describe my virtual life to friends, you know, I say things like, I'm a shotgun main, I don't like to use, you know, the Uzi or whatever, so that it starts to become part of the narrative of my life that I tell about myself and the roles that I regularly adopt, then I think it starts to become appropriate to say that this is indicative of your character. It's how you're viewing yourself. And so the fact that it's limited only to virtual contexts, I don't think, necessarily makes it any less of an appropriate target for moral evaluations, both good and bad, right? If I find out that my friend has been locking himself in his basement playing Superhot every day, trying to figure out ways to save the red guys rather than shoot them, I would say, man, I've learned something new about my friend, right? Like, he wants to try and help people, even ones that are trying to harm him, even if I've never seen any inkling of this in the real world, right? In the same way that if I find out that my longtime Catholic friend is also a member of the LGBTQ community, I say, man, I need to reevaluate what I thought about you, because all of your Catholic behavior didn't indicate to me at all an acceptance of the LGBTQ community. And so these sorts of messy conflicts are to be expected, but they're both going to be part of us.
And so I think the view that I'm trying to push is that the fact that this occurs in virtual reality rather than in meatspace doesn't make it any less indicative of your character, or any less able to cultivate new kinds of character. That's sort of how I've been thinking about it. And then I've gotten all sorts of people emailing me to tell me why I'm wrong about this. So I'm still working through it all.

[00:22:00.062] Kent Bye: Well, I think, you know, there's certainly a lot of really big, juicy questions. For me, as I just listened to it, there doesn't seem to be any clear answers, which I think is part of what philosophy does: it asks a lot of questions and leaves you without any of those clear lines. But before we dive into some of those different nuances, I'd love for you to dig into a little bit more of the deontological approach, which my understanding is a little bit more about the rules that are set up. And, you know, we have rules that are set up in our society, sort of normative standards, and when you break a taboo, there's a sociological impact that can happen. But also within the moral context, there's rules that are set up within the IRL world, for lack of a better term, versus the virtual simulated context, where there's codes of conduct and other rules that are maybe emergent based upon the culture that is developing. But maybe you could dig into how the deontological approach fits into this larger complex of ethical approaches to XR.

[00:22:55.904] Andrew Kissel: Yeah, so if the consequentialist was saying that we want to focus on moral evaluations of actions and their outcomes, part of what they're going to say is that what makes an action right is something about the consequences of the outcomes. So it's not just, like, hey, you should do this. What makes it the case that you should do it is that it would have some good outcome, right? So when a consequentialist says, hey, don't murder people, what they're saying is it's wrong to murder because murder leads to negative outcomes, right? It causes suffering on the part of the person killed and the people they care about. It makes you more likely to do this in the future. All sorts of negative outcomes. And that's what makes the action wrong. The deontologist, on the other hand, says what makes an action right or wrong is its conformity to principle. Now, where those principles come from is going to be subject to debate among deontologists. Some are going to say they're given to us by God. Others are going to say rational reflection and the needs of humans create universal principles. Others are going to say they're sort of social constructs made up by communities of people. So they'll all come from different places. What makes actions right or wrong for these folks is whether they conform to those principles. So thou shalt not kill would be an example of a principle that you should follow, and violating it is wrong because it violates the principle, independent of the outcomes. Now, generally speaking, the consequentialists and the deontologists and the virtue ethicists are all going to agree about what you should do. They're all going to agree you shouldn't murder. The consequentialist says don't murder because it causes harm. The deontologist says don't murder because it violates principles. The virtue ethicist says don't murder because, you know, that's not compassionate; it's not in keeping with virtue.
And presumably, insofar as we can find similarities between people's individual moral attitudes, we should find similarities between what these things are telling us. But it's difficult to try and figure out how deontological principles should apply to XR and VR, because it's not clear whether virtual worlds are subject to the same kinds of principles. So when we set up a principle that says, don't murder under any circumstance, we normally don't have to dig too deep into what murder is. We think, like, it's the ending of a life, you know, by your hand. So when we get into a virtual context, we say, like, well, you can almost see, like, "virtual" in parentheses before it: (virtual) murder. And the question is whether virtual murder is a species of murder or not, right? And for a long time, the dialogue that I was having with other people was treating it like this is a species of murder that just happens to take place in a virtual context. But there's been a sort of recent pushback on that that says, this isn't murder at all. It's perhaps a representation of murder, but that doesn't make it murder itself. In the same way that when you're watching a movie, you don't say, like, oh, the actor murdered someone. You say he represented or fictionally murdered, but that doesn't make it a species of murder. And so then it becomes a little bit unclear, you know, for the deontologist: if our moral principles don't obtain in virtual worlds, then it's not clear whether there are any moral obligations at all in virtual contexts. Now, the deontologist does have some moves that they can make, right? So Immanuel Kant, famous deontologist, right, says, look, you should not murder rational creatures. That's bad. But he doesn't actually think we have any obligations to animals, right? He says, like, a dog, although it may be capable of feeling pain, is not a rational creature, and so it's not worthy of the same respect that a human is.
So in that sense, there's no in-principle wrong in harming a dog. But Kant says, insofar as you want to follow your principles, you also want to practice behaviors that will help keep you on the right track of following these principles. And so by harming a dog, you make yourself more likely to harm a human, to violate the principles that you have to hold towards humans. And so harming dogs, for Kant, is, like, derivatively wrong. It's not wrong because you violated a principle, but it is problematic because you make it more likely that you're going to violate a principle in the future. And so we might try and take these sorts of lessons into an XR context, right? We might say, yeah, if you want to engage in cowboy killing left and right in XR, nothing wrong with that in principle, but insofar as it's making it easier for you to violate these principles in the real world, we might still look at it with some moral concern. But there again, in my mind, that is them falling back to the idea that we can only make moral assessments of actions in virtual contexts by comparison to real-world consequences. So.

[00:27:29.964] Kent Bye: Yeah, it's interesting. A few things come to mind. One is I did an interview with Britton Heller, who has disclosed that she was a Jane Doe in the very first cyber harassment lawsuit, from her time at law school in the very early beginnings of online harassment and bullying. Because it was happening in an online context, in the quote-unquote cyberspace, that type of harassment was not seen as being as real as harassment happening in real life. Of course, listening to people who have suffered from harassment and trolling online, the phenomenological experiences of that can be just as real, if not even more real because of the scale at which it can happen, than if you were experiencing that type of bullying in person. But because it was in an online space, it was in this realm of cyber harassment, which was kind of a new area for law that had to start to determine where those lines were and how to handle some of those different cases. And the other thing that comes to mind is that in First Amendment law, there's the fighting words doctrine, which means that if you are face to face with somebody in real life and you speak in a way that is going to, like, incite violence or lead to someone being physically hurt, then that type of speech is not protected. However, if you use that same type of speech in a virtual environment, you still don't have that same level of physical threat, so it's, I guess, not in the same class; the fighting words doctrine has different ways of being mediated from a First Amendment perspective. And that's usually about the relationship of the government to people, rather than these companies, which have their own codes of conduct and their own ways of dealing with that type of dilemma.
But in essence, there are laws written to handle things like fighting words that have different applications when it comes to a virtual context, where all of a sudden it's okay because the same type of physical threats are not available. So I feel like there's a certain phenomenological element, where sometimes it can feel just as real. And sometimes there's an element of a physical threat, like when you murder someone in a virtual environment; if they can respawn, then what's the big deal? But I can also think of environments where, let's say, there's a virtual world that someone creates where if you die, you're dead, and you're out of this virtual world. So by killing somebody in this virtual world, you would be functionally eliminating all their relational dynamics within that virtual world, and it would start to match some of the similar experiences of murdering someone in real life. It's, you know, trying to look at the relational dynamics, from my perspective, that seems to be maybe a key to unlocking some of these different moral questions: whether it's the relationship to other people and your emotions in your experience, versus, you know, your physical health. Obviously, when you die in a virtual world, you're not actually suffering physical harm, although there could be emotional abuse and other things like that. But in another case, when you are in a virtual world and somebody somehow eliminates you, then that would start to mimic some of the similar things of being eliminated from a social context that you are no longer a part of. So I can think of ways of architecting virtual worlds that would start to mimic the experience of murder without the physical violence. But there seem to be some differentiations there, spanning everything from the phenomenological to the physical and the relational.

[00:30:47.935] Andrew Kissel: Yeah, so I think that's an excellent point. I am tending to be thinking, and this is probably because, you know, I'm a philosopher and we go into our basement with our books and play by ourselves and need to talk to people more often. But, you know, the kinds of cases that I was thinking of are single-player virtual worlds where you're interacting with NPCs and stuff like that. But I think you bring up an interesting point. You know, if I'm engaging with another person through the medium of a virtual world, at the very least, it looks like whether you're a consequentialist, deontologist, or virtue ethicist, you can begin to bring your infrastructure in place. So if I'm a consequentialist, I need to think about what are the harms I can cause to this other person, because now I'm interacting with another person. And I take your point that it's sort of limited, perhaps, in the ways that you can harm them: you can't cause the same kind of physical harm that you can in person, but you might have an increased ability to cause emotional harm, right? And if you're a deontologist, right, you know, as I was just mentioning with Kant, if I'm engaging with another person through the medium of a virtual world, I'm still engaging with a person, a rational entity with their own goals and ends and desires that I need to respect. And so we can start to sort of derive obligations from that. And, you know, virtue ethicists very similarly, right? I can only be compassionate towards other members of the moral community. And insofar as I'm engaging with another member of the moral community, I need to be engaging my compassionate virtues in these contexts. But I really like your example of a VR experience that you get booted from if you get killed in the experience, because, you know, one attempt to explain the wrongness of killing is to say that it deprives a person of all future potential happiness, right?
And so if we're wearing that consequentialist cap, we can say, here's why murder is not just wrong, but among the worst wrongs you can incur on a person: they will now not experience any further happiness or pleasure or good experiences. If what makes murder wrong is the deprivation of future happy experiences, then in your hypothetical scenario, where somebody gets kicked out of the VR experience, you're plausibly depriving them of happy experiences in that virtual world. And so while it might be a difference in sort of extent, the kind of wrong, the deprivation of happiness, is going to look awfully similar there. That person is locked out from a valuable experience. And if we've learned anything from games like Second Life, people invest quite heavily of themselves in their virtual spaces, and to be locked out of that would be depriving them of a great deal, and not just a fun weekend activity, like people outside of these conversations sometimes think, looking in from the outside.

[00:33:26.067] Kent Bye: Yeah, you know, I think that makes a lot of sense. And I think as we start to look at a lot of these existing infrastructures of the different experiences, they do have ways that you can do actions that get yourself banned. So I can imagine a situation where you frame somebody and, as a result, they end up getting banned from the platform. Or, you know, the worst case, at least the way that it's set up now, is that you need a Facebook account for the Quest and all that. They said they were going to decouple this, but until they decouple it, it's the case that if you get banned from Facebook, you can no longer even use your VR headset. You're functionally being eliminated from all of these different experiences, which in some ways is like a metaphor for being killed. You no longer have a presence there, which makes me think about different aspects of, say, truth and reconciliation. You know, there's retributive justice, which, you know, puts people in jail. But then, you know, this is essentially like a virtual death penalty, when you have your account and then you no longer have that account. And there's no recourse for you to apologize or to build right relationships with the people you may have harmed, which would be more in the restorative justice type of framework. So when I think about where we're going, we're starting to create these fully rich, robust lives in these virtual spaces. What does the future look like when there's no viable means to be able to establish the right relationships when you may have caused somebody harm? You know, what are the ways to kind of restore that balance? And right now, it's basically to eliminate them and to commit this virtual death sentence where they're eliminated, which maybe sometimes is exactly what is needed, in order to bring more of a utilitarian argument of the most benefit for the most people by having those extreme bad actors removed. But when we move more and more into this future where more and more of these virtual worlds are interfacing with our physical worlds, what happens to people that have been eliminated from the possibility of engaging in those other mediated realities, when the world around them is surrounded by people who are living in two worlds, per se? So those are some of the things where I think we're still at the very beginning of this, but it's probably worth bringing up some of these different issues to think about: what are the different justice frameworks that could be applied that maybe go above and beyond our existing methods of retributive justice?

[00:35:41.695] Andrew Kissel: Well, it's interesting that you call banning virtual murder, because as a philosopher, I have to think about Socrates. And Socrates gets in trouble with the local government because he's basically going around asking why, why, why, pissing everybody off. And so he gets convicted and found guilty of corrupting the youth, because he was basically making kids want to question their parents. The parents didn't like this. So they said, basically, you get a choice, Socrates. We'll either exile you, you know, kick you out of Athens, you're not allowed to come back ever again. Or you can kill yourself by drinking hemlock. Right. And Socrates is like, look, I've spent my entire life trying to uphold the importance of this city, of rational thought, of community with other rational actors. To exile me, to kick me out of this group, is even worse than death, right? Because it's undermining all the work that I've been trying to do, and it also essentially cuts me off from all the things that make my life worth living. So he chose to be killed rather than be exiled, because exile is even worse. So when you say banning, I'm thinking, no, they're being exiled from the community, which for Socrates is even worse than being killed, right? Because it's not just, you know, no future happy experiences, right? But it's also undermining the very thing that you took to be valuable during the life that you were living. And so, you know, in game studies, there's this sort of old idea about the magic circle, right? Where when you engage in a game, you enter into a magic circle where you say certain rules that normally govern our lives no longer apply here. It's like I step onto the football field, the American football field. Now we've all agreed that I'm allowed to tackle you and throw you to the ground. If I did this on the streets, we'd all be like, what the hell? But we've entered this magic circle, and the rules outside the magic circle don't apply.
Over the years, you know, the magic circle has long been, like, a very useful phrase for thinking about gaming, but we're starting to realize how porous it is, right? You know, in virtual settings, you step into the world of the virtual reality headset and perhaps now gravity doesn't work the same way anymore. You can float and throw Frisbees, right? And so the rules outside of the headset no longer apply here, but you're still going to be engaging in microtransactions, and your real-world money is flowing into the game, and for certain experiences, losses of virtual currency lead to real decreases in your bank account, right? And as we've been discussing just recently, experiences that you have inside the magic circle are going to carry with you outside of it: lessons that you learn, friendships that you form. And so the magic circle is seeming less and less like this sort of tough boundary that can't be permeated, but rather this porous thing that gives and takes. And so when we're talking about dealing with bad actors in virtual settings by banning them, what we're trying to do is throw them out of the magic circle. But that's treating it like it's this thing that you're either all in or all out of, rather than this thing where we're constantly crossing boundaries in various domains of our life. And so I'm of the thinking that, feelings about retributive justice aside, that sort of blanket ban is just not going to be very useful for dealing with these sorts of acts, because it's not acknowledging the way in which this goes beyond just behavior that is contained in here. It's real persons, going back to my previous views, adopting various roles in different situations that are all part of their identity and can't be pulled apart from each other. You can't just cauterize it, cut it off, and pretend like the problem's gone away.
But my hope is that some of these behaviors are because of views about what's going on in virtual settings, where this is just pretend and not subject to moral evaluation. But if we can start to think of ourselves as engaging some aspects of our character in many of those situations, then we might be more reflective about what it is we're doing when we're engaging with NPCs or other humans in virtual settings.

[00:39:29.398] Kent Bye: Yeah, this is also within the context of a situation where so much of these platforms are centralized, and these decisions are being made sometimes not even by people; they're sometimes automated by artificial intelligence. And so I know that there's the Facebook Oversight Board, now the Meta Oversight Board, I don't know if they've renamed that as well. But they had like tens of thousands of different things appealed to this board, and then they basically looked at a few dozen of them, maybe like 50 or 60 total, relative to thousands and thousands and thousands of them. And so, you know, there's the scalability of these different decisions and the justice systems that we have existing right now. This is also in the context of me serving on jury duty last week, providing my civic duty to sit and listen to a court case, and just marveling at both the intent, which is noble, but also all the different ways in which bias and stuff plays into all of this. But it has me thinking about, do we need to have a similar type of entire justice system for virtual worlds? But also thinking about those concepts of restorative justice. And, you know, certainly, like I said, there are cases where you want to be able to eliminate those bad actors, but also, you know, there may be people who just make an honest mistake when they're young. And does that mean that they're all of a sudden banned from certain sectors of the metaverse for their entire life, based upon something they did that they may regret, or where they may have found ways to repair the relationship with the people they caused harm to? So I feel like, I don't know, we're still in the very early phases of what the justice systems may look like in the future of the metaverse.
But yeah, some of these different discussions make me reflect upon, you know, not only the moral decisions, but also, at a societal level, what are the institutions and the structures that are going to be able to create a system that creates this global community? And what obligations as citizens do we have to provide some degree of emotional labor to ensure that there's some system of justice that is playing out equally for everybody, rather than having everything decided by these companies and corporations, or saying that these are private networks and it's completely within the rights of those companies to do that without having any community involvement? So I don't know, those are just some things that are coming to mind as we start to discuss through all this stuff.

[00:41:38.841] Andrew Kissel: Yeah, no, the metaverse has a long memory. And I deeply empathize with the, I made an early mistake, or something that I would like to change or reflect on, and the internet and the metaverse don't let those things drop very easily. It can be hard to sort of move on, to make the kinds of changes, the character developments that I've been advocating for here, when that is constantly sort of looming behind you. But I am also, I think with you, very skeptical of the democratizing of moral values. I think history is a good lesson that consensus and agreement are not necessarily good indicators of what is morally appropriate or good. And so when I hear you describing these AI systems making decisions about how we're going to run this thing, or even boards getting together and voting on it, I think it's less about trying to find consensus or reflect the will of the people, and more about engaging in hard conversations about what our values are in these contexts. And I think, you know, for some of these things, we might be able to import old values over pretty easily. Right. You know, prohibitions against emotional harm, I think, should probably track over fairly straightforwardly. But the question is then how to implement it directly. Right. So it's sort of like we've got this idea against harming others, and the question is, where are the sources of harm? What are the ways we can harm people in these contexts? But for other traditional values, and by traditional I mean, you know, sort of historically important, not like gather-around-the-family, two-and-a-half-children traditional, we might need to come up with new understandings of what's important about them. So, for example, something like privacy, right? I've long thought about the importance of privacy in terms of, again, the ways that I've been describing my understanding of self as adopting different roles, right?
Privacy is important because privacy allows me to manage the way in which my identity is presented to others. It allows me to pick which role I'm adopting when I'm engaging with others. And this isn't two-facedness. This is just what we do when we engage with people. Who I am in one setting is not necessarily the same as in another, and there are going to be conflicts that I've got to work through. But privacy allows me the ability to control that narrative, and in doing so, to build my own sense of self. And by losing privacy, I lose control of that narrative, and so I lose my ability to make myself who I am. Right. That's long been how I've thought about it. And so then you have difficult questions in, like, non-virtual settings about how privacy should be enacted. And I've always tried to answer those in terms of these more fundamental questions: who is controlling the narrative, to what purposes, and why? When we move to a virtual context, one of the great things about these technologies, about everybody being linked together, is that people who have historically not been able to control their own narrative now have more of a voice, now have more of a say. And that's not just them sort of getting the message out; it's literally constituting their identity online. But the very same technologies that are allowing them to engage in these spaces to build their own identities also require that companies that are not them have great access to data and personal information about them, which historically would be a violation of privacy. And so now I find myself in this sort of bind, where I think, on the one hand, given the importance of privacy as self-constituting narrative, allowing access to personal data is in support of the very goal we're trying to pursue with privacy. On the other hand, it takes the form of a thing that historically has been a very serious violation of the right to privacy, right?
And so that's where I think, well, we might need to do some more digging. Instead of working just with the concept of privacy, work with more subconcepts about what's important and why it's important, to try and get a sense of how we can work out the legal logistics of privacy in these settings, which are even more complicated than the philosophy. So I'm not even gonna try and touch those.

[00:45:43.578] Kent Bye: I'm a fan of Helen Nissenbaum's contextual integrity theory of privacy, which starts to get into that, both defining these different contextual domains and realms where there's what she calls an appropriate flow of information, given the normative standards of that context. When you go to the doctor, as an example, you would give your medical information, whereas when you go into the bank, you're not giving your medical information, but you're giving your bank information, and you may not give that same bank information to your doctor. So I feel like there's something there about the contextual domains and how we are in different relationships as to what information is appropriate. And I think that's the question of the appropriate flow: what's appropriate and what's not appropriate in these virtual worlds, and how that data are being captured and used. And often it's to violate different aspects of our mental privacy: to gather all that information, do psychographic profiling, and then model our identity, to understand principles of our essential character, and then to eventually nudge our behaviors, which could potentially undermine our agency and our right to take intentional action free from undue influence from other people who are trying to manipulate us in different ways. So that's the neuro rights, which are trying to establish these fundamental rights to mental privacy, identity, and agency. And currently, right now, we don't have a good way in which those fundamental neuro rights are embedded into a federal privacy law or even GDPR. Britton Heller has also talked about these concepts of biometric psychography. And the point that she makes is that most of the laws that are written right now are really focused on personally identifiable information.
So, being able to connect information back to you: your identity, your phone number, you know, information that is going to be more static. Biometric psychography, by contrast, is looking at what your likes and dislikes are. Because these virtual environments will understand what that contextual domain is and understand what your behaviors are, they could get lots of tightly targeted contextual information that is able to identify different aspects of yourself relative to that context, which could then be used for targeting ads, which could then be used to further nudge your behavior. Like, where do you draw the line? If we go down this path of surveillance capitalism within the context of all this data-rich and contextually rich information, at what point is it going to be unethical to start to subtly change or manipulate or influence people based upon these really robust models of our identity, our mental privacy, what we're thinking, and our physiological and biometric reactions, and then to potentially undermine our intentional actions?

[00:48:07.283] Andrew Kissel: Yeah. I find that myself, and perhaps people like you, we hear about modeling structures and the nudging, and what we hear is coercion and a threat to autonomy and things like that. Right. And I don't want to downplay that, but I've also been surprised by talking to some of my colleagues that have not had the same experiences as me in the past, where they say, you know, that thing that you describe as nudging you towards certain things that you don't want is also what put me in touch with a whole community that I didn't know existed before, and that I found because of the benefit of some of this algorithmic data mining. And so for them, there is always going to be this threat of coercion and undermining autonomy, but it's also been a great opportunity for them to exercise further autonomy. And so I've been trying to balance that perspective in my thinking recently, which is the affordances that these very technologies that I'm concerned about can also provide to people, especially people that are not like me and have not had the same life experiences as me, if that makes any sense.

[00:49:14.965] Kent Bye: Yeah, it all comes down to the context, in terms of the appropriateness of that and the normative standards of what is appropriate and what's not appropriate. Because these virtual realms are so new, there haven't really been established normative standards to be able to make some of those decisions. So that's where Nissenbaum's approach of contextual integrity is difficult, because it requires knowing how, specifically, to say: okay, we know that if we have all this biometric and physiological data, we can basically architect this pathway into dystopia. But what are the things that we cut off? How do we put a conceptual frame around what specifically within those actions should not be available, and what things should be available, to enable these amazing things like being able to connect to people, versus being able to subtly undermine different aspects of our lives? Take health as an example: health information could be revealed, but then sold to insurance companies. We have a fiduciary relationship with doctors; we trust that the information we provide to a doctor, they're not going to turn around and sell to an insurance company that's going to deny us coverage. We have this system within our society that prevents that from happening. However, there's no fiduciary relationship when it comes to the types of information that are gathered by these technologies, meaning that whatever they are gathering it for, they could use it to advance their own profit motives while causing us harm, by denying us certain access and availability to certain goods and services based upon this invisible social score that's created based upon our behaviors. And so at what point do you conceptually start to draw those lines?
I think that's why we don't have a really robust federal privacy law or a viable neuro rights framework that is able to connect the gaps from the high-level principles of, like, hey, it would be nice to preserve aspects of our mental privacy, our identity, and our agency, to then take it to the next level of, okay, how do you translate those philosophical virtues and principles down into the specifics of what the law is, and then how do you enforce that law to make sure that these companies are following it?

[00:51:18.209] Andrew Kissel: Yeah, I mean, what you're saying makes sense. And I absolutely agree with you that determining the context is going to be important. I do come back a little bit to sort of my uneasiness about the democratizing process here, where it's something like: we'll ask around, how do we all feel about our privacy relationship with our doctors? We feel good about that one. OK, what can that then tell us about these different contexts? How many translate over? How many don't? Because there are going to be people who don't have that feeling of trust with their doctors that you described, where they think, I'm going to be heard, and this information that I'm sharing with them is not going to be used in ways that are harmful to me. And if they don't already have that in the cases where we think we've worked out what privacy means, I'm not sure then what that means for these new cases, right? The thinking is, two people could come in with different life experiences, and what the context determines as best for them is going to be radically different, right? But then, if we're all going to be engaging with these systems together, how do we work through this problem, right? Yeah, so, you know, a Black person's experience with a doctor, historically speaking, is going to involve a lot less trust, because data seem to suggest that doctors underestimate pain claims by Black folks in the United States, and particularly for Black women, right? And so that contextually determined privacy relation and trust that we're describing here, I'm not sure obtains for everybody, right? And so then I'm not sure how we're supposed to use these to help us determine what's going on in other contexts. And that's not to say that the project is not going to work. It's just to lodge something that I've been thinking about recently, as I've been trying to work through these same sorts of issues.

[00:53:00.797] Kent Bye: Well, I would make a call to the larger philosophical community that these issues of neuro rights and privacy are probably some of the most pressing issues to be addressed. And I do think that in order to really live into the full potential, we do need some protections, which means that we would need some laws. But I've been at a loss to even figure out how to articulate anything above and beyond the broader human rights approaches and the neuro rights approaches, and those are all sort of high level; they're not at the level of translating those ideas. There are certainly debates within the context of the United States in terms of the federal privacy law that I've discussed in previous episodes with, like, Joe Jerome as an example, where we dig into a lot of nuances: to what extent do the federal laws preempt the state laws? Is there a private right of action? You know, some of these specific nuances specific to that law. But I'd say that XR, and the threats to privacy that XR presents, makes me feel unsettled, meaning unprotected, that we're kind of living into the same type of surveillance capitalism models that we've been using. And we've already seen, potentially with Cambridge Analytica, the degree to which that type of information could be used to undermine our democracy, by targeted advertising with a small number of people to nudge their behaviors and shift an entire election. So sociologically, that's certainly at that scale.
But I think as we look at what that means on an individual basis, you start to take that same concept with a state actor, or even just somebody who has a profit motive, not necessarily trying to undermine someone's well-being, but just to make more money, and you create an environment where we're living in a space where it feels like we constantly have to defend ourselves against people that are trying to take advantage of us, where it doesn't feel like we're safe, where we're somehow radiating information that we can't consciously control, and yet that's being measured by this technology that is then, again, creating a model of ourselves that is then used to make judgments. You can take it to the extreme of thought crimes, where someone thinks something and then they're labeled as a terrorist or a threat. And without even taking any action, all of a sudden they're being arrested because of the data that are being captured, by something that may be imperfect in the way that it's even making these judgments. So I don't know, I feel like there's a lot of opportunity for the larger philosophical community to start to step up and engage in these discussions, to help close some of these gaps and to sort things out, hopefully engaging to the point where they don't, at the end of the day, have more questions than answers, although I suspect that may be the result. But from my perspective, the hope is to at least get enough clarity to translate these human rights philosophical ideas down into some lower-level tech policy.
Thomas Metzinger has talked about how there's a technology pacing gap, in which the technology is moving so fast that the ways in which we're creating the conceptual models and the tech policy frameworks are lagging far behind, anywhere from five, ten, twenty years behind, arguably, in some cases, where the technology is far advanced beyond both the understanding of that technology and the tech policy around it. What is a way that's a little bit more agile, a way to at least provide some responsible innovation principles, but at a policy level, that's able to test some of these different ideas to see what the effects are? And to have a system where the tech isn't just able to run without any oversight and create these big mass shifts in our culture, when there could be other ways that the government needs to step in and start to figure out how to mediate this relationship, where a handful of companies are functionally more powerful than a lot of governments around the world. And as I look at this, I think privacy is at the heart of a lot of those discussions, and very philosophically rich in terms of trying to take these ideas and translate them into the tech policy realm.

[00:56:51.500] Andrew Kissel: Yeah, it's actually unfortunate. I mean, philosophers by nature tend to be a little bit slower and more deliberate in their arguments and their thinking. But then academia, as it currently works in the United States, has a sort of built-in inertia that prevents it from moving quickly by design, as a way of trying to increase the quality of the thinking. The idea is like, hey, you come up with a great idea. Well, you've got to publish it in a respectable journal, and that respectable journal is going to take six months to a year just to return comments on your draft, whether it's going to get published or not, right? And this slowness is to make sure that lots of eyes view it. The review process tends to be slow, particularly in philosophy. But that very thing that's supposed to ensure the quality of the thinking and make sure that you're not coming off half-cocked also means that we're lagging even further behind the tech. And it's not just, oh, people are being slow and not addressing the issues, but that even the people that want to address the issues find themselves in systems meant to be slow and deliberate. That's making it harder for philosophers in particular, but academics generally, to keep pace with what's going on in industry right now.

[00:58:01.232] Kent Bye: Yeah, well, maybe we should talk about the experience that you created, because we haven't really talked about that yet. But you might say, how did you pull me into a privacy conversation rather than killing people with trains? Well, we're sort of following the thread of ethics. There are so many deeply rooted topics for the ethical discussions within XR that I've been tracing for a number of years, so it's nice to just kind of talk about these at a philosophical level. But I'm very curious to hear: at what point did you decide that you wanted to try to recreate what's essentially the trolley problem within VR to be able to do this research? You started to talk a little bit about, hey, let's maybe get some grants and start to do some research. So at what point did you decide to create an actual immersive experience, to start to test whether you would be able to discern whether or not people are acting morally within these virtual environments?

[00:58:50.815] Andrew Kissel: Yeah, so it started in what must have been late 2019, I think, just before the pandemic really hit. I started having conversations with some other folks at Old Dominion about their virtual reality components. And at the time, my primary interest was: how do people react in virtual settings? My sort of intuition was, if you can create an immersive experience, you will be able to get better assessments of what people think, morally speaking, they should do. Maybe I should take a step back, though, and describe the trolley problem first. It's pretty popular now, but I never know if it's somebody's first experience with it. So the trolley problem is a philosophical thought experiment where we say: look, you're walking along the tracks of a train, and you notice a train coming down that can't stop. Looking up, you see five people on the tracks. If nothing's done, the train is going to hit those five people and they're all going to die. But you notice you're standing right next to a switch that you could pull that would divert the train, sending it onto a different track, killing only one person. So the question is, would you pull the switch, killing one person to save the five people? Most people, when they hear about this, are like, absolutely pull the switch. And they're thinking in consequentialist terms, right? Which outcome is going to minimize harm, where one person dies rather than five? But then philosophers are jerks, and they're like, well, wait a minute. What if we mixed it up a little bit and said, instead of a train, you're in a hospital, and you've got five people all dying from organ failure, each one dying from a different organ failing, but they all have the same blood type, right? A person comes into your hospital, they've got the right blood type and all healthy organs, and you go: I could kill them, harvest their organs, and distribute them to the five.
Now, if your previous thinking in the trolley problem was to do whatever minimizes suffering, it looks like you should kill the person, harvest their organs, and distribute them to the five. But most people are like, that's absolutely repugnant, we would never do this. And so philosophers get into these long disputes about what you should or should not do, and they use it to argue for their preferred flavor of ethics. I think it's interesting because it's an opportunity for people to reflect on their own commitments and perhaps say to themselves, hmm, maybe I don't just go full-bore consequentialist ethics; there are limits to what I think is acceptable in the interest of the greater good, right? And so that's how I use it in my philosophy classes, to get students to do this digging on their own beliefs: what are the limits, and what ethical framework do they want to engage in? But when I present it like this, particularly when I've got very confident 18-year-olds in my intro philosophy classes, they're all like, well, I'd throw myself on the tracks and sacrifice my own life to save all six people. I'm like, that's incredible, and if you can do that, good for you, you're a better person than me. I'm a coward and would never be able to do that, right? But then you start to wonder: I can sit here in a quiet moment of reflection and say I think the best thing to do is X, Y, Z, but it's a different question what you would do if you actually found yourself in the situation. And it's a different question to ask what you would do if you had to do sort of hot, fast thinking. And so my thinking is, if in virtual contexts we can engage people in making the kinds of moral decisions they would actually make in real life, then it would be cool to present this kind of scenario not just verbally or in written form, but through virtual reality.
And so we could learn about how people actually make moral decisions, but we could also learn about the extent to which virtual systems can engage people's actual moral decision-making processes. So we created this trolley problem, five versus one, in virtual reality. We made it for the Oculus headset and then upgraded it to the Oculus Quest 2 headset. For the past six months, we've had it up and running on SideQuest, and we've been collecting data from people going through it, giving them the opportunity to pull the switch or not, and recording their responses. We've been trying to do our due diligence to protect privacy, so we've got some very clear things in place to try and separate any identifying information from the actual decision-making process. But we have been collecting quite a bit of data: movement data, basic head movement data, and then ultimately whether the person pulls the switch or not. And we're finding it's been about 100 percent of the people who have pulled the switch, killing one person to save the five. We also gave people the opportunity to push a person off of a bridge into the way of the train to stop it, and that's been about 60-40: 60 percent of people have pushed the person off of the footbridge to stop the train. And again, if the only thinking is do whatever it takes to save the most lives, we would expect those two things to line up. But the fact that people seem more reluctant to push a person into the way of the train to save five lives seems to indicate that their actual moral decision-making processes are based as much on the thought of killing this person right in front of them as on the number of lives saved. If I'm being totally honest, none of this is groundbreaking work, in the sense that we've been studying the trolley problem in empirical settings since about 2001. But what we are finding is that what's going on virtually is generally matching up, so far, with how people have been responding on pen and paper.
So there are some really cool views about moral decision-making, the dual-process theories from Fiery Cushman and Joshua Greene, trying to explain what's going on between your ears that explains why you're morally judging these things in the way that you are. But one of the neat things about our study is that we're having people do it from their own homes. Because it's up on SideQuest, we're not bringing people into our laboratory, sitting next to them in lab coats and saying, okay, you're going to get a chance to pull the switch, what are you going to do? And that means we've gotten some more interesting responses as we collect audio recordings. We've had some people say things like, well, I would never push a person off of a footbridge to stop a train in real life, but because this is just virtual, I thought I'd do it. And then they went on to give the moral justification: I did it because I wanted to kill one person to save the five. So for some people, that virtual framework was there, and they were using it to allow for behaviors that they might not otherwise do. But for other people, it was just straightforward: this was a moral dilemma, and I wanted to save as many lives as possible, so I pulled the switch. So it's interesting to see which of these people are engaging their virtual decision-making processes and which ones are using their actual moral decision-making processes.
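To make the pull-versus-push comparison concrete, here is a minimal sketch of the kind of two-proportion test a study like this might run on its choice data. The counts below are hypothetical illustrations, not the lab's actual numbers, and the test choice is an assumption, not something described in the conversation:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: do two groups choose the 'utilitarian' option at different rates?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis that the two rates are equal
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Hypothetical counts: 98 of 100 pulled the switch,
# while only 60 of 100 pushed the person off the footbridge.
p_switch, p_push, z = two_proportion_z(98, 100, 60, 100)
print(f"switch: {p_switch:.0%}, footbridge: {p_push:.0%}, z = {z:.2f}")
```

A large z value here would indicate that the gap between the switch and footbridge conditions is unlikely to be chance, which is the statistical shape of the claim that people weigh pushing someone differently from pulling a lever.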

[01:05:23.965] Kent Bye: Yeah. I mean, if you were to translate that into real life, obviously you could be prosecuted for murder, but also there's no guarantee that pushing someone into a speeding train would stop the train. So it's sort of like the rules that are put forth: if you do this in this virtual context, you're able to have this certain result. So I guess it's about trying to isolate the contextual dimensions: whether the behavior was due to the context, whether my reaction was because of something that was outside of my control, or whether it was something more inward that I have control over. And so when I think about these different moral dilemmas within these worlds, I'm constantly going back to that: are these behaviors because someone is in this virtual world? Or is it because this is a part of their essential character, and they would have also done this in real life? And I think that's, in some ways, maybe never really fully answerable, because it's always going to be relative to the context that people have a relationship with. But I feel like, just at the very beginning, I can imagine a future where there's going to be a lot more of that negotiation of trying to decide what parts of your essential character are being revealed by your behaviors and what parts of your character are only dependent upon the context that you're in.

[01:06:32.863] Andrew Kissel: So this is interesting, though, and I'm a recent convert, but I'm becoming increasingly worried about talk of essential character. You know, I was a long-time believer in Harry Frankfurt's hierarchical model of the self, where there's your true character, which is the second-order desires that you have: the desires about the way that you want to be. Frankfurt outlines it in terms of a drug addict, right? We've got a drug addict who desires the drug, but he's got a second-order desire to not want the drug. Unfortunately, his second-order desire doesn't do anything, because he's so addicted that he just acts on the urge for the drug. Frankfurt says the true self for the drug addict is the one that doesn't want to do drugs. So what's important for him, what's really their essential character, is the second-order desires rather than the first-order desires that they act on. And I'm simplifying for brevity's sake. But he had this view that your job as a person is to try and get your first-order and second-order desires to line up, so that the things you're actually acting on and doing coincide with those higher-order desires about the kind of person you want to be, right? So the drug addict should try and remove the first-order desire for drugs and stop doing drugs, in order to become more unified with his second-order desire. But I'm increasingly thinking that this idea that we can get our desires to line up in a sort of unified essential self, and that the rest of the stuff is just pretend or in conflict with it, doesn't give full voice to the complexity and nuance of our personhood, right? I'm starting to wonder: why can't I agree that in virtual contexts I'm inclined to do one thing and in non-virtual contexts I'm inclined to do another, but that both are just as true or real or essential as the other?
And I think this becomes particularly compelling when we start to see the larger portion of our lives that we're spending in virtual contexts, right? The appeal to the idea that virtual contexts are only short weekend hobbies is looking less and less plausible for people who are spending a great deal of time in these things, creating beautiful works of art and providing compelling and challenging stories. So I'm starting to become more skeptical of the essentialism, but I've been an essentialist for a long, long time. So it's been a lot of growing pains for me in the last two years.

[01:09:01.267] Kent Bye: Yeah, it reminds me of some of the different debates within the philosophy of history, of looking at how sometimes you have to wait to see how things unfold, and then you look back to see what were the things that were really building up to that point. And sometimes, as history goes on, you get more and more context for where things have gone, and for the things that led up to it. There's this constant process. And so maybe part of our essential character we won't really know until we're on death's door, as we look back to see what parts were essential and what parts were changing, because every storytelling framework is all about character development, and changing your character, and growing and evolving. So as somebody who is maybe oriented towards more of a process-relational approach, I am also maybe skeptical of those essential characters, but I do think there's probably some level of a center of gravity of who we are as an identity that stays consistent throughout our lives, but also other aspects of our character that are continually growing and evolving and shifting. So I'd like to hold maybe a little bit of both: that there's a center of gravity, but also growth and change. Because there is, I guess, in substance metaphysics, a way of seeing things as these static objects that are consistent and kind of unchanging in a certain way. Which sort of brings me to this: I should thank you for inviting me to give a talk to some of your other philosophy friends as a part of the outreach that you have, where I got to do a deep dive on process philosophy. And I'd be curious to hear some of your thoughts on some of these different debates around substance metaphysics and process-relational metaphysics, and whether you see some interesting aspects of process philosophy as applied to experiential design or virtual reality.

[01:10:32.252] Andrew Kissel: Oh, goodness. You catch me flat-footed here. So thank you so much. Yeah, that was a very deep dive. We went through a lot of really cool stuff. And, you know, I always feel like I need the disclaimer that I am couched in an analytic tradition and a substance metaphysics, and so my experience with process philosophy is limited. I think the things that you mentioned that I track most with are the sort of Buddhist approaches to process philosophy, where everything should be understood relationally, and the idea that there's a fundamental metaphysical level of stuff doesn't make as much sense. But I do say that I'm still couched in that substance metaphysics; the Harry Frankfurt view that I just laid out is, hey, here's this standing desire, unchanging, that's an essential part of your character. And so I think one of the differences that I noted between the way you were presenting the material and how I was thinking about it is that you described the project of metaphysics as frames for meaning-making, metaphors and stories. And I came up in a tradition where metaphysics is trying to answer the question of what there is, right? Sort of a Quinean approach: take your existence predicate, put the parentheses around it, and then metaphysics is the business of figuring out what goes between those things. And so that's where I think part of the resistance to process philosophy might be a difference of opinion about what we're doing when we're engaging with metaphysics, right? Are we trying to make meaning here? Are we trying to carve at the joints of the universe, as Aristotle might say? And if you're trying to answer the what-there-is question, that's already setting itself up for substance metaphysics to have the best answer.

[01:12:16.626] Kent Bye: Right. Well, I don't know about that. Because I wouldn't say that I'm just using it as a metaphor. Whitehead was a mathematical physicist, right? So he's coming up at a time when he had tried to write the Principia Mathematica, trying to ground all of the foundations of mathematics in logic, and then that failed. And the way that Matt Segall told me is that Bertrand Russell went off to kind of start the analytic tradition, and then Whitehead was in some sense liberated, seeing that in mathematics and the foundations of mathematics you needed some aspects of intuition, and also that there were things beyond the models. But he was also coming up around the time when the whole quantum revolution was happening. So he was looking at what is the basis of reality. He disagreed with the way that Einstein was spatializing time in a way that created a kind of block-model universe, and he just felt like there was more of the universe that was still in the making, moment to moment. And if you think about the quantum substrate as this sort of higher-dimensional math structure that then reduces down into the metrical spacetime, there are these realms of potential that are down there in that quantum substrate. And there's a tendency within substance metaphysics to eliminate that potential, and to spatialize it out into Everett's many-worlds interpretation of quantum mechanics, as an example, taking that potential and actualizing it within a space-time that's orthogonal to our existence. But what Whitehead was really trying to do was to ask: what are the building blocks that create all of the nature of reality? And rather than putting the basis in physics, he's putting the basis in biology and organism, in relationship and process.
And so, if you get down to the core building blocks of all of reality, you can describe all of reality in the context of relationships and process, especially when you look at some of the different quantum ontologies, whether it's relational quantum mechanics from Rovelli, or relational realism from Epperson and Zafiris, or Ruth Kastner and the transactional interpretation of quantum mechanics. When you really get down to the there there, it gets down to patterns of energy in relationship to each other. So I do think that it is the building blocks. And if you look at something like category theory in mathematics, category theory is sort of the algebra of relationships and is a competing foundation of mathematics, in contrast to set theory, which is more substance-metaphysics oriented. The more process-relational approach would be category theory. So looking at both category theory in mathematics and the process-relational approach, specifically these quantum ontologies and the relational realism from Epperson and Zafiris, when I look at that, to say that it's just a closed case, I think, is to ignore some of these latest discussions that are happening within these quantum ontologies.

[01:14:56.075] Andrew Kissel: Yeah. So when we're talking about substance, we might go all the way back to somebody like Descartes, right? Descartes is like, I'm a dualist: there are two kinds of substances. There's physical substance and there's mental substance. Physical substance is characterized by having extension in the world; mental substance is characterized by thinking. I'm thinking of Descartes as a substance metaphysician; he's thinking about the world fundamentally in terms of types of stuff. But the types of stuff that he's interested in are not going to be limited to just physical things. So one thing that I get kind of hung up on is, if we say, look, I reject substance as a basis of metaphysics, it's all about relations between energy, forces, or fields, or things like that, I'm having trouble seeing how commitment to energy flows and things like that does not require the positing of a kind of substance, namely the energy. Because substance doesn't need to just be physical. And again, I'm happy to describe my ignorance and be educated more about process philosophy. But if you want to put metaphysics in terms of what there is, you're going to have relations between entities; those entities might be physical, but they might be non-physical, right? I take it that the interesting thing for process philosophy, at least again from the sort of Buddhist tradition, is that rejection of the subject and object entities standing in relations; it's relations all the way down. That's the cool, exciting thing that I think is neat about process philosophy, and that brings a different attitude. But then I'm also not surprised when people who are so couched in a substance tradition, where I ask what there is, right, are always going to want you to fill in the holes in the relations. And I know this is nothing new, right? And the process philosopher is like, fuck, I've got to explain this to you again, man.
But I'm going to keep trying, right?

[01:16:38.923] Kent Bye: Well, on some level, metaphysics is an assumption, and those assumptions then drive certain decisions that are made. And for me, when I look at substance metaphysics, they say consciousness is a property of the physical reality, which creates this bifurcation of the mind-body dualism. Whereas I think Whitehead himself was a pan-experientialist, as David Ray Griffin calls it, meaning that at the heart of it is emotion and experience and feeling. And from there, the panpsychist approach would be that every little bit of matter, every atom, would have a little bit of consciousness, and as you add them all together, it pulls together, which is the combination problem within panpsychism: to what degree do you collate and aggregate all those different bits of consciousness into one cohesive consciousness? Tononi has his own approach for how to do that. But I think there are issues around the hard problem of consciousness, as Chalmers has described it, which is: how do you get that phenomenal experience out of little bits of matter that are dead, that have no consciousness within them? You have to make some sort of leap, either saying that our consciousness is an illusion, sort of what the eliminative materialist would say, or saying that every part of our reality has consciousness, or, as Chalmers says in his TED talk, that consciousness is fundamental.

[01:17:52.773] Andrew Kissel: This is where I want to try and jump in real fast, though. So for me, I think there are a lot of approaches we could take towards the mind-body problem: just what is the relationship between the mind and the body, right? And again, Descartes says there are two substances, two kinds of things, material, physical, bodily stuff and thinking, mental stuff, and they interact, right? They cause stuff. And then Descartes has to answer this problem of how they could possibly interact. So then the materialists come along and they say, that's crazy hooey, suggesting the mind-body problem is solved by saying there are two things and they interact with each other. There's really only one thing, the material stuff, right? And then they say consciousness, mind, can be reduced to the physical. So they're reductive physicalists. And then you bring in some other people, and they say the reductive physicalists didn't go far enough: it's not just that we can explain and understand and provide a metaphysical reduction of consciousness to the physical, but when we think clearly about it, we realize that there's really no consciousness at all. So I do want to say that the eliminative materialist and the reductive physicalist have different positions, right? It's about the appropriate status of consciousness. The physicalist says consciousness exists but can be understood in terms of only physical stuff, whereas for the eliminative materialist, consciousness doesn't exist. Chalmers is going to come in, and he's going to try and, he claims, get the best of both worlds, on my best interpretation of it. He says we can't reduce the mental to the physical in the way that the reductive physicalist says we can. But we also can't be eliminative materialists, because that's just to deny the obvious: we all have conscious experience, or at least I have conscious experience, and I feel pretty good about my induction that all of you have it as well, right?
But I don't want to go the Descartes route, because I really don't understand how these things could possibly work together. So Chalmers says that consciousness is a fundamental aspect of reality, but not another substance, right? It's a property had by material things, in the same way that we might say, in a certain tone of voice, to use a phrase, greenness is a property had by things. And so in that sense, it can't be reduced to the physical, but also we're not going to see or understand the mental as floating independently of the physical, right? And so he's going to be the sort of non-reductive physicalist. And I guess these seem to me like importantly distinct positions to take towards the philosophy of mind. All are going to involve one or two substances, but they're also all going to involve different claims about the relationship between the thing we call mind and the thing that we call body. And I don't want to too quickly lump the reductive physicalist and the non-reductive physicalist together, because I think they're making different claims about the nature of reality, even if they say that the number of substances is the same.

[01:20:46.250] Kent Bye: Right. Yeah. And I guess some of these different debates will never really be settled, because there's no way to get a clear answer, which means they'll remain philosophical debates for a long, long time: the philosophy of mind, as well as these discussions around metaphysics. So maybe that's where you got the sense that I was using it as a story or a metaphor: because it is kind of an assumption that we have to make. And I'm somebody who's maybe oriented towards that process-relational approach, but speaking to an analytic tradition that's deeply steeped within a substance metaphysics framework, I'm not expecting people to change that fundamental assumption. I don't think that you could necessarily prove it one way or another. What I find compelling about it, though, is that it leads to so many different aspects of, say, looking at iteration and agile approaches and embodied cognition and perception as a process that's iterative. All of those things, for me, are these unfolding processes that have a certain frequency, and for someone who's oriented towards a process-relational approach, it makes sense that those would be fundamental concepts. Whereas the argument I have against the more reductive approaches is that they so often collapse the context, to the point where you're trying to make measurements and isolate things from the context, when you can never really fully isolate any measurement from the context that's surrounding it. Which I think is part of what draws me towards the process-relational approach: how fundamental that context is. And mereology is a big part of what came out of the process orientation from Whitehead, where he was trying to really define these wholes and these parts and how they're related to each other.
It's a discussion that's been happening throughout philosophy for a long, long time, but by Whitehead putting his grounding in biology rather than physics, he's looking at things in terms of these organisms, so that you can have a mereological structure from the lowest level of atoms and all these patterns of energy, to the human level, all the way up to the cosmic level. So that, in some ways, is his way of trying to resolve these different structures. And for him, having more of a relational approach and this organism approach is a way that he can address all of those different things.

[01:22:43.077] Andrew Kissel: So, I mean, there's a lot there. Maybe I should start with an uninteresting methodological point. I think, as you're describing it, I'm fully on board with pursuing as many different metaphysical frameworks as possible and seeing if they bear fruit. And that's why I'm loving hearing more about your take on process philosophy, because the more conversations we can have, the more we can learn from each other. And I guess my thinking is that one of the markers of the analytic tradition that I come from is that we try and take the scientific approach and use it for engaging with philosophical problems. There are limits to that, and I'm glad we're not the only game in town, but it is the game that I've been playing for a long time, and I'm somewhat okay at it. So for me, it's like: we've got a couple of metaphysical options on the table. You've got heavy substance metaphysics, and then we can have debates about which substances there are and how they interact and things like that. Then you've got a heavy emphasis on process philosophy. These are two pictures of reality that we're proposing, and we say, okay, how do we choose between these? I'm not necessarily going to ask which one's true or correct or the right one, but which is the one that I should believe right now. And I think you're right: if we're treating these like scientific theories, we'll ask things like, how well does it explain the evidence that we have? Can it account for the phenomena that we see in the world? Another question we can ask is, how fruitful is it? Does it explain things we didn't plan for it to explain, but look, it already can? Does it lead to further questions we can ask and then try and answer? All of these are theoretical virtues, and to the extent that one theory embraces these theoretical virtues better than the other one, we should prefer that one.
So if we can start to make the case that, man, process philosophy explains all the stuff that substance metaphysics wasn't explaining, and the stuff that substance metaphysics was explaining, process philosophy handles just as well, then that seems to me a reason to believe that process philosophy could be the game we should be playing, right? And so for me, if the question is what there is, the fact that it's that productive is some reason to believe that that's what there is. And of course, we're always going to be open to new arguments, new ideas, new evidence, and we can change our answers. But yeah, for me, I think the metaphor language is a great starting point, but at the end of the day, if the theory is productive, I should believe it just as much as I should believe any other theory that I think is justified by the evidence. But then the job is on me to go explore these other theories, right? And that's why I love conversations like this.

[01:25:14.136] Kent Bye: Yeah. First, I'd say that it's been adopted into different communities: a lot of the complexity theorists like Ilya Prigogine, Isabelle Stengers, and Bruno Latour, another process-relational thinker. And the philosophy of biology is probably another really big area where the process-relational approach has been taking hold and gaining a lot of steam, in terms of thinking about how primary those processes are when thinking about biological organisms. And then the quantum ontologies are stuff where I've been listening to a lot of the discussions. Timothy Eastman has a book called Untying the Gordian Knot, where he's been trying to synthesize all these different perspectives. He was a NASA plasma physicist interested in philosophy, engaging with the Cobb Institute over the years, talking to different process-relational thinkers. And so he's been having these monthly discussions with all these quantum ontologists and process philosophers where they've been debating all these different aspects. And the thing that comes up again and again is these concepts of potentials, and how the quantum wave function is embodying those aspects of potential, and how naturalism is saying that the only thing that's real is in the metrical space-time, but that there's a non-spatiotemporal realm, and things around anomalies when it comes to quantum entanglement suggesting that there are non-local aspects beyond just that baseline. And so the process-relational approaches, with Epperson and Zafiris looking at relational realism, as well as Ruth Kastner looking at the transactional interpretation of quantum mechanics, are giving ontological validity to those potentials, rather than having them discarded by the view that the only thing that's real is the thing that is actualized. And I think there's different ways Whitehead is trying to recover that potential that has yet to be actualized.
So those eternal objects that he has are more of these platonic forms, these mathematical structures that are representing those potentials: ideal forms as Plato called them, archetypal forms as Jung called them. So yeah, there's lots of ways in which, for me, the process-relational approach is producing really great metaphors to understand some of these underlying concepts for thinking about immersive media, because so much of it is about embedding potentials into these realities. And then not all of them are actualized, but as we move forward, we're moving away from that linear experience and more into different dimensions of those potentials and what it means for those potentials. Now, whether or not they go down to the deepest level of the nature of reality, I'll leave up to the quantum ontologists and other folks.

[01:27:38.524] Andrew Kissel: You used that dreaded word metaphor again; you're going to throw me off here. So if you might indulge me just a little bit longer, and I know we've run 20 minutes over and I'm going to have to run home to cook dinner soon, but I have two questions. Question number one is: why think that these benefits are dependent on a process metaphysics rather than a substance metaphysics? And I think some of this depends on how deeply into the philosophical tradition we want to go with substance metaphysics and where we're placing that tag. But there are certainly cases to be made, going back to the early Greeks, about potentials being located already in the objects themselves, in some sense waiting to be revealed. And I don't think of them as necessarily buying into these deep process philosophy claims. So part of me wonders how many of these benefits of process philosophy can be had by other metaphysical views, just by understanding that these substances are embedded in complex systems. So that's my first question: why think these benefits are unique to process philosophy and not available from thinking in terms of larger contexts and systems generally, regardless of your commitment to substances or not? And my second question is: where do representations fit into this account? Because oftentimes I find that when we pursue this sort of relational process dynamics approach, we start to understand everything in terms of relations between things, and talk of representations falls out, which in many cases I think is a good thing. But sometimes I think that talking in terms of the way that a mental state represents the way the world is supposed to be, or talking about an image having semantic content, talking in those static-screenshot ways, is very useful for understanding what we're doing.
And part of my reluctance to jump full on into process philosophy is the fear that we'll lose representations, either as mental contents or semantic content or something like that, which I think is crucial for how we interact with virtual and non-virtual worlds.

[01:29:36.744] Kent Bye: I have some thoughts, and I'd say, first of all, because process philosophy is relatively immature compared to the maturity of analytic philosophy, and there aren't as many process philosophers, I'm going to answer the best I can, but I don't know if there are canonical answers out there that would be better. But I'll give it my best shot. If you read through the process philosophy entry in the Stanford Encyclopedia of Philosophy, it lays out some of the different problems still to be solved. It's not a robust, fully formed system of philosophy, but still relatively nascent when compared to the development of the analytic tradition, as an example. So given that caveat, Johanna Seibt, the author of that process philosophy entry, I think is not so committed to Whitehead's metaphysical approach. In fact, she's someone who works a lot within robotics. And so a lot of her writings are actually trying to abstract out the process-relational approach independent of Whitehead's metaphysical assumptions. And Seibt actually wrote a whole article about the fallacy of misplaced concreteness. You know, there's an old saying that the map is not the territory: taking an abstract theory and trying to project it onto reality. And substance metaphysics in some ways commits the fallacy of misplaced concreteness by saying that the metaphor and the map of the substance is the actual reality of the stuff.

[01:30:50.707] Andrew Kissel: That makes perfect sense. I think robotics is a great example where sometimes dropping the representational attitude is going to be beneficial. Because when we were trying to solve problems like how to make this machine walk across this unlevel ground, and we said, okay, we're going to put a bunch of cameras up here to make this perfect map-of-the-territory representation, and then go through this very heavy cognitive load of trying to compute exactly where to place the foot, we did terribly, right? These things were falling over left and right. But when we said, let it fall forward and respond dynamically to the fall, the robotic walking process got much, much better, right? So stopping thinking in terms of representations of the world and reacting to it, and starting to think in terms of coupled dynamic systems co-evolving, I think the proof is in the pudding that that was beneficial. And so it's not surprising at all that robotics would be an area where a sort of process philosophy attitude is going to be extremely helpful.

[01:31:46.465] Kent Bye: Yeah, I'd point to Seibt and her writings on a lot of this stuff, because I think she's really into the process as an idea, but not into Whitehead's metaphysics per se. But the other thing I'd mention, just in terms of representation, is the neuroscience idea of predictive coding, also called predictive processing, which is the idea that we have our embodied experiences, and as we take those embodied experiences, we create mental representations of them, so that whenever we experience something, we're constantly comparing what we're sensing to this repository of a priori memories, experiences, and mental models. And when there's a difference between those two, that's when the predictive coding theory of neuroscience says we're actually sending dopamine into our brain to be able to update those models. So even if you get down to the lowest level of the predictive coding theory of neuroscience, you have in it this process of constantly iterating between observing the world, making models of the world, and updating those models based upon new information that you're receiving. The other thing I'd say, just in terms of representations, is that when you look at artificial intelligence, we haven't created artificial general intelligence yet, which means there's a certain level of human intelligence that we haven't been able to recreate within AI. But there are architectures like recurrent neural networks that have some degree of memory. When I was going to the International Joint Conference on Artificial Intelligence, talking to different AI researchers in 2016 and 2017, this was around the time when AlphaGo had just beaten a lot of the top Go players in the world. And AlphaGo was some sort of combination of a more top-down hierarchical ordering structure and more of a bottom-up learning.
So it's a combination of both: the many, many different iterations, but on top of that some sort of heuristic to help guide and organize it. So you can kind of think of that in some ways as a metaphor for our mental thoughts and ideas somehow playing in combination with being able to identify different things. There's a concept of zero-shot learning, as an example. The provocation is that, as a human, you could read a Wikipedia article describing what a bird looks like, then go out, see that bird, and say, that's the bird I read a description of. From an artificial intelligence perspective, that's very difficult. Usually you would have to take a million pictures of that bird and then do lots of different iterations, so the system is able to figure out all the very low-level features, millions and millions of features, that then combine up to that idea. But to take a natural language description of something and match that up with the machine learning, to make it efficient enough that you could have something like zero-shot learning, is part of it. You know, when I think about the iterative process, there's something about artificial intelligence that takes many, many iterations to tune the neural net to perform a very specific function. It's a function approximator, so it's able to approximate any function, and after enough passes through enough of the dataset, it's able to continually do that. So when I think about representation, I sort of go to the AI metaphors, and that's just some of the stuff that comes up.
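The observe-compare-update loop of predictive coding described above can be sketched in a few lines. This is a minimal illustrative sketch of the idea only: the function name, the scalar internal model, and the learning rate are hypothetical simplifications, not drawn from any neuroscience or machine learning library.

```python
# Minimal sketch of a predictive-coding-style update loop (illustrative
# names and values, not from any real library): an agent holds an
# internal model, compares each observation to its prediction, and
# updates the model in proportion to the prediction error.

def predictive_coding_step(model_estimate: float,
                           observation: float,
                           learning_rate: float = 0.1) -> float:
    """Nudge a scalar internal model toward an observation."""
    prediction_error = observation - model_estimate
    # Large errors (surprises) drive large updates; a perfect
    # prediction leaves the model unchanged.
    return model_estimate + learning_rate * prediction_error

# Example: repeated exposure to a stable observed value pulls the
# internal model toward it, step by step.
estimate = 0.0
for _ in range(100):
    estimate = predictive_coding_step(estimate, observation=5.0)
```

Run over many iterations, the estimate converges on the observed value, mirroring the idea in the conversation that repeated prediction errors gradually tune the internal model to the world.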

[01:34:45.897] Andrew Kissel: Yeah, no, I think the appeal to AlphaGo makes a lot of sense. And that's the kind of case that I had in mind here, where we say there's a sort of classical computationalist approach to these problems that is based on representations with semantic content, and interactions between states of the machine on the basis of that semantic content. And then you've got the sort of neural network approach. And initially, this is introduced as a representationalist approach to computing, right? And then you sort of start to get this move where you say, actually, when we're working with both of them together, it seems to do pretty all right for us. And so there's some debate among philosophers about whether connectionism and artificial neural networks should be understood as alternatives to classical computationalism, or if they are just different implementations, with the representations sort of hanging on top. And it's sort of a recapitulation of problems that you always have when semantics arises, whether it's the mind-body problem or computation or anything like that. But there, again, one of the themes I keep coming back to is that it seems to me that as an approach to solving scientific problems, to solving engineering problems, the sort of dynamic modeling and whatnot is very useful, but I'm not sure that's the same as metaphysical commitments. And so maybe the next conversation we should have is: what are we doing when we're engaging in metaphysical conversations? And philosophers have all sorts of differing views on that, whether it is just playing language games or getting at foundational reality. I don't have strong commitments on that yet, but maybe you'll help me work them out.

[01:36:16.574] Kent Bye: Well, just to wrap up, I always like to ask people, what do you think the ultimate potential of these virtual reality technologies might be and what they might be able to enable?

[01:36:25.673] Andrew Kissel: Oh, goodness. I'll say, as I'm currently exploring it, that one of the most exciting things for me about engaging with virtual reality is a new way and form of discovering things about myself, whether that's by creating a new role for myself in virtual settings that's going to be constitutive of my identity, or whether it's learning about things that never had the opportunity to manifest because I wasn't in the right situation. But as a person who, as a philosopher, is interested in the dive into the discovery of self, I think virtual reality provides a really unique opportunity to do that in conversation with other people. And I find that really exciting. Was that mushy-mouthed answer good enough for you?

[01:37:10.825] Kent Bye: Yeah. And is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[01:37:16.132] Andrew Kissel: Thanks so much for having me. It's been a pleasure. I've been enjoying listening to the podcasts you've been sharing with me, and I hope you keep doing cool work, because I'm looking forward to more conversations to be had.

[01:37:26.641] Kent Bye: Awesome. Yeah, thanks so much for joining me. You know, I think I identify as an aspiring philosopher because I don't have any formal training in philosophy, but I feel like I'm very philosophically driven by these deep questions. So I just really appreciate having a chance to talk to professional philosophers about VR. And I know that David Chalmers has his book that's coming out, and I'll have a chance to chat with him soon. But yeah, I just really appreciate some of these deep questions that I don't have any good answers to, and I think that's part of the problem: trying to go into these issues around morality and these questions. And yeah, just the way that you framed it here, it's putting an embodied experience to some of these more abstract thought experiments. So what does it mean to actually have embodied experiences with some of these different questions, and will that change people's thoughts on these different things after they've actually been through the experience, when up to this point it's only been a thought experiment? So I think we're just at the very beginning of what could turn out to be very interesting new methods for addressing these questions from a more experiential basis, with the caveat that you're still in a virtual environment. And to what degree can people suspend disbelief, or really be immersed within an environment, and have their behavior match what they would actually do if they were put into that situation in the physical world rather than a simulated world? So yeah, I think it's at the very beginning, and I just enjoyed being able to hear more about it, and I'm excited to see where it all goes. Hey, thanks so much. We are too. So that was Andrew Kissel. He's an assistant professor in the Department of Philosophy and Religious Studies at Old Dominion University in Norfolk, Virginia.
So I have a number of takeaways from this interview. First of all, it was a lot of fun to unpack different aspects of things like the gamer's dilemma, looking at things like the VR trolley problem through the lens of the consequentialist ethical framework, which is looking at the consequences of the actions and whether they have good or bad outcomes; virtue ethics, which is trying to make evaluations based upon the degree to which you can live into these different virtues of either caring for others or being courageous or compassionate, where there are certain dimensions that reveal aspects of your character; and then the deontological approach, which is rule-based, judging whether something is right or wrong according to its conformance with different principles. And those principles are either coming from God, from rational reflection, or from social constructs, or, as we discussed in previous interviews, human rights principles that are trying to get at these basic human rights and derive different ethical systems from those human rights-based approaches, which is, as Daniel Leufer was saying, a deontological approach. So yeah, we talk about some of these other aspects of the gamer's dilemma and look at aspects of moral character in virtual reality: is virtual reality changing the direct embodied experience, where maybe there's a level of abstraction where you're not completely embodied within these different experiences, which I think made some of these different arguments a little bit different or kind of shifted them in some ways? The thing that I found interesting is just the direct embodied experience that Kissel was describing. It was activating his own moral faculties maybe in a way that was different from what he's experienced with other more abstracted 2D versions of these virtual worlds.
Lots of really interesting things we dig into there, everything from AI moderation and the consensus around moral values to Socrates' perspectives on exile and privacy, and all the different ways that we're trying to add the different contextual dimensions of that. And then at the end, we do this wonky deep dive into my talk, which I'll add a link to in the description: "Process Philosophy & VR: The Foundations of Experiential Design." I was invited to present at this workshop on exploring the humanities through VR on December 10th, 2021, which happened to be the one-year anniversary of episode 965 with Matt Segall, where I gave a whole primer on Whitehead's process philosophy as a paradigm shift and foundation for experiential design. So there are lots of other questions that obviously we weren't able to fully come to any sort of resolution on, which sometimes happens when you debate some of these different metaphysical things. But in my next episode, I'll be doing a deep dive into some of those process philosophy topics that were brought up at the end of this conversation with Andrew Kissel. So I guess in some ways, putting these together will make for more of a full response than I had the capacity to give in the moment. I'm not a professional philosopher, more in the realm of an aspiring philosopher, trying to carry forth some of these different conversations and responding the best I can. But I guess my preferred method is to talk to other people who are the experts and let them speak from all the scholarship they've done on some of these different issues. And so I'll be digging into that a lot more in the next episode with Matt Segall, to unpack some of the different aspects of his latest book on process philosophy, Crossing the Threshold: Etheric Imagination in the Post-Kantian Process Philosophy of Schelling and Whitehead.
So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. You can become a member and donate today at patreon.com slash voicesvr. Thanks for listening.
