#1089: IEEE Global Initiative on the Ethics of Extended Reality: Trolling, Harassment, and Online Safety

The IEEE Global Initiative on the Ethics of Extended Reality has published its first eight white papers investigating different ethical aspects of virtual and augmented reality. This has been a collaboration with dozens of XR industry professionals who have been helping to better define the open ethical challenges and moral dilemmas within the XR landscape across many different contextual domains. I have been helping to lead this effort as an executive committee member since it started in July 2020, as well as helping to kick-start the white paper initiative. Now that the first batch of white papers has been published, I will be featuring interviews with the primary authors and contributors on the Voices of VR podcast in an 8-part series spanning more than 11 hours.

Here is the listing of XR ethics paper PDFs and podcast interview links for the first eight white papers; I will add in the podcast links as I continue to publish this series.

My first interview is with Michelle Cortese, Social VR Design Operations Lead at Meta, and Jessica Outlaw, a behavioral researcher who has done some pioneering social science research in social VR showing the extent of online harassment. We talk about the challenges of real-time harassment and trolling in social VR, as well as Cortese & Zeller's insight to adapt Hall's proxemics framework to help understand the technological layers and design principles for the different social distances of intimate, personal, social, and public space.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So this episode is kicking off an eight-part series on XR ethics that was done in collaboration with the IEEE Global Initiative on the Ethics of Extended Reality. So there's a number of different white papers that were produced from this effort, and I've been involved with this for a couple of years now, going back to 2020 when I came across one of the original papers from the IEEE Standards Association. It was more AI-related: it was the IEEE's Ethically Aligned Design chapter on extended reality within the context of autonomous and intelligent systems. It's a larger paper that came out in March of 2019, and then they added an additional chapter in May of 2020 that was for XR. And I came across that from Mathana, who was one of the co-authors of that and ended up being a part of this new IEEE effort. I got in touch with the IEEE Standards Association's John C. Havens. He said that they may be putting together an IEEE group to discuss creating a standard around the ethics of XR. I passed along a lot of the work that I've been doing with the XR Ethics Manifesto that I did in 2019, the many different interviews I've done on privacy, and a lot of the thinking that I'd done up to that point. We had the first meeting on July 24, 2020. From that meeting, we decided that we had enough information and enough people to actually put together an industry working group in the context of IEEE to produce a number of different white papers. It really kicked off in February of 2021. Over the course of the next year, a number of different authors were coming in across a lot of these different contextual domains. There's eight white papers, and the one that we're going to be covering on today's episode is around social and multi-user spaces within VR: trolling, harassment, and online safety. For anybody who's spent any significant time in public social VR spaces, inevitably you run into some type of harassment or racist or sexist behaviors. And so what are the different systems that need to be in place to help cultivate safe online spaces for people, and mitigate the amount of trolling and harassment that are happening in these real-time environments? So that's what we're covering on this episode, but this is also kicking off a larger series that I'm going to be doing, talking about XR ethics in medicine and education, around who owns our second lives, around virtual clones and the right to our identity, and the business, finance, and economic aspects of XR. You have XR ethics in diversity, inclusion, and accessibility, as well as XR privacy, which is a huge, huge topic. And then the last paper is going to be on the metaverse and governance, diving into some of the different XR ethical implications there. Lots of white papers that are available for you to dive in and read. For each of those papers, I'll be doing anywhere from an hour to two and a half hours, in some cases, of discussions. It's going to be totaling more than 10 hours of coverage of these different white papers. This is such a huge topic and it is pretty dense, but I think through the process of having these conversations, it'll at least provide an entry point to be able to dig in and get more information. Or if you just want to listen to get an overview of the larger domain of ethics within XR, then this will be a good series to listen to.
So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Jessica and Michelle about social and multi-user spaces in VR, trolling, harassment, and online safety happened on Thursday, May 12th, 2022. So with that, let's go ahead and dive right in.

[00:03:32.686] Jessica Outlaw: My name is Jessica Outlaw, and I'm a behavioral researcher. I've been working in the field of VR for a couple years now. One thing that I've always been interested in from the very beginning is, how are people interacting in social VR? What shapes their behavior? What are the things in their environment that they may not necessarily be conscious of that are shaping their behavior, as well as questions like social contagion? I've previously published work around social virtual reality and experiences of harassment. And that is what motivated me to get involved in this IEEE initiative and the standards that were being created around it.

[00:04:10.431] Michelle Cortese: Awesome. That was very thorough. I'm going to be less thorough. I'm Michelle Cortese. I am a designer and sometimes artist, largely in the space of VR and XR. I've been working at Meta for the last five years, largely in the social VR space. I kind of live in the space where social VR meets design systems and the large-scale frameworks that we can use to build the virtual spaces, as well as the interface systems that we use to interact with them. My favorite sweet spot is finding the ways that we can sort of expand and scale concepts into the products and push that out into larger spaces. I have a small practice of more artistic and research-based work that I do on the side, a little bit of publishing and a small series of experiments. I can link you to some performance art pieces that I've done in spectatorial VR, but it's a little bit all over the place. Largely the thing that ties it all together is just an interest in what happens next as we grow media forward. I know you wanted to talk a little bit about how we got here, and I think this will bridge it together, but I've always been fascinated by ensuring that the people who design the technologies that we use, the technologies that push us forward into deeper versions of technology, that we're never losing something along the way, that we're always keeping the best possible experience as we go along. I'm going to cut myself off there and let you ask the next question.

[00:05:36.435] Kent Bye: Yeah. So we're here in the larger context of this IEEE Global Initiative on the Ethics of Extended Reality. Both of you, Jessica and Michelle, were the lead authors, and there were some other contributors to this paper called Social and Multi-User Spaces in VR: Trolling, Harassment, and Online Safety. So maybe before we start to dive into this specific paper that you helped put together for this IEEE initiative on XR ethics, maybe give a bit more context on each of your backgrounds and your journeys into this space of XR.

[00:06:06.043] Jessica Outlaw: Yeah, I got my very first demo of social VR in 2016. And 2016, as I'm sure people will recall, the Me Too movement was in full swing in 2016 and 2017. And there were reports of harassment. I was influenced by the Jordan Belamire story, talking about being harassed inside of QuiVr. And there were ways in which people used to say things about how safe the real world was for people that they don't say anymore. I think there's ways in which people talk about harassment and how it occurs and the safety of both our physical and virtual spaces, and this conversation has been interesting to watch. That's one way that I've been involved in watching how the space has evolved. And one of the things that originally drew me to study social VR, you know, because there's many different facets of VR, is that my bias as a social scientist is that I don't necessarily see large distinctions between how people's behavior is influenced by signals in the virtual world and in the physical world.

[00:07:17.464] Michelle Cortese: This is fun because my journey is pretty different. So I've had this obsession, and I started touching on this ineloquently earlier, but I've had this obsession with ensuring the ethical efficacy of new digital media. Basically every time we get a new way to express ourselves, how do we continue to make that as humane as possible? And I've basically been obsessed with this since about the mid-2000s, when I discovered that we nearly lost closed captioning for a period of time in the process of updating TV to a digital signal. And I was so moved by that at the time that I was like, this is something that I want to put myself at the center of. Every time a new technology starts, I want to try to help make it more humane and allow for things like that to not happen. And then I worked in advertising for a bunch of years, and that was soul-crushing, and I ultimately tried to pivot my career by going to grad school and studying at a weird creative science program. In that process I learned Unity just by accident, and I returned to the advertising world and experiential advertising at a period in time when people were starting to ask about this VR thing. They were getting VR-curious. They wanted to do that. And I was like, you know what? I work in this world. I get this. I have all these skills already. I've already done all this stuff. So what I started doing there was working on that stuff, inheriting technology from companies like Facebook and Google and just sort of, you know, trying to do what I could. I basically got tired of that; I felt like I was inheriting technology that was inherently not good. I wanted to go to the source and I wanted to try to make things better. That is about the period of time when I began to become obsessed with social VR and VR harassment specifically. I go to, you know, now Meta, then Facebook. I go there to try to start making systemic-level design decisions that help make things better for another generation of tech. I ended up staying in the social VR department for several years. I work on Horizon. I work on large-scale design systems there. And in that time period, basically this one moment happens for me, and I realize that my own PTSD from a history of physical assault could be triggered by VR. I become incredibly fascinated by this incredibly personal experience, and that affects all of my work for the next four years. Everything switches gears. And suddenly I'm obsessed with using the designer's skill set to try to systematically prevent things like that from happening.

[00:09:35.132] Kent Bye: Nice. Just as you were saying that, Michelle, I was reminded of your Medium article that's referenced in this piece from 2019, Virtual Healing. And I don't know if you wanted to say anything more about that piece and your journey from it, how it maybe led to other articles that you wrote, leading up to the context of writing this piece for the IEEE exploring social VR ethics.

[00:09:59.427] Michelle Cortese: Yeah. I mean, all of those are super connected. And the Virtual Healing one is really just the story of everything, like how that all comes together. What I submitted in the section of the IEEE document that I was responsible for, which is largely the part about design principles, is really the quickest possible summation of years of building frameworks for this sort of work. But the Virtual Healing article kind of starts at the beginning and talks about the period of time of the first initial discovery of, oh God, you can trigger very real PTSD reactions through VR. What does that mean? What research exists to back it up? And so I found Jessica's research, and that's kind of how I started to see what was out there. I saw a lot of Mel Slater's research and came to understand that the experience that I was having was not just me. That's really, I think, the turning point there. And a more salient story for more people is that the experience of being in VR can lead to these experiences, sort of like the illusory body. I know that when something is happening to me in VR, it's not happening in real life. But the reaction that I was having had scientific explanations, and I wanted to probe deeper into it. And I had some very interesting experiences with my coworkers at the time. My first way of trying to figure out this problem was: okay, when I see a design, a prototype, an experience in a critique that seems like it could be triggering to someone who has problems like mine, I will just explain from the get-go. I'll explain my own personal experience. I'll basically, exhaustingly, put myself on the line every single time. And I was like, that's hell. I'm going to build a framework for this. And that really leads to a chunk of work that I did with a coworker named Andrea Zeller. We put a couple of publications out, but the centerpiece of it is a book chapter that we wrote for a book called Ethics and Design, where we use proxemics to try to define the digital world in physical, real-world terms. I'm not going to go too deep into that because I don't want to totally veer off the map here. But the beginning of the journey is really that discovery: seeing how, in the process of communicating to designers in the very real context of designing at Meta, my experience of just explaining my experience was not scalable. So hence the proxemics comes in.

[00:12:21.274] Jessica Outlaw: Yeah, and there's one thing that I would also add to this, because where Michelle has taken us is describing a lot of her interdisciplinary influences. And I want to give a shout-out to Andreea Ion Cojocaru, who is trained as an architect in the physical world and now does a lot of work in VR. And there's things around the disciplinary nature of creating these social VR spaces: which types of designers are included in this, and what type of skill sets do they bring? One of Andreea's critiques around social VR is, who's creating these public squares? Who's really thinking about it from this really holistic perspective, and who's leveraging the understanding of European piazzas, and how people interact when they're in a piazza compared to when they're in these other types of physical structures where we have those cues? I think this is one of the things that makes VR so challenging: nobody would try to build a new town without hiring an architect, but there's a way that you can kind of start from wherever you're at with content creation. And just to link it back to the chapter that we wrote around the creation of social VR spaces, one of my goals was to really give people some very specific design principles and an understanding that the choices that they make about the design, about the rituals, about the symbols that they choose are going to influence people in ways that they may not expect.

[00:13:51.365] Kent Bye: Yeah, and so maybe before we dive into the IEEE paper, one other bit of context that Michelle alluded to was the research that you've done, Jessica, in terms of establishing a baseline of statistics for how many women had already experienced some degree of sexual harassment within XR. Maybe set the context for how that study came about, because that was really early. And I remember we did an interview about it at the time, but looking back, it really started to at least establish a baseline of what some of the common experiences for women were through some of the surveys that you were doing.

[00:14:22.263] Jessica Outlaw: Yeah, I'll talk about the 2018 survey that I did. This was sponsored by Pluto VR, and I recruited 600 social VR users. And I asked them, first of all, what did they like to do in social VR? And then also, had any of the people who took the survey experienced adverse things happening to them? 49% of women reported having at least one instance of sexual harassment. And then there were actually a lot of males who also reported being harassed as well: 30% of males reported that they were targeted with racist or homophobic comments, while 20% of males were on the receiving end of violent comments or threats. Another thing that came out of this is just the extent to which people did not necessarily know what to do when they were witnessing harassment. So maybe they weren't the one being targeted, but they would see it happening to other people. So there was this experience of being a bystander when harassment was occurring. And what I did after that study is I created a bystander training for people who were witnessing harassment inside of social VR and tried to go through the bullet points of what to do, how do you support the victim, and what could be done next. So I think this is just another example of how the knowledge that we have on how to create good social spaces in the physical world does map onto the virtual world.

[00:15:48.933] Kent Bye: Yeah. And as I looked through the paper, the first and second chapters are setting some broader context for social VR and talking about, you know, some of the presence research on social presence: indicators of social presence from Oh et al. and Bailenson, who had a number of different factors of social presence. But maybe you could give a broader context for what we're talking about: these real-time immersive virtual reality environments that are different than, say, asynchronous social media that's mediated through a 2D frame. You're having these different illusions of presence, whether it's the place illusion or plausibility illusion from Slater, where you feel like you're immersed within the context where you have these social interactions that feel like there are actual people on the other side, and you're embodied, and you feel emotionally engaged, and you're able to take actions. And so it feels like you're in this space. And for me personally, I want to just comment before you start to dive into that context setting, which is that Slater and his fellow presence researchers really focused on the sensorimotor contingencies of what is unique to VR and a sense of the illusory aspect of it. And I think the illusory aspect is really tricky because it has a tendency of setting up a dichotomy where anything that happens in the virtual realm is not real. And you have this sort of false dichotomy of the virtual and the real, where I prefer to think of it as the virtual and the physical: there's the physical reality and the virtual reality, but yet there's still aspects of that virtual reality that feel just as real as any other reality, whether it's the emotional or social or certain aspects of the embodied presence that get triggered through levels of proxemics. And so, yeah, maybe that's a good place to start in terms of what you were trying to do in these first two chapters, at least setting a baseline for everybody as they're having this discussion.

[00:17:27.523] Jessica Outlaw: First of all, I want to invite everybody to read this chapter. We do focus a lot on online safety in this. And I also do want to give a shout-out to our collaborators on this project as well, Tommy Erickson and Sarah Carboneau. They were also contributors to this chapter. So to answer your question about the framework that we were trying to set: just to say how we're defining social VR, we talk about some of the benefits, like how we can potentially reduce carbon emissions if more people are going to virtual conferences. I think there's ways in which the pandemic has shown that people are very willing to substitute digital connection for physical connection if there's a reason to do that. The next thing we do is give an overview of the types of social VR spaces that are available today. And then we go into the definitions of harassment and trolling and how those are different. And then we just set the stage for, you know, this is happening right now. These are going to be the pitfalls. And what I really wanted to do with the entire chapter was empower the people who want to build their own communities inside of social VR. My sense is that the very best experiences around social VR are probably happening in small groups of people. And for the people that are curating those experiences for their friends, what would I want them to be aware of? What are the different tools they could use inside of social VR to really steer that experience? Because I think there's probably a lot more people out there that are experimenting and doing new things, touring, and creating really exciting art. So I think that's one population I was trying to really serve. Another group are the people from the technology companies who are really trying to think about what the design tools around the environments would be, but then also the technological solutions around muting and blocking and bubbles and those types of features that will be part of the social VR apps.

[00:19:31.170] Michelle Cortese: Awesome. I will say that I felt like your framing at the outset was very David Chalmers-esque. It reminded me a lot of that recent book, Reality+, that makes a very large-scale proposal for the idea that virtual worlds are completely viable, completely real worlds. They just happen to not be physical. So there's interesting stuff there for someone who wants to read on the philosophical aspects of that world. I feel like if there's anyone who hasn't used social VR before and what they're looking for is the visceral description of what it feels like to be there, I'll just try to throw that in here. So in the beginning of the chapter, we do have a list of various social VR experiences that exist, kind of all the major players: VRChat, Altspace, Bigscreen, Rec Room, Horizon, Half + Half, et cetera. The way I often explain those experiences, if you've never been in them and you're just trying to get a sense of what it feels like personally to be in social VR: for most of them, like walking into VRChat, it feels a lot like walking into an AOL chat room 25 years ago, except you are physically walking into that space. You can meet anyone; people are screaming random stuff at any point in time. It's strange. The content and the subject matter of discussion are all over the place. It feels strangely fascinating, yet also somewhat disturbing at times. I feel like that's kind of it at its best. It reminds me a lot of that exploration that I was doing in that early internet time. And then at its worst, it sort of feels like if a YouTube comment could chase you. That to me is the visceral experience of social VR harassment: a YouTube comment came alive and it chased you around your living room.

[00:21:18.009] Kent Bye: Yeah. And Michelle, I think you're correct in identifying the influence of Chalmers' Reality+ on the ways that I've been trying to frame the discussion around presence in VR. And I wanted to call that out because I feel like there's a tendency to say, because it's happening in a virtually mediated space, it's not real and it didn't really happen. But I feel like the experience that people have is that these experiences of trolling and harassment can feel just as real, or even catalyze memories of existing trauma that happened in physical reality, or create new levels of trauma that didn't exist before. So I wanted to point that out because I think that's the difference that I see. A big difference, at least, is that a lot of the existing 2D platforms have been more asynchronous, mediated through media artifacts, say a photo, video, or text. But here you're in what appears to be the illusion of these non-mediated spaces where you actually feel like you're with these people. And so it becomes an issue or challenge of dealing with these aspects of trolling or griefing in real time, whether through personal space bubbles, muting, or blocking. There's a different urgency, I guess, in terms of teaching people the tools they need so that if they're in these situations, they know how to defend themselves if something does happen. So I'd love to hear any reflections on that.

[00:22:30.550] Michelle Cortese: I had one thing to add to that. And I love that framing, and thank you for that. I think that really understanding, on an emotional level, what that very reactionary experience feels like is really helpful. Usually when I'm doing talks on this stuff, I do a moment where I'll just talk about the rubber hand illusion. I think it's just a really helpful moment to get people in the zone of understanding what it feels like to react to something that you intellectually know isn't there, but you're just having these knee-jerk reactions, and everything is real time. So it's important. I can go into an explanation of the rubber hand illusion if you feel like that's useful for this audience, but otherwise we can just keep going.

[00:23:14.004] Jessica Outlaw: I think it's an important thing because it talks about how our visual system overrides almost all other sensory information we have, which seems very relevant.

[00:23:23.668] Michelle Cortese: Yeah, absolutely. I'm going to try to explain this in this environment. I usually use my hands so much that it's going to be interesting to try to do this just sonically. In the rubber hand illusion, we're talking about the physical one, not the virtual one, though there have been virtual versions of this done. A person is set up at a table. One of their arms is placed underneath the table, and a rubber arm is put on top of the table in their clear view. So they're looking at both a rubber arm and their actual other arm. Underneath the table, their real arm is being stimulated with either a feather, a finger, whatever; it's being touched in some way. On top of the table, the fake arm, the rubber arm, is being stimulated in what appears to be the same way. So the person sits there and they see this happening to this rubber arm progressively, and they begin to sort of absorb that rubber arm into their body in a sort of protective way. At some point in the experience, someone comes out with some sort of threatening object, maybe a knife, and gestures to basically stab the rubber arm. Invariably, everyone jumps back at that moment, because you've given your brain an opportunity to basically absorb that arm into your sense of being. For the moment, that is part of you. And the same thing happens in VR. That's the perfect example of how your brain adopts an avatar as its body.

[00:24:48.132] Kent Bye: And I think it was Slater or others who've extended the rubber hand illusion into the virtual body ownership illusion, meaning that you feel like you really, actually are owning the body that you're in. When I did an interview with Slater, he said that 99% of the time, when you look down at your body throughout the course of your life, it's your body. So when you're in a virtually mediated space and you look down at your body, you just, again, assume that whatever body is there is your body. And so I think as we start to have these new aspects of embodied and environmental presence and social presence, these issues of harassment and trolling have maybe four different layers of ways of addressing the solution. Here I'm using Lessig's model. At the outset are the cultural aspects: the code of conduct and different things that are coming from the culture, and I think that's the last section that you dive into. Then there are the economic and market dynamics. Maybe this is a little bit less relevant, but I mean, we are talking about social platforms that do have different business models. And so there are constraints in terms of how much money you can spend on moderation or AI moderation tools to be able to address this. So there are economic aspects in terms of really having safe online spaces: does it need to be in an economic context so that you have resources and people are being paid to help create and cultivate those safe online spaces? And then there's the laws that are set forth in terms of people being banned from a space. If you are violating some of these different agreements, then you no longer have access to these sites, and so you're able to potentially get rid of bad actors through that process of banning, either at the platform level or, in some cases, losing access to your Facebook or Meta account and not even having access to your hardware. And then the last point is the technological architecture and the code: all the ways that you're designing the technology, whether it's through personal space bubbles, muting, blocking, or moderation tools to be able to address this. So when I look at this issue, that's the high level. I don't know if you have any expansions in terms of each of these different vectors to start to look at this as an issue.

[00:26:42.603] Jessica Outlaw: I mean, I can speak to the law perspective, because of that recent case that showed the Americans with Disabilities Act does apply to virtual reality. So that's an area I am going to be looking at: are there going to be more accessibility features added in the future? I don't have any specific examples about the ADA and safety in social VR right now, but I think that's an emergent area to watch. And then when it comes to the market fundamentals and the economics, this is just something that we can watch as the space develops, and we can see which social VR platforms are successful and end up generating more and more money. Are they going to be the ones that have more of these safety features, where they maybe have live moderation for everything, or maybe a real emphasis on helping community managers manage their small groups? So I think it's, again, an emergent area where I'm going to be watching over time to see where people decide to spend their time when they're going to these social VR spaces. Are they only going to art exhibits? Are they only going to music? Are they using it to hang out with their friends who live across the world? I think there's multiple contexts to start to understand that question.

[00:27:56.008] Michelle Cortese: I have to be careful in my responses here because of my affiliation with Meta, but I am happy to offer a little bit of thinking. I'm trying to capture the essence of the question a bit, but something that did resonate with me, that I was thinking about from my experience of working on Horizon Worlds specifically, was when you were talking about collective reporting and banning: what happens, and how universal is what goes on when someone reports harassment? I want to touch on that because both Andrea and I, and the Horizon team, have spoken about this publicly, so it's open knowledge. Basically, one of the things that we really wanted to do when we were working on reporting in Horizon Worlds was ensure that if you reported something in that context, it reported at the Oculus system level. Maybe not everyone who's listening to this actually knows that, but if you are a user of Horizon Worlds and you go to report a bad actor, that report gets registered at that user's Oculus account level. We wanted to do more of that. It's obviously not possible on third-party applications on the platform, but our hope is the more of that we can do, the more we can try to prevent bad actors from just finding a new route in, making a new username, and circumventing that. There is a lot of sticky subject matter around what identity is being used in Oculus; I'm just going to pivot off that now and say I'm done on that subject. But one of the reasons that Horizon does use that is for this protection reason.
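To make the design decision Cortese describes a bit more concrete, here is a minimal sketch of platform-level reporting: abuse reports keyed to a durable platform account ID rather than a per-app username, so creating a new in-app identity doesn't reset a bad actor's history. All of these names (`AbuseReport`, `PlatformModerationLedger`, and so on) are hypothetical illustrations, not Meta's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: reports are indexed by platform account, not per-app
# username, so a fresh username in one app does not erase a user's history.

@dataclass
class AbuseReport:
    reporter_account_id: str   # platform-level ID of the person reporting
    reported_account_id: str   # platform-level ID of the alleged bad actor
    app_id: str                # which app the incident happened in
    app_username: str          # disposable per-app identity, informational only
    description: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class PlatformModerationLedger:
    """Accumulates reports per platform account across first-party apps."""

    def __init__(self) -> None:
        self._reports: dict[str, list[AbuseReport]] = {}

    def file_report(self, report: AbuseReport) -> None:
        # The history follows the person (the account), not the username
        # they happened to be wearing at the time of the incident.
        self._reports.setdefault(report.reported_account_id, []).append(report)

    def report_count(self, account_id: str) -> int:
        return len(self._reports.get(account_id, []))
```

The design choice being illustrated is simply which key the ledger uses: index by a disposable identity and a ban is trivially circumvented; index by the durable account and the record follows the person across apps.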

[00:29:45.100] Kent Bye: Well, you talk about different, I don't know if it's modal or modeless, feedback in terms of having those. If we talk a little bit about the fourth section, you start to talk about social XR harassment and ethical interaction design: the actual interaction design of, if there is a moment of harassment or trolling that's happening, then having either some window pop up, or in the absence of a window, something like an emergency button. Maybe you can expand on the baseline aspects of personal safety bubbles, blocking, and muting. If we talk about the interaction level, there's certain ways that you can have different interactions on each of these different social VR platforms, and they all do it in slightly different ways. But what are some of those foundational human interaction aspects of those designs that make them easy to understand and intuitive, so that people in these situations are able to protect themselves?

[00:30:40.490] Michelle Cortese: Yeah, I'm happy to jump in here on this, because I think every app does this differently, give or take, but fundamentally, they're all sort of chasing the same tenets and the same options that they want to give people. Referencing back to the book chapter that I mentioned, co-authored with Andrea Zeller, which really impacted at least my approach to some areas of this chapter: the way that we got to the exact things we wanted to offer was by starting with that proxemic way of dividing space, which connects back to things that Jessica was saying earlier about bringing fundamentals from the real world into these virtual spaces. So we're like, we need to slice virtual space up into these four regions of intimate space, personal space, social space, and public space. And for each of those categories, we want to be able to think about the types of features that we want to have. So the tenets we sort of made for those were: for intimate space, granular controls that are set up before you get into any space, which is super critical. For personal space, we want to ensure that we have simple, fast gestures that someone does not have to think to employ to get them out of bad situations, basically like eject buttons that work really well and seamlessly and don't compromise the experience they're having. For social interactions, we need to establish local behavior rules. And for public interactions, for everything that is on a large scale, we need consistent laws or rules. Those are, to me, the four categories that you really need to build up that framework of safe experiences. I think they can manifest as features in a million different ways, but it's really: you need to give some of those granular controls so people can express what kind of experience they want to have before they get in. They need simple actions to get them out of tough situations. There need to be local behavior codes. And you need consistent laws that lead up to hardcore bans of people who are just consistently breaking the system. Yeah.

[00:32:33.668] Kent Bye: Yeah. That references back to Edward T. Hall's zones of interpersonal space: the intimate space, the personal space, social, and public space. So the boundary between intimate and personal is 0.45 meters, the boundary between personal and social is 1.2 meters, and then the boundary between social and public is 3.5 meters. And so you can kind of think of these different proxemic zones where, if people get too much into your intimate space or your personal space, there's a bit of a taxonomy for what those distances are and what the meanings of those are, given the context of whatever situation you're in, as well as the norms and everything else, the cultural guidelines for that. But when you're in these virtual spaces, it sounds like there's different technological approaches, even as I talk about all these different aspects of public banning versus blocking and everything else, that kind of nicely fit into these different zones of interpersonal space. That's what I kind of hear you saying.
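Since Kent cites Hall's boundary distances explicitly here, they lend themselves to a simple lookup. Below is an illustrative sketch (not from the paper itself) that classifies an avatar-to-avatar distance into the four proxemic zones and pairs each zone with the safety-feature emphasis Cortese described above; the boundary values come straight from this passage.

```python
# Hall's proxemic zones as cited above, paired with the feature categories
# from Cortese's framework. The zone-to-feature pairing follows this
# conversation's framing; it is a thinking aid, not a spec.

PROXEMIC_ZONES = [
    # (upper boundary in meters, zone name, safety-feature emphasis)
    (0.45,         "intimate", "granular controls set up before entering"),
    (1.2,          "personal", "simple, fast eject gestures"),
    (3.5,          "social",   "local behavior rules"),
    (float("inf"), "public",   "consistent platform-wide laws or rules"),
]

def classify_distance(meters: float) -> tuple[str, str]:
    """Return (zone, feature emphasis) for an avatar-to-avatar distance."""
    for boundary, zone, feature in PROXEMIC_ZONES:
        if meters <= boundary:
            return zone, feature
    raise AssertionError("unreachable: final boundary is infinite")

print(classify_distance(0.3))  # ('intimate', 'granular controls set up before entering')
print(classify_distance(2.0))  # ('social', 'local behavior rules')
```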

[00:33:27.657] Michelle Cortese: Yeah. Yeah. Slotting them into those zones is really just a way to think about them and sort of organize the thought. Those are not requirements. But they all fit nicely in there, and it makes for a neat checklist when you're thinking about what sorts of features you want to put into an application that is going to have social implications and is virtual. Yeah.

[00:33:48.919] Jessica Outlaw: Yeah. And I mean, what I would bring up is just the importance of defaults, because there's ways in which I think there are certain best practices around defaults in social VR. I think people who are creating these social VR apps don't want to overload people with a ton of choices to make at the beginning. And so when I think about access to these safety tools, I always think about Altspace, because with Altspace, the menu is persistent. It's around your knees, so it's always in your peripheral vision. And I like that the menu is always accessible and always in my peripheral vision, because it does take away one more decision of, okay, do I know how to access these controls? So I think that's one example of one company that's doing something in a certain way. And I think where the industry seems to be evolving is better defining, all right, how are we going to give people safety tools? How are we going to make it easier? Here I would also point out that Brittan Heller just published an op-ed in The Information with the title We Need a 911 for the Metaverse. And it's about how you make things consistent across all of social VR, so that if somebody really needs help, they know the one way to get help rather than having to be trained over and over. So I want to highlight Brittan's work, because she's building towards having this vision for safety that would make it easier for all companies to adopt one standard.
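Jessica's point about defaults can be read as a rule: a user who touches nothing should land in the most protective configuration, and relaxing anything should be an explicit, contextual choice. Here's a minimal sketch under that assumption; the field names are hypothetical and don't come from any real platform's settings:

```python
from dataclasses import dataclass

# Hypothetical safety-first defaults: the zero-action configuration is the
# most protective one, and loosening it is an explicit per-context opt-out.

@dataclass
class SafetySettings:
    personal_space_bubble: bool = True   # bubble on unless the user opts out
    bubble_radius_m: float = 1.2         # personal/social boundary, per Hall
    voice_from_strangers: bool = False   # strangers muted until allowed
    persistent_safety_menu: bool = True  # always-visible controls, a la Altspace

    def relax_bubble(self, trusted_context: bool) -> None:
        # Protection only comes off through a deliberate, contextual choice,
        # e.g. a private world of friends; it is never disabled silently.
        if trusted_context:
            self.personal_space_bubble = False

settings = SafetySettings()                  # a new user who does nothing is safe
settings.relax_bubble(trusted_context=True)  # an explicit opt-out, per context
```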

[00:35:18.546] Kent Bye: And as you start to look at these different design principles in this section 4.2, I'd love to maybe go through some of these different principles to kind of extrapolate from what we have with social VR to what some larger design principles are, as we start to try to cultivate different mechanisms of safety within these spaces.

[00:35:38.059] Michelle Cortese: Sure, I can jump in on those. I'm glad we went through some of the proxemics and that framework first, because this is really just built on that. It's a way of unpacking that without going through the entirety of it. These are a set of recommendations. It begins with: treat virtual embodiment with the weight of physical presence. That's something we've talked about here, and it's probably a really smushy, principle-y way of saying, as Jessica was saying earlier, you can't build the piazza without the architect. You can't just make these physically experienced spaces without considering them as though they were real. They have just as much social weight. The next principle is to use proxemics specifically to do that. So we've already talked about that. The remaining four principles are shortened versions of what we just talked about. There's always communicate consent, which is very close to the granular controls one from before, but it's more about ensuring that every fork in the road is opted into. I'm going to roll back to that using some real-world examples in a second, but I'm just going to forge through. Providing quick-action remediation tools for tough situations: we talked about that at length. Allowing users to define their preferences before social interactions begin: that is very much synonymous with the granular controls discussion from before. And that is not to say that every person should be able to define everything that can happen. As Jessica said, you don't want to inundate people with menus or interactions before they get started, but you do want to be able to rule out some of the most precarious or problematic experiences before someone comes in. And then finally, establishing local behavior expectations. That one really connects back to proxemics, because it's about acknowledging that there are different types of space and giving people the opportunity to opt into those. And I'm going to use that as a route to get back to the always communicate consent one. When talking about these design solutions, I often get backlash from people to the effect of: stop trying to sanitize our spaces. Stop trying to make everything in VR safe. Safe is a subjective word. There's a really interesting use case that came up once where a guy was like, hey, what if I want to go to a world in VR where everybody greets each other by punching each other in the face? And, you know, technically, your rules would get rid of that. And I think our answer to that is something to the effect of: no, they would not. Your world can absolutely exist. There is definitely space for it. But these rules actually just ask that you explicitly say that that's what your world is about up front. Tell people, so that no one accidentally walks in there and gets punched in the face. Because nobody wants to just unexpectedly get punched in the face. If you're into that, more power to you. Opt in, show your consent, and have your fun. No one wants to take anything away from you. It's just more about advertising things as they are. So that's definitely an essence at the bottom of all of this, and it's something that I do find very important to say in these conversations.
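Michelle's answer to the punch-world objection translates directly into an admission check: the world declares its local norms up front, and entry requires explicit acknowledgment. Here's a hypothetical sketch of that opt-in gate, using the face-punching world from her example; none of these names come from any real platform:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "always communicate consent": a world declares its
# local behavior expectations, and entering it requires opting in explicitly,
# so nobody walks into the face-punching world by accident.

@dataclass
class WorldNorms:
    name: str
    contact_combat_allowed: bool = False
    description: str = ""

@dataclass
class Visitor:
    account_id: str
    acknowledged_worlds: set[str] = field(default_factory=set)

def try_enter(visitor: Visitor, world: WorldNorms) -> bool:
    # Worlds with unusual local norms require explicit acknowledgment first.
    if world.contact_combat_allowed and world.name not in visitor.acknowledged_worlds:
        print(f"'{world.name}': {world.description} Opt in to enter.")
        return False
    return True

punch_club = WorldNorms("Punch Club", contact_combat_allowed=True,
                        description="Everyone here greets each other by punching.")
alice = Visitor("alice")
assert try_enter(alice, punch_club) is False  # blocked until she opts in
alice.acknowledged_worlds.add("Punch Club")
assert try_enter(alice, punch_club) is True   # consent given, world unchanged
```

Note that the rule doesn't remove the world; it only forces the world to advertise itself as it is.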

[00:38:52.632] Kent Bye: Yeah, it reminds me of some of the different discussions of, I think, Altspace changing their personal space bubble default size, so that the default behavior is safe for people, but longtime users had no ability to overturn that, which prevented them from having a certain amount of intimacy or the personal space that they're used to having. And so I think that goes back to Jessica's point about the defaults that are in the worlds: if people just go in there and they don't do anything, they don't touch anything, it defaults to the more safe environment. If people want to have more of those close interactions, there are certainly options, within VRChat as an example, to turn off your personal space bubble so that you can have those interactions, depending on the context; maybe you switch your settings and you're able to do that. But you're able to tune it for people coming in. You're prioritizing that they don't have a terrible experience that feels like it's transgressing their boundaries. Yeah. And Jessica, do you have any other thoughts on these design principles?

[00:39:48.562] Jessica Outlaw: Yeah, I think there's a way in which I see our chapter as a resource for people and for where social VR is at right now. I think there's other people who've been doing work in this area as well; Lance Powell is somebody who comes to mind. And Ryan Schultz has done so much to catalog the feature availability across different social VR worlds. So I do want to acknowledge where this chapter and where these ideas are coming from. And I don't necessarily want to predict what the future of social VR is going to be, because any predictions I make are likely to be wrong. But I do see a huge opportunity around ideas of convergence and figuring out different types of flows for people to go through. So this is really a snapshot of where things are at right now. But I would imagine that as people get more education around the opportunities in social VR, and as more defaults are established, if there's a 911 in the metaverse in the future, like Brittan Heller was suggesting, I do really wonder if the individual controls are going to be as important, or if there'll be more defaults that everybody is comfortable with, and then they go back and change those things over time or for themselves based on what they know. And I would just be really curious to come back and revisit this chapter five years from now, as these different apps are changing and evolving.

[00:41:12.475] Kent Bye: Yeah. One other point that I would make before we move on to the last section here is that I've noticed the challenge of real-time environments is that it's really tough to scale moderation, because essentially you can have just two people in an instance, and what are you going to do, have a moderator in every single one? It doesn't scale in the same way that cultural artifacts like text, photos, or videos can have moderation overlooking them. When you're talking about real-time social dynamics, it's really difficult to police, but also there's embodied interactions that happen that could be violating, in terms of sexually suggestive behaviors that are embodied. So I do think there's going to be an increased use of artificial intelligence and machine learning to potentially detect embodied behaviors that violate the code of conduct, or speech that violates the code of conduct. The counteractive aspect of that is being ruled by AI overlords with no oversight in order to create safe online spaces. But there's errors and limitations to large language models and all these other limitations of algorithmic approaches to solve this, where I don't think it's going to be possible to completely solve it algorithmically. There's going to be a need for other aspects of the cultural dynamics, which we'll get to in the last section here. But just to say that what I've noticed, at least, is some of the social VR platforms turning towards more AI and machine learning techniques to potentially detect some of these behaviors, and then create what I assume is some level of a social score, keeping track of the trust level of each of the users and their past behaviors so they can sift through the overall behavior. So there's a lot of stuff on the backend that I know a lot of the social VR companies haven't necessarily talked about, for good reason, because they don't want these systems to be gamed. But there is a cat-and-mouse game that happens sometimes in creating safe online spaces, moving into more AI and machine learning techniques. I didn't see that mentioned explicitly, but I know that's a big part of the future of all of this: handling these issues by leaning on the latest cutting-edge techniques to handle some of these moderation aspects. They've been handled within text, video, and photos before, but now with real-time interactions, there's lots of new stuff that has yet to be fully figured out.
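Kent is speculating here, and the following sketch is equally speculative: one simple shape the running trust level he imagines could take is an exponentially decaying score, where confirmed violations (human-reported or ML-flagged) push it down and clean sessions nudge it back up. This reflects no real platform's algorithm; as Kent notes later in the conversation, those systems are deliberately kept opaque.

```python
# Speculative sketch of a per-user trust score: history fades toward a
# neutral baseline, confirmed violations cut the score, clean sessions
# slowly rebuild it. Thresholds on the score could gate access to public
# instances. Purely illustrative; not any real platform's system.

NEUTRAL = 0.5            # baseline trust for a brand-new account
DECAY = 0.95             # how quickly old history fades toward the baseline
VIOLATION_PENALTY = 0.2  # cost of each confirmed violation in a session
GOOD_SESSION_BONUS = 0.02

def update_trust(score: float, violations: int, good_session: bool) -> float:
    score = NEUTRAL + DECAY * (score - NEUTRAL)  # fade old history
    score -= VIOLATION_PENALTY * violations
    if good_session and violations == 0:
        score += GOOD_SESSION_BONUS
    return max(0.0, min(1.0, score))             # clamp to [0, 1]

score = NEUTRAL
for _ in range(10):                              # ten clean sessions
    score = update_trust(score, violations=0, good_session=True)
print(round(score, 3))                           # trust drifts upward
score = update_trust(score, violations=2, good_session=False)
print(round(score, 3))                           # two violations cut it sharply
```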

[00:43:26.330] Jessica Outlaw: There's something else in what you're describing. If the future really is moving towards AI moderation, that's going to put people under a level of scrutiny when they're inside of social VR, and there's that feeling of being watched. Kim Stanley Robinson talks about the structure of being watched and how that can affect people. Even just going into a store and seeing the video cameras very prominently can affect your behavior; it can make you more self-conscious, and it doesn't necessarily matter if there's actually somebody watching you through that video camera. And just from that perspective of what the experience is going to be in social VR, I think that's another area that I'm going to be looking at closely: to what extent do people's experiences of being in social VR change if they know that they're being surveilled, or that they have the potential to be surveilled, or that they may be subject to algorithmic decisions when it comes to moderation? That's an area that I don't have strong predictions about, but just from the perspective of what it feels like to be watched and how being watched can change your behavior, that is something as a social scientist that I want to pay close attention to.

[00:44:43.672] Kent Bye: Nice. All right. Well, let's maybe move on to this last section here on social norms, which I think is addressing some of those cultural aspects where there's limits to what the technology can do on its own. Part of this is cultivating a culture, you know, creating a safe environment with each other, and what the normative standards of that culture are. And so I know, Jessica, we've talked about some of this before, and I recognize some of the same structure, but maybe you could take this in terms of some of these other aspects of trying to create safe online spaces from more of a cultural perspective.

[00:45:16.083] Jessica Outlaw: Yeah, this is influenced by how researchers in sociology and anthropology study groups of people. I talk about creating social norms in social VR, and this actually originally came out of a talk that I gave at Games for Change in 2019, because there's clearly demand for the creation of social norms. And I wanted to start giving some particular building blocks that people could use for the formation of norms. And so the last section of the IEEE chapter is talking about what these building blocks are. They are stories and myths to begin with. The stories are going to communicate the values that you have. Your stories will have heroes; your stories will have antiheroes. So whose stories get repeated? I mean, I think last year we definitely got a crash course in how stories and sci-fi can interact with technology with Neal Stephenson's Snow Crash and his coining of the word metaverse, and how it all of a sudden became really pervasive. The story of what the metaverse is and what it's for and who is there is one story that everyone in the tech industry is pretty familiar with by now. Yeah, and then we talked about the heroes, the archetypes. The archetype would be the stereotype of the person for your group to emulate. What is the personality type? If you're building a group of people in social VR and you want to help shape their behavior, you know, what's the ideal personality type, or two personality types, that they're going to have? Are they going to be mainly extroverts? Are they going to be artists? Are they going to be visionaries? So this is another way that you can start to describe who your group is for, and it will also influence how they behave. Next, I talked about signals of belonging, where there are the symbols that indicate your status and your belonging in the group. Then there's also artifacts, which are the things that you get in exchange for participating. So, for example, VRChat has a whole badge system that has to do with how trusted you are. Is it based on the amount of time that you've been in there, or is it based on... Kent, do you know how they give out the badges?

[00:47:31.528] Kent Bye: Again, it's all secret, because all of this can be gamed. So they never reveal what their algorithms are. Some people reverse-engineer them, but yeah, there's reasons why their trust and safety systems are pretty occluded: if they're known, then they can be bypassed more easily.

[00:47:47.444] Jessica Outlaw: Yeah. Yeah. And basically anytime you see a badge or some type of thing that you get in exchange for participating, that is going to be an artifact. So if people have a really, really high level of trust inside of VRChat, they're probably going to continue to have a really high level of trust. They're probably not going to take actions that could lead to them losing whatever status they have achieved. And this goes to why people get participation trophies in kids' sports. There's value in giving people things for the efforts that they've made, and it makes them feel more strongly aligned to the group and more willing to stand up for the values of the group. So when it comes to the social norms, these are the ways that people will act out the values that they have.

[00:48:38.745] Kent Bye: Great. And yeah, then there's the language and the rituals and the ceremonies that we talked about here. And for anybody that wants more details, I think we did a whole hour-long conversation at Games for Change in 2019 about that talk that you gave, so you can dive into much more detail there. But yeah, that kind of wraps up this paper. And reflecting on all these other dimensions, I'd love to hear from each of you what you think the ultimate potential of VR and social VR is, especially if we're able to create safe online spaces, and what it might be able to enable.

[00:49:09.820] Jessica Outlaw: I mean, I think there's a lot of things, especially as it relates to avatars and how people choose to represent themselves inside of social VR, that I'm very curious to watch. So we talked about the rubber hand illusion, and the extent to which the visual system in our brain can override almost all of our other systems at times. And this is reminding me of another academic paper where they trained people on how to move a tail. They were able to give people the simulated experience of playing a video game where you moved your tail by swinging your hips back and forth. And I think that's an area of embodiment that has been understudied. So I'm so curious to learn how different types of embodiment, and especially non-human embodiment, will change people's behavior. And will that actually make things more safe? Because if I go around as a piece of furniture instead of as a woman in social VR, does that mean I'm going to be less likely to experience harassment? So I think that's one open area that's exciting to look at, because there's just so much about our defaults in how we interact with other humans and the ways that, you know, we've been socialized to say please and thank you. Or maybe we've been socialized in other ways from playing Halo and Call of Duty nonstop. You know, there's all sorts of socialization available. But what is going to happen when you're interacting with a new object or a brand-new avatar that you've never seen before? And how does that change the quality and the type of the interaction?

[00:50:57.845] Michelle Cortese: Very cool. Okay. I'm going to take this answer in two directions: one being what I think is the most exciting thing this could lead to, but I'm also interested in talking about two things that I think are interesting axes of change for social VR that may happen in the next few years, which I'm excited to watch develop. Clearly, I'm very reticent to make predictions about anything as well. But I think the best version of this, the most exciting aspect, the thing it really enables, is just access. Access to so many different types of experiences, regardless of physical location or ability. It is incredibly exciting. And that is the true promise of so much of this. And that's what makes people excited when they read Snow Crash, whenever in their lives they chose to do that. Or, I mean, it's pretty dystopian, but you know what I mean. Any time you see anything in media that allows you to be somewhere that you're not, it's basically the process of teleportation. So that's the lovely fantasy version. From a very realistic perspective, there are two areas of growth for social VR that I'm really excited to watch develop in the next few years. One is mixed reality, because I'm much more excited about the versions of this promise of the metaverse that are more real-world, where things begin to bleed into our actual existence. To me, that's where things become significantly more interesting and significantly more real, which also makes people significantly more likely to take some of the safety concerns more seriously. So that's mixed reality. And the other one is decentralization. I'm super fascinated with where embodiment and Web3 will touch. How do we scale safety? That's something I've been thinking a ton about right now. So those are less things where I'm like, here's where it's all going and I'm super excited, and more like, here are two things that I think are going to tweak this journey quite a bit for us, make it more interesting, and kind of grow it. And yeah, that's where my brain is at right now when I think about the best in the future.

[00:52:58.778] Kent Bye: Yeah, no lack of design considerations for the future of whatever the metaverse may be, for sure, especially taking into consideration all the aspects you're covering in this paper on social and multi-user spaces in VR, trolling, harassment, and online safety. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:53:16.768] Jessica Outlaw: I'm going to be on a panel at AWE with some folks where we're going to be talking about social VR and safety and privacy and some other topics. So it's going to be on June 3rd. I think at 2 p.m. Pacific. And so come and find me after that panel if you want to talk more on these topics.

[00:53:35.907] Michelle Cortese: I have nothing to present right now, but I will say it is possible for us all to have that room where we punch each other in the face when we see each other. Anything's possible.

[00:53:47.680] Kent Bye: Nice. Make all of your dreams come true with virtual reality. Well, Jessica, Michelle, thank you so much for joining me today and for participating in this paper that you wrote for IEEE, Social and Multi-user Spaces in VR: Trolling, Harassment, and Online Safety, written in collaboration with Sarah Carboneau and Tommy Erickson. So yeah, thanks again for coming on and unpacking it a bit on the podcast. So thank you.

[00:54:09.613] Jessica Outlaw: Thank you. Thank you.

[00:54:12.947] Kent Bye: So that was Jessica Outlaw, a behavioral researcher who's been investigating different aspects of online harassment and trolling within the context of social VR, as well as Michelle Cortese, a designer and sometimes artist in the space of VR and AR who's working at Meta in the social VR space. I have a number of different takeaways from this interview. First of all, it was great to get a lot of the historical context of what's been happening with social VR harassment. I think Jessica Outlaw has been at the forefront of doing some of these pioneering surveys that are trying to get statistics showing the degree to which harassment and trolling are happening in these social VR spaces. For me, I don't think there's ever going to be a purely technological solution to this problem. There's going to have to be some aspect of cultivating a culture through all the different things that Jessica went through at the end, in terms of how to cultivate community and culture through a lot of those anthropological insights. If you want more information on that, definitely check out the 2019 interview that I did with Jessica around the elements of culture and cultivating community within the context of social VR. There was a whole talk that she did at Games for Change in 2019, and we broke that down in much more detail in the conversation that I did with her on site in New York City. There's also the interview that I did with her back in 2018, when she first did the survey with Pluto VR looking at the different dynamics of social harassment. And she also mentioned Lance Powell; I did an interview with him about a lot of the work he's been doing in the context of social VR as well. You see a lot of coverage around harassment in VR, and if you just go into any public space within social VR, it doesn't take too long before you face some level of physical or sexual harassment, or racist or sexist behaviors. This kind of thing has happened across all dimensions of the internet for a long, long time, but in these real-time social environments, you're particularly susceptible to it. So what are the different solutions to actually mitigate this? I think it has to be a multi-pronged approach: not only building the technological architectures, but also cultivating the community. There are also different economic dynamics there, in terms of hiring moderators and having businesses that run spaces people want to spend time in; if they have a successful business, then they may have the resources to actually enforce a lot of those moderation policies. There are also going to be different aspects of not only the rules set within each of these platforms, the codes of conduct and how they're enforced, but also whether there are going to be larger laws and regulations. Jessica mentioned the ADA, the Americans with Disabilities Act. There was a recent lawsuit where someone sued HTC's Viveport, saying that a lot of these immersive VR experiences were not accessible. And the finding was that a lot of these experiences actually do need to follow the Americans with Disabilities Act, which means there's a whole other layer of accessibility that needs to be built onto all of these systems.
So that's a whole other white paper and interview that will dive into much more detail on the specific issues around accessibility within the context of XR. But in this case, as you're building all these systems, there will be legal obligations from the government mandating that these safe online spaces also meet a certain level of accessibility. So it's not only about having the user interface within the 3D UI and everything else, but about making it usable for people to go in and actually protect themselves. A lot of what I've seen, at least anecdotally, just in conversations with Lance, is that there are onboarding processes that you're supposed to go through when you're entering these social VR spaces. It's made clear that this is a matter of protecting yourself, but sometimes people will skip or bypass that onboarding, or not really be familiar or comfortable with how to actually push the buttons or deal with different aspects of harassment. A lot of times within social VR, people will go into private instances to avoid all of those harassment issues in the public social VR spaces. But the point that Jessica was making about defaults is that, as we're throwing people into these different spaces and as the XR community exponentially grows, we can't just rely upon people knowing all the different ways they're going to have to protect themselves. There has to be an education process, but the tools also have to have at least some sort of uniformity or standard user interface patterns that people can learn once. Brittan Heller had a piece in The Information in March 2022 asking whether we need a 9-1-1 for the metaverse, talking about the need for maybe a real-time response line that people can call to deal with a situation that may be unfolding. And there are a lot of different tools, in terms of blocking and banning and muting people, for handling the different layers of interacting with other people. The other big point that I got from this conversation with Michelle was about Edward T. Hall's proxemics, which comes from the anthropological literature and looks at the different zones that you have around your body. The four zones are the intimate space, which is within about 0.45 meters; the personal space, which extends out to about 1.2 meters; the social space, which extends out to around 3.6 meters; and beyond that, the public space. And as you have the intimate, personal, social, and public spaces, you have different technological approaches for each of these layers. For the intimate space, you have personal space bubbles and other protective defaults. For the personal space, Michelle was talking about ensuring there are simple, fast gestures that you don't have to really think about, and eject buttons that allow you to get into a safe space. You have established local behavioral rules in the context of the social space, and then, at the largest scale, consistent laws or rules across the entire platform, and, in the context of the metaverse, across multiple platforms.
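To make that layered mapping a little more concrete, here is a minimal sketch in Python of how a social VR runtime might classify avatar-to-avatar distance into Hall's four zones and pick the corresponding design layer. To be clear, this is my own illustration, not code from the white paper; every name and affordance string in it is hypothetical.

```python
# A minimal sketch (not from the IEEE paper) of how Hall's proxemic zones,
# as adapted by Cortese & Zeller, might map onto safety affordances in a
# social VR runtime. Zone boundaries follow the rough distances discussed
# above; the affordance strings are illustrative placeholders.

from dataclasses import dataclass

# Approximate outer boundary of each zone, in meters.
INTIMATE_MAX = 0.45
PERSONAL_MAX = 1.2
SOCIAL_MAX = 3.6

@dataclass
class ZonePolicy:
    zone: str
    affordance: str

def zone_policy(distance_m: float) -> ZonePolicy:
    """Classify an avatar-to-avatar distance into a proxemic zone and
    return the design layer that should govern interactions there."""
    if distance_m <= INTIMATE_MAX:
        # Intimate space: protected by default, e.g. a personal space
        # bubble that fades out avatars who get too close.
        return ZonePolicy("intimate", "personal space bubble on by default")
    if distance_m <= PERSONAL_MAX:
        # Personal space: quick, low-friction escape hatches.
        return ZonePolicy("personal", "one-gesture mute/block and eject button")
    if distance_m <= SOCIAL_MAX:
        # Social space: locally established behavioral rules.
        return ZonePolicy("social", "local room rules and moderation")
    # Public space: platform-wide code of conduct applies.
    return ZonePolicy("public", "platform-wide code of conduct")

print(zone_policy(0.3))   # intimate -> space bubble
print(zone_policy(2.0))   # social -> local rules
```

The useful design property here is that each outer zone escalates from individual defaults toward collective governance, which matches the framework's move from personal bubbles out to platform-wide rules.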
I don't suspect that we'll ever have uniformity across all these different sites, and I think there's probably value in having some diversity of approaches, so that you can have different types of experiences, play different types of games, and join different communities. Just like the world has different normative standards that are very specific to each culture, I think that's going to be reflected within the context of the metaverse. But at the same time, there should be a baseline of ethics and safety that's just expected as you go into these different spaces. So a lot of the recommendations are around treating virtual embodiment with the same weight as physical presence; considering proxemics when designing and comprehending these virtual spaces; communicating consent to set expectations, giving users agency, and enforcing their choices; providing quick-action remediation tools for tough situations; allowing users to define their preferences before the social interactions begin, without making it too overwhelming for them to get into the experience (there's a small sketch below of one way that might look); and establishing local behavior expectations. Those are the major recommendations, and hopefully that gives a good sense of the landscape. I think this is going to be a problem for now, until we've reached a kind of cultural evolution where it's no longer a problem; it's going to take some time, or at least these mitigating factors. There is that thing that Michelle said: in the context of Horizon Worlds, if you have violating behavior, it could actually get connected back to your Oculus account, which could turn off your access to your VR headset, or at least force you to create another ID at the Oculus level. So you could actually lose access to your hardware if you have violating behaviors within the context of Meta's Horizon Worlds. But most of the other third-party apps' bans are more local to their own experiences, and ban evasion is just something that each of these companies has to account for, along with the degree to which they keep their own social scores and track their own online safety. Like I said in the conversation, a lot of that is really occluded, so there's not a lot of detailed information about it within this white paper. But I think it's part of the reality that, in order to create these spaces, they have to have some system to sift through all the noise and deal with the people who are engaging in these violating behaviors. In the future, I do expect to see a lot of AI moderation. A big issue as we move forward is algorithmic justice: what happens if AI moderation is too aggressive, are there ways to appeal decisions that are made, and what does the larger justice system look like? That gets into the IEEE metaverse and governance paper, which we'll go into in a bit more detail later in this series. So again, this is the IEEE Global Initiative on the Ethics of Extended Reality, and this is the first paper in the series: Social and Multi-user Spaces in VR: Trolling, Harassment, and Online Safety. There are eight total white papers being released in this first batch; seven of them are coming out first, and the Metaverse and Governance paper is going to be coming out within a few weeks as well. This series is going to be diving into each of these different papers, and hopefully you enjoy digging into each of these different frontiers of XR ethics.
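Circling back to those recommendations: the ones about protective defaults and quick-action remediation tools are concrete enough to sketch. Here is a minimal, hypothetical Python illustration of safety preferences that a user sets once before entering a social space, plus a one-gesture remediation dispatcher. None of these names come from the white paper or from any real platform's API; they are assumptions made for the example.

```python
# A minimal, hypothetical sketch of the "define your preferences before the
# social interaction begins" recommendation: protective defaults that a user
# can adjust once, plus quick remediation actions. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class SafetyPreferences:
    # Defaults lean protective, so new users are safe before they've
    # learned where the tools live.
    space_bubble_radius_m: float = 0.45    # intimate-zone bubble on by default
    voice_from_strangers: bool = False     # only friends are audible at first
    allow_touch_interactions: bool = False
    blocked_users: set[str] = field(default_factory=set)
    muted_users: set[str] = field(default_factory=set)

def quick_action(prefs: SafetyPreferences, action: str, user_id: str) -> None:
    """Dispatch a one-gesture remediation action against another user."""
    if action == "block":
        prefs.blocked_users.add(user_id)   # removes them from your world entirely
        prefs.muted_users.add(user_id)     # blocking implies muting
    elif action == "mute":
        prefs.muted_users.add(user_id)     # silences them but keeps them visible
    elif action == "report":
        # In a real system this would bundle evidence for moderators;
        # here it is just a placeholder.
        print(f"report filed against {user_id}")
    else:
        raise ValueError(f"unknown quick action: {action}")

prefs = SafetyPreferences()
quick_action(prefs, "block", "troll_42")
assert "troll_42" in prefs.blocked_users
```

The point of the protective defaults is exactly the one Jessica made about onboarding: the people most at risk are the ones least likely to know where the safety tools are, so the defaults have to do the work until they learn.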
For me, there are a lot of really interesting philosophical aspects here, but the big thing as we move forward is trying to understand the patterns so that we can create this technology in a way that isn't causing more harm than good. That's what drives me to be involved in a project like this: being on the executive committee for the last couple of years, engaging in monthly conversations and plenary meetings, helping to facilitate the white paper process in the beginning, then passing the baton, and now just trying to help get the word out about all the work that's been done over the last couple of years. So anyway, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
