#932: Medical VR Insights into the Ethics of Experiential Design with Howard Rose

Howard Rose has been working in medical VR for over 20 years now, and he’s the co-founder and CEO of Firsthand Technology. He reached out to me after listening to the XR for Change panel on Ethics in order to talk about some of the specific ethical considerations of experiential design that he’s learned from working on medical and educational VR experiences for over two decades.

We talk about some of the larger political, medical, and economic contexts as well as the importance of diversity and inclusion, but there are also some specific insights that Rose shares from working on everything from PTSD therapy applications to oral health education aimed at underrepresented minorities.

Rose also highlights these three points about what he’s observed happen when people are in VR:

  1. People often think they had more control and agency than was actually programmed into the experience.
  2. Our memory is a reconstructive process, and sometimes people will extrapolate and create memories of things that never actually happened in the virtual world.
  3. People have a hard time differentiating between what happened in reality and what happened within virtual experiences.

Rose highlighted these points in the context of how these vulnerabilities could potentially be exploited in order to maliciously manipulate people. While these vulnerabilities exist in other contexts and media, our media literacy skills for deconstructing how immersive experiences impact our beliefs are still at a very nascent stage of development. So he was also bringing up larger questions around whose stories are told, and which stories are not being told, given this asymmetry of knowledge about the power of XR experiences.

There are also a number of other ethical issues raised in a Frontiers in Virtual Reality article titled “The Ethics of Realism in Virtual and Augmented Reality,” authored by Mel Slater and 13 other co-authors.

A lot of the specific information about the psychological and physiological impacts of immersive experiences comes from the medical applications of XR, and so Rose wanted to share some of his insights and concerns about the ethics of experiential design with the broader XR community as he’s in the process of writing a book about this topic as well.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So I've been having a lot of conversations about XR ethics lately, both on panel discussions and then follow-up conversations. So I moderated a panel on XR ethics as a part of the XR for Change back on Thursday, June 25th. They posted the video a few days later on Sunday. And then on Monday, June 29th, I get an email from Howard Rose. He's somebody who's been in the medical VR field for 25 years now. And he said, you know, there's a great discussion, but you didn't talk much about content and the experience of VR at all. And so I thought I'd take the opportunity to have a discussion with Howard to be able to explore some of the different dynamics of the ethics of the creation of content. And we also end up exploring a lot of the different larger systemic issues that are going on and the dynamics of the current state of the healthcare system in the United States, as well as different issues of diversity and inclusion. So that's what we're covering on today's episode of the Voices of VR podcast. So this conversation with Howard happened over a couple of days on Tuesday, July 7th, as well as Thursday, July 9th, 2020. So with that, let's go ahead and dive right in.

[00:01:21.114] Howard Rose: I'm Howard Rose. I think a lot about virtual reality and ethics, and I'm pleased to see, Kent, that you're elevating this conversation. Because I think there's two challenges that we have. One is to figure out what the ethical issues are, and the other is to implement them in a society and an economic structure that is resistant to a lot of these ideas. And so, you know, I think you can look at social media and what is happening there and how it was built. The people who developed Twitter and Facebook, they have a hard time thinking about ethics and trying to apply it backwards, and they're ethically challenged to begin with. They don't have clear ethical guidelines personally. So it's not that they don't know what to do. It's not that they don't know a lot of things that will help. It's that they just don't do them. So I think knowledge is necessary but not sufficient for change, as everyone says.

[00:02:30.948] Kent Bye: Well, I would, I would sort of jump in here and just say, you know, to set a bit of context for that is that, you know, VR is a very new medium and it's not at the scale of something like social media. And I feel like there's something about the communications platform that has enabled all sorts of amazing new things of being able to shift culture. You know, you could look at different protest movements around the world, what's happened with Black Lives Matter and the Me Too movement. You know, all of these are, I would say, positive manifestations of allowing people to have a voice to be able to speak out. But the other challenge I think is that when you have a communications network that is connecting people at that scale, then you have all sorts of other unintended consequences and moral dilemmas when it comes to information warfare and fake news and misinformation. And so as we're in the early stages of VR, we can extrapolate out to what some of these larger moral dilemmas are going to be facing, you know, what's it mean to have communications networks that are that large? We have a centralized system right now that has these communications platforms that creates influencers that are able to shift collective thought. But, you know, as we start in at that point, that's where I'm looking at is like, okay, once you get at this scale, then you have all sorts of new issues. And how do you, when you're thinking at the very nascent beginnings of virtual reality, extrapolating that out to say, okay, if there's a billion people in VR, then what can we imagine could go wrong? And if we have the existing economic systems of surveillance capitalism and everything else that's sort of built up, I can just extrapolate out this as, okay, there's going to be a lot of big problems there.
And so I guess the challenge is as we move forward, and it's almost like I see it as this metaphor between a greenfield system and a brownfield system. So like the legacy system is what we have with all the existing communications mediums, and a greenfield system is something that you're kind of building from scratch. It feels like VR is in this unique space where it's able to be a little bit of this greenfield, but with a legacy system with all these brownfield economic systems and other ways in which we're having these mass communication platforms and social media. And Kevin Kelly, he argues that once you introduce a technology into the world, then you pretty much can't take it back. It's there. And we learn to live with what the affordances are. So I have a hard time imagining taking away something like Facebook or Twitter. But as we're in the early phases of VR, as the VR community, then what is the responsibility to kind of look out to the future to see what potential problems are once you get to that certain scale? And what can we realistically do now to be able to maybe learn from some of those lessons from before?

[00:05:05.030] Howard Rose: Well, you just covered probably about 12 PhD dissertation topics there. Just that one comment, I'm going to latch onto it, because I think that idea that once you have a technology out, you can't bring it back is a very American idea. The Europeans are trying to exert pressure on privacy laws, on changing and reeling in and structuring how technology goes. What that sentence implies is that once it's out there, it's free. It's like a buffalo roaming the savannah, right? And we can't bring it back, but we can reel it in. We can create structures. And I think America is in this trap. I mean, it's so interesting how COVID and Black Lives Matter have come together. And in a way, I look at COVID as the perfect disease for us because it's slow enough and it forces us to make choices, and it exposes all of the cracks in our system, both in terms of equity and health, and health is really an equity issue, and wealth and capitalism and control and all of these things. And it's just sort of peeled the onion in a painfully slow way. And then combine that with Black Lives Matter, where you have white liberal Karens in Central Park calling the police on a birdwatcher, right? So I think there's implicit bias. There's explicit bias. We have a lot of explicit bias. There's a lot of explicit bias going on in policing. There's a lot of explicit bias going on in social media. You know, we have a moron in the White House. It's not hidden. And I think before we start talking about implicit bias, we have to work on the structures and the institutions and also the personal work that people need to do to develop a brain and a spine. So, yeah, I don't, you know, look, I've been in this 25 years. It's not new to me. And what I've seen is that it's really easy to manipulate people in VR. So I've worked on health applications and PTSD, and first of all, terrorizing people or making them afraid is so simple in virtual reality. It's like, you don't have to do that much.
In fact, you've got to dial it back. So one of the interesting things about PTSD research, so there's a lot of ethics around PTSD therapy. And one of them is that the therapist does not implant or elaborate or embellish or exonerate or do anything for the patient. It's a process for the patient to come to terms with their experience. But in a therapeutic environment, it's very easy for a therapist to go, oh, you know, you didn't have a choice, you know, to say all sorts of things that would lead the patient in a different direction. So therapists really try to work on not doing that. And then you think about what happens in virtual reality, where we are creating a completely synthetic environment that that person did not experience. We know that the way memory works is that we recall something, we reinterpret it, and then we put it back in our brains. So humans are going to have a hard time, gradually, as we go down this path, differentiating between what they saw in virtual reality and what they actually experienced, because we remember these things as places. So I worry about implicit bias a lot, but I'm far more worried about explicit bias. And if you look at what's happening with Russian intervention and Chinese intervention and God knows whose intervention in our elections and manipulation, the way that they work is that they don't create memes. What they do is they find plausible ideas that are already in society, and they amplify them in very selective ways in order to breed dissent and confusion. Now, if they can do that on a phone, think what they could do in a virtual world. And there are so many people watching Fox News and watching stuff that isn't news, stuff that isn't real, and anti-vaxxers and all of these people. Imagine the amplification of that when you can't differentiate very clearly what is real and what is not.

[00:10:08.294] Kent Bye: Yeah, I've been looking a lot at issues of epistemology and truth and knowledge, and Agnes Callard has a great little five-minute video that's been a koan for me. She talks about how in order to find the truth, you have to believe all truths and avoid all falsehoods, which are actually two different algorithms that it's impossible for one person to do on their own, both at the same time. So it actually requires a community process of peer review and the believers and the skeptics and collaboration towards the truth. So people putting forth theses and trying to kind of deconstruct them. And so there's this dialectic that happens that in order to get to the truth, then you have to have this larger process. And that's what journalists up to this point have been doing, trying to vet out the information. But we've got people that have enough platforms on these alternative networks, whether it's social media or whatever else, that they become their own authority without any of those believe-truths and avoid-falsehoods checks and balances, where people can just put out information without it being truly vetted by a community process. Which I think is kind of the heart of you saying that the COVID disease and the coronavirus itself is like this anomaly to a paradigm that needs to shift. It's like catalyzing a deeper paradigm shift, but however, it's not an easy shift that's happening. So, you know, when I look at a lot of these ethical issues, the dilemma that I face is that there's a handful of major players that really have the ability to shape the underlying economic business models and the trajectory of where this all goes in the future. And then there are content creators like yourself that are able to create experiences, and you're in a medical context.
And so you're not necessarily dealing with the same market pressures as like a consumer context that's out there. But you have a blurring of all these different contexts, is I guess the concerning thing for me, is that these entertainment experiences are all of a sudden potentially having therapeutic implications when it comes to post-traumatic stress disorder. And so one of the things that I put together is like an XR ethical manifesto, but it's a bit of like, okay, now what? Like, yeah, here's the landscape of potential problems. Now what happens to actually address it? Because there's a lot of different stakeholders. There's stakeholders that are the platform creators that are building the underlying engines for the data and what happens to the data. But then there's also the content creators like yourself, who have a whole other range of additional ethical and moral responsibilities to be able to put forth content that is going to minimize harm, and all these things that we can look at, like how Mel Slater recently collaborated with a bunch of people to kind of put forth some guidelines on the ethics of realism. So, but yeah, I'm just curious from your perspective, how do you ground that collective issue that is way beyond either one of us as individuals and the larger ecosystem, versus what agency you as an individual content creator have? And for us to be able to have this conversation, to be able to sort of figure out how do we negotiate all the variety of different stakeholders and being able to accept some risks. Because, you know, in order for VR to exist, we need to have some risks, but to be able to minimize the harms in the long term, when we think about these seeds that could grow into something that becomes the next issues that are at a collective scale, that we have a lot more ability to try to think of now before it actually happens.

[00:13:29.247] Howard Rose: Yeah. I think one of the challenges, so first of all, I think VR as a thing, as a distinct thing, is going to disappear. It's going to melt into every other piece of technology, into my phone, into everything, into people's workplace. It's going to stop being a discrete thing, just like the internet has stopped being a discrete thing. It's just my phone. I don't think of it in the same way, right? I use my phone for lots of things. It's not going to be what it is now. So we're in kind of the early stage of, it's like the car. There were tons of small car companies. They're all gone, right? So you've got the hardware starting to consolidate with a few players. I mean, there's more of the Packards out there right now, but that's going to continue. And there will always be outliers. There will be the Teslas of VR, with higher-end stuff that's special. But, you know, I think the challenge for people is that VR is a complex thing to try to understand, right? At a certain level, it's really simple. You just go in, you put the headset on, you have this experience. But to try to deconstruct it, like we might deconstruct a movie. So people have a whole vernacular. They've seen a lot of movies. They can say, I didn't like this movie, and they have a whole fairly sophisticated way of thinking about these things. When you think about virtual environments, how do I know if I'm in a good environment? I've been in lots of them. I can go, this is a really bad experience and I feel sick, or this is really good. I'm an outlier, of course, but there's a huge leap between the type of sophistication that we have about movies and theater or music, and how we understand, take a simple example, product placement. So I'm in VR. I walk into a place, and what's the music playing?
It's not so different from walking into a mall and having music, or a store and having music. But the messages can combine, because the entire environment, not just that store, but the entire environment, is developed by the same people, right? So it's got intrinsic values and ethics, or lack thereof, built into the entire experience. So, you know, why does that certain car company keep driving by, you know? And those are sort of more innocent, but like that billboard, that little sign, everything. So I think that the way that manipulation can happen is far more subtle and, in a way, insidious. So when I read the New York Times or watch Fox News or whatever, I, as a reader and as a person who understands what the New York Times is and what Fox News is, I have filters. But VR is, by and large, not totally, but there's a big pre-attentive experience to this, right? The reason VR works is because it hits our lizard brain, right? It's a pre-attentive experience, that feeling of presence, that feeling of being, you know, when you stand up, do you steady yourself on the virtual table? That's a sign of being immersed and present. And that's not something that we cognitively do. So by its very nature, you're trying to filter something that is pre-attentive. And that is, I mean, there's a lot of people who know that, but I'll just say it's a very sophisticated thing when you compare it to watching a movie in a very sort of controlled environment and being able to parse it. So I think that as long as VR is limited, like so you take social environments, VR social environments, collaborative environments, or AltspaceVR or something like that, as long as you know all the people in it, it's a very different thing. But when you don't know all the actors, you know, look what happens in, you know, Linden Lab's Second Life, and you just get a lot of really bad behavior.
And you look at what happens even in social online gaming: the statistics of the number of people who have been harassed or sexually harassed or abused or, you know, suffered some sort of unpleasant experience is extremely high. It's over 50%. So, you know, it is going to happen, right? So if we don't find a way to control these things, it's going to happen. So who in this universe of experience, of technologies and developers? There's no entity. We don't have Socrates, and our political system is bifurcated. It's not like we have one central authority. We don't even believe what those other people believe. So who's going to be able to make those decisions, and who's going to guide us? The marketplace of ideas has never been ethical. I can't think of any examples. I mean, you think about what happened with Black Lives Matter, it's so interesting, because it's finally we're trying to resolve the Civil War. But Johnson told King, force me to change. Yeah, force me to change. Make me change, and I'll do it. He didn't go there. He didn't stand up and say, that's the right thing to do. I don't know. I'm pretty bummed out about the ethics of people at the moment. We're tribal animals, but we have to do something. I feel compelled to do what I can do.

[00:19:57.545] Kent Bye: Yeah, there's a, there's a lot in there. Well, first you're talking a lot about the basics of media literacy and how we have over a hundred years of film that we're able to deconstruct and understand. We have new mediums, like maybe social media, where you have this dialectic of the cat and mouse game, is one metaphor I think of. Like, Tristan Harris went to Stanford and was studying at the Persuasive Technology Lab. And then all those people go out and build things like Instagram and basically create these digital slot machines to be able to hijack our attention and to get us really hooked in, using the game mechanics and kind of exploiting the holes of our psychology to be able to have us get hooked into these different online platforms. And then you have this resistance movement that comes from Tristan Harris founding the Center for Humane Technology, and all these ways we're trying to deconstruct these different persuasive technologies and to be able to be more critical and maybe unplug from being in those types of obsessive loops. But that in some ways is on the bleeding edge of culture. And there's a certain amount of privilege that comes with being able to disconnect from some of these different platforms, and there's a bit of a distribution curve that you can think of, the Rogers diffusion of innovation, of the innovators and early adopters, early majority, late majority, and the laggards. And I'm a big fan of Whitehead's process philosophy that sort of looks to see how a lot of these technologies and the culture, as it shifts, goes through these different phases of people that are able to have the education they need to be able to have that type of media literacy. But also, like, your relationship to those different filters, like you were talking about, you're like an early adopter or pioneer in that sense.
And that's not something that's been evenly distributed, even to the early majority of the existing media landscape that we have now, for people to have those same types of media literacy skills. And so what's it mean to be able to introduce even more complicated technologies that go way beyond our ability to even rationally think about things and start to subtly influence us in an experiential way, to have not just information warfare, but a form of experiential warfare that we can see is possible on the far horizons? And how do we continue to navigate that? And this is a big issue that comes up in terms of technology and its diffusion. And it's often said, like, in order to make things accessible, you just have to reach economies of scale. And then once it gets there, then it kind of diffuses out. And so I see this as this diffusion process. And there's a certain amount of personal responsibility that individuals have, but also a collective responsibility, and I tend to look to Lawrence Lessig and his pathetic dot theory, where he sort of breaks it up into these four different areas. He says economics is one area where you turn those knobs and that brings about these collective shifts. There's the culture, so the individuals as a collective and what we have with the normative standards in our culture. There's technology and the architecture itself, which is the platform creators and the technologists who are putting in the algorithms that are shaping the collective behavior. And then there's the government and the laws that can come in and start to shift the oversight, so that if any of those other ones aren't looking out for our best interest, then what type of regulatory laws and enforcement do we have in order to really cultivate the right behavior? And so I really see it as this combination of the economics, the culture, the technology architectures that we're building, as well as the law, and that they all need to be working together.
And right now we have a little bit of a laissez-faire capitalism where there's resistance to having any sort of oversight or regulation. And so it's really left up to the market, which drives the culture, which then leaves the people who are creating this technology, the architects, as sort of the top of the food chain. You know, whatever they decide is what goes. And then everybody else sort of falls in between the shifting of the culture and the economics, because there's no real viable alternative. So I see the decentralized web and the decentralized internet as an antidote to some of the centralization of that technological architecture, but you're still left with the shifting of the culture, being able to deal with questions like, is privacy a human right that we need to have enforced at a deeper level, like the GDPR is trying to enforce privacy in Europe? There's no real equivalent here in the United States. So I don't know. That's a little bit of what I see, is that it's this unfolding process, but also that there's these four major areas that you could start to look at, and to see, okay, well, we're talking about stuff that could eventually become policy. But as technology creators, the thing that we have the most leverage in is the technology and architecture and the code, to be able to create experiences that then shift the culture that then maybe create new economic behaviors.

[00:24:30.562] Howard Rose: That's big. Um, okay. See, I'm not the only one who covers 35 different topics. No, I mean, this, of course, when you start thinking about this and ethics and technology and media, it hits every part of society. You know, about the media literacy thing, I think people of color and African-Americans and the people I know are some of the smartest, most aware people. I mean, because if you're African-American, you've had to negotiate a lot of bullshit in this society on a daily basis that can threaten your life. And so they are very aware of the inequities. They live with it. They live with it on a macro and a micro level. They have the talk with their kid. My friend had the talk with her six-year-old kid about, you can't do the dumb things that other people do. My parents didn't have that talk. And I, you know, I have white privilege. So what's interesting is that white people are so clueless, and, you know, it is the dominant culture. We're the default color. But, you know, African-Americans and people of color in this country know exactly what the hell is going on. They have two reactions, sort of. One is to vote. A lot of civil rights leaders talk about voting as a spiritual, moral obligation, that people have died for that, and that they really respect it. You know, white people are running around to state houses with guns saying, it's my right not to wear a mask and I want to go to a restaurant. So I think that the ignorance is not evenly distributed. That doesn't mean everybody's woke. But I think that the other thing is that America is the anomaly. We are the anomaly on this planet, and most countries are not like America. I lived in Japan for seven years, and I have a strong connection to that country. And Japan is much more like England than we are. Forget that they're islands. But, you know, the interconnectedness, the sense of place, the sense of history, you know, some of it, it's not all plus. It's class systems,
the untouchables in Japan, and there's a lot of crazy stuff happening everywhere. But, you know, I think that Americans, what we're really good at is innovating. It's a highly competitive system where small businesses take all the risk, and then, you know, you know the funnel of startups that actually succeed. It's like, and it's down here. And that is the engine of innovation in this country. And then the successful ideas, after I, as a small business person, have de-risked it to the point where a VC or somebody comes in and is willing to elevate it and do all that, that is the amplifier. But we are terrible. So we're really good at innovating, but we're terrible at implementing. You look at our healthcare system. We spend 18% of our GDP, God knows what it is now, but 18% of our GDP, $3 trillion, and we have the worst outcomes of any developed country. Our infant mortality, I mean, it's just that you could run down the list. It's absurd. So we can't even get people to agree on that. So how are you going to get people to agree? Well, let's stop asking questions. I think that what we need to do, I think we do need to have this conversation among the people who understand it. But I don't think we should be so naive to think if we think it and write it, they will come. I work very hard in my own business and the other companies that I advise and work with to try to push a very ethical use, you know, a science-based, evidence-based approach to using VR for health, education, training, you know, other things. Because I see the benefit, and I see that you can use this to reach people in a way that other media can't, that other experiences can't. I don't want to be responsible for spreading something that is ultimately going to be really problematic. It's going to take us a while. It's going to take us 10, 15, 20 years to really, for these technologies, I mean, there's the access to technology ethics, right?
So the other thing that's interesting is that my graduate work was in education. So VR is an intervention that really helps low-achieving students the most. It raises the lowest boats the most. Like, if you have an elite student who is working at the top end of their ability, VR can give them a plus factor that is valuable. But if you look at people who are just failing at the education system because they just don't get this whole textbook-written way of learning, giving them something in VR can really transform their lives significantly. So I think there is an ethical issue about bringing it to people who need that capacity building and who need that other approach. So I do look at that as an ethical mission, but the people who are going to get the most of it are the ones who it will marginally help. So I think access, development, design, make design work for the people that really need it the most. I could go on, but that's what I'm thinking about.

[00:31:02.743] Kent Bye: Yeah. When I put together my XR ethical manifesto, I had like these 12 different domains, and I put forth, if we lived in an ideal situation, we would live up to each of these different domains. And my takeaway after putting that together was like, okay, if I were to actually give this idealistic document to any one individual to try to implement, it would be impossible, because there's inherent trade-offs between these different aspects of these different domains. And I think that's what makes the essence of ethics so difficult, but also, I guess, philosophically interesting, because it's trying to map out what those trade-offs even are. You know, as we're talking about a lot of access and equity and diversity and inclusion, for me, I have this humility saying like, okay, I have these different frameworks and mental models, if I use Rogers' diffusion or if you use Lessig's, you know, four different areas. Well, there's going to be a critical theory perspective, from a feminist perspective, from a Black American perspective, from an indigenous perspective. So the Black, indigenous, people of color, you know, all these minority perspectives, they have a lens onto these different dynamics that are unique from mine as a white cisgendered male in the United States. So what I try to do is at least talk to those other people and try to include those perspectives. But at the same time, one of the challenging things is the design frameworks to allow people to look at what those trade-offs even are, to try to distill down into principles or actions. Like, if you're creating an experience that is going to be deployed out to people who are in a medical context, then what are all the different things you need to think about when it comes to identity and privacy and the economics around that, and diversity and inclusion, algorithmic bias? I mean, there's so many different things that you kind of have to take into consideration.
And so for me, talking to you as an implementer and practitioner, I'm just curious how you think of that. What do we need in order to have an ethical framework that, as experiential designers, we could have something to kind of push against as we start to build these different implementations, as a way to have a bit of a check and balance to see if there's any of these blind spots or areas that we're not paying attention to that could have unintended consequences or impacts of the technology on the lives of the people that we're trying to create experiences for? So what do you think is needed to actually ground these different trade-offs, to help guide the decisions that you make as a designer?

[00:33:22.792] Howard Rose: You need people of color and women at the table. I don't think there's any substitute for a diverse bunch of voices in the conversation and becoming developers. And I feel that very deeply. You know, I give a lot of talks. I was giving a talk at one event, and everybody in the audience was a white male. And the first thing I said was, next year, go back to your companies and bring some women. And I mean, it's not rocket science. We know that gender-diverse groups act more ethically. They're more efficient. They perform better. When you have gender equity in an orchestra, the orchestra is better. There are studies, lots of them, that show that gender equity improves the ethics and it improves the performance of organizations. So we need women and we need people of color at the table. And it's appalling how many decisions are made by a bunch of white dudes. An interesting global perspective on that is that a friend of mine, Torin Lucas, took our VR system to Uganda. He lives in England, but he grew up in Uganda, and his father is an anesthesiologist. I'm referring to the VR pain relief applications that we've used with lots of different populations, and they've been very effective and all of that. So he took it to Uganda, and there's a university in southern Uganda in a city called Kabale, Kabale University, and they have a medical school. And he took our application, COOL!. You know, it's a journey through a landscape, it's snowing, there's otters, there's spring, there's cherry blossoms, there's all this stuff. So he takes it to Uganda, and he gives it to a doctor, like the head of the anesthesiology department. And they've never seen snow. Like, what's an otter to them? The colors, everything. It's just so American. And the other thing is that there's about 26 million, I think, people in Uganda. Torin told me there are 13 anesthesiologists in the country.
So think about the scale of untreated or undertreated pain in a country, in an environment like that. I mean, you think about surgeries, getting your tooth extracted. The access to anesthetics, to doctors, the access to health systems, the infrastructure. So access and economics is one issue, but the other is that I don't want to just take my best guess and send it to them. I think it's really interesting to think about what an environment would look like that is designed there, or what would this technology be? I mean, at a certain level, okay, they may use different colors, different animals, different things, but what would the technology be? We live in a country that values individualism, that values the self, that values capitalistic motivations. If it was developed in a different culture with a different set of values, collaboration might be built in at the core instead of something that we're adding later. You know, headsets are what they are, and collaboration has been around for a long time, but there's problems with collaboration. But in another way, collaboration in a virtual environment is really simple: if you have two local instances of the environment running, you're sending very little information across the network. It's just location and what you're looking at. This technology has evolved in a very specific moral, ethical, cultural environment. We need all those other people at the table to save us, because without them we can't do it. We don't have the will and we don't have the intelligence, and we're just not that smart. So get people at the table, diversify the access, let other people help make those decisions, and they will be more ethical. I mean, I'll also stop and say I really appreciate that you are elevating this discussion about ethics, because I think there's not that many voices out there who are talking about it in a thoughtful way. I'm not a Luddite. I've been in VR for a long time.
I feel like in some ways VR and AR, XR, whatever it is, is inevitable, and that we need to push it in the best direction possible. But I think that a lot of people look at ethics as just an inconvenient friction for their development. And I guess I would encourage people to think about it. I mean, I think there is a higher moral calling about why we should do things in an ethical and appropriate way. But I do think that in order for things to be effective, they have to be accepted. And the public just doesn't know. I mean, the public right now doesn't really have that many VR or XR experiences. But I think that one thing is to elevate the discourse in public life. Two is to speak the language of people. I think it's very easy for this to be intellectualized or academic. I think we need very simple, concise messages for the public. I think one of them is diversity. There's things that you can just achieve with that. And that's an easy thing for people to understand and an easy thing for people to grapple with. It's much harder to understand a lot of esoteric design prescriptions or whatever. So speak clearly, speak loudly. When you give a speech, make it part of your discourse. Give your own bent about what this technology should be and maybe what it shouldn't be. You know, I get that opportunity, and I try to bring it into my regular discussions, because in healthcare there's so many ethical issues about access, about the ethics of practice, about do no harm. And the audiences of medical people that I talk to, they get that. They get protecting the patient and the Hippocratic Oath. So if you can speak the language of your audience and not just fall into the hype cycle, I think those are practical things that we in the industry can do.
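Rose's aside above, that two local instances of a shared environment only need to exchange "location and what you're looking at," can be made concrete with a back-of-the-envelope sketch. This is an illustrative assumption, not anything from Firsthand's actual software: here a pose is just a 3-float position plus a 4-float orientation quaternion, and the format and function names are hypothetical.

```python
import struct

# One pose update: position (x, y, z) plus orientation quaternion
# (qx, qy, qz, qw), packed as seven little-endian 32-bit floats.
# Layout and names are illustrative assumptions, not a real protocol.
POSE_FORMAT = "<3f4f"

def pack_pose(position, orientation):
    """Serialize one head-pose update for sending over the network."""
    return struct.pack(POSE_FORMAT, *position, *orientation)

def unpack_pose(payload):
    """Recover (position, orientation) on the receiving instance."""
    values = struct.unpack(POSE_FORMAT, payload)
    return values[:3], values[3:]

payload = pack_pose((1.0, 1.5, -2.0), (0.0, 0.0, 0.0, 1.0))
print(len(payload))  # 28 bytes per update
```

At 28 bytes per update, even a 60 Hz pose stream per participant stays under about 2 KB/s, which is why this kind of collaboration is cheap compared to streaming rendered frames between machines.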

[00:40:34.793] Kent Bye: Yeah. Well, you know, as I hear you talking, you have a company and you have projects where you're building stuff, like the example you just gave of an experience that went to Uganda. So it feels like there's a need to also just be in deep relationship with the communities that you're building stuff for. If you're a solo developer, I mean, how do you do that? If it's just one or two people that are actually making stuff, how do you bring that relational approach to the people who are stakeholders for the experiences that you're creating? How do you more intimately involve them within the design process of the projects that you may be creating as a small entity, putting experiences out into the world?

[00:41:14.146] Howard Rose: It's a lot of work. So the company I founded is Firsthand. We built a thing called Attack of the S. Mutans, funded by the National Institutes of Health, and we were looking at how effective VR games are at changing self-care behavior around oral health. So our target was 8 to 12 year olds. Specifically, if you look at oral health, it's another great example where 20% of the kids have 80% of the cavities. That means 20% of the kids are costing the system the most. And these are the 20% who most likely don't have the ability to pay. And in oral health, Latinos have a pretty bad oral health problem. So we did it in English and Spanish. So we tried to make it accessible, and we went to the Pacific Science Center to do a study. But in terms of what it takes to build something: I spent months talking to dentists, microbiologists. I spent a lot of time figuring out what they do, but also asking a dentist, what is it that you've done? What are cases where you have actually seen someone change? Like, what is the mechanism of change that you've seen actually work? And we built based on our best guesses about theories and the feedback that we got, but we also spent a lot of time with kids. We did focus groups in English and Spanish with kids and parents. And we developed a character, her name is Dentitia, and you can find our stuff at attackofthesmutans.com. So we developed this character, Dentitia, and what we did was we got a woman who's a really great sketch artist, and she sat with the kids, and she said, okay, Dentitia, what does she look like? Is she from space? Okay, and then she would draw, you know, space Dentitia, and then she'd say, oh, well, what about cave woman Dentitia? You know, and she ended up as this sort of Amelia Earhart; we call her Indiana Jones the babysitter, you know, sort of old enough to be cool, but young enough to be approachable, old enough to be wise. We didn't want the brainiac thing. But anyway, my point being, what do you have to do?
You have to, A, be humble. B, understand that you don't have the answer, and don't go for the easy thing. The first idea you have is wrong. I've done this so many times. I've listened to so many people who are like, this would be great in VR. No, it wouldn't be great in VR, but there's a kernel of what you're interested in that, if we work through it, we can come up with a good idea and we can come up with an implementation that actually works. Why am I saying this? I'm saying this because you need to be humble, you need to do your homework, you need to look and see what the other research is. People just go and they build stuff. They haven't even looked at the human factors. So how are they going to make anything good? So don't reject history. Go and learn it. Spend your time studying and understanding the topic. You know, really understand it. Because there's a lot of bullshit out there. It's really easy to just run with an idea that looks good enough. And then, what really bugs me: so I do pain relief applications that we spend a lot of time iterating on, and we test them, and we make sure that people don't get sick. We're working with cancer patients. They don't get headaches. They don't get sick. They feel fine. Do no harm, right? But that's not an accident. At the end, it looks easy when you do it right. In order to do that, you have to fail. You have to fail and be scientific about it. So what can developers do? They can be humble. They can learn. They can study. They can realize they don't have the best answer, but really think about it. And then, yeah, get other people. You've got to go and ask the people that you want to help. You've got to get the kids and the parents in there and say, what do you think? What do you want? It doesn't matter what I want. I don't care. I don't care if she's space Dentitia or she's cave woman Dentitia. My job is to take what's going to be effective and design it so that it works.
But it's not to decide what they want.

[00:45:55.768] Kent Bye: Yeah, as you're talking about this, I'm reminded, as you said earlier, that you've been in virtual reality for over 25 years working in the medical applications, which, you know, some people have this zero point of VR as being like 2012 with Oculus Rift, but certainly it goes back into the early 90s and all the way back to the 60s with folks like Tom Furness, who's been working pretty nonstop in the field for over 50 years now. But one of the things when I first got into VR and actually part of the big reason why I wanted to start the Voices of VR podcast was that I saw this split between the academic VR and the industry. And I guess that's maybe a split that exists on all levels of academia and industry. But in VR it was interesting because here you had a bunch of people at the IEEE VR conference that had been in VR for at least 5, 10, 20 or 30 years. But yet the industry didn't even have any journalists that were going to cover what was happening at the IEEE VR. And maybe that's a part of how news media works and how headlines work and how difficult it is to actually cover what's happening if you go to an academic conference. But I still find that there's been a bit of this split between the old guard and the new guard of VR. And that part of it is the access to the research is maybe behind paywalls that you have to pay a lot of money for, or you don't know where it's at. I'm happy to see that there's been at least more open access journals. But in terms of like the real critical discourse that's happening with the larger community, it feels like if you are coming in and you do want to have a little bit of humility and see what's come before, it's actually not all that easy to find out where that is. And I was happy to see Jeremy Bailenson create an experience that really tried to summarize his VR research that he's been doing for the last 20 years. And that just premiered at Tribeca. 
And so just thinking about like, what's the future look like if the research about VR is actually published within the VR medium? And how do you start to condense some of these insights into pieces of experiences that people could have and be able to then from there do the research? You know, that seems to be a new development as well. But there still seems to be this split between industry and academia. Things like that with stuff being locked down just makes it a little bit more difficult.

[00:48:06.149] Howard Rose: Right. And part of it is, I think, a Silicon Valley ethos of being young and scrappy and doing it in the garage and reinventing everything. I understand that. I mean, I don't think we're to the point where we know all the rules and we know what everything should be. But I do think that, particularly where people are developing things for some sort of outcome besides entertainment, VR involves so many different disciplines, and you can't be an expert in all of them. And I think part of it is understanding enough about psychology and the way that we interact with these technologies, and really kind of figuring out how you make something that is both enjoyable and effective. I see a lot of young developers doing things that we already know don't work. In some ways, it's nice to have a clean slate, because technology is going to change, and individuals and their habits and their abilities are going to change. So it's a moving target. I can't tell new developers what to do. There's a lot to know here. You know, I don't think that people should be stuck by the research, but they need to be informed about it. So, first of all, you have to know it's there. On our website, firsthand.com, we share a lot of the research on VR, and specifically about pain relief. You know, the studies, the fMRI data. Part of it's the approach, but still, it's a long way to go, and I think we need to pay attention to that.

[00:49:59.569] Kent Bye: Well, I think this conversation came about between us in some part because I had helped to moderate a discussion for XR for Change, and it was four different people talking about different things. And I find that a format of 90 minutes or two hours of talking about ethics with four different people can be challenging, because it's difficult to be comprehensive over the full breadth of all the different ethical and moral dilemmas. And you had reached out and said, well, what about actually creating content for people? What about some of those different dilemmas? And so I just wanted to put it back to you and ask if there are other big aspects of the different ethical and moral considerations when it comes to actually creating content for people.

[00:50:39.593] Howard Rose: Right. I mean, I think the conversation was very interesting. There's so many aspects to it. It seemed to focus a lot on the meta context of VR, and I'm very interested in the experience and how the experience links to broader society. So I think there's two aspects of that. Let's talk about what happens in the experience. I think that, as I mentioned before, VR hits your lizard brain, and the whole phenomenon is pre-attentive, it's pre-conscious. We get absorbed into it, we get immersed, we get that feeling of presence. And so the very nature of it is that it's something that we don't filter. Now, our cognitive filters come in at a certain level, of course, but there's a lot of things that will seep in. So there are sort of three things that I think are interesting from my experience using VR. One is that people will think that they had control over something that they didn't have control over. So either by circumstance, serendipity, or whatever, people will have a sense of agency when they don't really have agency over something. It's easy to think you caused something to happen, but then again, the only things that can happen are generally things that were programmed into it. So that's one. Another is that people remember things in VR that didn't actually happen. Just as our brains, when we recall things, we play with that memory, and then we rework it, and then we put it back into our memories. And so there's a tendency in memory to say, if this happened and that happened, then this other intervening thing must have happened. We had kids going through our environment, Attack of the S. Mutans, and they definitely felt like they did things, and they were very active. It's a game. But they would tell us that they did things that weren't in the program.
Whether it's the fantasy of it or the emotion of it, or the sense that if this happened and this happened then that intervening thing had to happen, there's a lot of ways that that manifests itself. So that's number two: we can remember things, and we can write into the script things that actually weren't there or maybe weren't intended. And the last one is this confusion between reality and virtuality: people can have a hard time differentiating between what really happened and what didn't. As an example, I came from the HIT Lab at the University of Washington, and Hunter Hoffman did this study where he built basically a giant chessboard in VR and a giant chessboard in reality, in a real room, and then he put different objects in them. And he was testing confusion as an indication of presence. So the more that you confuse: where did you see the sandwich? Did you see the sandwich in VR? Did you see it in the real world? Now, that was a very contrived thing, but it really points to the fact that people can have a problem recalling whether something happened in VR or in the real world. So take a different example. I saw somebody do a news-ish kind of thing about domestic violence, and they chose to set it in a working-class African-American neighborhood. So in the scenario in this VR experience, you're just a voyeur. You don't actually get to do anything. You see this thing play out. There's a house. There's cops outside. There's a couple inside the house, both African-American. There's a sister who's going in and out of the house. And this thing reinforces the idea that domestic violence is something that African-Americans have, that working-class people have. It's like, why did you choose to show that, right? It's not an upper-class, white, wealthy family. It sort of fits the stereotype. And so the other thing I'm concerned about is the stories that get told and the ones that don't get told.
This thing, I feel like it reinforces the stereotype that domestic violence is this certain thing, and it happens in these certain scenarios. And of course, it's not just like that. So I think the piece of it is, what stories do we tell? Just to give one more example, and then I'll be quiet. Okay, so if you have an AR overlay of a piece of your town, and it tells your history, who gets to tell that history? Whose voice gets to say what happened? I mean, there's some really interesting examples in the South where they're trying to commemorate lynching spots. You know, they're trying to let people know not only that it happened here, but look at the prevalence of this thing. So there are ways that we could use an AR overlay or some sort of immersive technology to enlighten. But on the other hand, there's lots of ways that we can amplify the dominant view of history.

[00:55:59.349] Kent Bye: Yeah. And is there any other topics related to XR ethics that you wanted to dive into?

[00:56:05.405] Howard Rose: Yeah, I think that one of the interesting things about VR is that VR raises the lowest boats the most. It can help the people who really need another way to learn, or really need an escape, you know, a different kind of way to do medicine. People who are not being successful in the sort of conventional way things are done. And there's a lot of, say, ethical issues there. I mean, I think you could also look at them as market opportunities. But for education, I think the challenge of getting things into education is that education has the least ability to pay for things. And so the people that we're trying to help, the people who can benefit from it the most, have the least access to the technology. You even look at what's happening with COVID and the access to computers and being able to go to school online. There's lots of people who just can't. They don't even have the computers or a good internet connection. So they're just not going to school. So there are equity issues there. And if you amplify that, I think the danger with VR is that it has the potential to help the boats that are floating highest get a little higher. But I would love it if we can find a way, you know, maybe on the Chromebook model or the $100 computer or whatever, that we can get this technology into the hands of the people who can benefit the most.

[00:57:38.705] Kent Bye: Great. And finally, what do you think the ultimate potential of all these immersive technologies might be, and what they might be able to enable?

[00:57:49.555] Howard Rose: I think the ultimate potential of, I'll just call it VR, is to make us better people when we take the headset off. And I look at VR as a way to create transformational experiences for people. So if you have pain, while you're in VR there's a benefit. But what's exciting is that you see that when they take the helmet off, they still feel better for a long time after. And we actually see people getting off of opioids, for example, using virtual reality as a way to change their relationship to pain. So I guess, in the best case, it helps us access things that we couldn't otherwise access. VR makes the unspoken spoken. It makes behavior, intelligence, action visible. And when we can design for that and harness it and enable people to get insight, like with PTSD, which is a great example, it makes the stress and the stressor, the stress-inducing stimulus, more visible. And then once it's visible, you can address it. So from therapy to education, there's so much benefit there that we can design for. I think the ethical thing is to get it to the people who need it the most.

[00:59:27.051] Kent Bye: Yeah. Yeah. Well, just as we're wrapping up here, is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:59:35.976] Howard Rose: Broader immersive community, I love you. And, a little foreshadowing, I am in the process of writing a book that is about experience: how we think about this concept of experience, and how we as humans, as animals, experience the world. I think the other thing is to show people that not all VR is the same. I mean, movies or TV is a great example. It's like seeing one movie and saying, yeah, I've seen movies, I've seen all movies, or that all movies are the same. I think part of that sophistication is the ability to differentiate more than just scenarios, like, oh, I was on a roller coaster, and more than just genres of experiences. Really, you know, VR is messing with your... we're hacking the brain and the senses, right? And for the way that we do that, both technologically and also in terms of content and experience, there should be a Hippocratic Oath for doing no harm. But I think that, you know, I can kind of dream up things like that, but I think, in the end, we really need to be talking to Joe Public, right? And I don't know. I think right now it's interesting, because with Black Lives Matter and a lot of the other things that are going on, ethics, our interrelationships between each other, our obligation to each other, is now front of mind for people. Also, I think there's an important thing, a tactic: don't make it so different from everything else. Don't make VR unusual. Make it typical. People are very accustomed to talking about the ethics of social media, for example. And I do believe VR as a thing is going to, quote, disappear. It's going to melt into everything else we do. It's not going to be such a discrete, different thing. It'll be a different way to experience training, but it's not going to be like the VR thing. So I think the more that we can link VR to everything else, the easier our job is.
So when we talk about the dangers of Russian intervention or cyberstalking or other things, talk about it in terms of this phone. Now you know how it's happening here. Okay, now imagine that it's happening in your room, right? And that the things in your room are actually acting in a similar way. Let's take the computer out of it and just get them to think about: what if Amazon controlled my desktop, my actual physical desktop? What would they do to it? It's not very far in the future that things are going to be like that, right? What products will be mysteriously placed because they know my demographics and my biometrics and what I look at, you know, analyzing eye tracking and stuff? So they'll have insights into me and my buying power that I don't even know myself, right? So in a way, we can just talk like that. And then you can have some interesting conversations, because then they start thinking, well, what if that guitar in the back starts playing by itself? You can talk about actual things rather than VR as a different kind of experience. I don't know. Maybe that will work. But it certainly brings it alive.

[01:03:39.087] Kent Bye: All right. Well, Howard, I wanted to thank you for all the work that you've been doing for over two decades now in the virtual reality field, ranging from education to medical applications, and for bringing up a wide range of all these different ethical considerations. So yeah, I just wanted to thank you for joining me here on the podcast and taking a little bit of a deep dive into XR ethics.

[01:03:57.808] Howard Rose: Okay. I look forward to the next installment.

[01:04:00.937] Kent Bye: So that was Howard Rose. He's the co-founder and CEO of Firsthand Technology. He's been thinking a lot about VR and ethics, as well as working on medical applications of VR, for over 25 years. So I have a number of different takeaways from this interview. First of all, a big message that I got from Howard is that diversity and inclusion is a fundamental, foundational principle, just because there's going to be a lot of different blind spots that folks have. And so in order to be as complete as you can, you really need to get a plurality of different perspectives and opinions, and there's going to be lots of different ways of understanding what's happening with the technologies. There's a number of different specific things that Howard was talking about here, and I just want to go through those. You know, he's talking about how VR is pre-attentive, where the experiences are hitting the limbic part of our brain; it's impacting us at a deep level. Working with PTSD, he says that the therapists are trained to not implant, elaborate, embellish, or exaggerate: to basically take as hands-off an approach as possible, to let people have their own experiences, and then to reconcile their memories with the experiences that they're having. Because the process of memory is a reconstruction process, sometimes you can add things in there, so basically you want to try to not do harm with VR. And then there are all these different things that he's observed over the many years of showing experiences, and he named three or four of those. Number one is that when people go into VR, they think they may have more agency or control than they have. They say that they could do things that you didn't actually program, that are pretty much impossible for them to actually do.
Maybe they're just making correlations and thinking that their agency was projected into different actions, but that's a problem. Second, people remember things that didn't actually happen. So for things that weren't in the experience, if they started at point A and ended up at point C, they extrapolate that B must have happened. So there's often an extrapolation of things that didn't actually happen. And third, people can have a problem differentiating between what happened in the real world and the virtual world, looking at some of these experiments of creating giant chess pieces in reality versus the chess pieces within VR, where trying to differentiate what happened in which context can sometimes be difficult. So we're creating these immersive experiences that are undifferentiated from reality according to our memory. It's hard to know what was real and what happened in these virtual and simulated environments. So there's certainly a lot of ethical implications there in terms of what people are starting to do within these environments. And then finally, who gets to decide what stories are told, and who has access to the ability to tell those stories, whether it's augmenting different aspects of reality or just generally who has access to being able to tell their own stories when it comes to these different immersive experiences.
Howard said that one of the things that VR does is that it makes the invisible visible, so you can get an actual experience of your behavior, intelligence, and the actions that you're taking, and so there's lots of different therapeutic applications there. He was really recommending that folks go and look at the history, look at what's been done before, and not just jump in and start making experiences. I think that's the disruptive mindset: to just go start making stuff. But when it comes to medical applications especially, there's a lot of research that's out there, and I think generally there is a lot of information about the different psychological elements that are coming through in these medical contexts. My experience has been that a lot of people that are new to VR are like, okay, this is the new technology, everything has changed. But there are certain aspects of human behavior that have actually stayed the same, and you would potentially get the exact same results as previous research. And so I don't think it's necessarily a valid argument to just completely ignore it and not look at it. It's also impossible to consume everything that has come before, and so there's a trade-off there, a balance of being able to dig in and try to get access to some of the insights from the research. But generally the information is out there, and you can go and take a look and find it. We talked a lot about the larger systemic issues, specifically what's happening here in the United States, but I think also around the world.
Healthcare, I think, is in a peculiar situation here in the United States, where there's no universal health care, it's privatized, and it consumes something like 17 to 18 percent of GDP, and yet at the end of the day the outcomes are actually some of the worst of all the developed nations. Howard was saying that there are certain aspects of COVID-19 that are actually revealing some deeper systemic issues that are happening in the United States, and I'd say that's part of looking at our existing healthcare system, how it is not necessarily working for everybody, and seeing the different inequities that we have. He said the thing that's great about America is that it's really good at innovating, at having small businesses take these risks and having that innovation be driven into the larger consolidation of these big tech institutions. But we aren't necessarily as good at implementing things at deep systemic levels, and so thinking about the collective interest is not as developed. So there's certainly a lot of implications when it comes to the cultural, ethical, and moral context that a lot of this VR technology is being born out of. And the thing that I'll sort of end with here is just these larger concerns around projecting out into the future and imagining what kind of risks we might be facing. I think it's easy to look at social media and some of the different aspects of information warfare and try to extrapolate that out in terms of what experiential warfare might look like. Certainly, I think that you could start to do things at a very fundamental level if you're trying to persuade and control people. You know, Howard's talking about the use of fear, and how you have to really dial back different aspects of the psychological responses that people have within VR.
And he sees, from his perspective, that it's actually very easy to manipulate people within virtual environments in different ways. Now, whether or not that means there's going to be a one-to-one translation into virtual worlds of all the same problems of social media, I'm a little bit more skeptical. Primarily it's because of the dynamics of network effects: something like a video, a tweet, or some sort of image can go around the world instantaneously, and it's very easy for people to experience it. For an immersive experience to go viral, it actually takes a lot of time to go into the experience. It's not like an experience can be instantaneously distributed to everyone on the planet. Now, that is going to get a lot easier once things like WebXR allow people to jump straight into VR experiences, but it also requires a certain mass ubiquity of headsets, of both AR technologies and VR technologies. So I don't think that's necessarily one of the top concerns. In fact, maybe VR environments can actually have more of a decentralizing effect, because people are more concerned about the individual connections and relationships they have within these closed contexts, and it's a little bit less like the public Internet, where things can just spiral out of control with misinformation and fake news. That's all to say, that doesn't mean people won't be able to create deceptive information within these virtual environments. But what is that type of media literacy going to look like when it comes to these virtual worlds? For people who are very sophisticated in being able to deconstruct and know all the different hidden agendas, intentions, and biases that are feeding into different opinions, that's a lot easier to do with written text.
And to a certain degree we can do that with art and films, where there's this whole established critical process where you can start to look at the language of these different mediums and deconstruct them in different ways. I think we're still at the very nascent beginnings of what that starts to look like when it comes to VR: being able to deconstruct what it means to be embodied in different avatars, to be immersed into these different worlds, to have these different narratives that are unfolding, but also the different architectural dynamics and the different ways in which you could subtly influence someone's mood through color and lighting. I mean, this is all happening within film as well, but within VR you're completely immersed within it. It's not like sitting in a controlled environment like a movie theater; you're completely embedded within a world. There are just a lot of things that are new and fresh when it comes to VR. And for me personally, that's a lot of why I'm so interested in looking at immersive stories and these different experiences, trying to find different ways of coming up with what the affordances of VR are and how to start to think and talk about it. But at the end of the day, it's mostly people in the VR industry who are at the most sophisticated level of being able to do that type of analysis. And I think it's going to take perhaps a number of different generations before that type of literacy, when it comes to these immersive worlds, is more widely distributed. So we're kind of in this precarious position where there's an asymmetry of knowledge and power when it comes to the ability to manipulate people in these virtual worlds, whether it's in augmented reality or virtual reality.
Anyway, lots of really great discussions here, and I'm glad that Howard reached out, because I think there is quite a lot of information when it comes to the actual phenomenological experience of VR, the ethics of that, and what's known from a medical perspective. Like I mentioned in this conversation with Howard, there's actually an article called "The Ethics of Realism in Virtual and Augmented Reality." Mel Slater and 13 other co-authors put out the paper on March 3rd, 2020, and it dives into a lot of those other different aspects of the ethics of realism and what we know about the psychological impacts of virtual reality as well. So that's definitely something that's worth diving into, and hopefully I'll have some more conversations with some of those co-authors at some point as well. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
