Himanshu Chaturvedi wanted to investigate the impact of the uncanny valley within training simulations that involve interactions with virtual humans. The specific training scenario was a simulation to help nurses identify the signs of rapid deterioration in patients. He investigated avatar styles ranging from realistic to stylized to black-and-white sketches, and found that the stylized and sketch avatars were more effective at conveying the signs of rapid deterioration.
Become a Patron! Support The Voices of VR Podcast Patreon
Theme music: “Fatality” by Tigoolio
Subscribe to the Voices of VR podcast.
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR podcast.
[00:00:11.998] Himanshu Chaturvedi: My name is Himanshu. I'm a graduate student at Clemson University, and I worked with Dr. Sabarish Babu. What we're demoing here is an application that we created to train nurses to pick up signs of rapid deterioration in patients. In general wards, nurses typically have to tend to four to five patients, and they have to visit each of these patients at least four to five times a day. That's a lot of work, and a lot of times, if one of those patients is rapidly deteriorating, they can miss it. We actually had statistics that more than 250,000 people in the US have passed away in medical wards in the past five years because these signs were not caught in time. So we collaborated with St. Francis Hospital in Greenville and developed this application to train nurses to pick up these subtle signs and symptoms of rapid deterioration so that they can take timely action. The study that I have done using this application looks at three different levels of photorealism of the virtual patient. It took us a lot of time to create a highly realistic-looking virtual patient with very detailed textures and shaders applied to it, so I was wondering whether it was worth all the effort and time to develop such a character, or whether we could get away with, say, a cartoon-looking character or a sketch-looking character. So we developed these two additional conditions and ran our study as a between-subjects experiment: a third of the participants saw the realistic condition, the next third the cartoon condition, and the last third the sketch condition. We got pretty interesting results from this study.
We did not find any significant main effect of condition, but at one of the time steps in our application (the last time step, where the patient is the most severely distressed and sick) we found that the negative emotions imparted to the participant were significantly lower in the realistic condition than in the cartoon condition, which is contrary to what you would expect. People by default think that the more realistic a character is, the more convincing it will be. But as we found here, the cartoon-looking characters had a significantly higher impact than the realistic characters. This supports the uncanny valley effect, which describes how users' expectations are lowered when they look at cartoon characters. You do not expect cartoons to behave realistically, so even if there are, say, glitches in the behavior of the virtual character, you do not mind, and your emotional bond with the virtual character continues. Whereas if you try to shoot for realism, people tend to pick up on these nitty-gritty details, and if they do not like them, they get an emotional disconnect from the virtual patient. That's probably the reason behind our results, we think. The aim behind this study was to create guidelines for people who develop interpersonal skill training applications, so that we could go ahead and tell them, say, you've got to shoot for realism, or for a cartoon character, and so on. Currently we are trying to collect more data so that we have more results and more support for our theory.
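The between-subjects analysis described above, comparing affect scores across the three avatar conditions, can be sketched as a hand-rolled one-way ANOVA. The scores below are hypothetical negative-affect ratings invented for illustration, not the study's actual data.

```python
# One-way between-subjects ANOVA F statistic, computed from scratch.
# A significant F would indicate that the condition means differ.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way between-subjects ANOVA."""
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    k = len(groups)
    grand_mean = sum(all_scores) / n_total

    # Between-groups sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: variation of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = k - 1
    df_within = n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical negative-affect scores for the realistic / cartoon / sketch groups
realistic = [2.1, 2.4, 1.9, 2.0]
cartoon = [3.5, 3.8, 3.2, 3.6]
sketch = [3.0, 2.8, 3.1, 2.9]
f_stat = one_way_anova_f(realistic, cartoon, sketch)
```

In practice the F statistic would be compared against the F distribution (e.g. via `scipy.stats.f_oneway`) to obtain a p-value; with three conditions, a significant omnibus test is then followed by pairwise comparisons such as the realistic-versus-cartoon contrast reported here.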
[00:03:39.413] Kent Bye: Yeah, and I think that's a phenomenon of the uncanny valley, that spectrum between low fidelity, middle fidelity, and high fidelity, where if you're in that middle-fidelity realm and it's not absolutely perfect, then people just reject it more. And it sounds like you're validating that by having nurses come in and try to detect these signs of rapid deterioration. So how are you actually rendering out these different signs of rapid deterioration? What you have is a more realistic version, then a cartoon, which is a little more stylized, and then the sketch, which is very similar to the cartoon but more like a pencil drawing in black and white, with no color at all. Maybe you could talk about how you're rendering those, and the difference between the cartoon and the sketch.
[00:04:25.069] Himanshu Chaturvedi: Sure. We collaborated with St. Francis Hospital, and we collected data about patients who had actually undergone rapid deterioration there: what their skin color was like, how they looked, how often they were coughing. We collected all this real-life data and incorporated it into our application, so it's a completely data-driven application. If you watch time step by time step, there are very subtle cues that we keep introducing in the patient. From time step one to time step two, the skin gets a little ashen, a little pale. He starts coughing a little more rapidly. You can see distress in the way he's sitting on the bed; he cannot lie down. In our application, you can also ask questions of the virtual patient, and as the time steps increase, you see that he is not responding to you that well and is kind of annoyed and grumpy. These are the symptoms that we have tried to capture in this application, and they were all vetted by the medical experts we were working with. Coming to the difference between cartoon and sketch: the cartoon condition uses a technique called cel shading, which is basically a flat, two-color shader, and the sketch condition uses a technique called cross-hatching. We developed this shader ourselves, and it came out pretty good, luckily. Earlier, we thought that we'd just have one non-photorealistic condition. But when we went through the literature, we found that both of these techniques have been widely used in NPR applications. In movies, you see a lot of cartoon-looking characters, and when you look at NPR images, there's a lot of hatching. So we thought, why not extend our continuum and look at these three very different conditions and gather as much data as we can.
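The two NPR techniques described here can be sketched on the CPU for illustration; in a real application both would run as fragment shaders. The function names, thresholds, and band counts below are illustrative assumptions, not the study's actual shader code.

```python
# Cel shading quantizes the diffuse lighting term into a small number of
# flat bands (two here, matching the "flat, two-color shader" described).
# Cross-hatching maps darker luminance to more overlapping hatch strokes.

def cel_shade(diffuse, threshold=0.5, lit=1.0, shadow=0.4):
    """Flat two-tone (cel) shading: quantize diffuse light into two bands."""
    return lit if diffuse >= threshold else shadow

def hatch_level(luminance, levels=4):
    """Cross-hatching: return how many hatch layers (0..levels) to draw
    for a surface luminance in [0, 1]; darker surfaces get more strokes."""
    luminance = max(0.0, min(1.0, luminance))
    return min(levels, int((1.0 - luminance) * (levels + 1)))
```

A fragment shader version would compute `diffuse` as the dot product of the surface normal and light direction, then either snap it to a band (cel) or sample one of several pre-made hatch textures (cross-hatching).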
[00:06:19.843] Kent Bye: And in terms of the experimental results, what was the difference between the cartoon versus the sketch? Was one more effective than the other?
[00:06:25.809] Himanshu Chaturvedi: We did not have any significant differences, but if you look at the raw data, the cartoon condition had more effect than the sketch condition.
[00:06:35.978] Kent Bye: OK, yeah, I would expect that, because it's a little bit more color and fits in with the scene a little bit than making something that's like black and white, which to me just doesn't seem as plausible, I guess.
[00:06:45.126] Himanshu Chaturvedi: Yeah. One of the limitations of our current study was that the NPR techniques we used were applied only to the virtual patient and not to the entire environment; as you can see, the environment is still shaded realistically. That is because we wanted to focus on the virtual patient itself. When you create different conditions, you try to control the other variables as much as possible, so we did not want effects from other environment objects to seep into our results. We put our shaders only on the virtual character, which some people might think gives flawed results, but we also collected subjective data from the participants in our study. We asked them what they thought about the environment and the virtual character, and none of the 36 participants mentioned noticing the mismatch between the character and the environment. So I believe that should not be a problem at all.
[00:07:44.024] Kent Bye: And so what's next with this type of research, or what other types of projects are you working on in VR?
[00:07:49.067] Himanshu Chaturvedi: Currently, I'm trying to collect more data on this study itself. What I'm presenting here is the positive and negative affect data, and we are also looking at the differential emotions survey data that we collected. We also used an electrodermal activity sensor, which measures your skin conductance and is a very good indicator of your emotional state. So we are compiling that data and trying to see if we can get more interesting results.
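One simple way to reduce an electrodermal activity trace to a feature is to count skin conductance responses (SCRs): upward deflections that rise above an amplitude threshold. This is a toy sketch with hypothetical values; real EDA pipelines typically separate the slow tonic level from the phasic responses first.

```python
# Count skin conductance responses in a trace (values in microsiemens):
# each response is a rise of at least `threshold` above the preceding
# local minimum. Signal and threshold here are illustrative assumptions.

def count_scrs(samples, threshold=0.05):
    """Count upward deflections whose amplitude exceeds `threshold`."""
    count = 0
    local_min = samples[0]
    in_response = False
    for cur in samples[1:]:
        if cur < local_min:
            local_min = cur          # track a new baseline minimum
            in_response = False
        elif not in_response and cur - local_min >= threshold:
            count += 1               # rise crossed threshold: one response
            in_response = True
        elif in_response and cur < local_min + threshold:
            local_min = cur          # fell back below the band: reset baseline
            in_response = False
    return count

# A trace with two distinct rises above a 0.05 microsiemens threshold
trace = [1.0, 1.0, 1.02, 1.10, 1.12, 1.00, 1.01, 1.09, 1.11]
```

More responses, or larger amplitudes, during the severe time steps would corroborate the self-reported negative affect described above.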
[00:08:18.860] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential for what virtual reality could provide to these types of training simulations?
[00:08:26.601] Himanshu Chaturvedi: In the context of our study, our aim was to provide guidelines to people who develop such interpersonal skill training applications. If we have strong results, then we can go ahead and say that if you want to develop an application to train nurses, or anything that requires an emotional connection with a virtual patient, then you probably do not have to shoot for realism, or you should shoot for realism, one way or the other. I think that will be really helpful for a lot of people who develop such medical applications.
[00:08:57.764] Kent Bye: Okay, great. Well, thank you. Thanks. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash Voices of VR.