#394: Distance Underestimation in Virtual Environments

Academic VR researchers have shown that “people typically underestimate egocentric distances in immersive virtual environments,” sometimes by up to 20%. This could have huge implications for architectural visualization, but also for anyone making aesthetic judgments based on the proportion and scale represented within a virtual environment. I had a chance to catch up with University of Minnesota professor Victoria Interrante at the IEEE VR conference to talk about her 12 years of research into some of these perceptual and cognitive effects within virtual environments. We talk about some of the causes, the role of embodiment in distance estimation, photorealistic vs. stylized environments, and the impact of having virtual humans within the environment.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to The Voices of VR Podcast. Have you ever watched a movie star in a movie and kind of had this mental image of them? And then you meet them in real life and you have this thought of like, oh, wow, you're a lot shorter than I thought, or you're a lot bigger, or the proportions are just different than you expected from looking at them on this 2D screen. And so there's something about our ability to estimate distance and proportion that doesn't really translate that well into 2D projections. But yet, there's something still a little bit different about our ability to estimate distances within virtual environments that we'll be talking about today. And so one researcher that's been looking at this for over 12 years is Victoria Interrante, who is a professor at the University of Minnesota. She works with architectural visualizations and deals with the fact that there's this distance underestimation effect within VR that can be up to 20%, which actually can make a huge difference when you're making different aesthetic judgments within simulated architectural environments. So we'll be talking about this distance underestimation effect, what may be behind it, and what you can do about it, on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Unity. Unity is the lingua franca of immersive technologies. You can write it once in Unity and ensure that you have the best performance on all the different augmented and virtual reality platforms. Over 90% of the virtual reality applications that have been released so far use Unity. And I just wanted to send a personal thank you to Unity for helping to enable this virtual reality revolution. And to learn more information, be sure to check out Unity3D.com.
And so this interview with Victoria happened at the IEEE VR academic conference that happened in Greenville, South Carolina at the end of March. So with that, let's go ahead and dive right in.

[00:02:21.041] Victoria Interrante: Hi, I'm Vicky Interrante. I'm a professor at the University of Minnesota. I've been working in VR for about 15 years. I work really closely with an architecture professor where our research focuses on how to harness the potential of virtual environments for architectural applications, in particular architectural design and design reviews.

[00:02:43.070] Kent Bye: Yeah, and it seems like for architecture, that's a pretty clear win in terms of the affordances of VR for architecture. So why is VR so compelling for architecture?

[00:02:54.495] Victoria Interrante: Great question. So what Lee says is that the current tools for designing create a relationship between the designer and the model where the model is regarded as an object, and a small object, because the traditional tools are either pencil and paper, or physical materials in a small model, or the computer screen. And in all those cases, the designer is large, the model is small; the designer is here, the model is there. And so what that does is encourage people to think about designing a structure that's beautiful to look at from a bird's-eye perspective, and not so much starting from the very beginning thinking about a building as a space to be inhabited. And so what it leads to is all these designs where the enjoyment of the occupants is sort of an afterthought, instead of starting from how it's going to feel to have soaring ceilings or big windows or whatever. So in terms of design education, he sees it as a really integral part of the education process to have VR as one of the many tools that people learn to use when they design.

[00:04:02.755] Kent Bye: I've also heard people talk about how one of VR's strengths is that it's able to represent proportion and scale in a way that we really can't do in the 2D medium. But I've also heard different research here at IEEE VR about distance estimation, and how there's actually a disconnect between our actual ability to estimate distances and what we're able to see in VR. I would imagine that could cause some problems: if you have some sort of mismatch between your distance estimation and VR, and you're doing architecture, that could have huge implications.

[00:04:35.286] Victoria Interrante: Yeah, exactly, and that's what we've been working on for probably the past 12 years: how to try to help people have a better understanding, a realistic impression, of distance in virtual environments. So, for example, most of the research today says that people in virtual reality tend to perceive things as being about 20% closer to them than they really are. So that means that if you're in a room and you're trying to decide, do I want the 8-foot ceilings or the 9-foot ceilings, and you try to do that in a VR headset, the 9-foot ceilings are going to look more like 8-foot ceilings. And so that's a real problem for being able to make those types of design decisions in virtual reality. So in our research, we've been looking at basically two different methods for helping people to perceive distances more accurately. And the one we've been focusing on the most is giving yourself an embodiment in the virtual environment. So we've got a small RGB depth sensor that we've attached to the front of our head-mounted display, so you can look down and you can see your own self in the context of a virtual building, and we find that that type of representation does help people perceive distances more accurately.

[00:05:37.866] Kent Bye: So if you're a disembodied ghost, is that one of the big problems of distance estimation that you really do need to have your body to kind of calibrate against?

[00:05:45.962] Victoria Interrante: Well, you know, it kind of depends on who you talk to, right? Because the evidence is a little bit mixed on that. But personally, my hypothesis is that when you're a disembodied view, you don't really know where the camera is. You could be a camera that's floating in space. You could be seeing the world through, like, a window. You don't really know that that's necessarily your eyes. The other thing is that I think it's also a question of what you perceive as the affordances for action of the virtual environment. So, for example, if I was to look at that painting over there, and that's not a great example, but if I was to say, you know, what's the distance from me to point A in that painting, or me to point B in that painting? I could probably say something that sounded right, but it would be an abstract judgment. Whereas if I was actually in a real environment and I had to close my eyes and walk to a point, it would reveal a much more sort of intrinsic understanding of the space. And so I think what happens is, when you drop people into a virtual environment, you've got two computer screens in front of your eyes, and people don't really understand that this is an environment just like the real world that they can interact in. Let me back up a little bit. So in one of our very first experiences with VR, we were trying to look at this distance underestimation question. And what we did is we built a totally photorealistic version of our lab, I mean, down to the tiniest detail. My colleague Lee Anderson, an architecture professor, measured the whole thing and built it up in SketchUp. He took photographs of all of the surfaces and texture-mapped the photographs onto the walls. And when you put on your head-mounted display, people would say things like, oh, I can't see you, I can't see myself, as if the HMD was some sort of pair of magic glasses that made people invisible. And it was just such a compelling illusion.
I mean, you really felt like that environment was real. And in that very, very limited case, we found that people didn't underestimate distances in that environment. Then we took that same environment, moved our tracking system to a different room, brought people into that other place, and had them put on the headset, and they saw a room that they'd never been to before. They would underestimate distances. So it didn't have anything to do with the fact that the environment was totally photorealistic. It had to do with people's illusion that the head mount was a pair of magic glasses that was allowing them to see the real world in a different way. And so I think that when you put your own body in the virtual environment, that can also kind of give the sense of, okay, you know, I'm seeing the real world, I'm seeing myself, and I'm in this, like, other space. But somehow I think it evokes more affordances for interaction with that environment than if you're just, like, a floating camera seeing this world.

[00:08:27.490] Kent Bye: Well, I'd imagine there'd also be a dimension of how accurately the camera position matches your actual height and eye line. So for example, if I'm looking at this picture over here on the wall, I could, like, imagine myself lying down one and a half times and say, oh, it's about nine feet away, because I'm six feet tall. But if the camera was at four feet, or even at ten feet, then my own sense of distance estimation is going to be impacted by how far I am off the ground. And so I'm not sure if that's something that you've looked at, or if it has some sort of impact as well.

[00:09:04.054] Victoria Interrante: Yeah, it's not something we've looked at in our lab, but I know Betty Mohler has done some research with Markus Leyrer looking at the effect of eye height in head-mounted displays. And Dennis Proffitt's group from the University of Virginia also did some research in that area. And yes, I got to try this out in Betty's lab. And what's interesting is that when you're shorter, sometimes you feel like the environment has somehow increased in size. But when you're taller, you just feel like you're floating, for some reason. You don't necessarily feel like the environment is smaller. I don't know if that's just because in that particular demo I was a disembodied view and I wasn't actually walking. You'd expect that if you were a giant in a small environment, one step would take you much farther. And if you're tiny in a much larger environment, then one step would take you a shorter distance. We've done some related research in our lab. We've implemented something called seven-league boots, where you put on the boots and then every step takes you seven steps' worth of distance. It's kind of a surreal experience, but it really helps, because the problem is our lab is only 30 feet long and we want people to be able to walk down a hallway that's 60 feet, and so you need to give them some sort of augmentation. And we really like the seven-league boots. It's actually a little bit more sophisticated than just scaling up the gain from the tracking system, because what we do is we make the assumption that you're looking in the direction that you're going. And so we break down that three-dimensional vector into the component in the direction of travel and the other two orthogonal directions, and we only scale you in the direction of travel. That's really important, because if you just scale globally, then you've got these slight up-and-down, side-to-side motions of your head when you walk.
We really don't want to scale those up too, because then it just leads to a really strange feeling and people get cybersick.
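The locomotion-gain decomposition Interrante describes can be sketched in a few lines of vector math. This is an illustrative sketch rather than her group's actual implementation: it assumes the direction of travel is the horizontal projection of the gaze vector, and it amplifies only the displacement component along that direction, leaving the vertical bob and lateral sway of natural walking untouched.

```python
import numpy as np

def seven_league_step(delta, gaze, gain=7.0):
    """Scale a tracked head displacement only along the direction of travel.

    delta: real head displacement this frame (x, y, z), with y pointing up
    gaze:  current gaze direction; its horizontal projection is assumed
           to be the direction of travel
    gain:  amplification applied to the forward component only
    """
    delta = np.asarray(delta, dtype=float)
    travel = np.array([gaze[0], 0.0, gaze[2]])  # project gaze onto floor plane
    norm = np.linalg.norm(travel)
    if norm < 1e-6:            # looking straight up/down: no travel direction
        return delta
    travel /= norm
    forward = np.dot(delta, travel) * travel    # component along travel
    residual = delta - forward                  # vertical bob + lateral sway
    return gain * forward + residual            # amplify forward motion only
```

With this rule, a one-meter step forward becomes seven meters of virtual travel, while the small up-and-down and side-to-side head motions that would otherwise cause cybersickness pass through at their real-world scale.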

[00:10:50.113] Kent Bye: Yeah, I know with the Vive they're going to have some pretty good calibration of the location of the head-mounted display relative to the ground, because there is this calibration process. And, you know, I don't know how precise it is, but it feels real and it feels good, like you can reach down and the floor is where you expect it. And when you have a mismatch there, especially in a room-scale environment where you're walking around, it can be really disconcerting if your body is getting chopped off or if you're higher than you should be. When I was trying to do a demo of Bullet Train on the Oculus Rift in a room-scale setup, the cameras were off, the horizon line was off, and I wasn't exactly oriented to the ground. To me, it felt like it was going to cause some motion sickness. I can imagine a time where, with the laser-tracked HTC Vive actually accounting for your height, it's no longer something that's going to have to be set within the program. And so I guess I'm also kind of wondering, a larger question is, that aside, and that's going to be potentially addressed, what do you see as some of the big causes of this misestimation? Like, what is happening?

[00:11:55.417] Victoria Interrante: Oh yeah, that's a really good question. Like I was saying before, I think, in my view, the main cause of misestimation is a lack of a sense of presence in the virtual environment. That you don't really understand the affordances for action in the environment. It's a little bit of a new experience and a little bit puzzling, and you don't know, like, if I do something, how are my visuals going to react to my actions? And so people have done some research where they show that if you give people a chance to sort of acclimate to the virtual environment, the distance judgments tend to get better. Now, the question is, you know, is that just recalibration? And if so, would the decisions you make in VR be the same as the decisions you make in the real world? And I think that's kind of still an open question. If your perceptions are not right, what if somebody puts on the head mount and then makes a snap judgment, like, no, that bedroom is too small for my child? I mean, they might not pace it out. And so I don't think you can necessarily count on recalibration. I think you need to look to other solutions. But embodiment is one. The other thing is, I don't know if you saw my talk at the virtual humans workshop. So the other thing we're doing is we're adding virtual humans to architectural environments as entourage elements. And so we're looking at things like, if I add a totally photorealistic 3D model of somebody that I know to a virtual environment, can they help give it a sense of scale and also create affordances for action? Like, if I watch a totally photorealistic replica of a person walking around in the virtual environment, does that then make the affordances for my interaction with that environment clearer?

[00:13:35.360] Kent Bye: Yeah, there were all sorts of different talks at the workshop on virtual humans and crowd simulation that was happening here before IEEE VR, and I think that was one of the first gatherings of that sort, and I did a number of interviews with different presenters there. And I guess I'm wondering about using virtual crowd simulation within architectural models, if that's something that you're trying to integrate into the design process, so that the designer could see how people actually flow through a space and see if there are any barriers, or how to actually design the space to optimize specific things relative to how people are trying to move through the area.

[00:14:10.880] Victoria Interrante: Yeah, exactly. Now, I think some of that work is already being done in 2D. What we're trying to do is bring it all into the third dimension, and that's where things get a lot more complicated, because in a lot of the simulation that's done in 2D, everybody's represented as a point, and in fact, a directionless point. And so when you start wanting to take that point and instantiate it as an avatar, well, you have to start paying attention to facing. Then, if you're in a virtual environment and it's populated by a bunch of virtual humans, you don't want those virtual humans just walking right through you as if you weren't there. So they need to have enough intelligence to be able to do dynamic collision avoidance. That's a really tough research problem. And you also don't want them just sort of walking around aimlessly. We did that; I had a student come and do an implementation, which was good, but all the virtual humans sort of looked like zombies. And the reason they looked like zombies is that they were aimlessly wandering, and people in the real world don't aimlessly wander. They have a purpose. They go from point A to point B. And even if you're just touring, like, a museum, you would think that people are aimlessly wandering, but they really aren't. They're kind of going from picture to picture. They're stopping and talking to each other. And so what we need to do is research looking at how, in touristed sites or hospitals or schools or whatever it is that we want to populate with virtual humans, real humans interact in that space, and make sure that our virtual agents behave in the same way, but also make sure that our virtual agents will interact with us in the way that we expect. So if I'm walking down the hallway and a virtual agent is coming up, you know, if I go to the right, I would expect him to go to the left so we don't bump into each other.
So we've got some research planned there where we look at what are the cues that people use to signal to each other these avoidance maneuvers and how can we program them into virtual agents so that they behave plausibly. Because I think in architecture, one of the main problems is that the aesthetic sensibilities are very, very well developed. And so they're not interested in doing anything that will compromise the aesthetics that they've worked so hard to do. So the bar is really high for adding virtual humans because they have to either be really, really realistic or they have to be aesthetic and plausible. And kind of cartoon characters are, yeah, I mean, I don't see a whole lot of enthusiasm for like cheesy computer graphics people in architectural environments.
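The right-of-way convention Interrante mentions, where each walker yields to one side to avoid a collision, can be sketched as a minimal reciprocal-avoidance rule. This is an illustrative toy, not the group's planned system: each agent predicts its closest approach to an oncoming agent and, if that falls inside an invented comfort radius, adds a sideways steering component toward its own right.

```python
import numpy as np

def avoidance_steer(pos, vel, other_pos, other_vel,
                    comfort=0.8, horizon=3.0, strength=0.6):
    """Toy reciprocal avoidance in the 2D floor plane.

    If the predicted closest approach to the other agent within `horizon`
    seconds falls inside the `comfort` radius (meters), return a steering
    vector pointing to this agent's own right; otherwise return zero.
    All thresholds here are invented for illustration.
    """
    rel_pos = np.asarray(other_pos, float) - np.asarray(pos, float)
    rel_vel = np.asarray(other_vel, float) - np.asarray(vel, float)
    speed2 = np.dot(rel_vel, rel_vel)
    # time of closest approach, clamped to [0, horizon]
    t = 0.0 if speed2 < 1e-9 else np.clip(
        -np.dot(rel_pos, rel_vel) / speed2, 0.0, horizon)
    closest = rel_pos + rel_vel * t
    if np.linalg.norm(closest) >= comfort:
        return np.zeros(2)                   # no conflict predicted
    right = np.array([vel[1], -vel[0]])      # 90 degrees clockwise of heading
    n = np.linalg.norm(right)
    return strength * right / n if n > 1e-9 else np.zeros(2)
```

If both agents in a head-on encounter apply this rule, each steps to its own right, which mirrors the everyday convention she describes; real systems like reciprocal velocity obstacles do something considerably more sophisticated, but the predict-then-yield structure is the same.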

[00:16:32.333] Kent Bye: Well, you're kind of battling the uncanny valley, and so you have to deal with the fact that if they look photorealistic, we're going to expect them to have all these realistic behaviors, social cues, and ways of walking around. You know, in talking to different people who are simulating these virtual crowds, it seems like there are a couple of different models. One was the agent-based model, which is kind of bottom-up, going from an individual's intent or goal and what they're doing in a large crowd, and extrapolating the crowd behavior from a set of simple heuristics. And then there are other kinds of crowd behaviors, I think, for different flows that happen in maybe extreme situations, like evacuations. So I'm just kind of wondering: if you were walking down this hall right here at IEEE VR, there'd be people looking at different posters, they'd be talking to each other, and from an agent-based model, you know, you may be able to get some of that flavor. But I think there'd also be a set of probabilities: this percentage of people are going to be talking to each other, this percentage of people are going to be doing this activity. And you could start to characterize the behavior of crowds at different locations, like a museum, or a college or university, or a conference hall, and start to use some sort of combination of both an agent-based model and a kind of top-down model with more sets of probabilities. And so, have you started to look at that in terms of different ways of characterizing and describing how people behave in different contexts?

[00:17:55.341] Victoria Interrante: That's a really good question. Now, this is where I have to plead a little bit of ignorance, because that's my colleague Stephen Guy's area. There are three of us, actually, in that collaboration on virtual crowds, and Stephen is the crowd animation expert; I'm more the distance perception expert. So I think I'd have to defer to Stephen for the details on how he would implement that.

[00:18:14.277] Kent Bye: Okay. Well, what are some of the biggest open questions that are driving your research forward, then?

[00:18:19.920] Victoria Interrante: Oh, you know, Lee and I are really interested in this question of social VR. So you've got six people, say they're all from the Minneapolis City Council. They want to all simultaneously evaluate the suitability of a particular building design. So you've got all sorts of stakeholders, and they're all really important people. They all come into the lab and six of them put on head-mounted displays. You may have six different ones, and they're untethered; they can walk around. But then the question is, how do we represent the other people so that you can interact with them? If I'm talking to you, I can see you, so I know where you are, so I look at you when I talk to you. And if we were in a conversation with, say, four or five other people, I would want to be able to turn to them and look at them. So if I'm in a shared virtual environment, yes, I've got the audio cues, so I sort of know where they are, but it would be nice to see them represented somehow, so that I don't worry that I might, say, bump into them as I'm walking around. Our space is a 75-foot by 75-foot space, and if you have no representation of the other people, that's kind of awkward. But then the question is, what sort of representation do you want? What's the minimal representation that will give us the functionality that we need? So we're looking at things like, what if I wear a camera on my head-mounted display that lets me see a video image of people? Maybe it's an RGB depth camera, so when they're close, I see them. When they're a little bit farther away, maybe I just see, like, a little box with their name in it, so that I know where they are, but they're too far away to really talk to.
So we have plans to look at different types of representations of people, so that we can facilitate interaction when you've got multiple people all wearing head-mounted displays in a shared environment. If we want VR to really take off, I think we have to consider the social aspects of VR. Like for kids playing, it's the fun of being with your friends. You don't want to put on a head-mounted display and be all by yourself. And so how do you represent yourself, and how do you represent the people that you're with, and do that in a convincing way that works, where the technology doesn't get in the way of what it is that you're trying to do?
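The distance-dependent representation she describes, live depth-camera imagery up close and a floating name label farther out, is essentially a level-of-detail switch applied to social fidelity. A minimal sketch of that selection logic follows; the tier names and distance thresholds are invented for illustration, not taken from her lab's system.

```python
import math

# Hypothetical fidelity tiers for co-located users in a shared tracked space;
# the thresholds (in meters) are invented for illustration.
TIERS = [
    (2.0, "rgbd_video"),    # close enough to converse: show depth-camera video
    (10.0, "name_box"),     # visible but out of earshot: floating name label
    (math.inf, "hidden"),   # beyond label range: rely on spatial audio cues
]

def representation_for(my_pos, their_pos):
    """Pick how to render another tracked user based on distance to them."""
    dist = math.dist(my_pos, their_pos)
    for limit, tier in TIERS:
        if dist < limit:
            return tier
    return "hidden"
```

The interesting research question in the interview is precisely where those thresholds should sit and what the minimal tier needs to contain for conversation and collision avoidance to still work.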

[00:20:24.658] Kent Bye: So are you looking at social presence at all then?

[00:20:28.358] Victoria Interrante: Yeah, so again, this is really preliminary work. But yeah, we've got a number of things that we're going to look at in terms of how easily and smoothly you can have conversations or negotiations with people depending on their type of representation.

[00:20:44.182] Kent Bye: Yeah, I'm very curious about this concept of social presence and kind of definitions and what people think about it and how they measure it. So how do you first understand the idea of social presence or define it? And then how do you measure it?

[00:20:57.225] Victoria Interrante: Yeah, that's a really good question. So for us, I think the gold standard is comparing against live, face-to-face interaction. So we have some experiments with architecture students. Right now, what they do for design reviews is you'll have the design either pinned up as drawings or up on a computer monitor, and people are talking face-to-face about the design. If everybody is immersed in the three-dimensional model, there might be other types of references, and we want to kind of compare: what sort of things are people talking about? Are there more immersive references? Like if I was saying, oh, this thing over here, you might not do that when you're just talking about a model that's on a screen. And then we can look at things like, well, it's tough in architecture, right? Because it's not a controlled psychological experiment, and it's a little bit, you know, qualitative. But we can just look at how well the design review progresses, the quality of the feedback people get, the quality of the comments, the quality of the interaction, and see which one of these technologies facilitates it. Because we are, in the end, really mostly interested in making VR work for architects.

[00:22:06.559] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality, and what it might be able to enable?

[00:22:13.034] Victoria Interrante: Oh, I'm so excited about VR. I mean, now that the cost has really gone down, I think this is going to be in every little three-person architectural design firm. I mean, there's no reason not to adopt VR technology now, both for the designer, you know, in thinking about the design, and for getting the client's feedback about the space. I think as the costs come down, it's just going to be much, much more plausible. And I think it's going to lead to better building designs.

[00:22:41.108] Kent Bye: Yeah, the way that you describe the building that you've been working in, it sounds like it wasn't designed for humans. So it sounds like with this kind of conversation and dialogue, you could actually get the reactions of the people that are going to be tenants in the building before the building is built.

[00:22:54.842] Victoria Interrante: And informed reactions, because sometimes it's hard for a client just looking at a set of drawings to really imagine what the space is going to feel like. But by putting on the headset and being there, you can look up at the ceiling and you can say, yes, this ceiling height feels cozy. I like it. Or this ceiling height feels claustrophobic. I don't like it. And those are really personal decisions. And it's hard for the architect to make for the client. And this way, the client can make the decision for themselves in an informed way.

[00:23:20.831] Kent Bye: OK, great. Well, thank you so much.

[00:23:23.812] Victoria Interrante: Thank you.

[00:23:25.465] Kent Bye: And so that was Victoria Interrante. She's a professor at the University of Minnesota and has been researching cognition and perception within virtual environments. So there are a number of different takeaways from this podcast. First of all, this distance underestimation effect, I think, is going to be pretty important for anybody that's doing some type of photorealistic distance representation, whether that's architectural visualization or even something like IKEA, where you're trying to estimate how different pieces of furniture may look within physical environments. Now, if it is true that the body is connected to this distance estimation effect, I think perhaps the HTC Vive may be a technology that already does a pretty good job of setting the camera height to be pretty realistic relative to the floor. This is one difference between the HTC Vive and the Oculus Rift: with the Oculus Rift, you're usually sitting down, and so it's up to the developer to set the height of the player. With the HTC Vive, you're calibrating the system relative to the floor, and so the camera height is going to vary depending on how tall the person actually is in real life. And so, as we move forward and start to get more accurate body representations (at first we just have the hand-tracked controllers, but eventually we'll be able to actually track the feet), I think we'll start to invoke what is referred to in the academic community as the virtual body ownership illusion. Having accurately tracked limbs (and I think having both elbows and knees is going to be important for that as well), with a body representation in VR that matches your height and kind of matches your expectations of what you see your body as, may actually be a good calibration aid for you. And I talked to another researcher at IEEE VR who referred to this as a constant process of recalibration.
So it's not so much that we're just calibrating; it's that we're constantly recalibrating. Because our visual field dominates so much and can override what all of our other senses are telling us, we kind of rely upon our sight and our vision to recalibrate. And so when we go into different virtual environments, if there's something that's off from what we expect, then our minds sometimes just take some time to recalibrate. And so it sounds like there's also a little bit of this: if you're in the room and have a physical expectation of what things are, you're kind of, in some ways, calibrating yourself to that space. And so what Victoria said they found is that when they take somebody who's actually in that room and put them in a virtual version of it, they've already kind of calibrated themselves to the distances, and there's less of a distance underestimation effect. You start to have this feeling of just putting on a magical pair of glasses where you're invisible, which is a description that I think is really awesome: that feeling of kind of going from a real space into a virtual space, and then all of a sudden you're a ghost. Some people can really enjoy that feeling of being disembodied, but for other people it can be a little disorienting, in that it takes away our ability to recalibrate and also just makes us not believe the situation as much. It's almost like the effect of going into a movie theater, sitting down, and going through the ritual of watching all the trailers, and then you kind of get into this storytelling mode that helps you suspend your disbelief.
There's something about VR where, when you're in there and you want to have this sense of embodied presence, having a body can actually really help that suspension of disbelief; there's something about your mind that just starts to accept your body as being there. If you don't have a body in the virtual environment, then it can start to break that suspension of disbelief a little bit and kind of break presence in different ways, which could actually be helpful if you're trying to do a narrative experience and trying to really focus on putting you in a place rather than on what you can do in that place. I think another reaction that I had from listening to this interview again, as well as from talking to different academics at IEEE VR, is that there is this question as to what is the role of academic research in VR. And I think this is a perfect example of some of the research that's happening within academia that could actually be applied by game developers and other creators of modern consumer VR experiences. Because if this underestimation effect does hold true, then for people who are doing architectural visualizations or other types of experiences where distance estimation actually is really critical to the main value proposition of the experience, it's something to take into consideration and see how to address. But just to even be aware of it, because if you're not aware of this effect, then it could have huge implications for whatever product you're creating. But I think in some ways, there's going to be innovation coming from both Hollywood entertainment and the game development industry. For specific things like social VR, I think the consumer market is probably going to have a lot more innovation, with people actually experimenting and doing rapid iterations in that space much faster than academia can move.
Academic researchers have had a long time to build lots of different experiences and to come up with different studies. In some cases, though, the fidelity of both the experiences and the research is going to be eclipsed by the consumer game development market. But there's still a lot to be learned from the decades of research that have happened in academia. The common perception in the consumer VR market seems to be, "Well, the technology is completely new, so everything that's happening in academia is completely irrelevant," which I think is extremely short-sighted. This is actually a conversation that was happening between GDC and IEEE VR, and I have some other interviews that will be airing later with academics who were at both GDC and IEEE VR, talking about what academics can teach game developers and what the academic community can learn from game developers. This is an ongoing dialogue, and my hope is that with the Voices of VR podcast I can start to bridge the gap between game developers and academics, just by having attended both conferences and being able to share interviews with both communities. We'll be exploring that a little bit more in future episodes of the Voices of VR podcast. Just a couple of other quick thoughts about this idea of simulating crowds within virtual environments. One of the things that Victoria had said in another context was that her environment and her building were really designed to be aesthetically pleasing first, without really taking into consideration how humans were going to actually experience that environment. Doing things like simulating virtual humans within these environments can actually help designers, at a higher level, think through the different ways that people actually flow through a space or use it. I have some other interviews as well diving more into those different components.
But this concept of dynamic collision avoidance within crowds, and the question of what cues we take for avoidance maneuvers, are subtle things that matter once you start to have humans within virtual environments: you want them to actually react to you if you are physically moving through a space and locomoting in some way. And even if you're stationary and other people are moving around you, you can get an uncanny valley effect. The uncanny valley, again, is essentially about expected behavior: the more something moves from the likeness of a robot toward the likeness of a human, the more we expect to see the full breadth of human social cues, and if we don't, there's this drop-off, the uncanny valley, where we start to feel grossed out and creeped out, as if they're zombies and disembodied cartoonish characters. So I can sympathize with architects who are hesitant to put virtual humans into these architectural visualizations, because if they look like cartoonish zombies that give you this feeling of disgust, then that feeling of disgust could subconsciously be projected onto the aesthetics of the building being presented. The thing that is helpful about virtual humans within these environments is that they can give you a sense of calibration. You know your own height, and if you're familiar with other people and their heights, then having people that you know in these spaces can help you estimate and contextualize the overall space. But it sounds like making realistic human movement is still a bit of an open problem. The uncanny valley is usually associated with the aesthetics of social cues, but this is about how virtual humans actually physically move through a space. And it was really interesting to hear that humans don't aimlessly wander; they move with purpose.
If you've seen the indie horror film It Follows, where there's a kind of zombie character walking aimlessly, directly forward toward the protagonist, it does feel creepy, because nobody actually moves that way. Figuring out how virtual humans move through virtual spaces was the subject of a workshop that happened at IEEE VR, and I have a couple of other interviews about that as well that will be coming up later. The final point that I wanted to make is about a concept that really fascinates me about VR: going into virtual environments, we have to reconstruct everything from scratch. Take something like how humans move through spaces; this is something that we can start to break down into different models and recreate. And we start to learn more about our own humanity by going into virtual reality and having to recreate different components of it. So the thing that I think is really interesting about studying VR is that it's another lens for studying human nature and human experience. That's all my takeaways from this interview. And yeah, I'm actually going to be going on an epic two-week trip here. I'll be going to Austin and speaking at an illustration conference. Then I'll be covering the International Joint Conference on Artificial Intelligence, doing a lot of interviews with AI researchers for a new podcast that I'm going to be starting up at some point called the Voices of AI podcast. You can follow some early reactions at @VoicesOfAI on Twitter. Then I'll be moderating a panel at the Casual Connect conference in San Francisco, and after that I'll be going to SIGGRAPH and VRLA, so there's a lot of travel coming up for me over the next three or four weeks.
And if you do enjoy the podcast and would like to help support it in any way, one thing you can do is contribute to my Patreon, which would help with all the different travel expenses that I have and help keep this podcast going. You can do that at patreon.com/voicesofvr.