#200: Redirected Walking Techniques for Virtual Reality

Evan Suma is an assistant professor at the University of Southern California's Institute for Creative Technologies (USC ICT). He was at IEEE VR presenting a poster on "Towards Context-Sensitive Reorientation for Real Walking in Virtual Reality," led by his USC ICT collaborators Timofey Grechkin (postdoc) & Mahdi Azmandian (PhD student).


Evan talks about some of the redirected walking techniques and different visual tricks for getting the most use out of a constrained physical space. For example, here's a 2012 paper from Evan titled "Impossible Spaces: Maximizing Natural Walking in Virtual Environments with Self-Overlapping Architecture." You can read some of his other papers here in his list of publications.

Support the Voices of VR Podcast by becoming a patron on Patreon.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:11.975] Evan Suma: I'm Evan Suma from the USC Institute for Creative Technologies, and we're presenting a poster. This is the work of my postdoc, Timofey Grechkin, and PhD student, Mahdi Azmandian. We're working with redirected walking, which is a perceptually inspired technique that's really solving one of the key fundamental problems for VR interfaces, which is: how do you move around if you don't want to use a joystick? If you want to really walk in the real world, you're going to be limited by whatever size tracking space you have available. If you're using the Valve system, apparently it's 15 by 15 feet. So if I can only walk around in a limited area, can I do something in the virtual world that will let me perceive that I'm walking through a much larger space than I actually am? It turns out that the human perceptual system is quite malleable, and we can introduce very subtle perceptual illusions to rotate you slightly more in the virtual world when you turn your head, to curve you along a bent path, or to slightly scale your movements up or down when you walk. If we do that in an intelligent way, we can get you to essentially walk in circles infinitely without perceiving it, and to go through much larger, potentially infinitely large virtual environments. So we're presenting a poster where we've introduced another technique for how you do that: when someone is about to collide with the boundary of the physical space, we spawn a waypoint, or some sort of event, or have a character in the virtual world call your attention. Something in the virtual world happens that gets you to pay attention to something behind you. And as you walk over to whatever this event is, we then employ redirection so that by the time you turn around and head back toward your original target, it's now within the bounds of your space, whereas previously you would have collided with a wall before reaching it. And we do this in a mathematically optimized way, so it's automatic.
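For readers who want to see the mechanics behind the gains Evan describes, here is a minimal sketch of how rotation, curvature, and translation gains might be applied to a single frame of tracked movement. It is an illustration only; the function name, structure, and numeric values are assumptions, not the USC ICT implementation.

```python
import math

# Illustrative per-frame redirection gains (values are assumptions, not
# thresholds from the USC ICT system).
ROTATION_GAIN = 1.2       # amplify real head turns by 20% in the virtual world
CURVATURE_RADIUS_M = 22   # bend a "straight" virtual walk onto a ~22 m real circle
TRANSLATION_GAIN = 1.4    # virtual meters traveled per real meter walked

def redirect(real_walk_m, real_yaw_rad):
    """Map one frame of real movement to redirected virtual movement.

    real_walk_m  -- real forward distance walked this frame (meters)
    real_yaw_rad -- real head rotation this frame (radians)
    Returns (virtual_distance_m, virtual_yaw_rad).
    """
    # Rotation gain: scale the user's own head turns.
    virtual_yaw = real_yaw_rad * ROTATION_GAIN

    # Curvature gain: inject extra rotation proportional to distance walked,
    # so a straight virtual path corresponds to a real-world arc.
    virtual_yaw += real_walk_m / CURVATURE_RADIUS_M

    # Translation gain: scale how far the user moves virtually per real step.
    virtual_distance = real_walk_m * TRANSLATION_GAIN

    return virtual_distance, virtual_yaw
```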

[00:02:01.388] Kent Bye: And so is there a minimal space that you need in order to effectively do redirected walking then?

[00:02:06.808] Evan Suma: So there's no clear answer, because it really depends on the type of environment you have. If you have an open outdoor space with long linear walks, you need a fairly large space. We do it in about 10 by 10 meters, which is usually large enough. If it turns out that you have an indoor space with lots of turns and hallways and doors to go through, then you can do a lot better. I had one environment that we did in about 15 by 15 feet, where we moved doorways behind your back using a phenomenon called change blindness, and it turns out that as long as your back is turned, people are almost completely blind to it. So you can basically modify the structure of the environment around the user, as long as there's a structure to manipulate, and then you can do it in much smaller spaces.
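A change blindness manipulation like the doorway trick Evan describes hinges on only modifying geometry while it is out of the user's view. Here is a hedged sketch of that visibility check; the field-of-view value, the `doorway` object, and its methods are hypothetical placeholders, not code from the actual system.

```python
import math

FOV_HALF_ANGLE_DEG = 55  # assumed horizontal half field of view of the HMD

def is_out_of_view(user_pos, user_yaw_rad, object_pos):
    """True if object_pos falls outside the user's assumed field of view."""
    dx, dy = object_pos[0] - user_pos[0], object_pos[1] - user_pos[1]
    angle_to_object = math.atan2(dy, dx)
    # Smallest signed angle between the gaze direction and the object direction.
    diff = (angle_to_object - user_yaw_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(math.degrees(diff)) > FOV_HALF_ANGLE_DEG

def maybe_relocate_doorway(user_pos, user_yaw_rad, doorway, new_pose):
    # Only perform the change-blindness swap while the doorway cannot be seen.
    # `doorway` and `new_pose` are hypothetical scene objects for illustration.
    if is_out_of_view(user_pos, user_yaw_rad, doorway.position):
        doorway.move_to(new_pose)
```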

[00:02:47.468] Kent Bye: Yeah, so maybe tell me a bit about some of these heads-up display icons. I guess on one level, there's a break of immersion when you start to throw things up in kind of a heads-up display icon or something, but talk a bit about that trade-off, but also what you're actually showing here in terms of some of these icons.

[00:03:05.502] Evan Suma: Well, what we're trying to show is how redirected walking can be merged into a task. So in this case, all the icons here are indications of what we want the user to perform in the virtual world. We're doing a scouting task in this demo environment where they're instructed to, say, take a picture from a certain perspective or do a virtual panoramic shot. And really, that's to get them to do the motions we want them to do. Now, this is really dependent on the task you want to perform. So if I were in a completely different scenario, I would want to come up with something different that maybe doesn't use these icons. If I'm in an environment with multiple avatars or characters, maybe I want the characters to call your attention instead. So it's really dependent on the individual application. These icons are just demonstrative of how you would potentially merge this into a task.

[00:03:51.583] Kent Bye: Yeah, I guess it's a thing where, you know, there are these safety issues. And I know that Valve has their Chaperone system, where they have kind of like a grid that comes up to warn people that they're about to hit the wall or something. And so is that something you've also looked at, in terms of trying to create systems that warn people, saying, oh, you're about to bump into something? Or is it more about distracting their attention to look over somewhere else, so that when they turn, you can reorient them and readjust so they're not going to bump into anything?

[00:04:18.951] Evan Suma: So I think you always need an emergency failsafe if someone doesn't follow your instructions. But what we find is that in most cases, people will follow your instructions, at least if they know what you want them to do. So in this case, if we provide them with some sort of, you know, character or some sort of event that's calling their attention, generally speaking, they won't ignore it. So you can merge it more seamlessly and then just include a warning or something like that as a failsafe for that 1% of people who decide they want to break the system, and then you need an alarm or something to make sure that they don't hurt themselves.

[00:04:52.930] Kent Bye: And so when you're talking about rotating the environment, what is the threshold in terms of how much you can rotate before they start to perceptibly notice something is actually happening and it may become uncomfortable for them?

[00:05:04.624] Evan Suma: It depends on what your threshold is for. So my colleague Frank Steinicke, who's here at this conference, has done a lot of work on the perception of redirection techniques. And I think the number that he cited was about a 70-foot radius circular tracking area if you just want to get someone to walk along a curved path and perceive that they're walking straight. But that's when he was giving them a task where they looked at different rotations and had to say specifically at what point they noticed, or could discriminate, between the two. I think it's a very different question if you're asking whether people notice when you don't tell them beforehand that you're doing it, and the answer there is actually really high. You can do a lot before people notice, and in fact most people will usually get simsick before they actually notice. So I think the real question that's important is at what point people get simsick, and I don't think we know, because simsickness is so variable between people.
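To make the roughly 70-foot figure concrete: walking along a real circle of radius r while the virtual path stays straight means the system injects about 1/r radians of rotation per meter walked. A quick back-of-the-envelope calculation (the numbers are illustrative, not taken from the cited study):

```python
import math

radius_ft = 70
radius_m = radius_ft * 0.3048   # about 21.3 m

# Rotation injected per meter of real walking needed to keep the user on a
# circle of this radius while the virtual path stays straight.
rad_per_meter = 1.0 / radius_m
deg_per_meter = math.degrees(rad_per_meter)

print(f"~{deg_per_meter:.1f} degrees of injected rotation per meter walked")
# roughly 2.7 degrees per meter at a 70-foot (about 21 m) radius
```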

[00:05:57.332] Kent Bye: Yeah, I guess that's the risk, is that you're kind of moving around the world, and you're maybe subconsciously perceiving it, but you don't realize it, and your body can realize it, and then you're getting sick. And so, yeah, I guess that's the trick with this technique, is that you risk having a large variability in terms of how people actually react to it when you are trying to do these tricks, I guess.

[00:06:15.789] Evan Suma: With the ones that manipulate your motions by introducing some sort of discrepancy between real and virtual, then yes, I would agree you have to be careful; there are going to be individual differences, and you don't want to make people sick. But there are some more geometry-based techniques, like the change blindness technique I mentioned. We also have another one where we generate hallways on the fly, so we can basically dynamically create the geometry for the environment: transitional hallways that kind of loop back in on each other. And it turns out that people don't really notice if a hallway starts looping back in on itself. These ones don't get people sick, because their motions are still exactly what they are virtually; they just don't notice that perhaps their pathway has intersected where they previously walked. So those you can do very, very well; you just have to design the environment to be able to support that, whereas the motion-based techniques you can just plug into an existing model.
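The dynamically generated "transitional hallway" idea can be sketched very simply: keep only the corridor the user can currently see, and pick the next corridor's direction so it stays inside the tracked area, even if it overlaps space that earlier, now-despawned corridors occupied. The constants and the candidate-turn search below are assumptions for illustration, not the technique's actual planner.

```python
import math

TRACKED_HALF_SIZE_M = 5.0   # assume a 10 m x 10 m tracked area centered at the origin
SEGMENT_LENGTH_M = 4.0      # assumed length of each transitional corridor

def pick_next_corridor_heading(exit_xy, current_heading_deg):
    """Choose the junction turn for the next dynamically generated corridor so
    that its far end stays inside the tracked area, even if the new corridor
    overlaps space the user already walked through (the old corridor is
    despawned once it is out of sight)."""
    best_heading, best_margin = None, -math.inf
    for turn in (-90.0, 0.0, 90.0):            # candidate turns at the junction
        heading = (current_heading_deg + turn) % 360.0
        end_x = exit_xy[0] + SEGMENT_LENGTH_M * math.cos(math.radians(heading))
        end_y = exit_xy[1] + SEGMENT_LENGTH_M * math.sin(math.radians(heading))
        # Margin = distance from the corridor's far end to the nearest boundary.
        margin = TRACKED_HALF_SIZE_M - max(abs(end_x), abs(end_y))
        if margin > best_margin:
            best_heading, best_margin = heading, margin
    return best_heading
```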

[00:07:10.333] Kent Bye: Yeah, it kind of reminds me of M.C. Escher, you know, creating these impossible spaces where people are building a mental model in their mind. Has there been research on creating these impossible M.C. Escher spaces and putting people into VR experiences with them? And if so, what happens when they experience these kinds of impossible situations?

[00:07:30.215] Evan Suma: Yeah, so actually I've done a few experiments like that and published a few papers in the last few years at IEEE VR where we've done exactly that. And it turns out that as long as what they can see at any one moment is visually consistent, it's extremely powerful. With the change blindness study, we ran about 77 or 78 people through, and only one person was able to notice, after we did this to them multiple times. And with the space compression techniques, where we're kind of looping hallways in on each other, you can do massive overlap, like 200% or more of the space overlapping, before people even start to notice. So it's very, very powerful.

[00:08:06.963] Kent Bye: Yeah, it seems to me that this is a technique that's probably going to be used a lot with something like Valve's HTC Vive, because if you're doing a walkable 15x15-foot space, that's a pretty tight constraint, and yet it sounds like you can do quite a lot in terms of allowing people to do virtual reality locomotion while actually fully walking around, and then do all these tricks to create environments that allow them to explore. So if people wanted to learn more about how to actually implement some of this, what would you recommend they look at in order to start building some of these environments in VR?

[00:08:42.592] Evan Suma: So I would tell them to just Google redirected walking. You'll find that there are a lot of really cool techniques out there that might give you some ideas. And actually, if they go to our website, which is www.mxrlab.com, we have an open-source portal where we put up Unity scripts and software libraries, and we're actually working on a redirected walking Unity package that we can put out there, where you can basically just drop it into a Unity environment and it will allow you to walk around in it. So I would say start there.
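The package Evan mentions would presumably ship as Unity C# scripts; purely as a language-neutral illustration of what a per-frame redirection controller does, here is a hedged sketch of a simple steer-to-center style policy. The gain limits, names, and sign convention are assumptions, not the MxR Lab implementation.

```python
import math

MAX_ROTATION_GAIN = 1.3       # assumed comfort limit on amplifying head turns
MIN_CURVATURE_RADIUS_M = 7.5  # assumed tightest real-world arc to impose

def steer_to_center_yaw_offset(user_pos, user_yaw_rad, d_walk_m, d_yaw_rad):
    """Extra virtual yaw to apply this frame so the user's real path drifts
    back toward the center of the tracked space (assumed to be the origin).

    user_pos     -- (x, y) position in tracked-space meters
    user_yaw_rad -- current real heading
    d_walk_m     -- distance walked this frame
    d_yaw_rad    -- real head rotation this frame
    """
    # Signed angle from the user's heading to the direction of the room center.
    to_center = math.atan2(-user_pos[1], -user_pos[0])
    err = (to_center - user_yaw_rad + math.pi) % (2 * math.pi) - math.pi

    # Rotation gain contributes while the head turns; curvature gain while walking.
    rot_injection = abs(d_yaw_rad) * (MAX_ROTATION_GAIN - 1.0)
    curv_injection = d_walk_m / MIN_CURVATURE_RADIUS_M

    # Apply the combined injection in whichever direction steers toward center.
    # (How this offset is applied to the virtual camera is an assumption here.)
    return math.copysign(rot_injection + curv_injection, err)
```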

[00:09:12.828] Kent Bye: And in terms of locomotion, what's some of the most exciting or innovative stuff you've seen in terms of dealing with the issue of locomotion in the IEEE VR sort of literature? Because it seems like locomotion is a pretty big open problem in VR, and so just trying to get any sort of guidance in terms of things that you have on your radar that you're kind of tracking.

[00:09:31.442] Evan Suma: Well, redirected walking was definitely the most exciting for me, which is why I chose to start doing it. But the other thing that got me really excited one year was, again, Frank Steinicke and Gerd Bruder. They started doing work with virtual portals and also stereoscopic visual effects that could actually manipulate your perception and allow you to, again, locomote through much larger areas than you can walk physically. And then also Betty Mohler, who's also here at VR, has done a lot of work with avatars that can basically get you to be redirected, or help you locomote through much larger spaces. So I would say those are some of the most exciting pieces of work that have been done in that area.

[00:10:11.878] Kent Bye: And finally, what do you see as the ultimate potential for virtual reality and what it might be able to enable?

[00:10:17.801] Evan Suma: I think that VR is going to fundamentally shift the way that we interact with each other. When HMDs become good enough that they don't make people sick, and the resolution and field of view are good enough that you can actually start to feel truly present and you enjoy being in them, then why wouldn't you want to be in them? Why wouldn't you want to be able to interact with friends and family who are overseas or, you know, far away, or even if they're close by, in some sort of fantastical space together? I think that there's so much that VR will do for society beyond video gaming. We just have to make it enjoyable to use and be inside a virtual world. OK, great.

[00:11:02.066] Kent Bye: Well, thank you so much. Thank you. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash Voices of VR.
