#131: Rob Lindeman on Non-Fatiguing 3D User Interfaces, Questioning long-term engagement with Room-Scale VR, the challenge of Haptics, & Gaming as the Killer app for VR

Rob Lindeman was the chair of the 10th IEEE Symposium on 3D User Interfaces this year, and he's currently the director of the Interactive Media & Game Development Program at Worcester Polytechnic Institute.

Rob believes that the 3D user interfaces often depicted in popular science fiction movies are not a sustainable solution. They may work in short-term situations, but holding your arms up above your waist for long periods of time is very fatiguing. Rob is really interested in researching non-fatiguing user interfaces that can be used in immersive environments.

One of the harder problems in VR locomotion is designing a single travel interface that supports short-term, medium-term, and long-term travel. He talks about some of his research into using multitouch tablets, and using a walking motion with your fingers, to do VR locomotion across all three spans of travel, from short-term to long-term.
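To make that idea concrete, here is a minimal sketch in Python of how such a stance-switching travel interface might be structured. Rob's actual system isn't public code, so all of the names, thresholds, and speed constants here are invented; this is only meant to illustrate mapping two-finger "walk," "Segway," and "snowboard" stances on a touchpad to three ranges of travel.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    x: float    # normalized pad position, 0..1
    y: float
    dx: float   # frame-to-frame motion deltas
    dy: float

def classify_mode(touches):
    """Crude stance classifier for two-finger travel gestures."""
    if len(touches) != 2:
        return "idle"
    a, b = touches
    if a.dy * b.dy < 0:            # fingers sliding in opposite directions:
        return "walk"              # alternating strides, short-range travel
    if abs(a.x - b.x) < 0.3:       # fingers planted close together:
        return "segway"            # lean to cover medium distances
    return "snowboard"             # wide stance: long-range flight

def travel_velocity(mode, touches):
    """Map the active stance to (forward_speed, turn_rate)."""
    if mode == "walk":
        stride = sum(abs(t.dy) for t in touches)
        return 2.0 * stride, 0.0
    if mode in ("segway", "snowboard"):
        lean = sum(t.dy for t in touches) / 2.0
        turn = sum(t.dx for t in touches) / 2.0
        top = 5.0 if mode == "segway" else 30.0   # flying is much faster
        return top * lean, 1.5 * turn
    return 0.0, 0.0

# One frame of input: both fingers leaning forward in a narrow stance.
frame = [Touch(0.45, 0.5, 0.0, 0.2), Touch(0.55, 0.5, 0.0, 0.2)]
print(classify_mode(frame), travel_velocity(classify_mode(frame), frame))
```

Because every stance reads the same touch data, the user can in principle flow from walking to flying without ever switching devices, which is the unification Rob describes.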

The 3DUI symposium is shifting from incremental research topics studied in isolation toward solving real-world problems with a hybrid approach that combines low-level tasks in interesting ways. They're striving for more holistic integrations. And because the graphics from game engines are now so good, his lab has shifted to integrating multi-sensory feedback into immersive experiences.

Rob is actually pretty skeptical about room-scale VR immersive experiences because of what he's seen with the evolution of the Kinect and the Wii. People found it more effective to play the games with smaller, more efficient wrist motions rather than full swings of the arm. Even though the intent was to recreate natural motions, once the novelty wore off, people played with much more efficient motions. Rob says there is a tradeoff between the efficiency of operating in a game environment versus how immersive the experience is. He prefers a very immersive driving experience, but he can't compete with his brother, who uses a more efficient game controller. He hopes room-scale VR takes off, but recommends people look at some of the 3DUI & IEEE VR proceedings to avoid repeating mistakes that researchers have discovered over the years.

The idea behind Effective Virtual Environments is to build a VR system that allows people to do something that they couldn't do before. Rob believes that the killer app for VR is gaming. He sees gaming as really important, and having fun as a good use of your time.

Rob’s research has been about how to support long-term VR experiences in a way that's non-fatiguing. For actions that would be fatiguing over long periods, he suggests designing for bursty behaviors, because alternating effort with resting periods is how we naturally do things in the real world.
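As a toy illustration of that burst-and-rest idea, the sketch below accumulates fatigue while the arms are raised and recovers while they rest, so a game could pace raised-arm actions accordingly. The build rate, recovery rate, and threshold are invented constants, not figures from Rob's research.

```python
class FatigueModel:
    """Toy arm-fatigue accumulator for pacing bursty interactions."""

    def __init__(self, build_rate=1.0, recovery_rate=0.4, limit=10.0):
        self.fatigue = 0.0
        self.build_rate = build_rate        # units/s while arms are raised
        self.recovery_rate = recovery_rate  # units/s recovered while resting
        self.limit = limit

    def update(self, dt, arms_raised):
        if arms_raised:
            self.fatigue = min(self.limit, self.fatigue + self.build_rate * dt)
        else:
            self.fatigue = max(0.0, self.fatigue - self.recovery_rate * dt)

    def should_encourage_rest(self):
        # E.g., the game stops spawning targets that need raised arms.
        return self.fatigue > 0.8 * self.limit

model = FatigueModel()
model.update(dt=9.0, arms_raised=True)   # a long burst of raised-arm play
print(model.should_encourage_rest())     # True: time to offer a rest beat
```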

Haptics covers everything in the sense of touch, like wind on your body, pain, temperature, and pressure and vibration on the skin, as well as our proprioceptive system, which tells us the relative positions of our body parts. The input and output are very tightly coupled in an extremely short feedback loop, which makes haptics difficult. Also, our skin is the largest organ of our body, and its sensitivity varies across different parts of the body.
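That variable sensitivity has a direct design consequence: vibration motors (tactors) packed more tightly than the skin's two-point discrimination threshold can't be told apart. The sketch below illustrates the idea with rough, approximate thresholds; only the roughly four-centimeter torso figure comes from the interview, and the rest are ballpark values that vary by study and by person.

```python
# Approximate two-point discrimination thresholds, in centimeters.
# These vary by study and by person; the ~4 cm torso figure is the one
# quoted in the interview, the others are rough literature values.
TWO_POINT_LIMEN_CM = {
    "fingertip": 0.3,
    "palm": 1.0,
    "forearm": 3.5,
    "torso": 4.0,
    "back": 4.5,
}

def max_distinct_tactors(region, span_cm):
    """Tactors packed closer than the limen can't be told apart."""
    spacing = TWO_POINT_LIMEN_CM[region]
    return max(1, int(span_cm // spacing))

# A 40 cm-wide vest panel on the back supports only ~8 usable columns,
# while the same span on fingertip-grade skin would support far more.
print(max_distinct_tactors("back", 40.0))   # -> 8
```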

There are two types of haptic feedback, force feedback and cutaneous feedback, and doing fully generalized haptics would require an exoskeleton plus a skin-tight suit, which is a pretty crazy proposition. Because a generalized haptic solution is so difficult, most of the successful haptic solutions are customized to a very specific task in a very specific use case. You can also compensate for one sensory cue with another, so it's much better to think about these experiences in a multi-sensory, holistic way.
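One way to picture that cross-sensory compensation is as a dispatcher that spreads a single contact event across whatever feedback channels a given rig actually has. This is only a sketch of the pattern; the channel names and the scaling factor are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Collision:
    depth: float   # penetration depth into the virtual surface, meters

def render_collision(ev, channels):
    """Dispatch one contact event to every sensory channel available.

    `channels` maps channel names to callables taking a 0..1 strength.
    Any subset may be present; missing channels are covered by the rest.
    """
    strength = min(1.0, ev.depth * 50.0)   # arbitrary scaling
    if "force" in channels:
        channels["force"](strength)    # push back or stop the hand
    if "rumble" in channels:
        channels["rumble"](strength)   # cutaneous stand-in for force
    if "audio" in channels:
        channels["audio"](strength)    # a "thud" cue sells the contact
    if "visual" in channels:
        channels["visual"](strength)   # e.g. halt the avatar at the wall

# A rig with no force feedback still conveys the hit through three cues.
render_collision(Collision(depth=0.02),
                 {"rumble": print, "audio": print, "visual": print})
```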

Rob is a fan of Ready Player One, and he's really looking forward to jacking into a world and going places he couldn't go before. He's looking for experiences that change his view or take him to another world. He thinks that entertainment and fun are really important and should be considered first-class citizens in our lives. He's also looking forward to more game developers coming to the IEEE VR & 3DUI conferences in the future.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:12.219] Rob Lindeman: My name is Rob Lindeman. I'm from Worcester Polytechnic Institute in Worcester, Massachusetts. I've been coming here to the VR conference since 1995 or '96. And the main thing we're doing right now is what we call non-fatiguing interfaces. So the idea is, in many of the popular media, you see people holding their hands up, like Minority Report, he's got his hands up and he's moving stuff around. Or in Iron Man, he's got his hands up and he's moving a lot of stuff around. And the thought is that people don't really like to keep their arms up for long periods of time. So one of the things we're looking at is allowing you to, for instance, take multi-touch pads like you'd use on a computer and put them on your thighs, and then you hold your hands in a comfortable place and you're able to interact with these touchpads in a way that you really could do for hours, because you really don't get fatigued. And then you can use multi-touch gestures like you normally would: you can zoom using your normal zoom gesture, or you can rotate things using the rotate gestures that are ubiquitous today. And then we came up with a novel movement approach that allows you to kind of swipe your index and middle fingers like you are running on the touchpad. So your fingers become your legs. As you move them, the camera moves forward in the environment. And that allows you to move short distances. If you want to move a little further, you place your fingers like they're standing on a Segway, and you use a Segway gesture on the touchpad to lean forward, turn, and cover medium distances, let's say you're moving through a neighborhood or something like that. And if you want to travel long distances, then you place your fingers like you're standing on a snowboard, and you can control your speed, and now you can fly. So the idea is you can use this interaction technique to travel long distances, fly through the environment, and you can turn again by using kind of the turning gesture. And we think you can seamlessly switch between these gestures. What's been very difficult in virtual environments is to have one type of travel interface that allows you to do short-term, medium-term, and long-term travel in one kind of unified way. So this is one of the things we're looking at.

[00:02:33.158] Kent Bye: Yeah, and it seems like a lot of the virtual reality input devices, like the Leap Motion, or something like the Razer Hydra and the Sixense STEM controllers, even Valve's Lighthouse, have your full arms engaged with six degrees of freedom. Do you see that as being problematic for doing games in VR, or is what you're doing more for corporate applications, where someone may be doing it for up to six hours a day? Maybe you could talk about some of those different use cases: when you would want something that's fully tracked in six degrees of freedom versus something like a touchpad with limited degrees of freedom.

[00:03:09.490] Rob Lindeman: Right, so it's a great question. So 3DUI, 3D user interaction stuff, has been studied for at least 15 years, and a lot of different things have been tried. And so, you know, I was chair of the 3DUI symposium this year, and one of the things that I put out in the call is that I think we need to shift the type of research we're doing from these, here I know a new way that we can travel, I know a new way that we can select things, to things that are higher level. So you can think about, if I want to accomplish a real task, a training task of some kind, like a military thing, where somebody is going to run through the environment, they're going to have to pick things up, duck behind things, maybe communicate, you know, call in air support, something like this. You're actually going to want a hybrid approach. So what you're going to want is to select something, for instance, using a six-degree-of-freedom pointer, point at, let's say, this building over here, and then you're going to want to enter some more information. And you probably don't want to do that using a six-degree-of-freedom input type of approach. You might want to do something that's more on a tablet that you have in your hand, or it might be something that you use speech for. So a lot of research really kind of focuses on one approach to solving one piece of this puzzle. And I think now what we're starting to see is people trying to solve more real-world tasks by combining all of these low-level tasks in interesting ways, and really focusing on, instead of studying one in isolation, how do I integrate them together and allow a user to understand them all, transition between them all, and for the metaphors to carry? Because if you use one metaphor for, let's say, selecting a building, and another metaphor for traveling to a certain place, it puts a burden on the user; you know, it doesn't give them a unified thing. If you take a player in a multiplayer game, you know, an MMO, and you put them in an environment, it all seamlessly works together. There are ways to go through, select a weapon, to talk to people. And in VR, I think things have been a little bit more isolated in terms of the way the labs are focusing. So what I've pushed this time, and what we're doing in our own lab, is really trying to focus on integrating these low-level tasks together to try and solve real problems. And the other thing is to really look at multi-sensory feedback. So I think that today we're doing visuals really, really well. I think we have very believable environments. I think the audio that we're doing today is really, really good. And even though I'm a graphics person by training, I actually don't do a lot of graphics anymore, because the stuff that's available from game engines is definitely good enough for what I'm trying to do. The audio is good enough. So I'm really interested in these secondary cues, such as wind feedback, or vibrotactile feedback, or smell feedback. You know, in these worlds, you see the grass blowing, you see the leaves blowing in the trees, but you don't actually feel any wind. And so what we're also doing in my lab is producing systems that allow you to generate computer-controlled wind feedback, or smell feedback, and vibrations. If something explodes, you feel the floor shaking, or, if you're wearing a vibrotactile vest, you feel things shaking.
So it's kind of both this multi-sensory thing, which, again, most labs focus on just one piece of, you know, we are a graphics lab, or we're a sound lab. Whereas I think the holistic approach makes more sense, because as humans that's what we are; we really experience the world in a holistic way. So that, combined with kind of this non-fatiguing and integrated user interaction stuff, is really what I think is going to be useful for both gaming systems as well as some more vertical corporate types of environments.

[00:06:43.600] Kent Bye: Yeah, and I think that's the challenge for virtual reality developers, whether they're designing games or educational experiences or other corporate applications: they're going to have to bring everything together into a cohesive experience. And what's striking to me about hearing about some of these non-fatiguing user inputs is that I would imagine they may be less immersive or give less of a sense of presence. So perhaps if you're doing a video game where the intent is to create a sense that you're in another world, and that your arms are fully tracked in another world, then the 6DOF controller may be better than sitting down and using a tablet, where it may not feel as real as using your hands to move these objects. So maybe you could comment on that in terms of, you know, the intent of creating a sense of presence in VR versus actually getting real work done.

[00:07:33.837] Rob Lindeman: So it's a great, great topic, and something I've been thinking about a lot. I'm actually pretty skeptical about the, let's say, realism part of things. And I think I'm standing on firm ground. Think about the Wii. When the Wii came out, it shook everything up, really designed for non-hardcore gamers, people stepping up. Now, when you're playing Wii Tennis, you can swing the tennis racket because you're swinging the controller. And everybody was very excited for about 10 days, at which point they realized, oh, I can just sit on the couch and I don't have to take these big swings. I can just kind of move my wrist, and I can play, and I can still beat my brother at this game that I'm playing. The Kinect is another example. The Kinect came out, and I can't tell you the number of students who said to me, I know what I'm going to do for research now. I'm going to work on the Kinect. And the Kinect is great because it's a cheap device that allows a lot of people to work on spatial types of things, but I would say after the hype wore down, there are not a lot of people actually playing games with the Kinect. I think the Kinect is used in some places, but I wouldn't say it has had the impact people expected. I mean, it's more immersive, it's realistic. I think that having a nice, realistic, immersive type of setup is good for certain types of experiences. But if you talk to a gamer, a gamer wants efficiency. They want immersion to a point, but the immersion is actually in their head. They can create their sense of depth by having an interface that they don't have to think about. Everything is second nature. If my arms are getting tired, believe me, I'm going to know that the interface is in the way. What I really want is an interface that allows me to be as efficient as possible. So I play a lot of driving games. I have a nice seat, I have pedals, I've got the force feedback steering wheel. I love playing driving games with my rig, right? Just because it's great. My brother sits next to me with his PS3 controller, and he beats me every time. And the reason is, I have to turn my steering wheel in order to turn; he's just got to move his thumb. So if you think about efficiency, he literally does beat me every time. But I have a more immersive experience. So for me, I would rather play in the immersive way than play the way he does, because for me, it's about the realism. But I really wonder how many people go to the trouble to actually stand up, do this sort of stuff, really have the immersive, maybe realistic type of experience for a long period of time. Take the Virtuix Omni: again, I'm kind of skeptical about this, because I've seen a lot of these types of locomotion interfaces, and I think for certain segments of the population it will be useful. For a lot of people, I think it's going to be like the treadmill that they buy because they really want to do their exercises, and that they use for a week and then don't end up using very much. I think this is the thing that a lot of people don't think about, and it really hurts me to say this, because I'm such a proponent of VR. I want all this stuff to work. I drank the Kool-Aid a long time ago, and I really want this stuff to work. I just think there are a lot of problems that have to be solved. The nice thing is there are a lot of people, a lot of lay researchers, citizen researchers, doing really cool stuff with this equipment because it's just so cheap.
So I'm really happy that there are a lot of people playing with this stuff now and coming up with really interesting things. That's something I'm really positive about, even though I guess I sound really negative about the outlook. I really want it to happen. I just think people need to look at all the research that we've done before, because there are so many mistakes people are going to make that we already know about; we know this doesn't work, so don't try it. So I urge people to look at the VR proceedings of the last 17 years, look at the 3DUI proceedings of the last 10 years, and see what people have tried, because we've tried a lot of stuff that, trust me, is not going to work over long periods of time. So anyway, that's what I would say. Again, I really hope this stuff takes off. I'm really interested in doing what I can. That's why I'm still in the field. I just think, yeah, we have a lot of work to do, which is good.

[00:11:38.602] Kent Bye: Yeah, I've heard this a lot from the virtual reality community, from developers: they would say something like, be skeptical about research done with equipment that may be old, you know, because there are all these new, cutting-edge head-mounted displays with very low latency, and tracking like Valve's Lighthouse, very low latency and very precise. And so I think the tooling is going to be there, but I'd imagine there are going to be experiences where people do want that sense of immersion, where they're going into another world and they have that sense of presence there. And I think that walking around and using your hands is actually going to achieve that for a lot of people. But it does bring up the point of what the threshold and the boundary are in terms of how long people can do that. And so from your research, and the studies that have been done in the academic community, I'm curious what's driving you to move toward these non-fatiguing interfaces rather than fatiguing ones. What kind of research has been done on the thresholds of what people are willing to bear within these VR experiences?

[00:12:42.774] Rob Lindeman: So you can think of it kind of as effective virtual environments. There's a really good research group at the University of North Carolina at Chapel Hill called EVE, Effective Virtual Environments. And the idea is to not just do cool engineering and cool science, but to figure out, like, take a real chemist and build a system that allows that chemist to see something in a way they couldn't see it before. So you could think about it, and we say this a lot: what's the killer app for VR? Why would I put on all this stuff, or go to this special room, or buy all this expensive equipment? And again, it's not that expensive now. What can a chemist accomplish, or what can I learn as a middle school student, in this type of technology that I can't learn in a desktop environment or on an iPad or something like that? So we've always had this problem of looking for the killer app for VR. And I think gaming is the killer app. And I think many researchers wouldn't like that, right? But why not? I mean, first of all, I think gaming is really important. I think that having fun is a legitimate use of your time. I don't think it's a waste of time, and I think that the market is there. We've seen, you know, NVIDIA, the reason things are so cheap in terms of graphics rendering is because a billion people play games, and that's brought the cost down, right? So trying to make life in a virtual world effective over long periods of time is kind of why I've chosen the direction that I'm going in. It really is that people get tired, people's eyes get tired. There's a lot of research on eye strain and adaptation in the virtual environment. So, you know, humans are really good because they adapt to things very well. But what happens when you've been playing your favorite game for six hours and then you come out and get in your car, right? Are you a different person? Are your reflexes different? And I think that's kind of a big issue to look at. Back in the mid-90s, when I had people go through my user studies, I had to have them sit for 45 minutes after exposing them for 45 minutes to my virtual environment, because that was the safe amount of time. So I'd have them fill out questionnaires or whatever, chat with them, give them some saltines, you know, because that helps with cybersickness. We had to do that because we didn't want them to go out and kill themselves on the drive home. And today, you know, you're going to have a lot of people, even with the warning that comes up when you put on the DK2, who are just going to go drive and do other things, I think. So that's a usability question: what do we do? You know, texting and driving is bad. VR and driving? I don't know.

[00:15:20.617] Kent Bye: Well, in terms of fatigue, what are the thresholds and the boundaries in terms of, you know, how long people can actually spend in a VR environment with their hands up above their waist doing actions? Was there a limit in terms of what you found? What's a comfortable amount of time for someone to spend in VR?

[00:15:37.627] Rob Lindeman: Yeah, so it's going to vary person to person. I actually don't know, if you have your hands up, what the threshold is, or how long you should do that. What I would say is: think of burstiness. I think having bursty interactions, where I pick my hands up to do something, you know, fire off a couple of rounds, or grab an object, or select something, manipulate something, but keep my arms down most of the time, for example. The other thing is standing. In a lot of the environments that we see in research, everybody's standing. So is that how people are mainly going to use VR? We had a poster here where we use a sitting interface. With the sitting interface, you can lean from side to side, and the chair actually bends. This is work that was done by Steffi Beckhaus that we're building on top of. She built this; it's called the ChairIO, pronounced "Cheerio." And it uses a chair that's on a spring, so it actually bends side to side and forward and back, and you put a sensor on it, and now you can, you know, lean forward or move forward on the chair in order to move forward in the environment. You could do that all day, right? As opposed to keeping your arms up, pointing where you're going. And if you're at a desktop, you're limited if you use a keyboard and mouse, because you can't physically turn around and look in a different direction. So you need something else, and that something else is kind of what we're exploring. So in my lab, we're trying to add haptic feedback, force feedback, touch feedback, and wind feedback for a user who's using this chair interface. Again, non-fatiguing. You can sit for hours on this chair and use kind of the arms-at-your-side interaction techniques in the same way you would if you were standing. So psychologists know what these thresholds are; I just don't have them off the top of my head. But it's very clear that if you just let people do things in a bursty way, it's more akin to what you would do in the real world. So giving some support for resting, I think, is a really important thing.
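For readers who want the flavor of such a lean-to-travel mapping, here is a small hypothetical sketch. The ChairIO itself is Steffi Beckhaus's hardware, not this code; the dead zone, response curve, and speed caps below are all invented for illustration.

```python
import math

DEAD_ZONE = 0.05   # ignore ordinary postural sway
MAX_SPEED = 3.0    # m/s forward, an invented cap
MAX_TURN = 2.0     # rad/s, also invented

def chair_to_motion(lean_x, lean_y):
    """Map chair tilt (each axis normalized to -1..1) to travel.

    Forward/back lean drives speed; sideways lean drives turning.
    The dead zone keeps normal sitting from moving the camera, and
    the quadratic response gives fine control near rest.
    """
    def shape(v):
        if abs(v) < DEAD_ZONE:
            return 0.0
        s = (abs(v) - DEAD_ZONE) / (1.0 - DEAD_ZONE)
        return math.copysign(s * s, v)

    return MAX_SPEED * shape(lean_y), MAX_TURN * shape(lean_x)

print(chair_to_motion(0.0, 0.4))   # gentle forward lean: a slow glide
```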

[00:17:28.156] Kent Bye: I see, that makes sense. And there seem to be different classifications of haptic cues: information cues, environment cues, object cues. Maybe you could talk about the different types of cues in haptics, and why you decided to focus on wind and the rumble on the ground.

[00:17:44.817] Rob Lindeman: Right. So I've been working in haptics for more than 10 years now. Haptics is a really interesting sense, I think, because there are a lot of things to it. First of all, it's kind of this umbrella over everything else. So if you think about things that are in the haptic domain, so again, the sense of touch, it's things like the wind that you feel on your body, because, well, it's definitely not a sound. It's a little bit of sound, but it's not a visual thing. And it makes the hairs move on your arms or your head, or it makes your clothes move. So it's a haptic thing. It's a sense of touch. Pain is also part of the haptic domain. Temperature is part of the haptic domain. Pressure on your skin. Vibration on your skin. Proprioception, which is the ability, so, the drunk driving test, for example: close your eyes and touch your fingers in front of you. The reason you can do that is because you have haptic sensors in your joints and your ligaments that can tell you where your hands are at any given time, and those are impaired when you're drunk. So this proprioceptive sense is also under haptics. The ability to put your hand in a certain place without looking at it, you can do that because of the haptic sense. So that's the first thing: there are all these types of sensations that are lumped under the haptic domain. The second thing is, haptics is the only sense where the input and output are very tightly coupled. So if you reach out and pick up a mug, you're doing something, and you're also getting feedback by touching something. If you pick it up, you're feeling the weight of it, and you're also exerting your force on it. No other sense works that way. So there's this very tight coupling, which means that the feedback loop for sensing and actuating in haptics is very short. You need to be very fast in terms of sensing the user and then giving feedback as quickly as possible. The other thing is that the skin is the largest organ in the body, and the sensation of the skin is different in different parts of the body. Your fingertips have a very dense, very highly evolved set of sensors in the finger pads, because this thing we call evolution has evolved to allow us to sense texture, pressure, and vibration, very, very fine-grained things. On your torso and on your back, we have a very, very low density of sensors. For example: there's a test called the two-point limen test, where you close your eyes, they touch you with two points, and they vary the distance between the two points. And what they found is that if I touch you with two points that are closer together than four centimeters on your torso, you can't tell whether it's one or two points. Whereas on your forearm or on the palm of your hand, that distance is much, much smaller. So not only does the haptic sense have this huge surface area that you have to feed, but it's also variable in different parts of the body. And the likelihood that you're going to bump into something, or that you're going to use part of your skin to interact with an environment, also varies. Your hands are going to get a lot of interaction. Your back is not going to get a lot of interaction. And so with the types of feedback that are typically used in haptics, there are usually two kinds, and the first is force feedback.
So, in other words, like an exoskeleton, where you push and it pushes back, or it can stop your motion, so that if you reach out and touch a virtual wall, it will stop your hand from moving. The other kind of haptic feedback is cutaneous, so it's really on the skin. That happens when you feel the texture of something. You don't feel that through your ligaments; the weight of something, that you feel through your ligaments. But if you're touching the surface property of something, or something's hot or cold, all that sensing is done in the skin and not in your joints. So the actuators, the devices that you use to stimulate the user, are different for cutaneous or skin-level sensations versus something that's an exoskeleton type of thing. So if you want to do a full-body haptic type of thing, you're going to have to give them an exoskeleton plus a skin-tight suit with actuators in it that are going to stimulate all the other things that you need to do in haptics. Just think about that for a second. That's a crazy thing. And if you do that, you can't feel the real world anymore, because you have a suit on. So not only are you going to all the trouble of building this stuff, but now you can't even pick up your coffee anymore, because you have all this stuff all over your body. So this is why haptics is really, really hard. And so the solutions have typically been very task-specific. If you're doing a laparoscopic surgery simulator, where you hold grippers and you go in through, you know, incisions in the body, we can simulate that really well. But you don't want to use the same controller to play a first-person shooter. So you get really good haptic feedback for this type of laparoscopic surgery, but it's very specific to that task. It's not a general-purpose solution like we have for visuals or audio. So a lot of the haptic work that's been most successful has not been trying to solve the general haptics problem of cutaneous and kinesthetic, or force feedback, types of things, but has been really focused on a very specific vertical market. Those are the things that have been most successful. So the vibration in your DualShock controller works for the type of thing you're trying to say, oh, you crashed your car into something, you feel the vibration. It works for that, but it's not going to work for other types of sensations that you want to feed back.
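A side note on the "very short feedback loop" Rob mentions: haptic rendering loops are conventionally run at roughly 1 kHz, an order of magnitude faster than an HMD's visual refresh, because stable, convincing force rendering needs a much tighter sense-then-actuate cycle than graphics. Below is a minimal single-threaded sketch of that pattern; the virtual-wall spring model and the callback names are invented for illustration.

```python
import time

HAPTIC_HZ = 1000   # haptic loops conventionally run near 1 kHz
STIFFNESS = 200.0  # N/m, toy virtual-wall spring constant (invented)

def haptic_loop(read_position, render_force, ticks=1000):
    """Fixed-rate sense->actuate loop: the tight coupling Rob describes.

    read_position() -> float   penetration into a virtual wall, meters
    render_force(f)            command the actuator, called every tick
    """
    period = 1.0 / HAPTIC_HZ
    for _ in range(ticks):
        start = time.perf_counter()
        depth = read_position()
        # Toy spring model: push back proportionally to penetration.
        render_force(-STIFFNESS * max(0.0, depth))
        # Sleep off the rest of the 1 ms budget to hold the rate steady.
        time.sleep(max(0.0, period - (time.perf_counter() - start)))

# Stub usage: a hand resting 5 mm inside the wall, actuator discarded.
haptic_loop(lambda: 0.005, lambda f: None, ticks=10)
```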

[00:23:22.809] Kent Bye: Yeah, it's such a broad field, and it makes sense. You know, I've found that even a simple rumble can go a long way. The visual system seems able to compensate: even if the feedback isn't one-to-one with what it would actually feel like, as long as there's at least something there giving some sort of feedback for what I'm seeing, I've personally found it has a huge impact on creating a sense of immersion and presence.

[00:23:47.308] Rob Lindeman: Yes, I think that's a good point. You know, the other reason why our lab is really a multi-sensory lab is because you can compensate for shortcomings in one sensory modality with another. So, for instance, if you don't have wide-field-of-view visuals, you can use sound to indicate that something is off screen. Or if you don't get the collision detection or the force feedback exactly right, you can give a visual cue that simulates some of the force, or make a sound, ah, I bumped into something, right? So if you combine a rumble with the sound, with the visual of your avatar stopping or whatever, all those things together are going to be much more effective than if you focus on one of the sensory modalities individually. Which, again, I think is really important, because life is a multi-sensory experience, and VR should be a multi-sensory experience too.

[00:24:39.759] Kent Bye: Yeah, and because you're the chair of the 3D UI committee this year, can you talk about some of the larger categories in terms of navigation and some of the different sort of sections and clusters of different types of talks and some of the things that you were really looking for to kind of bring into this two-day conference here?

[00:24:57.382] Rob Lindeman: So, this is the 10th year that we've done the 3D User Interaction Symposium, and what I have seen is this kind of shift, as I talked about, to more integrated, higher-level tasks. We still have some lower-level types of studies being done, you know, selection, the way you select objects, the way you move things around, some navigation and travel techniques. But we're starting to see a little bit more work on combining these together in interesting ways. That said, there are still a lot of, you know, the standard problems, the same problems that we've been working on. Tracking is always an issue, right? How do you allow somebody to look around, or move an object around, or scan the environment and give somebody a sense of what's around them in the near field, track your hands in some way? A lot of the technology is available for that, but every technology has limitations. If you have a Leap Motion on the front of your head-mounted display, it's good until your hands are outside the field of view of the Leap Motion. And then you say, well, just keep your hands in the field of view, but then you're constraining the user's movement. And so, just like with every type of technology, what you'll see is people coming up with interesting ways to compensate for these limitations. Adding multiple Kinects in one environment, for example: there was a good paper on how to scan and track a person, scan their whole body and construct a representation of them and send it to a remote location, so that the person is both tracked in terms of their skeletal movement and you also have the surface of their body sent over. So this kind of virtual presence was an interesting thing to see this year.

[00:26:38.919] Kent Bye: And what type of experiences do you want to have in virtual reality?

[00:26:43.071] Rob Lindeman: So I'm a big reader and follower of the popular media. Two summers ago, I guess, I read Ready Player One, which one of my friends had suggested. And there's been some controversy over the book for various reasons. But, you know, I'm an 80s guy, so all the 80s references were really great. And, you know, I really would like to be able to jack into a world and be able to experience places that I couldn't normally go; you know, ride a broomstick, or hang glide, do some of the things that I think would be really cool. Go bungee jumping. We had a workshop a couple of years ago where we tried to think of the grand challenge in virtual reality, and eating spaghetti came up as one of the interesting ones, because you've got to get all the graphics right, you've got to get the smell right, and the taste right, the slurpage right, and it's got to make you full in the end. Now, I never want to stop traveling physically to real places, because I think there's just too much great stuff to see. But I think that providing an escape for people is a really good thing. We all have times where we escape, either in movies or books or games, and providing that really rich experience is valuable. The other thing is, I'm also director of the game development program at my university. That's been really interesting, because immersive gaming has started to come up as a really interesting, different type of mechanic in gaming. There are real differences between making a first-person shooter for the desktop and a first-person shooter for a Rift, for example. And I think the other nice thing about having these technologies in the hands of a lot of people is, give it to an artist and amazing stuff comes out. You know, we're technologists; we shouldn't be building bad art, or any art. So I think this kind of interactive, immersive movie mashup is going to be, I've seen some really interesting stuff, and I think there's going to be more and more really interesting stuff coming. So I'm really interested in it. And again, it's kind of an escape thing. Teach me something, or change my point of view. That's the stuff I really want to see: artists pushing some great boundaries with the technology that we have.

[00:28:52.018] Kent Bye: And so what do you think is the ultimate potential of virtual reality then?

[00:28:58.192] Rob Lindeman: Hmm, maybe I'm the wrong person to ask. You know, I always have a lot to say, but I think we'll find out. I think it's really hard to predict such a thing. It's going to be social, it could be educational, it's definitely going to be entertainment-related. And again, I don't apologize at all for that. I think that entertainment is very important, and too often it's kind of pushed to the side. Entertainment, fun, is actually a really important and real first-class thing that we should be doing. And so I think it's perfectly valid to say that entertainment could be the thing that really makes the big impression. So, let's game.

[00:29:39.688] Kent Bye: Is there anything else that's left unsaid that you'd like to say?

[00:29:44.772] Rob Lindeman: We're trying to make a big tent. I would really like to see more gaming-type companies and gaming-type people at these types of conferences. There was a push a couple of years ago to do that, and it hasn't really played out yet, but I think there's a lot more crossover now. My PhD student just got hired at Microsoft, working at Microsoft Game Studios, and he's a fantastic researcher. So I think we're going to see a really nice coming together, because there are a lot of really bright people working at a lot of these companies, and the companies are really open. So I'm really excited about that aspect of it.

[00:30:23.220] Kent Bye: Okay, great. Well, thank you.

[00:30:24.841] Rob Lindeman: Sure. Thanks a lot.
