Frank Steinicke is a professor for Human-Computer Interaction at the Department of Informatics at the University of Hamburg. His research into VR strives to understand the limitations of human perceptual, cognitive, and motor abilities in order to inform 3D user interaction within computer-mediated realities.
Frank is the outgoing chairman of the 3D User Interface (3DUI) symposium, which takes place just before the IEEE VR conference each year. He talks about 3DUI, human perception, and some of his research using multi-touch screens in conjunction with depth cameras to create less fatiguing user interactions within virtual environments.
Become a Patron! Support The Voices of VR Podcast Patreon
Theme music: “Fatality” by Tigoolio
Subscribe to the Voices of VR podcast.
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast.
[00:00:12.017] Frank Steinicke: I'm Frank Steinicke from the University of Hamburg. I'm a professor there for Human-Computer Interaction. This year I'm the outgoing chair of the 3DUI Symposium. Usually it's organized by three chairs: one is the incoming, one is the main chair, and this year I'm the outgoing chair. That means this will be my last duty for the 3DUI Symposium, at least for the next couple of years.
[00:00:31.932] Kent Bye: Great. And so maybe you could talk a bit about your own personal research or how you first got into virtual reality.
[00:00:38.473] Frank Steinicke: So actually, my first interest in VR basically grew when I was at Disneyland in Paris. I saw a 3D stereoscopic movie of Michael Jackson. And I remember, when I was maybe 12 years old, I was sitting there in my chair, and Michael Jackson was basically floating in front of me. And I tried to reach him and grab him. And I was thinking about how nice it would be if I could do some interaction now, like changing the scale or something like that. But then for a couple of years I studied mathematics and computer science, not really related to virtual reality. Then I had my focus during my diploma thesis on mathematics and visualization, and then I switched to interactive visualization, then to interactive 3D visualization, and finally I ended up with interactive virtual environments. And now the focus is really on using perceptually inspired natural user interfaces: getting an understanding of the entire human information process, starting with perception, action, cognition, and motor processes, and how we can use the limitations of the human to improve this interaction process, but also using VR to study the perceptual limitations of the human.
[00:01:44.359] Kent Bye: Yeah, to me, VR is really fascinating because it is this cross-section between psychology, ourselves, our human mind, and how our brains are wired, and technology all coming together. And so it sounds like you, by doing these types of 3D UI, you are really looking at that cross-section of psychology and technology. So what kind of insights have you been able to draw from perception and then how that feeds into how the technology is developed or designed then?
[00:02:12.145] Frank Steinicke: Currently, we still have a lot of limitations in the technology. For example, the most prominent gap we have is missing haptic feedback. As soon as you start to interact with virtual objects, you usually don't get any haptic feedback, as long as you don't have any specific haptic technology for that. My group has done some studies in terms of perceptual illusions: can we induce some kind of illusion that you actually touch something, even if you don't do so? And we did that, for example, with a lot of multi-touch surfaces. So we displayed things stereoscopically above the surface, but gave the users the illusion that they actually touched the object, although they basically just touched the surface a little bit behind the object. So this was a nice approach, combining psychology knowledge in order to give the user a better illusion and a better experience in the virtual environment.
[00:02:57.107] Kent Bye: Yeah, it seems like the visual field is very dominant, and yet, you know, there's other parts of our perceptions that are not as accurate, so they can be tricked or fooled in that way. And so, maybe you could talk a bit about more what type of actual haptic feedback you were feeding in, in a way that wasn't necessarily directly one-to-one correlated to these virtual objects.
[00:03:15.150] Frank Steinicke: What we actually use is what is called passive haptic feedback. So the idea is that we do not really use haptic devices for that, but just physical objects which are already there, and we use them as a kind of proxy object to represent the haptic feedback that you would receive if you touched a corresponding virtual object. We have also recently developed 3D-printed prototype devices in which we use the vibration technology that is used in cell phones, which then really gives you a kind of frequency vibration. Currently we explore how much you can give users the illusion of touching different textures even if it's always physically the same texture: you touch a multi-touch surface, but we induce additional vibration to the finger while you touch it. We would like to see whether you really can have the impression of touching grass or water, in particular if it's combined with other senses like the visual or auditory sense.
[00:04:05.465] Kent Bye: Yeah and within 3D UI there seems to be this trade-off between using your natural hands and natural gestures versus using something with a physical button and I'm curious from your own perspective some of the trade-offs that you see from going with one low fidelity versus the higher fidelity.
[00:04:20.847] Frank Steinicke: Yeah, so usually when we try to develop new techniques in our group, we try to make them as natural as possible. That means that we would not like to give the users haptic gloves or something like that, or any other 3D input devices, but just let them use their real hands. But on the other hand, we also try to omit having these floating objects in space which you have to touch, and which you often see in science fiction movies. From my perspective, I doubt that this would be the correct way to interact, since it is certainly not natural to touch some floating object in space; even in the real world we almost don't have any floating objects. So a lot of the interaction techniques that we do are somehow grounded in the natural interaction paradigm: I use my hands rather than any input devices, but use natural support that we know from the real world, like having a table on which you can at least lay your elbows down, which also reduces the fatigue that you get when you interact, and these kinds of things.
[00:05:13.319] Kent Bye: Yeah, it does seem like fatigue is an issue with doing natural gestures and maybe you could talk a bit about some of the research in that area in terms of if you're doing something like a leap motion with your hands up in the air, like what are people's tolerances in terms of how long they're able to do that or some of the trade-offs in terms of strain.
[00:05:29.107] Frank Steinicke: Yeah, so usually you cannot do that for longer than one or two minutes. So this is really why we do not use these kinds of interfaces for our work. Whenever we have something floating in space, we really make sure that the users are able to put their elbows on a desk, so really in a comfortable position. And we have also done some experiments which actually showed that this will also increase efficiency in terms of interaction performance. Of course we also have a lot of demos in which we use a Kinect and you have some mid-air gestures. You can use them for a very short, limited amount of time, but definitely not for precise interactions, which just require a longer time. So my personal note on that is: as long as you just require it for five seconds, then it's fine, but if you want to do longer work, it's really not good to have your arms in the air.
[00:06:12.830] Kent Bye: And so what about the, you know, it seems like if you're using something that's just on the flat table, you may be only getting, like, three degrees of freedom if you're using, like, a multi-touch sensor or something like that. Or maybe talk about that in terms of, like, it seems like when you have your hands up in the air, you're able to get that full six degrees of freedom, but then if you have them resting on a table, you may not be able to manipulate objects on all those six degrees. Or maybe you are. Maybe you could talk a bit about that.
[00:06:37.148] Frank Steinicke: Actually, we combine both. We have a stereoscopic multi-touch table in one of our setups. In addition, we use a depth camera on top of the multi-touch surface. We can combine the touch interaction on the surface with the mid-air interaction above the surface. We did a lot of experiments in which we tried to investigate at which display depth of the virtual objects one technique is more advantageous than the other. We found that as long as you leave your objects quite close to the touch surface, within a range of 15 centimeters or so, you are much more efficient if you use the touch support for all your interactions. If the objects are displayed with more negative parallax, meaning that they're hovering higher above the table, then at that point you should switch and actually use your fingers in mid-air via the additional depth cameras.
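The switching rule described above can be sketched very simply. This is a minimal, hypothetical illustration, not the group's actual system: only the ~15 cm threshold comes from the interview; the function and constant names are made up.

```python
# Route input by display depth: targets near the surface go to 2D touch,
# targets hovering higher above the table go to depth-camera hand tracking.
TOUCH_DEPTH_THRESHOLD_M = 0.15  # ~15 cm, the approximate value from the interview

def choose_modality(height_above_surface_m: float) -> str:
    """Pick the more efficient input modality for a target's display depth."""
    if height_above_surface_m <= TOUCH_DEPTH_THRESHOLD_M:
        return "touch"    # select via the multi-touch surface
    return "mid-air"      # select via mid-air gestures above the surface

print(choose_modality(0.05))  # touch
print(choose_modality(0.30))  # mid-air
```

In a real setup the threshold would presumably be tuned per display and per task, but the core idea is exactly this one comparison.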
[00:07:25.453] Kent Bye: Yeah, and in terms of measuring the efficiency or, you know, how well a certain technique performs, what are some of the specific metrics and techniques that you use in terms of evaluating and comparing and contrasting these different techniques?
[00:07:38.876] Frank Steinicke: The typical setup that we use, for example for a selection task, is based on Fitts' law experiments. That means you give a predefined set of targets that the user has to select. They can be displayed on the touch surface, for example, but they can also be displayed somewhere in 3D space, and there is a specific standard that you can use just for selecting these. We then measure, for example, the time, the number of errors, and the accuracy, and you can also combine these metrics into a kind of throughput; the higher the throughput gets, the better the technique.
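The throughput measure Frank mentions is commonly computed from Fitts' law using the Shannon formulation of the index of difficulty. A minimal sketch with made-up trial data (the exact protocol and corrections used in his group's studies are not stated in the interview):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(trials) -> float:
    """Mean throughput in bits/s over (distance, width, movement_time_s) trials."""
    return sum(index_of_difficulty(d, w) / t for d, w, t in trials) / len(trials)

# Hypothetical selection trials: target distance, target width (same units),
# and the measured movement time in seconds.
trials = [(0.30, 0.03, 1.2), (0.20, 0.02, 1.1), (0.40, 0.04, 1.3)]
print(round(throughput(trials), 2))  # -> 2.9 (bits per second)
```

Comparing two techniques then reduces to running the same target set with each and comparing the resulting throughput values, which folds speed and task difficulty into one number.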
[00:08:08.708] Kent Bye: I see. And so when you're comparing doing virtual 3D interactions versus doing real interactions, it seems like in some contexts the real interactions may be better, and then in other contexts it may be that the virtual reality interactions may be more efficient or better. So maybe you could talk a bit about what is that line between when it should be a virtual training simulation versus a real training simulation.
[00:08:30.459] Frank Steinicke: I mean, that's a good question, right? This also goes in the direction of the killer application for virtual reality environments. I think currently we are still at a stage where the technology does not allow us to do everything really efficiently, so we have to trick a lot so that the virtual environment is really beneficial over a corresponding real-world task. Of course, on the other hand, in the virtual environment you can do all the things that you can't do in the real world, right? Simulate things which would be dangerous in the real world, without any danger, in the virtual environment. And this is of course a great promise, for example for rehabilitation, psychology studies, phobia studies, and things like that.
[00:09:06.636] Kent Bye: Yeah, and in terms of some of the things that you've seen over the last three years, what's some of the research within the realm of 3D UI that you've seen that really sticks out?
[00:09:15.800] Frank Steinicke: I think it's what has happened just in the last one or two years regarding the hardware which has become available, due to this smartphone-era VR and 3D UI that we currently face. So literally every week or so there's another head-mounted display coming out. And this is really surprising for somebody who has now been working in this area for two decades or so; there was really slow progression over the last 20 years, at least from the hardware perspective, and now within the last one or two years it's even hard to read all your emails and tweets about new devices that come up. So this is really surprising, and I really expect this to continue for the next couple of years. We really face this exponential growth now, and we have a situation where it's really getting fascinating what's coming out within the next couple of weeks.
[00:10:01.932] Kent Bye: Yeah, and one of the things that coming here to the IEEE VR conference that I'm sort of seeing emerging is that there is a very strong presence of Germany within virtual reality. Maybe you could speak to a little bit of like the connection between VR and Germany.
[00:10:14.096] Frank Steinicke: Yeah, I mean it's not only Germans. I think in all of Europe we have quite a strong VR community: in France and the UK as well as Germany. In Germany there is also a special interest group in virtual reality and augmented reality, in which basically all professors and postdocs in Germany who are interested in virtual reality also organize a workshop annually, so one time a year. And a lot of people there are also pretty interested in using VR for real industrial applications. So there is a lot of cooperation, for example with aerospace, CAD, and automotive, as well as the gas and oil industry. But it's a nice mixture of people who are really interested in industrial applications, but also people like my group, who are more interested in the basic research knowledge, something like that.
[00:11:02.207] Kent Bye: And where has some of the research that you've been doing been applied? Is it in industrial corporate applications or where have you seen some of the stuff that you've been doing kind of play out into the world?
[00:11:13.554] Frank Steinicke: So currently most of the things that we actually did are really basic knowledge. So we are really interested in how we perceive the virtual environment, how we can manipulate users in the virtual environment, and how we can manipulate how they behave. So there is not a direct link to industrial applications, but we have some applications or other projects in which we develop, for example, micro aerial vehicles which can be steered and controlled in a virtual reality setup as a simulation, and this is then applied to a real-world application. But for most of the things, at least those we do in my group, we are really more interested in this basic research part, not so much in how you can apply it within the next one or two years, but maybe 10 or 15 years later.
[00:11:52.830] Kent Bye: And it seems like there is this parallel development with virtual reality, with all these academic and sort of corporate applications over the last 20, 30 years, and then this new emerging consumer VR market that is starting with games and entertainment, but potentially going beyond that. But I'm curious how, from your perspective, after being in this field for over 20 years, how you see, over the last couple years, everything developing and where you see it going.
[00:12:16.067] Frank Steinicke: Yeah, I mean it's pretty interesting and pretty amazing what is currently happening, and it's really nice to see how this new consumer technology also improves the way we can perform our studies, and also attracts the interest of our students. So when the Wiimote came out, or the Microsoft Kinect, or now the Oculus Rift, this is really hardware that our students know, and they really want to get their hands on it. So it's really easy now, and very interesting, to get good students to work with that. And I assume that this will continue and will really lead to the fact that VR will be used outside the lab, which means it will be used in our daily living rooms or in working environments.
[00:12:52.872] Kent Bye: And finally, what do you see as the ultimate potential of virtual reality and what it might be able to enable?
[00:12:59.298] Frank Steinicke: I mean, if we have the perfect ultimate display, which we are all aiming for, then we can do whatever we want. I mean, this is the major goal of virtual reality. Then, of course, we will face all these ethical questions, right? I mean, if we can do anything, the question is what is the appropriate thing to do, and what would be inappropriate content that we don't want to show. And I think these kinds of things are the future topics that our community now really has to face, since it is no longer just the last 500 people here in this community talking about virtual reality; there are millions of people outside who will get their hands on virtual reality very soon. And there are a lot of ethical questions that I think have to be answered by this community. Great, well thank you.
[00:13:37.621] Kent Bye: You're welcome, thanks. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash voices of VR.