#443: Five Universal Tasks of 3D User Interfaces with Doug Bowman

In 1968, Douglas Engelbart gave “The Mother of All Demos,” the first public demonstration of a mouse as a computer control device. For the last 48 years, the mouse and keyboard have remained the primary input devices for human-computer interaction. Virtual and augmented reality represent a new immersive computing paradigm whose equivalent 3D user interfaces are being continually refined amid a burst of innovation in new input devices.

Doug Bowman has been one of the leading researchers in 3DUI as the Director of the Center for Human-Computer Interaction at Virginia Tech, and the co-author of the 2004 book titled “3D User Interfaces: Theory and Practice.” The second edition is due to come out in early 2017, and is available in early release.

I had a chance to catch up with Doug at the 2015 3DUI conference that was co-located with IEEE VR in Arles, France, to talk about the five universal tasks in 3DUI: navigation, selection, manipulation, system control, and text input. We talk about the open problems of 3DUI, the uncanny valley of VR locomotion, and the strengths and weaknesses of academia when it comes to comparing different approaches individually and then within the context of a larger application. I also recount some of the big innovations in input devices since this was originally recorded in spring of 2015.


Here’s the moment when Douglas Engelbart and Bill Paxton publicly demonstrate the mouse for the first time at the 1968 Fall Joint Computer Conference in San Francisco:



Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR podcast. So on December 9th, 1968, Douglas Engelbart gave what is commonly referred to as the mother of all demos. It was at the Computer Society's Fall Joint Computer Conference in San Francisco, and he did a 90-minute live demonstration of what we essentially know as the fundamental elements of modern personal computing, including a computer mouse, which for the last nearly 50 years has dominated our primary mode of human-computer interaction. Well, virtual reality and augmented reality both represent new ways of doing more natural and intuitive 3D user interfaces. And somebody who's been one of the leading researchers in this area has been Doug Bowman. He's the director of the Center for Human-Computer Interaction at Virginia Tech. He's also the co-author of a book called 3D User Interfaces: Theory and Practice, which first came out in 2004 and has a second edition that's coming out in early 2017. So I actually had a chance to catch up with Doug back in March of 2015 at the IEEE VR conference in Arles, France, where we talked about the five universal tasks within 3D UI, as well as some of the ways that they measure and benchmark 3D UI, as well as some of the open problems facing the field. So, that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. This is a paid sponsored ad by the Intel Core i7 processor. If you're going to be playing the best VR experiences, then you're going to need a high-end PC. So Intel asked me to talk about my process for why I decided to go with the Intel Core i7 processor. I figured that the computational resources needed for VR are only going to get bigger. I researched online, compared CPU benchmark scores, and read reviews over at Amazon and Newegg. What I found is that the i7 is the best of what's out there today. 
So future-proof your VR PC and go with the Intel Core i7 processor. So this interview with Doug happened all the way back in March of 2015 at the IEEE VR conference in Arles, France. And this was just a few weeks after that March's GDC, where Valve had announced the HTC Vive. So with that, let's go ahead and dive right in.

[00:02:35.717] Doug Bowman: I'm Doug Bowman. I'm at Virginia Tech in Blacksburg, Virginia. I'm a professor in computer science and direct the Center for Human-Computer Interaction there. My own research group is the 3D Interaction Group. I've been working in virtual reality for about 20 years. I started as a grad student in 1994 at Georgia Tech. I had my first VR experience in Larry Hodges's Virtual Elevator, which is a fairly famous early VR demo. And I've been working on it ever since. So my specialties are the benefits of immersion in virtual reality. So what do we get by adding more advanced display features like stereo and surround screens and wide field of view and that sort of thing. And then my other area is the design and evaluation of 3D interaction techniques. So how do we interact with the virtual world to navigate, manipulate, select objects, give commands, that sort of thing.

[00:03:28.039] Kent Bye: Right. And so you've written a whole book about 3D user interfaces. And so I'm curious about some of the major points that you were trying to get across in terms of summarizing all the work and research that's been done in this field over many years.

[00:03:41.329] Doug Bowman: Right, so when we started that book project, the book is 3D User Interfaces: Theory and Practice, and when we started that project there really was no reference work that covered all of the stuff that had been done at that point, mostly in academic research, on how to interact with 3D worlds using 3D input devices. And of course for virtual reality that's really the standard mode of interaction. You may be wearing a head-mounted display, you may be in a CAVE or something like that, but you don't want to interact with a mouse and keyboard; you want to interact with your hands or with a tracked device or with your whole body. And so that book really is still the only collection of the standard techniques for what we call the universal tasks in 3D UI. So again, those tasks are navigation and selection and manipulation and system control, and even text input is something that we cover in the book. So the book is kind of organized around guidelines taken from all this prior work where lots of academics have done empirical studies of 3D interaction techniques and which ones perform better, which ones make people have more fun, which ones are more engaging, which ones maintain your sense of presence, that sort of thing. And so the book gives a lot of guidelines to kind of help you choose the best 3D interaction techniques for your application.

[00:04:53.865] Kent Bye: Yeah, and I think within the consumer VR realm, navigation has been probably one of the biggest open problems in terms of how do you navigate within a virtual space when you're not actually physically moving around without inducing motion sickness into people. And so in terms of navigation, you had a slide today about kind of the uncanny valley of navigation. Maybe you could talk about that spectrum from low fidelity to middle fidelity to high fidelity in terms of all the different navigation options that are out there.

[00:05:22.495] Doug Bowman: Sure, so it's interesting kind of watching the consumer VR space and how it's changing in terms of navigating through virtual worlds. So with this kind of recent rise of new HMDs starting with Oculus, but many others in recent years and months, the kind of default assumption is that people would be sitting at their desktops in front of their PCs just wearing a head-mounted display. And so there's maybe a little bit of head-turning, but mostly you're still using your mouse and keyboard or using your game controller to navigate through the world. And then all these kind of standard navigation metaphors apply that everybody's already used to in the gaming world. The hard thing there is that when you try to combine that with some physical movement, like a little bit of head turning, then, you know, when do I use head turning versus when do I use, you know, virtual turning with the joystick or whatever. So now we're seeing kind of this shift where more and more people are interested in greater levels of physical movement. So Oculus added a position tracking system to the DK2, which allowed you to kind of lean back and forth a little bit when you're sitting at your desktop. And newer systems, like I understand that the Valve HTC device is going to allow you to walk around your room, right, and tell you when you get near the edge of your room. So people are moving towards more and more physical navigation. But you're always going to have some element of virtual navigation in there as well, unless you can track a space that's the size of your entire virtual world, which is not going to be the case for most virtual worlds. So that's where these kind of what I call medium fidelity locomotion techniques come in, where we're trying to give people the sense of physical movement, but we don't have a tracked space that's large enough, so we have to do something different. 
So we either use a treadmill type design, I talked about the VirtuSphere, which is this kind of hamster ball design, which allows you to walk infinitely in a virtual world. And then there's techniques like redirected walking, which kind of constrain you to move within the tracked space, but give you the illusion of walking infinitely through the environment. So what we found in our work is that these kind of medium-fidelity interfaces, for the most part, have lower performance and usability than either a high-fidelity interface, where it's just purely real walking in a tracked space, or a well-designed low-fidelity interface like a gamepad. So as you said, it's kind of like the uncanny valley. The idea is that as you try to become higher fidelity in your interface design, in your locomotion interface design, that people expect it to work just like the real world, and when it doesn't, they're confused and they have to adapt and so on. Whereas if you're just designing for a gamepad or for a mouse and keyboard, you're free to do whatever design works, and then you can still get high performance and high usability with that.

[00:07:53.564] Kent Bye: Yeah, and in terms of selecting objects, there seems to be either an approach of using a camera-based tracker, like a Leap Motion or other infrared depth sensors, or using something that actually has a physical button, like the STEM controllers, which use electromagnetic tracking, or something like Lighthouse, which uses lasers. And so there's trade-offs, I guess, between having a physical button with lower fidelity versus something that's a little higher fidelity but doesn't have all the different haptic feedback and everything like that. So, from your perspective, how do you see that spectrum and some of the trade-offs when deciding what type of interactions you may be designing for a VR experience?

[00:08:37.704] Doug Bowman: Yeah, that's a really interesting space as well. And this is a space where kind of the current trends in consumer VR are very different than the assumptions that we started out with when we were doing academic research in VR, you know, 10-15 years ago. So at that time there were no kind of bare-hand trackers like the Leap Motion or full body trackers like the Kinect. You always had to be holding something or wearing something that would be tracked. And that was a good opportunity to provide physical controls like buttons and joysticks and so on. So all the selection and manipulation techniques that we describe in our 3D UI book are based on that sort of setup, where you assume that the user has some sort of hand-held device that has at least a button on it that you can use to give some discrete input. And that's really useful. So when trying to migrate or port some of those techniques to newer devices like Leap or Kinect, you have to figure out how to replace that discrete input. And maybe you can do it with speech, or maybe you do it with a gesture that's recognized and serves as a discrete event. But neither of those are as easy or as easily recognized as just a simple button press. I think there's a lot of potential for gesture-based interaction with bare hands. We've been playing around with Leap a lot. In fact, we have a 3D UI contest entry here at the conference that uses Leap for playing music with a virtual musical instrument. So we've been playing around with it a lot, but it's challenging. As you said, the lack of haptic feedback is one of the key things there, right? So you think that it's intuitive and natural to kind of just put your hands in the scene and do very fine-grained, precise manipulation with your fingertips and so on. But without haptic feedback, that's still really hard to do. 
Even if you get all the collisions correct and you model the hand exactly right, it's still, you don't have the same level of effectiveness that you do in the real world when you're manipulating a physical object.

[00:10:25.415] Kent Bye: And so what are the other sort of categories? You mentioned the five different areas, you know, selection and manipulation. When you look at something like a 3D UI contest, what are some of the higher-level ways of trying to boil down these concepts and apply them in a contest?

[00:10:43.431] Doug Bowman: Well, I think that something that people have to face, which they don't often think about at the beginning, is the system control concept. That's the name that we give to menus or any sort of system of giving commands to your application. And we don't want to think about those in VR a lot of times because we expect it to just be natural and realistic, and I'm not often interacting with menus in the real world, so why should I have to do it in the virtual world? But it always comes up. You always need to change modes. You always need to change your style of rendering. You need to, you know, load a file. Anything that we do with a computer system, we often need to be able to do in VR as well. And so that's, I think, an area that a lot of people who are doing consumer VR are going to have to wrestle with: how do I get a usable slider widget into my application when I need to set a value, but all I have for input are the positions of my fingertips in space? I think that's not an easy problem to solve.

[00:11:38.313] Kent Bye: Yeah, and what are some of the virtual reality user interfaces that you've seen that really stand out in terms of a really great implementation?

[00:11:46.222] Doug Bowman: So it's kind of a hard question to answer because typically the academic research has focused on kind of individual interaction techniques, right? So every year at the symposium people will publish, you know, new selection techniques, new manipulation techniques, new locomotion or travel techniques and compare them to the state of the art and so on. But there's not a lot of academic research on kind of combining all of those into a single user interface that hangs together, that's coherent, and that makes sense all together, where the interaction techniques don't conflict with one another. So we can give guidance on, you know, if you need to choose a selection technique, here are your choices, and here are the ones that have been proven to be the best, but that may not work with the other components that you've chosen for your application. So we don't tend, in academic research, to build up kind of full-blown production-level applications, and so we don't get to answer that question very much, if that makes sense.

[00:12:37.452] Kent Bye: Yeah, it does. I mean, the one that comes to mind is Oliver Kreylos's Vrui. I don't know if you're familiar with that and can kind of comment on how he's sort of implemented those principles.

[00:12:46.514] Doug Bowman: Sure. Yeah, I mean, that's a nice example of at least a toolkit that gives you access to kind of all the major tasks and in a way that hangs together and that is coherent. So the individual interaction techniques you might be able to quibble with, but as a whole, that's a nice example.

[00:13:00.533] Kent Bye: And when you're looking at evaluating these, is it in terms of speed, cognitive load, what are all the different dimensions that you're looking at in terms of evaluating these different 3D UI techniques?

[00:13:10.756] Doug Bowman: Right. Well, we always start with basic task performance. So that's, you know, time to complete a task and accuracy or the number of errors that you make. But we really do want to look at a much broader definition of user experience. So we may measure, you know, how navigation techniques affect simulator sickness or how manipulation techniques affect your sense of presence in the environment. or how a user interface for a learning application affects your ability to learn. So cognitive load or mental workload would be an issue there. So we really do try to measure a broad spectrum of UX metrics in our work.

[00:13:44.527] Kent Bye: In terms of your research moving forward, what are some of the big open problems that you see are still out there that are motivating you to continue on in researching this?

[00:13:53.473] Doug Bowman: Well, so I think we have a pretty good set of techniques for these fundamental or universal 3D UI tasks. One of the problems is in moving those techniques, as I said, to newer forms of input, like bare-hand input with the Leap Motion, for example. Techniques really have to be rethought and redesigned to make sense with that sort of input. Another one that I already mentioned is really kind of researching UX design for VR more at the application level as opposed to at the fundamental interaction technique level. And I think there's a lot of opportunities to do that. There's a barrier, though, which is that it may be hard to publish research like that, so academic researchers may be less likely to do that sort of thing, because it's hard to find a comparison and to compare your UI against another whole UI. And then I would say that, you know, we need to start looking more at domain-specific sort of tasks. So we did this a little bit several years ago where we were looking at the domain of architecture and construction. And rather than just trying to find, you know, the best selection technique or the best manipulation technique for your architecture or construction VR application, we were looking at what specifically do people in those domains need to be able to do when they interact with an environment. And so in that example, we came up with this task of cloning, where an architectural structure often has a series of repeated elements that occur in space, right? So it's kind of like object manipulation. A generic object manipulation technique is really not sufficient to place hundreds of elements in space. So we tried to design, from the ground up, techniques that would be specifically for that domain and specifically for the task of cloning. 
And I think that there's a lot of other places where we could do that for different application domains, so entertainment or architecture or for the different application domains that people want to use VR for.

[00:15:39.632] Kent Bye: And you had mentioned presence. And I'm curious, how do you quantify or measure presence in terms of whether or not a 3D UI technique is either increasing or decreasing immersion?

[00:15:51.521] Doug Bowman: Well that's still an open question and I'm probably not the best person to answer that because I don't do research on presence directly. We typically kind of take the cowardly way out and use questionnaires. Mel Slater has pretty famously said that questionnaires are not really good for measuring presence and I agree with his analysis but it's the kind of easiest and most direct tool that we have. Presence is an internal psychological construct, right, so you can't measure it directly. Maybe you could do some brain monitoring with EEG devices or whatever and try to somehow correlate that to a sense of presence, but it's not something that you can just observe naturally in the world. So, I guess I would say that the things that I've seen that are the best measures of presence are the behavioral ones. So, the famous one is the duck test. If I swing a virtual baseball bat at your head, do you duck? Right, even though you know it's not real. And if so, then that indicates some level of presence in the environment. but for many applications there's no analogous thing to the duck test or it would be really artificial to insert something like that into your environment. So there's no perfect answer to that, but I think the neuroimaging sort of research will hopefully give us a little bit more direct view into what's going on there.

[00:16:59.842] Kent Bye: Well, in terms of the 3D UI, have you determined things that either increase the sense of presence or kind of break presence if it's not done well?

[00:17:09.484] Doug Bowman: Well, any time you take the user's attention off the virtual world and force them to think about something in the real world, there can be a break in presence. So again, I gave this talk today that included this VirtuSphere device for locomoting through the environment in a human-sized hamster ball. And there, you really have to think about the physical device in order to use it effectively. You have to think about, okay, there's this big ball around me and I need to kind of move my body in this way so it doesn't start moving too fast or, you know, I don't lose my balance or whatever. And it's really hard to pay attention to the virtual world, get engaged with the virtual world and feel present when you're always focusing on what's your physical body doing with this physical device. You know, interaction techniques that are more purely virtual, you know, just the key is to design them so that they're subconscious as much as possible. You don't want to think about the device. You don't want to think about how to operate the interface. You want to think about the content that you're interacting with in the world.

[00:18:03.318] Kent Bye: And what has kept you motivated to keep involved with virtual reality and doing these immersive 3D UI techniques?

[00:18:10.301] Doug Bowman: Yeah, so I mentioned earlier that I did my first VR demo in 1994, and that was inspirational in the sense that I realized that this was a form of computing that I hadn't come across before. I had done a little bit of user interfaces, I had done some computer graphics, but putting them together in this way where I was in the computer graphics scene and I felt it in my body, I had physiological reactions to that. It was a qualitatively different sort of experience, right? And I think that's still true today. You bring someone into the cave for the first time and there's a wow factor, even though we're all used to lots of technology in our lives. So that's really still my motivation. I want to understand where that shift is between interacting with a simulation in a computer and experiencing a simulation directly. And how do we use technology to make that experience accessible? And how do we use design to make it usable? And there's still, as I said, lots of open problems there. So plenty of work still to do.

[00:19:10.770] Kent Bye: And finally, what do you see as the ultimate potential for virtual reality and what it might be able to enable?

[00:19:17.413] Doug Bowman: Wow. So I'm not a person who believes that virtual reality is the answer to everything. And again, with the current wave of interest in VR, there's a lot of people who are thinking we're going to be spending all of our time in virtual reality. That's not really my take on it. But I do think we can find more and more niche application areas. There are some where VR has already been successful but has stayed pretty small, as well as large-scale application areas where people will be using this for performing tasks, for getting work done, not just for entertainment. And so that's really the sort of application that I would like to enable: more productivity applications in VR where there's some real-world output. Again, the architecture example is a good one. If I can not only visualize my architectural design, but modify it and create it while I'm immersed in VR, and then the result is a design for a real world building, that to me is an exciting use of the technology.

[00:20:14.174] Kent Bye: Great. Well, thank you so much. All right. Thank you. So that was Doug Bowman. He's a professor as well as the director of the Center for Human-Computer Interaction at Virginia Tech. So I have a number of different takeaways from this interview. First of all, I think it's really helpful and useful to think about the five universal tasks of 3D user interfaces. So those five are navigation, selection, manipulation, system control, and text input. So in talking to Doug, one thing that I found really interesting is that they're trying to objectively measure each of these sub-tasks against each other. But yet, from the academic perspective, it's really actually difficult to comprehensively come up with an entire user experience with all these different elements integrated. And I think that's a big shift between what has happened previously in academic research and what's currently happening within the VR ecosystem, where developers are forced to come up with all these different integrations of these cohesive applications. So one of the biggest still open questions, I think, is being able to objectively measure different user inputs based upon the same task. I think we may start to see some VR applications that start to implement all the different input controls that are out there and available, but yet I think at this point, most VR developers don't have the time or budget to really fully do that. But I think as the VR ecosystem matures and grows, we're going to have more opportunities to start to more objectively measure some of these different 3D UI interaction techniques, whether it's from navigation, selection, manipulation, system control, or text input. With the text input, I think we're actually moving more towards conversational interfaces where people will just be speaking naturally and be moving away from some of the ways that we can input text. 
But given that, I still see that there's going to be a need for ways that people are entering text. One thing that Google showed that was really innovative was kind of like the xylophone approach of using the Lighthouse controllers to kind of bang out letters on this kind of enlarged keyboard. So a couple of other points I wanted to bring out, and then kind of go over a lot of the different input devices that have been coming out since this interview was recorded way back in 2015. So I think that there are going to be different input devices out there, and they're going to have their strengths and weaknesses. And so I think it's going to be important to see what the time is to complete specific tasks and then the number of errors that you make on those tasks, and kind of see what's the most efficient for people who are doing professional applications. So for those people who have to do a task over and over and over again for anywhere from five to six hours a day using this immersive computing technology, it's going to be really important to figure out the ways that it's going to be the most efficient. I think with the mouse and keyboard, we have the advantage of nearly 50 years of innovations to kind of figure out what is the most efficient way to do specific tasks. And for some things, it actually may be faster to do things with a mouse and keyboard than using these immersive computing technologies. But I think there's just going to be a lot more natural user input and less of a learning curve to be able to jump in and do certain tasks. But with that said, there are going to be some interfaces that are going to be more fatiguing to use in a 3D UI versus with a mouse and keyboard. So that's something to keep in mind: how long are you going to be able to do some of these tasks? Not just the efficiency, but the endurance that it takes to be able to actually do some of these within an immersive computing environment. 
So I think it's really interesting that they're also trying to look at the impact of some of these different 3D UI techniques on things like, is a locomotion technique going to give you any sense of motion sickness? Are some of the manipulation techniques going to impact your sense of presence in some way? Or is the interaction going to require so much cognitive load that it may actually impact your ability to learn within a training application? So let's dive a little bit into this presence. And I think it was really interesting that Doug had talked about this duck test. And it's something that I've personally experienced, that if you do have this sense of presence, you start to actually try to avoid things, like walking around a virtual table, or you actually try to set something down onto one. It has created such a deep sense of presence that you are behaving in a way that is observable. So I think in a lot of the VR research that has been done in presence, they try to introduce these kind of fake threats to see if you're actually going to respond to those threats. And so it's something that has been colloquially described within the VR community as the duck test. So while the duck test may make sense in some situations, putting in some sort of threat or duck test within your experience just to see how present some people are may actually break their presence if they realize that it's not actually a viable threat. So there's some trade-off there with trying to measure presence, but it's an internal psychological state. So it's interesting to hear that Mel Slater has famously said that surveys aren't enough, that you have to have some other physiological or behavioral indications of that presence. But it's still an internal psychological state. 
So I think one of the big open problems within VR is going to be trying to objectively measure presence within people, but also trying to design experiences that cultivate even more presence. My own personal opinion is that different people have different levels of quality of presence within real life. And so how are you going to start to measure that? But that may be some indication that if you feel like you're really fully present in your real life, then maybe you're able to achieve presence within VR more easily. But that's just one of my hypotheses. So Doug's book, 3D User Interfaces: Theory and Practice, came out way back in 2004. And so I think there have been quite a lot of new input controls and devices since then. I just wanted to briefly talk about some of the things that have been in development commercially that have been giving public demos, and some things that just got funding and haven't even been publicly shown yet. Things like the Leap Motion, for example, have been out there for a long time. How are you going to be able to actually do some of these user interfaces with your hands? And are there going to be certain situations where having the full freedom of movement of your hands, which is way more than just six degrees of freedom, more like 21 or beyond, gives you all these different levels of control? But the challenge is that you're not getting any specific haptic feedback unless you're using your fingers as haptic feedback, so you're kind of touching your own hand or other fingers in some way. The Myo armband is something that just got a lot of VC funding, and that's something you put on your arm that could a little more objectively measure some of the movements of your hand. So you may be able to do specific gestures that trigger some sort of event within VR. 
I think that in trying the HoloLens, you have this pinching motion where you're taking your index finger and moving it down onto your thumb. That's something that worked okay for a little bit, but it can be a little bit fatiguing to be doing that type of movement. It's quite a large motion, so I'm not quite sure how comfortable some of these gestural controls are going to be in the long run in terms of how fatiguing they can be. One of the things that I found really interesting in talking to one of the co-founders of OpenBCI, which is a brain-computer interface, is that instead of using EEG as a real-time input control, they're moving more toward electromyography, the EMG signals. These are electrical signals from your muscles that fire whenever you do specific micro-expressions of the face, like jaw clenches, moving your tongue, or eye clenches. These are things that give a specific impulse that can be easily measured, and so there could be ways to start to use your facial expressions for input control. I also just had a chance to do a demo of Eyefluence, and I'll be running an interview with their CEO Jim Marggraff here soon. You're essentially using your eyes as the primary mode of user interface: they've come up with a way to interface with technology using just your eyes, without using your hands at all, which in some ways starts to feel like it's reading your mind. And then Oculus is going to be coming out with their Touch controllers, and HTC has the Vive controllers; the Touch controllers are optically tracked, and the Vive controllers are tracked by lasers. I'm still seeing Sixense and the STEM controller show up at different trade shows doing demos. I think it's yet to be seen how the STEM controllers are going to be integrated with a lot of this technology; from the software side, developers are going to have to be using their special SDK.
And then on top of that, there may be specific use cases where electromagnetically tracked controllers like the STEM are going to be better than either optical or laser tracking. So it's yet to be seen whether there are specific use cases for the STEM controllers, and where they go in the market. But I know Oliver Kreylos has said that it's actually really good that there are different types of input controllers out there, because they have different strengths and weaknesses. We're also going to be seeing a lot more tracked devices when it comes to CES this year. I think there are going to be a lot more abilities to have new objects tracked within VR, and potentially even ways of putting more tracking points on your body to get more of a sense of immersion. So that's something I think will be coming out in early 2017. There are other controllers like the Nod controller, and other locomotion devices used as input controllers, like the Virtuix Omni, the Cyberith Virtualizer, or an exercise bike like the VirZOOM. So I think there's different locomotion hardware out there that's going to make it easier and more comfortable to locomote within VR, since the very act of moving your body up and down provides vestibular cues that can make moving through these virtual environments more comfortable. So just one final thought: I think academic research has been able to provide a lot of really objective measures of some of these different techniques, whether it's navigation, selection, manipulation, system control, or text input. But at this point, it feels like the tide is shifting toward the consumer market, which is going to be doing a lot more holistic integration of all these different components within a cohesive user interface.
And as we go forward, perhaps we'll be able to create standardized applications where we can swap out all these different input controls and see which ones give the most efficient time to complete a task, and which ones minimize the number of errors that you make. There may also just be a matter of personal preference: whether you prefer something like a Leap Motion, with full access to your hands; whether you prefer a mouse and keyboard in 2D; or whether you want to use a tablet interface, where you have the equivalent of a tablet there in the space. Or you might do some combination of all of these different user interfaces, where you have one 6DoF controller in one hand, and in the other hand maybe a keyboard, a tablet, or a mouse if you're sitting down. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then spread the word, tell your friends, and become a donor at patreon.com slash Voices of VR.
