#3: Oliver “Doc_Ok” Kreylos on VR for Data Visualization & Scientific Research + Kinect-enabled, collaborative VR telepresence

Oliver “Doc_Ok” Kreylos is a research scientist and computer scientist who develops virtual reality applications for scientific research, specifically immersive 3D visualizations for the Department of Geology at the W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES) at UC Davis.

He is an active participant in the Oculus subreddit, posting as “Doc_Ok,” where he has garnered a lot of attention for his innovative data visualizations as well as for using an array of Kinects to create a “pseudo-holographic” avatar within virtual reality.

Oliver has been making virtual reality applications since 1998, including the Virtual Reality User Interface (aka Vrui) toolkit for navigating and interacting with immersive 3D visualizations of scientific data. He has a wealth of knowledge about VR and has been providing a lot of insightful commentary on his blog at Doc-Ok.org.


TOPICS

  • 0:00 Intro & enabling scientific VR visualization
  • 1:57 Remote collaboration with scientific collaborators
  • 4:49 Developing the Virtual Reality User Interface (Vrui) toolkit
  • 8:01 3D visualizations that are impossible in 2D
  • 10:37 Converting 2D CAT scan slices into full 3D medical visualizations
  • 14:12 Future of Kinect-enabled telepresence collaboration
  • 16:32 Hardware & software stack for setting up a calibrated Kinect-array for telepresence
  • 19:07 Speculation on hacking Kinect V2
  • 21:37 How Kinect VR avatars can provide a sense of presence
  • 24:12 The importance of implementing positional head tracking for presence
  • 25:32 Importance of using 6DOF Hydra & STEM controllers with Vrui & data visualization
  • 27:29 Importance of supporting VR input devices in a unified manner to avoid previous VR mistakes
  • 28:32 Prophetic feedback on the Oculus DK1 that has been integrated into DK2
  • 31:33 Find Oliver online at Doc-Ok.org, @okreylos & Doc_Ok on Reddit. Oliver’s future projects

Links

Music: “Fatality” by Tigoolio

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:11.955] Oliver Kreylos: I'm Oliver Kreylos. I'm a research scientist at UC Davis, and I have this kind of weird cross appointment. I'm a computer scientist, but my appointment is in the Department of Earth and Physical Sciences, where we have this project called KeckCAVES, which is about developing virtual reality as a scientific instrument. So the idea is that it is a very close collaboration between earth scientists and computer scientists like myself to use the core virtual reality environment that we have in that collaboration, which, well, is a CAVE, hence the name KeckCAVES, to really take that beyond just how it's normally used in other places, where it's primarily a way to... I almost call that 3D PowerPoint. People do some visualization, they make some images and bring them into a CAVE and show them to other people just to communicate the results. But we are really treating it as the environment where you do the science that leads to the results, by developing custom software to bring in all kinds of data and then get the scientists into this VR system, make them essentially a part of the machine so that they can then analyze the data, derive insight, derive data from it, and then go back to do what they normally do. And so in that capacity, I've been working on virtual reality, specifically on virtual reality applications, for a fairly long time. We started this CAVE thing more than 10 years ago now. And so everything else I'm doing, and also the things I'm doing at the moment with Kinects and Rifts and so forth, all of that is connected to the core mission that we have. It just enables different aspects of that mission.

[00:01:44.962] Kent Bye: I see. So yeah, you've made quite a wave in the consumer VR community. On the subreddit for Oculus, you've been posting a lot of really fascinating demos with telepresence and what you can do with the Kinect. So it sounds like a lot of this experimentation is also part of your day job. Is that correct? Or is this kind of like side projects that you're doing?

[00:02:04.574] Oliver Kreylos: No, that is very much a part of what I'm really supposed to be doing. Yeah, part of my day job. One of the things we are looking at is remote collaboration. We have found out that when scientists use visualization, when they use specifically our cave, they always tend to work in groups, small groups, because the cave is primarily a one-person environment, so there will be a professor with two grad students or a postdoc with three undergrads, something like that. And we found out that they will spend almost the majority of the time not really working with the data, but talking to each other about the data. They will really gain insight by discussing it, by saying, oh, look at this, this looks strange, what do you think about it? And so we realized a long time ago that science is becoming more and more distributed, that people work together over long distances. We have all these projects going on with people on the East Coast or in Europe or in China or even with people in Antarctica and so forth. So then from there, it was really kind of an obvious step to try to replicate the way people work together in the same virtual space by extending it to people working together over long distances. So we started looking into this remote collaboration, telecollaboration thing a fairly long time ago; it was just that initially the hardware needed for that, the kind of 3D cameras to capture these pseudo-holographic avatars of people that really enable collaborative work, was custom built and very expensive and very finicky. And so when the Kinect came around, the initial one, like three years ago, we were pretty much ready and waiting for it, which is why we were able to make these impressive demos very early on, because we had all the infrastructure already there, just waiting for the Kinect to drop in, so to speak. And the same thing with the Rift. I've been working on low-cost VR, low-cost in the sense that a cave will set you back half a million dollars, but starting in about 2008, I realized we could suddenly build environments with the same kind of functionality, somewhat scaled down, for less than $10,000, for about $7,000. And so we've been doing that for a while, and then of course things got better and more commodity: Razer Hydras came around, 3D TVs got cheaper, and then suddenly the Rift came around, which again makes things another order of magnitude cheaper, which is of course why I got interested in that. And just like with the Kinect, I was able to do some early demos and show what we're doing because we had all the software already running and in place. It was just a matter of supporting the Rift in my software stack, which was actually not really all that difficult. Which is why I've been involved in the Oculus subreddit, because there's just so much that we have already done, where we found out how things might work or might not work, that I think it's important to share that to make sure it doesn't get forgotten. I mean, there's a lot of really exciting new stuff going on, but it's also important to get the knowledge out about the things that were already done, that have already been explored.

[00:04:50.024] Kent Bye: And can you talk about the origins of your Vrui project and then how you eventually tied that into the Rift as well?

[00:04:58.848] Oliver Kreylos: Sure thing. I call it Vrui, pronounced "vroo-ee": it starts like "vroom" and ends like "gooey." Yeah, I shouldn't have chosen that name. It's just, it's stuck. Anyway, like I said, I'm stuck with it. When I really got into VR for the first time, that was in '98, I had been doing 3D graphics for a long time before that, but never really proper VR. But when I came to UC Davis, I was a visitor back then, I came to Davis to do my master's thesis. I was just finishing up my degree in Germany at the time. And so I came to Davis, and the day after I arrived, literally, they took delivery of the first VR system that I had ever seen, which was a so-called immersive workbench. It was like this drafting table, about six feet wide, four feet tall, slightly angled, and you were wearing 3D glasses, and you had these data gloves, and you had a stylus, which was, I thought, just insanely cool, because it fit so much into what I had always been wanting to do. But of course, nobody was using that because there was zero software for it and it was really difficult to develop for it and all that. So at that point, it was not part of my day job. I just really dived in and just started developing software for that. And then when I came back to Davis to do my PhD, again, my day job at the time was scientific visualization, meaning turning data into images. But I always put a VR spin on that because I found VR to be so compelling. I really had the hunch that this is the perfect user interface for how we can interact with 3D data. And so I developed essentially all the software that I had to develop for class work and project work to also run on that workbench VR environment. And then during the summers while I was a PhD student, I was working at the computer visualization group at the Lawrence Berkeley National Laboratory up in Berkeley on the hill. And so they had yet another VR environment, and they forced me to develop my software based on this VR toolkit that was sort of the state of the art back then, called the CAVE Library, or CAVELib. And so now I had to develop two versions of it, one for the toolkit that we ran in Davis and one for the one that they ran in Berkeley. And they both just sucked. I mean, not in the sense that they were bad functionality-wise, but that they were way too low-level; you were almost programming on the bare metal. So if you wanted to do a menu, you had to write the whole thing yourself. And so I developed this higher-level layer on top of it just to make things easier, and I called it Vrui. And then a while after, I realized that I really didn't need those underlying toolkits anymore, CAVELib and whatever the thing on the workbench was called. And so then I guess you could say the rest is history. So I've been developing that ever since, and found that it is really easy to adapt to all kinds of different, not just different brands of environment, like a cave made by one company versus another, but totally different types of environment, meaning, for example, a cave compared to a desktop, or a desktop compared to an ImmersaDesk, or a desktop compared to a Rift. And so that's why I've been able over the years to adapt the software so easily to all these new devices that came out, including the Rift. I had to do quite a bit of work for that because it was the first HMD that I ever worked with on a serious basis. But then once the basics were there, then of course all the other software just ran out of the box. And so that was of course great to get all this stuff ported and going very quickly.

[00:08:02.315] Kent Bye: And since you do do 3D visualizations, I'm wondering if you could talk about some of the things that you're able to do in a 3D environment that just don't really translate all that well to a 2D environment.

[00:08:15.584] Oliver Kreylos: That's a good question, because we've been trying to put it into words for going on 10 years. As you can imagine, it's really hard to communicate what we're doing to people via words or text or even videos. Anyway, let me try. One of the things that is always forgotten, that is not obvious, is that when you're doing any kind of 3D visualization, you always look at the results of that on a 2D image. You're always projecting your 3D data into 2D and then you work with it. And one of the things that is really happening there is that the moment you project, it messes up all the spatial relationships in your data. It distorts sizes and angles. Perspective projection adds to that. And so you can't really take measurements off these projections of your 3D data. You can't just look at the projection of, I don't know, an architectural model and measure and say, oh, okay, this beam here is three meters long or whatever. And that is such an ingrained thing that people don't really expect it to work. And the thing is, with VR, you don't use a projection, even though technically you still do. You project onto screens, but the way your brain interprets the things you're seeing, you are seeing 3D objects and not pictures or projections of 3D objects. And so one thing that you can very obviously do that you can't do using regular visualization in a 2D environment is really measure the spatial relationships in your data. And that turns out to be a really, really big deal, because that is really what science is all about: finding features, isolating them, measuring them, extracting them, and so forth. And so we are getting a lot of mileage, for example, out of bringing scanned 3D models of environments into VR environments, and then doing essentially fieldwork in the VR environment. But because now it's all computerized, you can do your fieldwork much quicker, much more comfortably, and you can collect a lot more data that is also higher-accuracy and higher-precision data. So these earthquake-related things that we've done specifically with LiDAR scans are a prime example of that, but it goes throughout all the applications we have. The go-to example that I like to use as a benchmark is that molecular modeling program where you can build molecules with your hands, because that really takes the interactivity of VR to an extreme, in the sense that you can just literally build a molecule from scratch, which you simply couldn't do... Well, no, sorry, you can do it using a 2D environment, but it's incredibly tedious. It takes a long, long time. It is literally about a factor of 20 or so faster in a VR environment, meaning that you can do things you would never even attempt to do if you didn't have a VR environment, because it just wouldn't be feasible to do it.
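
As an aside for readers who want to see the projection problem concretely, here is a minimal sketch using a generic pinhole camera model with made-up coordinates (nothing KeckCAVES-specific): two beams of identical 3D length project to very different 2D lengths depending on how far they are from the camera, which is why you cannot reliably take measurements off a projected image.

    import numpy as np

    def project(point, focal_length=1.0):
        """Perspective-project a 3D point (x, y, z) onto the z = focal_length image plane."""
        x, y, z = point
        return np.array([focal_length * x / z, focal_length * y / z])

    # Two beams of identical 3D length (2 m), one near the camera, one farther away.
    near_beam = (np.array([-1.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]))
    far_beam  = (np.array([-1.0, 0.0, 15.0]), np.array([1.0, 0.0, 15.0]))

    for name, (a, b) in [("near", near_beam), ("far", far_beam)]:
        len_3d = np.linalg.norm(b - a)                      # true length: 2.0 in both cases
        len_2d = np.linalg.norm(project(b) - project(a))    # projected length changes with depth
        print(f"{name}: 3D length = {len_3d:.2f}, projected length = {len_2d:.3f}")

Running this prints the same 3D length for both beams but a projected length that shrinks with distance, so the projection alone cannot tell you how long the beam really is.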

[00:10:38.295] Kent Bye: Yeah, and I saw a video that you did of something like a CAT scan that you were navigating with Vrui, and you were able to really just expand out entire parts of internal organs. And maybe you could talk a bit about the technical things that you had to do in order to take something like a CAT scan and then be able to really slice and dice it and isolate 3D objects within that.

[00:10:59.913] Oliver Kreylos: Yeah, well, that's my whole dissertation right there. So let me paraphrase that. No, I'm kidding. But yeah, that is another example where we're getting really good traction, because three-dimensional data is just happening everywhere. I mean, science used to be limited to doing 2D things, like, you know, doing slices of simulations and so forth, just because they couldn't handle it. And so now we have the computers that can generate three-dimensional data, but we still don't really have the means to analyze those data. And this system that we have is one of those means. So to concretely talk about the CAT scan: the software we're using there was born not out of medical applications for CAT scans, but out of looking at simulation data, like combustion simulations or fluid dynamics simulations and so forth. But specifically for a CAT scan, what happens is that the scanner essentially slices your body into a stack of cards, where each card is a cross-section of your body. If you take these cards and reassemble them in the correct order, you get a three-dimensional representation of whatever you scanned, your body. But then, of course, you need to find some way of looking at that. And the traditional way radiologists look at that is they just look at it one slice at a time. And those guys are really, really well trained to then, in their heads, reconstruct a 3D mental model of what that patient looks like on the inside, just from looking at these 2D cross-sections. And they are really well paid because there are very few people who can do that. And so with the 3D visualization, we are able to sidestep that process in many ways and allow someone who is not a trained radiologist to still look at these data and make sense of them. So what the software has to do is take this stack of images and reassemble them in 3D space by just, you know, like I said, stacking them on top of each other. And then, of course, the crucial part is the algorithms you can run on that stack of images in order to really make sense of it. One of the really basic ones is the cross-section, where instead of only looking at the slices as they were taken, you can now slice in arbitrary orientations and positions, which is quite helpful if you want to find things going on in the data. And then another complementary technique is where we essentially extract the 3D analog of topographic contour lines. Like, you know, on a topographic map where the lines indicate points that have the same elevation, we can now take points that have the same tissue density, let's say bone or fatty tissue or muscle, and we connect all of them. And in 3D, that doesn't give us lines, that gives us surfaces. And these surfaces really show you the 3D shape of a skull or an organ or, you know, the spine or an artery. And it turns out that the way we can present that in VR is such that even a layperson can really understand what is going on and can see what might be going wrong with something. And of course, a trained surgeon would have a much, much better chance of doing that. So that's another way you can really work with the three-dimensional data: in three-dimensional space, using your hands, interacting with it in a very intuitive manner really helps in getting knowledge, getting insight out of those data.
And that's one of the reasons why we built one of those low-cost VR systems I alluded to earlier for our med school, where they are looking into how to do that for surgeon training and for patient treatment and so forth. We are at the very, very early stages of that, to be honest, but I think it's going to be a big deal.
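
For readers who want a concrete picture of the pipeline Oliver describes, here is a minimal sketch of stacking 2D slices into a volume and extracting a constant-density surface. This is not KeckCAVES code; the file names, the iso-level choice, and the use of scikit-image's marching cubes are illustrative assumptions.

    import glob
    import numpy as np
    import imageio.v3 as iio
    from skimage import measure

    # Hypothetical directory of CT slice images, one grayscale image per cross-section,
    # named so that lexicographic order matches scan order (slice_000.png, slice_001.png, ...).
    slice_files = sorted(glob.glob("ct_scan/slice_*.png"))

    # Stack the 2D slices into a single 3D volume (z, y, x): the "deck of cards" reassembled.
    volume = np.stack([iio.imread(f).astype(np.float32) for f in slice_files], axis=0)

    # Choose an iso-level between the darkest and brightest voxel; in a real CT workflow this
    # would be a calibrated tissue-density threshold (for example a bone value in Hounsfield units).
    # Connecting all voxels at this density yields a surface, the 3D analog of a contour line.
    bone_level = volume.min() + 0.75 * (volume.max() - volume.min())
    verts, faces, normals, _ = measure.marching_cubes(volume, level=bone_level)

    print(f"isosurface: {len(verts)} vertices, {len(faces)} triangles")

The resulting triangle mesh is what would then be rendered and measured interactively inside the VR environment.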

[00:14:14.273] Kent Bye: And so moving forward, what do you see as the ideal situation in terms of setting up a Kinect telepresence collaboration? I know you've had some videos where you had some people wearing either AR goggles or in a virtual reality cave, but I'm curious about where you see that going in terms of doing this type of telepresence collaboration with the tools that are coming out.

[00:14:36.592] Oliver Kreylos: Yeah, I think there's not going to be a single way that's done; it's going to be this interconnected multitude of different modalities. Wow, that is a mouthful. What I mean by that is that I'm expecting that initially, just due to the difficulty of getting the hardware that's really required to do that, there's going to be a resurgence of what you could almost call a public phone booth, but now it's going to be a 3D phone booth. Which is, you know, I'm just spinning freely here: you go into your local Kinko's or whatever. No, they're called FedEx Kinko's, right? Anyway, you go into that place and you sit down in the space and you're surrounded by 3D cameras of some sort and you have some kind of 3D display, maybe a screen with 3D glasses, maybe a headset, who knows? And then you can just dial someone else anywhere, and then you can just go with them into the same shared virtual space and interact with them. And then, of course, for scientists, labs that use VR to do science like ours or others that we collaborate with, they would have, of course, their cave or whatever VR system they have, and that would double as a capture space for telepresence, like our cave does, like my low-cost VR stuff does, like the capture space I just rebuilt in my lab, the one with the three Kinects and the Rift. So they would all be connected to each other. Essentially, you can think of it as the 3D version, or the holographic version, I almost want to say, of Skype, only that it's supported by a broader range of environments, as opposed to being a flat screen plus a webcam. It's going to be all these different modalities. And then, of course, with low-cost stuff like the Rift, and I mean, the Kinect is not exactly expensive, just regular people can have it in their homes, like I have on my desk, and can just participate in this. I see that being really a slowly evolving process, with the end result, and this is maybe not going to happen, who knows, but in terms of the functionality and really the sense of talking to someone in person that you get from using that, which is hard to communicate unless you've tried it, I think this is going to totally supplant how we do Skype or phone calls or video conferences or Google Hangouts right now.

[00:16:33.557] Kent Bye: And maybe you could talk a bit more about the software stack that you have that is running the array of three Kinects. Like, what would somebody need to be able to reproduce that?

[00:16:42.964] Oliver Kreylos: Right. Well, the easiest part to talk about is the hardware: of course, you need some computer, and you need at least one, ideally two or three, Kinects. You need to know how to set it all up and calibrate it. That's currently the hardest part, I would say. I'm working on making that as easy as possible. But it still needs some training, which is why, again, I'm thinking that sort of central phone-booth type of thing would be the first way this is deployed. But anyway, so you have the hardware, you connect it all, and then the software. The biggest problem right now, in many ways, is that my software only runs on Unix-like operating systems. I mean, that's simply because I grew up on Unix. I did all my work on SGIs and, you know, that kind of stuff back in the day. But, of course, most people run things other than Unix. Windows, for example. So that's going to be a practical problem. But let's just say you are adventurous enough to install Ubuntu or whatever on your computer at home. So then essentially you download the software, and it comes in a bunch of components. There's the base software, which is the Vrui toolkit we've been talking about, which is the... I'm sometimes tempted to call it a VR operating system, because it abstracts between all these different types of devices and all these different ways of interacting with them. And that is the glue that holds everything together. Technically speaking, of course, it's not an OS, it's middleware, but oh well. And then on top of that, there's the Kinect package, which is an add-on package that drives the Kinect cameras. Right now, the Kinect is the only 3D camera I support. I'm hoping to support more in the future, specifically the Kinect 2, if that ever gets a proper driver and so forth. And so that is essentially a low-level driver that extracts 3D video from the Kinect, but it taps directly into it via the bare-metal USB interface, so it's not using any third-party libraries. And then it has these higher-level components that can reconstruct and render 3D video, like I've been showing in the most recent video. But it also has the components to store video, so you can essentially take 3D home movies and play them back later. And it can also transmit the video in real time over the internet. And that is how our collaboration infrastructure, which is yet another separate package, works: how you can then work with someone and see their Kinect-based avatar, you know, as close to real time as we can get, even though you're a long distance away, and can interact with them almost as if they were in the same space. We have gotten quite impressive early results out of that, with people almost forgetting that they're not quite in the same space when they're doing that.
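
For context on what "extracting 3D video" from a depth camera involves at the lowest level, here is a minimal sketch of the standard pinhole back-projection from a depth frame to a 3D point cloud. It is not the actual Kinect package from Vrui; the intrinsic parameters and the synthetic depth frame are illustrative assumptions, roughly in the ballpark of the original Kinect's depth camera.

    import numpy as np

    def depth_to_points(depth_mm, fx, fy, cx, cy):
        """Back-project a depth image (millimetres per pixel) into a 3D point cloud
        using a pinhole model of the depth camera's intrinsics."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0          # metres
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]                   # drop pixels with no depth reading

    # Illustrative intrinsics; a real setup would use per-device calibration values.
    fx = fy = 580.0
    cx, cy = 320.0, 240.0
    fake_depth = np.full((480, 640), 2000, dtype=np.uint16)  # stand-in frame: everything 2 m away
    cloud = depth_to_points(fake_depth, fx, fy, cx, cy)
    print(cloud.shape)

A telepresence system then colors these points from the RGB camera, compresses the frames, and streams them to the remote side, which re-renders them as the "pseudo-holographic" avatar.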

[00:19:08.188] Kent Bye: Do you have any insight in terms of the version 2 of the Kinect as to whether or not that's going to even be available to run on a Linux environment? Because I've heard that it's only going to be Windows 8, but I'm not sure that's true.

[00:19:20.662] Oliver Kreylos: Yeah, that is totally up in the air. So I'm just freely speculating in whatever I'm saying right now. For the Kinect 2 for PC, what they call the Kinect for Windows, the official SDK is Windows 8 only. That much I know. I think we ordered one of those through a grant we have, but it hasn't gotten here yet as part of the beta program. So once it gets here, I'm going to, of course, look at it. But the same thing in a way happened with the original Kinect: they had the Kinect for Xbox and they had the Kinect for Windows, and the Kinect for Windows was only on Windows through their SDK and was also significantly more expensive than the Kinect for Xbox, even though hardware-wise it was exactly the same thing. With the Kinect 2 it's like that again, only that so far I haven't gotten any reports that it has been hacked, so to speak, that the Kinect 2 for Xbox has been opened up for use on a computer, which, if you remember, with the original Kinect took all of two days to happen. Not with this one. I think this time they didn't accidentally forget to put some encryption or obfuscation onto the USB port, which would explain that, and which of course would make it much more difficult to get Linux support. So we are looking at a workaround right now, because the Kinect 2 is a huge improvement in terms of 3D video quality over the Kinect 1. Now keep in mind I haven't actually seen it firsthand yet, but just from seeing, you know, leaked shots and what people have done with it, and from the specs and from the technology, it's going to be a great improvement. So I'm definitely looking forward to using that. And the workaround we have in mind right now is, because everything is already client-server based, we would just, you know, bite the bullet, so to speak, and run a Windows server, and then just capture the 3D video through that, and then send it in our wire protocol over the network to whatever other computers we have that would render it. It's definitely not an ideal situation, but it's going to get us started. And I'm really hoping that at some point somebody's going to break the protocol. I'm not really good at that, which is why I haven't even attempted doing that. Back in the day with the original Kinect, it was someone else who extracted the USB protocol messages from the Kinect. It was Héctor Martín, the Spanish guy who was the first and who won the Adafruit prize back then. And so the moment he said that he'd done it, I just jumped onto it and wrote the driver myself, but he had done the legwork. So I'm hoping that someone else is going to do the legwork again this time so I can just jump on it. Because that's, yeah, that's really not my expertise. There are other people who are much better at that stuff than I am.

[00:21:39.177] Kent Bye: Yeah, maybe you could talk a bit about the experimentation that you've done in terms of using the Kinects to be able to render out a fuzzy 3D version of your hands and your limbs and what that has done for your sense of presence.

[00:21:56.849] Oliver Kreylos: Yeah, that's interesting. I haven't really done very many experiments with that particular angle. And the reason is that, funny as it sounds, I've been doing VR since 1998, but I have never really worked with head-mounted displays. Because, like I said, I was thrown into an environment where we had a screen-based VR system, that workbench I mentioned. And then when I worked at Berkeley Lab, they had two screen-based environments. And then the cave, of course, is screen-based. So what all of these have in common is that your vision is not obstructed, meaning that you do see your real body interacting with the virtual objects. And that automatically creates a very strong sense of really being in the space and of these objects being real. The feedback we have gotten from people using any of our systems is a very strong indication of that, with them literally feeling the objects in the space, even though obviously you can't. And so the first head-mounted display that I've really tried, or at least that I've really done some serious work with, is the Rift. And the difference, of course, is that it completely obstructs your vision, so you have to have some way of getting your body in there, or otherwise you feel like this disconnected ghost floating around and you have a much harder time using it. So with that, having the 3D video of yourself in there, to me, was definitely a great help. I mean, I did that pretty much immediately, so I didn't really play with it much before I did it, so I don't have a fair comparison, because I came in biased, so to speak. But from having done it more now, and also now having exposed other people to it, who didn't have much VR experience, and who immediately jumped in seeing their bodies, they react to it very, very favorably, with really a strong sense of being in that space. And sometimes, like I point out in the video at one point, I catch myself doing things that make no sense, because I'm in a virtual space where real-world rules don't apply, but I still do them, just because my brain truly, at a subconscious level, believes that I'm in that space. And I guess that's really the definition of presence in a nutshell. And I'm not a hardcore VR researcher. I haven't really looked into the psychological and psycho-visual effects of all of that. But I think that's really what they mean when they talk about presence and achieving it: that your brain is fooled to the point that you intentionally don't bump into these virtual objects, even though you clearly know that they're not there. So it was really a very obvious thing to bring the 3D video in as a lifelike avatar from very early on. The one thing that I did notice, though, is that the current dev kit of the Rift, the dev kit 1, doesn't have positional head tracking. About a year ago I made a video about why I thought that was going to be a problem, and I think most of that has come true. And so, until now, I have primarily used the Rift in its native mode, meaning without positional head tracking, because I didn't want to get ahead of the game, so to speak. But for that capture space I filmed for the video I did, I put the positional head tracker on the Rift, and I have to say the difference is, of course, I mean I should have expected that, but still, the difference is quite astonishing. At that point it really becomes a physical space that you walk through. I had done these sort of ad hoc comparisons. In the cave, we have CAVE Quake III Arena, because everybody does.
It's like the Hello World of VR. And when I play with it in the cave, I get a very, very strong sense of being there. My body reacts in all the ways it would in a normal space. I'm slightly afraid of heights, and it really comes to the foreground there. And previously with head-mounted displays, where I've tried them, and even with the Rift, I never got that feeling. But then the moment I threw in the head tracking, of course, that changed, and then it was a real space. So, of course, everybody knows that, and that is why the DevKit 2 and the consumer version will have positional head tracking. But still, even for me, it was a surprise, really, how much of a difference it makes.

[00:25:31.471] Kent Bye: So in the Vrui toolkit that you have, I noticed that a lot of times you're using these Hydra controllers, and I'm wondering if there are going to be any other ways of interacting with that, or if you have to have something like a Hydra or a STEM in order to really be immersed in these environments.

[00:25:50.094] Oliver Kreylos: Yeah, I definitely think so. I mean, you can use the software with a mouse and a keyboard, or with a joystick, or with one of those Spaceballs, and, you know, it works. But just in terms of usability, in terms of effectiveness, it is, I would say, an order of magnitude worse than using a proper six-degree-of-freedom, meaning position- and orientation-tracked, input device. And the Razer Hydra is, for the price, a really, really good device. It has all kinds of problems, as I've talked about at length. But I have to say, it gets the job done. And so I really think you need something like that. And whether that is going to be a Hydra, or the STEM as a follow-up to the Hydra, or a hacked and augmented Wii controller, or a hacked and augmented PlayStation Move, which actually looks really promising right now. I'm working with one right now as we speak. Well, not literally as we speak, but you know. I think there's suddenly going to be a market for these, because people will soon figure out that you really need something like that to meaningfully do the things you want to do in VR. There's a lot of negative commentary about that: oh, you know, I don't really want a tracked input device, yada, yada. And I think that's because people can't really imagine at this point, just because nobody's ever seen anything like this, that the things you want to do once you have VR are very different from the things you want to do on your computer right now, just because right now you can't do them. So the moment the Rifts get out there, I predict there's going to be a huge demand for these kinds of input devices. And that's going to be really interesting, because the reason why the Hydra failed blatantly in the market is that there was just no demand, because there was no infrastructure for it to drop into. If the Hydra had come two years later, or if the Rift had come two years earlier, things would have turned out very differently. And I think these devices are going to, well, for one, have a resurgence. I mean, they have been around for a while, but there's also going to be really this cottage industry of third-party input devices that are going to crop up all over the place. And then, of course, you'd have to have software to support all of them in a unified manner, which people, and I mean developers right now, are not really thinking about. And so then maybe things like Vrui or other VR toolkits will suddenly, not necessarily be used as they are, but will, I hope, you know, show a little bit of the way of how you can deal with these heterogeneous devices in a unified way as far as the application software is concerned. Which is another reason why I'm doing all this stuff right now, because I hope that developers right now won't repeat the mistakes that the VR community has made, you know, over the last 20 years, which is ignoring the input side of VR, or not ignoring it exactly, but not treating it properly, and therefore creating VR software that wasn't really all that useful because you couldn't do things with it that you wanted to do. And I hope that it's going to be different this time.
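
To make the idea of supporting heterogeneous input devices in a unified manner slightly more concrete, here is a minimal sketch of one possible abstraction: the application only ever sees a 6-DOF pose plus logical buttons, and each physical device sits behind a thin adapter. This is only an illustration of the concept; Vrui's actual device layer is C++ middleware and far more sophisticated, and all class and method names here are made up.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Pose:
        position: np.ndarray                         # 3-vector, tracking-space metres
        orientation: np.ndarray                      # unit quaternion (w, x, y, z)
        buttons: dict = field(default_factory=dict)  # logical button name -> pressed?

    class TrackedDevice(ABC):
        """All the application ever sees: a 6-DOF pose plus logical buttons,
        no matter which physical controller produced it."""
        @abstractmethod
        def poll(self) -> Pose: ...

    class HydraDevice(TrackedDevice):
        def poll(self) -> Pose:
            # a real adapter would read from the Hydra driver here
            return Pose(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), {"trigger": False})

    class MoveDevice(TrackedDevice):
        def poll(self) -> Pose:
            # a real adapter would read from a PlayStation Move tracker here
            return Pose(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), {"trigger": False})

    def application_frame(device: TrackedDevice) -> None:
        # application logic depends only on Pose, so swapping hardware needs no changes here
        pose = device.poll()
        print(pose.position, pose.buttons)

    application_frame(HydraDevice())
    application_frame(MoveDevice())

The point of the adapter layer is exactly what is described above: the application code never has to know whether the pose came from a Hydra, a Move, or some future tracked controller.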

[00:28:31.574] Kent Bye: Yeah, for sure. One of the things that I really noticed going through your blog and reading a lot of your posts is that your review of the Oculus Rift that you did, over a year ago now, I guess, it seems like a lot of the things that you were pointing out have since come to pass, and it was almost like a very prophetic "here's what's not right," and then it seems like a lot of those things are kind of coming out in the next Rift. So I'm really curious if you can comment on your first experience of the Rift and some of those things, and where you see some of those gaps in the future DK2, what is left yet to be filled.

[00:29:07.565] Oliver Kreylos: Yeah, that was really a funny thing; my tea leaves must have been right. So no, I mean, I didn't really anticipate any of that. What happened is that a good friend of mine was one of the early Kickstarter backers, and so he got his Rift dev kit very early. And, you know, I've been working on this for a long time. I was aware of the Rift before it came out, which is why I made that one video about head-mounted displays before it came out. So anyway, he invited me over and was like, okay, you've got to check this out. So I went over there, and we just had a go at it for a couple of hours, and it was really quite amazing, because there were so many things that they had done right with the Rift, which is why I then decided to write at length about this and talk about what they did right and then also the few things that they didn't quite do right. And I think that's what really resonated with people. And I also was trying, of course, to be constructive about it, saying, well, you know, this is not quite working right now, but this is how you should do it. And I wasn't really saying anything that was a breakthrough or a super deep insight, because most of these things were just, you know, known to people who had been working with HMDs for a while. And I was not one of those people, to be quite honest. Like I said, my background was in screen-based VR. And so it's not really a surprise that many of those things were implemented, because Oculus themselves were already planning those things way before I ever got my hands on the prototype. But then later on, there were some funny coincidences, like when the first version of Vrui I released for the Rift had drift correction using the magnetometers in there, and the official SDK didn't do it at the time. And then literally a week later, they pushed out a new SDK version which did magnetic drift correction. That was funny. And then recently, I wrote a simulator for the Rift and talked at length about the optical properties and lens correction, and I said that, well, you know, the current SDK doesn't really have a way of dealing with this and that circumstance. And then again, two weeks later, boom, new SDK version that does exactly that. So that was really quite funny, I have to say. It's almost like they're reading my blog, which they might be doing, I don't know. But I think the bottom line is that I wasn't really being prophetic as much, because that would be really weird. I was just, like I said, saying things that people inside the VR community had been talking about for a long time, but because nobody had been paying any attention to the VR community for about 20 years, people outside might just not have been aware of those things. And so it sounded like I was foretelling the future, when I don't think I actually was.

[00:31:34.642] Kent Bye: Great, and finally, I'm really curious about any projects that you have coming up that you want to mention, and how is the best way for people to find you on the web?

[00:31:43.915] Oliver Kreylos: Well, let me start with finding me on the web. I mean, if you Google for Kreylos, fortunately, my name Googles very easily, so you come up with a bunch of stuff that I'm doing. And then, of course, the blog that I'm using to sort of do public outreach about the things I'm doing is Doc-Ok.org. And that is, you know, my own little slice of the blogosphere, which is slowly, slowly gaining a bit of an audience. But of course, it's kind of fringy. And so then in terms of projects, well, I mentioned, I think, in one of my posts that recently I got involved in this experimental theatre thing, where we did really a bunch of tele-collaboration by having three capture sites, like the one I show in the video, meaning three sets of Oculus Rift and two Kinects per capture site, and we just had actors acting with each other. So that was really a clear application of the larger strategic things I'm working on right now. So the video I just released yesterday is pretty much just a warm-up for the next one, which I'm trying to find a crew for right now, which is going to be about the remote collaboration, because I have had a few things, videos and so forth, about that, but never really anything that showed it very well, never anything that reached an audience. So I'm going to try to do it right this time. So there will be another video, hopefully in a couple of days, really showing what this is meant to be for, meaning holographic telepresence, if I want to use that word. I always get dinged for calling these things holographic, but then, if it quacks like a duck, you know. Anyway, so that's one thing coming up. And the reason why I haven't talked about this theatre thing at length is because it was really very experimental. It was the PhD research of a guy who is in our theatre and dance department. So it's not something that I would say can easily be understood out of context, which is why I haven't, you know, talked about it and told everybody about it. But we are going to do something else in the near future that is simpler from their point of view, and I think, for my purposes, it will get the point across much, much better. And then I think I will just yell from the rooftops about it. But otherwise, there are just a lot of projects I'm working on, in parallel almost, especially on the visualization applications that are sitting on top of all of this, meaning the point cloud viewers and the 3D volumetric viewers and the molecular construction and all those things we are doing, and the virtual globes, of course. A lot of that is always continuously being worked on and being improved, but it just doesn't make for a very splashy thing. So sometimes I have to do things that are, like you said, not really related to my day job, but still in some way they are, to get people interested in the kind of thing we are doing generally.

[00:34:10.178] Kent Bye: Great. Well, thank you so much for your time today. You're welcome. And I look forward to seeing what else you put out there for the VR community.

[00:34:18.664] Oliver Kreylos: Well, I'm trying to do my best. Keep an eye out.

[00:34:22.767] Kent Bye: Great. So thanks a lot.
