#545: AxonVR is Building a Generalized Haptic Display

AxonVR was awarded US Patent No. 9,652,037 for a “whole-body human-computer interface” on May 16th, which covers an external exoskeleton as well as a generalized haptic display built on microfluidic technology. I had a chance to demo AxonVR’s HaptX™ haptic display, which uses a “fluidic distribution laminate” with channels and actuators to form a fluidic integrated circuit of sorts that can simulate the variable stiffness and friction of materials.

At GDC, I stuck my hand into a 3-foot cube device with my palm facing upward. I could drop virtual objects into my hand, and an array of tactile pixels simulated the size, shape, weight, texture, and temperature of these virtual objects. The virtual spider in my hand was the most convincing demo, as the visual feedback helped convince my brain that I was holding the virtual object. Most of the sensations were focused on the palm of the hand, and the fidelity was not high enough to provide convincing feedback to my fingertips. The temperature demos were also impressive, but were a large contributor to the bulkiness and size of the demo. They’re in the process of miniaturizing their system and integrating it with an exoskeleton for more force feedback, and the temperature features are unlikely to be integrated into the mobile implementations of their technology.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to talk with AxonVR CEO Jake Rubin about the process of creating a generalized haptic device, their plans for an exoskeleton for force feedback, and how they’re creating tactile pixels to simulate a cutaneous sensation of different shapes and texture properties. Rubin said that the Experiential Age only has one end point, and that’s full immersion. In order to create something like the Holodeck, Rubin thinks that a generalized haptic device will unlock an infinite array of applications and experiences, analogous to what general-purpose computing devices have enabled. AxonVR is not a system that’s going to be ready for consumer home applications any time soon, but their microfluidic approach to haptics is a foundational technology that is going to be proven out in simulation training, engineering design, and digital out-of-home entertainment applications.

https://www.youtube.com/watch?v=wPDtXnE9crg

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So virtual reality at its essence is trying to hack all of your different senses. And with that, we're trying to create what would feel like the holodeck, where you're completely and fully immersed within virtual reality. Now, we are able to do the vision and the sound really well. We're super dialed in there. But touch and haptics is the one area that is still really low resolution, low fidelity, and kind of in the uncanny valley whenever you start to try to do something that's more sophisticated. So on today's episode, I'm going to be talking to Jake Rubin of Axon VR. At GDC this year, I had a chance to try out their generalized haptic solution, which was a giant box on a table and you stick your hand into it and it gives you all these sensations. And so he talks about where it's at now and where they're going and trying to miniaturize it and create exoskeletons and trying to really fully hack your mind when it comes to the sense of touch and haptics. So, that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. The Voices of VR is a gift to you and the rest of the VR community. It's part of my superpower to go to all of these different events, to have all the different experiences and talk to all the different people, to capture the latest and greatest innovations that's happening in the VR community and to share it with you so that you can be inspired to build the future that we all want to have with these new immersive technologies. So you can support me on this journey of capturing and sharing all this knowledge by providing your own gift. You can donate today at patreon.com slash Voices of VR. So this interview with Jake happened during GDC on Thursday, March 2, 2017 in San Francisco, California. So with that, let's go ahead and dive right in.

[00:02:09.750] Jake Rubin: I'm Jake Rubin. I'm the founder and CEO of Axon VR, and we are working to bring realistic touch to virtual reality for your whole body.

[00:02:17.325] Kent Bye: Okay, so we're in the St. Regis Hotel here in San Francisco, GDC is happening, and I'm sitting here and I'm looking at this table that has this huge box on top of it with computers and all sorts of other things. Maybe you could describe to me what this contraption is here that we're looking at.

[00:02:34.430] Jake Rubin: Yeah, so what you're looking at is a prototype that we built a few months ago to show off our HaptX Skin material, which is a microfluidic smart textile that gives you high-resolution tactile and thermal sensation on your hand. So you stick your hand in this box of Doom. As you might imagine, we get a lot of Doom references. And you can actually feel the sensation of objects on your hand, shape, size, texture, temperature, weight, as you just tried.

[00:02:59.484] Kent Bye: OK, so how did this come about? Where did this idea begin for you?

[00:03:04.391] Jake Rubin: You know, I've been fascinated with VR as long as I can remember. I started working on this first about five or six years ago, and really initially for the first year and a half or so I worked on it myself, trying to get the vision out of my head and down onto paper and really figure out how I wanted to approach this. I knew that you know, for me, my dream has always been fully immersive VR. I think like a lot of other people in this industry, you know, I've dreamed of the, pick your favorite reference, the Holodeck, or the Metaverse, you know, or the Matrix, although that's not always a positive one. But I've dreamed about getting to that fully immersive VR. And as I really started to dig into the literature, read everything I could get my hands on, talked to people, you know, in the industry, and what quickly became apparent is, and this was actually before the Oculus Kickstarter, so this was early days for the modern VR movement.

[00:03:50.823] Kent Bye: So you started this way before even VR came onto the scene then?

[00:03:53.503] Jake Rubin: yeah, this was back when VR was, you know, quote, dead. I mean, it was still being used for industrial applications, enterprise, but this was pre the resurgence of consumer VR. So, you know, anyway, even back then, when I looked at the literature, it was clear that the audiovisual technology was there. And for me, as I looked deeper into it, it became very clear that the missing piece was touch, you know, was haptics. just, you know, no one had really figured out how to do convincing and realistic touch. And, you know, touch is such a critical part of how we interact with the world. It is, in some ways, the most fundamental sense. You know, vision, obviously, is the largest part of our brain. It's the first thing that we think of. But touch is what grounds us. We were just talking about embodiment earlier. It's what lets us know things are real, what makes us feel like we're in a place. So, for me, it was, you know, both the biggest missing piece and the thing that I was most fascinated with. So, I, you know, decided to throw myself into figuring out how we could do haptics. And from the beginning it was, you know, I didn't want to just develop something that would be useful for one piece of the puzzle. I really wanted to figure out if we could build a technology that would scale to the whole body and encompass all of the parts of touch. Touch is such a complex beast. So, anyway, I worked on it myself for about a year and a half, and I got to the point where I had the basic idea, the basic vision for what would become Axon VR down on paper. I had computer models, I'd written basically a technical white paper, and I went looking for someone with the engineering chops to help me really make my vision a reality. And, you know, again, I did a bunch of research, I talked to a bunch of people, and I stumbled across the guy who would become my co-founder, Dr. Bob Crockett, who was the Director of General Engineering at Cal Poly. And I, you know, cold called him one day out of the blue, and I think he thought I was crazy at first. In fact, I know he thought I was crazy at first because he says it on a regular basis. And, you know, here I was, some random 20-year-old kid, you know, just calling him and pitching basically a holodeck. But, you know, I think I intrigued him enough that he was coming up to Seattle for a conference, and he agreed to sit down and have breakfast with me. you know, that breakfast ended up turning into a four-hour conversation. And at the end of that period, he said, okay, I'll read your white paper. He read the white paper. And he said, you know, after that, kind of, I still don't think this is going to work, but it's interesting enough. I want to investigate this. I'll do some work for you in my lab. And you kind of look at these fundamental technologies. And One thing led to another, and after six months, he ended up jumping in with both feet and co-founding the company with me and throwing the full resources of his lab behind building this vision. And that was really what allowed us to get from idea to something physical, something that you could try.

[00:06:17.922] Kent Bye: Yeah, so I just want to sort of take a step back and talk a bit about my experience and how I think of this and to contextualize it a little bit. So I feel like virtual reality, on the long scale of the technology, which could be anywhere from 50 to 500 years, is essentially hacking the senses. We're starting with the vision. We're starting with sound. And the haptics right now are super low fidelity. They're just basically rumble. Or you may have a SubPac. You know, it's essentially the earth element. So full room scale, being able to move around, is engaging the body in a certain way, right? But the problem is, when I've covered this issue of haptics and haptic displays, is that right now it's surely in the uncanny valley, you know, it's low fidelity and it doesn't necessarily feel real yet. And so, in talking to a number of different researchers, there's the skin, all the different varieties of skin that we have all over our body, detecting temperature. Essentially, the vision of Ready Player One is a haptic suit, which is essentially like blocking out your entire body from the real world so that you can simulate it synthetically. So that's maybe the long-term vision of where this might go. But right now, the fingertips are some of the most sensitive parts. And I think some people have been trying to kind of stimulate that part of the hand, but yet, that is even more uncanny than the other parts. So you're starting with the parts of the hand that are less sensitive. So the fingertips are the most sensitive, but the rest of the hand is not as sensitive. And so part of what I felt when I was playing with your system here was that these large pins pushing down on my hand, when I felt them in the fingertips, it was like, okay, that's not real. But when I would drop things in the palm of my hand, then I could start to see it. Probably one of the most convincing ones was when you had something moving dynamically, like a spider moving around your hand. So it's able to push on your hand and kind of stimulate different parts of your hand. And if I were to shut my eyes, it wouldn't be anything. But in VR, because the visual sense dominates so much, it was like tricking my mind, sort of. I'm sitting here still in a chair with my hand in this giant box and it's basically stimulating the least sensitive parts of my hand, and there's temperature, which I think the temperature part is also probably one of the more convincing things, because when I see the color coordinated, the blue with cold and red with hot, then my mind is saying, okay, I've had a direct experience of touching a hot iron and learning that, okay, that's hot, don't touch it. So when I'm putting these objects in my hand, they're changing colors and I'm feeling it, then it also starts to trick my mind. So with that, maybe you could talk a bit about your vision for how you were architecting this and where you see it going in the future.

[00:09:02.356] Jake Rubin: Sure, yeah. So, you know, a note on what you tried. Right now we have over 200 points of sensation on the hand. Those are also all high displacement, so several millimeters of displacement into the skin, which allows us to, you know, simulate a lot more than just vibration. We can simulate static contact, shape, and weight. And we're not quite there yet in terms of the resolution that we ultimately want to achieve. That's determined by what we call the two-point threshold. So basically, it's a common physiological measurement of the resolution of your skin: how far apart do two points have to be before you can't distinguish between two and one? And as you mentioned, on the palm that's a larger figure. It's about six, seven millimeters. On the fingertips, it's one or two millimeters. So we're pretty close in this prototype to the two-point threshold on the palm, which is why your brain starts to say, oh, it's not discrete points, it's actually a uniform object. On the fingertips, we're not quite there yet, but we have doubled the resolution in our most recent prototypes, and that's getting a lot closer to that two-point threshold. We also can basically add smoothing. So what you were feeling was essentially the raw actuator output on your hand. We've since added a layer that smooths that out a little bit. So it trades off point resolution, but you get a smoother sensation, so you don't get kind of the bumpiness that you were feeling. So those are all things that we're doing to take this sensation from what you're feeling, which, you know, again, is by far the most resolution that's ever been achieved in a haptic device like this, to something where you have the level of realism you currently have on the palm, on the fingertips. So as far as the overall architecture and where this is going, we think of touch as having basically four primary channels, analogous to primary colors, that can all be blended together to create sensations. The first of those channels is tactile. Tactile is the sense of pressure distribution on the skin. It's a cutaneous sensation that makes up what we would call probably the most critical part of touch. And that's the main thing that's being stimulated in this demo. That gives you a sense of shape, of large-scale textures. It's involved in almost every haptic interaction. The second is force. So force feedback gives you a sense of forces on your musculoskeletal system and resistances. So that's what lets you know, for example, if you grab an object, that it's rigid or that it's heavy. And then we have vibrotactile, which is a subset of tactile feedback. That is the high-frequency feedback that lets you feel contact transients. So if you tap on something, for example, you can hear the sound of the tap-tap-tap, but you also can feel those high frequencies. It helps let you know that it's a stiff table and not a soft pillow, for example. It also gives you a sense of very fine textures, below about a millimeter, as you run your finger across them. It's actually hypothesized to be why we have fingerprints. Your fingerprints create vibrations, and those are detected by a certain type of receptor in your skin, and in turn that gives you a sense of a very fine texture. Is it rough? Is it smooth? Is it wood grain? You know, so on and so forth. And then lastly is temperature. Is it hot? Is it cold? And also the heat flux. Not just absolute temperature, but how much heat is flowing into and out of your skin.
And that gives you a sense of material properties. So, like the primary colors, any one of these on its own doesn't give you very many options. It's not very convincing. You know, if you just have vibration, which is a common one right now, or if you just have force feedback, you know, your brain is not really going to get tricked. But when you start to combine multiple modalities, especially the force, tactile, and vibration, which are present in almost every realistic interaction, the brain starts to, you know, really believe that you are interacting with real objects. And, you know, like colors, you blend one color, you get one color out, you blend two colors, and you have a few more options, you blend three or four colors, and suddenly you have the entire, you know, massive color spectrum available. So, we've observed in our experiments that when you combine multiple channels, it's a 1 plus 1 equals 10 type of effect. You know, you get exponential improvement in the level of realism and the level of natural engagement. So for our technology platform, you know, there's really two primary components that encompass those modalities. There's our HaptX Skin material. This is basically a microfluidic smart textile. So we have created essentially a fluidic integrated circuit. We have channels and actuators, all of which are baked into this one very thin panel of material. And that means we can create many, many channels of high-resolution, high-displacement actuation in a very thin package. You can think of those kind of like tactile pixels. So just the same way that, in the screen behind us, you have visual pixels. They're all very simple. They're just points of light that change color. But when you put enough of them together on a grid, you can create all kinds of visual images, anything your eye can see, basically. Same thing with the pixels in our haptic skin. When you put this material against your skin and you turn these on and off, you create different pressures. You can create a vast variety of sensations, texture, pressure, shape, so on and so forth. Our skin also includes actuators that give high-frequency vibration feedback, as well as, in the version you saw today, thermal feedback that's accomplished by moving very small amounts of heated and cooled water around. And the last piece comes from our HaptX Exoskeleton technology. And this gives you actual resistance and forces on your body. And so with the skin material alone, say you had it on your hand, for example, as you did in our demo, you get very good cutaneous sensation. You get very good skin sensation. So you can feel the shape and the texture and the surface properties of an object. But that won't stop your fingers from, say, going through it if you grab it. So what our exoskeleton does is it applies those gross forces to your fingers or to other parts of your body and actually makes it so that those objects feel stiff and feel real. So when you put those two technologies together, our skin on our skeleton, as well as our SDK, which allows developers access to those technologies and allows them to create content, you have a full platform for realistic touch that we believe will eventually scale to the full body.
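
To put those two-point threshold figures in rough perspective, here is a quick back-of-the-envelope sketch. The 7 mm and 2 mm spacings are the palm and fingertip thresholds Rubin cites above; the patch dimensions are assumptions made up for illustration, not AxonVR specifications.

```cpp
#include <cmath>
#include <cstdio>

// Rough estimate of how many "tactile pixels" it takes to reach the two-point
// threshold over a patch of skin. Spacing values come from the thresholds
// mentioned in the interview; patch sizes are illustrative assumptions.
static int pointsNeeded(double widthMm, double heightMm, double spacingMm) {
    int cols = static_cast<int>(std::ceil(widthMm / spacingMm));
    int rows = static_cast<int>(std::ceil(heightMm / spacingMm));
    return cols * rows;
}

int main() {
    // Assume roughly an 80 x 80 mm palm patch and a 15 x 15 mm fingertip pad.
    std::printf("palm at 7 mm spacing:      ~%d points\n", pointsNeeded(80, 80, 7));   // ~144
    std::printf("palm at 2 mm spacing:      ~%d points\n", pointsNeeded(80, 80, 2));   // ~1600
    std::printf("fingertip at 2 mm spacing: ~%d points\n", pointsNeeded(15, 15, 2));   // ~64
    return 0;
}
```

Under those assumptions, a couple hundred points is roughly enough to sit near the two-point threshold across the palm, which is about where the prototype is, while matching fingertip acuity over the same area would take an order of magnitude more tactile pixels, which is why the fingertips still feel less convincing.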

[00:14:34.065] Kent Bye: Wow, that is quite an ambitious endeavor that you've undertaken here. So I went to IEEE VR the first time in 2015. I was in Arles, France, and I was talking to people there about haptics, essentially. And the way it was described to me is that most haptic situations are use case specific. So coming up with a generalized haptic solution is actually a very difficult problem. Most of what people have been doing up to this point is creating these Haption devices, which are these very expensive haptic machines that simulate force feedback, range from $20,000 to $50,000, where you can just imagine if you're a doctor and you need to learn how to insert a needle and not hit the bone, you could simulate that within the force feedback profile of all the different tissue so that you could be in a VR experience and be pushing a needle into this haptic device that is simulating what you're seeing visually and your feedback. And so you have to learn how to actually do that. But you're training on this virtual reality machine. So if you mess up, you're not like actually poking somebody's bone with your needle. So most of the haptic devices that I've heard in the context are very specific. If you want to train a firefighter, then actually put them in the suit, give them a fire hose, and give them the force feedback that's required in that, and you give them the visual feedback, and you're able to simulate that training situation. So what I'm seeing here is kind of like the first cut of a generalized haptic display, and it's like this very huge box that seems extremely impractical in terms of what people may want this for. I have a hard time imagining a VR enthusiast using it because it just seems so impractical, but also the content would be hard to develop for it that would be compelling. So I'm just curious to hear from your perspective, like the first initial use cases that you see for some of this generalized haptic device solution that you have here.

[00:16:26.919] Jake Rubin: Yes. The first thing I would point out is this prototype that you tried is about nine months old at this point. And we're not ready to share details more publicly about our products, but we do have an enterprise product that we expect to ship this year in a much smaller form factor with a wearable device. So again, I can't talk much more about that, but the technology we have in our lab is a big step above what you tried today. As far as use cases, we have three primary use cases we're targeting, we're talking to, we have active conversations with customers in all three. The first is location-based entertainment, the second is design and manufacturing, and the third is training and simulation. Now I want to speak a little bit more to the issue you raised of special purpose versus general purpose haptics. You know, what you said is accurate, this is the first early cut at a general purpose haptics platform. And most of our customers right now are using special-purpose haptics, either the kind of single-purpose active devices you were talking about, like a surgical simulator, or passive haptics, where you're building a mock-up of a car or a helicopter or something else that you actually want to interact with. And, you know, that's fine, to a certain point, that's how it's been done for the entire history of VR, when you want high-fidelity haptics. But there are huge limitations to that approach that our customers are running into, and that's why they're so interested in the concept of general-purpose haptics, even though it is a hard problem and it will take time to, you know, get this to a mature hardware platform. you know, those limitations are a few things. One, you know, special purpose haptics has a very, you know, a single device is going to have a very, very limited range of applications. So, if you've built a mock-up of a car, for example, if you want to change even slightly how you're interacting with that, whole new piece of hardware. If you want to train many different things, like, for example, the military has to train, you know, hundreds and hundreds and hundreds of different tasks across all kinds of different equipment. And right now they're building physical mock-ups of every single one of those. The cost is off the charts. They're not portable. There's no sense of, you know, a platform of generality. And there are many things, you know, that you simply can't do that way. There's many, many things people want to simulate, people want to experience, that you just can't build a prop of. You know, someone like The Void, for example, is doing very cool things with passive haptics. And that approach is the best way to get the highest level of immersion right now, but ultimately it doesn't scale. They have to build a warehouse, they have to fill it with physical props, and there's a lot of things that you want to experience in that environment, maybe you want to ride a dragon or something, that just can't be done by building a physical prop unless you're going to build a robotic dragon, for example. So, you know, there is a huge demand out there in the marketplace and the conversations we've had with customers, particularly across those three verticals I mentioned, for a general-purpose haptic solution. We know it's a really hard problem. Believe me, we do. And, you know, there's some people that say it's too hard, it'll never be done. We think that's a little bit of a cop-out. You know, the technology is there. We're early in the development path. 
We believe, our customers believe, our investors believe that we're on track in the next couple of years to take this technology, our skin, and our exoskeleton, and to create these general purpose haptic platforms. And maybe the quality is not quite as good right off the bat as some of the special purpose simulators. When we talk to our customers, it doesn't have to be initially. The advantages in cost and generality and immersion of a general purpose platform are so compelling that if it's not quite as good initially, that's fine. And we do believe that this has the potential to get better and better and better. For example, the issue you mentioned resolution. We've already doubled resolution just in the last few months. We expect to double that again soon. We're getting closer and closer to that two-point threshold. You know, it's like early screens. The first screens, you know, you could tell what was going on, but it didn't look realistic. As the resolution increased, as the color depth increased, it got more and more realistic. So it's a really hard problem, but I believe, and this company believes, that the only way to truly get to fully immersive VR, the level of touch that we all want as enthusiasts in the VR community, is to tackle that hard problem and to find a general purpose platform for haptics. An analogy I often use is calculators versus computers. In the early days, computers, you know, almost all electronic devices were special purpose, and people could have just built better and better and better calculators, and today we might have gold-plated giant calculators, but if no one had ever, and initially, you know, PCs were a niche item, particularly home PCs, that didn't necessarily have the performance of any one of those individual special purpose systems. But yet they came to take over the world, because when you have that general purpose platform, suddenly it opens up a virtually infinite array of potential applications. It's the same thing with haptics. As long as we have special purpose haptics, every device would be very, very limited. It'll be expensive. It'll be inaccessible, particularly to consumers. If someone can crack the problem of general purpose haptics, then suddenly we have the same thing that we saw with the PC, where it's just the software that changes, and you can build anything that you want. And last point to address the question you raised is our SDK, which is actually running on that computer right there. We put a lot of effort into making it as easy as possible for developers to create content for our system. So we have a Windows runtime and we have a low-level C++ API, but we also have a plugin right now for Unreal Engine with support for Unity coming soon. And that allows us to add support for our technology to content without actually writing any code. And we can work with existing off-the-shelf 3D assets. So a lot of the things you just tried, for example that apple you just rolled around on your hand, that's simply something we got off the Unreal Engine Marketplace. We made no modifications to it. Our SDK is able, through a proprietary algorithm, to extract data from the textures and from the mesh itself and from the physics engine to create the sensations on your skin. So there are advanced properties you can add like temperature for example that aren't part of the base object's properties and we have a haptic material class to do that. 
But you can just grab a 3D model off the Unreal Engine Store, pull it into the environment, we have a pre-rigged avatar class for a hand that interacts with these objects and automatically sends data back and forth to our system. And so it makes it dead simple to create assets. I often do a demo when people ask about the SDK where I'll go in and pull another asset like a coconut or something out and put it in. But it really is that easy right now.
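
Rubin doesn't show the SDK code itself, and the AxonVR API isn't public, so here is only a rough sketch of the workflow he describes: pull a stock 3D asset into the scene, let the runtime derive tactile data from its mesh and textures, and layer a haptic material on top for properties like temperature that the base asset doesn't carry. Every type, function, and value below is invented for illustration and is not the actual HaptX API.

```cpp
#include <cstdio>
#include <string>

// Hypothetical stand-ins for an engine asset and a haptic runtime. None of
// these names come from the AxonVR SDK; they only sketch the described flow.
struct MeshAsset {
    std::string name;  // e.g. an apple model pulled from a marketplace
};

struct HapticMaterial {
    float temperatureC = 20.0f;  // advanced property not present in the asset
    float stiffness    = 0.5f;   // 0 = soft, 1 = rigid (illustrative scale)
};

class HapticRuntime {
public:
    // A real runtime would extract shape and texture data from the asset
    // itself; the material only supplies the extras.
    void registerObject(const MeshAsset& asset, const HapticMaterial& mat) {
        std::printf("registered '%s' (%.0f C, stiffness %.1f)\n",
                    asset.name.c_str(), mat.temperatureC, mat.stiffness);
    }
};

int main() {
    HapticRuntime runtime;
    MeshAsset apple{"marketplace_apple"};

    HapticMaterial chilled;        // say, an apple straight out of the fridge
    chilled.temperatureC = 8.0f;
    chilled.stiffness    = 0.8f;

    runtime.registerObject(apple, chilled);
    return 0;
}
```

The point of the sketch is the division of labor Rubin describes: geometry and texture come along with the off-the-shelf asset for free, while the haptic material only adds the properties a visual asset can't express.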

[00:22:12.052] Kent Bye: Yeah, one of the things that I'm wondering about, because I have this elemental theory of presence and the earth element is the body. And I think part of getting that sensory input is being able to actually fully move around a room-scale environment and feel the sense of your body and invoking the virtual body ownership illusion, meaning you have your hands tracked, you can see them moving around, and your feet tracked. And so you're in some ways constraining my movement of my body to stimulate different parts of my body. And so I'm wondering, right now you have this huge box and you're talking about the process of miniaturizing it. And I'm just curious if there are things that you have to maybe take out, like the temperature, maybe that's not the most important thing, in order to maintain that ability to have that mobility, to have that embodied presence, but still to have a little bit higher fidelity of haptic feedback. So I guess the larger question is, of all the things that have to be in that box right here, if there are different trade-offs that you have to make to maybe not try to stimulate every one of them, but to be able to have a mobile solution that has that higher resolution, but can be convincing enough when paired with the visual input of virtual reality.

[00:23:22.393] Jake Rubin: Sure, yeah. So the obvious one is thermal. Of the other three channels of touch, they're involved in almost every interaction. So almost every time you reach out and touch a real object, you're going to get some amount of force, tactile, and vibration. Temperature is one that, for most of our customers, is a nice-to-have, but not an essential element. And it's the only one where, in many haptic interactions, if you're not touching something that's noticeably hot or cold, you don't really have a lot of heat flux relative to the skin. You don't really notice it. Yes, we do have a version of our early products that don't have thermal, and we're going to be introducing thermal later. It's a fundamental part of our platform, but it doesn't need to be in our first products, and it's something that by trading that off, you can get a better form factor. Again, I wish I could talk more about our current generation of products. Suffice to say, it's definitely not a box you stick your hand in. We've taken this material, which is only about two millimeters thick, we've integrated it into wearables, as you'll see in our vision. We've made this box, I can say, 26 times smaller. That brings it to a size where it's much more practical. So we're going to be sharing details in the next couple months, sometime later this year, about our first generation of products. We have a number of people already signed up to evaluate those products. expect to hear a lot more soon, but what you saw today really is a pure tech demo of our skin technology and of the kind of sensations we can produce.

[00:24:39.320] Kent Bye: Yeah, so I've been thinking a lot about the future of virtual reality and augmented reality. And I think there's a lot of people right now that say, oh, well, augmented reality is just going to be way bigger. And then I was believing that for a long time. And then I would hear people say, actually, there's way more things you can do in VR and way more things you can explore and do in the social. And so I hear these debates as to whether or not AR is going to come along and ultimately be the huge thing, and, you know, whether VR is the one where I think at some point there's going to be a limitation, and I think that's going to be around the sensory input, the haptics, whether it's the taste or smell. You're taking the first steps there. But, you know, I started to think about augmented reality and mixed reality and the earth element in terms of, like, well, you're actually embodied in your real body and you're able to actually touch things, and you'll be able to really see that, okay, that's there. But then, as you pointed out, there's that dilemma of, like, if you're doing specialized training, then you have to end up building a whole passive haptic feedback model that is then re-creating the entire thing anyway. So if you're going to do that, then why do you need VR? So it seems like I see both arguments as to the future of VR. For me, it's that level of touch and the haptics and all the other senses, that if we're going to really be able to hack our senses in a way that is going to be indistinguishable from reality, then we have to really take care of all those senses. So I'm just curious to hear your thoughts about that. If people bring up the mixed reality, augmented reality arguments that, okay, well, you actually can get a better haptic experience in real life, you just add a virtual layer on top of it. But in VR, you also have the possibility to completely transport people into another world for training scenarios and other things. But I'm just curious to hear some of your thoughts on that.

[00:26:25.525] Jake Rubin: Yeah, so I guess it'd be a two-part answer. One is where we are today, and I think where we are today is there are a lot of different systems that all have different applications. And I love the VR community partially because it's so open and welcoming, and there's a lot of appreciation for what a variety of different companies are doing across VR, AR, MR, passive haptics, active haptics. So today, I think you've really got to take a portfolio approach. There's no one solution that solves everything. What we're trying to do through our vision, if you look at our website and you'll see some renders of our ultimate vision of full-body immersion, is look at what is it going to take to get to the point that you talked about, the holodeck, the full-body immersion embodiment. And that's what we've designed our whole platform around. It's not there yet today. You can't climb into a full-scale station and run around and climb and jump and fly, but every technology that we're showing you, the skin, the exoskeleton, everything that we're building is ultimately designed with that in mind. If something doesn't scale, you know, we don't pursue it. So, you know, that's part of what sets us apart, is as part of our company culture, we are committed to that vision of full immersion. And our technology roadmap is, you know, it's constantly revised as we make new discoveries, we build new things, you know, we're constantly filling in and optimizing the short and midterm. But the long-term endpoint doesn't change. Everything we're building is designed to get us within the next couple of years to the point where you can get full-body haptic sensation, you can run around, you can climb, you can jump, you can cartwheel, because that is ultimately where we're going. You know, you talk a lot about the experience age, it's a concept that we love as well. I firmly believe, and it's part of what gets me up every morning and gets me so excited about this, that we're on the verge of a sixth technology revolution. It's early days yet, but we're moving from the information age, which is all about storing and processing information, transmitting information, to the experience age, which is dominated by the transmission and the simulation and the capture of experiences. I see bits of that throughout basically the entire tech ecosystem, from you know, Snapchat to virtual and augmented reality, it's becoming more and more about experiences. And that only has one endpoint, which is full immersion. And the biggest problem that needs to be solved is touch. So, you know, again, that's what gets me up every day. I know it's not going to happen tomorrow. It's probably not going to happen in a year. It's going to take a few years, but we are on a path to getting there. We've proved out the fundamental technologies. We're actively developing it. We're funded. We've got the partners we need. So I'm very, very excited about where we are. And I think, again, you've got to take a realistic approach and say, it's going to happen in steps. You're going to have one piece of the body. Then you're going to have another piece of the body. You're going to have our product roadmap. Again, I can't share more details. I wish I could. It involves, step by step, building up to that full system. Our first product we're shipping is not going to be this entire system. But everything leads up to that.

[00:29:05.341] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of virtual reality, and what it might be able to enable?

[00:29:12.942] Jake Rubin: How did I know that question was coming? I talked a little bit about it in terms of the experience age, but just to kind of expand a little bit on what that means to me, again, it is that sense of full immersion. And once we have that in place, once you can jump into a VR environment and get that sense of full realism, and I don't want to be unrealistic here and say it has to be 100% of the fidelity of reality. I think it just has to qualitatively have all the pieces of reality in ways that don't break immersion. Walking around, you know, maybe it's a little bit unnatural, but it's good enough that you feel like you can move from place to place. When you're touching things, maybe the resolution's a little bit lower than, you know, the real resolution of touch. Maybe you can't feel the grains of sand on a table or something, but it's good enough that your brain says, oh, that's a table. And when we get there, it will change everything, about everything. You know, medicine, science, industry, entertainment, you know, because human life is experience. And, you know, beyond our basic needs of food and water and shelter, everything that we value is ultimately an experience. And so when you have an experience machine, which is what we're building and what a lot of other people in this community are working toward building, then you can create any of those experiences. And, you know, I think of the sphere of experiences that we all can have on a day-to-day basis, and that's just this perfect tiny little, you know, dot of experiences within this massive volume of experiences that we can imagine that we would want to have. You know, we want to fly, we want to go live on Mars, and that dot in the middle is the experiences that we can practically, feasibly have on a day-to-day basis, limited by economics, by the laws of physics. Virtual reality, augmented reality, you know, ultimately this level of realism removes all those constraints, and so it opens up almost the entirety of that sphere. I think the impact of having that much of a possibility space on humanity is going to be just enormous.

[00:30:59.595] Kent Bye: Awesome. Well, thank you so much. Thank you. So that was Jake Rubin. He is a co-founder of Axon VR, and they're building a generalized haptic display. So I have a number of different takeaways from this interview. First of all, this felt like a revolutionary device. It felt like I was kind of walking into another world and trying something out from the future. Now, that said, it's not there yet. It doesn't feel like it was completely tricking my mind and saying, oh my God, this is it. But I think there's an important theory of evolution when it comes to technology that I've been looking at that's been helping me understand these immersive technologies and the evolution of them since the late 60s. So Simon Wardley has this model of the evolution of technology that has four phases. There's the initial innovation and invention of a technology. Then there are the customized and bespoke systems that are created out of that invention. Then it gets turned into some sort of consumer product or service that you can buy. And then finally, if it's successful enough, it becomes a massively ubiquitous commodity and utility that everybody just has and uses. So if we look at virtual reality, it started back in the 1960s, and it went through this phase of people doing custom-built systems up until the 90s. And then in the 90s, it had this huge explosion of some of the initial first commercial products that were trying to get launched. A lot of those products failed, and then people thought it went away. But actually, a lot of the virtual reality systems were just these custom-built bespoke systems that were being used in enterprise and military training, as well as in the aerospace industry and car companies. So all this was happening kind of behind closed doors up until recently, when you have the first consumer launches of these virtual reality technologies, and now they're on the road towards becoming more and more ubiquitous, starting with mobile phones. And it's going to take a while, anywhere from 9 to 50 years, before virtual reality and augmented reality are at that phase of being a mass ubiquitous commodity that just everybody has. So haptics is at the very beginning of that technology evolution curve. It sounds like with Axon VR, there were some initial innovations that they were then trying to integrate into some of the first custom-built products that are going to be used in one of three major industry verticals. There's training and simulation, which I think is probably the most compelling when it comes to the military actually trying to put money in and invest in these types of products. There's location-based entertainment, which is kind of bootstrapping the technologies that are not ready to be consumerized, but are ready to do these custom-built systems that are too expensive for you to have in your home. And then there's the design and manufacturing applications, where it could be helpful to do ergonomic design or other types of 3D visualizations when, you know, it'd be cheaper to use this type of system rather than actually building a prototype in different situations. So in the process of actually trying out the technology, I feel like the microfluidic technology that they're doing, and the way that Jake is describing it, is that there's this haptic skin material. It's using this microfluidic technology. It's got these, what he calls, fluidic integrated circuits that have channels and actuators.
And from all those channels and actuators, it's able to basically push down and put pressure onto your skin. And the resolution was maybe like 200 points or something at this point. And in the haptics world, there's a way of measuring resolution, which is the two-point resolution. That's essentially like, if you're putting two points on your skin, at what point does those two points start to feel like one point? On your back, it's actually like super big. You can push with two fingers on your back and ask somebody, okay, does that feel like one or two fingers? And they can be fairly far apart and it still feels like one finger. And on your palm, it can be about six or seven millimeters, and on your fingertips, it's about one to two millimeters. The two-point test means that you can trick your palm better than you can trick your fingertips. Some of the haptic devices that I saw at CES were trying to go for the fingertips, and it just feels way uncanny, because it's like, nope, that is nowhere near the resolution that my finger is expecting, which is about one to two millimeters. So one of the most compelling demos that I was able to see in Axon VR, you're essentially putting your hand in this box, face up, and then you have this kind of microfluidic pin cushions that are being pushed down so that in virtual reality you're dropping something into your hand and you feel the shape of it from these different pins that are getting pushed down on your hand. And one of the most compelling demos that I saw was the spider and the deer, which felt like those different points in which the virtual object that was moving around my hand felt like the legs were being represented accurately in my hand. And that was one of the deepest levels of presence that I felt. Now, overall, the Axon VR is completely impractical in terms of having a full embodied experience. You're essentially putting your hand into this giant box, which means that you aren't able to then move your body around. So you're kind of trading off the full expression of your body in order to get haptics in your hand. So I think there might be specific use cases to do that, but I think that the real end game for them is going to be able to miniaturize it into your hand and to be able to actually have potentially an exoskeleton that's giving you that force feedback in combination with some of the vibrotactile feedback that you're able to get with their microfluidic technology. So Jake described that there's like four different channels of haptics and that as you add these different channels together, it's kind of like the different primary colors that are able to create an entire full array of like many thousands of different colors that are even possible. So, you have tactile, which is the sense of pressure on your skin, the force feedback, which is the rigidness and heaviness of objects, the vibro-tactile, which is these high-frequency feedbacks when you're contacting something where you're able to see whether something's soft or stiff, and then you have the temperature, which is the heat flux and the heat flow in and out of your hand. Now the heat gives a very visceral information to your limbic brain. It very much increases the level of immersion and I've seen a number of other haptic approaches that just focus on the heat when it comes to immersion when you're playing these different virtual experiences. 
But heat is also the least portable, so that sounds like it's going to be kind of the first thing to go, because adding hot or cold is not going to be vital for a lot of these immersive experiences. Much better to have a mobile system so that you can focus on the tactile, force feedback, as well as the vibro-tactile. A lot of the haptic systems that you have now are kind of like the rumble, the buzzing controllers, and that's kind of giving you that vibro-tactile. And it actually tricks your brain enough that it does increase the level of immersion. It's also super low fidelity in the sense that you're not getting all the subtle textures or the force feedback on anything. And so the low-fidelity haptic systems are kind of limited in terms of the sense of touch that they can provide. So in the long trajectory of virtual reality, I think that haptics are going to be super important. And this is a really interesting first take at trying to build a generalized haptic device. And their microfluidic technology is worth checking out to see their approach. And I'm personally really curious to see how it miniaturizes, because that's going to be less of a trade-off of completely immobilizing your entire body in order to stimulate your hand. The other thing about this microfluidic technology is that you have this idea of sensory replacement that I talked to David Eagleman about at the Experiential Technology Conference, a few months after I did this interview, back in episode 527. The basic idea is that you could send signals into your body, and as long as you see some sort of visual feedback that is correlated and connected to that, then your brain starts to figure it out. The implication of some of the technology that Axon VR is building is that you can start to do almost like a QR code level of complexity when you have these different arrays of points that are pushing down on your body. If you have a visual signal to that, there could be this process of training your body to slowly learn this kind of parallelized data encoding process in order to take information into your body in a completely new way. So I think that the application of sensory replacement is like this new capability of immersive technology. It'll be a little bit of, okay, how can you start to put this type of technology onto your body and start to stimulate your body in a way that's encoded, such that perhaps your body could start to learn to speak a new language. It kind of looks like a QR code. This is a little theoretical, but based upon what David Eagleman has shown, what he's able to do with his Neosensory vest, I think that some of the technology of AxonVR could actually start to do some really cutting-edge sensory replacement experiments. Axon VR just raised another couple of million dollars. They've got a 130-page patent that they were just granted for a whole array of their technologies, including their microfluidic technology and some of their exoskeleton stuff. So with that, they're kind of on this roadmap of doing the first initial phases of the technology. Like I said, they're moving from the original invention and innovation and then actually productizing it into a system such that they can start to do these custom-built enterprise applications, which is going to bootstrap this technology in order to make it less expensive and eventually be consumerized into a product that we'll be able to have access to in our homes. And I think that is probably going to be a while, at least nine years.
Until then, I think a lot of the training applications that I've seen, like I said, are very specific use cases. They're single-use, and the user experience of those is way better. But the trade-off is, like Jake said, it's kind of the difference between a specialized calculator from the early days of computing and a generalized computing device. Having a generalized haptic display is going to just open up the world to all sorts of immersive experiences, both ones that are possible in real life and a lot that are completely impossible to experience in real life. So that's all that I have for today. I just wanted to thank you for joining me on the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and become a donor. Just a few dollars a month makes a huge difference. So you can donate today at patreon.com slash Voices of VR. Thanks for listening.
