Jules Urbach wants to create a photorealistic metaverse. He's been working on technologies to render digital lightfields, and Otoy's Octane renderer is used everywhere from baking scenes in Unity's real-time game engine to rendering the opening sequence of HBO's Westworld. At GDC, Urbach announced a 100x rendering speed-up enabled by AI and machine learning, which could make real-time rendering of photorealistic scenes possible. Otoy has already figured out how to distribute rendering work across GPUs, and the company recently announced the ERC-20 Render Token in order to distribute rendering jobs to the army of GPU miners who are currently spending their hardware and electricity solving cryptographic puzzles.
I had a chance to catch up with Urbach at GDC, and he always has something to blow my mind. He talks about Light Field Lab's digital lightfield walls, which could literally make the Holodeck possible. A lightfield display that renders lightfields at the right density would literally provide a window into another world, and it could be perceptually indistinguishable from actually being transported to another world.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So I've been doing this series on the blockchain, and one of the companies in the virtual reality space that has launched its own ICO is Otoy, with the Render Token. So the idea is that there are a lot of these different cryptocurrencies like Bitcoin that are solving cryptographic puzzles, which aren't necessarily doing anything that's all that useful. Ethereum came along and came up with this Turing-complete process by which you could actually do real work in the process of mining. And rather than just having your GPUs spin doing this kind of mindless stuff, Otoy was like, hey, you know, we do rendering and we use GPUs all the time. It's a perfect fit for them to try to create this larger ecosystem and market within the cryptocurrency world, to see if they could actually distribute the compute that's out there. So you have things like IPFS, which is potentially going to distribute files out there, handling large files with this kind of peer-to-peer effect. But to actually render these different virtual reality scenes, sometimes you're going to need something a little bit more sophisticated, something like Otoy, to do some of these rendering jobs. A lot of these rendering jobs, I think, are mostly for things that are pre-baked, or for something like a 2D film that's going to be rendered out and shown as a special effect. So the opening sequence for Westworld, for example, uses Otoy's Octane Renderer. And the Octane Renderer has been integrated within Unity. I had a chance to talk to Jules Urbach at GDC this past year, where he had just been announcing some of the amazing new artificial intelligence integrations, where they were getting about a 100 times speed-up using artificial intelligence and machine learning to resolve a lot of the noise that happens when you're doing physically-based rendering. And so with AI thrown in there, they're able to kind of cheat by going about 100 times faster, but still producing something that is just as good. So it's always a pleasure to talk to Jules because he's always thinking about the next iterations of where virtual reality is going. And one of the ways it's going is these digital light fields. And he kind of blew my mind talking about what's coming down the road, which is essentially these technologies that are going to be able to build the holodeck with these digital light field displays that are as big as a wall. So Jules is really betting the company on these immersive futures and the need for this type of distributed computing, because it's possible that all the data centers that are out there at Amazon are going to hit a hard limit on what's going to be required when it comes to computational resources. So we'll cover all that and more on today's episode of the Voices of VR podcast. So this interview with Jules happened on Thursday, March 22nd, 2018 at the Game Developers Conference in San Francisco, California. So with that, let's go ahead and dive right in.
[00:03:04.272] Jules Urbach: I'm Jules Urbach, CEO of Otoy, and our product, Octane Render, is integrated in Unity. Millions of users have it. You may also have seen the opening of Westworld and Altered Carbon; many shows are rendered with Octane. So we have a cinematic GPU path tracer. We've been working for 10 years on getting that cinematic path tracing technology into video games. And I think this GDC, with DirectX ray tracing and some really concerted efforts, we're seeing some convergence around that. And the way that we're going to make this happen in the coming year is not just through faster GPUs and GPU rendering, and partnerships like Unity where we can integrate it in game engines, but AI. Two years ago, I would have been very skeptical that AI could solve some of these hard rendering problems for real time. But as our users are seeing with the release of Octane 4, which is just three or four days old now, AI actually gives a 100x speed-up in a lot of cases, and we can use this to get towards real time, with faster and faster GPUs helping us along the way.
[00:04:00.034] Kent Bye: Yeah, so my understanding is that you do a pass of physically-based rendering, and then you have a lot of noise. And then I guess you keep iterating until you've solved for all the different refractions of the photons flying everywhere. Maybe you could talk a bit about that process, what's happening with each of those iterations, and how you can take AI, which has very discrete inputs and outputs in terms of what you're expecting and what the different iterations are, and use it to fill in the gaps and basically clean up that noise.
[00:04:28.775] Jules Urbach: Well, the reason why there's noise in our render is because our render is literally simulating the laws of physics and light. There's no cheating, there's no calculating shadows and then overlaying that over the CG scene. I mean, it really is shooting photons, and you have to wait for photons to collect and to bounce everywhere to see what the final image looks like. And that is called the rendering equation. So if you follow the book on rendering, literally, you will end up with a perfect scene, but it used to take forever. We accelerated that with Octane 10 years ago on the GPU, and now with AI, we're able to only partially bounce photons around, and the AI is able to figure out, well, you know what, I think I can figure out how this is going to look in the end, and it skips ahead. And it does it in a way that is better than any other trick that we've tried, and it's so compelling and so good that, you know, it's startling. It's startling to us, it's startling to our users, but it also is one of the keys towards understanding how powerful AI can be when it's trained correctly. So rather than coding this AI into the engine, of course there's some programming involved, we basically feed it pairs of scenes: the scene with a few samples, and the same scene with a hundred thousand or a million samples when it's done. Maybe a dozen scenes, not even a hundred, not millions, just enough to give it an idea of these different pieces. And with an understanding of what those differences are, the AI is able to really, from any viewpoint, from any scene, start to fill in those blanks. And it shows you the strengths of AI. And it also applies it in a way where we're getting perfect renders out of our system. It's not this blurry, squishy AI kind of mess, which sometimes happens with AI. I mean, AI can do amazing things, it can select stuff, but this is architecturally perfect visual rendering that gets solved 100 times faster with AI. And it's one of the really interesting use cases that we've come to learn and appreciate and leverage for our product.
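To make the training setup Urbach describes concrete, here is a minimal sketch of that kind of supervised denoiser: a small convolutional network trained on pairs of few-sample renders and their converged, high-sample-count counterparts. This is not Otoy's implementation (Octane's denoiser is built on NVIDIA's GPU libraries); it's an illustrative PyTorch example, and the architecture, names, and numbers in it are assumptions.

```python
# Hypothetical sketch (not Otoy's code): train a small CNN to map
# low-sample-count renders to their converged renders, the kind of
# (noisy, finished) scene pairs described above.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A deliberately small convolutional denoiser, for illustration only."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # Map a noisy render directly to an estimate of the clean image.
        return self.net(noisy)

def train_step(model, optimizer, noisy_batch, converged_batch):
    """One update on (few-sample render, many-sample render) pairs."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(noisy_batch), converged_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# In practice these tensors would come from renders of the training scenes,
# e.g. a handful of samples per pixel vs. 100,000+ samples per pixel.
noisy = torch.rand(4, 3, 128, 128)      # placeholder data
converged = torch.rand(4, 3, 128, 128)  # placeholder data
print(train_step(model, optimizer, noisy, converged))
```

Because the training pairs teach the network what converged glass, hair, and refraction look like, the same weights can then be applied to scenes and viewpoints it has never seen, which is the generality discussed next.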
[00:06:04.878] Kent Bye: Now, does this AI have to be trained on the scene that it's rendering? So, like, if you wanted to render out your specific scene, does it have to be trained on that scene in order to get that speed-up? Or have you essentially created a generalized AI that can take any scene and fill in the gaps and clean up the noise, independent of what you're rendering?
[00:06:23.843] Jules Urbach: Yeah, the magic is that it is very generic. And so the dozen scenes are just examples of glass, and examples of hair, and examples of refraction.
[00:06:32.047] Kent Bye: And this is stuff that you've done, right? You've already trained it, so someone using it doesn't need to go through this training process.
[00:06:37.690] Jules Urbach: So when we released this a few days ago, the users click a box that says AI denoiser, and their scene gets denoised. It's as simple as that. It doesn't matter what viewpoint, doesn't matter what the scene is. There are cases, like we're seeing, where we have to do a little bit more with fog and fur and some things that we haven't given enough examples of to the AI. But for the most part, the scenes that we've shown in the videos are totally new scenes the AI has never seen, and it does a perfect job. It's amazing. So it's very powerful, and the users don't need to train anything. If they find something that doesn't denoise well enough, we take a look at it. Maybe we'll add that a week later into the data set. But nothing is scene-specific or viewpoint-specific.
[00:07:10.823] Kent Bye: So is this using a convolutional neural network that's able to look at the visual stuff? Or is it some combination of a number of different neural network architectures?
[00:07:19.990] Jules Urbach: We're using NVIDIA's CUDA neural network library on the GPU, which CUDA provides for accelerated machine learning. And we have a CPU fallback as well that's written for Haswell CPUs. But it's just generic, run-of-the-mill machine learning. It's just been applied in a very clever and useful way.
[00:07:37.023] Kent Bye: So CUDA, is that part of NVIDIA?
[00:07:39.185] Jules Urbach: Yes, so our renderer today is actually only on NVIDIA hardware, although we just announced that we're developing a whole new framework that allows us to port all this work onto iOS and Metal and Mac, and we were just showing examples of that in the very same announcement video. That's coming in probably around eight or nine months; we'll have versions of that. And NVIDIA has obviously been taking the lead on GPUs and AI, in particular AI on GPUs, for a while. So they really are the best, and 100% of our users are currently on their hardware. But because we have a responsibility to, for example, Unity users, of which there are millions that have an integrated Intel graphics chip, we have to get all this running on there. And so we've shown about 40% of the renderer running there, and, of course, the denoiser does have a CPU fallback. All of that works on other hardware. And we're shipping that this year as well.
[00:08:23.773] Kent Bye: Is there depth information in this? Because I know when you're rendering stuff on a 2D screen, it's basically a slice, where you're taking a picture of a 3D space. And I'm just curious if the AI has to do anything in that volumetric space, or if it's all happening in 2D, on that snapshot.
[00:08:40.564] Jules Urbach: Yeah, it's not 2D. It goes deep into the scene, and that's one of the things that I think makes this work so well: we feed it all the information, not just the depth but material properties too. It has a lot more information than what the human eye is seeing in that viewport, and it is volumetric. I mean, it actually does have the ability to understand where the light rays bounce around, and that's why it's able to denoise really perfect reflections and refractions. That is not something you can easily see in just an image, and a lot of AI denoising work that we've seen in the past doesn't really take that into account, and it gets blurry reflections and blurry refractions, and this one doesn't. And there's a second layer, which I was just showing you before the interview, where we actually track lights in the renderer, and we shoot those rays intelligently with another layer of AI. Combining those together, you end up with amazing results that look basically magical.
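A common way to give a denoiser that extra per-pixel information is to stack auxiliary render buffers (depth, normals, material albedo, and so on) alongside the noisy color image as additional input channels. Again, this is a hedged sketch rather than Octane's actual feature set; the specific buffers and channel layout here are assumptions.

```python
# Hedged illustration: concatenate auxiliary render buffers with the noisy
# color image so a denoiser can reason about geometry and materials, not
# just pixel colors. The choice of buffers is an assumption, not Octane's.
import torch

def assemble_denoiser_input(noisy_rgb, depth, normals, albedo):
    """Stack per-pixel feature buffers along the channel axis.

    noisy_rgb: (B, 3, H, W)  few-sample path-traced color
    depth:     (B, 1, H, W)  camera-space depth
    normals:   (B, 3, H, W)  shading normals
    albedo:    (B, 3, H, W)  first-hit material color
    returns:   (B, 10, H, W)
    """
    return torch.cat([noisy_rgb, depth, normals, albedo], dim=1)

# A denoiser like the earlier sketch would then take in_channels=10
# while still predicting a 3-channel image.
x = assemble_denoiser_input(
    torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64),
    torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(x.shape)  # torch.Size([1, 10, 64, 64])
```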
[00:09:24.682] Kent Bye: So is this something that you could presumably put down into an application-specific integrated circuit, make an ASIC that implements this AI denoiser, so that you could put it on top of, say, an augmented reality head-mounted display and do digital light fields in a volumetric way?
[00:09:42.144] Jules Urbach: Yeah, well, you know, the thing is that the Volta GPU itself has a lot of FP16 processor cores that are designed for deep learning, and they do run much faster. And if you can imagine NVIDIA doing a Tegra chip with Volta, we could leverage that. But there's also, you know, Apple has Core ML, and Huawei is doing accelerated tensor chips, I think, on their phones. And you're seeing that Google has these TPUs on their cloud. So yes, there is definitely ASIC AI hardware that could do this even faster. But it's already really fast. I mean, it only takes a split second. But to get it to be real time at 60 hertz, we probably would want dedicated hardware on a mobile device to do that.
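The hardware angle here, FP16 cores on Volta, Core ML, tensor units on phones, largely comes down to running the trained network in reduced precision. As a hedged sketch (not Otoy's deployment path, which targets CUDA with a Haswell CPU fallback), half-precision inference in PyTorch looks roughly like this:

```python
# Hypothetical sketch: run a denoiser in half precision so FP16/tensor
# hardware can accelerate it, falling back to FP32 on the CPU. Real
# deployments (cuDNN, Core ML, mobile NPUs) have their own paths.
import torch
import torch.nn as nn

def denoise_reduced_precision(model, noisy_rgb):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32  # CPU fallback
    model = model.to(device=device, dtype=dtype).eval()
    with torch.no_grad():
        out = model(noisy_rgb.to(device=device, dtype=dtype))
    return out.float().cpu()  # back to FP32 for display or file output

# Stand-in single-layer "denoiser" just to make the sketch runnable.
dummy = nn.Conv2d(3, 3, 3, padding=1)
print(denoise_reduced_precision(dummy, torch.rand(1, 3, 64, 64)).shape)
```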
[00:10:19.499] Kent Bye: Right now, in both virtual reality and augmented reality, most stuff I think is on a single plane, whether it's an LCD screen or some sort of waveguide. But I think we're moving towards these holographic displays, being able to do volumetric rendering with multifocal displays and digital light fields that maybe have depth map information. And so I'm just curious, from your side and Otoy's, how do you see what you're doing fitting into these next-generation light field displays?
[00:10:52.248] Jules Urbach: The entire reason why the company's doing what it's doing is for light field displays. I think that AR and VR glasses are amazing and they're important for lots of reasons, but they're not necessarily going to be how you experience holographic and volumetric content in your home, in your office. I think that light field displays are not that far off. I mean, super high resolution light field displays. And everything that we're doing, all this work to get AI denoising and super fast path tracing, is all about light field displays, which I think are going to be the future in the 2020s. I really do. So I'm betting the company on that.

Kent Bye: So I think the last time we talked, you had mentioned that there was some sort of light field wall display at some company, and I heard about them. So maybe you could talk a bit about what that is and what they've launched.

Jules Urbach: So they're still in early R&D, but the company is called Light Field Lab, and John Karafin, formerly of Lytro, a brilliant person who I adore, and who I think has absolutely the right idea, is working on exactly that: holographic display panels. The reason why I endorse it is the density is high enough where I can actually see the math and know that this is going to be exactly what we need. And if you can imagine something like that being the size of a wall, then you have a window into any world. You know, I feel like the ability to see something like that in an ambient computing environment, without wearing glasses, is key. I've tried four-ounce AR glasses that are pretty good, but after eight hours, that's exhausting. So I think having something like this would work really well. And I think Light Field Lab is doing amazing stuff. So I can't wait to see how it all turns out. But I think that they're doing great work, and they have the right idea.
[00:12:17.926] Kent Bye: So essentially, you could fill an entire room and create the holodeck.
[00:12:21.450] Jules Urbach: Oh, yes. Absolutely. And that's exactly what everybody wants. And when you do that, then, you know, the question of whether or not you actually need to go someplace becomes really interesting. Because if you're going to a place in the real world and you have the exact same experience other than touch, although I think touch is something that can be solved inside of a holodeck environment, I mean, that means you don't need a car to travel there. There's a lot of things that change fundamentally in society when you can actually have true telepresence that works in this very ambient way without putting on glasses. It's more social. And I think the holodeck is not hundreds of years out. I just think we're going to get there. It's going to be built, and then it's going to be expensive. And then, like 4K TVs, it'll come down in cost a hundredfold, and then it'll be in every home, hopefully by the 2030s.
[00:13:06.089] Kent Bye: Well, I know in the cave environments, there's a place in Germany that had, I think, four or six people in the same cave at the same time, running at a certain frequency. Each person has shutter glasses, which is what allows multiple people to be in the same room. So I guess the question would be, in a situation like this, you could do that for one person if they're tracked, because then you could orient whatever they're seeing. But if there are multiple people in the same co-located space, is there a way to do some sort of shutter-glasses type of thing, where the display runs at a high enough frequency that it gives your mind the ability to get tricked into thinking you're actually in another place?
[00:13:45.752] Jules Urbach: Yeah, so one of the things I showed this morning at our Unity presentation, and I probably should share the video so we can put it on the blog when this goes live, is I used the iPhone X to track my face perfectly, and I had our lightfield viewer, which can basically stream lightfields, set up on a display. I put a Vive tracker on the phone and a Vive tracker on the display, and whether I moved and the phone tracked me without wearing any glasses, or the display moved, it looked perfect. Just one eye, I had to close one eye to make it look really good. But that is the effect you're talking about. And the reason why I set that up was so that I could try to experience how a lightfield display wall would work and look. And it is really compelling. Now, if you did have shutter glasses and you tracked the head, you could obviously build that and do that. But the kind of displays that Light Field Lab would be working on, these true holographic video displays, don't need any of that. I mean, two people can look out of a window and see what's out of a window, and they don't need to do anything fancy or special. You could actually put up a mirror and see reflections correctly out of a light field display. It is really like a window into the world, and I think that makes it very different than the stereo displays or autostereo displays that have come before.
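What Urbach describes with the iPhone X and Vive trackers, re-rendering a scene for a tracked eye so a flat display behaves like a window, is commonly done with a generalized (off-axis) perspective projection built from the display's physical corners and the eye position. Here is a hedged NumPy sketch of that math; the screen dimensions and viewer position are made up for the example, and this is not the code from the demo.

```python
# Hedged sketch of head-tracked "window" rendering: build an asymmetric
# (off-axis) frustum from a tracked eye position and the physical corners
# of the display. Names and dimensions here are illustrative only.
import numpy as np

def off_axis_projection(eye, lower_left, lower_right, upper_left,
                        near=0.01, far=100.0):
    # Orthonormal basis of the screen plane.
    vr = lower_right - lower_left; vr /= np.linalg.norm(vr)  # screen right
    vu = upper_left - lower_left;  vu /= np.linalg.norm(vu)  # screen up
    vn = np.cross(vr, vu);         vn /= np.linalg.norm(vn)  # screen normal

    # Vectors from the eye to the screen corners.
    va, vb, vc = lower_left - eye, lower_right - eye, upper_left - eye
    d = -np.dot(va, vn)             # eye-to-screen distance
    l = np.dot(vr, va) * near / d   # frustum extents at the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # OpenGL-style asymmetric frustum; in a real renderer this is combined
    # with a view matrix that aligns the screen basis and moves the eye
    # to the origin.
    return np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,         -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,         -1.0,                    0.0]])

# A 1 m x 0.6 m screen; viewer half a meter in front, slightly off-center.
P = off_axis_projection(np.array([0.1, 0.0, 0.5]),
                        np.array([-0.5, -0.3, 0.0]),
                        np.array([ 0.5, -0.3, 0.0]),
                        np.array([-0.5,  0.3, 0.0]))
print(np.round(P, 3))
```

Recomputing this projection every frame from the tracked eye position is what keeps the perspective through the "window" correct for a single viewer; a true light field display, as discussed next, sidesteps the per-viewer tracking entirely.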
[00:14:46.753] Kent Bye: How do you not have to account for the perspective as people look around?
[00:14:50.216] Jules Urbach: It shoots every possible ray out of that crazy display. It's so dense that it actually has a light field emitting from it. It's like if you have a window, a glass window, and that surface is basically emitting a light field. Imagine you just replace that with a digital version that spits out exactly the same rays, trillions and trillions of rays. That's what a light field display would do. And you don't have to, it doesn't matter where you are, you'll get the right ray in your eyeball. That's why it's so amazing. Oh my god.
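The "right ray in your eyeball" idea can be made concrete with a toy model: if each pixel of the panel emits many directional samples, then a viewer at any position simply intercepts the sample aimed at their eye, with no tracking or glasses involved. The angular resolution and geometry below are arbitrary assumptions, far coarser than what a real panel of this kind would need.

```python
# Toy illustration of why a dense light field display needs no glasses:
# each pixel emits many directional samples, and every viewer receives
# whichever sample points at their eye. A 16x16 angular grid is assumed
# purely for illustration.
import numpy as np

def angular_sample_for_eye(pixel_pos, eye_pos, angular_res=16, fov_deg=90.0):
    """Return the (u, v) angular bin a given eye sees from a given pixel."""
    d = eye_pos - pixel_pos
    d = d / np.linalg.norm(d)
    half = np.tan(np.radians(fov_deg) / 2)
    # Map the ray direction into the pixel's emission cone (z = panel normal).
    u = (d[0] / d[2] / half + 1) / 2 * (angular_res - 1)
    v = (d[1] / d[2] / half + 1) / 2 * (angular_res - 1)
    return int(round(u)), int(round(v))

# Two viewers at different positions receive different rays from the same
# pixel, so each sees correct parallax without shutter glasses.
pixel = np.array([0.0, 0.0, 0.0])
print(angular_sample_for_eye(pixel, np.array([-0.3, 0.0, 0.6])))
print(angular_sample_for_eye(pixel, np.array([ 0.4, 0.1, 0.6])))
```

Multiply even a modest angular grid by the pixel count of a wall-sized panel and the "trillions and trillions of rays" figure follows, which is why the rendering side needs the kinds of speed-ups discussed earlier.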
[00:15:14.116] Kent Bye: So I didn't realize that. Yeah. So no shutter glasses because it's literally just recreating the light field as if it was coming out of reality.
[00:15:20.738] Jules Urbach: Yes, that's exactly correct. It's a lot of rays. It's a lot of work. That's why it hasn't been done before. That's why we need to come up with all these advancements in rendering. But yes, that is the dream, and that is what it does. Have you seen it? No, it's not ready yet. They're working on it, but I know the density of it. And having done software light field rendering for so long now, I know that's the density we need for this effect to happen. Wow.
[00:15:41.521] Kent Bye: I can't wait to try it out. What do you want to do with that?
[00:15:44.895] Jules Urbach: And I'll live in it. I mean, I live in the office as it is. So once my office turns into the holodeck, I'm good. 97% of my life will be taken care of at that point inside of the holodeck.
[00:15:55.365] Kent Bye: Now, this is interesting. I think this is where I start to disagree a little bit as to whether this is going to be good for... I don't know.
[00:16:01.690] Jules Urbach: So my 97% should not be a guide for other people's lives at all. I'm very different than a lot of people as far as how I spend my time and my energy.
[00:16:10.118] Kent Bye: think that you'd be able to not. So I think that there's a couple of metaphors. One metaphor is the chess centaur, which is that technology is going to be working in collaboration with humanity to the degree to which that there's going to be a chess player who is a human, gets beaten by the AI alone, but yet the AI can be beaten by the human and the computer collaborating and playing together. And then another is that, AI is going to create this super intelligent being and that it's basically going to sort of have power over us and we're going to basically worship this technology as a God. And so I think there's a little bit of like, you know, what does it mean? Are we centered in the human experience or are we centered in like the exalting of technology to the extent to which that we're going to atomize ourselves and to be disconnected from each other and the planet?
[00:17:00.962] Jules Urbach: I have an answer for that, and it's a very binary one, which is: we're humans, and it's all about humans. I mean, AI is a tool. It is incredible, but we should not be worshiping it. And frankly, the human experience is a social experience, and an AI can pretend to be just like us and talk to us and do all this work and invent things that we can't even think of. But we're social creatures, and the humanity and everything that we experience in this sort of spiritual, soulful way is something that an AI may never represent. It could just be a philosophical zombie. You know, that doesn't mean that it can't be doing things that are incredible and that we're not in awe of, but I still think that, at the end of the day, the human experience is an important social part of all this, and any technology that diminishes that or destroys it is a problem. Even if I were to live in the holodeck 97% of the time, I would still be talking to other people. I just wouldn't have to drive. I wouldn't have to spend money on gas. It would just be a simpler life. But still, without other people, what are we doing? Technology should not be used to isolate people. It should really bring us together. And philosophically, deep in my soul, I know that that's an important part of all this. So I think that AI is something that should be used for that purpose, and hopefully can be.
[00:18:09.275] Kent Bye: So we have the lightfield wall display, which I think sounds amazing. Before we have that, there are also all these other displays, other types of light field displays, whether it's Magic Leap or some of these other holographic displays. I'm just curious if you've had a direct experience with the full range of these, and if you can share your phenomenological experience of what each feels like and how it differs from something like HoloLens, which a lot of people may have been able to try.
[00:18:38.128] Jules Urbach: Yeah, I've tried a lot of them, and some of them I can't speak about, but I did try the Avegant Glyph. That was my first light field display experience, and it was really compelling. But the very first thing that I thought of when I was putting it on was that if I had really good eye tracking on the ODG glasses, which I love, I think those things have such high resolution, I could simulate the depth of field effect that you have in the Avegant Glyph with really good eye tracking. Unfortunately, eye tracking inside of a device that small doesn't really seem to be happening. But I will say that my big takeaway is, I mean, I've seen really high-resolution HMDs, and the ODG glasses are really amazing. People come by our booth, they can always check them out. There's no screen door effect. It's beautiful. And they're small. I mean, the newer ones are about eight ounces, and I've tried ones that are even smaller. But I still feel that my experience with HMDs is that they do wear you out. I spend a lot of time in the office, and I spend a lot of time with these things on, and I found that it's just exhausting if they're not really the weight of a pair of glasses. That being said, I feel like ODG is at the best possible resolution. I don't think that a true light field display inside of an HMD is that critical if you've got good eye tracking. If not, it is. And I have not seen some of the things you were talking about, and I've seen ones that are even crazier that people don't even know exist, that have insane resolution, and I don't know when they're coming to market, but it's compelling and awesome. But it also is heavy, and it's not something I would wear eight hours a day.
[00:19:57.462] Kent Bye: Let's talk a little bit about the Render Token, because the blockchain is a huge hype cycle that came through; it might be in the trough of disillusionment at this point. But what is your vision for the Render Token, and how could the blockchain create this decentralized network of people creating the metaverse?
[00:20:16.090] Jules Urbach: Well, you and I are on the same page about this. In fact, we've talked several times about it. You were the one that was talking about IPFS, I think at CES last year. I think that a metaverse should not be controlled by a single entity. It should be decentralized. I don't even think the web is decentralized enough. I mean, you know, links go down, pages can be blocked. But the blockchain is different, and given that there are going to be a lot of eyeballs potentially glued to the metaverse, I don't think there should be a corporate layer between that and the rest of the system. I mean, I think people can build services and other layers around it. So, you know, we have a utility aspect to our token, and in fact it's been going great. We have a lot of people that really love what we're doing, and we launched, I think, right after our last interview. And so we raised a bunch of ETH, and we are in phase one, which is about to launch, for doing these light field render jobs and just any kind of rendering on the cloud. We already have that service running on Amazon, and we run out of GPUs, so we need more. And instead of mining Ethereum and wasting all that energy, you can actually finish other people's render jobs, get paid what we pay Amazon to finish them, and people make more money. And so the first group that really got intrigued, besides our own customers, were miners, who were thinking, well, wait a minute, why am I spending all this GPU power mining Ethereum or Zcash when I could be doing render jobs? And so a lot of them are waiting for us to launch this network to see if that makes them money. And it will, because we really do need more compute power. But the later phases of this system are all about being able to track the creators' IP and their services. And we have an SDK that we just started to roll out. And those things are meant to help us build something more web-like. Because, I mean, the web has just been obliterated by the App Store model and these narrowly focused silos. And we just need something different for the metaverse. A lot of my advice on some of these ideas has come from Brendan Eich, who created JavaScript and founded Mozilla and Firefox. And I think, you know, the web was a great model for what led up to the Facebooks and Amazons and Googles. But in the future, we need something more spatial. And I think that Render and the token system we have, even just the fact that it is counting all those trillions of rays... imagine that all of this work really goes into creating media and content for these lightfield displays we're talking about. That's the future that I want to have backed up on the blockchain. And I think we have to start with content creators and artists and the real foundational pieces. And that's what we're doing with Render at the moment.
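To make the "finish other people's render jobs and get paid" idea concrete, here is a purely illustrative sketch of distributing frames across idle GPU nodes and settling in tokens. This is not the Render Token protocol or Otoy's network; the matching logic, pricing, and every name below are invented, and a real system would also need job verification, escrow, and the IP tracking Urbach mentions, all of which are omitted.

```python
# Purely illustrative sketch of distributed rendering paid in tokens.
# This is NOT the RNDR protocol: names, pricing, and the matching logic
# are invented to show the general shape of the idea only.
from dataclasses import dataclass

@dataclass
class GpuNode:
    node_id: str
    speed: float              # relative benchmark score of the node's GPU
    earned_tokens: float = 0.0

@dataclass
class RenderJob:
    job_id: str
    frames: int
    tokens_per_frame: float   # price the artist escrows up front

def distribute(job, nodes):
    """Split a job's frames across nodes proportionally to their speed."""
    total_speed = sum(n.speed for n in nodes)
    assignment, assigned = {}, 0
    for n in nodes[:-1]:
        count = int(job.frames * n.speed / total_speed)
        assignment[n.node_id] = count
        assigned += count
    assignment[nodes[-1].node_id] = job.frames - assigned  # remainder
    return assignment

def settle(job, assignment, nodes):
    """Credit each node for the frames it completed (verification omitted)."""
    by_id = {n.node_id: n for n in nodes}
    for node_id, frames in assignment.items():
        by_id[node_id].earned_tokens += frames * job.tokens_per_frame

nodes = [GpuNode("miner-a", speed=2.0), GpuNode("miner-b", speed=1.0)]
job = RenderJob("title-sequence", frames=240, tokens_per_frame=0.5)
plan = distribute(job, nodes)
settle(job, plan, nodes)
print(plan, [(n.node_id, n.earned_tokens) for n in nodes])
```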
[00:22:31.405] Kent Bye: Yeah, and just to go back to light field displays and the social aspect of that: right now you have these things over your face, so you can't see your face, and you've done a lot of work on face-capture technologies. So paint me a picture of what things are going to look like in 2025, when you have these holodeck rooms, if you're able to have some sort of light field camera that captures your face without being occluded, and you're able to create these entire social situations where you can take yourself and put yourself in different places, potentially even be in multiple places at the same time, and have a social, high-fidelity interaction as if we were standing across from each other right now, capturing all of that fidelity of the micro-expressions and everything else.
[00:23:13.066] Jules Urbach: Absolutely. So, other than the microphone being here, this wall between us is basically a wall of light field rays, and if you had a bi-directional light field surface, you in your place in Hong Kong, me in LA or wherever, and we were doing this, it would be like we were both there. And that's why I was saying that if I were in the holodeck, I wouldn't be in a cave alone; I would actually turn on different surfaces and talk to people, and it would be like I'm standing right in front of them, and that social connection is really powerful. Now, I don't know about touch and all those other things, but a bi-directional, magic-mirror kind of holographic surface absolutely is where you'd have to go once you get the basic display up and running.
[00:23:45.752] Kent Bye: Great. And so what's next for Otoy? What are the big open problems or challenges that you're trying to solve?
[00:23:51.403] Jules Urbach: I think a big challenge this year is getting the render network and the blockchain system really built out. We've got four phases. We're just about to really get into phase one and three more have to get done this year. And I think the other piece that is just as critical is we need to get real-time AI denoising to run on everything. And that is a lot of work. So Octane 4 is a huge milestone. Five years of work went into some of these pieces. And the AI worked out better than expected. But we need to get this running on mobile devices. We need to get mixed reality supported correctly. And we need to start figuring out when light field displays are really going to become viable so that we can plan for that. And I think those are the major goals for us in the next six to eight months. Great.
[00:24:29.082] Kent Bye: And finally, what do you think is the ultimate potential of virtual reality and what it might be able to enable?
[00:24:36.583] Jules Urbach: I think I'd like it to be a great equalizer, in the sense that, if we were to talk about virtual reality in the way I was just describing, with these light field displays where you could actually be anywhere, you wouldn't have to go to places to experience things and to learn things and to go to school or to have opportunities. That's something that is really interesting. Also, being able to learn things in spatial ways is pretty great. But the potential is simply the digitization of the physical world. I mean, imagine you just don't need to actually have physical things, other than maybe medicine and food that's 3D printed, and energy that comes from the sun or fusion reactors. What we've seen with humanity is that when physical things we thought we needed get digitized, whether it's DVDs becoming video on demand or newspapers becoming web pages, things change. And when that happens with everything in the physical world, things will change again. I think that has to start with things like energy and fossil fuels. But I do think that people's homes and their lives and just the standard of living will go up when things are holographically beamed into your ambient environment and you can experience that with everyone else in the world at the same time. That would be amazing. So that's my vision for VR. Awesome.
[00:25:41.826] Kent Bye: And is there anything else that's left unsaid that you'd like to say?
[00:25:44.167] Jules Urbach: There's so much, but that's probably for another interview. We'd have to talk about what this all means, what the meaning of life is, and information theory, you know, Gödel's completeness theorem, all those things. At the end of each interview, we start to get into that. But this is a spiritual journey. As I've said before, the whole idea of us being human, social creatures, and sort of matching everything and measuring everything by that, is really important. But there's a lot of fakeness. I mean, the reality of our world is something that is challenged by people like Elon Musk with simulation theory. So there are so many things to think about and discuss, philosophically, even mathematically, but the work that all this touches on really is sort of that first layer of exploring all these amazing ideas. Awesome.
[00:26:22.317] Kent Bye: Well, thank you so much for joining me today on the podcast.
[00:26:24.879] Jules Urbach: Always a pleasure, Kent. Thank you so much.
[00:26:26.710] Kent Bye: So that was Jules Urbach. He's the co-founder and CEO of Otoy. So I have a number of different takeaways from this interview. First of all, there's this concept of a digital light field wall, which is the equivalent of you looking out a window, so you're creating a window into another world. Usually with a lot of these cave projection environments, you put on these shutter glasses, and it's those shutter glasses that allow you to navigate around in the world. And I've seen setups running at a high enough frequency to get up to six different people in the same environment at the same time. What Jules is saying is that with these digital light field technologies, it's literally like shooting out the photons along the paths they would actually be taking, and your eye literally can't tell the difference between what is real and what's coming from the display. It will just look like you're getting a window into another world. If you use a mirror, it'll still reflect those photons. It just sounds like this kind of magical technology that's essentially the equivalent of what Star Trek used in the holodeck. So the holodeck technologies are coming, and Jules just wants to be at home and have these kinds of holodeck technologies to be able to have these interfaces with other people, because you can imagine a time when you're able to capture digital light fields and broadcast them in real time, so you have these seamless, low-latency, real-time digital light field displays with Light Field Lab's holodeck walls, where you're able to go into these virtual worlds and really just run around. So I imagine that this is a very high-end, very expensive thing that will be in location-based entertainment, but, you know, eventually maybe they'll be in our homes. I have yet to see this kind of magical technology, but theoretically, mathematically, what Jules is saying is that the density of the light field display is at the level where you have this critical mass where it kind of flips over into fooling your mind into it just being reality. And then you have Otoy, which is using artificial intelligence technologies to essentially denoise this process of physically-based rendering. So physically-based rendering takes a lot of different iterations, and on the first pass it gets pretty close. And what Jules is saying is that this is kind of like the perfect use case for machine learning, because you can do all those iterations and show these machine learning algorithms what it's supposed to look like when it's finished. Then you do the first iteration, and the Octane renderer is able to fill in the gaps and do this AI-mediated denoising of these renderings, which is getting this hundred times speed-up, which is amazing. There aren't very many times as an engineer when you see a hundred-x speed-up that's just as good as what you had before. Now, I'd imagine that with machine learning and the training of these neural networks, he'll have to continue to throw all sorts of different things at it to make it more and more robust. But I imagine that where this is going is to be able to potentially distribute this out in a way that could do light field rendering in real time, which I think is one of the things that has not been possible.
I haven't really seen anybody that's able to do real-time light field rendering. But if you're using this kind of AI-mediated machine learning, which could eventually even be baked into the hardware, then you have the potential of doing real-time light fields, which is just absolutely mind-blowing. And light fields, I think, are also going to be super important when it comes to augmented reality. We have something like the Magic Leap, which actually just launched this past week, on August 8, 2018, when it was announced that Magic Leap was finally delivering their development kits. They had always kind of said that their initial launch would be the full launch, but I think they realized that they really need to develop this ecosystem of developers, and so they're sending out these dev kits, which are about $2,300. With this idea of being able to have augmented reality mixed with digital light fields: when you're looking at reality, you are seeing a level of fidelity that looks real, and the affordances of digital light fields are about trying to render things in a way that looks just as real. I think there's going to be a little bit of uncanniness, where your mind is going to know that some of these things are rendered within the context of an augmented reality experience on a digital light field display like Magic Leap, so you're not going to fully believe that it's completely real, especially with the narrower field of view, which I think is going to have to get a lot bigger. And I think there are going to be a number of different iterations before we get to the point where your brain is just absolutely tricked. But this is the trajectory of where we're going. Getting there also means getting away from computer graphics that are, I guess, a number of years behind the cutting edge of a lot of video games: because virtual reality runs in real time at 90 frames a second, it can't be at as high a fidelity as what you may see in some of these AAA games on a 2D PC gaming platform. So the graphical fidelity is a few years behind, but digital light fields are trying to close that gap and make it photorealistic, so it just looks absolutely real. As for the Render Token, there was a token launch of the Render coin. It's an ERC-20 token. It was launched on October 5th, 2017, and then closed out on October 12th of 2017. It looks like they raised about 4,044 Ether, which I think translated to probably around $1.2 million or so. They had a hard cap of $134 million, and so it wasn't anywhere near the amount that they were capped at. And so I don't know where the Render project is going to end up going, but that was where it ended up. It sounds like they're continuing to move forward, and there's this open question as to whether or not you're going to be able to distribute this work that's out there, if we're moving towards a future where we're going to want to do all these types of renderings of these different immersive experiences, not just in 2D, but rendered in 3D as well. So I think it's an open question as to whether or not this kind of blockchain-enabled rendering is really going to take off.
But if there's any company that's going to make a viable go at doing it, I think it's Otoy. This has been their bread and butter: finding ways of using the GPU to do faster and faster rendering. And they're integrated into Unity and have all these different integrations and adoption within the larger special effects industry in Hollywood. So they're really someone to watch, especially when it comes to these digital light field renderings. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon your donations in order to continue to bring you this coverage. So, you can donate today at patreon.com slash voicesofvr. Thanks for listening.