#605: Future of Mixed Reality Lightfields with Otoy

Otoy is a rendering company that is pushing the limits of digital light fields and physically-based rendering. Now that Otoy's Octane Renderer has shipped in Unity, they're pivoting from licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC-20 cryptocurrency market.

I talked with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017, where we discussed relighting light fields, 8D light fields & reflectance fields, modeling physics interactions in light fields, optimizing volumetric light field capture systems, converting 360 video into volumetric videos for Facebook, and their movement into creating distributed render farms.


In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We complete this conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are really driving and inspiring Urbach’s work.

This time Urbach shares his visions of VR's potential to provide us with experiences that are decoupled from the normal expected levels of entropy and energy transfer for an equivalent meaningful experience. What lies below Planck's constant? It's a philosophical question, but Urbach suspects that there are insights to be found in information theory, since Planck's photons and Shannon's bits have a common root in thermodynamics. He wonders whether the halting problem suggests that a simulated universe is not computable, as well as whether Gödel's incompleteness theorems suggest that we'll never be able to create a complete model of the Universe. Either way, Urbach is deeply committed to building the technological infrastructure to be able to render the metaverse, and to continuing to probe for insights into the nature of consciousness and the nature of reality.

Here’s the launch video for the Octane Renderer in Unity

This is a listener-supported podcast. Consider making a donation to the Voices of VR Podcast Patreon

Support Voices of VR

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So in the beginning of August this year, it was during SIGGRAPH in Los Angeles, I dropped by the Otoy offices to catch up with Jules Urbach, who is the CEO and co-founder of Otoy. So Otoy has been thinking about how do we render things to look photorealistic. They've had the Octane Renderer out for a long time, and if you look at the beginning of Westworld, you see the amazing intro sequence, which was rendered using Octane. So they're doing physically-based rendering. They want to be able to simulate how all of the photons move through a space, how they're reflecting, where they're coming from, and the surface material properties, trying to both use the computational resources to render out that photorealistic look, but also find the most efficient way to capture a slice of the light field and be able to recreate that with six degrees of freedom, so that you can walk around. Right now a light field is kind of like from one perspective, and they want to figure out what are the ways that you can capture an environment and do all the things you need to do to translate that into a 3D immersive six-degrees-of-freedom scene, and to eventually do that in real time. And so they're thinking a lot about, well, if we want to do that, we actually need this huge, massive cloud computing infrastructure and backbone. And so they're thinking about distributing all those compute resources out and using cryptocurrencies like the Render Token.
And so in this interview, it was before they had announced the Render Token, but you can kind of see their strategy of getting Octane Render into Unity and setting up everything that they need to be able to turn into a service company that is selling these compute resources through this massive distributed network to be able to render scenes in a photorealistic way. So Jules Urbach thinks about this all the time, so I had a chance to sit with him and to talk about the technology stack and the latest problems that he has on this journey towards rendering the metaverse. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Jules happened on Monday, July 31st, 2017 at the SIGGRAPH conference happening in Los Angeles, California. So with that, let's go ahead and dive right in.

[00:02:28.831] Jules Urbach: So I'm Jules Urbach, CEO and co-founder of Otoy. And Otoy is a rendering company. And by rendering, you know, that means both cinematic offline rendering and visual effects work. Most of our users that use that part of our software know us for Octane Render, which has been used to create, for example, the opening of Westworld, as well as a lot of different movies and VFX. And then we've also been working on real-time graphics that are attempting to match that cinematic fidelity. And most recently, that work's been showcased in our partnership with Unity, which integrates Octane Render and a lot of our light field rendering work, which is designed to take cinematic offline rendering and rebuild it in a way that can be played back very quickly. And then also, in the last few months, we've announced a major partnership with Facebook around 6 Degrees of Freedom video capture. Those videos are meant to be integrated with all of our software, both offline and real-time rendering, so that you can blend in light fields and Facebook video inside of Unity and publish that in segments and chunks that can be blended together, both spatially in VR and AR and in other ways in the pipeline.

[00:03:35.408] Kent Bye: Great. And I know also, you know, since last time we've talked, Apple has come out with their ARKit. So I'm just curious to hear how you see some of the technologies that Otoy is working on are going to fit into augmented reality, and also specifically with iOS and ARKit.

[00:03:50.807] Jules Urbach: Yeah, so ARKit's incredibly exciting for a number of reasons. I mean, I think one of the things that's been, you know, hurting the VR or AR industry, both of them, in a sense, is that you just don't have the scale that you've had with phones and desktop computers. You know, the Gear VR is, I think, closing in on 10 million, but, you know, that's the maximum size of the platform. And I think that, you know, a lot of these recent closures, like AltspaceVR, which really was a kick in the gut for a lot of people, including me, part of the problem is the market size isn't big enough. So ARKit's really interesting, because in my view, rendering spatially, whether it's AR or VR, it almost doesn't matter. You want to see that be something that could be leveraged by a lot more people. And having tested ARKit now for a while, and we have a lot of demos that we're showing at SIGGRAPH built on ARKit, it's incredible. I mean, 6DoF really works on all these existing, you know, iPhones and iPads, and it is something that I think people can immediately appreciate in a way that has less friction than VR. And so what we're hoping for is that with ARKit getting out there and people building experiences that are meant to be experienced spatially, it's a really important way of sort of bootstrapping the entire ecosystem. And I do think that AR and VR are intimately tied together. If you're building something for VR, it can be experienced in a really good AR device that's, let's say, AR glasses. But also, if you're just doing objects that are meant to be experienced in a tabletop manner, or even objects for a virtual world universe, ARKit is perfectly, you know, suitable for that as well. And the performance that Apple's pulled off is just phenomenal. I mean, things look really, really, really good. 
And if people are looking at the work that we're showing at SIGGRAPH, it's really sort of a starting point to show how all the work we're doing for cinematic offline rendering of light fields, our integration with Unity, our integration with Facebook, all these things can be experienced almost perfectly through ARKit. You don't really need a pair of glasses to appreciate that. And that kind of expansion of the market is really welcome, and probably a sign of things to come, pointing towards where Apple's going. I mean, I think they've gotten everything right by getting this software in ARKit to be this good. And then if they ever, let's say, do release a pair of goggles or something that allows HMDs to connect to the system, the fact that ARKit's already solved 6DoF and all these other issues right from the get-go is a unique way of entering into this ecosystem. So I think it's awesome.

[00:06:04.570] Kent Bye: There also seems to be something, when it comes to perception, about virtual reality such that if you see a light field within a VR headset, it may actually break presence, in the sense that it's not stimulating all the other senses in a way that's really making you feel convinced that you're actually there. And I feel like the light field rendering and that level of fidelity actually works a lot better when you are mashing it up with real reality, when you start to have that light field rendered in augmented reality, like phone-based AR. But the other issue, I guess, is lighting and making it feel like it also is fitting into that scene and not having some sort of triggers that are making you feel like it's not actually there. So I'm just curious to hear some of your thoughts in terms of where we're at right now in terms of doing light fields within the context of iPhone-based AR and where you want to go in the future with being able to perhaps detect the lighting and then relight some of these objects to match the scene a lot better.

[00:07:00.033] Jules Urbach: What's fascinating about relighting is that we're well known for pushing the envelope with light field rendering. Three SIGGRAPHs ago we showed that work, and it was really an eye-opener for a lot of people. We really got a lot of great attention for that. It wasn't something that was attached to a product other than the fact that Octane could render these light fields. We were showing the playback of that. But it was never about the 6DOF alone. It was always about modeling an 8D light field, what's commonly known as a reflectance field, which means that not only does this dataset have the ability for you to quickly look at it from any viewpoint, but if you shoot lighting into it, all of that is also pre-calculated. So of course it means the amount of data that you're calculating here is exponentially larger. On the other hand, it can also be very easily compressed. So what we've been working towards is something that's actually perfect for AR. Because if you build an Octane-rendered light field that can be looked at from any angle, you can also expand that to all the relighting conditions. And that means that when it's dropped into the middle of an AR scene, or the real world, it can be relit. And we've extended that really almost, I guess, retroactively to even the capture pipeline. We've had a service called Light Stage, which is really well known in the VFX industry for capturing reflectance fields of faces. And what that does is it gives you, of course, a really good representation of a human head, but it also gives you all of the light transport through that face, so you can relight that. And throughout our pipeline, including capture and rendering through Octane and now eventually streaming to AR, the ability to relight and view from any angle is almost implicit in this system we're building. And what we found is that compressing relighting information is almost easier than compressing light fields. 
It's a little bit more of a cost to render, but in some ways it's not a huge amount more work. And by doing that, we're making it so that you have assets that can work really well inside of augmented reality. You can create closed systems, even experiences and animations that can almost be blended together spatially very easily without having to fully re-render the hard number of rays needed to calculate that. And it also gives us one more layer, which is like, well, what else can we add into this object? Can we add physical properties? Can we actually do multi, multi-dimensional, not light fields, but physics fields and all these other things that can account for how objects interact? And that is sort of this fundamental evolution of where we're going from basically path-traced relighting and also capture towards modeling every possible interaction efficiently between objects. And right now the one missing piece that does that really well are game engines. Because game engines can model all these other interactions including physics and how things work together. But there isn't really some sort of natural law other than maybe physics systems like PhysX or Bullet that define how, let's say, two different, not just objects, but entire experiences in the metaverse can be blended together in any which way. And visually, lighting and relighting is really well understood because you're not changing the surface topology of something. But when something's animated and it collides, at the very least, you have to do physics, you have to understand what is the intent, what is the expected behavior relative to human experience. And that qualia is really interesting because when we talk about social VR and how things blend together, it's not just avatars in a steady state universe, it's really about how people can change the world around them and how existing content and experiences and work can be blended in a human, meaningful way. 
And so that's kind of where I see us going beyond being able to mix objects and experiences together towards coming up with a system, and this is how I think the metaverse should be built, that is very much built around the laws of how really complex thought-based media and interactions or people or agents can be blended together. And I think it's sort of a function of what we're already doing with light fields where we're encoding a lot of potential complexity that's meaningful to humans, you know, viewpoints and compositing now towards what happens when we mix two high entropy systems together in a way that results in, you know, behaviors that feel like it's, you know, meaningful and even more high entropy to the viewer or the participants in that shared universe.
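The relighting Urbach describes rests on the linearity of light transport: capture the subject under one basis light at a time (as a Light Stage does), and any new lighting environment is just a weighted sum of those captures. Here is a minimal illustrative sketch in Python; the array sizes and random data are made up for the example and are not Otoy's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_lights = 32            # number of basis lighting directions (e.g. Light Stage LEDs)
h, w = 4, 4              # tiny image resolution, just for illustration

# olat[i] = image of the subject lit only by basis light i
# ("one light at a time" captures)
olat = rng.random((n_lights, h, w, 3))

def relight(olat_images, light_weights):
    """Relit image = sum_i weight_i * OLAT_i, by linearity of light transport."""
    return np.tensordot(light_weights, olat_images, axes=1)

# A new environment is expressed as intensities of the basis lights
env = rng.random(n_lights)
relit = relight(olat, env)
print(relit.shape)  # (4, 4, 3)
```

This is also why relighting data compresses well, as noted above: the OLAT basis is highly redundant, and the per-environment work is only a weighted sum rather than a re-render.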

[00:10:51.340] Kent Bye: And now that we have augmented reality starting to really, you know, take off in terms of a lot of the phone-based AR with iOS, and Google Tango is coming out, and the HoloLens is out there as well. And virtual reality has been out there for a while, and it seems like with virtual reality, you're being able to be transported into another world. And AR is a lot more about bringing these virtual worlds into your real world. And so you're bringing these different digital assets and objects that may be from these shows or intellectual property for these different companies, and you're bringing it into your own world. I guess the question I have is that, in terms of like, it seems like marketing is a clear use case for a lot of these companies in order for you to find new ways that you can engage and interact with this digital content. And with VR, you're transporting people to these worlds where they're able to interact with it. And then as you're bringing these different objects into your world and being able to interact with them, like with ARKit, what's the user interaction pattern, or what's the sort of like user experience of that, of bringing some of these objects from this fantasy world into your world? And then what are people doing with that?

[00:11:57.905] Jules Urbach: It's a really interesting question, and it's something that I basically got to test in a really interesting way when we did this partnership with Facebook, where really, obviously for Oculus, a big focus is on presence inside of the world in VR, and so the Facebook 6DoF camera was initially, the initial things we were thinking about showing off at F8, or when this was announced, was basically about putting you in the middle of these 6DoF experiences, which we did show, and that did work. But the demo that we did, the one demo that we did which was a lot of fun with Unity and with these other layers, was you had a button on the touch controller that could shrink the world down and it was like a snow globe. You could look at the world outside in. And what was fascinating about that is that you really in some ways, of course it was still in VR, but what we're showing here in ARKit is we can take that same Facebook demo and of course now it's splatted on the table. And what's weird is that you'd think that by being in the middle of a scene, that sense of presence would be the most compelling way of experiencing that, but you still have to turn your head around to see what's going on. And what's strange about going to Google Earth mode, or God mode basically, or SimCity mode, and still seeing all that, is you're seeing way more information beamed into your eyes. And yet you can still appreciate the experience at a scale that is not necessarily natural in everyday life, but in a way that almost, you know, I appreciate the data more. I appreciate the content more. And so anything that you can experience in VR, you can sort of like zoom out of it and look at that world coming in from any view much more quickly with all the information of that scene being beamed into a way that we understand. So shrinking the world down in AR and putting it on a table or in your hand means that you're able to see a lot more information all at once. 
And it is something where in day-to-day I think that's a really interesting way of looking at content and experiences. And if you are limiting yourself to what the human physical body can do inside of something, that is a compelling experience because it resonates with our day-to-day lives. But then again, there's a lot of things that are part of our lives today. I mean, being able to communicate telepathically basically over instant messenger or text and being able to sort of connect all this information faster than you can read it through books. I mean, there's a lot of exponential scaling functions in the human experience that we really only appreciated and made part of our lives once technology allowed that. So I think that what's going to happen is that whether you're in VR or AR, the idea of shrinking down, you know, these experiences and even looking at your avatar in a smaller scale relative to the rest of the world around you might be something that might be more compelling. Not that you'd be ever limited from going down into that, you know, human scale version of it and looking at it from a first person point of view. But I do think that there's something compelling about that. I mean, when we think about the universe, we look at even, you know, stories and movies. I mean, a lot of that is a little abstract. But it allows us to, in a very compelling way, get the meaning and information of what's going on filtered in a very fascinating chain. And I think that when you look at being able to showcase everything that's happening in a universe that you're supposed to experience from a first-person perspective, and you can zoom out and see it from a third-person perspective, I mean, the way that things move around you in that world, the way that, you know, an entire room of people can interact with you, is something that makes, you know, a lot more connections when you can look at it, you know, from that perspective. 
And it's very much the same way that top-down gaming works when you look at tabletop games or chess or even World of Warcraft. So there's a lot of value, I think, in being able to do sort of scale shifting of environments. And I think that also allows a lot more interactions. And I think that also is probably going to define, in a lot of respects, the balance between VR and AR. Because VR, kind of by its definition or its implicit vision, is you're fully in that world the way you are in the real one. And that difference isn't meant to be that much. It's just meant to give you the human scale experience in there. But I think that the scale above that, you know, if you were in the physical world able to sort of, you know, go outside your body and see, you know, huge amounts of information that look real and feel real, scaled in a different way, and still have, you know, one or more, you know, pieces of presence in there. I mean, that's something that gives you this out-of-body existential experience. And we're just at the beginning of exploring that with some of these AR things that are coming along, you know, the pike, and allow us to, you know, play around with the shifting between those two. And in my mind, the perfect VR HMD allows you to shift between VR and AR almost seamlessly, not just by blocking out the real world, but by allowing you to zoom in or zoom out of VR experiences and then see them in AR relative to the real world you're in, and relative to other people in their real world scenarios, and then sort of doing a power function where you go down and you're in this experience. And I've always said that VR and AR probably are similar to a YouTube video experience where you have, you know, the YouTube video played inside of a web page with all this metadata inside of, you know, potentially a window on your desktop. 
And then when you go to full screen mode, you know, your desktop, you know, the web page, everything goes away and you're just looking with all your focus and all the pixels available at that video. And VR is probably going to be a lot like that where if you want to sort of go in and fully experience everything that that sort of spatial experience covers, that's VR, but going in and out of that needs to be pretty seamless. And even Oculus, which started all this in a meaningful way, I think I mentioned this in our last podcast, Michael Abrash talking about the future of VR devices having pass-through, camera pass-through where even if it's still a VR device and there's no transparency, just being able to map the world around you and then allowing you to see that as if you were really looking at it with your eyes, is going to be very important, and I think in some ways that's why mixed reality is so valuable. Because my testing of VR and AR over all this time, even with very lightweight glasses like the 4-ounce ODG ones, at some point I want to be able to just take off the VR device or not have VR be something that attracts all my attention. It's something that I think devices need to consider going forward. So there's no future that I see where even a really high quality VR device isn't going to allow you to have really good pass-through and to make that something that really feels easy to leverage and go back and forth with.

[00:17:37.408] Kent Bye: Yeah, it reminds me of Playful Corporation and Paul Bettner talking about Lucky's Tale, how they were creating this third-person perspective, but it was in the near field. It was something that was small enough, and, you know, the thing that Paul said is that anything that is like your hand's distance away from you, you have so many neurons to be able to process what's happening within that. And so, by taking a scene that's captured of a space and having it shrunk down, you almost are invoking that near field quality that makes it feel like a toy. And you're able to see these other qualities and changing your perspective. And you were just showing a little bit of a scan of this room that's right outside of us here. So maybe you could describe a little bit about how you did that capture and then what you were trying to show in terms of having that within ARKit and if that's something that you plan on releasing more widely.

[00:18:26.127] Jules Urbach: Yeah, so when we first showed our first take on light field capture, it was actually in our office, the office we're in right now at Otoy, where I just spun a still camera around and we took about 1,728 different viewpoints around an axis and we built a light field bubble around that. And it was a really great way of showing how even one still image multiplied by space was able to generate this holographic representation of an office that we had here. You know, that's almost been like a standard candle for comparisons against other formats of capture. You know, we don't want to build all these cameras. We just want to sort of build the software system that can ingest them and make them useful. So it was more about, you know, having a test or exemplary data format. And we actually submitted that data set that we took two years ago to MPEG and to JPEG. And in fact, I'm now working with a group at MPEG around building an open standard around scene graphs and light fields and the like. So that data is actually public. What I just showed you right before this interview was us taking, with just a small set of sparse photographs, a number of pictures of that very same office, feeding that into a computer, and then building a spatial representation of that. And it's orthogonal, essentially, to the light field data because it doesn't necessarily capture all of the light rays for specular objects, but it gives such a robust model of that scene without having to have any special hardware that it really is sort of a compelling metric. If we could have that kind of spatial navigation so cheaply with all of the density that a light field has, you know, that's the ideal capture system. 
And the way that we're sort of heading towards there is that if we had a few photographs, but we could have a lot of different blinking lights, which is what we do with light stage, we could basically model the reflectance properties of surface area and we can construct essentially the same light field that we had from that, you know, high-density capture and just from a few photographs. In other words, being able to understand more about how the world around us is actually bouncing and moving light around, as well as its topology, is enough for us to not need to render the light fields or capture light fields. And that is why, you know, the idea of having even at some point AI look at what the world around us is and not just figure out what the depth maps are, what the 3D model is, but also what's the reflectance, what are the light sources. If we can turn the real world into a CG scene or a synthetic scene the way that we do for our cinematic pipeline, you know, that's the ultimate goal is to basically turn the real world into a synthetic scene, have the synthetic scene be rendered holographically, and ultimately synthetic scenes could be defined by a procedural function that is more and more and more almost mathematics versus just data, and that's the ultimate form of compression. So the data that we have running on this iPhone in ARKit is just showing how sparse capture could be turned into this high entropy view of this world and how with a little bit more work we can basically replace thousands of camera views in a very small space with a few views and a couple of ways of multiplexing lighting and giving the processing that's analyzing those lights all the information needed to replicate high-density light fields and essentially reality. And that is something that we are doing in an offline process. We send those images to the cloud. We basically generate that mesh model similar to how we do with light fields. 
And it's very similar to how Facebook is using our cloud services to send their X24 frames to the cloud. And then we use their software to generate the depth information in a point cloud that we can then turn into a light field or bring into Unity. But ultimately we want this to happen in real time so that if you're inside of the real world, we can turn that real world very quickly, essentially in a fraction of a second, into a synthetic asset that could be used for relighting or for transmission so that you can do essentially shared virtual worlds. And if we don't get there, mixed reality will never reach its potential. So we're trying to basically map out all these different maximal data sets and then figure out ways of making that really efficient to both render, capture, and stream on a mobile device that you could be either wearing or that's ambiently in your environment.
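The intermediate representation described above, a depth map plus color that gets turned into a point cloud before meshing, can be sketched with a standard pinhole-camera back-projection. This is a generic illustration, not Otoy's or Facebook's actual code, and the camera intrinsics and depth values are invented for the example:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project each pixel of a depth map through a pinhole camera model.

    depth: (h, w) array of metric depths; fx, fy: focal lengths in pixels;
    cx, cy: principal point. Returns an (h*w, 3) point cloud.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 metres away, seen by a tiny 4x4 sensor
depth = np.full((4, 4), 2.0)
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```

Per-pixel depth like this is exactly what the 24-camera rig's alignment makes easy to estimate; once you have the point cloud, it can be topologized into a mesh or used as the basis for a light field.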

[00:22:11.333] Kent Bye: And can you walk through the pipeline for how you take an input of whether it's a 360 video or something that has more depth information and then turn that into a 6DOF video where you actually have the ability to move your head around? And how are you able to extrapolate the depth information?

[00:22:29.329] Jules Urbach: Well, you know, there are a lot of known ways of doing that. And one of the simplest ways is to have a depth sensor like the Kinect, which gives you exact ground truth, I guess, for certain surfaces where you get an actual depth map for each pixel. And the problem with the Kinect is that it's very hard to do that in 360 or to do it in ways that are reliable. Like we try to bring the Kinect outside, for example, and sunlight messes it up. It also has a limited range, you know, it just doesn't work beyond a certain depth. While you've seen LIDAR get around that, LIDAR is very expensive. I mean, if you look at the HypeVR 360 experiences that were shown, I mean, those are beautiful, but those are using, you know, expensive LIDAR rigs. And the reason why I liked this partnership with Facebook is that they basically came up with a stack that fit into our ecosystem that takes basically 24 different cameras, puts it in a sphere, and essentially just shoots video that comes back just as normal color, but they're aligned in a way that is very easy to generate depth information. And that process is something that Facebook and Otoy partnered on to provide essentially people shooting scenes with this camera the ability to get back videos that can, at the highest quality, be turned into a light field that includes depth information. Or, as we were showing at this very simple Unity demo, you can just take the depth and the color, pack it into Unity as a point cloud, and it just works. And it works as a 6DoF video, but it's still topologized into a mesh. And that's something that can be done in real time. So depth estimation is really important. And I think that the quality of what was able to happen through Facebook, and also what we're showing, starting at SIGGRAPH with just simple sparse camera capture, is that photogrammetry on its own is really, really good. 
And it's really in competition with things like LIDAR and active depth sensors to see whether or not you can get there with just normal images. Because obviously if that works, that means you can go back to existing things that have been captured, or you can build very simple camera rigs and not worry about depth sensing. But I think in some ways, once we get LIDAR down to a reasonable cost, you might want to have that in there. It's something that is useful. But even more useful than that is the ability to do light multiplexing, maybe 120 frames a second with polarizers. That gives you really, really high quality surface topology and a much better sense of what the actual specularity is with surfaces. So we use that with Light Stage to get ground truth skin reflectance for people. And I think that that's sort of where a lot of these capture systems will go. But Facebook really showed how far you could take this with video in a format and a camera system that wasn't, you know, a million dollars. And I think the alternative to that is bigger rigs like Lytro, which have just a huge number of lenses. And those are amazing and awesome, but they're also unwieldy. So it's really about efficiency and finding the right balance between the number of camera lenses that you can put inside of a sphere, the simplest being maybe something like the Ricoh Theta, which just, you know, gives you back a 360 video. But even two of those in tandem would give you enough information to start extrapolating depth. And depth is really important because you want to be able to generate correct parallax and start with that. And so 360 plus depth is a really good intermediate step. But I think going to light fields is probably in some ways almost too much work. What we want is 360 plus depth with multiple layers of occlusion, and enough information about the materials and connectivity of that scene to be able to generate the equivalent of a light field in real time. 
And that's where things are heading. But if there is this intermediate step where the calculations for that are too complex, being able to bake things out or store light fields is a really useful way of turning everything into a lookup table. And that's why we started with that years ago, so that we could skip ahead towards what the output results would be. And in my mind, the entire rendering equation is all about mapping the smallest amount of compute to get a scene rendered with the biggest amount of data needed to represent that quickly in 6DOF or 8 degrees of freedom. And cameras and capture basically also represent those very same scales. If we could do everything with a single set of cameras and simple lights without any extra capabilities, that would be the ideal. And that would also be something that would be the easiest to integrate with a pair of glasses or an ambient sensor that could give you real-time, real-world synthesis and feed that into the experience or the metaverse social layer that we want to all hook up.
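The "360 plus depth" intermediate format Urbach describes can be sketched in a few lines: given an equirectangular color frame and an aligned per-pixel depth map, each pixel's view ray is scaled by its depth to produce the kind of point cloud he says can be dropped straight into Unity. This is a minimal NumPy sketch; the coordinate conventions (longitude/latitude mapping, Y-up axes) are assumptions, not Otoy's or Facebook's actual pipeline:

```python
import numpy as np

def equirect_to_points(depth, rgb):
    """Turn a 360 (equirectangular) depth map plus aligned color frame
    into a point cloud.

    depth: (H, W) array of metric depth per pixel
    rgb:   (H, W, 3) color image aligned with the depth map
    Returns (N, 3) points and (N, 3) colors, one entry per pixel.
    """
    h, w = depth.shape
    # Pixel centers: longitude spans [-pi, pi), latitude spans +pi/2 (top)
    # down to -pi/2 (bottom).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit view ray per pixel (Y-up convention), scaled by depth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    dirs = np.stack([x, y, z], axis=-1)
    points = dirs * depth[..., None]
    return points.reshape(-1, 3), rgb.reshape(-1, 3)
```

With two offset 360 cameras, as Urbach suggests, the depth map itself could be estimated from stereo disparity before this reprojection step.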

[00:26:36.431] Kent Bye: Can you talk a bit about some of the announcements that you're making here at SIGGRAPH?

[00:26:40.145] Jules Urbach: Sure, so probably our biggest announcement, and these are all things that are essentially the fulfillment of things we've been talking about for a while. One of the largest and most important announcements is that, at the time of this recording, we've already announced and shipped Octane for Unity to a lot of users that have been testing Octane Render integrated inside of the Unity Editor. It's been in beta. At SIGGRAPH, that will be made available to every single Unity user, of which there's about 7 million. And it's a really important milestone for us as a company, because Octane represents sort of the convergence of even higher than cinematic quality rendering. It's basically the laws of physics and light embodied in a perfectly parallel GPU renderer that you can run on a couple of GPUs. And it scales linearly with multiple GPUs or greater surface area on a GPU. And getting physically correct rendering to everyone has been really important. Right now, that's been something that has been passed around through word of mouth, and it's sort of in the visual effects industry. But we see Unity as a critical tool for people building experiences that are spatial, interactive, that connect together. And the reason why we did this deal with Unity is we wanted to get democratization of what Octane can do. And doing a free version of Octane, which is, you know, typically right now a $600 product, getting that out for free to Unity users is a big shift for us. So having 7 million people turn on Unity 2017 and then get a physically correct GPU path tracer that's the same one that was used to render the opening of Westworld or used by all these major studios, that's huge.
And giving that away for free, yes, it takes away some of our business, but that also is a shift towards what we want to do, which is provide all of that free rendering power, something you can imagine scaling up towards multiple dimensions for light field rendering or ultimately having GPU path tracing in the cloud that could be shared across these virtual worlds. So Octane for Unity, the free launch of that, and getting that really synchronized is ground zero for a huge amount of work that we're going to be doing over the next 5 to 10 years, I imagine. And SIGGRAPH 2017 is a really important milestone in that regard. And then the second launch that's happening at SIGGRAPH is that our cloud rendering service has been available for a while. In fact, we turned it on to basically allow users for free to render the Render the Metaverse contest entries, which we did with John Carmack at Oculus a couple of years back, so people could use Octane to render these beautiful stereo cubemaps and do it at scale. And we've allowed certain Octane customers to come to us and say, well, we want $1,000 worth of rendering so we can render these really high-powered animations or high-resolution VR renders. But we've never made that available to the general public, where you could just buy rendering credits. So, as of today, right before this interview, we turned that on, and basically for anyone that wants to subscribe to Octane, which is $20 a month, with that subscription you can then basically just buy $5 worth of cloud render credits. And we've perfectly mapped those credits to basically rays per second on a GPU that you can shift in time. So you can buy more credits to give you more rays per second. And there is this really interesting function, especially with Octane, because it does allow you to map the physical world and physical interactions of photons in a way that is scalable.
And the more credits you have, the more credits you sort of put together in a giant block, the closer that can get to rays per second in real time. And so having that available is also a starting point for quantifying the cost of rendering the metaverse in a way that matches the physical world. And we've known this is coming for a while, but getting it into sort of a commoditized token, essentially, even if it's mapped to a certain cost today, is another big milestone. So these two pieces, Unity for everyone and render tokens that are also available to everyone at a certain cost, both represent big steps forward for us. And SIGGRAPH is where both of those come online.
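The credit model Urbach describes, where purchased credits map to rays per second that can be pooled and shifted in time, amounts to a simple capacity calculation: pool enough throughput to finish a job by a deadline. The exchange rate below (`rays_per_sec_per_credit`) is a made-up placeholder, since the interview doesn't state the real calibration:

```python
import math

def credits_needed(total_rays, deadline_sec, rays_per_sec_per_credit):
    """Estimate how many cloud render credits to pool so that a job of
    total_rays finishes within deadline_sec.

    rays_per_sec_per_credit is a hypothetical calibration constant; Otoy
    maps credits to GPU rays/sec, but the actual rate isn't public here.
    """
    required_throughput = total_rays / deadline_sec  # rays per second
    return math.ceil(required_throughput / rays_per_sec_per_credit)
```

Pooling more credits raises the achievable rays/sec, which is why Urbach says a large enough block "can get to rays per second in real time."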

[00:30:23.303] Kent Bye: So I've been doing even deeper dives into the deeper philosophical implications of virtual reality. And one of the things that I've been thinking a lot about lately is this Pythagorean thought that base reality could be some sort of symbols or math that then leads into maybe a non-dual layer of consciousness that's, you know, beyond space and time, and that brings in both space-time, matter, and energy once we get into what we consider the real world. But there could be these deeper layers of reality, and you've been doing a lot of trying to replicate the physics of light through all this math. And really, looking at a lot of the different fast-forward talks at SIGGRAPH, there's a lot of people doing all these math equations to figure out, you know, how do you replicate reality? So I'm just curious to hear your thoughts on the implications of that: the closer you get to the mathematics of reality as we experience it versus synthetically experiencing that, what the deeper implications of that might be.

[00:31:22.105] Jules Urbach: Yeah. It's such a great question. And I have to tell you that, obviously, Otoy is a business I want to succeed, but as a philosophy, what you just described is exactly why this company exists. It's a small company focused on rendering, but the ultimate foundation of rendering isn't replicating the laws of physics. It's, you know, really that we experience reality, right? That's the only thing we know to be true: that there's some sort of correlation between our awareness from one moment to the next. And base reality is interesting. Your example of the Pythagorean theorem is something that basically isn't tied to the physical universe, because it exists in totality without anyone experiencing it. I mean, there are certain fundamental laws of geometry and time and nature that don't need the actual universe to exist. And the only thing that we really do know exists is that we can basically collapse the wave function of probabilities relative to our experiences over time. And it's fascinating when you really get down to the Planck scale of what everything means. It really is sort of information theory. And that one bit, that one Planck scale of time or space. You know, there's what, something like 10^44 Planck units of time per second, and something similar for everything that's in the universe. I was just actually talking with a physicist, Lisa Randall, who was in this office just yesterday, you know, looking at Octane, because we're talking about how we could use Octane on the cloud to map dark matter and all these things, because it's a physical process. And I was asking her, what do you think is sort of below the Planck unit? And she said, well, it's a philosophical question.
But if you look at sort of information theory, a Shannon bit, this uncertainty that is essentially a big part of things like the halting problem and all these other things, if you wanted to map everything in reality into a computer simulation, you still need something that sort of collapses uncertainty to certainty, and that's what the Shannon bits represent. And I do think that there is something that's below everything that's happened in the material universe that is real, that basically is the equivalent of the morphic field. The human experience doesn't exist particularly in one place or time. It's really the summation of our sort of world line as a society that exists outside of our individual impacts. It really is about a resonance function, and if some higher dimensional layer represents meaning and understanding and correlations outside of experience, that's what experience for us means. And I do think that it is possible to work backwards from there and say, well, let's build a virtual universe that maps to information theory, that resonates with people, that starts to replicate everything that we depend on in the real world. And at some point, and this is sort of where the rubber hits the road, it's like, can we actually build an entire virtual universe that doesn't have any entanglement at all, even quantum entanglement, with anything in the physical universe? And that's where things like the incompleteness theorem and everything come in. We don't know that that's possible, but it might still represent this sort of, you know, maximal efficiency for humanity. And it also is something where, you know, if you look at whether consciousness is emergent or if it's built into the universe, right, that also describes where AI fits into all these things. Because if AI is able to participate ideally with the same qualia that humans can, then it's essentially as alive as we are.
But it may not necessarily be the case if there's something like the entire universe evolving us to the point where we as humans experience things in this meaningful way, including the limitations of humanity. If that's what defines us, then it may very well be that an AI that is not at that frequency, or that doesn't experience things in that way, just doesn't have the same qualia we do. It may be able to do a lot of things, you know, in a very powerful and different way, but it may just be a subset of our experience by definition. So these are all existential questions, but I think the way to start to answer them is with the ability to say, what's the nature of our reality, and how much of that can be mapped to things that we can create on our own, and basically sort of chip away at the powers of reality that are around us. Maybe the universe and the past and future don't matter if we're in a super-high-frequency singularity where experience is totally decoupled from the way that the real world transfers energy. Everything is about transfer frequency, you know, potential entropy versus expected entropy. I mean, that's sort of how you can imagine things being mapped to meaningful experiences. And I think philosophically, it's the heart of everything that we have going forward. And I love thinking about these pieces of the human experience.

[00:35:30.970] Kent Bye: Awesome. Great. And, and finally, what do you think is kind of the ultimate potential of virtual or augmented reality and what it might be able to enable?

[00:35:40.030] Jules Urbach: I think at a practical level, if you're going to have AI, in the optimistic sense, be a complete augmentation of humans, the way that'll happen the best is if we're able to live in a world that we share with AI, where it's basically just a natural expansion of our ability to think. And you have obviously people like Elon working on things like the neural lace, which increases the bandwidth. If human experience can be collapsed into a world where there is no real-world energy needed to transport ideas and experiences, and the entire human world is built in that, that means that AI is probably maximally able to augment that. And similarly, if AI is going to represent this new human level of consciousness, or maybe some super parallel consciousness or hyperconsciousness that's meaningful to us, I think that the closer we are to sharing that point of, if you want to call it, the singularity, the better. But it still means that if we want to map things like that back into the real world, having this really close mirrored layer between things that are in the real world and the virtual one means that we can model things in the virtual one and then 3D print them or map them back into the physical. And I think that is the ultimate potential of this system. I mean, ultimately for us as humans, we want to have maximum, you know, impact on our lives. And I think that AI is a big part of that, but also being able to seamlessly take things from the imagined virtual world and bring them back into reality in a way that maximizes our civilization, society, and happiness and qualia is super important. And that's the impact of everything that we're doing around VR and AR in these early days, in my opinion.

[00:37:05.600] Kent Bye: Is there anything else that has been said that you'd like to say?

[00:37:09.040] Jules Urbach: Yes, there's one company that I want to talk about which is doing some really great work, called Light Field Lab. I was on a panel with them at NAB, and it's from a group of people who used to work at Lytro on the Lytro cinema camera. And they're building what I always imagined would be a fundamental part of the holodeck. That's something we all want to see happen. They're building a light field wall, basically. It's like wallpaper. You can scale it all the way to basically mural-sized. And it's a true light field wall where you have all of these rays of light coming out of the wall. And so if you think about the impact of that on humanity, it means that you can replace windows with that. You can essentially have no glasses on and have a surface in your house, or your wall, be a portal to anything. And I think that is something that, in a way, HMDs can't ever accomplish. I mean, it means that you can have the virtual world brought into your ambient life at all times. I mean, that's why it matters where your house is and what your window is looking out onto, because it's essentially tied to your experience in your world. So the work that Light Field Lab is doing is really important. We're trying to help them any way we can. And I believe that they're going to have prototypes as early as next year that we can all look at. And I think having true light field surfaces working in a complementary way to what AR and VR glasses can do is super important. So I want to close by saying, watch out for these guys. We're super excited by the work they're doing, and that's probably a huge part of how all this can integrate even more fully into our lives in the near future.

[00:38:34.100] Kent Bye: Awesome, well, thank you so much, Jules, for joining me today.

[00:38:38.005] Jules Urbach: Always a pleasure. These are my favorite interviews. Kent, it's always a pleasure to talk about all this stuff with you.

[00:38:44.127] Kent Bye: So that was Jules Urbach. He's the CEO and founder of Otoy. So I have a number of different takeaways from this interview. First of all, they're working on all sorts of different algorithmic ways to push the limits of where light fields are at and where they're going in the future. You can kind of think about light fields as the vector paths of photons as they're moving through a three-dimensional space, and when you're doing physically based rendering, you're basically fixing a camera in a specific space, and then you're simulating the paths of all those photons and all the reflections, and you just basically do a bunch of math until you finally render it all out. When you do the first couple of passes, it has a lot of noise, and you just keep iterating until you figure out all the different potential paths and you get this clear picture. But some of the open problems are: you have this mathematical representation of a space, and then you have to start to translate that into 3D meshes and other things that they're doing within Unity in order to actually have a six-degree-of-freedom exploration of that. And is it possible to eventually get to the point where you can have a real-time digital light field render using this huge cloud compute platform? I think that's pretty far off, but some of the things that he said that they're trying to do now is start to make these more interactable, to start to integrate multidimensional physics fields of how these objects interact with each other, which is getting into another level of abstraction, because usually when you're in a light field, you're pretty much just passively receiving a scene. You're not able to actually do anything. Jules is saying, well, how do you actually make this into more of a game engine and actually have a way for you to interact with these objects?
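The progressive rendering described above, where noisy first passes converge as more photon paths are accumulated, is Monte Carlo averaging at heart: the error of the running mean falls off roughly as 1/sqrt(N) with the number of passes. A toy sketch, where the `sample_radiance` callback stands in for tracing one random photon path for a pixel:

```python
import random

def render_pixel(sample_radiance, passes, seed=0):
    """Progressive path tracing in miniature: each pass draws one random
    photon-path sample for a pixel, and the running average converges to
    the true radiance as more passes accumulate (noise ~ 1/sqrt(passes))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(passes):
        total += sample_radiance(rng)
    return total / passes
```

For example, if each sample is uniform noise on [0, 1] (true mean 0.5), a handful of passes gives a visibly "noisy" estimate, while tens of thousands of passes land very close to 0.5, which is why path tracers start grainy and clean up over time.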
So that's pretty mind-bending, like what that even means and the implications for that. But he's also working with MPEG on an open standard for some of these scene graphs and light fields, and some of the data are public out there from some of the early tests that they've been working on. But with their integration into the Unity game engine, they're moving away from a model where you buy the Octane renderer and render everything on your own; they're basically giving it away for free and moving into more of a service model, so that they can have this whole cloud backend to distribute all those compute resources. You participate in this either as a service where you buy rendering time, or through a distributed, cryptocurrency-driven cloud computing platform using an Ethereum render token, to hopefully create this distributed network of cloud computing that can give lots of people access to all these compute resources. And presumably, the more that you help render other people's scenes, maybe you earn more tokens, and they can have sort of a self-sustaining economy doing that, with people participating in it and generating real value, as opposed to the cryptographic puzzles that are usually being solved by cryptocurrencies. Rather than it just being kind of a meaningless puzzle that's being solved, the Render Token is aiming to have those spinning GPUs actually solve render problems that people can see actual value from. And I think that, given that virtual reality is very computationally intensive when it comes to those GPUs, it makes sense to fold those GPUs into some sort of economy like that, such that, you know, you could either get paid with access to more experiences through the render token mining that your computer may have done, or perhaps sell the tokens to other people.
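The token economy sketched above, submit a render job, escrow tokens, then pay workers in proportion to the rendering they actually contributed, can be illustrated with a toy ledger. This is purely hypothetical: RNDR's actual on-chain mechanics, pricing, and verification are not specified in the interview, so every name and number here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class RenderMarket:
    """Toy off-chain ledger for a distributed render market.
    Illustrative only; not RNDR's real contract logic."""
    balances: dict = field(default_factory=dict)

    def submit_job(self, client, rays, price_per_ray):
        # Client escrows tokens up front for the whole job.
        cost = rays * price_per_ray
        assert self.balances.get(client, 0.0) >= cost, "insufficient tokens"
        self.balances[client] -= cost
        return {"rays": rays, "escrow": cost}

    def complete_job(self, job, worker_rays):
        # Pay each GPU node in proportion to the rays it actually traced.
        total = sum(worker_rays.values())
        for worker, rays in worker_rays.items():
            share = job["escrow"] * rays / total
            self.balances[worker] = self.balances.get(worker, 0.0) + share
```

The interesting open problem, untouched here, is verification: proving a node really traced the rays it claims, which is what would replace proof-of-work style puzzles with useful rendering.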
So it's having this participation for the collective, and kind of using the cutting edge of what's available and happening within the blockchain and these cryptocurrencies that are out there. So I think it will be interesting to see where that goes and how it evolves, in terms of whether or not this is going to be a viable way of doing this kind of distributed compute system. They put the render tokens on sale, and that sale has now closed. I think they sold about 4,088.28 Ethereum total, and if you convert that out to what Ethereum's worth today, it's about $2.9 million worth of Ethereum. So the viability of the VR market and the need for these types of rendering resources is going to drive the potential success and utility of something like the Render Token. And this interview was conducted on the last day of July. ARKit had been announced, and Google's ARCore wasn't announced until the end of August. And so all the stuff that Jules was talking about with augmented reality is going to be equally applicable to both ARCore and ARKit. They want to take mixed reality to its full potential. Basically, there's a lot of additional information made available by using digital light field technologies that just really makes sense within the context of augmented reality, especially when you're talking about capturing something that looks photorealistic and putting it into the context of a photorealistic environment, which is essentially what your environment looks like through phone-based AR. But eventually, it's going to get to head-mounted displays and Magic Leap and whatever Google comes out with, and they want to be able to render the distribution of a light field in a way that could be used either in a holographic display or in these heads-up displays with augmented reality, and to have it just look real. It's moving away from the existing model of meshes and more towards digital light fields.
So it's also fascinating to hear that he's been collaborating with different physicists to talk about how something like these huge cloud computing resources could help actually visualize and do science around black holes, based upon the equations that they have, to maybe get some visualizations and develop some more intuitions as to what's actually happening with these black holes, given the math. So that's super fascinating. And it reminds me that after I did the interview with Google's Clay Bavor, he said that in order for them to design the lenses for the Daydream View, they actually had to create a huge cloud-computing-based, physically based renderer to model the photons that were coming through the lenses in order to actually design and craft the lenses. And, you know, once they did that, they were going to move into different iterations of doing machine learning and artificially intelligently designed lenses that are optimized for the physics. But the larger point there is just that Google's also working on their own digital light field rendering solution to be able to do cloud-based, physically based rendering. And so it's something that Otoy is the leader of right now, but this is something that Google has also built in-house in order to design their lenses. And finally, there's this concept of the nature of reality and these discussions that I had with Jules. I always enjoy talking to Jules about this because I know he thinks very deeply about it. And right before I did this interview with Jules, I had just finished the book by Max Tegmark, which is called Our Mathematical Universe, which puts forth this hypothesis that base reality is mathematics and that the structures of reality are isomorphic to a mathematical structure. That's a very Pythagorean idea, and it's something that goes to the philosophy of mathematics.
If you ask a mathematician, is mathematics invented or is it discovered, you get this sort of, well, are there these Platonic objects that are non-spatiotemporal, that are beyond the structures of space-time, and so do they have some sort of actual structure that is discovered? That's sort of the open question: whether or not there's a Platonic ideal, or whether or not it's all sort of just invented. And so there are the different philosophies around Platonism, which believes that there are those ideal forms. And then the fictionalists say there are no ideal forms; the only thing that's real is the things that we can actually empirically observe, and mathematics is a fictional construct that we shouldn't believe. And then there are the nominalists, who take a little bit more of a middle road in between those two, looking at the things that are actually empirically observed. There is something that is uncanny about the ability of mathematics to describe the nature of reality, such that people like Quine and Putnam have the indispensability argument in the philosophy of mathematics, which just looks at how useful math is for describing the basis of reality. But the problem on the other side, against something like the Platonic idea, is that, well, there's no actual way to measure or to get any specific experience of these Platonic forms to be able to know whether or not they're real. You basically can't falsify the theory; there's no way to really get access to it. And what I would say is that it's possible that consciousness may be something that's beyond the structures of space-time as well. There are different arguments as to whether or not consciousness is emergent. That's basically a material reductionist perspective.
But there's other theories like panpsychism, or transcendental idealism, or neutral monism, that look at how consciousness could be a fundamental field that is beyond the structures of space-time, or it could be universal, so that every single photon has an amount of information that it's able to process, and so there's an ability for every photon to make decisions and choices, and consciousness is just permeating everything that's out there. So some of the things that Jules was talking about specifically are that, okay, well, you can get down to the Planck scale, and you don't know what is beyond the Planck scale. So once you go beyond the structures of space-time, then he says that maybe there's these Shannon bits of information theory, and perhaps that is a mechanism for consciousness, or maybe that is a layer of, you know, what could be some type of archetypal realm that goes beyond what we have in space-time. And he sees the potential that, okay, well, maybe one way to test and to develop an epistemology to test whether or not there's these ideal forms is to create these experiences such that they're able to mimic reality as closely as we can, and maybe we'll discover something about the nature of experience and the nature of reality. I'm personally skeptical that we're ever going to be able to compute the universe to the degree of complexity that we have. Just looking at the halting problem and Gödel's incompleteness theorems, there's these contradictions and paradoxes such that it could just not be computable, or it could be incomplete. Right now we don't have a complete theory of quantum mechanics with general relativity, and so there's a bit of an unknown in terms of what the fundamental mathematical structure of base reality could be, and maybe we'll never figure it out.
Maybe Gödel's incompleteness theorems just mean that we're always going to have consistency, but it's always going to be incomplete. That's kind of the trade-off: you can pick one or the other, you can either be consistent or complete, but not both. And what seems to be the case is that things are pretty consistent, but there's just always going to be an incompleteness. So there's always going to be these vast mysteries that we don't fully understand, which means that when you try to think about writing a computer program that replicates all of that, you can't actually do it until you're able to formalize and complete all of reality. That's a never-ending project if reality is isomorphic, or very similar, to a mathematical structure, which suggests, according to Gödel's incompleteness theorems, that you're never going to actually be able to complete it. So those were some of the takeaways that I took from listening to Jules talk about this, and I'm actually pretty skeptical that we're able to do that. But if they are able to do it, then we're going to know a lot more about the nature of consciousness and the nature of reality. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a huge passion project for me to do this. It's also my livelihood. And I just love going to these conferences and having these types of discussions with people. Jules has told me that he feels like he's able to just speak as much as he wants to about all the different topics that he likes to talk about, and he doesn't feel like he has to edit or censor himself. And I just love the opportunity to talk to brilliant people like Jules and have these types of discussions. And it wouldn't be possible without the support from Patreon and the support that I get from you.
This is a listener supported podcast. And if you enjoy these types of conversations and want to see more, then please do become a member and support me on Patreon at patreon.com slash voices of VR. Thanks for listening.
