Stephanie Hurlburt is a low-level graphics engineer who has previously worked on the Unity game engine, Oculus Medium, and Intel’s Project Alloy, and is now creating a texture compression product called Basis at her company Binomial. I had a chance to catch up with her at PAX West, and we take a bit of a deep dive into the graphics pipeline and some of her VR optimization tools and processes. We also talk about how to determine whether an experience is CPU-bound or GPU-bound, an open source game engine being built with Intel, the future of real-time ray tracing in games like Tomorrow Children & Dreams, and why she sees texture compression as a bottleneck in the graphics pipeline worth pursuing for the future of wireless streaming in VR.
LISTEN TO THE VOICES OF VR PODCAST
Here’s a recent talk that Stephanie has given on texture compression and the future of VR
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So in today's episode, I have Stephanie Hurlburt, who is a low-level graphics engineer who in the past has worked on the Unity game engine, as well as on Oculus Medium. She's now started her own company called Binomial, where she's working on a texture compression product. And so I talked to Stephanie today about the process of optimizing your VR experience and the graphics, and kind of an overview of the overall graphics pipeline and the different points that she's been focusing on to try to optimize VR experiences. So we'll be doing a little bit of a deep dive into graphics on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Fishbowl VR. Valve suggests that game developers make a good game, price it right, listen to your community, and update, update, update. But sometimes you need to get candid feedback before you release it to the public, because you don't want to have to dig yourself out of too many negative reviews. So Fishbowl VR does on-demand private user testing videos at an affordable price. You can watch a broad range of users play your experience in their natural environment and get that valuable feedback that will help you make your experience a success. So start getting feedback today at fishbowlvr.com. So this interview with Stephanie happened at the PAX West conference in Seattle, Washington, which ran from September 2nd to 5th. So with that, let's go ahead and dive right in.
[00:01:44.919] Stephanie Hurlburt: So I'm Stephanie. I own a company called Binomial, and we make a texture compression product that can be used in VR. We also do VR consulting.
[00:01:54.265] Kent Bye: Great. So why is texture compression important?
[00:01:58.387] Stephanie Hurlburt: Basically, textures are the number one data bottleneck in all games and applications that we see, or a lot of them. It's important to get your data smaller for a lot of reasons. On things like Gear VR or mobile platforms, people talk about download times all the time, like you need to get that down. And on higher-end headsets, I mean, we can start to talk about possibilities for wireless high-end VR, but we can also just talk about getting more photoreal games, which is hard right now in VR, allowing artists to make better quality textures by compressing that data and making that possible, yeah.
[00:02:35.190] Kent Bye: And so when you talk about compression, where does that fit into the pipeline of creating a VR game? Like, where would someone put in the Binomial product?
[00:02:44.073] Stephanie Hurlburt: They would put it in the game engine, typically. We could work at making plugins for various engines, and we've actually talked about that. Right now, we're talking to a lot of companies that either have custom engines or have the ability to stick this into the source code of the engine.
[00:02:59.918] Kent Bye: I see. And so for you, you've been working in virtual reality as kind of like a low-level graphics engineer. And maybe you could just talk a bit about what makes VR unique when it comes to graphics.
[00:03:12.945] Stephanie Hurlburt: Oh man, well VR is fascinating because for me one of my biggest interests is performance and optimization, and all of a sudden that job is so important in VR. In VR, if you don't hit 90 frames a second, as you know, people will throw up, and so all of a sudden performance and optimization is non-negotiable. Whereas in games, they're often like, oh, this one area stutters the frame rate. Oh, well. There's also a lot of graphics techniques that are new in VR that you can do. For instance, screen space calculations might have been an easy way to make things like bloom and other effects. But in VR, we need to use more volumetric methods because you could turn your head really fast at any moment. You also have such warped vision that screen space becomes harder. Yeah.
[00:04:02.766] Kent Bye: So when people are building a VR application, what are the type of optimization, performance, benchmark, and metric tools that they're able to even look at and see what's happening and know that maybe the texture compression is a bottleneck, that they might need something like Binomial?
[00:04:19.690] Stephanie Hurlburt: That's a really good question. So, I mean, there are lots of tools that you can use to profile your graphics, but the number one thing is just taking a look at frame rate. If you're not hitting frame rate, okay, there's some problem going on. You can inspect what goes on in your frame with tools like RenderDoc and various GPU performance tools. It's important to assess where your bottleneck's happening. For some people, it's GPU-bound, and that's what I hear a lot of people talk about. But for some people, it's really CPU. It depends. And then when it comes to wondering if you need texture compression, the key part is looking at networks and streaming. Sending textures over a network is usually where we see the biggest problems. Like, for instance, there's a streamed game and it's stuttering. That may be a texture compression problem. You're sending too much data. It can't handle it. You need to squish that data a little more. There are also cases where people's current texture compression solutions cause data to take too long to be sent to the GPU, either in the decoding or the transcoding step. So if you start to feel like there's too much delay in sending your data, texture compression can help you there.
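As a rough illustration of the CPU-bound versus GPU-bound check Stephanie describes, here is a minimal C++ sketch that compares CPU frame time against GPU frame time and against the 90 Hz VR budget. The `queryGpuFrameTimeMs()` and `simulateAndSubmitFrame()` functions are hypothetical placeholders (a real engine would use an API-specific GPU timestamp query and its own frame loop); they are hard-coded here only so the sketch runs.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for an API-specific GPU timer query (e.g. a timestamp
// query read back at the end of the frame). Hard-coded so the sketch runs.
double queryGpuFrameTimeMs() {
    return 8.5;
}

// Placeholder for game logic, culling, and draw-call submission on the CPU.
void simulateAndSubmitFrame() {
}

int main() {
    const double frameBudgetMs = 1000.0 / 90.0;  // 90 Hz VR budget, ~11.1 ms

    auto t0 = std::chrono::high_resolution_clock::now();
    simulateAndSubmitFrame();
    auto t1 = std::chrono::high_resolution_clock::now();

    double cpuMs = std::chrono::duration<double, std::milli>(t1 - t0).count();
    double gpuMs = queryGpuFrameTimeMs();

    std::printf("CPU %.2f ms, GPU %.2f ms, budget %.2f ms\n", cpuMs, gpuMs, frameBudgetMs);
    if (cpuMs > frameBudgetMs && cpuMs >= gpuMs)
        std::printf("Likely CPU-bound: simulation/submission is the long pole.\n");
    else if (gpuMs > frameBudgetMs)
        std::printf("Likely GPU-bound: shading/fill work is the long pole.\n");
    else
        std::printf("Hitting frame rate; no obvious bottleneck.\n");
    return 0;
}
```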
[00:05:30.960] Kent Bye: And can you talk a bit about some of the different scenarios that you've seen that are causing something to be GPU-bound versus other scenarios that result in it being CPU-bound?
[00:05:42.184] Stephanie Hurlburt: Sure. So with GPU-bound, for instance, if you have really complicated shaders and a lot of memory and operations done on the GPU, that would cause it to be GPU-bound. And actually, a really good example of this is when I was working at Oculus, we were making Oculus Medium. And there was a lot of discussion of, do you process voxel data on the GPU or the CPU? That's a lot of data. So it kind of locks up whatever one you process it on, in a sense. So it's kind of a design choice of, do we send all this data to be processed on GPU or CPU? They're different. They're completely different architectures. So pros and cons. Yeah.
[00:06:22.125] Kent Bye: My understanding of the CPU is that it's kind of taking care of a lot of the physics engine. It's doing a lot of the multiplayer. It's doing positional tracking. It's doing the game logic, all these different other dimensions. And so from the graphics level, what is the CPU kind of really specifically focused on?
[00:06:38.414] Stephanie Hurlburt: Well, you could do a lot of calculations related to graphics. Basically, the trade-offs are: the GPU is kind of bad with memory, and also, you can't do as complex things as a result. And it's highly parallel. So basically, if you have a problem where you can do a lot of simple stuff in parallel, the GPU is your person. But if you need lots of memory and it's hard to make your application parallel, which is true for a surprising number of things, the GPU is not your friend. So for instance, that's why AI is being used on the GPU now, because it's essentially just a lot of simple operations all at the same time. And graphics is another good contender. And maybe there's room for other processes to go to the GPU, too.
[00:07:26.625] Kent Bye: What are some of the other examples of things that you've seen that are not parallelizable, things that have to be done in kind of serial, sequential processing?
[00:07:34.148] Stephanie Hurlburt: Oh, man. I mean, so many things. Like, for instance, all our algorithms are done on the CPU now. And that's for multiple reasons. And our compression algorithms are an example of something that's hard to parallelize at times. Another example is, say, particle systems where all the particles need to know about all the other particles. Things with high amounts of dependencies are hard to parallelize. So if I need to check all the other particles before I do my operations, well, on the GPU, they're all doing their own thing. They don't know about each other. So I can't do that. I have to wait for all of them to finish before I do that check, and that just kills the GPU.
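To make the dependency problem concrete, here is a minimal sketch of a particle update where every particle has to read the positions of all the other particles before it can move, so nothing can be finalized independently. The `Particle` struct, the `attractStep` function, and the force formula are purely illustrative, not taken from any real engine.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { float x, y, z; };

// Every particle's update reads the positions of *all* the other particles,
// so nothing can be finalized until the whole previous state has been
// examined -- the high-dependency case that is hard to parallelize naively.
void attractStep(std::vector<Particle>& ps, float dt) {
    std::vector<Particle> prev = ps;   // snapshot of last frame's state
    for (std::size_t i = 0; i < ps.size(); ++i) {
        float fx = 0.0f, fy = 0.0f, fz = 0.0f;
        for (std::size_t j = 0; j < prev.size(); ++j) {
            if (i == j) continue;
            float dx = prev[j].x - prev[i].x;
            float dy = prev[j].y - prev[i].y;
            float dz = prev[j].z - prev[i].z;
            float d2 = dx * dx + dy * dy + dz * dz + 1e-4f;
            float inv = 1.0f / (d2 * std::sqrt(d2));   // ~1/r^3 attraction
            fx += dx * inv; fy += dy * inv; fz += dz * inv;
        }
        ps[i].x += fx * dt; ps[i].y += fy * dt; ps[i].z += fz * dt;
    }
}

int main() {
    std::vector<Particle> ps = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    attractStep(ps, 0.016f);   // one 60 Hz-sized step
    return 0;
}
```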
[00:08:12.545] Kent Bye: What are some examples of dependencies that you've seen that, you know, I'm just trying to get a sense of understanding what some of those actually look like, things that are kind of dependent on each other?
[00:08:20.808] Stephanie Hurlburt: Yeah, physics is one of the best examples. So when I worked at Unity, I dealt a little bit with particles, and we talked about particle physics being an option and things like that. And basically, to do it on the GPU, you kind of need to cheat it a little bit. You need to say, I don't actually care about all the other physics objects or particles. I just care about what's next to me and try to guess what that would be. So it's kind of a hack. It's definitely not done properly, but that's a very common example actually. In textures, there are certain texture formats where basically textures are processed in blocks. The image is split up into blocks when it gets sent to the GPU, and each block makes up part of the compressed texture. And when the GPU needs to access one, it can pick out just the block it needs and leave the rest compressed. It has very special ways of handling this. In some GPU formats, what those blocks are is dependent on the blocks around them. So all of a sudden, that becomes a very hard-to-parallelize algorithm on the GPU, because I can't check the other blocks when they're all processed in parallel.
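Here is a minimal sketch of the block layout she describes: the image is treated as independent 4x4 texel blocks of fixed size, so a texel lookup only has to locate and decode one block while the rest stay compressed. The 8-byte block size mirrors BC1-style formats, but the helper and its names are illustrative, not any particular format's specification.

```cpp
#include <cstddef>
#include <cstdio>

constexpr int kBlockDim  = 4;  // 4x4 texels per compressed block
constexpr int kBlockSize = 8;  // bytes per block, as in BC1-style formats

// Byte offset of the independent block containing texel (x, y) in an image
// `width` texels wide -- only this block needs decoding to read the texel.
std::size_t blockOffset(int x, int y, int width) {
    int blocksPerRow = (width + kBlockDim - 1) / kBlockDim;
    int bx = x / kBlockDim;
    int by = y / kBlockDim;
    return static_cast<std::size_t>(by) * blocksPerRow * kBlockSize
         + static_cast<std::size_t>(bx) * kBlockSize;
}

int main() {
    // Texel (37, 9) in a 256-wide image lives in block (9, 2).
    std::printf("offset = %zu bytes\n", blockOffset(37, 9, 256));
    return 0;
}
```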
[00:09:28.026] Kent Bye: And what about procedurally generated content? Is that something that is using both the GPU and CPU?
[00:09:34.348] Stephanie Hurlburt: It can. But procedural generation is a great example of something, depending on how it's done, that can be done on the GPU. Because basically, you can feed it the algorithm, feed it the position, and it doesn't care where its neighbors are. It can just make its content based on that algorithm. So that makes it very GPU friendly. Of course, you can do procedural generation that has dependencies, or a cleanup step, like, I'll procedurally generate it and then I'll check all my neighbors and make sure I still look good, for instance.
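As a small sketch of the neighbor-free case she describes, here each cell's content is a pure function of its own coordinates (a tiny integer hash), so any cell can be generated on its own and in parallel, which is what makes this style of procedural generation GPU-friendly. The hash constants are arbitrary illustrative values, not from any particular noise library.

```cpp
#include <cstdint>
#include <cstdio>

// Height of a terrain cell as a pure function of its own (x, z) coordinates.
// No neighbor reads, so every cell is independent.
float cellHeight(int x, int z) {
    std::uint32_t h = static_cast<std::uint32_t>(x) * 374761393u
                    + static_cast<std::uint32_t>(z) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFF) / 65535.0f;   // height in [0, 1]
}

int main() {
    for (int z = 0; z < 4; ++z) {
        for (int x = 0; x < 4; ++x)
            std::printf("%.2f ", cellHeight(x, z));
        std::printf("\n");
    }
    return 0;
}
```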
[00:10:05.911] Kent Bye: And something like a shader, when I look at the code, it's like a lot of math and a lot of equations. Is that something that's being parallelized against many of the different GPU cores? Is that something that's also using both the CPU and GPU?
[00:10:21.178] Stephanie Hurlburt: No, shaders are awesome because basically you just send the shader code to the GPU from the CPU. The CPU doesn't compile the shaders at all. The GPU will deal with compiling it and basically turning that into GPU instructions. And they all happen in parallel. So all your math operations that you see in your shaders will happen in parallel. And in fact, you have to be very careful for that reason. Don't have too much code in your shaders or too complex operations because, since everything is done in parallel, each unit doesn't have a ton of processing power.
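For context, here is a minimal sketch of what the application side of that hand-off can look like in OpenGL: the raw GLSL source string is passed to the driver, which produces a compiled shader object whose per-pixel math then runs across many fragments in parallel. It assumes a current OpenGL context and loaded GL function pointers (for example via GLAD, whose include path here is an assumption), and error handling is reduced to a single status check.

```cpp
#include <glad/glad.h>
#include <cstdio>

// A deliberately tiny fragment shader; the math inside runs once per fragment,
// across many GPU lanes at the same time.
const char* kFragmentSrc = R"(#version 330 core
out vec4 color;
void main() {
    color = vec4(0.2, 0.4, 0.8, 1.0);
})";

// Hands the GLSL source to the driver and returns the compiled shader object.
// Assumes a current OpenGL context has already been created by the caller.
GLuint compileFragmentShader() {
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &kFragmentSrc, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) std::fprintf(stderr, "fragment shader failed to compile\n");
    return shader;
}
```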
[00:10:55.498] Kent Bye: I see. And so for you, what are some of the things that you've seen that are different in doing something that you could do some cheats within a 3D environment versus actually being immersed within that 3D environment? So for example, seeing something on a 2D screen versus being immersed within VR.
[00:11:13.214] Stephanie Hurlburt: Oh man, I mean particle systems are actually a great example of that. I mean there's so many examples, right? Where in a 2D game you might be able to deal with like billboarded particles. So basically just 2D rectangular sprites that always try to face you. In VR you notice that that's a 2D sprite and you get really irritated by that. So basically your particles in VR really should be volumetric and actually that is a perfect case study because there's just a lot of things that are 2D on your screen and you're not bothered by that because it's a 2D screen but in VR you need everything to have volume or else you get annoyed.
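As a small illustration of the billboarding trick she contrasts with volumetric particles, here is a minimal sketch that builds a camera-facing quad from a particle's position and the camera's right and up vectors. `Vec3`, the helper names, and the hard-coded camera axes are illustrative placeholders; a real renderer would pull those axes from the view matrix every frame.

```cpp
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Four corners of a quad centered on the particle, spanned by the camera's
// right and up vectors so the sprite always faces the viewer.
std::array<Vec3, 4> billboardCorners(Vec3 center, Vec3 camRight, Vec3 camUp, float halfSize) {
    Vec3 r = scale(camRight, halfSize);
    Vec3 u = scale(camUp,    halfSize);
    return {{
        add(add(center, scale(r, -1.0f)), scale(u, -1.0f)),  // bottom-left
        add(add(center, r),               scale(u, -1.0f)),  // bottom-right
        add(add(center, r),               u),                // top-right
        add(add(center, scale(r, -1.0f)), u),                // top-left
    }};
}

int main() {
    // Axis-aligned camera vectors, purely for illustration.
    auto corners = billboardCorners({0, 1, 0}, {1, 0, 0}, {0, 1, 0}, 0.5f);
    for (const auto& c : corners) std::printf("(%.1f, %.1f, %.1f)\n", c.x, c.y, c.z);
    return 0;
}
```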
[00:11:53.067] Kent Bye: In talking to different people about the future of where things are going, I hear a lot of talk about digital light fields. And there's a lot of data-intensive processes of both from capturing, of compressing, and then streaming and delivering it. So from the stuff that you're working at, Binomial, are you looking at digital light fields at all? Or is that sort of a completely different problem?
[00:12:13.871] Stephanie Hurlburt: The stuff we're dealing with at Binomial is basically we do contract work. So we might help optimize digital light fields as a part of a contract. For a texture compression product right now, there's so much code that's specific to image, like dealing with colors and color spaces and pixels and all this. So that code isn't just droppable into something like a light field unless it was structured in that way. But it has a lot in common. In fact, we talk a lot about trying to do more general purpose compression, and it might be a field that we go into in the future.
[00:12:51.290] Kent Bye: Yeah, and talking to Neil Trevett of the Khronos Group, there's this new format for containing 3D objects called glTF. And there's the potential to either extend it or include different levels of compression. But in talking to Neil, he was talking about the MPEG group using a specific type of compression for, I don't know if it's for the mesh compression. So maybe you could talk about glTF and some of the other mesh compression versus some of the texture compression that you're doing.
[00:13:18.403] Stephanie Hurlburt: Exactly. So mesh compression is much like what I was saying about light fields. It's just, since we deal with such texture-specific data, it's not exactly drag and drop, but there are commonalities. And glTF is super interesting. We've actually very recently been exploring getting involved with Khronos and helping with glTF, and seeing how we can help with compression at Khronos. So I'm actually very excited. I like open formats a lot. Like, I've always been a huge OpenGL and Vulkan supporter. I think having an open standard for all things is very important to our industry. Yeah.
[00:13:54.847] Kent Bye: So would your product be incorporated within those open standards, or would that be something that you still productize and license out?
[00:14:01.473] Stephanie Hurlburt: I'm not sure yet. If we keep this a closed-source, proprietary product, which is what we're planning on doing, likely not, but it depends. And we would be open to contributing our compression knowledge to that effort, so basically being contributors.
[00:14:17.567] Kent Bye: And what are some of the biggest open problems that you're trying to solve right now?
[00:14:20.865] Stephanie Hurlburt: Oh man, there's so many. Basically, the biggest open problem that we're trying to solve is, like, how do you make it even better? There are three metrics for texture compression. There's quality, making sure you don't kill the quality of your images. There's transcoding or decoding speed, so that's taking the image from the CPU, and how long does it take to actually render that? Do you have to unpack it or do anything? And then there's size, shrinking it down to as small as possible. And often you make trade-offs between the three. So for us, a big open problem in compression is basically how to optimize that as much as possible and get the best speed, quality, and size, and not sacrifice too much.
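As a rough illustration of those three axes, here is a minimal sketch that records size and decode time alongside a PSNR quality measurement for a decoded image. The `CompressionResult` struct and its field names are illustrative, not Binomial's actual API; PSNR is just one common stand-in for "quality."

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// One record per compression run: the three axes to trade off against each other.
struct CompressionResult {
    std::size_t compressedBytes;   // size
    double      decodeMillis;      // transcode/decode speed
    double      psnrDb;            // quality
};

// Peak signal-to-noise ratio between an original 8-bit image and its decoded
// version; assumes both buffers are the same, non-zero size.
double psnr(const std::vector<std::uint8_t>& original,
            const std::vector<std::uint8_t>& decoded) {
    double mse = 0.0;
    for (std::size_t i = 0; i < original.size(); ++i) {
        double d = double(original[i]) - double(decoded[i]);
        mse += d * d;
    }
    mse /= double(original.size());
    if (mse == 0.0) return INFINITY;   // identical images
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}

int main() {
    std::vector<std::uint8_t> original = {10, 20, 30, 40};
    std::vector<std::uint8_t> decoded  = {12, 19, 33, 40};
    CompressionResult r{2, 0.01, psnr(original, decoded)};
    std::printf("size=%zu bytes, decode=%.3f ms, quality=%.1f dB\n",
                r.compressedBytes, r.decodeMillis, r.psnrDb);
    return 0;
}
```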
[00:15:05.038] Kent Bye: So it sounds like what you're saying as well is that the compression is kind of like the data bottleneck for being able to do something like, potentially in the future, wireless VR.
[00:15:13.586] Stephanie Hurlburt: Yeah, exactly. I mean, again, we're in texture compression. So if you can think of a solution for wireless VR, which we think is definitely possible, that involves texture compression, sending textures over a network, you can definitely look at our compression solution to make that possible. And in fact, from our early benchmarks, we think that our texture compression solution will literally be the best texture compression solution out there. So it would definitely be worth looking at.
[00:15:42.898] Kent Bye: So with people getting into VR, they could download Unity and start to download assets from the asset store and start to produce VR experiences without ever having to get as low-level as some of the stuff that you're working on. So I'm trying to figure out, like, for you as a graphics low-level expert, like, where would you kind of enter into the pipeline with some of these VR projects and what type of problems would you be solving there?
[00:16:07.515] Stephanie Hurlburt: Yeah, all kinds of stuff. So, I mean, one important note to touch on is that I think it's really important to the VR ecosystem to have open source alternatives to engines. And so we've actually been working partially with Intel to try to create an open source VR engine. And it's still in progress. We'll see how it goes. But we've been working with them on Project Alloy a little bit. And that's been a big topic of discussion. But for helping game developers, as things are now, I mean, Rich and I both worked at Unity, so we have a pretty good view of how it internally works, and we also have a lot of C++ experience, so something like Unreal wouldn't be hard for us to dive into either. And we can basically take a game that's not running well and, with our knowledge of engine internals, even at a higher level, go in there and tweak things to make it run a lot better. And of course, we naturally work well in engines where we have the source code, so either custom engines or a source license.
[00:17:08.982] Kent Bye: I see. So if you see some sort of bottleneck, you'll be able to kind of dive into the low level. For Unreal Engine, since the source code's available, you could go in there and start to modify it and then make it perform better, it sounds like.
[00:17:20.265] Stephanie Hurlburt: Exactly. So we could use different techniques, or we can just say, for instance, okay, your shaders are too huge, let's optimize them. And both of us have a really good knowledge of hardware and how it works under the hood, so it's easier for us to look at. We don't look at code from the sense of, we'll make a fun gameplay experience; we look at code from, we know how the hardware's going to handle that in memory, you're not structuring things in the right way, we'll clean it up, and then you can focus on making the most fun VR experience and not worry about that.
[00:17:51.617] Kent Bye: And I've also heard some design agencies starting to use Cinder as a platform to be able to build VR experiences. So maybe you could talk about what you see is happening within the Cinder community when it comes to driving VR experiences.
[00:18:04.869] Stephanie Hurlburt: Yeah, Cinder's really close to my heart. It's free and open source, and it supports VR. I love Cinder's efforts. And most recently, I helped a student get a project building in Cinder in VR, and it was actually a really good experience. I think that's a great example of a library that has low-level code that you can dive into, but it has higher-level code that's easier to understand. So it's definitely not something that beginners couldn't try out, which I love. That's hard to find sometimes.
[00:18:36.960] Kent Bye: Yeah, and I know that there's a lot of these content creation tools like Tilt Brush and Oculus Medium where you're actually going in, and as a creator, you're able to create these kind of immersive 3D sculptures or illustrations in 3D with Tilt Brush. And so, what were some of the biggest graphics problems that needed to be solved in order to make a program like that possible?
[00:18:58.476] Stephanie Hurlburt: Oh, man. I mean, lots, and I'm sure it's still ongoing and evolving even after I've left the project. Like, for instance, the biggest thing is, in a lot of games you can make a lot of optimizations if you already know what your meshes are, or if you know they aren't moving, or if you have some sort of constraint on your meshes. But in programs like that, people can draw anywhere and erase anywhere. So you don't have those luxuries. So it's all about, like, you know, the easiest thing to do might be to say, let's just make a really high poly mesh, just in case someone decides to carve out a little detail, then they can just push down the triangles. But then that takes up processing power. So you have to explore, like, what are more efficient ways to generate that? And there's also the debate between how you should store that data. Like, should it be in voxels? Or should the data you have to store be the actual mesh points? And if they're the mesh points, it's hard to be like, I carved this surface, so how can I manipulate those mesh points in a way that actually looks good? Yeah.
[00:20:03.227] Kent Bye: What's some of the big difference between the voxels and meshes then?
[00:20:06.225] Stephanie Hurlburt: So with voxels, it's just a 3D grid of data. And you can render it in a number of ways. So one way to render it is to basically every frame or every change that's made, say, here's my grid of data, make me a mesh around this data. And you can generate that in other ways. Another way to do it is to send that data to the GPU and say, like, ray trace this or render it another way. With meshes, you literally store the points in a mesh. You can imagine voxels are easier for us to understand, like we have a concept of here's like a 3D map of space, whereas meshes were like we don't have any concept of a 3D map of space, we just have these points that are supposed to shape into triangles, like it's not as intuitive, but it's what our computer kind of natively understands.
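Here is a minimal sketch of the "here's my grid of data, make me a mesh around it" idea: a naive blocky mesher that walks a boolean voxel grid and emits geometry only where a filled voxel borders empty space. Real sculpting tools use smarter surface extraction (marching cubes and relatives), so the types and logic here are purely illustrative.

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct VoxelGrid {
    int nx, ny, nz;
    std::vector<bool> filled;   // nx*ny*nz occupancy flags
    bool at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return false;
        return filled[(z * ny + y) * nx + x];
    }
};

// Emits one point per exposed face (the face center, pushed half a voxel
// outward); expanding each point into a real two-triangle quad is elided
// to keep the sketch short.
std::vector<Vec3> buildBlockyMesh(const VoxelGrid& g) {
    static const int dirs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    std::vector<Vec3> faces;
    for (int z = 0; z < g.nz; ++z)
        for (int y = 0; y < g.ny; ++y)
            for (int x = 0; x < g.nx; ++x) {
                if (!g.at(x, y, z)) continue;                          // empty voxel
                for (const auto& d : dirs) {
                    if (g.at(x + d[0], y + d[1], z + d[2])) continue;  // face hidden by neighbor
                    faces.push_back({x + 0.5f + 0.5f * d[0],
                                     y + 0.5f + 0.5f * d[1],
                                     z + 0.5f + 0.5f * d[2]});
                }
            }
    return faces;
}

int main() {
    VoxelGrid g{2, 2, 2, std::vector<bool>(8, false)};
    g.filled[0] = true;                                   // single voxel at (0, 0, 0)
    std::printf("exposed faces: %zu\n", buildBlockyMesh(g).size());  // prints 6
    return 0;
}
```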
[00:20:55.254] Kent Bye: Yeah, the thing that I've been wondering is whether or not, with some of these programs where you're able to create 3D objects within 3D, like Oculus Medium, you might have a lot more creativity in creating it, but at the end of the day, you might have an object that might not be as efficient as if you were to create it, say, in Maya, a little bit more optimized. But do you see that there's a difference in the sort of performance and optimization of 3D objects that are created within a sculpture program like Oculus Medium versus if they would have created the same object within Maya?
[00:21:32.515] Stephanie Hurlburt: Well, actually, I'm not sure, because there's a lot of room. I mean, there's no reason why... There are obviously performance bottlenecks in VR that you don't get on 2D screens in Maya, in fact significant amounts, but still, there's no reason why it can't be as performant. So actually, I'd be very curious to see the current performance. It might be pretty good. Obviously, when you're early in development for something, your main focus is just making it work. But then after that point, there's plenty of opportunity to make it performant. In fact, you could even do steps like, when you export it, do a pass to clean it up and make it more... There's all kinds of tricks they could do.
[00:22:07.960] Kent Bye: Great. So what's next for you in Binomial then?
[00:22:12.035] Stephanie Hurlburt: So we're continuing to develop our texture compression and talk to people about who needs it and who would want to buy it. We're also, to fund that effort, doing VR contracts and also just because we love doing VR demos. So most recently we worked with Intel's Project Alloy. Too many of my projects are secret and I can't talk about them. But we basically help people both with graphics optimization, since we have that engine knowledge, and making demos, since having that engine knowledge can be useful in making those demos.
[00:22:43.224] Kent Bye: Well, is it safe to assume that because you are a low-level graphics engineer that you're working more on projects that either use open source VR platforms or the Unreal Engine?
[00:22:55.860] Stephanie Hurlburt: We do a lot of projects in that, but actually we get a lot of requests to do Unity projects as well because a lot of people, teams who start Unity projects, the whole point of starting that is to not have an engine programmer on your team. And so sometimes they still run into problems like with memory or optimization or shader code not running well, but they didn't hire an engine programmer. So now they can hire us as contract to kind of clean that up and optimize it. So we do both. Yeah.
[00:23:25.080] Kent Bye: I see. So what's the process look like to write the shader code for you?
[00:23:29.242] Stephanie Hurlburt: Oh, man. Well, when it's an optimization project, it's taking into account the person's already written shader code. And the first step is just kind of building a mental model of like, how do I think memory is being allocated? How do I think the compiler is taking this code? Like, for instance, I know a lot of GPUs, if you have a lot of if statements and nested statements, they'll just compile all of it and run all the options. Because think about it, it's cheaper to run every option and then pick the one the user ended up choosing than to stop and have to start again. It's parallelism. You want to do as many tasks at the same time as possible. So just thinking of those mental models and thinking, how can I make this code better given what I know of GPUs and how they work?
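A minimal sketch of that "run every option, then pick one" behavior, written in plain C++ as a stand-in for what a shader compiler can do per lane: both sides of the branch are evaluated and a mask selects the result that applies. The shading math itself is made up purely for illustration.

```cpp
#include <cstdio>

// Both sides of the "branch" are evaluated, then a mask picks the one that
// applies -- roughly what happens per lane when threads in a GPU wavefront
// diverge on a condition.
float shadeBranchless(float roughness) {
    float cheap        = roughness * 0.5f;                   // "else" side
    float expensive    = roughness * roughness * 3.0f;       // "if" side
    float useExpensive = (roughness > 0.5f) ? 1.0f : 0.0f;   // selection mask
    return useExpensive * expensive + (1.0f - useExpensive) * cheap;
}

int main() {
    std::printf("%.3f %.3f\n", shadeBranchless(0.2f), shadeBranchless(0.9f));
    return 0;
}
```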
[00:24:16.735] Kent Bye: Do you look at ShaderToy a lot for inspiration?
[00:24:19.177] Stephanie Hurlburt: It's amazing. And it's also getting really ridiculous. Someone told me there was a Commodore 64 emulator on ShaderToy. I'm just like, why? That's amazing, though. I really admire that.
[00:24:34.224] Kent Bye: Yeah, the other thing that I find really interesting is the demo scene of people who are able to, just from the process of writing these shaders, write entire fully immersive experiences that may be just like 64K.
[00:24:47.548] Stephanie Hurlburt: It's amazing. And people have started to talk a lot about ray tracing because of things like ShaderToy. And I've heard people say to me entirely realistically that especially for VR, ray tracing could be in our future. In fact, you see small examples of games that use ray tracing already. Like I've heard that Tomorrow Children is all ray traced. Dreams uses a lot of ray tracing. Unreal will take concepts like voxels or ray tracing and use them in parts of the engine. And ShaderToy is a large inspiration for a lot of that work.
[00:25:19.719] Kent Bye: Yeah, just my understanding of ray tracing is that you're kind of actually tracing the photon path to be able to do the material and reflective properties, and that it's typically been in the domain of things like RenderMan for Pixar, to be able to do these super photorealistic rendered scenes that may take a number of hours for each frame to be rendered. And so you're talking about doing that at 90 frames per second. Is real-time ray tracing something you see kind of on the horizon?
[00:25:46.972] Stephanie Hurlburt: And it's already happening, but in simplified forms, of course. Like we're not, we're not making Pixar RenderMan for sure, but we are already seeing examples of games or demos that are ray traced and work in real time. And it's interesting because it depends on your scene in VR. If you have tons of geometry and just like tons of mesh points, maybe it's more efficient to ray trace it. Cause that's a lot of data you're sending to the GPU. I don't know. It depends on your application and what kind of graphics effects you need. But it's something we should definitely be creative about. Graphics is so new, and VR is even newer. There's a lot of creative possibilities out there.
[00:26:30.959] Kent Bye: Great. And finally, what do you see as the ultimate potential of virtual reality, and what it might be able to enable?
[00:26:38.245] Stephanie Hurlburt: Oh my gosh, it depends so much but on the compression angle it'd be really nice to see like really high quality experiences like you get on Oculus Rift or the Vive headset and photo real experiences where you feel like you're actually there and have them be wireless and have them be good quality and not have to make that trade-off. I mean there's so much in the future for instance I'm looking at HoloLens like if we could improve the performance of that and have photo real like crazy immersive experiences in AR, too. I think that's the step I'm really excited about as a performance engineer.
[00:27:16.005] Kent Bye: Have you started to look at augmented reality at all?
[00:27:18.186] Stephanie Hurlburt: Oh, most definitely. Yeah, we definitely do HoloLens projects as well, which has been really awesome. It's a different experience, because it's lower power than something like a beefy computer running a Vive demo. But it's really, really awesome.
[00:27:33.152] Kent Bye: Anything else left unsaid that you'd like to say?
[00:27:35.758] Stephanie Hurlburt: Well, if anyone has any questions or wants to check us out, our website's binomial.info. And my name's Stephanie. And you can email us any time and ask us questions.
[00:27:47.825] Kent Bye: Awesome. Well, thank you so much, Stephanie.
[00:27:49.206] Stephanie Hurlburt: Yeah, no problem. Thanks.
[00:27:51.227] Kent Bye: So that was Stephanie Hurlburt. She is a low-level graphics engineer who formerly worked on the Unity game engine and Oculus Medium, and now has her own company called Binomial, where she's working on a texture compression product. So I have a number of different takeaways from this interview. First of all, for most people who are working on VR experiences, they probably wouldn't need somebody who's as low-level as Stephanie working on their product, mostly because, out of the box, a lot of the stuff is taken care of by products like Unity and Unreal Engine. However, because she knows the internals, she can go in there and optimize other parts of the VR experience. And also with Unreal Engine, because the source code is available, they are able to go in and start to change the source code to be able to optimize the VR experience if they need to. It also seems like one of the things Stephanie has been working on is open source implementations of graphics engines. She mentioned that she had been working on one of these open source game engines with Intel as part of the Project Alloy that is coming out. As far as I know, that hasn't been announced or discussed anywhere else, so I guess we'll kind of have to wait and see what comes of that game engine. But another thing that she takes a look at to optimize is shader code. So if you've never looked at ShaderToy, it's something that you should go check out to see what you can do with this code that is essentially parallelizable. You can take these instructions and they're sent out to the GPU to essentially generate different graphics. And ShaderToy is just kind of like a playground for people to go and do all sorts of really crazy shaders, or shaders that you might be able to take and just put into an experience to do all sorts of different effects. So optimizing shaders is another thing that she has taken a look at. In terms of texture compression, this is something that's a little bit above my head in terms of the specifics of it. But from what I gather, it's an issue that is a bit of a bottleneck in the graphics pipeline, and she's trying to figure out the perfect balance of trade-offs between these three things: the quality of the compression, the size of the compression, and the transcoding and decoding speed of that compression algorithm. So it sounds like this may be a thing that is going to be more and more important with wireless VR; I think it was at Steam Dev Days that Nitero's wireless VR solution was announced for the first time. But in the future, for wireless streaming, especially to mobile, and perhaps with a standalone system like the one Oculus just announced at Oculus Connect 3, their Santa Cruz system, different compression algorithms could become a lot more important, especially if those systems are going to be receiving streams that are transmitted wirelessly. So the other thing that was really striking to me was this idea of real-time ray tracing. Stephanie mentioned a couple of games that have already started to experiment with this a little bit. One was the PS4 game called Tomorrow Children, which she said was pretty much all ray traced. And looking at that, it looks like they're using a technique called cascaded voxel cone ray tracing.
So the idea is that instead of rendering out all of the dense geometry that may be involved with some of these experiences, they're instead trying to trace the path of all the different photons. And this, again, is something that has typically taken programs like Pixar's RenderMan hours and hours to render out a single frame, and we're talking about doing that at anywhere from 30 to 60 to 90 frames per second. Especially with VR, you're looking at 90 frames per second; in some of the other PS4 games, it may be around 30 or 60. But Dreams is another game that's going to be coming out soon for the PlayStation as well as PSVR, and that's also something where Stephanie says they're using some ray tracing techniques. So this is just something that is going to give experiences a little bit more of a photorealistic rendering quality. So it'll be interesting to see where that all leads, and how digital light fields and real-time ray tracing start to play out and perhaps become more and more a part of the future of virtual reality. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you'd like to support the podcast, then please do tell your friends, spread the word, and consider becoming a donor to my Patreon, just to help keep this going and to support this work as a service to the larger virtual reality community. So you can donate at patreon.com slash Voices of VR.