#265: NVIDIA’s GameWorks VR Features & Integration with Unreal Engine

Tony Tamasi is the senior vice president of content and technology at NVIDIA, and the company recently announced the first steps towards supporting foveated rendering with Multi-Res Shading, as well as VR SLI integration in Unreal Engine 4. I talked with Tony at the VRX Conference about NVIDIA's GameWorks VR SDK and VR-specific features like Direct to Rift support, NVIDIA's research into advanced lightfield display technologies, and some of the unique graphics challenges that virtual reality has posed to GPU manufacturers.

LISTEN TO THE VOICES OF VR PODCAST

NVIDIA has been looking at VR and AR as a catalyst for new features and functionality, and even made some hardware changes within its Maxwell architecture to support features like Multi-Res Shading. The company is hoping that some of these features could provide enough of a performance boost that VR enthusiasts could match the performance of a GTX 970 with a card that falls below the minimum GPU spec recommended by Oculus.

I talked with Tony about some of the other VR-specific features that NVIDIA has been working on for both VR content creators and VR hardware manufacturers. Here's an excerpt from their press release that describes the VR features that we talk about on the podcast:

For game and application developers:

  • VR SLI—provides increased performance for virtual reality apps where multiple GPUs can be assigned a specific eye to dramatically accelerate stereo rendering. With the GPU affinity API, VR SLI allows scaling for systems with >2 GPUs.
  • Multi-Res Shading—an innovative new rendering technique for VR whereby each part of an image is rendered at a resolution that better matches the pixel density of the warped image. Multi-Res Shading uses Maxwell’s multi-projection architecture to render multiple scaled viewports in a single pass, delivering substantial performance improvements.

For headset developers:

  • Context Priority—provides headset developers with control over GPU scheduling to support advanced virtual reality features such as asynchronous time warp, which cuts latency and quickly adjusts images as gamers move their heads, without the need to re-render a new frame.
  • Direct Mode—the NVIDIA driver treats VR headsets as head mounted displays accessible only to VR applications, rather than a typical Windows monitor that your PC shows up on, providing better plug and play support and compatibility for VR headsets.
  • Front Buffer Render—enables the GPU to render directly to the front buffer to reduce latency.

NVIDIA's press release said that these features will most likely land in the next Unreal Engine release, 4.11, but an exact release version has not been confirmed yet.

Finally, I talk with Tony about balancing the cooperation needed to make VR successful with continuing to push forward on competitive advantages. While there have been a number of surprising collaborations for NVIDIA, the company will be primarily focused on innovating on its product features to keep pushing the limits of performance and what's possible in striving towards rendering more and more realistic real-time virtual environments. For more information, be sure to check out the NVIDIA GameWorks VR page.

Become a Patron! Support the Voices of VR Podcast on Patreon.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:12.080] Tony Tamasi: I'm Tony Tamasi. I'm the Senior Vice President of Content and Technology for NVIDIA. Basically what that means is I work with the folks who are creating games in VR and technology, and building tools and technology to help them make it cooler. And what we're doing today was kind of giving our perspective on the VR platform, how big it could be, what it'll look like in the next few years, and giving an overview of our GameWorks VR SDK, and in particular talking about multi-resolution, just kind of a first step towards foveated rendering. And the idea is in VR, particularly because of the way the lenses work, the edge of the screen has less resolution than the center of the screen. The unfortunate fact is that today graphics processors, computers, have to render everything at one resolution regardless of what the user is going to see. So what we invented was the ability to kind of dice the screen up into multiple sections, put the full resolution at the center of the screen where you can see it, and put less resolution at the edge. It's visually imperceptible, you can't actually see the difference, but you get about 50% more performance because you're putting the resolution where you can see it and not wasting it where you can't. Does that require eye tracking in order to do that then? Yeah, so this doesn't, because it's kind of assuming that you're looking towards the center. And it's actually programmable by the developer, so they can kind of decide how far they want the inset to be in terms of lower resolution, how much resolution they want to spend where. The reason I said it's kind of a first step towards foveated rendering is that the idea with foveated rendering is to use an eye tracker to track where the user is looking, and you put the resolution where the user is looking. This is just assuming that the center region of your screen is where you're going to put full resolution. And the neat thing about that is that we actually modified our GPU architecture a while back, this is the Maxwell GPU architecture, or the GTX 9 series, to support this in hardware. So there's no performance cost for it. There's just nothing but kind of upside and benefit. We integrated that into Unreal Engine 4 and that's going to be available in the next couple of months.
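
To make the pixel-count argument concrete, here is a minimal, purely illustrative C++ sketch of the multi-resolution idea Tony describes: split the eye buffer into a 3x3 grid of viewports, shade the centre viewport at full density, and shade the edge viewports at a reduced scale. The render target size, centre fraction, and edge scale below are invented example values (they're developer-tunable in GameWorks VR), and this is only the arithmetic behind the roughly 50% figure, not NVIDIA's actual API.

```cpp
#include <cstdio>

// Back-of-the-envelope model of multi-res shading: a 3x3 viewport grid with
// a full-resolution centre and half-resolution edges. All numbers below are
// illustrative assumptions, not GameWorks VR defaults.
int main() {
    const int width = 1512, height = 1680;  // per-eye render target (example)
    const float centerFrac = 0.6f;          // centre viewport covers 60% per axis
    const float edgeScale  = 0.5f;          // edge viewports shaded at half scale

    float fullPixels = float(width) * float(height);
    float shadedPixels = 0.0f;

    // Per-axis extents of the three columns/rows: edge, centre, edge.
    float axisFrac[3] = {(1 - centerFrac) / 2, centerFrac, (1 - centerFrac) / 2};
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col) {
            // The centre cell keeps full density; every other cell is scaled down.
            float scale = (row == 1 && col == 1) ? 1.0f : edgeScale;
            shadedPixels += (axisFrac[col] * width * scale) *
                            (axisFrac[row] * height * scale);
        }
    }

    printf("shaded %.0f of %.0f pixels (%.0f%%)\n",
           shadedPixels, fullPixels, 100.0f * shadedPixels / fullPixels);
    return 0;
}
```

With these example numbers only about half of the pixels get shaded, which lines up with the ballpark 50% performance gain mentioned in the interview.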

[00:02:06.485] Kent Bye: In terms of being able to do multiple GPUs, is this on the path in order to scale up to 4 or 16 GPUs to have even more advanced rendering within a VR scene?

[00:02:17.333] Tony Tamasi: It's kind of related to that, so multi-resolution can work on a single GPU. We have another technology called VR SLI, which is more along the lines of what you're talking about, which is being able to kind of gang up multiple GPUs to get you more rendering horsepower. Of course, the problem with traditional SLI, the way it's worked in PC gaming, is generally it uses what's called an alternate frame approach, which means one GPU renders one frame and then another GPU renders the subsequent frame, and that's a serial approach. And while that can improve your performance in terms of frame rate, it actually doesn't reduce your latency. So what we did with VR SLI is allow you to assign essentially one GPU to one eye, one GPU to the other eye, since you're rendering in stereo, and you render them both at the same time, so you get twice the GPU horsepower with no increase in latency, which is ultimately what you want for VR. Now, the current generation of graphics APIs, DirectX 12 and Vulkan, have support for what's called explicit SLI, which means the developer now has explicit control over where they want to do the rendering work, and they can now kind of span work across as many GPUs as they can keep busy. So the combination of multi-resolution, VR SLI, and these new graphics APIs should kind of untap a whole wave of cool stuff.
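
The latency argument here is easy to see with a toy timing model. The sketch below is not NVIDIA's implementation; it just assumes an invented 8 ms cost per eye view and compares a single GPU, alternate-frame SLI, and the per-eye split of VR SLI.

```cpp
#include <cstdio>

// Toy timing model contrasting alternate-frame rendering (AFR) with the
// per-eye split used by VR SLI. The per-eye cost is an assumed example value.
int main() {
    const double eye_ms = 8.0;                // hypothetical cost of one eye view

    // One GPU renders both eyes back to back.
    double single_latency = 2 * eye_ms;       // 16 ms from start to finished frame

    // AFR: two GPUs take turns on whole stereo frames. Each frame still costs
    // 2 * eye_ms on whichever GPU owns it, so latency is unchanged; only the
    // rate at which frames complete improves.
    double afr_latency = 2 * eye_ms;          // still 16 ms
    double afr_frame_interval = eye_ms;       // a frame finishes every 8 ms

    // VR SLI: each GPU renders one eye of the same frame in parallel, so the
    // stereo frame is finished in the time of a single eye.
    double sli_latency = eye_ms;              // 8 ms
    double sli_frame_interval = eye_ms;       // frames also finish every 8 ms

    printf("single GPU : latency %.0f ms\n", single_latency);
    printf("AFR (2 GPU): latency %.0f ms, frame every %.0f ms\n",
           afr_latency, afr_frame_interval);
    printf("VR SLI     : latency %.0f ms, frame every %.0f ms\n",
           sli_latency, sli_frame_interval);
    return 0;
}
```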

[00:03:25.471] Kent Bye: Great. And so what do you see that this is going to enable that you can do now that you weren't able to do before?

[00:03:30.893] Tony Tamasi: A couple of things. So one way you can kind of benefit from something like multi-resolution is you can get much higher quality visuals without requiring super crazy hardware, which would give you a larger install base to potentially run your game or your experience on. Another way to think about it is that if you look at the Oculus min spec, which is a GTX 970, you can actually take something like a 960 or maybe even a 950, use multi-resolution, and get the exact same experience that you would have gotten with that 970. So we can get to kind of a larger install base sooner. Or if you have that higher-end piece of hardware, you can deliver 50% to 2 times better visuals, better imagery. Now the combination of multi-res and VR SLI and these new APIs will let you gang up an enormous amount of graphics horsepower and do that now, and we kind of view that as almost like a time machine into the future. And so if today's PCs are struggling to do kind of the Oculus and Vive resolutions at 90 Hz in stereo, with that kind of a rig you can start to think about 4K or even 6K resolution displays at even higher frame rates, and at least start doing the development for that now, because the technology exists. It might be expensive and it might not be reasonable, since you're not going to have a hundred million PCs out there like that, but in a few years there will be that much horsepower out there, and then you can start to get your development going so that when that arrives, you're ready.

[00:04:43.759] Kent Bye: And what has NVIDIA had to do in order to kind of keep up with the demands of virtual reality?

[00:04:49.613] Tony Tamasi: Yeah, VR has a huge, basically infinite, appetite for graphics horsepower, which of course is great for us, we love that. So in fact, as I mentioned with Maxwell, we specifically architected some features into the Maxwell GPU architecture just around virtual reality. That multi-resolution technology that I mentioned takes advantage of a feature we built in called multi-projection, and what that lets you do is take all the geometry for a scene and broadcast it to a number of viewports, in the case of multi-resolution it's nine viewports, and do that in a single pass. So we had to build special capability in the hardware to do that, specifically to enable things like multi-resolution, and you can imagine that for stereo or those kinds of things. We've also worked really hard to reduce latency, both in our GPU architecture itself as well as in our software stack, because in the end, that's the thing that kind of gets everyone sick. And ultimately what you want is the ability to sample tracking input as late as you possibly can and get the resulting image to the screen as fast as you can. So we've done things with Oculus and Valve, particularly with Oculus on the late warp idea, where you can sample head tracking input really, really late, grab the last frame that is complete, and then instead of re-rendering it all, just actually do a late warp, which for some classes of content works great. It gives you that sense of immersion and presence and helps you get the latency down even further.
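
Here is a deliberately simplified, yaw-only sketch of that late-warp idea. It is not Oculus' or NVIDIA's timewarp code: the field of view, the timings, and the sinusoidal "tracker" are all invented placeholders, and real asynchronous timewarp applies a full rotational reprojection on a high-priority GPU context rather than a one-dimensional pixel shift. The shape of the calculation is the point: render from an early pose sample, then sample the pose again just before scanout and shift the finished image by the difference.

```cpp
#include <cstdio>
#include <cmath>

// Stand-in for the head tracker: the head sweeps sinusoidally +/-30 degrees.
double sampleHeadYawDegrees(double time_s) {
    return 30.0 * std::sin(time_s);
}

int main() {
    const double fovDegrees = 100.0;     // horizontal field of view (example)
    const int imageWidth = 1512;         // per-eye image width in pixels (example)

    double renderStart = 0.000;          // pose sampled when rendering began
    double justBeforeScanout = 0.011;    // ~11 ms later, the frame is finished

    double yawAtRender = sampleHeadYawDegrees(renderStart);
    double yawAtWarp   = sampleHeadYawDegrees(justBeforeScanout);

    // Convert the rotation that happened during rendering into a horizontal
    // pixel shift of the already-rendered image, instead of re-rendering it.
    double deltaYaw = yawAtWarp - yawAtRender;
    double pixelsPerDegree = imageWidth / fovDegrees;
    double shiftPixels = deltaYaw * pixelsPerDegree;

    printf("head turned %.2f deg while rendering -> warp image by %.1f px\n",
           deltaYaw, shiftPixels);
    return 0;
}
```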

[00:06:01.767] Kent Bye: And is something like direct to the Rift display, is that something that also had specific considerations for the GPU?

[00:06:08.790] Tony Tamasi: Yeah, so the direct-to-Rift display idea is a couple of things. One, it's a display interface that's kind of completely controlled by the Oculus SDK. The advantage being you can do some different things than the normal Windows display driver stack would let you do. And the other advantage is it doesn't kind of show up on the Windows desktop, because it's kind of weird for an end user to go out, buy an Oculus head-mounted display, plug it in, and all of a sudden Windows boots to their head mount, and, you know, looking at a Windows desktop in stereo is, you know, not all that awesome.

[00:06:37.127] Kent Bye: Right. And what is your first memory or recollection of VR starting to come into the picture here at NVIDIA?

[00:06:43.811] Tony Tamasi: Well, my first memories of VR go back a long time. I've been around a while. It kind of goes back to the early SGI days with Jaron Lanier and CAVEs and things like that. And then there was the Nintendo wave back in the, gosh, what, early 90s or late 80s, whatever it was. For me, probably the first inkling that I thought this was going to be big was when I saw the first dev kit, and I saw how they built it using components that were leveraging pieces from other industries, in particular the cell phone industry, so you could do it affordably. And they made huge strides towards reducing the latency, so you really got that sense of immersion. And then from an experience perspective, the Tilt Brush demo on the Vive really opened my eyes to something beyond gaming. I don't have too much artistic capability, but just the ability to paint in 3D and walk around it really opened my eyes to a whole new wave of experiences that might be possible. My son is a Minecraft fiend, and I'm sure when Minecraft for VR or HoloLens or something like that that lets you create in VR happens, I'll probably never see my son again.

[00:07:40.135] Kent Bye: Yeah, I think those creation tools are pretty incredible, and something like Minecraft is very low resolution and voxel-based. I've noticed personally that there's kind of an uncanny valley in virtual reality, where more stylized, artistic experiences tend, to me, to be a little bit more immersive. Because of our expectations, when we have something that looks photorealistic we kind of expect everything to be realistic, and when it's not, it tends to be less of an immersive experience. So with that, from NVIDIA's perspective, are you working with content creators? Is that something that comes up in discussions of this path towards having a photorealistic VR experience that may not be as good of an immersive experience as something that's less graphically intense but more immersive?

[00:08:25.411] Tony Tamasi: That's our challenge, quite honestly. I mean, if you talk to the creators, they want to do things that are stunning and photorealistic. And I think we've come a long, long way in traditional PC and console gaming. Today's games look amazing. They're not quite, you know, the real world yet, but they really look quite stunning. But delivering that in a VR experience is a whole other level of horsepower. You know, if you look at today's console or PC, they're primarily rendering on an HD display at 60 frames per second. But with VR, even with the current generation of head-mounted displays from Oculus and Vive, it takes about seven times as much horsepower to take the same content and deliver it in a head-mounted display, and that's a pretty big ask. So that's our job, to deliver on that ask. And so right now, people are having to make trade-offs that they might not really want to make. Do you really want to do the less realistic shooter when you really thought you could have done the realistic shooter? Do you want to do a flying game or a racing game and have to do Gouraud shading as opposed to, you know, physically based rendering? Well, maybe not, but maybe that's a compromise you have to make right now. And I think, frankly, that's OK, right? Because we're in this world of kind of exploration in VR. We don't know what the next big thing is going to be. There are a lot of experiences we don't know about yet. And I don't think rendering alone is the only barrier to VR taking off. I think we have to find some of those experiences, and just like every other kind of gaming medium, you'll find them and then they'll evolve. You know, the first shooters weren't even rendered in 3D, right? They were two-and-a-half-D style shooters, and now you have, you know, physically based shooters that are doing indirect lighting and global illumination and, you know, are close to what you can get out of film. So that'll come too, and that's just NVIDIA's job to make sure that it can.
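
For a rough sense of where a multiplier of that order comes from, here is the pixel-throughput arithmetic, with the assumptions stated in the comments: a 2160x1200, 90 Hz headset rendered roughly 1.4x larger per axis to survive the lens-distortion resample, compared against plain 1080p. The exact baseline matters, and none of this includes the headroom needed to never miss a refresh, so treat it as a sketch of the reasoning rather than an official figure.

```cpp
#include <cstdio>

// Rough pixel-throughput comparison behind the "several times as much
// horsepower" claim for VR. Assumptions: 2160x1200 headset panel, 90 Hz,
// ~1.4x supersampling per axis before the distortion pass. The oft-quoted
// ~7x works out against a 1080p30 baseline; against 1080p60 the raw ratio
// is closer to 4x, before any latency headroom is added.
int main() {
    double flat = 1920.0 * 1080 * 60;                 // pixels per second, 1080p60
    double vr   = (2160 * 1.4) * (1200 * 1.4) * 90;   // supersampled stereo @ 90 Hz

    printf("1080p60      : %6.0f Mpix/s\n", flat / 1e6);
    printf("VR (approx.) : %6.0f Mpix/s\n", vr / 1e6);
    printf("ratio        : %.1fx vs 1080p60, %.1fx vs 1080p30\n",
           vr / flat, vr / (flat / 2));
    return 0;
}
```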

[00:09:54.498] Kent Bye: And what is the experience that you're showing here today in terms of what's the content and how are you actually able to see the impact of this new technology that you're just introducing?

[00:10:02.799] Tony Tamasi: So we've worked with Epic, and we've integrated GameWorks VR, and in particular multi-resolution and VR SLI, into Unreal Engine 4. So what we are showing is their Reflection Subway demo, which is a demo that shows kind of all the great features of Unreal Engine 4. But it was targeted at a high-end PC and targeted at a non-stereo, non-head-mounted-display kind of experience. And so it's a performance challenge to bring that class of visual experience to a head-mounted display. So what we did by integrating VR SLI and multi-resolution is basically turbocharge the capability of the PC such that you can deliver that class of visual on a head-mounted display. In particular, the demo that we're showing runs the Subway without multi-resolution at about 60 frames per second, but with multi-resolution, visually imperceptible, you literally can't tell the difference, it runs at over 90 frames per second, which is kind of that threshold for goodness for the current generation of head-mounted displays.

[00:10:53.056] Kent Bye: Nice. And I guess one thing, I talked to Layla Mah of AMD, and she was talking about sort of the more advanced rendering stuff and eventually hitting a hard limit in terms of what you can do with an AMOLED display at high resolution, and having to look at something like virtual retina displays, something like Magic Leap's technology, where you could potentially use 16 GPUs at the same time and asynchronously shoot photons into your eyeballs at a much higher capacity than what you could do with an AMOLED screen. So, from NVIDIA's perspective, are you also looking at more advanced things like, you know, virtual retina displays and things beyond what the current generation of VR technology is?

[00:11:35.463] Tony Tamasi: Yeah, absolutely. In fact, we've got a research team that's quite large at NVIDIA, and they've been doing a bunch of research not just in rendering techniques but actually in fundamental display technology. Things like light fields and prisms and optics, and just trying to explore all the avenues of what it's going to take to kind of deliver on that vision of the future. The beautiful thing about almost all of those technologies is they all enable and effectively require enormous amounts of resolution. Some of it could be volumetric, some of it could be just spatial resolution, but all of that is a big challenge for the GPU industry because you've got to drive it now, and you've got to drive it at very high rates, and you're going to want to drive it with really realistic pixels. So that'll keep us busy for at least another decade or two.

[00:12:13.117] Kent Bye: And maybe you could comment on the significance of Vulkan, which was announced at GDC this year.

[00:12:18.001] Tony Tamasi: Yeah, so Vulkan is somewhat analogous to DX12 in that it's a low-level graphics API built in the Khronos standards body. The idea there is it puts a little bit more of, I'll call it, the control of the graphics processor into the hands of the game developer. So, if you're clever, you can get a little bit more control of the scheduling and of the optimization, and particularly for things like multi-GPU, it just gives the developer a little bit more low-level access. Now, the disadvantage is that they have a little bit more low-level access, so it's now their responsibility as well. So, some of the things that the driver used to take care of for them, they now have to take care of themselves. But the potential benefit could be things like less overhead on the CPU, so at the very high frame rates for VR, things like that might potentially be a benefit because you're not going to end up CPU-limited. Maybe better ability to schedule things like multiple GPUs, or maybe the ability to schedule things like asynchronous compute, so you can kind of schedule compute jobs while you're doing shadow mapping jobs, for example, kind of making fuller use of the GPU hardware that's available.
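
The explicit multi-GPU control Tony mentions eventually surfaced in Vulkan as device groups (core in Vulkan 1.1, after this interview). As a minimal sketch of what that low-level access looks like, the snippet below only enumerates the device groups the driver exposes; linking the GPUs in a group into one logical device and splitting per-eye work across them would build on top of this, and is not shown here.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Minimal Vulkan 1.1 instance: no layers, no extensions.
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;   // device groups are core in 1.1

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // Each device group is a set of physical GPUs that one logical device can
    // drive together -- the building block for explicit multi-GPU rendering,
    // e.g. assigning one GPU per eye.
    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);

    VkPhysicalDeviceGroupProperties proto{};
    proto.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    std::vector<VkPhysicalDeviceGroupProperties> groups(count, proto);
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (uint32_t i = 0; i < count; ++i)
        printf("device group %u: %u physical GPU(s)\n",
               i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```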

[00:13:15.180] Kent Bye: And I think, from what I've gathered from doing a lot of interviews, that in the 90s there was a lot of competitive mindset, whereas VR now seems to have a more cooperative mindset, in terms of people really wanting this to succeed, so let's try to work together. And yet there are still some competitive components there in terms of proprietary approaches, innovating, trying to push the edge rather than trying to get a consensus or agreement on specific technologies. And so maybe you could comment on that in terms of how you see that dynamic between having that competitive edge, but also that some of these techniques that you're innovating on maybe don't get adopted because there's no sort of standard. If it's at that level in the hardware, is this something that's going to be adopted widespread, given that maybe AMD's technology is not going to be able to support it?

[00:14:03.079] Tony Tamasi: Yeah, I mean, I think you're right in that everyone wants to see this happen. So I think everyone in the industry is working, maybe in a different spirit than they've worked together in other areas, to try to make it happen. So we're working with Oculus and HTC, I'm sure AMD is, I know Intel is. Ironically, we even talk to Sony, which seems kind of weird, but we even talk to Sony. There's a bunch of kind of collaborative work on the hardware side as well as on the software side. We're all working with all the major engine companies to try to make sure that they incorporate basically the best advancements that everyone can deliver to enable VR. I think on the secret sauce or the competitive thing, everyone's got a clever idea, and I think everyone's going to try to make sure that that benefits their customers as best they can. One of the kind of interesting twists that's happened is a lot of the major game engines have kind of changed their business model. If you look at Epic, it's open source. So now people can just integrate their idea right in the game engine, and there's none of this, like, super secret sauce. It's just there. And anyone who uses that engine, which is going to be a large number of people, just kind of gets it for free. They don't have to invent it themselves. And that allows, you know, the entire industry to kind of collaborate and advance things and move it forward without being dependent on a private branch or a, you know, a special implementation for one person.

[00:15:09.040] Kent Bye: Yeah, and that's something that Layla Mah actually mentioned, that AMD kind of open sources their architecture, in the way that they publish detailed specifications about it. Is that something that NVIDIA may consider, moving more towards that open approach of publishing all the details and specs of the architecture?

[00:15:24.115] Tony Tamasi: We'll see. I think we publish quite a bit of documentation. I mean, if we particularly look at the compute side of things, we publish a great deal of documentation. And whether we open source our architecture documentation, I think we'll see. In general, I think we provide a huge range of tools and APIs and material, including documentation, that lets developers learn and do what they need to do. Everyone will always say they want more, and we need to try to find that balance between making sure the developers have what they want and making sure that there's still a reason to keep buying NVIDIA GPUs and not someone else's.

[00:15:52.533] Kent Bye: Right. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:15:59.098] Tony Tamasi: Yeah, it's a good question. So, you know, is it the Snow Crash future? Is it the Matrix future? Is it the Holodeck? You know, it's hard to say. I think the thing that you can say for sure is it's a whole new class of experience. I think we've kind of iterated on the traditional 2D screen game idea for quite a while, and it's come a long, long way from, you know, Pong and Space Invaders on your Atari to, you know, what we have today in these, you know, multi-hundred-million-dollar-budget AAA franchises. But I think the experiences that VR can enable are going to go well beyond that. And whether it's the experience through a head-mounted display, or through holographic projection, or through retina scanning and retina optics, the experience itself is going to be something that's kind of much more involving, and maybe much more connective. And you can see it even with today's, I'll call it primitive, even though relative to 20 years ago they're not, but today's early generation head-mounted displays. You know, everyone knows that the Vive and the Oculus head-mounted displays are really good, but they're not nearly as good as we'd like them to be. And yet, when you get one of those experiences, it changes things. It opens your eyes to what it could be and kind of what that next wave could be. And I think that's why everyone's so excited. Awesome.

[00:17:04.106] Kent Bye: Well, thank you so much. Sure. Thank you. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash voices of VR.
