#244: Layla Mah on the Future of Advanced Rendering with AMD’s LiquidVR

Layla Mah is the lead architect of virtual reality and advanced rendering at AMD. She talks about AMD’s LiquidVR technology, built to help bring comfort, compatibility, and content to virtual reality. VR requires a lot of graphics processing resources, and Layla has been looking at different architectures, scaling strategies, and display technologies that can meet the growing graphics processing demands. AMD is not only making sure that VR can work out of the box today, but also continuing to innovate in order to meet the growing graphics demands of VR over the next 5-10 years. She talks about some of the GPU hardware innovations, multi-GPU strategies, overcoming the limits of LCD displays with virtual retinal displays and digital light field technologies, as well as how game engines will need to evolve in order to handle up to 16 GPUs.

LISTEN TO THE VOICES OF VR PODCAST

Layla Mah thinks a lot about the future of virtual reality and how to solve the exponentially increasing graphics processing demands of driving a 90 Hz display across two eyes, which amounts to 180 images per second at a resolution of 2160×1200. She says that the brute-force approaches taken today are not sustainable as displays move to 4K and 8K resolutions. She says the most important thing is to not drop frames, and so AMD is collaborating with content creators to debug their CPU and GPU pipelines in order to consistently hit the 90 Hz spec.
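To put those numbers in perspective, here is a quick back-of-the-envelope calculation (a sketch only; 2160×1200 is the combined two-eye resolution of the first-generation consumer headsets):

```python
# Back-of-the-envelope VR rendering budget, using the figures from the episode.

refresh_hz = 90
eyes = 2
width, height = 2160, 1200                       # combined resolution across both eyes

images_per_second = refresh_hz * eyes            # 180 eye-images per second
pixels_per_second = refresh_hz * width * height  # ~233 million shaded pixels per second
frame_budget_ms = 1000.0 / refresh_hz            # ~11.1 ms to deliver each frame

print(images_per_second, f"{pixels_per_second / 1e6:.0f}M px/s", f"{frame_budget_ms:.1f} ms")

# At 4K or 8K panels the pixel count grows several-fold again, which is why the
# brute-force approach stops scaling.
```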

Layla also points out that LCDs have evolved from the scan-line approach of CRT monitors, and that a lot of the cables and hardware have been architected around the assumption that a single frame containing all of the data will be updated at 90 frames per second. When looking to scale out to as many as 16 GPUs, there are diminishing returns and inefficiencies in trying to break up an individual scene into different sections. Not only may an object span across 3-4 different sections, but there’s also overhead in recombining the results into a coherent final image.
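To make the slicing overhead concrete, here is a toy simulation (illustrative only, not AMD’s Affinity Multi-GPU API): split the framebuffer into one vertical strip per GPU and count how many randomly sized triangles straddle a strip boundary, since those must be submitted to more than one GPU.

```python
# Toy model of screen-slice multi-GPU rendering and its duplication overhead.
# All numbers and the scene itself are made up for illustration.

import random

WIDTH, NUM_GPUS = 2160, 4          # framebuffer width, one vertical strip per GPU
strip_w = WIDTH / NUM_GPUS

def strips_covered(x_min, x_max):
    """Indices of the strips (GPUs) a triangle's screen-space x-extent touches."""
    first = int(x_min // strip_w)
    last = int(min(x_max, WIDTH - 1) // strip_w)
    return range(first, last + 1)

random.seed(0)
duplicated = 0
for _ in range(10_000):                    # fake scene: random triangle extents
    x = random.uniform(0, WIDTH - 50)
    tri_width = random.uniform(1, 300)     # wider triangles straddle more borders
    if len(list(strips_covered(x, x + tri_width))) > 1:
        duplicated += 1

print(f"{duplicated / 10_000:.0%} of triangles span more than one GPU's strip")
# The thinner the strips (the more GPUs), the larger this duplicated work becomes,
# which is one source of the diminishing returns described above.
```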

Layla says that photons are asynchronously streaming into our retinas, and so she’s investigating digital light fields as a solution that’s potentially more sustainable. This could mean that Magic Leap’s approach with a virtual retinal display may be better suited to meet the future graphics processing demands.

In traditional games, adding multiple GPUs could be handled by alternating frames between two GPUs. But motion-to-photon latency is more important in VR, and this alternate-frame approach is not viable there. Splitting a scene into multiple slices also introduces inefficiencies. There will need to be both hardware and software changes at an architectural level in order to scale up to future rendering needs.

However, there’s currently a chicken-and-egg problem that Layla is facing. The hardware companies are waiting on the game engine companies to support a more scalable multi-GPU processing architecture, but the software companies are also waiting on hardware to become available that could actually support it. So Layla is stuck in a situation of trying to design for a future that doesn’t exist yet, and she recognizes that the software and hardware will need to reach a convergence point to provide a viable solution.

There may need to be a leap on either the hardware or the software side first, while also moving away from today’s largely brute-force implementations and taking advantage of both perceptual hacks and the gaming industry’s best practices for creating beautiful graphics.

Layla talks about how the new Vulkan API has been derived from and built upon components of AMD’s Mantle. She also laments how there hasn’t traditionally been a lot of collaboration between AMD and Nvidia, and that some of the innovative features each company creates don’t typically get ubiquitous adoption until they become standard on both AMD and Nvidia hardware. If there were more collaboration earlier in the process, then perhaps they’d be able to reach that place of ubiquitous adoption much faster. But she also recognizes that this competition is what has continued to put pressure on each company to keep innovating.

Finally, Layla believes that VR has a lot of potential to change all aspects of our civilization, ranging from applications in education, medicine, and social connection to increasing empathy. There are still a lot of challenging problems ahead to meet the graphics demands of VR, and she’s excited to be working on the newest features of AMD’s LiquidVR to help meet those insatiable demands.

Become a Patron! Support The Voices of VR Podcast Patreon

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast.

[00:00:12.268] Layla Mah: I'm Layla Mah, Lead Architect of VR and Advanced Rendering at AMD. And so, in the past, we created LiquidVR about a year ago and have been basically pushing that out to production. We're actually getting ready to have Asynchronous Time Warp eventually come out with an Oculus SDK update. Working with Valve right now on direct-to-display and that kind of stuff. And Oculus actually pushed that out publicly, I think it was like a month and a half, two months ago. So that's been kind of the production stuff we've been doing. And of course, we're working on future stuff as well. So LiquidVR 2.0 kind of stuff. I can kind of hint that we're looking at things like, you know, audio and visual stuff, you know, cameras and lots of other pieces of the VR puzzle that we haven't addressed so far.

[00:00:55.494] Kent Bye: Great. So how do you describe what LiquidVR is doing and, you know, what the problem is that it's trying to solve?

[00:01:02.123] Layla Mah: It's pretty simple. I mean, basically, when we came up with it, comfort, compatibility, and content were the three things that we came up with to begin with. When you plugged in a VR headset a year ago, you had no real guarantee it wasn't going to crash your system, or a lot of times it would cause it to not start on reboot, or the drivers wouldn't work properly. You know, the experience wasn't always guaranteed to be good. So that's the compatibility piece. We wanted to solve that by basically saying, OK, when you plug in a VR headset, you're not going to see a desktop. Why do I want to see a 2D Windows desktop on my face? It doesn't work. So we took that part out with direct-to-display. We also added ways to do front buffer rendering to reduce latency, asynchronous compute to do asynchronous time warp to, again, reduce perceived motion-to-photon latency. That starts to get into the comfort part. So we want people to not puke on their keyboards. And that's some of the things that we did to address that, as well as Latest Data Latch, which was another piece of reducing latency to improve comfort. And then the last piece was content. We wanted people to be able to create better content, not have to wait around for a new GPU to be able to get that next edge on the content. So by adding Affinity Multi-GPU, you could have two GPUs or four GPUs. And for example, Valve actually has a version of their Aperture demo that supports four GPUs. So that enables, you know, content developers to sort of push the envelope of what's possible and not have to wait another year or two for us and NVIDIA to release a new card. Particularly for development, if you're working on, you know, the next-gen thing and you want it to be, like, you know, a top-tier demo a year from now, two years from now, how do you develop that on existing hardware if you want to max out two-years-from-now hardware? So multi-GPU lets you do that.

[00:02:46.826] Kent Bye: And so when you're doing benchmarking for, you know, even internally, you know, what are the type of specifications that you look at in terms of knowing that this card is going to be able to drive a good VR experience?

[00:02:59.399] Layla Mah: It's really complex right now, because things change all the time, and it's not so much like one card is good for this level of VR and one CPU is good for this level of VR, because it depends on what individual developers choose to do. You could have something that works great one day, and then some artist comes along and puts in more fish, and suddenly the simulation cost goes higher and you're dropping frames. Our main goal is really just don't drop any frames, right? It's not frames per second anymore. You know, 200 frames per second doesn't give you a better VR experience than 90. You really just want to have 90 and always have 90 and always deliver the exact frame rate you need. And in fact, if you could run 200 frames per second, what you'd want to do is start your frame later so that you're still doing 90 frames per second, but you're actually lowering your motion-to-photon latency because you've grabbed your input and created your image out of it in a shorter period of time. So what we look for is ways to just maintain comfort, maintain that bar of minimum performance, which is really just on or off. It's like, are you dropping frames or not? And then from there, increase content quality with the rest. That's what we're looking at in terms of performance.
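As a rough illustration of the "start your frame later" idea, here is a minimal sketch with hypothetical numbers (the 5 ms render cost and 1 ms safety margin are assumptions, not AMD figures):

```python
# Sketch: delaying the start of a frame to reduce motion-to-photon latency while
# still hitting every 90 Hz vsync. All timings below are assumed for illustration.

frame_budget_ms = 1000.0 / 90        # ~11.1 ms between vsyncs
render_time_ms = 5.0                 # measured cost of building this frame (assumed)
safety_margin_ms = 1.0               # slack so a spike doesn't drop the frame

# Naive approach: sample the head pose at the start of the vsync interval,
# so it is roughly a full frame stale by the time the display scans out.
naive_latency_ms = frame_budget_ms

# Late-start approach: wait, then sample the pose just before rendering begins.
start_delay_ms = max(0.0, frame_budget_ms - render_time_ms - safety_margin_ms)
late_latency_ms = frame_budget_ms - start_delay_ms

print(f"delay frame start by {start_delay_ms:.1f} ms -> "
      f"pose latency {late_latency_ms:.1f} ms instead of {naive_latency_ms:.1f} ms")
```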

[00:04:09.780] Kent Bye: And what are the biggest trade-offs that you have to make in terms of dialing something back and then in order to increase something else? When you're looking at it, engineering is all about having constraints and you can't have everything. So what are the biggest kind of trade-offs that you have to make?

[00:04:24.365] Layla Mah: So I mean, mostly that's not us doing it. That's the content developer's choice. But yeah, I mean, we look at things like, for example, foveated rendering is something we're looking at. So the current lenses on these VR headsets are very bad around the periphery. You know, for every, like, 4, 8, 10, 12 pixels, depending on the optics, you might actually see just a blob that's a combination of all of them, and it's kind of very low quality. So why do you want to render all those pixels at the same quality as what you're seeing in the fovea, where actually the lenses are much clearer? So that's one thing that we can do that's not necessarily the content developers, but it's on us to release mechanisms by which they can easily say, OK, we're going to optimize the amount of rendering and amount of computation we do to create basically an image that looks the same after the optics get in the way, but on the input side we're not wasting so much time in places you won't see it. So we are really working a lot on those areas. And then we advise content developers a lot. So we work with major game studios and content studios. And we do have labs. And we look at the content they're creating. And sometimes we'll say, well, look, in this place, you had this huge explosion. And you started juddering. You dropped some frames. And we try to help them figure out what it was in the GPU pipeline or the CPU pipeline that actually caused that problem and how they can address it.
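As a rough sketch of why foveated rendering saves work, compare shaded-pixel counts for a full-resolution image versus one with a full-resolution central region and a half-resolution periphery. The resolutions and fractions below are assumptions for illustration, not AMD's actual implementation:

```python
# Illustrative pixel-count comparison for foveated rendering (assumed numbers).

width, height = 1080, 1200        # hypothetical per-eye resolution
fovea_fraction = 0.4              # central 40% of each axis kept at full resolution
periphery_scale = 0.5             # periphery rendered at half resolution per axis

full_pixels = width * height
fovea_pixels = (width * fovea_fraction) * (height * fovea_fraction)
periphery_pixels = (full_pixels - fovea_pixels) * periphery_scale ** 2

foveated_total = fovea_pixels + periphery_pixels
print(f"shaded pixels: {foveated_total / full_pixels:.0%} of the full-resolution cost")
# Roughly 37% here -- and after the lens blurs the periphery, the result can look
# nearly the same to the viewer.
```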

[00:05:39.184] Kent Bye: And it seems like the GPUs are really kind of the heart of virtual reality experiences. And so for you, when did you first get into this VR world with this kind of latest revolution of consumer VR?

[00:05:52.233] Layla Mah: Well, I mean, you know, it depends what you look at. I've been doing the rendering stuff that applies to VR since, like, 2004. You know, nobody really knew what VR was yet, and I wasn't necessarily focused on VR, but I was solving the same problems that VR needs solved. And, you know, about four years ago, you know, really kicked some of that into high gear, and, you know, then the consumer headsets started to be a thing when Oculus was, you know, purchased by Facebook. That was when suddenly everybody said, oh gosh, this actually is happening this time. And so from there, that's where actually AMD as a company started to, I think, see the value a lot more and started to see, okay, this isn't just something that, you know, somebody's working on in a garage, but this is something that's going to transform the world and we need to get ahead of it. And, you know, from there eventually we created basically a VR team and I became the lead architect of that team. You know, my boss came to me and said, I want to move fast on this. And, you know, I think you're the right person. So basically from there it became more official. But really, you know, we've been working on the same kind of ideas for many years and it was just a matter of now there's the budget to formalize them into products like LiquidVR.

[00:06:57.422] Kent Bye: And what was your... you said you were working back in like 2004 on the same... like what were those things?

[00:07:03.894] Layla Mah: I've been doing graphics research, you know, I started in 2004 in graphics research. Basically, one of the problems that VR has, that it still has, is that we are really brute-forcing a lot of things. So something I've been working on for many years is the alternative approach. How do we not actually pretend that every frame is almost, you know, a separate thing from the one before it, but how do we find coherence between them, and how do we optimize rendering and stuff like that? So my work since before I was even at AMD has been along that thread of efficiency, and all of that applies really naturally to VR because VR ups the number of frames that we create per second. HTC Vive and CV1 are going to be 90 frames per second, but that's two eyes, so it's 180 images per second, and that's at around 2160 by 1200 resolution. So that's quite a lot of data that you're pushing there, a lot of pixels you're pushing. When you go to 4K or 8K, the numbers just explode. And so eventually, the way that we're doing rendering now isn't sustainable. And my expertise has actually been in solving the problem of how do we transition to a more sustainable way of doing it.

[00:08:12.181] Kent Bye: It sounds like quite a paradigm shift. So what does that look like?

[00:08:17.412] Layla Mah: I don't know how much I can actually say at this point, but I think it looks like, for example, you look at CRTs. CRTs are why we have scan lines now. CRTs actually had a beam, and it went across the screen and back over, and it actually scanned out pixels. And LCDs have kind of evolved from that display technology, and so they continue to do that scan raster pattern. In reality, a lot of that has been engineered into the cable specifications, HDMI and DisplayPort. There is a lot of technology that relies on that now, so it's not just that you could do away with it. But over time there is the question of why we really still do it this way. So I am looking at things like that, for example, at what the right ways to do this are. The real world actually just has a bunch of photons bouncing around, and eventually some of them hit your eye, and they don't all hit your eye at one point in time like a frame. They hit your eye just whenever they come in. So, for example, why are we not looking to build displays that function more along those lines? I can't exactly say what I'm doing, but those are kind of the directions that you might look in.

[00:09:22.180] Kent Bye: Well, I just did an interview with Tom Furness talking about virtual retina displays and digital light field technologies.

[00:09:27.662] Layla Mah: Light field is one of the main things that I think over the next 10 years is going to transform the way that we do rendering in VR. That's definitely something I'm working on. It seems more natural.

[00:09:38.865] Kent Bye: Have you got to try out the Magic Leap technology then?

[00:09:46.757] Layla Mah: Actually, I haven't. I was supposed to go down there a few weeks ago, but I've just been traveling so much lately that I haven't had the chance to try it. But I can imagine it's pretty cool. From what I've read of the patents and stuff, I kind of have an idea of how it works. I think their biggest problem at the moment is going to be the field of view, because waveguides are hard. Also, they probably have to do actually a lot more rendering than traditional approaches do. And so, they're facing an even harder problem in some sense than Oculus and Valve are facing in terms of the brute force overhead.

[00:10:18.391] Kent Bye: Now, when it comes to adding multiple GPUs, what are the biggest problems when you're going from just one GPU to adding multiple ones?

[00:10:27.997] Layla Mah: It's a challenge. If you're talking about traditional rendering, what we did for games on a 2D screen was this kind of brute force thing where we said one GPU will render the current frame and the next GPU will render the frame after that. And so there's not a lot of dependency traditionally between those frames. That's actually changed recently as temporal AA and other effects have come in that actually link the multiple frames together. But that was sort of an easy brute force way to do it, because each two frames are probably very similar, so the two GPUs are doing roughly a similar workload, and they don't actually have to share a lot of data between them. When you get to VR, that paradigm doesn't work, because latency is one of the most important things. If you have high motion-to-photon latency, your brain will perceive that as, you know, you might be poisoned or something, so you get sick. So in VR, we do something with Affinity Multi-GPU. One of the ways we do it is slice up the screen into subsections, and each GPU handles a piece of the screen. Our API is quite easy. It's intuitive to do that. But still, it's extra work for the developer, something they didn't have to do before. And the more you slice and dice, the more inefficiency there is in having to send all those commands and all the data through the driver layers and duplicate things. For example, with geometry across multiple places, there's potentially an overlap, and geometry might actually hit three or four different slices if those slices get small enough, right? You might have a triangle that spans multiple slices, and that triangle then has to be replicated on multiple GPUs, and all the work has to be done multiple times. So there's diminishing returns to that screen-based slicing and dicing when you're talking about multiple physical GPUs. But something you could imagine is if you rather than had multiple GPUs in multiple sockets, if eventually you got to a single chip that looked a lot more like lots of tiny little GPUs on that chip, and the interconnect between them was really fast, then some of this overhead would naturally kind of go away. And you could move things into where they need to be, and the data movement would be low, and the power could be better. So the paradigm itself is not an untenable one, but it is untenable when you're across a PCI Express bus with multiple discrete GPUs. If you try to go to 16 of them, the overheads of transferring data around are going to usurp your actual benefits. So architectural changes will really be needed to continue down that path. But then there are also other ways to do multi-GPU. So for example, you could have one GPU that's just rendering the frames, creating the images that the eyes see, or two of them. And you could have another GPU doing physics, another GPU doing audio, another GPU bouncing light around the scene like global illumination, or four of them bouncing light around the scene. And those could actually be doing ray tracing while you're doing rasterization to your final image. So there are ways to scale. But again, these actually require dramatic changes to the way engines are built. And currently, most engines aren't prepared to really go this broad with, you know...

[00:13:15.087] Kent Bye: You mean like the Unity or Unreal Engine? Is that what you mean?

[00:13:18.251] Layla Mah: Unity and Unreal at the moment are not at all architected to just take advantage of all these different GPUs and to scale. If you had 16 GPUs, they just wouldn't know what to do with them. They don't have a way to utilize them. And if they did, you know, the first way they might do it is that naive sort of dicing-up-the-scene way, which becomes very inefficient. So if you want to then start doing what you could do and dicing up, you know, different pieces of the world, like the audio to another GPU, that takes additional work and it takes new software paradigms that they have not yet gotten to. I think that's, you know, what we're going to see happening over the next five to ten years, these kinds of advances, so that we can actually scale out. Because another problem in manufacturing is actually making really big GPU cores, you know, 600 square millimeter GPUs. The yields are lower, you know, you have errors that actually cause, you know, some of the GPU to not be viable. If you could make lots of smaller chips and then put them on a substrate together, for example, you know, you could get higher yields. It wouldn't cost you as much to build the chip. You could build lots of small ones and put them together. And then if the software paradigms actually get to the point of allowing you to distribute your workloads like this, that could be a great way forward in terms of actually getting more computation for less money and less power. So how would you sort of combine the output of all these 16 GPUs? There seems to be, like, we would have to fuse all that data together somehow. That's where the inefficiency comes in as well: you know, if all 16 GPUs are dicing up a picture, now you've got to transfer the results from each one back to one unifying image, unless you have 16 separate outputs, which no current headset has, right? They're all one or two cables. So again, what you would do here: if you had audio processing on one GPU, that actually could have an output to an audio jack, right? So you wouldn't need to recombine it with the visuals. If you had physics on one GPU, physics might be tolerant to more latency than the rendering is, so you could actually potentially have the physics data half a frame behind or something, and then it transfers some data back to the main GPU, like the quaternions of "here's the new location and rotation of this object," or stuff like that. So the data you'd have to send wouldn't be as much as the data you needed for computation. And there are ways to make that kind of a system work. But it is a lot of elaborate software engineering. So it's going to take time to get there.
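A minimal sketch of the heterogeneous split described above, with hypothetical numbers, showing why the pose data a dedicated physics GPU sends back is tiny compared to the work it offloads:

```python
# Sketch: estimated traffic from a physics GPU back to the render GPU if it only
# sends each object's updated pose (position + rotation quaternion). All numbers
# are assumptions for illustration.

num_objects = 5_000                 # simulated rigid bodies (assumed)
bytes_per_float = 4
pose_floats = 3 + 4                 # position xyz + quaternion xyzw
updates_per_second = 90             # one pose update per rendered frame

bytes_per_second = num_objects * pose_floats * bytes_per_float * updates_per_second
print(f"pose traffic back to the render GPU: {bytes_per_second / 1e6:.1f} MB/s")
# ~12.6 MB/s -- negligible next to a PCIe link, so the hard part of the split is the
# engine work of tolerating an extra half frame of physics latency, not the bandwidth.
```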

[00:15:37.042] Kent Bye: Yeah, I'm really getting how there's so many bottlenecks of like, you're really kind of ahead of the, you're really the heart of VR, but also ahead of where a lot of the technology is really at to be able to even use it. So it seems like an interesting design problem to be designing for the future that doesn't exist yet.

[00:15:53.188] Layla Mah: Yeah, I mean, you're right. I mean, who has 16 GPUs in a computer? Maybe people doing supercomputing work, but not most consumers. So there's a point of convergence. And if we hit it right, then the software could kind of be developed in parallel with the hardware, and they could come to market at the same time. That's not how things have traditionally happened, except maybe in the console world. So it's really interesting to watch that and say, hmm, well, what's the chicken and what's the egg? Normally, a hardware company won't build the hardware until the software is ready to kind of support that hardware, and vice versa. The software companies don't want to change their engines for something, you know, like 16 GPUs, or something that doesn't exist yet in commodity, you know, out in the consumer marketplace. So we know where we want to get to, but actually getting over the chicken-and-egg problem is hard. And it might take some people taking leaps to really get us there quicker.

[00:16:45.352] Kent Bye: Yeah, and it seems like, I mean, I don't know, potentially if digital light fields could, you know, since it's shooting photons, it's less, more replicating how light is naturally hitting our eyes. And so you'd have potentially the possibility, I would think, to maybe operate in parallel a little bit more than having to fuse the data back into a single unified sort of display.

[00:17:03.459] Layla Mah: Yeah, light field is interesting, and, you know, like you talk about path tracing or ray tracing. I mean, rendering is already a very parallel, embarrassingly parallel problem, and that kind of stuff still is. There are still issues to writing a really good path tracer that's distributed, but people have done it. I mean, there are companies already which have produced path tracers that scale out to thousands of GPUs in a cluster. One thing about that is it's still a very brute force approach, typically. So, you know, something that I think we need to see over the coming years is getting away from the brute force of just sending, you know, random rays everywhere like the world does, because that is ground truth. That's what actually happens. But if you want to do that, you're probably like 5 million X performance away from where we want to be if we want to have a mobile device having this level of quality eventually. You're probably more like 5,000 X away from that goal if you take some of the hacks the game industry has always done with graphics, you know, where we say, yes, there's the mathematically correct version, but then we just want to have the one that looks really good. It might not be mathematically true, but most people won't be able to tell the difference. If we can get to that, where we're having a similar quality of, it's not correct and it's not perfect, but we're taking advantage of a lot of those shortcuts and those ways that we can exploit how human perception is not perfect, then I think we're going to get somewhere much sooner. In our lifetimes, we'll be able to see something that's photorealistic, at least, again, to the limits of our perception. It won't be right, it won't be real, but it might fool us, and I'd like that to be the path that we go down.

[00:18:37.669] Kent Bye: Yeah, at GDC this year, AMD was in the talks about Vulkan. Talk a bit about the importance of Vulkan in terms of kind of creating these open standards within the graphics industry.

[00:18:48.619] Layla Mah: Yeah, I think that Vulkan is amazingly important. And of course, that came out of Mantle, kind of like LiquidVR. But LiquidVR is actually not something that AMD wants to keep proprietary. We don't want to have a proprietary API for VR. We just wanted to see the industry get where it needed to be. And so LiquidVR was, in a way, like Mantle. It was like, let's solve these problems. We see the problems. We know how to solve them. Let's solve them. We can solve them very quickly. And then the industry will pick up those solutions. Microsoft now, in their next release, is going to have an actual first-class-citizen version of direct-to-display. So Windows itself will get rid of the desktop. It won't have a desktop on an HMD. And that's a great thing to see. We don't want to be actually the ones that have to do that forever. We want to see it happen in the OS where it should. You know, it's really exciting to see Vulkan come out of the ashes of Mantle, so to speak. I mean, Mantle is still something we use in research at AMD. It's not dead. But as a consumer product, Vulkan is what we see as the way forward because it's an open standard. You know, it's part of Khronos. It's developed, you know, it was started from Mantle, but then the whole industry had their insight and their additions and modifications to it so that it's something that everybody sort of agreed upon as the way forward. And at the same time, you know, it works across all different platforms, smartphones and Linux and, you know, Mac OS. So it's, I think, a great thing that we've sort of been able to at least take that next step forward in graphics API design. And it only solves, you know, a few problems, a few major problems that there were with the old APIs. So I think Vulkan is not the end either. There will be a Vulkan 2 or something like that after that will address even further limitations or inefficiencies in the APIs. But it's great to see us getting there as an industry.

[00:20:29.331] Kent Bye: Yeah, that's the thing that's striking to me: in order for VR to succeed, there probably needs to be a little bit more collaboration than we may have seen before. Like, perhaps in the 90s it was a little bit more competitive in different ways. And so, from your perspective, what have you been surprised about in terms of the types of collaborations that AMD has had with, say, an NVIDIA in terms of making VR successful?

[00:20:53.507] Layla Mah: I would say unfortunately I don't think there's been any collaboration. And I think you're right, what's best for the industry would be to see us both aligning more. For example, a lot of times AMD will put a new graphics feature in our hardware. And then NVIDIA will put a different version in their hardware. And we're kind of generally trying to solve the same problems, but oftentimes we end up with different implementations of those solutions. And sometimes they're so close that they would almost work together, but they're just slightly different. And so in the end, people actually writing game engines and such are left with a problem. Do I implement AMD's version? Do I implement NVIDIA's version? Do I implement both? Or do I just forget about this until later when it actually becomes a standard feature that is the same on both? And oftentimes, that's what happens. Oftentimes, until a feature is standard, it won't actually get ubiquitous adoption. And so it would be, in some sense, wonderful if we could actually take the hardware engineering architecture design process and get consensus between the two companies before we build some of these features. Unfortunately, the competition between the two has been largely based on, well, how are we different and how are we exclusively better, rather than, you know, trying to have a unified thing that we both just try to optimize for. And that's sort of a business problem that I don't know that I can fix, but I think we probably all lament the situation. In some sense, our competition drives us both to be better, and of course, I don't think we'd be where we are today if only one of the two companies existed, because there wouldn't have been that push to keep getting better. But at the same time, there is so much that we could do if we actually did in some senses collaborate as well as compete. It would be a great thing to see. I mean, AMD for our part is trying to do this by opening up a lot of our stuff. We put out all of our ISA documents about our GPU instruction set. We're putting most of our GPU documents, the actual architecture documents, out publicly so that developers can read them and understand them. The PlayStation, the Xbox, the Nintendo console are all based on our architecture. And so there's a lot of opportunity there for people to do open things and really innovate on this architecture and learn about architecture and contribute back to us what they would like to see in a future version. Because, hey, they know how it works. And they're like, well, if it worked just this little bit differently, this would be better. And they'll tell us something like that. And oftentimes we'll do what we can to make that happen. Unfortunately, on the NVIDIA side, we don't get to see the documents and developers don't get to see the documents, so it's hard. If they would go open as well, I mean, I think that could be an amazing thing for the industry. You know, x86, for example, is an open spec. It's got a license, but, you know, everybody can implement it in terms of knowing how it works. And I think in some sense that's done a lot for CPUs, because, you know, an AMD or an Intel, you plug them in, they just run the same code. There's no driver for a CPU. It would be nice to see drivers for GPUs eventually become less of a bottleneck.

[00:23:56.757] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:24:02.179] Layla Mah: Oh my gosh, this is, I mean, everything. I can't think of an industry or a piece of human life that VR doesn't have the potential to touch. Unfortunately, it will also, you know, it will touch humans in bad ways, as most technologies do. But I'm really, really excited about all the potential good ways that VR can change human civilization. For example, education. Can you imagine being able to, let's pretend you want to know how a jet engine works. You could read a book, or you could now go into VR and actually start taking the engine apart and looking at the internals, seeing how each piece works. And then sort of jumping from portal to portal, as you do with hyperlinks on the internet, to, you know, all the different pieces of information related to that. There's, I mean, I could go on for hours literally, and in every space, you know, medical and social and all of these different things. Increasing empathy, you know, bringing people who are very distant, you know, live across the world from each other, you know, closer. Like what Chris Milk did with the whole Syrian refugee camp thing. You know, he was able to actually bring people who would never probably personally be able to set foot in that environment. It's not the same thing, but they got to be closer to that experience. And that gave them the ability to maybe empathize with those people on a level that they couldn't really attain before. If we can do things like that, imagine what it could do for preventing wars. I mean, the potential is really limitless. And I'm super excited about it.

[00:25:30.558] Kent Bye: Great. Well, thank you.

[00:25:31.399] Layla Mah: Thank you. Thank you very much.

[00:25:33.413] Kent Bye: And thank you for listening! If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash voicesofvr.
