WebGPU shipped in Chrome 113, bringing high-performance 3D graphics & parallel compute capabilities to the web. I was able to chat with Google Chrome software engineer Brandon Jones, who is a W3C specification editor for both WebGPU and the WebXR Device API. We talk about the history of WebGPU, some of his speculation as to how Apple may be actively working on support for both WebGPU and WebXR (spec editor Ada Rose Cannon now works at Apple), the future of WebXR, the new WebGPU Shading Language (WGSL), nascent ecosystem support for WebGPU from Babylon.js, Three.js, and PlayCanvas, and some of the AI and machine learning capabilities that will become available on the web, which Jones refers to as the “compatibility layer for the world’s computing devices.” It’ll be open standards like WebGPU, WebXR, glTF, and WebAssembly that start to define what an open and interoperable metaverse might look like, and WebGPU will start to close the gap in bringing the web closer to native performance. Though Jones believes the web will always lag a bit behind, trading off peak performance for more interoperability and cross-compatibility on a broader spectrum of devices.
Here are some links with more information on WebGPU:
- Chrome Ships WebGPU
- WebGPU Samples
- Andy McClure’s Cohost post on WebGPU history: “I want to talk about WebGPU”
- Compute.toys: WebGPU and WGSL examples in the vein of Shadertoy
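For readers who want to try WebGPU themselves, here is a minimal availability-check-and-setup sketch. It uses only the standard `navigator.gpu` entry point; the `initWebGPU` function name is just for illustration, and real code would go on to configure a `<canvas>` context with the returned device.

```javascript
// Minimal WebGPU feature detection and device setup (sketch).
// navigator.gpu only exists in WebGPU-enabled browsers (e.g. Chrome 113+),
// so this resolves to null anywhere WebGPU isn't available.
async function initWebGPU() {
  if (typeof navigator === 'undefined' || !('gpu' in navigator)) {
    return null; // WebGPU not supported in this environment
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return null; // no suitable GPU adapter found
  }
  // The device is the main interface for creating buffers, pipelines, etc.
  return adapter.requestDevice();
}
```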
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing, and you can support the podcast at patreon.com slash voicesofvr. So in today's episode, I'm going to be talking to Brandon Jones, who's one of the spec editors of WebGPU, which is a new W3C graphics-level API that is now shipping in Chrome. So on April 6, 2023, there was an announcement from Google that says Chrome ships WebGPU. So after years of development, the Chrome team ships WebGPU, which allows high-performance 3D graphics and data-parallel computation on the web. So on May 10, 2023, at Google I/O, during the main keynote, Matt Waddell was up on stage and said that WebGPU makes the web AI-ready. So that's one framing, which is more along the side of the data-parallel computation aspects. But there's a whole other range of high-performance 3D graphics applications that started going all the way back to OpenGL, from which then came WebGL. There's WebGL 1 and WebGL 2. So all the ways you see graphics on the web have come from this 30-year-old spec of OpenGL, and there's been a lot of other developments that have happened for how to use parallel computing that have been catalyzed both by the 3D graphics industry, but also by this huge innovation when it comes to AI, machine learning, and using these parallel processing capabilities of GPUs to be able to do all sorts of really amazing artificial intelligence and machine learning applications. So having this be embedded into a browser means that there's this compatibility layer across all the world's computing devices that can use the web to be able to take advantage of whatever GPU might be available. So I had a chance to sit down with Brandon Jones, who's a software engineer at Google, who is a contributor to multiple browser standards that have to deal with 3D or immersive computing.
And I wanted to get a sense of both the history of this specification and why it came about, but also what's happening with Apple and can we expect Apple to be shipping this? And from all indications from what I hear from Brandon, Apple's actively working on this. He didn't have any timelines or anything, but we have on June 5th, 2023, the WWDC, which is Apple's developer conference. And by all accounts, there are many different rumors suggesting that Apple is going to finally be announcing their virtual and augmented reality, mixed reality device that they're going to be showing off for the very first time at this WWDC. And hopefully on the back end, that means that the folks that are working on the Safari team and WebKit have been working on things like WebGPU and potentially even things like WebXR, which we talk about in this conversation as well. So we're covering all that and more on today's episode of the Voices of VR Podcast. So this interview with Brandon happened on Thursday, May 11th, 2023. So with that, let's go ahead and dive right in.
[00:02:53.293] Brandon Jones: My name is Brandon Jones, and I'm a software engineer at Google. And in terms of spatial computing, I have contributed to multiple browser standards over the years that all deal with 3D or immersive computing in some way, shape, or form. That includes WebGL 1 and 2, WebXR, of which I'm a spec editor, and most recently, WebGPU, of which I'm also a spec editor.
[00:03:18.951] Kent Bye: Great. Maybe you could give a bit more context as to your background and your journey into working on all these specifications.
[00:03:26.366] Brandon Jones: Well, I stumbled into it entirely by accident. I've always had an interest in video games and 3D software and, you know, rendering of all sorts, whether it be the thing that your console is doing at home or lots of imagination sparked by seeing the dinosaurs from Jurassic Park come to life on screen. There's a period of my life where I'm like, ah, computers can put anything on the screen, it's amazing, not knowing how carefully everything had to be crafted to make those kinds of effects work at the time. But it's just kind of been a lifelong passion. And so I've always tinkered with it, even just as a hobby, finding ways to make use of it. I did a couple of software renderers back in the day and then moved on to OpenGL and DirectX, and then professionally was doing a lot of web development. I found that that was something I enjoyed. And then when WebGL came along, I discovered, hey, there's this environment in which I can put my two favorite computing topics together. And so I started doing a lot of very early experimentation with that. I was one of the first people to really be diving into it and working with it, not affiliated with any browsers at the time; it just sounded fun. And from there I built some utilities to go along with it. Probably the best known is a matrix library called glMatrix, which gets used by an awful lot of web development in 3D space. And I did some fairly well-known demos at the time. I did a renderer for Quake 3 levels and stuff like that, all in WebGL. And those little pieces of demos and libraries and software that I put online, all open source, were what first got me my job in the Silicon Valley tech industry. I was hired by Motorola, actually, when Motorola was acquired by Google. I had people who were aware of me on the Chrome team, who liked the work that I was doing, who were very eager to get me to come over and work with them.
And so I found myself, by virtue of just sharing the hobby work that I was doing with the rest of the world, kind of stumbling into what I consider to be my dream career, which is helping make those APIs, those tools, those specifications available to everyone.
[00:05:47.042] Kent Bye: Yeah, well, I know that through the years I had a chance to talk to Neil Trevett a number of different times. And I think it was at GDC 2015 where I saw him give a presentation where he was talking about how, with each of these different standards that are developed, there's usually a proprietary competitor. So you can look at DirectX from Microsoft, and then you have an open standard like OpenGL and then WebGL, and then the Vulkan API, and then WebGL 2 and then WebGPU. So it seems like there's this lineage of different graphics APIs that have been developed going all the way back to June of 1992 with OpenGL. WebGL was launched in March of 2011. You have the Vulkan API, which was launched in February of 2016. WebGL 2 was in January of 2017. And then WebGPU has been worked on for a number of years, and just this past April 6th of 2023, it started to be shipped within Chrome. And so let me get a little bit more of your perspective on this history of these different graphics APIs and how it went from OpenGL to WebGL. Then there's Vulkan that came up. And then what was the catalyst that saw that there was a need for WebGPU after WebGL 2 had launched in 2017? Oh, goodness.
[00:07:02.596] Brandon Jones: So first off, I'll point people at Andy McClure on Cohost, who just did a really great post about WebGPU in which she covers some of this history in a kind of comical way. But if you want to read a little bit more about it in a very easy-to-digest format, go find that post, because it was really good. From my perspective, what you kind of saw was, you know, OpenGL has been around for a good long while. It is a 30-year-old API at its core. And it's gone through a lot of evolution, but there's still some pieces of that API that are just tied to the basic patterns of software development that were common at the time. And in OpenGL's case, that mostly means there is just this giant array of state that you flip globally, and you kind of have to be aware of what every single piece of your program is doing if you want to manage that state efficiently, if you don't want to be just flipping every piece of state every time you want to do anything. And that worked out 30 years ago. That was kind of the pattern of the day. But we see today that that's just not standard software practice. And we know that there are some scalability problems that come with that kind of stuff. But with it being the only open API (as you mentioned, Khronos has fostered some of these more open specifications that aren't controlled by any single corporation), it tended to be the de facto thing if you wanted to do something that was cross-platform. There came a point, however, where all the companies somewhat simultaneously said, you know what, this isn't good enough anymore. It kind of started both with Apple saying, hey, we want to move forward with Metal. And then AMD actually had an API called Mantle, which was their own little proprietary thing that they started with, that was much more akin to what we now think of as these modern APIs. They started running with those saying, hey, this is just a better way to approach the design for the hardware at the time.
There was so much effort that was going into taking the commands that we were feeding in in the older APIs and remapping them into the reality of what the hardware actually does. And so these new APIs were an attempt to reshape the programming interface into something that more closely matched what the hardware actually needed. From there, Microsoft jumped on to DirectX 12, which followed much of the same patterns. And Khronos began to say, well, what's the next step for our ecosystem? I had the privilege of being in the room for a lot of those conversations. So that was kind of a fun piece of graphics history to watch develop. And what came out of it is that AMD provided their Mantle API and said, we're willing to donate this to Khronos as the basis of a new API. And so Vulkan is not Mantle. It is a distinct thing. But that was kind of used as the foundation that Vulkan was built up from. And so you ended up with these three new APIs, Metal and Vulkan and D3D12, all of which approach the problem of interacting with the GPUs in a more efficient way in approximately the same way. But now the problem was, whereas previously you had OpenGL and it kind of worked everywhere, even if it wasn't the latest and greatest thing, you didn't really have that equivalent anymore because Vulkan kind of wanted to be that equivalent, but Apple made it very clear that they had no interest in that and Microsoft wasn't really pushing for it hard. You know, the IHVs, the GPU vendors still were providing drivers for it, but support was kind of spotty across the board. And the only platform where it really is like the core native API is Android. So you have this fracturing of the APIs where no single API was serving everybody's needs across all the platforms. And so when we jump back into web space, and we're still building everything on top of OpenGL, WebGL, the question comes, well, how do we evolve? 
And it was very, very clear at that point that Khronos was putting OpenGL into maintenance mode. They were not going to push new features into that ecosystem outside of some basic quality-of-life improvements. And so we needed to figure out what the next thing was going to be for the web. There were some proposals for something akin to a Web Vulkan. But Vulkan as an API is very, very complex and very low-level, very verbose. And it gives you access to things that you don't necessarily want people on the web to need to worry about: lots of very low-level memory management and managing the memory pools and stuff like that. So having kind of made the assessment, yeah, Web Vulkan is probably not what we want. Web Metal is not really going to happen; Apple is very proud of their intellectual property there, and they're probably not going to just give it away for free for everybody on the web to use. And a Web D3D would probably suffer approximately the same fate. We basically came down to, well, we're going to just have to build our own thing to make it work across the entire ecosystem. And that's how WebGPU got started. That was about seven years ago, if I recall correctly. And so it's been a long time coming, but in the end, I feel very, very good about the product, the API, that that team has been able to build. And I should point out that I was there for a little bit of the start of it, and then I took a big long break from graphics on the web to do XR on the web through WebXR. And then once that had shipped, I came back around to helping out WebGPU, because it turns out that the immersive web needs a really good graphical foundation behind it. And I was very enthusiastic about what WebGPU could potentially bring to that ecosystem in the future. But yeah, it's a good middle ground between all of the new APIs that is not quite so low-level as any of them, because we have to be able to put the appropriate abstractions in place.
But as a result, it's much more approachable than any of them. It still gives you those fundamental pieces of more low-level, closer-to-the-metal interaction with the GPU without asking you to strictly keep track of every single piece of memory that's allocated or anything like that, like the Vulkans and D3Ds of the world do.
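The contrast Jones describes, a mutable global state machine versus objects validated up front, is visible in how WebGPU groups all render state into a single pipeline descriptor. Here's a rough sketch; the entry point names and the `bgra8unorm` format are illustrative choices, not anything mandated by the API.

```javascript
// In OpenGL/WebGL, state like blending, culling, and the active shader is
// set through many individual global calls (gl.enable, gl.useProgram, ...).
// In WebGPU, all of that state is bundled into one immutable pipeline
// object that the implementation can validate and compile once, up front.
const pipelineDescriptor = {
  layout: 'auto',
  vertex: {
    module: null, // in real code: device.createShaderModule({ code: wgslSource })
    entryPoint: 'vs_main',
  },
  fragment: {
    module: null, // shader module again; left null since this is only a sketch
    entryPoint: 'fs_main',
    targets: [{ format: 'bgra8unorm' }],
  },
  primitive: { topology: 'triangle-list', cullMode: 'back' },
};
// With a real device you'd create it once:
//   const pipeline = device.createRenderPipeline(pipelineDescriptor);
// and rendering then sets the whole bundle at once:
//   pass.setPipeline(pipeline);
```

Because the whole configuration is one object, the browser can validate and compile it ahead of time instead of re-checking global state on every draw call, which is a large part of where the efficiency of the modern APIs comes from.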
[00:13:48.060] Kent Bye: Yeah, I just watched the Google I/O keynote that was yesterday. And I kind of laughed to myself when I saw the framing that WebGPU makes the web AI-ready, which it certainly can, but it felt like the history of this was going back to 3D graphics. And there's so much of the immersive web, virtual spaces, and just 3D graphics in general, you know, whether it's gaming or virtual worlds, or if we want to call it the metaverse. It seems like this is actually a key component of what is going to enable the performance of what you see on the web being at the same level as what you see in a native application. And I think that's been part of the frustration, is that the web has been really hindered by lower performance and you can't really get the same level of fidelity. And so I'd love to hear some of your reflections on whether or not you think that WebGPU is actually going to bring more feature parity when it comes to native XR experiences.
[00:14:40.918] Brandon Jones: Right. So WebGPU, like you mentioned, there's a lot of framing of it right now around, oh, this will enable AI on the web or stuff like that. And, you know, that is true, but it's by virtue of the fact that there is compute available to it. And that's just one piece of the baseline expectations of what any modern GPU API would include. And so, yeah, we're making sure to cover all of the bases and make sure that all of the fundamentals that you need in place for rendering, for compute, for interaction with other pieces of the browser ecosystem are all there, because it's all important. Now, in terms of parity with the other systems, there's this law of nature in the browser ecosystem that we're always going to be a little bit behind. And it's unfortunate, but it's just kind of the way things work. A good example of this is ray tracing, which has been a big piece of what modern GPUs have been striving towards. It's available on the consoles. There's a lot of research that's gone into it. And so when we come out with something like WebGPU, there's this immediate chorus of people going, oh, so can I do ray tracing with it? It's like, no, not yet. Because the ray tracing mechanisms that are available for all of the native APIs haven't quite coalesced. They're not exactly compatible. It's not clear exactly how you write a layer that abstracts away all of them and still gets reasonable performance out of it, still lets you access all the capabilities that you would want to. We both need to allow that ecosystem to mature a little bit and do a lot more research into what it's going to take to bring those features to the web. And we have to release at some point. We don't want to wait until we've covered every single possible GPU feature on the market before we ship, because then we will never ship.
So because of those factors, there's always going to be this sense that the web is a little bit behind, and that's okay, because we are fundamentally the compatibility layer for the world's computing devices. And if we have to sacrifice a little bit of being on the bleeding edge in order to get there, in my estimation, that's an okay trade-off. It's more important that we can say, yes, this fundamentally is able to work across the entire ecosystem. Now, that also brings with it kind of an expectation that we're working on anything from the highest-end gaming PCs to the lowest-end budget smartphones. People generally, when they put stuff on the web, want something that's going to work across that entire spectrum. Maybe you're okay leaving off a little bit of the low end, but certainly you want, say, an iPhone from three years ago to reasonably run your content, because there are still a lot of those floating around in the ecosystem. That also puts a natural pressure on content that you'd see on the web to not live up to the standards of a 100-gig PC gaming title that you pulled off of Steam or something like that. It's just kind of a fundamentally different environment. And so when you start asking about things like, will this bring parity to some of the native APIs, it's a tricky question, because we are always going to be a little bit behind them in terms of the highest-end capabilities, and the web is not always the ecosystem that you want to target for that same kind of content. But with those in mind, this should bring us much, much, much closer than we've been able to get before to the kind of fidelity that you've been able to get out of some of the more native toolkits. We've already seen good results from partnerships like with Babylon.js, PlayCanvas, and Three.js, where they've been able to get very meaningful improvements out of their libraries in terms of performance, in terms of graphic fidelity.
We've also seen Unity be able to make much better use of this API for their web exports, which are a work in progress on their end. So I'm very hopeful that this will help bring the web more to the modern stage in terms of making use of the GPU.
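To give a concrete sense of the "data-parallel computation" side that the AI framing refers to, here is roughly what a minimal WGSL compute shader looks like, embedded the way it typically is on the web: as a string handed to `device.createShaderModule`. Buffer setup and dispatch are sketched only in comments; this is an illustrative fragment, not a complete program.

```javascript
// A minimal WGSL compute shader that doubles every element of a
// storage buffer. Each invocation handles one array element, and
// 64 invocations run per workgroup.
const doubleItWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    // Guard against the last workgroup running past the end of the array.
    if (id.x < arrayLength(&data)) {
      data[id.x] = data[id.x] * 2.0;
    }
  }
`;
// In a WebGPU-enabled browser you would compile and dispatch it roughly like:
//   const module = device.createShaderModule({ code: doubleItWGSL });
//   const pipeline = device.createComputePipeline({
//     layout: 'auto',
//     compute: { module, entryPoint: 'main' },
//   });
//   pass.setPipeline(pipeline);
//   pass.dispatchWorkgroups(Math.ceil(elementCount / 64));
```

This same shape, a storage buffer plus a dispatched grid of invocations, is what machine learning libraries build their matrix and tensor kernels out of; WebGL had no equivalent general-purpose compute path, which is why this is such a step change for AI on the web.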
[00:19:09.920] Kent Bye: So there was the Google I/O keynote framing of WebGPU, which is all AI all the time. And then in your more in-depth discussion, you were able to dig into a lot more details as to what's happening, both in the WebGPU content ecosystem, but also some of the other contexts of what's happening with other browsers. And so you'd mentioned that there's work that's happening in Firefox, and Firefox Nightly is shipping it. And you had said that work is being done in WebKit, which is always a little bit uncertain. Like, there's been work that's been done on WebXR in WebKit, but yet Apple hasn't necessarily shipped something like WebXR. And so I'm curious to get some of your take on, you know, with the keynote, there's a whole thing about Baseline: for each year, there's going to be like a baseline of what we can expect for the following year as to what's going to be made available. I'd love to hear some of your take as to what's happening with WebGPU and other browsers, especially Apple, since Apple seems to be the one entity that is kind of holding things back. Like, they still haven't shipped a reasonable version of WebXR, as an example.
[00:20:14.676] Brandon Jones: Right. So, going into this with the understanding that I am a Chrome engineer, I do not speak for the other browsers, and I cannot give timelines for them specifically. What I will say is that I have been very, very happy with seeing the amount of effort that has gone into implementations from the other browser vendors. Now, Chrome shipped this first, and that is largely by virtue of the fact that we just have more headcount to throw at it. The Chrome team at Google is kind of staggeringly large, so we were just able to put more effort into it faster, because we have more people available. But that doesn't mean that the other browsers are any less focused on being able to ship this. We've seen a lot of great progress from Mozilla, for example, recently. I go back and check the compatibility of some of the samples and demonstrations that I've coded up in Firefox Nightly every once in a while. And the delta between what Chrome can do and what Firefox can do is shrinking at an astonishing pace, really. And so I'm very enthusiastic about the progress that they've made. And while I won't give timelines for any of it, I feel pretty confident that you will see WebGPU shipping in Firefox sooner rather than later. They're really putting a lot of good effort into it. And the fact that they did not ship day-and-date with Chrome is no commentary on how dedicated they are to it. It's, again, just a matter of Chrome having more manpower to throw at it. For Safari, "Apple does not comment on future products or releases" is their standard line. And so they're more tight-lipped about it. I can't just get a Safari nightly and try out WebGPU in it. But what I have seen personally is that, first off, Apple has been part of the specification discussions from day one. It's something that they have been collaborating with the other browser vendors on in a very meaningful way.
It's not just Apple signing on to say, yes, we're technically part of the group, and then never pitching in, which is what happens in some cases on these standards. So I've been pleased to see that. And I will say that a lot of times the questions that get raised from Apple in terms of specification technicalities and implementation details and validation requirements and all that kind of thing are indicative of a group that is actually working on something, rather than just sitting back and theorizing about what problems they may encounter. So while I have less of a sense of how far along they are in their implementation, I do have a very good sense that they are actively working on one. And there's a lot of people that have kind of pointed out, oh, you know, this has shipped in Chrome, and I guess in five years we can actually use it because it'll finally show up in Safari. I am far more optimistic than that. I think that we are likely to see that Safari, Firefox, and all the Chromium-based browsers all have this shipped and basically at parity in a much, much shorter time frame than we saw the same thing for WebGL.
[00:23:42.702] Kent Bye: Well, Apple's developer conference, WWDC, is coming up here in the beginning of June of 2023, and they're rumored to be announcing potentially some new XR headset, their mixed reality, virtual and augmented reality headset. I'm wondering if you have any insight, not in terms of what they're going to be announcing, but in terms of WebXR implementation. Because I know that WebXR shipped a number of years ago now, but again, because it hasn't shipped in Safari, I think it's in some ways kind of held back the big green light for everybody to just start creating this immersive content. And so I know there's folks like Ada Rose Cannon, who used to be working with Samsung Internet and is now working at Apple. I see that as an incredibly good sign. I know that there are some open source companies that have been contributing different aspects to, say, the open source version of WebKit, but, you know, as to what degree those things are going to get into the official Safari release, like you said, there's no comment. But I'd love to get some of your sense, since this is another specification that you've worked on, of WebXR. What can you tell me, or what can you see from your perspective, of what is happening with Apple and WebXR?
[00:24:49.005] Brandon Jones: Yes. So again, Apple being a fairly opaque organization, I don't have tons of insight to give here. I certainly don't know what they're going to be announcing in June. I'm hopeful that it's going to be, you know, the devices that have been rumored, because that would be a very, very interesting and compelling injection into the ecosystem to show some faith in that direction in computing and get investors excited again. But what I will be able to say is, as you pointed out, there have been some movements in terms of Apple's participation in the WebXR standard, most significantly Ada joining. Ada is one of the chairs. She's a very important part of the WebXR standards process. And so to see Apple pick her up and have her continue doing what she was doing at Samsung in terms of chairing the WebXR group was an extremely positive signal, I think. In fact, we just recently, within the last few weeks, had a WebXR face-to-face of the standards group, and it was hosted at Apple. So obviously, as you might expect, there were multiple people from Apple in the room. They were very actively engaged in the standards discussion process. Some features more than others; you know, it's always interesting to see what features pique a particular company's interest. But yeah, I don't know anything about timelines, or whether or not they are guaranteeing that WebXR will ship on any browsers that they might ship on a hypothetical headset, or anything like that. But I can say that Apple is engaged in the standards process for WebXR, and I see it as an immensely positive sign.
[00:26:40.426] Kent Bye: Yeah, that's great. And I know that there's also OpenXR, which they haven't, uh, do you know if they've made any sort of commitment to OpenXR? Because that seems to also be a key component. Like there's a part of OpenXR that's connected to WebXR if I'm correct, but I don't know if that's a prerequisite.
[00:26:56.660] Brandon Jones: So to clarify, OpenXR and WebXR do not have any direct association. Now, all of the WebXR implementations that I am aware of today happen to be built on top of OpenXR, because it is the most ubiquitous native API for that kind of work. And a lot of companies who have already bought into that ecosystem have been building on top of OpenXR and contributing to it. So, for example, when you run WebXR on a Meta headset, you're running it on top of their OpenXR implementation. But it is not the same kind of association as we see with, say, WebGL and OpenGL, where the web standard is basically just a thin layer over the Khronos standard. In this case, we paid a lot of attention to OpenXR as it was being developed, because we viewed it as, if something went into OpenXR, it was a good signal that multiple companies at the native level were saying, yes, we can support this functionality. And that gave us confidence on the WebXR side that we could expose a lot of those things to the web and have it be something that was going to not just be a niche part of the ecosystem. However, we took a very web-centric approach to designing WebXR and making sure that we weren't just copying the OpenXR API word for word. It was done under the W3C, not under Khronos. We designed everything to make sure that it felt native and natural in the web environment, and then made sure that you could do a reference implementation on top of something like OpenXR as a sanity check, basically. So there's not a direct association. And therefore, if Apple does not implement OpenXR, they can still implement WebXR. And that's important, because I do not expect Apple to touch OpenXR with a 10-foot pole. Apple and Khronos have history. And I'm not going to get into it as part of this, but there are some legal quarrels that have gone on there.
And as a result, Apple is unlikely to, at any point in the foreseeable future, by my estimation, ever use or support a Khronos-backed standard, with the exception of WebGL, which is a Khronos standard for which they've kind of carved out a special place simply because it's part of the web.
[00:29:23.716] Kent Bye: But they did WebGL 1. Did they ever ship WebGL 2? Yes, they did. Okay. Well, that's all helpful information and context. Sorry, there's been so many Apple questions, but if we're talking about web standards, it's sort of like, you know, unless Apple does something with some of these, then it kind of makes it like, okay, fine, there's a couple of implementations, but it's not across the entire web. Which I think is part of the benefit of the web, is that you have to get buy-in from all these different companies. And Apple seems to be the odd player out that hasn't been all in when it comes to all these different specifications.
[00:29:56.577] Brandon Jones: It is important. And anytime you're talking about a web specification, the interaction of these companies is kind of the elephant in the room for a lot of it. I know that there's a lot of concern from many people in the web ecosystem about the fact that so much of it has consolidated around Chromium-based browsers. And to be perfectly honest, as a Chrome engineer, I kind of share that concern. I'm very grateful that we have Mozilla, and I am grateful that we have WebKit as alternatives in the ecosystem. Because I do feel like, for something as important as the web, which claims itself to be completely cross-platform and kind of this ecosystem within all the rest of the ecosystems, we really do need to be able to keep ourselves honest, keep ourselves from leaning in too much to the needs of any particular company. We need to be able to collaborate on these solutions. And by and large, we really do. It's not nearly as much of a dire situation as you might think from the outside, where you look at it and say, oh, there's all these Chromium-based browsers, and then there's Mozilla and WebKit over here, and so therefore Chrome is just taking over. If you're actually in these standards meetings, there's no question that Chromium-based browsers do have a large presence in the room. But fundamentally, everybody is there to solve the same problem. And everybody is there to make sure that their voices and the voices of the people that they're collaborating with are heard. And nobody really wants to go into these things and just steamroll over the other companies. So it is important that everybody is participating and moving forward. And when you have one company that's trying to run too far ahead, like Chrome has sometimes had the finger pointed at them for, or you have one company that's lagging way behind, which is something that Apple's gotten plenty of accusations of lately, it makes things difficult.
So my personal perspective as somebody who lives in this world is that I see people from all ends trying to like pull it into the center and say, we all want to collaborate on this thing and move it forward at a reasonable pace and not leave anybody behind and not let anybody get too far ahead. And so it's always difficult to gauge exactly where the lay of the land is there, especially as an outsider who doesn't have the kind of direct insight that some of the developers like I do have, but I'm largely optimistic about it. I think that there are corporate politics that sometimes get in the way of that, but really everybody on the ground doing the work pretty much wants the same thing, and that's just to make the web better for everyone.
[00:32:55.372] Kent Bye: Yeah, well, that's really helpful context. And I wanted to also ask a number of different follow-up questions specific to WebGPU, specifically about the shader language. I've noticed that there's a new shader language called the WebGPU Shading Language, that's WGSL, so you see a lot of WGSL files, and there are also TypeScript files, .ts files. There are other existing shader languages: GLSL, that's the OpenGL Shading Language, or HLSL, the High-Level Shading Language, slash Cg. ShaderLab in Unity uses HLSL. There's ShaderToy online that uses GLSL. And there's a new Compute.toys that's going to be featuring the equivalent of ShaderToy, exhibiting some of these different shaders. So I guess, on one hand, people are probably not terribly excited to know that there's a new shader language. But on the other hand, as I was watching through all of the different announcements at Google I/O, there's Bard, which does translations from one language to the next. And I imagine that at some point it may be easy to say, let's take this existing shader and translate it into WGSL. However, there may be a lot of limitations to what a large language model would be able to do. I think there's probably going to be a lot of engineers who actually know how to use the full extent of this new shader language writing things before you have something like a large language model that's able to just do these automatic translations. But I'd love to get your sense of this new shader language, the WebGPU Shading Language: why it was needed, and what you expect for folks who want to just kind of copy and paste existing shaders and get them to work on the web. Do you expect there's going to be a lot of writing from scratch, or will there be tools to help create a library of the existing cool things that you may find on a website like ShaderToy?
[00:34:44.476] Brandon Jones: Yeah, good questions all around. So for one, just to get it off my mind, you were mentioning AI and large language model driven translation of these kinds of things. One thing to be careful about at this point is that something like WebGPU and its shading language, which we call WGSL, they're very new. And these language models function by taking a very large corpus of content and digesting it down into these large data sets of, I don't even know what the right word is for that primitive, but—
[00:35:24.217] Kent Bye: Like a token? Or, like... yeah, that's not it.
[00:35:25.699] Brandon Jones: I want to say tensor, but I think that's the wrong level of abstraction. Higher dimensional latent spaces? Sure, that sounds great. Anyways, when you have something that's new, like WebGPU, there just simply isn't the right volume of examples and people talking about it and code snippets and whatnot to correctly feed those kinds of models. And so if you ask any of the large language models right now, hey, I want to know this thing about WebGPU, it just doesn't know. It's so bad at it. It will confidently give you something, but it's not going to be the right answer. Usually it ends up with something that looks kind of DirectX-y with a little bit of GLSL as the shaders, and it's just all wrong. So this is just one of those areas where we collectively need to build up a very large amount of WebGPU content before these language models can start even approximating correct answers in that realm. So maybe someday, but we have to put in the work before we can get there. In terms of the shading language in general, yes, there is a new shading language. And in my experience, graphics developers like to complain any time they have to rewrite shaders that have already been written. And so, yeah, there are a lot of people who like to kind of gripe about it. Now, realistically, look at the situation as it exists today with WebGPU taken out of the picture completely. You have DirectX, which functions off of HLSL as its high-level shading language, and DXIL is the compiled IL, which is completely different from SPIR-V, the Vulkan compiled IL, which doesn't really have an official high-level language, but usually comes from GLSL; that's what most of the toolkits are built around compiling. Then on the Metal side, you have the bytecode for the Metal Shading Language, which is basically just C++ compiled through a specific pinned version of LLVM. None of them are compatible.
We're in an environment where, if you're doing something like Unity or Unreal or anything like that, those engines are all translating their shaders into three different environments at the very least anyways, and there are toolchains that exist to do that. And we're adding one more to the pile. And sure, it would have been nice, I guess, if we had just said, oh, well, HLSL seems to be the most popular one outside of the web environment, and tried to work with that. But that in and of itself presented some challenges, in that we were, as I stated before, trying to build this new thing that abstracted over the concepts of all the rest of the APIs. And when it came down to evaluating the different shading language options that were available, it was difficult to see how any of them could really, truly fit the evolutionary nature that we are going to need for a web standard. We couldn't depend on evolving in lockstep with one of the existing APIs like we had with OpenGL and WebGL, which is what allowed us to use GLSL in that case. And even then, you can't use the full suite of GLSL for WebGL; you can use up to a specific version of it, and the language continued to evolve past that. And there were some features where we needed to take special steps, like clamping the outputs. And with that history in mind, and looking at the options on the table, and a little bit of corporate-level legal wrangling because of the aforementioned Khronos disputes, the group made the decision that it was going to be best to move forward with our own shading language, simply because at that point we could control how it was built, how it abstracted over all the platforms that we needed to support on the back end, and how it evolved in step with the WebGPU API itself.
From that perspective, I don't think it's going to be a popular decision, but I would encourage any graphics developers who are curious about WebGPU to just give it a try. My personal experience with this was, I came in — like I said, I kind of took a WebXR-related break from doing GPU-based stuff on the web — and then came back into the WebGPU group about two years ago at this point. A lot of the foundational infrastructure for the API was set up by then. And I just kind of dived in and said, I'm going to build some things with it and see how I like it, and then complain loudly about the bits that I don't. And I tried out WGSL at that point in time as the shading language, and I kind of hated it. It was not fun. It was kind of janky, and there were a lot of bits that were missing, and there was just lots and lots of typing required to do simple things. And I was not happy about it. And I communicated some of the things that I was having trouble with to the team that was working on the language; a lot of other people did too. And the team working on WGSL just did an amazing job of taking my feedback and the rest of the community's feedback and iterating and iterating and iterating, working out all the bugs, implementing the syntactic sugar, and addressing the pain points. And two years after that really, really bad first impression, I can confidently say that it is my favorite shading language to work in. And I couldn't even point to a specific moment at which that switch happened. It was just going back and, you know, every few weeks I'd hear from them: oh yeah, we've implemented this, and we've done that, and we fixed this problem that you complained about. And so then I'd go into my own projects and update all the shaders to take into account the latest and greatest. And it evolved in front of my eyes into this thing that really felt very good to work with.
So while nobody is going to be happy about being told, oh, you have to do yet another translation of these shaders, if you are just sitting down and saying, I'm building for the web and I'm going to have to write the shaders for the web specifically, I think that you're going to be pleased with the experience of actually building these shaders, because it really is a very nice-feeling environment to work with. Bonus points if you're using WebGPU in a native environment like Rust, because the language is very Rust-inspired, and so it will feel very much at home in that environment.
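To give a flavor of that Rust-inspired syntax, here is a small WGSL compute shader, written as the JavaScript string you would hand to `device.createShaderModule`. This is an illustrative sketch — the helper function, buffer binding, and workgroup size are assumptions for the example, not taken from any project discussed here.

```javascript
// A hypothetical helper that builds the WGSL source for a trivial
// compute shader doubling every element of a storage buffer. Note the
// Rust-like flavor: `fn`, typed parameters, and attribute annotations.
function makeDoubleShader(workgroupSize = 64) {
  return `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(${workgroupSize})
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  data[id.x] = data[id.x] * 2.0;
}
`;
}
```

In a browser you would pass this string to `device.createShaderModule({ code: makeDoubleShader() })` and dispatch it from a compute pass.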
[00:43:29.654] Brandon Jones: So it's a really interesting question, and I'm glad you brought this up, because it very, very much depends on the architecture of the library that you're talking about. With something like Babylon.js, my understanding at this point — and I haven't actually tried it, I really should — is that you can basically go in and flip a single Boolean in your initialization code and say, I want you to be WebGPU, and it'll just do it. They've constructed their library in such a way that you generally aren't interacting directly with the shaders behind the scenes. You're selecting high-level materials and saying, these are the properties that I want these objects to have visually, and go. And because they've made that architectural decision, they were able to completely hide the abstraction between the two APIs behind the scenes and say, well, you flip this flag and we initialize it with WebGPU, and it just gets better in these different ways: we're able to do this faster, we're able to do this effect nicer, et cetera, et cetera. On the other hand, something like Three.js was designed in such a way that — I don't know if they promoted having people change the shaders directly, but that was certainly how people used the library a lot: going in and directly manipulating their own shaders, injecting them into the default materials and all that. And as a result of that particular usage pattern, Three.js is going to have a more difficult time updating to use WebGPU. Now, they've been working hard on it. And part of that pattern is saying, well, we don't want to encourage people to manipulate the shader strings anymore. Instead, we're going to build something that is much more like the material graphs that you will see from the Unities or Unreals of the world, where you're describing things at a much higher level: we're going to have this particular input and this particular output, and they tie together in these ways.
And then we will build the shader for you based on that description. That's work in progress on their end. I haven't checked to see how far along it is, but I know that there are multiple demos on the Three.js site using WebGPU. They're cool, they're compelling, and I feel very confident that they're going to make good use of the API. But it's not going to be the seamless one-flag switch that some of the other libraries might be able to achieve. So yeah, again, it's going to depend very much on how the library is structured.
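The "flip a single Boolean" pattern Jones describes can be sketched as a small factory. This is a hedged illustration, not Babylon.js's real API surface: the engine constructors are injected as parameters so the snippet runs anywhere, and the `initAsync` step mirrors the kind of asynchronous setup a WebGPU engine needs, since acquiring a WebGPU device is itself asynchronous.

```javascript
// Sketch of hiding the WebGL/WebGPU choice behind one flag. Application
// code calls createEngine once and never touches either API directly;
// the engine classes passed in here are illustrative stand-ins.
async function createEngine({ useWebGPU, WebGPUEngine, WebGLEngine, canvas }) {
  if (useWebGPU) {
    const engine = new WebGPUEngine(canvas);
    // WebGPU device acquisition is asynchronous, so engines typically
    // expose an async init step before they are ready to render.
    if (typeof engine.initAsync === 'function') await engine.initAsync();
    return engine;
  }
  return new WebGLEngine(canvas);
}
```

The design point is the one Jones makes about Babylon.js: because users only ever see the engine and high-level materials, the library can swap the whole backend behind this one decision.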
[00:46:04.269] Kent Bye: In your nascent WebGPU ecosystem slide, you also mentioned things like Construct 3, Google Earth, Google Meet, PlayCanvas, Sketchfab, TensorFlow, and Unity, as well as Babylon.js and Three.js, which we've already talked about. But I'd love to hear some of your other thoughts on how you see some of these other applications or services starting to use WebGPU.
[00:46:26.053] Brandon Jones: Yeah, I'm not going to be able to get into exact details about how everybody's using it, because in some cases, I just simply don't know. But for that particular slide, we just started reaching out to a bunch of companies that we felt like were likely to benefit from WebGPU, or that we had already heard were interested, or doing ports, or anything like that. And we got back almost universally positive feedback from everybody involved saying, yes, we are planning on it. We are actively in the middle of it. We have already published our version of using WebGPU. Everybody's enthusiastic about it. Anybody who has been working in that graphical space on the web seems to be very, very enthusiastic about this next step. We are not hearing very much in the way of, why couldn't you just make WebGL better because it would have saved me time? I think everybody recognizes that having a little bit of a clean break from, like I said, that 30-year-old API shape is going to be beneficial. And so, yeah, there's just tons of enthusiasm in the ecosystem all around for what this next step can bring.
[00:47:45.103] Kent Bye: Yeah, I'd love to hear any additional context for something like TensorFlow.js, especially because the whole emphasis during the Google I/O keynote was that WebGPU can make the web more AI-ready. So there's that framing. And obviously there are things happening with cloud rendering — something like Midjourney, where everything happens on a back-end server — but you showed some demos of running a diffusion model natively on the web in less than 10 seconds, whereas it takes something like three times longer with the previous methods. So yeah, I'd love to hear how you see the AI use case of using the GPU of either a phone or a computer. What do you think that's going to start to enable, with more of this AI compute happening natively on device rather than in the cloud?
[00:48:34.035] Brandon Jones: Yeah, so I can't speak too much to the TensorFlow use case specifically, simply because I don't know a lot of the details. What I can say is that they were already using WebGL to offload some of these computations to the GPU, which accounts for a lot of the speedup they had already gotten. But when you're doing something like that with WebGL, everything has to be structured in a very specific way so that it looks like a rendering operation. Effectively, you're saying: well, I'm going to push out some triangles, and every pixel of that triangle is going to represent a particular computation that I want to do — one that has nothing to do with triangles and nothing to do with color outputs, but I have to encode it as color outputs in order for the GPU to make sense of it all. And you just end up jumping through hoops to make that happen, hoops that slow you down. And there are certain things you would like to do with that data, like caching intermediate results, that you just really couldn't do because of how those APIs were structured. So a lot of the benefit that something like TensorFlow.js sees from being able to move to WebGPU is that we have proper compute shaders now, and they can do the same workloads, but they don't have to pretend that they're drawing operations anymore. This brings a clarity of code, where you can state more directly what you mean. It brings more flexibility in the inputs and outputs you can have for any particular step, because you don't have to treat everything as if it's being packed and unpacked from a color. And it also brings things like being able to control shared memory for each batch of computations taking place on the GPU. So you can say, well, I'm going to do a whole bunch of work over this little group of neurons or tensors or whatever they're called — I really ought to figure out the term for that.
But because these are all interrelated, I can take some of those operations and save them off to this little piece of shared memory over here and not have to redo them, whereas with WebGL, you just kind of had to redo everything every single time. And all of those add up together to make for a lot more efficiency. It's not completely free. You do have to do the work to actually do that conversion. It's not like you can just take the shader that you used to be using in WebGL and plop it into WebGPU and say it's magically faster now — you have to convert shader languages, for one. And while you might get a little bit faster that way, generally you want to make use of the capabilities that the new API gives you. But there is that opportunity there. Looking more broadly at the impact this can have on those use cases: I have not been very deeply immersed in the machine learning ecosystem up to this point, but as I've been looking around at the use cases people are talking about for WebGPU, obviously machine learning and AI come up a lot. One of the interesting things that I've seen is that there's a lot of dissatisfaction with the status quo for how some of these operations are done. From my little bit of fiddling around with this, a whole lot of these operations are done with: oh, well, go out and get this very, very specific Python environment, and download this huge collection of dependencies, and then go get the specific model — no, not that one, this one, this version of this one. And oh, wait, you didn't have an NVIDIA GPU? Well, you can't do it, because everything that we're building is written off of CUDA, which is NVIDIA-only. You have this just giant pile of dependencies that's really pretty hard to manage. If you've ever tried to install any of the local models and run Stable Diffusion on your own GPU, it is a pain. This is not a trivial thing to do.
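The hoop-jumping Jones describes — encoding arbitrary computation as color outputs — can be seen in miniature in the packing step WebGL GPGPU code had to perform. Here is a sketch of that round trip; the byte layout is illustrative, and real WebGL code did this inside shaders against RGBA8 textures rather than on the CPU.

```javascript
// Before compute shaders, a WebGL "compute" result had to leave the GPU
// as pixels: a 32-bit float smuggled through one RGBA8 texel.
function packFloatToRGBA(value) {
  const buf = new ArrayBuffer(4);
  new DataView(buf).setFloat32(0, value);
  return Array.from(new Uint8Array(buf)); // one "pixel": [r, g, b, a] bytes
}

function unpackRGBAToFloat(rgba) {
  const buf = new Uint8Array(rgba).buffer;
  return new DataView(buf).getFloat32(0);
}
```

With WebGPU storage buffers, a compute shader reads and writes `array<f32>` directly, and this entire translation layer disappears.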
And so there is a lot of enthusiasm around, very specifically, the idea that I can build something in this environment now — WebGPU, on the web — where I can send you to a page that will just have all the dependencies there. You don't have to set up a specific environment, because I'm already hosting it on my server; I'm just giving you the right package. And because we've written the shaders that do all this computation in WGSL rather than CUDA, they should work on anybody's GPU, all the way down to your Android phone — not just this one specific device with this one specific set of drivers and this one specific language. It's going to take some time for that ecosystem to develop, but we're already seeing the beginnings of it, with people exposing some of these large language models or diffusion models to the web. And it's really interesting to see how much simply being able to deliver something through the browser can change the calculus. It makes it much more approachable for a lot of people. Now, the flip side is that what the web doesn't fix is the fact that a lot of these models still depend on lots and lots of VRAM, and they're running off of models where the data set itself is easily five gigs or more. And so you go to any of these existing demos, and it says, oh, this is WebLLM — type something into the chat box, and we'll send something back. And you say, great, I'm asking a question. And it says: hold up, just downloading four gigs in the background, don't mind me, come back in an hour, you'll be fine. And that's only if you're running on a device that has enough GPU memory to actually execute the model. So there's a lot of potential there, but there are also some hard fundamental problems that aren't going to magically be solved just because we're on the web.
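The "come back in an hour" quip is roughly the right arithmetic. A quick back-of-the-envelope helper makes that concrete — the sizes and link speeds below are illustrative, not measurements of any particular model or demo:

```javascript
// Minutes to download a model of a given size at a given link speed.
function downloadMinutes(sizeGiB, megabitsPerSecond) {
  const bits = sizeGiB * 1024 ** 3 * 8;          // GiB -> bits
  const seconds = bits / (megabitsPerSecond * 1e6); // Mbps -> bits/sec
  return seconds / 60;
}
```

A 4 GiB model at 10 Mbps works out to roughly 57 minutes, which matches the joke; even at 100 Mbps it is still several minutes of waiting before the first token.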
And if we want to get to an environment where you go onto a website and it's able to run a nice, compact little LLM that you can use from your phone, just happily spitting away on your GPU for privacy's sake — that's a wonderful future, and I hope we get there. But we're going to need a lot of people who are much smarter about this than I am working on reducing the size of the models, making them easier to execute, making them not take up so much RAM, and so on. So I see that there's a lot of potential there; I also see that there's a lot of work to be done.
[00:55:18.672] Kent Bye: Just a quick follow-up on that, 'cause you've mentioned how WGSL, the WebGPU Shading Language, is the input to the GPU. Am I getting the sense that, as WGSL was being developed, it had maybe some of these more abstracted AI use cases in mind that weren't so specific to graphics — more of a parallel-processing compute language? Is that accurate? Or was there anything happening within machine learning or parallel processing that was changing the way WGSL as a language was being developed?
[00:55:50.133] Brandon Jones: I'm not sure that I would say that. Now, I'm not the best person to speak to this — I did not participate in WGSL's development by and large, other than complaining about things that I didn't like, as I mentioned. But one thing that I can say, just as an overarching theme for both WebGPU and, I assume, WGSL, is that we spent a lot of time looking both at what the commonalities were with the other APIs and, specifically, at what people were actually using. Because there are a lot of cases where it's like, well, all of the cards can do this, and so we should add it to the API, right? And then we'd reach out to Microsoft and Khronos and NVIDIA and everybody, and they'd say: nobody uses that. We thought they would, and we put it in the API, but this just isn't worth the silicon that it takes to run it. Don't bother adding it to your API. And so there's a healthy dose of being able to look back at what had happened in that space over the prior five years or so and say, oh, this bet did not turn out the way all these API developers thought it would. And so we're going to try and adjust course and target the reality of how people are using these things, not the hypotheticals that came about when they were first being designed. When it comes to that, I'm sure we could look at some of the things being used in compute shaders today and say, oh yeah, these particular sets of functions get used all the time because machine learning became a thing, and these ones not so much. There are also a couple of upcoming features, such as the ability to do lower-precision floating point calculations and to use some very specific instruction sets, that we heard were specifically useful for machine learning, and so we're trying to expose them as extensions to the language.
But generally, I think when WGSL and the compute capabilities were being designed, the attitude was: we're just going to try and expose the best subset of everybody else's capabilities that we can, keeping in mind how these things are actually used. I don't think anybody sat down and said, we need to change the language because machine learning wants this. No, it's: we want to make the language as good as we can. And also, we heard that this will make things better, so we'll bring it in as an extension.
[00:58:29.878] Kent Bye: Awesome. And finally, as we start to wrap up, I'd love to hear some of your thoughts on the future of WebGPU. Very recently there was an article in Business Insider declaring the death of the metaverse, and I thought that was quite funny, just in the sense that WebGPU is just launching. To me, it seems like it's going to be a core component of whatever the metaverse starts to unfold into, with these new parallel compute capabilities in an open standard built into the future of these web browsers. Yeah, I'd love to hear what you think the ultimate potential of WebGPU might be and what it might be able to enable.
[00:59:04.933] Brandon Jones: Yes. So I skimmed through the article that you were talking about. And to me, the overall impression that I got was: the metaverse is dead because it's not the buzzword anymore. The buzzword is now AI, and so we're all going to pretend that that's going to be the next big thing until it becomes not the buzzword anymore. And boy, the real world just doesn't work like that. There are so many things which are no longer the buzzword and yet are around in meaningful ways. Nobody's looking at smartphones these days and saying, oh, so-and-so has a smartphone, therefore they're going to take over the world. That was the case for a while, and it's not like smartphones went away — we all use them constantly. They're still a big source of interest, revenue, and advancement in the computing industry, but they're not a buzzword anymore. They're just kind of table stakes. And I see that as being where both the metaverse and AI are headed in the future: the metaverse is no longer a buzzword. And I always hated the term metaverse anyways, because it just means whatever marketers want it to mean. So more specifically, I think immersive computing in general is no longer the buzzword, but it will continue to be present in the tech industry in a meaningful way going forward, especially as players like Apple potentially enter the fray, which hopefully is very soon now. Like, yeah, you probably can't base your entire company identity on the metaverse and still hope to be a multi-billion-dollar Forbes 100 whatever, but that doesn't mean that it's worthless and gone. And someday AI will be in the same place. We had an entire Google I/O
where the word AI was spoken approximately every five seconds, because it's the big new thing, it's where a lot of advancements are happening, and it's the shiny, exciting tech right now. One day it's no longer going to be the buzzword, but I feel very confident that it's not going to go away. It's just going to be integrated in ways that aren't so flashy, that aren't so in your face, and become part of what we expect from any of our computing devices going forward. And there will be a new buzzword at that point, and somebody will declare AI dead, and the rest of us will just go on making use of it in whatever form it exists at that point. So I think that this is just the normal hype cycle running its course. And as far as WebGPU goes, we're having a little bit of that moment ourselves. We had a great big moment up on stage at Google I/O the other day. There's a lot of interest in WebGPU right now; we seem to be hitting just the right moment. I've never had a web standard get so much enthusiasm before. It's a little weird, but it's very welcome. And I anticipate that one day, probably very soon, it's no longer going to be the exciting buzzword, and nobody's really going to care about whether something's running WebGPU or not. It's not going to be something you write articles about, but it is not going to go away, and it will continue to grow and evolve. And I think within a year or two, anything that you're doing online that requires either graphical rendering or compute capabilities is just naturally going to use it, and it will become an invisible but meaningful part of the web as we use it today. There are so many places that I never would have expected where you can go online and see WebGL-based content. When we first started out, we were like, ah, games, it'll be great for games. And now it's used for lots and lots of things.
Like, you can go to different online storefronts, and they'll have little WebGL-based spin-arounds of their products, or just effects in the background of their page that do nice parallax things when you scroll — all these weird little things that are just there. And nobody is saying, look, WebGL here. It's just part of what you expect from the web ecosystem. And WebGPU is going to take the same place and make it even better and even faster and even more efficient, and we will all just be happy that it's there without it calling itself out over and over again.
[01:03:37.780] Kent Bye: And is there anything else that's left unsaid that you'd like to say to the broader immersive community?
[01:03:43.483] Brandon Jones: One thing I will mention, since it's a question I get a lot: yes, WebGPU will work with WebXR at some point. It's really just a matter of getting the right people to work on the integration, and I don't think anybody's allocated the time for it yet. But that will happen at some point, because everybody recognizes that, you know, immersive computing is one of the more demanding graphical applications out there, and so certainly we want to be using the best tool for the job. So yes, that will happen, but I don't have a timeline.
[01:04:15.199] Kent Bye: Awesome. Well, Brandon, thanks so much for giving a lot more context on the history and evolution and some of the broader political dynamics that are happening with all the different browser vendors. As we move forward, I'm looking forward to all the different major companies putting it into their browsers. And you can start to play around with it today in both Chrome and Edge and anything that runs Chromium — I believe even the Oculus browser uses Chromium. So start to play around with it, and look for other integrations from Unity and PlayCanvas and Babylon.js and Three.js; all of them are working on different integrations. And I'm looking forward to seeing other shader sites like Compute.toys, which I noticed are these aggregations of folks tinkering around. It's going to take a lot of human labor to actually develop WGSL translations of all the existing shaders that are out there, and there's going to be a process of learning to use the API to its full extent. So I'm looking forward to having libraries where I can grab stuff that other people have written and start to integrate it and play around with it. But thanks again for taking the time to come on the show and help break it all down.
[01:05:14.868] Brandon Jones: Yeah, no problem. I'll mention one last thing, since you were talking about it with WebGL: we found that ShaderToy was a never-ending fountain of bugs. People would do the craziest stuff on there, and it would always flow down to us — oh, this exposed the weirdest bug in this particular driver, and we've got to go work around it now. And I am absolutely thrilled at the idea that Compute.toys is going to be the next generation of that. We're probably going to get all sorts of awful bugs out of there, and we're going to love every minute of it, because it means that people are pushing this thing to its limit.
[01:05:56.509] Kent Bye: Awesome. So that was Brandon Jones. He's a software engineer at Google, and he's contributed to multiple browser standards that deal with 3D or immersive computing, including WebGL 1 and WebGL 2. And he's been a spec editor for WebXR, as well as a spec editor for WebGPU. So I have a number of different takeaways about this interview. First of all, obviously, as we move forward into the future of the metaverse, there are going to be all sorts of open standards driving whatever happens with the interoperability of these XR devices: WebXR, OpenXR, glTF, WebGPU, and WebAssembly. All these standards working together are going to create the interoperable foundation for the future of virtual worlds and the metaverse. And what Brandon was saying is that, by the very nature of being this compatibility layer for the world's computing devices, the web is not always going to be up to the latest bleeding edge of whatever the native APIs can do with these GPUs. Oftentimes those native APIs are serving proprietary applications, and so you end up having to rewrite all these different things anyway. So if you want to write something once and have it be used across many different devices, then WebGPU is going to be the place to do that. And if we're thinking about the future of the interoperable metaverse, that's going to be a key component as we move forward. It was also really fascinating to dive deep into what's happening with Apple, because Apple has been the one company that has been holding back WebXR. WebXR has been shipping since Chrome 79, which originally shipped back on December 10th, 2019 — so WebXR has been available in Chrome browsers for well over three years now — and Safari still has yet to ship any viable version of anything like WebXR.
And when you don't have all the browsers shipping a feature, then it's as good as not being shipped at all. Granted, you can already use all these features in any of the Chromium-based browsers that ship them, which include everything from the Oculus Browser built into the Meta Quest headsets, to Microsoft Edge, Brave, Opera, and a lot of other browsers based on Chromium. Now, one of the things Brandon was saying is that you'd love to see a much more diverse ecosystem of browsers: you have Firefox from Mozilla, you have WebKit, which is the basis of Safari, and then you have all the different Chromium variations. And when one of the biggest browser vendors, Apple with Safari and WebKit, isn't going to ship something, then you can't really rely on using it. From all indications, Brandon says he's very optimistic that Apple is going to be shipping something for WebGPU. And with WWDC coming up on June 5th, we'll all be watching very closely to see if Apple is finally going to announce whatever their XR device is going to be, and what kind of support they're going to have for WebXR and things like WebGPU. It'd be great to hear some sort of indication or announcement at that same time, so stay tuned on June 5th for more information on that. But you can already start to play around with WebGPU. It does have a new shading language, WGSL (pronounced "wig-sill"), the WebGPU Shading Language. You can look at compute.toys, and there are lots of different samples; I'll include some links in the description where you can go check out some of the examples. Be sure to have at least Chrome version 113, and you'll likely need to be on the PC or Mac version of Chrome rather than the mobile Android version, which I think is going to be shipping sometime in the future here.
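As a quick illustration of what getting started looks like, here is a minimal sketch of WebGPU feature detection and device setup in JavaScript. It uses the standard `navigator.gpu` entry point from the WebGPU spec; the choice to resolve to `null` when WebGPU is unavailable is just a convention for this sketch, not part of the API.

```javascript
// Minimal WebGPU setup sketch: resolves to a GPUDevice, or to null
// when WebGPU is unavailable (e.g. browsers older than Chrome 113,
// or runtimes without WebGPU at all).
async function initWebGPU() {
  // navigator.gpu is the WebGPU entry point; it is simply absent
  // in browsers and runtimes that don't support WebGPU.
  const gpu = globalThis.navigator?.gpu;
  if (!gpu) {
    return null;
  }
  // An adapter represents a physical GPU. It can still come back
  // null, for example on blocklisted hardware.
  const adapter = await gpu.requestAdapter();
  if (!adapter) {
    return null;
  }
  // The device is the logical connection you use to create buffers,
  // shader modules (WGSL), pipelines, and to submit work.
  return adapter.requestDevice();
}
```

In a page you would then typically grab a canvas context with `canvas.getContext('webgpu')` and configure it with the device; outside a WebGPU-capable browser, `initWebGPU()` simply resolves to `null`, so the same code degrades gracefully.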
And check out what's happening with Babylon.js; it sounds like they've done a lot of integration work already. There's also still ongoing work happening with three.js, PlayCanvas, and Unity web exports. And there's a whole 11-minute talk that Brandon was a part of at Google I/O; I'll put a link in the description for that as well, because it's great for getting a bit of an overview of what's happening with WebGPU. So it was really fascinating to hear a little bit more about the evolution of the technology, where you have proprietary implementations on one side, where someone can be completely vertically integrated and just worry about their own hardware, but then you have this cross-compatibility issue: if somebody's trying to write something across many different drivers, they end up having to rewrite the same code. So the need for this kind of higher-level graphics API has been there for a while, and because it's delivered through the web, WebGPU seemed like a great fit for that. Vulkan is doing that in a much more low-level way. I did actually read through Andy McClure's Co-Host post, "I want to talk about WebGPU," which dives deep into the history and gives a lot more context on how the Vulkan API was really not meant for humans. It's literally meant for middleware developers, like Unity or Unreal Engine, to have some way to speak to the GPU close to the machine level, and so it's not necessarily meant for humans. As for the long-standing legal disagreements between the Khronos Group and Apple, I was not actually aware of the context of what Brandon's talking about, but I do know that Apple has not been actively supportive of things like OpenXR, which is, I think, a great shame, because you would like to see some of these hardware devices be treated more like what you see with the PC, which is an interoperable platform. But Apple is completely closed in everything that they do.
And so if they do end up shipping some sort of XR device, you can expect none of your existing peripherals, or anything else coming from outside of the ecosystem, to be compatible with it. In fact, I think I even saw that they're going to be developing yet another proprietary connector to interface with it. So anyway, we'll see what Apple has to announce on the 5th of June to get a little more context about what they're doing. But Brandon is not very optimistic that there's going to be any sort of interface with any of the Khronos standards, whether it's Vulkan (Apple has Metal) or something like OpenXR. Apple's likely going to generate their own interface to sit underneath something like WebXR, which may be part of the reason it's taken them so long: they may be developing their own device-level XR interface layer that goes above and beyond what OpenXR is trying to do. So that's lots of great context, and we'll all be watching on June 5th to see what happens there. And like I said, Ada Rose Cannon is one of the spec editors of WebXR, has been working at Apple, and was the host of one of the recent WebXR discussions. All of that I see as very positive news. So even if they're not supporters of OpenXR, hopefully we'll at least see them implement something like WebXR within Safari soon, both on mobile Safari and their computers, but also on whatever potential XR device they may be shipping, if there's a browser there with some sort of native integration for the web built in. And I think once that launches, we'll see a lot more innovation. And especially once you see broad compatibility for something like WebGPU, there are all sorts of parallel compute applications, as well as 3D rendering applications, that are going to start to be developed.
Like I said, you can already see what's been happening with the OpenGL Shading Language, GLSL, through something like Shadertoy, and compute.toys is the place where you're going to see the equivalent experimentation with WGSL, the WebGPU Shading Language. You can start to see what's possible; there are lots of samples, with more and more appearing. It's only just officially launched in Chrome, so it's very early, and I'm personally very excited. This is going to be a key component of whatever the interoperable metaverse is in the future, because if you're building upon these open standards, this is a key piece for having both 3D graphics and parallel compute. On the compute side, you'll likely see a lot more AI and machine learning applications, which were featured very prominently during the Google I/O conference. But there are certainly lots of 3D graphics, virtual world, and interoperable metaverse implications for WebGPU as well. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.