#350: VR on the Open Web with A-Frame and WebVR

Is VR on the open web going to provide a good enough experience to be a viable distribution platform for certain VR content? That's the big question people have been asking for the past couple of years, and there have been a lot of big steps toward it within the WebVR community. Before GDC this year, Mozilla and Google proposed the 1.0 version of the WebVR specification.

I had a chance to catch up with Josh Carpenter at the VR Hackathon before GDC, and he also had some exciting news about moving frame rendering from the browsers to the Oculus and Vive runtimes. He talks about going from 10 fps to 500 fps with Servo's WebRender, the Washington Post Mars experience using WebVR, A-Frame, and the future of the open web and WebVR.


Here's Patrick Walton talking about Servo's WebRender at a meetup hosted by Mozilla in February:

Here's a demo of Servo's WebRender holding 60 fps compared to other browsers running the same demo scene.

Become a Patron! Support the Voices of VR Podcast on Patreon

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to the Voices of VR podcast. Today, I talked to Josh Carpenter, who is one of the original members of the MozVR team at Mozilla. So I've talked to Josh a number of times over the years, where he's given me kind of a state of the union when it comes to the open web and WebVR. So in the podcast, we start to look at whether the open web is fast enough to be able to drive VR experiences that don't make people sick. So we take a look at some of the latest innovations, such as offloading the frame rendering to the Oculus and Vive runtimes and what kind of performance that gives you, as well as some big examples of WebVR out in the wild, such as the Washington Post's Mars Explorer experience. So that's what we'll be covering on today's podcast. So this podcast is brought to you by the Virtual World Society. And if you haven't had a chance to listen to the interviews that I've done with Tom Furness, they're really quite amazing. I mean, they're some of my favorite interviews that I've done so far. If you go back to episode 245, you'll hear 50 years of VR with Tom Furness, and you'll hear how Tom was actually one of the original pioneers of VR back in the 60s, and that he has gone through a number of different phases of his career, going from working in the military, going into education, and now through the Virtual World Society, he wants to bring forth this vision of creating the Peace Corps of VR, where people take virtual reality out into the world and solve real-world problems. Take a listen to episode 347; it's the latest interview that I've done with Tom, and we cover all of his vision, and it's really quite inspiring, I must say.
So with that, just to set the context a little bit for this interview with Josh, I was at the VR Hackathon that happened just before GDC, at the Microsoft Reactor space, and Josh was actually one of the judges of the competition. He had just finished judging and declaring the winners of the VR Hackathon over the weekend. And so right after that, we had a quick chat to talk about the open web. So with that, let's go ahead and dive right in.

[00:02:20.441] Josh Carpenter: My name is Josh Carpenter. I was the founder of the MozVR team, bringing virtual reality to the open web. And previous to that, I was a Firefox OS design lead, building a web-based operating system.

[00:02:31.086] Kent Bye: Great. So maybe you could give me a little update as to what's happening with web VR now.

[00:02:35.428] Josh Carpenter: Yeah, sure. So it's been two years since myself and Vlad Vukicevic at Mozilla wondered, can we actually combine virtual reality and the open web? And if we can, does anyone want it? Is it going to have the performance that we need to have, or are people going to be sick? And so Vlad and myself, along with Brandon Jones at Google, released builds of Firefox and Chromium that enable a user to plug in an Oculus Rift, open up Firefox or Chrome, and actually be inside a virtual reality website using WebGL. We released that not really knowing what the developer response was going to be. And the response has been really, really awesome. Developers have done some really amazing stuff. The feedback has been unanimously positive. People seem to understand that while it's great we have these app stores and you can install amazing applications built in Unity, it would be really nice to have a low-friction consumption experience. Like, just click on a link and you're in a virtual reality website. And it's nice that if I'm a web developer who doesn't know Unity, I can leverage my skills to actually create a WebVR experience. Or if I'm someone who wants to publish content, I don't want to have to necessarily put it through an app store; I want it to run anywhere, I can just publish it to a server somewhere. All the inherent advantages of the web are still needed in virtual reality. And so the response, frankly, has been more positive than we thought it would be. So we've been kind of riding this process over the last two years of validating that the performance works and that the market actually wants what WebVR can actually bring to the table. And so now what we're looking forward to is, well, what's next? What's going to happen this year? What do we need to actually take WebVR to the next level?

[00:03:56.358] Kent Bye: And so I guess up to this point, a lot of the WebVR has been enabled in the nightly builds and not the official mainline branch of all of these browsers. When can we expect to see some of these features be baked into the main releases of these browsers?

[00:04:10.885] Josh Carpenter: Yeah, my hope is this year. A really big milestone just happened. We've been using a version of the WebVR API that was pretty experimental. It actually predates most of the modern runtimes from Oculus and from Valve. So we were making assumptions about how VR would work. We don't need to make assumptions anymore. We know about things like pose prediction. We know about things like async time warp. We know there are motion controllers. So what we've been able to do over the last couple of months is redesign the API. And now we have a proposal for a WebVR 1.0 API. This is really important because it's actually been ratified by Google, by Microsoft, and by Mozilla, by representatives and engineers from each of those companies. We've just published it as a proposal to the community saying, well, what do you think? This can't be top down. We have to agree as a community that this has what it needs to have. We're going to let that bake a little bit, get responses to it. You can go to webvr.info or iswebvrreadyyet.com to get more information on the API and what's actually happening with it. We really are looking for feedback from the community that this does what they want it to do. One reason I'm optimistic is, you know, when you talk to people about WebVR, again, I thought that people would be skeptical. They're not skeptical of the idea at all. Everyone's read books about the metaverse and looks at the web as actually being maybe part of what the future of the metaverse will actually be. What they're skeptical about is performance. They want to see, you know, can the web actually do 90 hertz? Can the web do sub-20-millisecond latency? You know, can it actually have the performance that we need?
The WebVR API is actually not just an API about talking to hardware; it's actually going to unlock a whole bunch of those performance benefits, because it stops trying to do everything inside the browser and actually hands off a lot of the rendering process to the runtimes from Valve, from Oculus, and from others, meaning we inherit for free a lot of the performance paths that they're actually baking into those runtimes. And frankly, from a browser vendor standpoint, we have to do less work. The API surface reduces drastically: easier to use, easier for us to implement. So what all these things mean is that you've got a new API with better performance, better design, frankly easier for developers to use, and agreed on, in principle, by all the major browser vendors. So my hope is that actually translates into those browser vendors implementing it into their main releases. And if people want to actually help make that happen, what they can do is build more WebVR stuff. There are sites like the Chromium feature tracker where you can actually go and vote on features you want to see, and I think the same exists for Edge. The best thing the community can do, if they think this has value and they're interested in seeing it happen in browsers, is let browser vendors know that. Those voices are heard.
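To make the hand-off Josh describes concrete, here is a rough sketch of what a page-side render loop looked like under the 1.0 proposal. This is an illustrative reconstruction, not code from the spec: `drawScene` is a hypothetical placeholder, and the exact method names (`getVRDisplays`, `requestPresent`, `submitFrame`) should be checked against the proposal at webvr.info.

```javascript
// Sketch of the proposed WebVR 1.0 flow: the page renders both eyes into a
// canvas, and submitFrame() hands the frame to the native Oculus/Valve
// runtime, which takes over distortion, time warp, and scanout.

// Pure helper: split a canvas into side-by-side left/right eye viewports.
function eyeViewport(width, height, eye) {
  const half = Math.floor(width / 2);
  return eye === 'left'
    ? { x: 0, y: 0, width: half, height: height }
    : { x: half, y: 0, width: half, height: height };
}

// Browser-only portion, guarded so the helper above runs anywhere.
if (typeof navigator !== 'undefined' && navigator.getVRDisplays) {
  navigator.getVRDisplays().then(displays => {
    const display = displays[0];
    const canvas = document.querySelector('canvas');
    // requestPresent hands the canvas over to the VR runtime.
    display.requestPresent([{ source: canvas }]).then(() => {
      function onFrame() {
        display.requestAnimationFrame(onFrame);
        const pose = display.getPose(); // pose-predicted by the native runtime
        drawScene(pose, eyeViewport(canvas.width, canvas.height, 'left'));
        drawScene(pose, eyeViewport(canvas.width, canvas.height, 'right'));
        display.submitFrame(pose); // runtime does time warp and display
      }
      display.requestAnimationFrame(onFrame);
    });
  });
}
```

The key design point is the last call: rather than the browser compositing the frame itself, the runtime receives it directly, which is where the "inherit performance paths for free" claim comes from.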

[00:06:26.518] Kent Bye: Yeah, and there were a number of different web-based VR projects here, and I did notice that there were still either low frame rates or latency that wasn't quite at the spec that matches the native binaries. And so do you see that it's a matter of time for the technology stack to get better, to be able to drive these 90-hertz or sub-20-millisecond-latency experiences? Or is this something where there are other optimizations that need to be done in the underlying architecture of how web browsers are able to render these real-time environments?

[00:06:57.993] Josh Carpenter: There's a lot of room for improvement within the existing render engines before we have to throw them out. I know that from talking with the engineers. There are a lot of things we can be doing to make it run faster. This API redesign is one of those things. We don't have to do everything inside the browser the way we're doing it. We can actually throw frame rendering over to the runtimes and have them take over a lot of this stuff. We should get a chat going with Brandon and people like Vlad to actually talk through the nitty-gritty of the browser render engine internals to validate that. I myself am not an engineer, but I work with smart engineers and then translate what they say. And then what's interesting is, OK, well, everything I'm describing right now is WebGL. WebGL is amazing, but WebGL is not HTML and CSS. WebGL is like a black box of imperative code. HTML and CSS are declarative. You can parse them. If you've used applications like Instapaper or Pocket or Flipboard, they take web content, and because web content is based on standards and can be parsed, they can take an image or text from a site and transmogrify it, you know, improve the legibility, create a reader mode, create a whole product around that content. You know, I would hate to lose that as we go into WebVR. If WebVR is only WebGL, I think we are at risk of losing that. I think we have to find ways to actually bring HTML and CSS forward into virtual reality, with all the actual advantages that they bring. They may seem clunky, but maybe our conception of what they actually are is limited. Now, that's a challenge, because we spent the summertime at MozVR last year taking experimental versions of Firefox that had support for CSS VR, using CSS 3D transforms to make virtual reality experiences out of just pure HTML and CSS. It's actually really, really fun.
Like, with a couple lines of HTML I could make a world, like a bunch of divs that made a box, or a 90-foot screen with an iframe in it that was actually running an emulated version of Neuromancer from the 1980s, which is so meta, it's amazing. But the performance wasn't great, and there were a lot of problems with glitches, essentially, at oblique angles. But there are new render engines coming down the pipeline where I've seen demos that... Imagine a scene with like 4,000 animated elements with border radius on them. I mean, we're talking something that's very punishing for render engines to animate traditionally. All HTML and CSS, thousands of divs. 10 FPS in Chrome, 10 FPS in Firefox, 600-plus FPS in these new render engines. If anyone wants to verify what I'm saying, look up Servo, look up WebRender. This is a new render engine from Mozilla Research, built on a new language called Rust. What Rust is optimized for, among other things, is CPU-GPU architectures that are massively parallelized. So it runs really, really well. It is optimized for the direction that CPU and GPU architectures are heading. And WebRender is a four-month project by a couple of guys inside Mozilla to try and reimagine a browser render engine to be as much like a game engine as possible. So I would connect you to the work of Patrick Walton at Mozilla. He's really been the main guy on this, a really brilliant guy. He recently did a talk on this stuff at a Rust meetup at Mozilla; if anyone wants to follow up, just look up Patrick Walton WebRender. The performance will blow your mind. So the implication of this, if we can continue to improve that performance, is that all of a sudden you've got a world in which we can render the DOM, render HTML and CSS, at hundreds and hundreds of FPS, at very high performance.
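The "bunch of divs that made a box" idea can be sketched with CSS 3D transforms: six planes rotated and pushed outward from a common center. This is a hypothetical reconstruction, not MozVR's actual code; the `boxFaceTransforms` helper and all the styling values are made up for illustration.

```javascript
// Sketch of a CSS VR "room": six absolutely positioned divs, each rotated
// to face inward and translated half the box size away, forming the
// inside of a cube when viewed inside a perspective container.

// Pure helper: CSS transform strings for the six faces of a box of size s.
function boxFaceTransforms(s) {
  const half = s / 2;
  return [
    `rotateY(0deg) translateZ(${-half}px)`,   // back wall
    `rotateY(180deg) translateZ(${-half}px)`, // front wall
    `rotateY(90deg) translateZ(${-half}px)`,  // right wall
    `rotateY(-90deg) translateZ(${-half}px)`, // left wall
    `rotateX(90deg) translateZ(${-half}px)`,  // ceiling
    `rotateX(-90deg) translateZ(${-half}px)`, // floor
  ];
}

// Browser-only: build the divs inside a perspective container.
if (typeof document !== 'undefined') {
  const scene = document.createElement('div');
  scene.style.cssText = 'perspective: 400px; transform-style: preserve-3d;';
  for (const transform of boxFaceTransforms(600)) {
    const face = document.createElement('div');
    face.style.cssText =
      'position: absolute; width: 600px; height: 600px; background: #223;';
    face.style.transform = transform;
    scene.appendChild(face);
  }
  document.body.appendChild(scene);
}
```

The oblique-angle glitches Josh mentions show up exactly here: traditional render engines rasterize these transformed planes on the CPU-driven paint path, which is what projects like WebRender move wholesale onto the GPU.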
So perhaps we can actually create WebVR scenes that don't sacrifice all the wonderful things that the DOM gives us, like parsability, or, frankly, like just displaying text. Like, I love WebGL, but displaying text in WebGL is a nightmare. And that's what HTML and CSS are sort of naturally good at. So: a new WebVR API, a win for performance right out of the gate. A lot of room for improvement within Chromium, Gecko, and other render engines. And then I think you're going to see a new generation of render engines come down the pipeline that are the next big leap forward.

[00:10:30.525] Kent Bye: While you were still at Mozilla, you worked on a project called A-Frame. And I understand that with WebVR, there's an interface like Three.js, a JavaScript library for creating scenes that end up getting rendered with WebGL, which you can then see in virtual reality. And so how did A-Frame come about, and how does it relate now to other libraries like Three.js?

[00:10:50.947] Josh Carpenter: Yeah, so if you're listening to this and you use Unity, I wouldn't necessarily pick up A-Frame and think it's going to be an amazing Unity-like tool. Like, Unity is amazing. If you're a 3D graphics person or you're a game designer or anyone who has experience with 3D content creation, Unity is awesome. And I happily use Unity myself on projects. But there are a lot of other people out there, roughly tens of millions of web developers, who've never touched Unity. But man, they know Node.js, they love GitHub, they love installing npm modules, they know HTML, CSS, they know React. What we wanted to do was imagine a world in which those sorts of developers could create VR experiences. What sort of creativity would they bring to the table from a different perspective? You know, because the web is so open source, what about a world where someone in Italy could create a physics component and then just share it with another developer, who's maybe 14 years old in North America, you know, and she could just plug it into her scene and be using it immediately, using just simple JavaScript? These sorts of experiences that are fairly easy to create in Unity, and the sorts of sharing of components that are fairly easy in the Unity world, we wanted to bring into the web. And that's what A-Frame is. The whole proposition of A-Frame is: build WebVR experiences that run on desktop, mobile, or virtual reality without having to know JavaScript. You just write a couple lines of HTML saying I want a sphere, I want a sky, I want a cube, I want a model. The model is at this URL, it's an OBJ, the sky is this particular texture map. Hit save and you're good to go. You have a scene that just works out of the box on Cardboard, on your mobile device, on an iOS device, on desktop, et cetera. It's incredibly easy to use, and we've done demos where you can actually teach someone to use it in five minutes. That learning curve is amazing, and people begin to create very basic scenes.
Not scenes that would compete with what's made in Unity, but it's just so fun to actually make a scene and share it with your friend. And because of the web, anyone, anywhere can actually view that scene. You're not limited to having to own a headset. You're not limited to having to be on the right platform. And then, under the hood of A-Frame, if you don't want to be limited to a cube, a sphere, et cetera, there's actually an entity-component model. Anyone familiar with Unity knows that one of the real strengths of Unity is permissionless innovation. You're not limited to what Unity has out of the box. You can go to the asset store and grab something; that's built on an entity-component model. So we actually built A-Frame around an entity-component model itself. So you can actually say, well, I don't want to use a cube or a sphere. I want to make an entity. I want to add these components. And this component is a physics engine made by someone in Italy who just shared it on an open source server somewhere. So since we launched A-Frame, we've seen tons and tons and tons of custom components created by the community and shared with the community. So we're really hoping that A-Frame triggers a cycle of permissionless innovation between open source developers. It so closely embodies the spirit of what we love about the web, which we think isn't quite there in native virtual reality yet. This is where we kind of hope the web can be a nice yin to the existing yang of native applications and tools like Unity.
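As a sketch of that entity-component model: `AFRAME.registerComponent` is A-Frame's real extension point, but the `spin` component below (and its `advanceRotation` helper) is a made-up illustration, not one of the community components Josh mentions.

```javascript
// Sketch of the A-Frame model Josh describes. The declarative scene is
// plain HTML, e.g.:
//
//   <a-scene>
//     <a-sky color="#88ccff"></a-sky>
//     <a-sphere position="0 1.5 -3" radius="1"></a-sphere>
//     <a-box position="2 0.5 -3" spin="degPerSec: 45"></a-box>
//   </a-scene>
//
// and the entity-component system lets anyone extend it in JavaScript.

// Pure helper: advance a rotation by degPerSec over dtMs milliseconds.
function advanceRotation(deg, degPerSec, dtMs) {
  return (deg + degPerSec * (dtMs / 1000)) % 360;
}

// Browser-only: register a hypothetical `spin` component with A-Frame,
// which then attaches to any entity via the spin="..." HTML attribute.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('spin', {
    schema: { degPerSec: { type: 'number', default: 90 } },
    tick: function (time, dtMs) {
      const rot = this.el.getAttribute('rotation');
      rot.y = advanceRotation(rot.y, this.data.degPerSec, dtMs);
      this.el.setAttribute('rotation', rot);
    },
  });
}
```

The sharing story Josh describes falls out of this shape: a component is just a script tag away, so the hypothetical physics component from Italy plugs into someone else's scene as one more HTML attribute.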

[00:13:34.996] Kent Bye: One of the things that has made the web so amazing is being able to view source, look at the code there, copy it, and apply different things to your own website. I guess one of the tricky things is that there's certain copyright around the images and the content that's there. And so when you start to look at 3D models or 3D content that may be rendered out into code, and perhaps you can look at it, what are some of the intellectual property considerations when you have immersive 3D content? Are you able to just kind of grab it with view source, copy it, and put it on your own website?

[00:14:08.762] Josh Carpenter: Yeah, that's a great question. And to be fair, I'm not a lawyer, so I shouldn't even attempt to answer this question. But, you know, there are Creative Commons licenses and licenses like that that will actually allow people to use assets uploaded to the web under fair use policies. We have also seen this interesting prior art in applications like Flipboard and Instapaper, whereby someone can take web content and then alter the formatting and present it in a different fashion within a different application. So I think nothing about virtual reality, in my mind, should actually change the way the web currently works. It would be a shame if it did, because that would be sacrificing one of the web's biggest advantages, which is sharing content in a very open way.

[00:14:42.592] Kent Bye: And you've since left Mozilla, so maybe you could talk a bit about what you're doing now and what's happening with the Mozilla team.

[00:14:48.337] Josh Carpenter: Yeah, sure. So it's pretty exciting. It was two people for a long time, when there were no headsets on the market. I think people kind of looked at us in Mozilla as kind of like weirdos working on a kind of weirdo sci-fi kind of thing. But headsets are on the market now. And just the other day, we saw the Washington Post use A-Frame to create a Mars Explorer experience, which anyone can experience on a WebGL-enabled browser, which means billions of people. So you're really seeing some traction behind WebVR right now. And so as a result, Mozilla's been adding more resources to the WebVR projects. We're hopeful that the Chrome people will actually be doing the same thing in the next year. So it feels like the starting gun of virtual reality on the open web has been fired in the last month, with that API and with heads turning this year. I'm very, very excited. Which means that for people like myself who really believe in WebVR, you know, it's a good time to go and start to build empires on top of it. It's a really good time to start looking at it if you're a startup, integrating it into a stack that maybe also includes Unity. And I'm talking to a lot of different companies who want to do that right now. Within Mozilla, the MozVR team is in extremely good hands. The guys who actually do all the work are continuing to actually run it. So guys like Chris Van Wiemeersch, Casey Yee, Diego Marcos, Kevin Ngo, Kip Gilbert. These are the programmers who developed A-Frame, the programmers who developed the API. They're getting more resources, more support. There's some really, really cool stuff coming from that team. In fact, a new version of A-Frame might be out by the time this podcast actually hits the air. And it'll actually be at GDC next week. So if you actually want to see WebVR in person, drop by Mozilla's booth. I think it's going to be right next to the Unity booth.

[00:16:14.298] Kent Bye: Great. And so what type of experiences do you want to really experience using the open web?

[00:16:19.282] Josh Carpenter: Yeah, that's a great question. I'm really excited about multi-user experiences. So we just judged a competition, a hackathon, the VR Hackathon, at the Microsoft Reactor space in SF. And the winner was actually a project whereby anyone could hit up a URL on any device and actually step into a virtual reality experience, or interact with that virtual reality experience. So on a mobile phone, I could tap the screen to actually drop balls on participants in the world. And if I had my headset on, I could actually see those balls dropping all around me. Just think about all the interesting variations on that essential core kind of interaction that could be built. I think that's fascinating. And that's one of the inherent advantages the web has: we all have a web-enabled device in our pocket. So I'm excited to see that. And then frankly, I'm excited to see what the open source community does. I mean, if there's one thing about the web, it's that because it's so low friction, because the tools are so low cost, people tend to build really weird, esoteric things that you probably wouldn't install as an application. So there's a whole realm of creativity, I think, that we're going to see come out of that. I'm really excited to see that. It's really, I think, picking up steam in a big way this year.

[00:17:13.844] Kent Bye: The winner of the VR Hackathon had 25 players and 6 viewers, so there were 31 people within a VR experience at the same time, you know, being able to use the open web and Node.js to be able to drive those types of experiences. And so, using the open web stack, what advantage does that give you versus doing something like Unity? From a technological standpoint, what would that enable you to do that you might not be able to do otherwise?

[00:17:39.673] Josh Carpenter: Yeah, sure. So I think that with the open web, these sorts of multiplayer experiences are obviously just a lot easier to create, I think. You know, you get a lot of these tools for free. And a lot of them, like tools like Meteor.js, make this stuff very commoditized to actually create. The other aspect I'm really excited by is that not everyone has to be viewing the virtual reality version of that experience. So I can be viewing a desktop text version of that experience. You can be viewing a version of that experience that's optimized for your mobile device. And someone else can be viewing the virtual reality version of that experience. The web is inherently responsive. We're actually very good at actually . . . And everyone already has an application in their pocket that is already capable of viewing that experience, with no application install necessary. So I think you're going to see experiences and business models, frankly, built on top of low-friction consumption with as many eyeballs as possible. Like, if you care about anyone being able to click on a Twitter link and suddenly be inside your experience, the web stack's a really nice stack to actually use. So reach is going to be a massive, massive driver for this.
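The shared "drop balls on participants" experience implies a server that fans each client's events out to everyone else, whatever device they are viewing on. The sketch below is purely illustrative; `fanOut` and the event shape are invented names, and the relay could sit behind any WebSocket library.

```javascript
// Sketch of the relay at the heart of a shared WebVR scene: phones send
// "drop ball" events, and the server forwards each event to every other
// connected participant, whether they are on desktop, mobile, or in VR.

// Pure relay: given connected client ids and an incoming event, return
// the list of messages to send (everyone except the sender).
function fanOut(clientIds, event) {
  return clientIds
    .filter(id => id !== event.from)
    .map(id => ({ to: id, type: event.type, x: event.x, z: event.z }));
}

// Usage with any WebSocket server (e.g. the `ws` package) might look like:
//   for (const msg of fanOut(ids, JSON.parse(raw))) {
//     sockets[msg.to].send(JSON.stringify(msg));
//   }
```

Keeping the relay logic pure like this is one reason the web stack feels "commoditized" for multiplayer: the transport (WebSockets, Meteor's DDP, whatever) is interchangeable plumbing around a few lines of state fan-out.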

[00:18:40.957] Kent Bye: Yeah, I think that was the thing that was so striking to me is that they just gave a URL and then everybody could just type it in and then share a VR experience like right away.

[00:18:49.339] Josh Carpenter: Exactly. Yes, it's really exciting.

[00:18:50.880] Kent Bye: I mean, I'm really just waiting for some really interesting business models to be born on top of this. And finally, what do you see as kind of the ultimate potential of virtual reality, and what that might be able to enable?

[00:19:02.324] Josh Carpenter: Yeah, I've heard you ask this before. Well, you know, it's pretty interesting. The notion that it can subsume every other form of media is obviously pretty mind-melting, and I'm curious to see how it'll dovetail with other forms of emerging technology, like artificial intelligence, to reinvent the tasks that we do on a day-to-day basis. Let's say even just productivity tasks: plus AR, plus VR, plus artificial intelligence will yield some really interesting results. Once we can begin to present alternate realities to you, and once we can actually begin to modulate your emotional state, or pay attention to your emotional state in sort of a feedback loop, that's very interesting. Very interesting. Like, one of my pet projects is kind of imagining operating systems. Like, I built one as a result of that. I built stadiums and home automation systems before that, to try to kind of influence your mental state based on, like, the home being aware of you. I'm really interested in where this is going. You know, like, when I sit down at my computer to try and do work, my computer doesn't know I'm trying to do work, even though my foreground application is, let's say, like, nvALT or a notes application or Markdown. What if the operating system did know, and could actually block out reality or occlude certain portions of reality, like noise-canceling headphones for reality? That's very, very, very, very interesting territory. I feel instinctively, like, it's time for us to be building operating systems around those sorts of fundamental building blocks. So that's what I'm excited about.

[00:20:20.858] Kent Bye: What would biofeedback add to an operating system, and what would you imagine that looking like?

[00:20:26.422] Josh Carpenter: It knows what I'm trying to do, and it knows how focused I am, for example. My foreground application is a writing application, but my mental state is not focused, and it's really time for me to do some work. So let's dial down all the distractions. Let's say, turn off notifications. Let's actually occlude portions of my view. Let's say, imagine you blur out everything around me that is not actually the work at hand. Help me get my work done. Or let's say I'm running, and I'm five minutes from the goal line, but I can't quite make it. I'm tapering off. It knows I'm tapering off. It knows I need an injection of music. And maybe the saturation levels are brought up on reality. Or maybe everything that is not the finish line is just kind of fuzzed out and blurred out. Or, frankly, I just want to escape after a long day. Help me escape. Or help me feel closeness with someone. Yeah, it's really interesting. You have to kind of imagine a fusion of ones and zeros with meditation, with anything that actually can be mind-altering. That is a pretty exciting frontier. I mean, they say that software eats the world. The notion that software could actually eat pharmaceuticals is kind of mind-blowing. Yeah, so it's a very interesting world ahead of us, I think.

[00:21:25.309] Kent Bye: Anything else that's left unsaid you'd like to say?

[00:21:27.667] Josh Carpenter: I'm looking forward to GDC and trying as many things as possible. I have to say, if anyone's thinking of taking a sabbatical right now, take a sabbatical right now. It's a really good time to take a sabbatical in virtual reality, and then just sample as many things as possible. There is so much to learn. And I was joking with you before that, listen, your podcast has been one of the things I've enjoyed most about taking a bit of a sabbatical, as I actually have time to just consume everything. And the Vision AR videos from the last couple weeks, like, there's so much stuff to learn right now. It's almost overwhelming. But it's a good firehose.

[00:21:53.264] Kent Bye: Awesome. Well, thank you so much. [Josh Carpenter: Yeah. Thanks, Kent.] So that was Josh Carpenter, formerly of Mozilla. And now he's off on his own to the next great adventure, freelancing, likely doing some sort of bringing VR to the world through the open web. So just a couple of quick takeaways. With WebVR, it's something that you have to kind of be patient with, because for a long time it hasn't been performant enough. And I think it's one of those things where the technology stack is going to continue to improve, and it's going to continue to get there. You kind of have to look at the runtimes from both Oculus and Vive as being the real starting point for other people being able to build on top of those APIs. You can imagine that if WebVR had set forth an API during all the different changes that have been happening with the Oculus runtimes and the Vive runtimes over the last year, then it would have been frustrating not only for the people trying to build WebVR but for everybody involved in terms of trying to set the standard, because it's really just now starting to settle in. Now, I also had a chance to talk to Brandon Jones about some of the other aspects of the open web and WebVR, and I'll be putting that podcast out later as well. But in the long run, I'm definitely optimistic about the potential of the open web. You know, it was really quite amazing to be at this hackathon and to have them send out a URL and, within moments, have over 20 people within a shared virtual reality experience. And so I think the web is going to be a huge distribution platform. It's not going to be this walled garden. It's going to be a little less curated than the Oculus Store, or even getting titles up onto Steam, which still has to go through a peer review and vetting process through the community.
But for the open web, you really start to get into these long-tail experiences of the web that are a little bit more ephemeral, but also a little bit more arty, and also doing something in a format that may still work in five to ten years. At this point, if you wanted to create a VR experience and ensure that it was going to absolutely work in five to ten years, I think you'd be pretty hard-pressed to rely on some of the current technologies, just because they're changing so quickly. And so the open web starts to lead toward creating these VR experiences in a way that uses technologies that a lot of web developers are very familiar with. So I'm optimistic about what is going to come of the open web through the efforts of both Google and Mozilla. And, you know, who knows? Maybe we'll hear a little bit more of Google's plans at Google I/O coming up here in a couple weeks. So with that, I just wanted to send another shout-out to my Patreon. If you do want to help this program continue, then please do consider becoming a contributor at patreon.com/voicesofvr.
