#472: Google’s Josh Carpenter on Bringing WebVR to Daydream

Google announced at the W3C WebVR workshop in mid-October that they would be shipping a WebVR-enabled Chromium browser in Q1 of 2017. I had a chance to catch up with Josh Carpenter last week to talk about some of the work that Google is doing to enable innovation on the open web, his W3C talk on HTML, CSS & VR, and some of Google’s early experiments with hybrid apps that combine OpenGL with web technologies.


Josh talks about how WebVR draws inspiration from the Extensible Web Manifesto by providing low-level APIs that establish a common baseline of solid experiences built upon web technologies.
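As an illustrative sketch (not from the episode), the low-level entry point of the WebVR 1.1 draft API that this baseline was built around looked roughly like the call below; the `detectWebVR` helper and the mock objects are my own illustration, not part of the spec:

```javascript
// A minimal sketch of the low-level WebVR 1.1 entry point that the
// Extensible Web approach exposed (a draft at the time, since
// superseded by WebXR).
function detectWebVR(nav) {
  // Feature-detect rather than sniff the browser: the low-level API
  // is either there or it is not.
  return !!nav && typeof nav.getVRDisplays === "function";
}

// In a real browser this would be called as detectWebVR(navigator),
// followed by something like:
//   navigator.getVRDisplays().then(displays => {
//     if (displays.length) { /* displays[0].requestPresent(...) */ }
//   });
// Here we pass mock objects so the sketch is self-contained.
console.log(detectWebVR({ getVRDisplays: () => Promise.resolve([]) })); // true
console.log(detectWebVR({}));                                           // false
```

The point of such an unprescriptive API is that everything above it (scene graphs, declarative markup, frameworks) can be experimented with in userland before anything is standardized.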

At GDC in March, the WebVR demos on the expo floor had trouble hitting 90 fps, but since then developers have started to meet that minimum performance baseline. Achieving this milestone helped show other VR companies that the web could actually be a viable distribution platform for VR.

But Josh compares talking about WebVR to a VR developer right now to what it must have felt like to be Tim Berners-Lee talking about the potential of the open web to a CD-ROM developer in the early 90s. There will continue to be premier experiences and innovations happening within native VR applications, but there will likely be unique affordances and conveniences that the web can provide to an immersive experience that go beyond what a native app can do.

Josh gives Netflix as an example of the power of the open web. If we were to just look at graphical fidelity as the ultimate measure of performance and value, then we would all be watching movies on Blu-ray discs rather than on Netflix. But there’s lower friction and instant gratification with Netflix, even though the graphical fidelity is not nearly as good. This is one example of how Josh thinks about the potential of an interconnected Metaverse in comparison to a closed, walled-garden app ecosystem that, by measures of raw fidelity alone, provides a superior experience.

Josh appreciates the power of strong vertical integration and proprietary solutions, but also believes that a common horizontal baseline of WebVR could enable the same type of rapid innovation and emergent creativity that the open web has enabled.

He also says that Google’s WebVR browser is going to be based upon the open source Chromium browser, and that Oculus’ WebVR browser, named “Carmel,” is also based upon Chromium. He notes that desktop apps like Slack are built on open web standards and bundled with Chromium via Electron, and that he’s looking forward to seeing what type of innovation comes from how developers imagine what a browser could do in a VR experience. One example is an anthropomorphized NPC that is powered by the Chromium browser.

Josh sees 2017 as a year for exploration and seeing what developers do with the draft specifications of the WebVR standard. Right now Google’s team is dealing with how to view web content within a VR context. Josh says that Apple came up with the pinch-to-zoom mechanic that allowed desktop-optimized layouts to be viewed in a mobile browser before responsive design was invented, and that Google is in the process of experimenting with optimizing 2D content for a 3D context when the pixel density isn’t high enough to do a direct translation. Google has also been experimenting with combining HTML and CSS with OpenGL content in order to do rapid prototyping of user experiences using standardized web development technology stacks.

Josh also shared with me that the Voices of VR podcast has been an important part of the evolution of WebVR since the beginning of the consumer VR gatherings, starting with the Silicon Valley Virtual Reality Conference in May of 2014. He said that the previous Voices of VR episodes on WebVR have been an important part of both getting the word out and helping to build internal buy-in at different key moments of WebVR’s history.

So here’s a list of my previous interviews about WebVR that go all the way back to episode #13. It’s pretty amazing to hear the evolution of where it started and to see where it’s at today with every single major VR company and browser vendor participating in the recent WebVR workshop.

To get more involved in WebVR, be sure to go to WebVR.info and check out some of the additional links in the previous Voices of VR episode about WebVR.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. So the metaverse is something that science fiction writers have been writing about for 30 years. And I think VR as a medium, we're still really exploring what's really even possible. So I think it's really difficult to really fully imagine the full potential of what the metaverse is going to become. But I think the foundations of the Metaverse are being laid down right now, just from the different VR experiences that are being created in native apps with Unity and Unreal Engine, and released in app stores. But there's a different vision of this interconnected world that is going to be built upon the open web, and to be able to use what's already out there with the Internet and the World Wide Web to be able to power a 3D version of that, which could arguably be called the Metaverse. So we're gonna be talking to Josh Carpenter today, who has been working on this issue of bringing VR to the web for a long time. I've done a couple of other interviews with him when he was still working at Mozilla, and he's since been working at Google on the Daydream team, trying to bring WebVR and all these open standards into the Chromium browser that is going to be released early next year. So Josh actually gave a lightning talk at this W3C gathering talking about his vision of how he sees CSS and HTML and JavaScript being integrated into this 3D web. At the moment, it's a lot of a black box of WebGL compiled down binary code that is either exported from Unity or generated through JavaScript. And so that's one version of what is going to kind of bootstrap this open web. But Josh talks about some of the early experiments that Google has been doing to be able to create this hybrid combination of OpenGL with CSS and JavaScript, so you're able to add all sorts of different user experience layers just through this web standards technology. 
So there's a lot of excitement and momentum of where this is going, but it's still very early days. And so we'll be talking to Josh about where things are at and where he sees things are going and how he sees the strengths of the open web are going to really be tied into virtual reality here and moving forward. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. I personally am just completely fascinated with the potential of VR and how it's going to change our society. And so for the last couple of years, I've been traveling around the country, going to all the major VR conferences to be able to see what the latest trends are, to be able to have all those experiences, and to share them back with you, my audience, to be able to both get inspired for what you want to create, but also just to hear what else is happening out there. This field is moving so quickly that it's difficult for any one person to keep up with it. And so I just try my best to go to these different conferences and bring you the latest news. And if you appreciate that as a service to you and the wider VR community, then become a donor to my Patreon campaign at patreon.com slash Voices of VR. So this interview with Josh happened in a cafe in San Francisco on Saturday, October 29, 2016. So with that, let's go ahead and dive right in.

[00:03:25.354] Josh Carpenter: My name's Josh Carpenter. I work on the WebVR team at Google on the Daydream team.

[00:03:29.937] Kent Bye: Great. So maybe you could sort of catch me up a little bit. There was a big, huge gathering that just happened with all the major players of WebVR that happened about a week ago or so. Maybe you could tell me about what led up to that and what came out of that.

[00:03:43.230] Josh Carpenter: Yeah, it was pretty cool. So two weeks ago, the W3C, which is one of the standards bodies that determines the future of the web platform, had a WebVR workshop. And talking with the organizers afterwards, it was like, we didn't really know who was going to attend. We knew Mozilla would be there. We knew Google would be there, because they both announced that they were working on WebVR. But the roster was like every company in technology, even Apple and Sony, Google, Microsoft, Oculus. I've been doing WebVR for two years now, and it was an esoteric thing to be working on at the beginning. To see it actually breaking through into, quote-unquote, the mainstream of computing is pretty wild. It feels like we've crossed a Rubicon, and that every major company in VR is now getting behind this, is now in some way embracing it, which is amazing.

[00:04:31.295] Kent Bye: Yeah, to me, I definitely saw that as an inflection point as well, just from kind of tracking it from the very beginning with talking to Vladimir Vukicevic and Tony Parisi back at Silicon Valley Virtual Reality Conference 2014. You know, even at GDC there had yet to be a demo shown on the floor that was running at 90 frames per second. So to have that amount of recent breakthrough, but also get that amount of momentum and everybody there together to talk about it, to me it was like this huge turning point for the future of what I would imagine to be the metaverse built upon open web technologies.

[00:05:06.005] Josh Carpenter: Yeah, exactly. And so we made an announcement, too. We've been, we being the Daydream and the Chrome teams inside of Google, for the last, well, I've been there five and a half months now, we've been working on Chrome VR. So the notion is, the beauty of the web is you just lose yourself for hours going from hyperlink to hyperlink to hyperlink, surfing without any friction from site to site to site, no one telling you to install something or you can't go to a certain place. I want to be able to put on a headset and do the same thing and surf from not just site to site to site, but world to world to world. So we've been working on that. It's Chrome VR. It's going to be a new browser UX experience built onto Chrome for Android. So the experience will be, once released, you get out your Android phone with Chrome to start. We want to be everywhere eventually. But you get your phone out. You find a cool site. It can be 2D. It can be video. It can be WebVR. It doesn't matter. You just drop it into your Daydream headset, and you put on the headset, and you're looking at that website, that content in VR. Whether it's 2D, like I said, whether it's video, kind of a home theater mode for video, or whether it's a WebVR experience, which is full 360, high performance, and it's an immersive experience. And then you click on hyperlinks, and you just surf from world to world to world. And you've got an interaction model where we think of ourselves as doing the Mobile Safari of the VR web, where we have to have both new enablers for creating new immersive experiences, but also backwards compatibility modes. Remember when Steve Jobs got up on stage and showed pinch-to-zoom for the first time? And you're like, oh, of course, that's how they're going to shrink the New York Times desktop site down into the 3.5-inch screen of the original iPhone. 
We've got to do the same thing, where small text that's hard to read on a low-resolution current generation display has got to be made easy to read. So we're doing all sorts of really fun design experimentation to figure out, well, how do we make it easy for you to zoom into a piece of content, to target a piece of content as effortlessly as we do on our mobile phones? So we're really excited by this announcement.

[00:06:50.949] Kent Bye: Yeah, to me, it's exciting because I think up to this point, we've sort of seen this future of the metaverse potentially being created by proprietary tools like Unity or these kind of fragmented app ecosystems where you have to jump in and out of experiences. You know, I think, talking to Neil Trevett, he said back at GDC in 2015, like, there's always this tension between open and closed. And so, I sort of see this playing out with the future of the metaverse. And to me, what makes it so exciting is using the web as an infrastructure that is already there. Like Vladimir Vukicevic told me, we already have the metaverse. It just doesn't have a 3D interface yet. It's called the web and the internet. And talking to Philip Rosedale, one of the things that he said was that in the early days of the World Wide Web, there were AOL and CompuServe that were trying to go down this closed, walled-garden ecosystem path. They were trying to really give a very highly authored and polished user experience. But yet, there wasn't a lot of interconnectivity between the different sites. And so, Philip Rosedale was like, you know, you've got to look at Metcalfe's Law, which says that the value of the network increases with the square of the number of connected users. So, in other words, given the hypertext links, the more connections there actually are between these different websites, the more valuable the network is in the long run in terms of being able to create this environment where information can kind of be connected and discoverable. So it sounds like that's kind of the direction that this Chrome VR and the WebVR infrastructure of building the Metaverse might be headed towards.
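The Metcalfe's Law intuition mentioned here can be made concrete in a few lines. This is only an illustrative calculation; the `pairwiseLinks` helper is my own, not something from the episode:

```javascript
// Metcalfe's Law: the value of a network grows roughly with the square
// of the number of connected users, because the number of possible
// pairwise links between n users is n * (n - 1) / 2.
function pairwiseLinks(n) {
  return (n * (n - 1)) / 2;
}

// Doubling the users roughly quadruples the possible connections,
// which is the intuition behind favoring an interlinked open web
// over isolated walled gardens.
console.log(pairwiseLinks(10)); // 45
console.log(pairwiseLinks(20)); // 190
```

In other words, ten interlinked sites support 45 possible connections, while twenty support 190, which is why hyperlink density, not just per-site polish, drives the value of the network.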

[00:08:18.596] Josh Carpenter: Yeah, the way I think of it is I like app stores. I own Apple gear. I love really high-performance native applications. I'm glad developers can make money selling through app stores. I think they're a good thing in general. I also like a yin to that yang, though, which is a totally wild, totally open, decentralized space where no one can tell you you can't publish something. The tools are mostly open source. I think that's a really wonderful vehicle for emergent creativity, low friction, decentralization, open source. I think it's a really good thing. So I want there to be both. So in a year, we will have multiple browsers out there, multiple VR browsers, which is mind-blowingly exciting. You have Carmel from Oculus, you'll have Chrome VR from us, and hopefully there'll be additional ones as well. Gear VR is already out there. You're going to be able to sit down at your headset, no matter what platform you're on, and either install an application, play with it, have a great time, or fire up a browser and surf from world to world to world. And so I think you're going to have the yin and the yang. And then I think where it gets really interesting, and this is sort of inside baseball, but those used to be two extremes. Like there was the web, where web equaled browser, and then apps equaled apps, you know? And never the twain shall meet. That hasn't been true for many years. Like on your mobile phone today, most of your mobile applications integrate web views. You may not even know it's the web. There's no URL bar necessarily, but it's rendering at high speed. It's presenting all your content to you. It's got backwards compatibility with the web. And so you're in Facebook. You click on a link, and it opens up in a view, in a web view. So the web render engines have been so commoditized that they're now being embedded effortlessly into native applications. In Unity, you've got Coherent UI, which does this. 
So if you want to make a game UI, it used to be you did it in Flash. Now you can take Coherent UI, and you can do it on top of Chromium. Because the web is really good at 2D. It's actually pretty hard to do a 2D layout engine. So let's use it for what it's good at. So I think what you're going to see is there will be the extreme of a pure native app. There'll be the extreme of a pure browser. But in the middle, there's going to be some really interesting experimentation, where you'll have hybrid applications that are hybrid web, hybrid native. And that's where some really cool stuff's going to happen.

[00:10:08.517] Kent Bye: Yeah, to me, the thing that I think about as someone who wants to create more VR experiences and actually get into the technology, it's a little bit like choosing the horse that you're going to ride on. Is it going to be Unity, if you want something that's easy to use with a large asset store and a lot of information online for how to do stuff? Or is it going to be Unreal Engine, which has a visual scripting language and high visual fidelity? Right now, those are the two major ways of producing native apps, and I think most of the performant applications, over 95% of them, are based upon one of those two technologies. There's some open source alternatives or handcrafted stuff that's out there as well. But there's also other alternatives in terms of being able to write something, export it to WebGL, take that WebGL content, and be able to run it as a native app so that you can have this true cross-compatibility. I know that there's different initiatives and efforts for the different engine companies to eventually export WebGL-enabled outputs, so you maybe create a scene within Unity or Unreal, and I think eventually you'll be able to output it and put it onto the web. I just remember in the early days of the web, there were these Adobe products to generate HTML, and it was just always crufty, just not beautiful and elegant. It was messy, and I kind of expect something potentially similar: these kinds of proprietary, closed solutions exporting web-native code, and maybe it's just going to be compiled to binary and WebGL, and you're not going to be able to actually edit it, and you'll have to go back into the system, and no one else is going to be able to edit it as well. 
So I kind of see these different tools are going to eventually probably support the ability to publish to the web, but also be able to take that WebGL content, wrap it within some sort of wrapper to be able to then run it as a native application, and deliver it through either Oculus Home or SteamVR.

[00:11:55.588] Josh Carpenter: Yeah, one of the macro trends in the web in the last, I guess, seven, eight years, before I joined Mozilla, has been this notion of what's called the extensible web, which is to say, you know, it'd be pretty neat if tomorrow you and I have an idea and we can just hack it up and we don't have to wait for a standard, you know? And it's actually good for innovation, because before we go through the long standardization process, which is a good process but is inherently slow, we've seen five years of experimentation on the open web. So the extensible web philosophy, the manifesto, in fact, it's literally a manifesto, says empower developers to create solutions using very unprescriptive, low-level APIs, then learn from experimentation in the open, and then pave the cow paths once we know what people should do. So this is totally different from the web in the 90s. This did not exist previously. So that's really exciting. Now, at the same time, what I'm describing is total freedom. It's the freedom for any developer to make a website that is completely different from the website that preceded it. And I think that's wonderful. But I am also a strong believer in the notion that one of the reasons the web took off in the first place was that if I taught you how to click on a blue link, you knew how to use 90% of the web, 99% of the web. And as a developer, we didn't have to create our own scroll bars. Can you imagine if every single website that we've gone to had a different user experience you had to learn, had completely different performance characteristics, and took deep engineering prowess to create because you didn't get anything for free? I don't think the web would have scaled. So in my opinion, the formula for the web is a really great, what I'm thinking of as a guaranteed minimum experience. If you do nothing, you get a really well-performing web application, you get a really good user experience. 
And then on top of that, there's total freedom. So do you want to go and do something crazy and really low level? Go at it. Do you want to start in Unity and export your application to run on the web stack through WebAssembly at near-native performance? That's awesome. That's fantastic. You can do that. But if you don't do any of that, and you're a tinkerer who wants to put something up in five minutes, you can do that too. So this is something I really think is strategically important for the web platform, and it's also really going to be empowering for developers. What I'm pushing, and what you're going to hear more from me and my team in the next year, is doing this with HTML and CSS. So right now, if you want to create a VR website today, it's all WebGL. So it's imperative code. Again, very flexible, but inherently a black box from the browser's point of view. We don't really know what's going on inside there. And also, it's kind of enough rope to hang yourself with as a developer. The browser can't really help you in terms of performance other than making WebGL fast. What I want to do is be able to give developers HTML and CSS as tools that you can use to create 360 experiences. So for example, you put on your Daydream headset, you look at a website. It's a 2D website. It's floating in front of you as a quad. You can probably imagine what this is going to look like, right? It should be as easy as one line of CSS to put a 360 stereo image in the background. Super easy. So now your car browsing website can actually put you in the interior of the car. Or your Airbnb can actually put you inside their property. But then two, if we're doing that, and the window is still totally opaque, that's kind of lame. You can't actually see the background. So the window itself has to become transparent. Now it's getting interesting. Now you can actually style the contents of that window to match seamlessly with the background. 
Maybe there's a menu, for example, so you browse from site to site to site. Again, all HTML and CSS. The user interaction model here is being provided by the browser for free. It's probably a laser pointer model, probably the lowest common denominator. You don't have to reimplement it yourself. Then it gets really interesting. We have CSS 3D transforms right now, but they're just doing visual transforms in a flat plane. What if they could pop off the surface and be any size? Maybe you can make a skybox of divs that's like 10,000 feet wide. It gets really interesting if we allow developers to then take HTML and CSS and pop them off the surface. It sounds weird to be using HTML and CSS as a design tool for this, but we actually built it at Mozilla a year and a half ago. We built an HTML- and CSS-based web browser, no WebGL. And I was creating user experiences and sites that were built entirely out of DOM elements, HTML and CSS. So I would sit down in one evening, I'd grab a Vimeo video, put it on like a 10-foot iframe floating in front of me with the metadata at the bottom. And it was unbelievably easy, super, super fun. And I had access to all those awesome skills and all those awesome developer tools and all that infrastructure that I have as a web developer. I could just dragoon them in minutes to create content. I think that this is how the web is going to scale to billions of experiences. A desktop- and mobile-scale web. It will not be built just on WebGL. It's going to take a hybrid of the tools I'm talking about and WebGL. I think that's essential. So we're going to be working really, really hard on this.
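The "one line of CSS" idea Josh describes here was speculative at the time. A sketch of what such a declaration might look like, with an entirely invented `background-projection` property and a hypothetical image file, could be:

```css
/* HYPOTHETICAL sketch: "background-projection" is an invented property,
   not a shipped CSS feature. It illustrates the idea of wrapping a page
   in a 360 stereo photo with a single extra declaration. */
body {
  background-image: url("car-interior-360.jpg");
  background-projection: equirectangular-360-stereo;
}

/* The page itself could then go transparent so the 360 backdrop
   shows through behind the 2D content. */
main {
  background: transparent;
}
```

The appeal of a declarative form like this is exactly what Josh argues: the browser can see and optimize the scene, instead of treating it as opaque imperative WebGL code.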

[00:16:12.250] Kent Bye: There's a couple of things that that makes me think of. First of all, there's this concept of taking a 2D game and then porting it to VR, which has generally been kind of frowned upon within the industry, being like, yeah, don't do that. That doesn't work, right? Don't just put this in 3D and expect that to actually translate. To me, I'm somewhat skeptical of being able to do a similar transformation of taking the 2D web and just, you know, throwing some 3D on it and calling it a day. That also reminds me of how, whenever there's a new communications medium, people try to kind of replicate what was done in the previous medium. So going from non-web interactive content to throwing it on the web, a certain amount of how it was formatted and looked carried over. That was Web 1.0, and then we had Web 2.0, where it was sort of like trying to really figure out how the web was actually supposed to work, with a little bit more collaborative interactivity and the emergent properties that happen with comments and whatnot. But then we had the mobile revolution, which was yet another new communications platform, and people tried to take the existing ideas of the web and put them into mobile, but then they had to go through a whole other revolution of trying to figure out, well, how do you design mobile-first, and then take that mobile-first design and backport it. So I'm curious what your thoughts are on that, whether or not people should take a VR-first approach and then take that capability back into a responsive design that then goes from mobile to the web.

[00:17:32.082] Josh Carpenter: I do think it's responsive design. There's a really elegant theory that designers tend to use. It's called jobs to be done. If you go to CNN.com, you're hiring CNN.com to do a job for you. In that case, it is to inform you about something in the world. If you watch CNN on TV, it's the same job, and it's just being delivered in a different way. Now you're watching Anderson Cooper standing on the wall being hit by a hurricane, let's say, and you feel like you're there. That underlying job in VR is really interesting. You can imagine browsing a website on your phone and reading an article about what's happening, and there's a little embedded video, and it kind of wobbles. It's a magic window, and it responds to your device accelerometer. You realize it's VR content. You drop that into your headset, and you're not seeing a wall of text. You're actually now standing in a 360 orb of footage delivered just in time, no installations necessary, with Anderson Cooper standing there next to you telling you the significance of what you're actually seeing. And at the bottom, a chyron with just procedurally generated HTML that's nice and big and easy to read is making you aware of other contextual information about the scene you're seeing, but also other adjacent stories. You click on a link, and you're in a different location. Maybe a different reporter's telling you a different angle on this story. Let's say Mosul, for example. It's totally instant. It's totally serendipitous. I think there's going to be use cases that the web will enable that are different from native applications, that we just haven't been able to do yet because the friction is too high. I've got this line now, but I really think it's true. If visual performance, graphics, was the only definition of performance, we'd all be watching Blu-ray instead of Netflix. But we don't. 
There's other factors that are important in defining what performance actually is in the broader definition of the term. Friction is obviously a pretty big one. There's another angle here too, which is I don't think of my role or the web's role as being to try to design the end state like an intelligent designer. I really think of us as being almost like gardeners. Our job is to think about the right formula, and then to empower other people to create, and then to sort of guide and nurture as much as possible that ability of people to create, to kind of make sure there's enough sunlight, enough water in the soil, etc., etc. I really believe in the web as a vehicle for emergent creativity. I hope that in two years, we're sitting down and we're like, geez, we didn't see that coming. But someone, some amazing web developer or some large organization, it could be anyone, came up with these killer use cases that we didn't even see coming. And they were able to do it because they took these totally open source tools, they took this decentralized publishing platform, and they just built it and put it out there. And lo and behold, it gave rise to the next great idea. I really think the web's going to be a pretty powerful vehicle for that. So I'm pretty excited about that.

[00:20:00.860] Kent Bye: Well, I think part of the reason why the web took off so quickly, in part, was that other developers were able to view source, look at that source, and actually build upon it. And so I guess the exciting thing about this future with declarative code, from both HTML and CSS, is to be able to actually look at a VR experience, look behind the scenes, and then copy it and modify it in different ways. You know, within the web, there's a bit of, like, the CSS is copyrighted and proprietary, but yet, you know, the other kind of code is just out there. I think in VR, it starts to get a little bit trickier, just because there's a lot more 3D assets that may still be kind of within that realm of copyright. But for you, what do you see as some of the big advantages of this declarative approach? It's getting away from a black box, and you'll be able to actually see it, but are there other performance advantages of, you know, kind of using CSS as a way to put out textures and potentially shaders or other things like that?

[00:20:54.410] Josh Carpenter: Yeah, well certainly. I mean, if you're doing anything that's text-based or is a 2D layout, it turns out the web is really, really good at that. So that's fantastic. That's something that's really hard to do in WebGL, for example. Actually, impossible. We don't allow the DOM, HTML and CSS, to be rendered in WebGL for security reasons. But a hybrid approach is really nice. So you've got to imagine a really rich WebGL experience in the background, like A-Painter. But then the UI attached to your motion controller is rendered on the classic web. It has beautiful SVG graphics, HTML and CSS. It's just easier to work with. I think that we'll find what each is good for and use each piece according to its strengths. What I'm excited by is that it should be as easy to put a 3D model into a website as it is to put in an image. The notion that to put a 3D model into your site, you need to write a bunch of JS is ludicrous. The web wouldn't have taken off if it required that to do an image. So what we want to create eventually is a model tag. So a model tag, and I'm thinking glTF seems like a pretty good option for the delivery format, that makes it easy to plop a model into your site and then just scale it with new CSS physical units. These units actually let you determine how big or small something is. If, in a couple of years, you've got a whole bunch of content that isn't just a black box of imperative WebGL code, but is also parsable, indexable HTML and CSS, imagine the browser experience that a guy like me can create on top of that. Right now, we're all concerned, for very, very good reason, about your personal safety and security in VR. If it's imperative black box code, there's not a lot that the user agent can really do. It's up to the developer to keep you safe. What if the user agent, what if you, can actually keep yourself safe? 
So imagine if it's HTML and CSS content, the browser can occlude content, effectively creating a personal safety zone. And for a site to get any closer than you'd allow it to, it has to ask permission. Like, hey, can I get within five feet of you? Imagine the developer tools that can be built on top of this, because it's, again, indexable, parsable. You can open up any website and just inspect its properties and figure out how they did it and pull out little pieces of code. Right today, you can actually drag an image to your desktop. And that's only possible on the web, right? Imagine being able to do that in VR in the future. We can reach out in the scene, grab a piece of content, pull it forward, look at it, open it in a new tab, or even totally transform the contents of the site a la Flipboard or reader modes to be more conducive to your accessibility needs or to a use case that you have. I think this is very, very, very rich soil for user experiences that, again, I think is complementary to native and complementary to WebGL.
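The model tag Josh proposes here was speculative at the time. A sketch of what such markup might look like, where the `<model>` element, the "CSS physical units," and the file name are all hypothetical, could be:

```html
<!-- HYPOTHETICAL markup: no <model> element or CSS physical units had
     shipped in any browser at the time; glTF is the delivery format
     Josh suggests for such a tag. -->
<model src="car.gltf" alt="A 3D model of the car, viewable in place"></model>

<style>
  /* Invented physical units: size the model in real-world meters,
     the way the proposed units would let you do. */
  model {
    width: 2m;
    height: 1.5m;
  }
</style>
```

Because markup like this is declarative, the browser could index it, expose it in developer tools, and enforce the kind of personal-space permissions Josh describes, none of which is possible with an opaque WebGL canvas.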

[00:23:10.470] Kent Bye: Yeah, you mentioned security. And that's something that Tony Parisi had mentioned. And just in the course of link traversal, I think that right now when you go to a website and you are doing a full screen of a video, it says, are you sure you want this to take over the screen? And you kind of have to give a positive affirmation. And it's really annoying to do that every time. But yet, it's a security measure to prevent something from taking over your web browser. So when you go into VR, you have to do that same thing. Do you want this to be able to take over your browser? Is something like the Chrome VR or having something that's native, is this something that's taken care of in the background of being able to mediate that trust? Or you're just already immersed, and so you don't have to take over anything? Or what are some of the security implications of that, and what could go wrong, and what you're doing to address that?

[00:23:56.734] Josh Carpenter: Yeah, let me describe to you what I think the user experience needs to be for the web to have any value whatsoever. We should be able to put on a headset, step into a world, click on a link, and then be in another world with no friction. And ideally, this should all load in less time than it takes a Netflix video to start over a decent connection, or roughly equivalent, I think. That's a world that I want to be in. I think that's a good user experience world. Now, if every single time you click on a link, you're having to grant a permission, like, does that sound like a web that would have taken off on desktop or mobile? Like, there's no way. So what we're working on are ways of requiring permission in some instances to get into VR, but then once you're in VR, you're sort of granted implicit permission, and we're allowing sites to automatically present the VR versions of themselves. And we do some clever things behind the scenes. We create some new allowances for developers that allow them to actually opt into this mode. It's all very strictly gated. You have to act within and use APIs in a very prescribed way in order to get this. But then we can allow them to go basically right into VR mode. So you're just going from world to world to world. The other thing we're doing is, we don't want a world where there's content popping up in your face that you don't want. You can imagine how horrifying pop-ups could actually be in this world. So, and I come at this from a user experience standpoint more than an API standpoint, we really wanted there to be an escape valve. You hit a single button, it's easy to find, and you immediately have an escape valve: that site is paused, it is out of your face, and you can get away from it. You're surfing a web of arbitrary content. 
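A rough sketch of the link-traversal pattern Josh alludes to, under the assumption that it works roughly like the `vrdisplayactivate` event in the WebVR 1.1 spec: a page arrived at from within VR is notified that a display is active and may present without a fresh user gesture. The mock display below is illustrative only; a real one would come from `navigator.getVRDisplays()`.

```javascript
// Illustrative sketch of "implicit permission" link traversal.
// Assumption: behavior modeled on the WebVR 1.1 `vrdisplayactivate`
// event; the mock object below stands in for a real VRDisplay.

function makeAutoPresenter(display, canvas) {
  // In a browser this handler would be registered as:
  //   window.addEventListener('vrdisplayactivate', handler);
  return function onVrDisplayActivate() {
    // No user gesture needed here: traversal granted it implicitly.
    return display.requestPresent([{ source: canvas }]);
  };
}

// Mock VRDisplay for illustration only.
const mockDisplay = {
  isPresenting: false,
  requestPresent(layers) {
    this.isPresenting = true;
    return Promise.resolve(layers);
  },
};

const handler = makeAutoPresenter(mockDisplay, /* canvas */ {});
handler();
console.log(mockDisplay.isPresenting); // true
```

The "strict gating" Josh mentions would live in the browser's decision about when to fire the event at all, not in the page's handler.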
I mean, a couple years ago, I think Jezebel got hit by a wave of basically gore postings from some, you know... troll is not a strong enough word for what these guys did, right? It was pretty horrific stuff. You can imagine how horrifying that could be on the web, you know, in VR rather. So we want to create safety escape valves for people to actually utilize to get away from that kind of content. I mentioned the personal occlusion zone. I think that comes later. In the immediate term, it's just like one button and you're escaping out of that content; you're in a safe place.

[00:25:45.234] Kent Bye: Yeah, and to me, just talking about that, it brings up the issue of identity. I think right now, on the 2D web, identity is not necessarily required in a certain way, in that you could go to some sites anonymously. But other sites are really built around identity. And I think identity within a virtual experience is going to become a lot more important. There may be instances where you're totally an anonymous ghost and nobody could really see your true identity, but I think there's going to be a lot more of a need for that. So I just feel like there's going to be a number of issues with using things inspired by the blockchain for potentially identity, payments, and distributed web content, and taking on some of the challenges that I see in the overall infrastructure of the web right now, which is basically: if you want to put something on the web, you have to buy a server and pay money, and there's no real good way to distribute that across multiple computers. And I feel like, in some ways, I see this vision of the metaverse enabled by WebVR as kind of another chance to correct things that maybe we didn't get quite right with Web 1.0 and 2.0. This is like the next iteration to be like, OK, what works and what doesn't? Let's try to figure out a way to really viably scale out this metaverse that we all dream of.

[00:26:59.173] Josh Carpenter: Yeah, it's really interesting. As you were talking, I thought of this weird analogy. I don't know, maybe it's stupid. But the web today and its relationship to social networks is kind of like, imagine walking down the street in a city and not seeing anybody else. You can go anywhere and do anything, but there's no one else. And we crave social interaction. So to get social interaction, the only places that have it are like clubs. You have to go open a door and go into a building, and that's where the people are. That's kind of like the relationship between the web today and social networks. You can get to them on the web, but as you go between them, you're totally isolated. You're totally unaware that anyone else is looking at the same site as you are, unless you log in with an identity and you log into a network. Now, I actually don't think that's a horrible world. You know, it would be neat if we'd done identity in an open, federated way on the web, and certainly people have tried. But, you know, I think it's pretty cool how things have netted out. There's a lot of services I take great value from, and there's a lot of cool businesses that have been built on top of providing that identity piece and creating spaces for people. It would be pretty neat, though, if as you're walking down the street, you'd get to see other people. And any space, not just indoors, not in a walled network, could be a social space. You know, you might have some pretty cool experiences. Imagine, like, the Flash Mob. The Flash Mob equivalent in VR, built on an open, federated identity system. I don't think that that's where we need to start. I think we can create enormous value right away by giving better tools and more emergent and open decentralized tools to developers. But I think it'd be pretty cool to figure out that piece. There is one piece on that actually that is already underway. 
I'm not an expert in it, but the Web Payments API is gathering some steam from what I can tell. That's pretty cool. That's a way to enable a site to actually conduct a transaction, without there being an intermediary, directly from you to the site. Again, I don't want to overstate my expertise on this. I actually don't know the particulars of it. But again, you begin to see what could be an API for providing payments on the web without an intermediary, which is really interesting stuff. I think that would be one of the critical building blocks to enable people to make money in a way that doesn't necessarily rely on an app store. An app store becomes optional, not a necessity.

[00:28:55.716] Kent Bye: Yeah, I think that's probably one of the biggest challenges with the web right now: there isn't a really viable way to do microtransactions, because a lot of the authority and trust is put into these big companies that are mediating that, and the cost of paying for that makes it not really worth doing a microtransaction at all. And so there's no real way for people to send a tip based upon what their browsing behavior is. I just wonder, in moving from kind of the information age to the experiential age, if there's going to be a little bit more of an ethic of paying for experiences, and some sort of Bitcoin- or blockchain-enabled way to do financial transactions to actually make it viable. Rather than something going viral actually costing you a lot of money in the server fees that you have to end up paying, there could somehow be a microtransaction payment system that is actually able to compensate people for things that emergently go viral.

[00:29:49.092] Josh Carpenter: Yeah, again, this is not my area of expertise, but I am fascinated by it. WebTorrent is interesting. There's a wonderful write-up by someone recently who basically created a WebGL visualization of the galaxy. That's a lot of data. So you actually put the data that drives this visualization onto a torrent. And there's a new API, the WebTorrent API, that lets a browser actually access a torrent. So that's really interesting. There's also a thing called IPFS, which is a decentralized, I think P2P-based, network layer for the web. It can run alongside. It has some level of interoperability. But it is another place where you can actually host your files in a P2P, decentralized fashion. There's some really interesting stuff percolating out there, and obviously everything to do with the blockchain, where we can kind of squint our eyes and begin to see how this might really change the essential formula of the web, which has remained fairly static for coming on 30 years now, since Tim Berners-Lee made it in '89 or thereabouts. Yeah, the space is deep. We'll see how that plays out. I know that in the immediate term, what I'm really focused on is, let's get performance to a place where you don't throw up. That's got to be a baseline of performance. I don't think it has to be toe-to-toe with native. I really, really don't. I think it has to be good enough. And if it gets good enough, the web's other advantages kick in. If you've seen A-Painter, it's a perfect example of this. A-Painter loads up in seconds. It's near-native performance. And it's also shareable. So you make something, it's hosted at a unique URL, you put it into Twitter. Super, super, super shareable and viral. And it's extensible. It's totally open source. So within 24 hours, someone had made a rainbow brush, and they shared it. Like, that's awesome. Is it toe-to-toe in performance with native? No. But it has a whole bunch of other killer advantages. 
Again, it's the Netflix-to-Blu-ray analogy. But we've got to have good enough performance. And then the user experience that makes you actually want to put on a browser and surf from site to site and actually genuinely enjoy it, we've got to nail. And we're doing some really fun experimentations inside. Like right now, we've got one where you point a laser pointer at the surface of a page, and you can't quite read something. You want to bring it closer. You long-press and you pitch the controller back, and like a fishing rod, the line arcs and pulls it forward with a bit of a physical wobble. We're really trying to bring physical, direct manipulation into the design of the browser. I think we're going to have some pretty cool stuff that's going to make people go, oh, I never thought of that. And oh my god, that's actually fun. It's not just utility. It's actually genuinely fun to use. I think if we nail those two pieces and then get easier-to-use tools in the hands of developers, which is where the HTML and CSS pieces come in, but also some new frameworks and some better tools on the WebGL side, I think those are the conditions to really get the ball accelerating. Like, OK, another analogy. At MozVR, we talked about how our job is not to worry about the outcome. It's to throw a lit match into a dry field and get the burn started. And just the way the web works, it'll just accelerate and accelerate and accelerate. I really want to keep fanning those flames and making it go faster and faster and faster. Those are the three things I'm really focused on this year. And then in 2018, we sit down and have this conversation. We've got a bunch of browsers. We've got really great performance. You can begin to build some serious businesses and some really interesting things on top of that stack.

[00:32:35.047] Kent Bye: Yeah, the one thing about the gaming community that I've really noticed, coming from the open source world (I worked in Drupal for about eight years or so), is that there isn't a strong ethic of sharing code back on GitHub. With the web development community, there is an ethic of using open source licensed material and content, and of having these ways of taking stuff, building upon it, and extending different stuff. And I think, for me, that's what makes me the most excited to see these types of user experience experimentations. The thing that you're mentioning there makes me think of the skeuomorphic debate: whether you should create user experiences that mimic reality in certain ways, or whether that's actually worse, and you should create an entirely new way of interacting with what's perhaps simply a new paradigm, this 3D immersive environment.

[00:33:24.926] Josh Carpenter: Yeah, I love bad analogies, so here's one more. I think game designers are in the business of creating steaks. You sit down to eat a steak, you eat one steak. It's rich, it's complex, it takes a lot of work to make, it's expensive. That to me is games, or also movies, as a form of entertainment. But then there's rice. An individual grain of rice is not lovingly handcrafted. There's not a lot of value in a single grain of rice, but in aggregate it has enormous value. I think that is the gap between the web and games. When I talk about the web to some game developers, it's like, again, Tim Berners-Lee trying to talk about the web to a CD-ROM developer in the early 90s. He's like, I make CD-ROMs. I make beautiful, handcrafted experiences. This is totally... I can't do anything cool-looking on this. And they're right; it's maybe not quite exactly what they need. But I'm glad I live in a world where there's both, because I think they're really, really complementary. So, you know, a lot of what we want to do is just, again, to create the yin to that yang and then see what people do with it.

[00:34:18.648] Kent Bye: Well, yeah, that just brings it back to the closed versus open. And I feel like, just as we were talking about before, there does need to be a balance between the closed and the open, because, the way that you described it, there's the horizontal integration of the web and the vertical integration of a proprietary solution. Maybe you could expand on what you mean by that, the advantages of the horizontal for the web and the vertical for the proprietary.

[00:34:39.853] Josh Carpenter: Yeah, I think there are places in computing, or in society, where we all intuitively understand that a shared solution that we can all stand on top of would be a good thing. You know, it's great that Apple doesn't have to hire its own security force because we have a civil society, we have a police force. It's great that Google doesn't have to invent an electrical grid because we have one. It's shared. There's a standard there. I think that standards can be very, very useful to build on top of. They enable people to start at a higher floor and not have to reinvent the wheel. Bluetooth is another one. But actually, Bluetooth is an illustrative example because often these standards don't go all the way, especially when it comes to user experience. And so that becomes an opportunity for someone to say, well, I'm going to start on top of the standard, then add some integration on top of that to really nail user experience, add value on top of that standard. So I think everything in computing, and even in society, is a combination of those horizontal layers that we all build on top of and that transcend any one individual vertical, and verticals where there are opportunities for people to create value, to actually say, we're going to integrate at this point and create sort of an integrated solution around a better user experience or some other factor. And so what I'm hoping is that the web as a consumer user experience, a place you go to and you enjoy and you surf, that's fantastic on its own. I am also a really big fan of the web as infrastructure, as plumbing. And I mentioned before that the web is now a commoditized tool. Anyone, through these open source engines like Electron, Crosswalk, and the Chromium Embedded Framework, can plug it into their native applications. I think that's tremendous. I think the web wins, even if it's just plumbing. 
So I'm really, really excited by that, and actually, I hope that if we have this conversation again in a year, people will have taken Chromium, which is open source, and the infrastructure that our team is currently building, and built new browsers. It's hard to build an innovative browser on a 3.5-inch screen. It's just really difficult. But in VR? It could be completely divergent and fascinating. You could have a browser that looks like Vim or Emacs, made for the Neuromancer, Case kind of crowd. But you could also have a browser that was like an anthropomorphized character, you know, that had an actual personality and actually was visualized, almost like Cortana from the Halo games, in the world with you. Like, that's fascinating to me. Cambrian explosion is such a cliched term, but it would be pretty neat if the infrastructure that we're building on top of Chromium, as it flows into the open source world, is taken by developers who create other new browser experiences and just do really cool stuff with it. That's, again, one of the emergent outcomes I hope to see: not just innovation in the content that is consumed, but also in the user experiences through which we consume that content.

[00:37:10.407] Kent Bye: Yeah, just to clarify on that point, do you mean that the Chrome VR browser that you're building is based upon Chromium, which is an open source build, so that all the work that you're doing is, in some sense, open source? Yeah.

[00:37:21.297] Josh Carpenter: So Carmel from Oculus is built on top of Chromium, because it's open source. They can do that. And they'll build a really slick experience on top of it, I have no doubt; I cannot wait to see what they build from a user experience standpoint. And then Chrome, again, same code base, same platform, will do something for Daydream to start. And then we want to be elsewhere eventually as well. That's fantastic. That's really, really exciting. And then, this is really geeky, but it's really neat when a designer can implement their own interface, when we don't have to work on a god-awful spec and then give it to an engineer, who did not get into engineering to implement the whims of a pixel-pushing, anal-retentive designer. It's really neat. It's great for iteration time, and therefore for design quality, if a designer can implement their own UI. So what we're working on in Chrome VR, and it's very promising. It's not a slam dunk yet, but it's very promising, is the ability to build user experiences in web content. So in Chrome VR right now, in these early prototypes that we have, if you point your controller at a button, it glows. And that glow is actually created in HTML and CSS. It's just a CSS hover state. Any web developer knows how to use this. When you click on it, it's a JavaScript event handler that's handling that click, totally vanilla JavaScript. And yet it renders beautifully, at high performance, like any native application would, because it's actually rendered in OpenGL. What we're doing is we've actually created a hybrid HTML and CSS, JavaScript, and OpenGL UI content creation layer. And the takeaway would be, if I can build a UI on top of this, anyone can build a UI on top of this. Any moderately experienced web developer would be able to. So it's like we're putting some very, very, very, very powerful new user experience creation tools into Chromium. And so it will not require arcane knowledge to build your own browser. 
It will be something that many, many people can do. So we're really excited about that. We're having some interesting conversations with people who want to create products on top of this stack to do this. Like, if you use Slack on desktop today, Slack on desktop is built on Chromium. It's Chromium wrapped in a wrapper called Electron that enables it to be installed as a desktop application. So I use Slack on desktop on my Mac. It's a native application, but it is actually the web stack through and through. There's a lot of people who know the web and want to deploy on top of the web, or who want to distribute through an app store. They're going to be able to do that, and they're going to have the advantage of and access to all the tooling that we're building into Chrome VR. So it's very, very exciting.
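The hover-glow button Josh describes maps onto completely vanilla web code. Here is a minimal sketch (the class name, colors, and IDs are invented for illustration); in Chrome VR, markup like this would be rasterized and composited into the OpenGL scene rather than drawn on a flat page:

```html
<style>
  .vr-button {
    padding: 12px 24px;
    border-radius: 6px;
    background: #3367d6;
    color: white;
    transition: box-shadow 0.15s ease-in;
  }
  /* The "glow" when the motion controller's pointer is over the
     button is an ordinary CSS hover state. */
  .vr-button:hover {
    box-shadow: 0 0 18px rgba(130, 177, 255, 0.9);
  }
</style>

<button class="vr-button" id="play">Play</button>

<script>
  // The controller click arrives as a totally vanilla JS event.
  document.getElementById('play').addEventListener('click', function () {
    console.log('clicked by motion controller');
  });
</script>
```

Nothing here is VR-specific, which is exactly the point: the hybrid layer reuses skills any web developer already has.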

[00:39:31.848] Kent Bye: So, as someone who was working in web development in the 2000s, I remember having this fractured web browser ecosystem with all these different implementations and a lack of support for standards, Internet Explorer being the prime example. Essentially, there was a lack of consensus in terms of being able to write it once and have it work equally in all the different browsers. So it sounds like, with the momentum that's gathering around WebVR, there are at least going to be these different implementations in these different browsers, but the ideal is to have some sort of standardization so that you could write it once and not have to deal with hacks to make it work in, say, the Microsoft Edge browser.

[00:40:10.817] Josh Carpenter: Yeah, I mean, it has to work. You make trade-offs when you go to the web. Especially, like I said, if you're going for pure graphics performance, today, you probably want to be using native. You go to the web because you do have that interoperability. Because a site that you publish can be accessed by someone on their mobile phone, or on a desktop browser, or in a VR browser, or in a different VR browser. Like, that's essential. So we had that meeting last week with the W3C, this workshop in San Jose at Samsung's headquarters. Having all the browser vendors there and having other primary technology companies there was really exciting, because you have Microsoft there saying, well, we're buying into this. We love this API. You've got Oculus saying, yeah, we love this API too. Mozilla, Google, everyone is there, and they're buying into it, and they're saying, we're going to observe the standard. Because you're right, we don't want the web to fragment into these little fiefdoms with people trying to insert proprietary technology into it. That would be very bad. It would go against the grain of the web. It would undermine its fundamental value proposition. I think the dynamic that you're going to see is the extensible web, which I mentioned before, will enable the web to innovate faster and try new things while waiting for standards. Standards bodies will come and pave the cowpaths where we believe a new standard is needed because we've actually observed a need in the market. The web will also learn from native. So an Altspace, a Janus, a High Fidelity, they are blank-slate approaches to do something that is webby. So the real web can learn an enormous amount from what they're trying, because they're not burdened by the obligations of billions of users, so they can try stuff out. Again, bad analogy warning, I think of them as being speedboats in front of the oil tanker. 
Like, I'm the guy in an oil tanker trying to steer it in the right way, but it's a slow-moving ship, inherently, you know? It's gotten faster, but it's still kind of slow. And also, for a Janus, they can just try stuff and find out what works, and then we can learn from it. So, social. I mean, I think we all intuited social would be pretty darn important based on, like, 30 years of science fiction, but we're understanding just how important. So the web can learn from what we're seeing and can begin to, maybe, prioritize social a little bit higher than we might have otherwise. So that's a dynamic I think is really healthy, and I'm grateful for it, actually.

[00:42:00.555] Kent Bye: Great. So what's next for WebVR and where it's going from here and what people can do to get more involved?

[00:42:06.159] Josh Carpenter: Yeah. So Chrome VR coming next year. The WebVR API, which makes WebVR experiences possible, landing on Chrome Stable. So no more experimental builds. It's in Chrome. So lots of people suddenly have the WebVR API. Better tools. I mentioned HTML and CSS becoming tools you can use. We'll probably start with a 360 background tool, like one line of CSS, 360 background. It's pretty easy for us to implement that. We should do that. I think Samsung's already got a version of that running in the Gear VR browser. So better tools. And then also just an ongoing push on performance. The web hasn't had to be good at 3D, fundamentally, yet. It's got to get good at 3D. Computing is not going to become less 3D. There is no version of the history of computing where computing gets flatter. It's becoming 3D. So if we care about the web, and we want the web to be just as vital in the future as it is today, the web's got to get good at 3D. So we're going to be pushing really hard on performance as well. So those are three things we're going to be doing this year. You'll see other browsers jumping into the fray as well. And then in terms of getting involved, the web's open source. So people can go get A-Frame. A-Frame is near and dear to my heart, obviously. People can go get A-Frame and start building stuff with it. There's now React VR from the team over at Oculus and Facebook. React is enormously popular, almost a meta-platform unto itself for web development, and even for native development. They can pick that up and start to use it. The job is to just jump in, in the open source community, find some tools, get on board. The web is kind of wild and decentralized. There's lots of endpoints. And you start to experiment.
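For context, A-Frame's version of the "360 background in one line" idea already existed at the time via the `<a-sky>` primitive. A minimal page looks something like this; the image path is a placeholder, and the 0.3.0 release URL reflects roughly the version that was current when this was recorded:

```html
<!-- Minimal A-Frame scene: <a-sky> wraps an equirectangular image
     around the viewer as a 360 background. Image path is a placeholder. -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-sky src="my-360-photo.jpg"></a-sky>
    </a-scene>
  </body>
</html>
```

The CSS version Josh proposes would push this same one-liner ergonomic below the framework level, into the browser itself.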

[00:43:25.231] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of virtual reality, and what it might be able to enable?

[00:43:33.553] Josh Carpenter: The trend in computing is for computing to go from big, expensive, and alien to small, affordable, and very human. I don't see that trend slowing down. I see it only accelerating. And it's not hard to extrapolate kind of how that plays out. I think VR and immersive computing in general is extraordinarily important in this step towards the marriage of the digital and the human, you know, and to a computing that is more, I think, sensitive. When I sit down to do Microsoft Word, I'm not using Word because I love Word. I'm using it because I'm trying to get something done. I'm hiring it to do a job, which is I want to do some writing. It does it really well. But what prevents me from doing writing is actually bigger than that. It's me being distracted. It's the environment around me being distracting. It would be pretty amazing if computing is smart enough to know that and to help me get work done more effectively. I think VR and AR are going to be some of the important steps in the path to actually get into that point. So yeah, it's going to be an interesting decade. Awesome.

[00:43:25.231] Kent Bye: Well, thank you so much. No worries. So that was Josh Carpenter. He's on the Google Daydream team working on WebVR and bringing VR to the open web. So I have a number of different takeaways from this interview. First of all, this analogy that Josh made about Netflix and Blu-ray I think is really key, because I think it really describes the potential of the web. What Josh is saying is that if you're really looking at visual fidelity as the ultimate measure of performance, then we'd actually all prefer to buy Blu-rays and to watch media on that, because that would be the highest quality. Now that's true, but there's a certain amount of convenience and low friction, and it's also just cheaper, to use Netflix to stream it live. It's instantaneous, and it's lower quality, but the convenience is there. And I think that is a great metaphor for the power of the web, because it starts to allow you to have an experience instantaneously. You don't have to download it. You don't have to wait for a super long time or worry about discovering it. Maybe it's just a link that you're sent, and you're able to pop right into an experience. So I think that's an important distinction, especially if we start to think about where the metaverse is actually going to start. So it sounds like, with Daydream having kind of the first WebVR-enabled browsers, you can expect that some of the early VR experiences that are going to be on the web are actually not going to be all that intensive. You have to remember, this is going to be like a mobile phone that's going to be running these experiences. And so right now, there are already kind of three different types of VR experiences. You have a mobile VR experience, a desktop experience, and then a room-scale experience. 
And while there are going to be APIs that enable the room-scale experiences from the beginning, I think these first experiences are going to be a lot more similar to a mobile VR experience, because that's where the Daydream hardware is going to be at. It's not going to be able to run a full Oculus Rift type of experience. And also, people want something to load quickly, and they're not necessarily going to want to sit there for a minute or two waiting for everything to load in. It also brings up the issue of a media query. So right now, when you go to a web page, there's a media query that is trying to determine what kind of device you're looking at. Is it a desktop computer? Is it a phone? Is it a tablet? To be able to then determine how to deliver the page. And it's with those media queries that it's able to kind of dynamically serve up the content and style it in a way that's really optimized for your device. And so I imagine there's going to be something similar eventually for VR that's going to detect whether you're on a Daydream phone, or on an Oculus Rift or a Vive. And then, given that information, it's going to then perhaps get some more information as to what type of input controls you have. Is it just a Daydream, where you have a 3DOF controller? Is it a Gear VR, where you don't have any controller? Or do you have a gamepad controller for the Oculus Rift, or six-degree-of-freedom controllers for the Vive? And so there might be ways of dynamically changing the level of fidelity of the experience based upon what kind of hardware you have to run it. But like I said, I think in the beginning, people who are doing WebVR are probably going to be targeting the biggest demographic of the audience, which is going to be mobile phones, since there are just going to be so many Daydream-enabled phones out there. 
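This "media query for VR" idea could be sketched as a small classifier. The following is a hypothetical illustration, not a real browser API: the capability flags mirror the kind of information a WebVR `VRDisplay` and the Gamepad API expose (positional tracking, controller degrees of freedom), and the three tiers are the ones listed above.

```javascript
// Hypothetical "VR media query": bucket a device into one of three
// experience tiers (mobile / desktop / room-scale) from capability
// flags. Illustrative only -- not a real browser API.

function classifyVrTier(caps) {
  if (caps.hasPosition && caps.controllerDof === 6) {
    return 'roomscale'; // e.g. Vive: positional tracking + 6DOF wands
  }
  if (caps.hasPosition) {
    return 'desktop';   // e.g. Rift: positional tracking, gamepad
  }
  return 'mobile';      // e.g. Daydream (3DOF controller) or Gear VR
}

console.log(classifyVrTier({ hasPosition: false, controllerDof: 3 })); // mobile
console.log(classifyVrTier({ hasPosition: true, controllerDof: 0 }));  // desktop
console.log(classifyVrTier({ hasPosition: true, controllerDof: 6 }));  // roomscale
```

A page could then serve lighter assets or simpler interactions for the "mobile" tier, the same way responsive sites adapt layout today.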
The other thing that Josh said that I thought was really interesting is that, with the open source nature of Chromium, they're not only going to enable other companies like Facebook and Oculus to build the Carmel browser, but he started to say, what if the browser was not this kind of screen that you're looking at? What if it was like an anthropomorphized version of a character? And, you know, where my mind went with that was, well, you could be in a VR world and you have this NPC character that maybe has some AI behind it. And then maybe you could start to do some chatbot conversations. And maybe this browser is able to dynamically query the API of Twitter and have a conversation with this character, which is being driven and powered by a web browser. And it's able to take your natural language input and call out to a natural language processing API, either from Watson or Google's Speech API or Microsoft's version of that. And so you're able to have a natural language conversation with it. And then it's able to perhaps make a Twitter API query to ask it a question about, what do you think about this? And then maybe it searches through all that user's tweets about it, and then it answers, synthesizing that search into all the different thoughts that that character has. Something like that, being powered by a web browser. And so in my previous interview with Mozilla, I talked about Electron, where you'd be able to write an app in HTML, CSS, and JavaScript and basically bundle it into a little self-contained container and then release it as a native application on all these different platforms, whether it's on a Mac, Windows, Linux, iOS, or Android. So just the same, maybe there will be these kinds of cross-compatible characters that'll be able to go amongst all these different platforms and be able to interface with them. 
And so I never really thought about that being driven by a web browser, but the way things are going, that could definitely be the case. The other conceptualization Josh was talking about is the horizontal baseline of low-level APIs that everybody can build on top of. That's what they're building out right now: a platform for a good VR experience that runs at 90 frames per second and passes the minimum bar of what it takes to drive a VR experience. Then there are always going to be vertically-integrated applications, what Josh described as the speedboats, like High Fidelity, AltspaceVR, and JanusVR. They're able to take a more proprietary, self-contained approach to really push the edge of the technology. They're going to be proving out a lot of the social behaviors and dynamics, and they're able to add value on top of this common baseline. But the way Josh thinks about it is that what they're doing at Google, and with the entire WebVR initiative, is trying to raise the tide for everybody with this common baseline of APIs. It sounds like there are a couple of other APIs that could help as well: the Web Payments API could enable microtransactions, and WebTorrent could distribute the load of asset delivery. So there's just going to be more and more innovation on the open web, to the point where it can do things that you might not be able to do in a self-contained experience. I think there are always going to be the premium AAA experiences, driven by amazing graphics and amazing interactivity with solid performance, all these apps generated by Unity and Unreal and sold as apps.
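The microtransaction idea can be sketched with the W3C Payment Request API that the Web Payments work was standardizing. `PaymentRequest` is the real browser constructor; the line items, prices, and payment method below are hypothetical placeholders for what a WebVR storefront might sell.

```typescript
// Sketch of a WebVR microtransaction built on the W3C Payment Request API.
// PaymentRequest is the real browser API; the items, prices, and payment
// method here are hypothetical placeholders.

interface LineItem {
  label: string;
  amount: { currency: string; value: string };
}

// Build the `details` argument for a PaymentRequest as a pure helper, so the
// displayed total always stays consistent with the line items.
function buildPaymentDetails(items: LineItem[]) {
  const total = items
    .reduce((sum, item) => sum + parseFloat(item.amount.value), 0)
    .toFixed(2);
  return {
    displayItems: items,
    total: { label: "Total", amount: { currency: "USD", value: total } },
  };
}

// In the browser, this feeds the real constructor:
// const request = new PaymentRequest(
//   [{ supportedMethods: "basic-card" }], // payment method data
//   buildPaymentDetails([
//     { label: "Avatar hat", amount: { currency: "USD", value: "0.99" } },
//   ])
// );
// request.show().then(response => response.complete("success"));
```

The appeal for the open web is that the browser, not each individual VR app, holds the payment credentials, so a tiny purchase inside a WebVR scene could go through the same sheet as any other web checkout.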
But the other analogy that Josh made is that right now it's kind of like Tim Berners-Lee talking to a CD-ROM developer about the potential of the open web when nothing had been built yet. The people building those CD-ROMs were saying, "This is an amazing authored experience with a level of quality that you're not able to match, so we're going to keep working in this kind of siloed walled garden of the CD-ROM." I think that's similar to where things are going to be with VR: these native VR apps that you download and run on your desktop are kind of like those CD-ROMs. They're amazing, they have great experiences, and there will likely be an ecosystem for them for a long, long time, because they're able to provide a quality of experience that you just can't meet right now on the web. But going back to that Netflix example: are you watching videos on Blu-ray at top quality, or are you using Netflix because it's just convenient? Netflix isn't going to have every movie you want to watch, so you may still have to go to a video store, buy a direct download, or even go out to the movie theater. But the point is that the open web enables a lot of convenience and low friction, which I think is going to play a huge part in how the metaverse develops. And with Daydream coming out first, it may help set the tone for some of these applications. Soon there's going to be browser support for seeing WebVR experiences on both the Vive and the Oculus Rift, but in terms of market share, that's going to be a lot smaller. Since Daydream is going to be the first and probably the most prevalent, I think that's where we're going to see some of the first big WebVR-specific experiences. So it's still really early days for WebVR.
If you want to get more involved, you can go to webvr.info to get all the information you need to get up and running, including development builds of both Firefox and Chrome so you can actually start to look at things within WebVR. There's also a public Slack channel that you can join. And if you want to go back and listen to the Voices of VR podcast tracking WebVR from the early beginning, even before it was called WebVR, I think this podcast has actually been a part of the story: helping to keep the larger community up to date on something that was pretty fledgling, with people who just believed in the vision of the open web and continued to push through, and a whole community that's been developing around it. So if you want to hear the evolution of WebVR, you can go back and listen to the interviews I did with Mozilla: with Vlad Vukicevic in episode 13, with Josh Carpenter in episodes 150 and 350, and then with Diego Marcos and Chris Van Wiemeersch back in episode 471, the one right before this episode. I talked to Tony Parisi back at the very first SVVR in episode 40, with different check-ins in episodes 240 and 465. Brandon Jones has been leading up the Chrome side of WebVR. It started as a 20% project, and now he's doing it full-time. You can listen to the interviews I did with him back in episodes 112 and 362. And I asked the vice president of Google VR, Clay Bavor, about WebVR in episode 456, which I think is actually the first time that he talked about WebVR in public. So that's all that I have for today. If you enjoy the podcast, then please do spread the word, tell your friends, and become a donor to the Voices of VR Patreon. Just a couple of dollars a month makes a huge difference, especially if everybody starts to chip in.
So you can go to patreon.com slash Voices of VR.
