Mozilla’s mission statement is to ensure that the Internet remains a global public resource, open and accessible to all, and they’ve been helping bring VR to the web for the past three years. A-Frame is an open source framework that has gained a lot of momentum over the last year with more participants on the A-Frame Slack channel than the official WebVR Slack.
I had a chance to catch up with A-Frame core developers Diego Marcos & Kevin Ngo at the IEEE VR conference in March to get an overview of A-Frame and how it's driving WebVR content and innovations in developer tools. Mozilla is also planning on shipping WebVR 1.1 capabilities in the desktop version of Firefox 55, which is slated to launch in August.
LISTEN TO THE VOICES OF VR PODCAST
Mozilla believes in open source and the open web, and they have a vibrant and very supportive community on the A-Frame Slack that is very helpful in answering questions. Ngo has been curating weekly highlights from the A-Frame community for over a year now, posting the latest experiences, components, tools, and events in his Week in A-Frame series on the A-Frame blog, which has helped grow the A-Frame community.
A-Frame uses an entity-component model that's very similar to Unity's, where you spatially position 3D entities within a scene and then attach components and scripts that drive the interactive behavior. There's a visual editor for moving objects around in a scene, and an in-VR editor is on the roadmap so that you'll be able to put together WebVR scenes in A-Frame while in VR. There's an open source collection of components that is being officially curated and tested in the A-Frame registry, but there are also various collections of interesting components in GitHub repositories, such as the Awesome A-Frame components list or the KFrame collection of components and scenes.
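To make the declarative model concrete, here is a minimal scene along the lines of A-Frame's own hello-world example, in which each HTML tag is an entity and each attribute is a component (the script URL pins the 0.5.0 release that was current at the time of this interview):

```html
<html>
  <head>
    <!-- A-Frame ships as a single script include -->
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Each tag is an entity; position, rotation, and color are components -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Open that file in a WebVR-capable browser and the scene renders with VR entry handled for you; no WebGL or three.js code is required at this layer.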
Google even announced at Google I/O that they're using A-Frame to rapidly prototype Google Expeditions experiences. WebVR and A-Frame are a perfect combination for Google as they try to organize all of the world's information. The strength of the open web is that you're able to mash up data from many different sources, and so a lot of educational and immersive experiences focusing on mental presence are going to be built on top of WebVR technologies.
In my interview with WebVR spec author Brandon Jones, he expressed caution about launching the Google Chrome browser with the existing WebVR 1.1 spec, because there are a lot of breaking changes that will need to be made in the latest "2.0" version in order to make the immersive web more compatible with both virtual reality and augmented reality. Because Chrome is on over 2 billion devices, Jones said that they didn't want to have to manage this interim technical debt and would prefer launching a version that's going to provide a solid future for the immersive web.
Some WebVR developers like Mozilla’s Marcos and Ngo argue that not shipping WebVR capabilities in a default mainstream browser has hindered adoption and innovation for both content and tooling for WebVR. That’s why Mozilla is pushing forward with shipping WebVR capabilities in Firefox 55, which should be launching on the PC desktop in August.
Part of why Mozilla can afford to push harder for earlier adoption of the WebVR spec is that the A-Frame framework will take care of the nuanced differences between the established 1.1 version of the WebVR spec and the emerging "2.0" version. Because A-Frame is not an open standard, they can also move faster in rapidly prototyping tools around the existing APIs to enable capabilities, and they can handle the changes in the lower-level implementations of the WebVR spec while keeping the higher-level A-Frame declarative language the same. In other words, if you use the declarative language defined by A-Frame, then when the final WebVR spec launches you'll just have to update your A-Frame JavaScript file, which handles the spec implementation and allows you to focus on content creation.
Mozilla wants developers to continue to develop and prototype experiences in WebVR without worrying that they’ll break once the final stable public version of WebVR is finally released. Mozilla is willing to manage the interim technical debt from the WebVR 1.1 spec in order to bootstrap the WebVR content and tooling ecosystem.
Mozilla is also investing heavily in a completely new technology stack with their Servo browser, which could eventually replace their mobile Firefox technology stack. Marcos previously told me that Servo is being built to support immersive technologies like WebVR as a first-class priority over the existing 2D web. Servo has recently added Daydream support, with GearVR support coming soon. They've shown a proof of concept of a roller coaster app built in three.js that runs as a native application within Daydream.
Overall, Mozilla believes in the power of the open web and wants to be a part of building the tools that enable a metaverse that's a public resource that democratizes access to knowledge and immersive experiences. There are a lot of questions around concepts like self-sovereign identity, how an economy is going to be powered by some combination of cryptocurrencies and the Web Payments API, and how concepts of private property ownership might be managed by the blockchain. A lot of the gift-economy concepts that Cory Doctorow explores in "Walkaway" are being actively implemented by Mozilla through the open source creation of the Metaverse, and everyone in the WebVR community is looking forward to a stable release later this year. For Mozilla, that begins in August with Firefox 55, but this is just the beginning of a long journey of realizing the potential of the open web.
Support Voices of VR
- Subscribe on iTunes
- Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to the Voices of VR Podcast. So on today's episode, I'm going to do a deep dive into A-Frame, which is the most popular WebVR framework that's out there, and it's created by Mozilla. So Mozilla has been working on WebVR since the very beginning, and part of their mission statement is to promote an open, decentralized web that works for everybody. They see the web as a public resource, and they believe open source is the best way to promote the type of openness and collaboration and sharing that comes from the internet. And so I had a chance to talk to a couple of the core developers of A-Frame, both Diego Marcos as well as Kevin Ngo. And we talked about the A-Frame framework, which is essentially a way for you to describe a scene in much the same way that you would within a Unity system, where you have these different objects, and there are different entities and components, and you attach behaviors to those objects. So, there are a lot of web technologies on the low end that have to take care of actually rendering things out into WebGL, driven by three.js JavaScript, and A-Frame is like this layer on top of everything else where you just get to define what you want to see in the scene and A-Frame makes it happen. So, it's like this declarative language. So, the other dimension is that Mozilla wants WebVR to get out there, and so there have been a number of different internal discussions within the WebVR community as to when they should ship. They have a 1.1 version, which is pretty solidified, but in my discussions with Brandon Jones from Google in yesterday's episode 537, Google's position is that they've got like 2 billion Android devices, and as they push out these updates into Android and put it out to basically the entire world, they want to make sure that it's solid.
And because they know that WebVR 1.1 is going to break, Google is a little bit more hesitant to push it out. They're kind of waiting for the latest specification, which is internally being called 2.0, to solidify a bit more. But from A-Frame's perspective, they don't really care, because the framework is going to take care of those nuanced differences between 1.1 and 2.0 on the back end. It's just a matter of upgrading your A-Frame JavaScript file, and everything else that you do within A-Frame is going to work, whether you create it in the latest version of 1.1 or you wait for the official launch of WebVR sometime later this year. So the bottom line is that if you want to get started working in WebVR, then A-Frame is a great framework to start with. It's got a huge open source community and a lot of people participating and creating different components so that you can start to create some of your first WebVR experiences. So we'll be covering all that and more on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. The Voices of VR is a gift to you and the rest of the VR community. It's part of my superpower to go to all of these different events, to have all the different experiences and talk to all the different people, to capture the latest and greatest innovations that are happening in the VR community, and to share it with you so that you can be inspired to build the future that we all want to have with these new immersive technologies. So you can support me on this journey of capturing and sharing all this knowledge by providing your own gift. You can donate today at patreon.com slash voices of VR. So this interview with Diego and Kevin happened at the IEEE VR academic conference. They were just about to do a half-day workshop teaching academics how to use A-Frame.
So that was taking place in Los Angeles, California on March 18th, 2017. So with that, let's go ahead and dive right in.
[00:03:53.853] Diego Marcos: So I'm Diego Marcos. I'm one of the core developers of A-Frame. I started doing VR mostly at the very beginning when WebVR was not even named WebVR. We started A-Frame, and it's been growing, and we are very happy with the results, and now very excited about what's coming.
[00:04:11.089] Kevin Ngo: And I'm Kevin Ngo. I'm also a developer on the Mozilla VR team. And I started working with the VR team in September of 2015. And I was at A-Frame kind of towards the very beginning. And we helped release it. And I work on it with Diego.
[00:04:26.016] Kent Bye: Great. So I've done a lot of podcasts about web VR and VR over the last couple of years. But I haven't specifically had one just solely focused on A-Frame. So maybe let's take a step back. And you guys can describe to me what A-Frame is and what it enables you to do.
[00:04:40.203] Diego Marcos: So with A-Frame, we were basically scratching our own itch. So we are web developers, right? And I have some background in computer graphics, but I'm not an expert. And it feels overwhelming: even if you know 3D graphics but don't know anything about VR, it's difficult to get started. And if you look at the web, most of the people don't have 3D graphics knowledge. And we said, OK, so if we want WebVR to be successful, it's easier to onboard all the millions of web developers that are out there versus trying to convince 3D graphics people to come to the web. And since we are web developers, we speak the same language. So let's build tools that resonate with them and reuse concepts and terms that they already know.
[00:05:23.928] Kevin Ngo: So, A-Frame is a web framework for building virtual experiences, and the outermost layer of it is HTML, so people that don't even know code at all can get started with building VR and WebVR applications. We've seen kids be able to use it, educators use it, designers that don't know JavaScript, and it's especially familiar for web developers that are accustomed to HTML and JavaScript and all the DOM APIs. They can just jump into A-Frame and use the tools they know and love. And since it's based on HTML, it works with all the web frameworks out there, jQuery, React, Angular, or Vue.js. It's very, very familiar.
[00:06:01.368] Kent Bye: Yeah, so the idea of a declarative language, you know, go back to VRML and they attempted to do that. And this is sort of like the updated version, but yet it seems like there's a lot more libraries and standards to be able to actually pull it off. So maybe you could talk about what you're putting at the highest level and then what kind of like libraries are involved to actually translate that into an immersive 3D experience.
[00:06:26.394] Diego Marcos: Yeah, so A-Frame is built on top of three.js. So at the core, it's just a declarative layer on top of three.js that makes it possible to declare your scene graph using HTML, pretty much the same way that you structure a website. So basically, we use custom elements. That is a way to define your own tags in HTML. And you can attach some logic to those custom elements. So basically, when you add one of those custom elements to the scene, under the hood we are actually instantiating a three.js object that does all the 3D graphics rendering part.
[00:07:02.844] Kevin Ngo: A-Frame is also based on an entity-component pattern. All the Unity developers are really familiar with this, but it's a very common pattern in game development where every object is an empty object and you attach components to provide behavior, functionality, and logic. If you look at a box, it's actually just a geometry component and a material component together. And the beautiful thing about this is anyone can create components to do anything with JavaScript or three.js. They package it up into a component, and then a developer that doesn't know any code at all can take that component and just use it in their HTML declaratively. So someone across the world could make a physics component, and another person can just pop that into HTML and use it straight out of the box.
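As a sketch of what Ngo describes: AFRAME.registerComponent is A-Frame's real API for packaging behavior, while the "spin" component below is a made-up example of logic that anyone could publish and others could attach to entities declaratively:

```html
<html>
  <head>
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <script>
      // A hypothetical 'spin' component: rotates its entity every frame.
      AFRAME.registerComponent('spin', {
        schema: {speed: {type: 'number', default: 45}}, // degrees per second
        tick: function (time, timeDelta) {
          var rotation = this.el.getAttribute('rotation');
          rotation.y += this.data.speed * (timeDelta / 1000);
          this.el.setAttribute('rotation', rotation);
        }
      });
    </script>
  </head>
  <body>
    <a-scene>
      <!-- A box is just geometry + material; 'spin' adds behavior declaratively -->
      <a-entity geometry="primitive: box" material="color: #4CC3D9"
                position="0 1 -3" spin="speed: 90"></a-entity>
    </a-scene>
  </body>
</html>
```

Someone who knows no JavaScript only ever touches the last few lines: they include the component's script and write `spin="speed: 90"` in their HTML.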
[00:07:43.341] Diego Marcos: Yes, so just to emphasize, the main difference between VRML and other standards-based approaches and A-Frame is that A-Frame is not a standard. So basically, it allows us to iterate much faster based on usage feedback. And the other advantage of not being a standard is that it's easily extensible without asking for permission; we don't have to go to committees or through approvals or through spec conversations. So as Kevin mentioned, through the entity-component architecture, everybody can extend it. You can reuse those components. And there's a whole community that can build around those extensions.
[00:08:20.632] Kent Bye: Yeah, as I'm thinking about A-Frame and the web, some of the unique affordances of the web are that you can view source and then perhaps copy what other people have done. We don't necessarily have that when things are compiled down to a binary with either Unity or Unreal Engine. And yet, on the other hand, working with game engines like Unreal or Unity, you have a spatial interface where you can go in and start to move things around with a graphical user interface in a 2D plane, but it's abstracted into this 3D arranging of different experiences. So maybe you could talk about some of the tools that you have available in terms of being able to either write that code or directly manipulate the objects within that spatial medium.
[00:09:03.385] Diego Marcos: So with A-Frame, we want to actually keep the original spirit of the web alive, which you mentioned, like the view-source capabilities. And in A-Frame, on any scene, you have a shortcut, Ctrl-Alt-I, that invokes a visual inspector. So on any scene that you see out there (actually, many people are not aware of this), you can just open up this editor, inspect the scene, see how it's made, and learn from other people's work. And that's one aspect of it. So you have this visual way to manipulate an A-Frame scene. And on the roadmap, we have in the pipeline an actual in-VR editor, where you will be able to pop into a scene and actually manipulate that scene with your hands, pretty much along the lines of what we are seeing in the Unreal and Unity editors.
[00:09:49.282] Kevin Ngo: Yeah, that is called the inspector, and it's also hooked up to something we have called the registry. It's very similar to the Unity Asset Store, where they have a collection of components. We're doing something really similar, where developers are building open source components and we collect the best of them. We curate them, code review them. And then just from this visual inspector, you can inject lots of components, like physics, or rain, or mountains, right out of the box.
[00:10:15.218] Kent Bye: So it seems like one of the big things that we want to be able to do is go from one VR experience to another. So maybe you could talk a bit about what's happening in the realm of WebVR, or A-Frame specifically, in terms of these portals. Philip Rosedale pointed to why he thought the early days of the web won out versus the AOLs and the CompuServes: the benefit was being able to interconnect and link different experiences together, so that, per Metcalfe's Law, the value of the network increases with the square of the number of nodes in that network. So the more interconnections you have, even though the visual fidelity may not be as good as Unreal Engine or Unity right now, it's that interconnection that I think is going to really give WebVR, in the long run, something that's going to be as compelling as the web.
[00:11:06.988] Diego Marcos: Yeah, so if you go to Google and you search for the first website ever, you're going to end up on the CERN website. And actually, the first two sentences that you read there mention hyperlinks, connected information. So links are the fundamental ingredient of the web. And due to the rush of shipping things quickly, links in VR don't work the way that you would expect. You can transition between sites, but right now, if you enter VR on a site and you transition to a different one, you get dropped out of VR and you have to engage again. But the latest spec of WebVR actually considers this use case: being able to engage VR mode only once, and then transition between sites without taking your headset off. And this is coming very, very soon in Firefox Nightly, I would say even in a few days, where we will have the ability to transition between sites without exiting VR mode. And on the A-Frame side, we will have a link component that will allow you to actually create those seamless experiences within VR. So you will be able to traverse worlds, and you will also be able to decide how you want to represent those links. If you are in a world, how do you represent those connected worlds that are accessible to the user? So we are going to have our version of how those links should be represented, but this component allows for customization. So we are looking forward to seeing what people come up with.
[00:12:39.108] Kevin Ngo: So link traversal is a very core value proposition of WebVR. And yeah, we'll definitely see developers create what they think of as links. Like in Job Simulator, if you want to go back, you eat a burrito. And that's what they represent as a link. And once we have this interconnection, everyone can host their own world and just publish it instantly, and you can travel across different websites. I think the web will start to manifest itself more as something that we look at as the metaverse, this shared collective virtual space that we see in science fiction. You can travel from place to place, and it's all persistent, and you can see it with other people. You can take people along with you. And once we have this link traversal, questions will also arise: what will a browser look like? What will your vehicle look like as you're traveling? What kind of UI? What's the browser chrome of WebVR? So once link traversal comes out, we'll start to see what it looks like.
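For reference, the link component described here eventually shipped in A-Frame as `<a-link>`. A sketch of a portal between two worlds might look like this, where the destination URL and thumbnail image are placeholders:

```html
<a-scene>
  <a-assets>
    <!-- Placeholder thumbnail shown inside the portal -->
    <img id="other-world-thumb" src="thumb.png">
  </a-assets>
  <!-- Entering this portal traverses to the linked site without exiting VR -->
  <a-link href="https://example.com/other-world/" title="Other World"
          position="0 1.5 -4" image="#other-world-thumb"></a-link>
</a-scene>
```

As Marcos notes above, this default representation is customizable, so a "link" could just as easily be a doorway, a vehicle, or a burrito.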
[00:13:34.510] Kent Bye: Yeah, back in the fall, there was the first WebVR W3C workshop, which really brought a lot of the global WebVR people together for the first time. And I don't know if at that point it was the 1.0 or the 1.1 spec that was announced. We're kind of in this phase where we're still getting integration with WebVR across all the different browsers. And I've heard rumblings of even a 2.0 spec. So maybe you could tell me what you can about what is happening in this realm of WebVR, and when people should start to build against 1.1, or wait for 2.0, or just sort of where you see this inflection point might be.
[00:14:11.587] Diego Marcos: So right now, we have two coexisting versions of the WebVR spec: the one we call 1.1 and another one we call 2.0. 1.1 is almost finalized, and 2.0 is still under development. And 1.1 is the spec that current experimental browsers implement. And at Mozilla, we expressed intent to ship the 1.1 version of WebVR, because we think it's important to ship early, since there are a lot of things that have to mature. You have to mature the tooling, you have to mature the performance in the browsers. There are a lot of people that have to learn new skills to experiment with the medium. There are a lot of things that have to happen before you will actually see good content out there. And all the native platforms are already in that process; all the tools and content and game studios are improving very fast. It is the chicken-and-egg problem: if there are no users, there's no content, and if there's no content, there are no users. And if WebVR doesn't ship in release browsers, it's not going to be taken seriously. People are not going to invest in creating high-quality content if they cannot target users. So for us, it's more important to ship something quickly than to ship something perfect. This is why we expressed intent to ship 1.1, even though we know that it's not the final ideal API we want. But yeah, we are trying to get consensus and to convince other browsers that this is a good idea, and see what happens.
[00:15:39.081] Kent Bye: Yeah, and you just finished up your one-year anniversary of This Week in A-Frame, which is curating some of the best of A-Frame examples that you're seeing out there. And I'm just curious to hear from your perspective some of the ones that really stand out for you as you're thinking about what's been accomplished with A-Frame over the last year.
[00:15:56.742] Kevin Ngo: Yeah, we've seen lots of A-Frame experiences across lots of different disciplines. We've seen lots of use cases for journalism. There's a notable one by Amnesty International UK called Fear of the Sky, and it takes people to the barrel bombings in Syria. And in a similar vein, the Washington Post also created one that takes people to Mars and shows them all the different activities in the space program. People have been using it for education; there's a researcher at Washington State University who created an experience for medical research and education, where you can use Leap Motion to manipulate human anatomy. People have been creating just fun sandboxes, like a city builder, where you have access to hundreds of different models and you can just build a city with an HTC Vive and kind of tell a story.
[00:16:45.771] Diego Marcos: Yeah, I like to see experiences that actually try to take advantage of the particular characteristics of the web.
[00:16:52.076] Kevin Ngo: Yeah, I like the ones that prototype with room scale and controllers. So unlike Native, where you have to create an experience, and it's really hard to share with other people your experiments, because they have to download and install it. With the web, someone can just make a select bar component for the controller and just share it out as a link. And people can check it out and see how well it works.
[00:17:15.572] Kent Bye: Yeah, and when you were showing your highlights at GDC at the WebGL meetup, I noticed that there were a lot more data visualization experiences, which I've seen elsewhere as well. Maybe you could talk a bit about some of the things that you've seen in data visualization and some of the toolkits that people may be starting to integrate from existing JavaScript libraries.
[00:17:33.446] Kevin Ngo: Yes, so a common library people use is D3.js for data visualization. So they'll take a common data set, like the one about petals and flowers that's a very popular data set, and just view it in 3D as scatter plots. But not only that, it's also room-scale integrated, so you can actually have a list of facets in front of you, and with your controller, you can reach out, grab a facet, and just plop it into a graph and see the graph change in real time. And not only did he do it with flowers, but there was a WebVR competition called Virtuleap, and out of the top 10 projects, I think A-Frame placed 9 of the top 10 experiences. One data visualization took all the winners and made a graph out of them, turning everyone's name and country into facets. And then you could actually take countries and create graphs out of those winners.
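The core of this kind of visualization is mapping data values into scene coordinates, the job D3's linear scales do on the web. A minimal stand-in for that mapping, with invented field names and axis ranges, could look like:

```javascript
// Map a value from a data domain onto a spatial axis range,
// the same job D3's scaleLinear performs in these visualizations.
function linearScale(domain, range) {
  const [d0, d1] = domain;
  const [r0, r1] = range;
  return value => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// Turn rows of flower measurements into 3D positions suitable for
// A-Frame entities (e.g. formatted later as position="x y z" strings).
function toScatterPositions(rows) {
  const x = linearScale([0, 7], [-2, 2]);   // petal length -> x
  const y = linearScale([0, 3], [0, 2]);    // petal width  -> y
  const z = linearScale([4, 8], [-5, -1]);  // sepal length -> z
  return rows.map(r => ({
    x: x(r.petalLength),
    y: y(r.petalWidth),
    z: z(r.sepalLength)
  }));
}

// Example: one hypothetical flower measurement.
const positions = toScatterPositions([
  {petalLength: 3.5, petalWidth: 1.5, sepalLength: 6}
]);
console.log(positions[0]); // { x: 0, y: 1, z: -3 }
```

Each resulting point would then become an `<a-sphere>` or similar entity in the scene, which is why the z range keeps points in front of the default camera.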
[00:18:22.985] Diego Marcos: Yeah, the web is the ideal environment to do this mashup of data. You can access tons of APIs already out there, and you just need a way to visualize and present the data to the user in interesting ways. And we are looking forward to seeing more of those kinds of experiments, taking the unique characteristics of VR and being able to represent that data in interesting ways. So we are at the IEEE VR conference, this academic conference, and there are so many papers that get published with interesting interaction techniques where you can read the actual paper, but you can never actually try those techniques. And we are looking forward to seeing some of the research community adopting the web to actually preserve and archive those interesting experiments, so that, for posterity, people can go back, read the paper, and actually try those experiments.
[00:19:21.033] Kent Bye: I think one of the things that would help is once the toolset gets to the point where you can export a Unity experience, because a lot of the people that are creating these experiences are creating them within these game engines, and being able to export that and put it on the web I think is going to help. One question that I had is around progressively downloaded content. I think one of the challenges of a binary is that you often have to download the entire thing before you can start to experience it, and one of the unique affordances of the web is this kind of progressive download of content. So is that something that's also been implemented, or is there a limitation there in terms of the streaming hooks within glTF to be able to actually progressively stream assets down?
[00:20:05.611] Diego Marcos: Yeah, so I can tell you about the experiments we've done so far; we haven't gone very deep in that regard. Right now in A-Frame, you have a way to download assets upfront, and that is going to block your rendering loop, so once you start rendering, everything is ready to go. And we are experimenting with it, but it's kind of hard, right? Because there's tolerance on the web: when you load a website, if the images start to pop up after the first render, there's tolerance for that. People are OK with that. But if you're in a VR environment and there's an object that you're supposed to grab and interact with, if that object takes a while to show up, then the experience is not going to work. We still have to figure out how to do this in a graceful way. But right now, as it is today, A-Frame blocks for assets to load, and then we start rendering. You can break down your experience into different levels, so you just load one part and start rendering that part. And then when you switch to a different level, you block rendering, show a loader, load new assets, and show the new level. But yeah, we need to explore more advanced workflows to make it more progressive and not have those stopgaps in between. Yeah.
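The blocking upfront download Marcos describes is A-Frame's asset management system, the `<a-assets>` block, which holds rendering until its contents have loaded. A sketch, with placeholder file names:

```html
<a-scene>
  <!-- Everything declared here is fetched before the scene renders,
       so nothing pops in after the first frame. -->
  <a-assets timeout="10000">
    <img id="ground-texture" src="ground.jpg">
    <a-asset-item id="ship-model" src="ship.gltf"></a-asset-item>
  </a-assets>
  <a-plane material="src: #ground-texture" rotation="-90 0 0"
           width="10" height="10"></a-plane>
  <a-entity gltf-model="#ship-model" position="0 1 -3"></a-entity>
</a-scene>
```

Entities then reference the preloaded assets by id, which is why grabbable objects are guaranteed to be present when rendering starts.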
[00:21:20.229] Kevin Ngo: Yeah, on the web, everything is asynchronous, so you have really fine-grained control over when to download your assets. For A-Frame, that would just mean when you're ready, you set the source of a model on an object, and it'll start to download. So maybe when you get closer to an object, you'll start to download it. Or you can do some level-of-detail stuff: when you're far away, you download a low-res version of an image, but as you get close, it progressively downloads a higher-res version. That means you don't have to download up front; you only download what you need. And there's also an API called Service Worker, which lets you cache stuff offline. So you can define which parts of your app you want the user to download and store locally, and make parts of your app similar to a native app if you want it to be offline capable. So with the Service Worker API, you have really fine-grained control over what you cache, what you download, and which servers you download from.
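The level-of-detail idea Ngo sketches comes down to choosing an asset URL based on viewer distance. A minimal version of that selection logic, with invented thresholds and file names, might be:

```javascript
// Pick which version of an asset to fetch based on distance to the viewer.
// Each level lists the maximum distance at which it should be used.
function pickLodSource(distance, levels) {
  // Sort nearest-first, i.e. highest resolution first.
  const sorted = [...levels].sort((a, b) => a.maxDistance - b.maxDistance);
  for (const level of sorted) {
    if (distance <= level.maxDistance) return level.src;
  }
  // Beyond all thresholds: fall back to the lowest-resolution asset.
  return sorted[sorted.length - 1].src;
}

const levels = [
  {maxDistance: 5,   src: 'statue-high.glb'},
  {maxDistance: 20,  src: 'statue-med.glb'},
  {maxDistance: 100, src: 'statue-low.glb'}
];

console.log(pickLodSource(3, levels));   // statue-high.glb
console.log(pickLodSource(50, levels));  // statue-low.glb
```

In an A-Frame scene, the returned URL would be applied with something like `el.setAttribute('gltf-model', url)` as the camera moves; `setAttribute` is the real entity API, while the thresholds and model names above are purely illustrative.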
[00:22:11.127] Diego Marcos: Yeah, more advanced caching would be really helpful, even caching across experiences, right? If I'm using a tool that I want to use across experiences, a nice way to cache both the logic and the 3D model associated with that tool, so it's immediately available when I transition between experiences. Some sort of concept of a virtual backpack that you carry with you that is already preloaded. Yeah, that's very hand-wavy, but we want to get there very soon.
[00:22:39.787] Kent Bye: So I'm curious to hear from you some of the other either web standards or technologies that you think are going to be a key part, whether it's cryptocurrencies, distributed file systems, distributed identity. What are some of the other things that you see are still in the nascent stages of forming or have already been well-specified that you think are going to kind of fit into this web VR ecosystem?
[00:23:04.207] Diego Marcos: For me, for the metaverse to flourish, it has to have an economy eventually. An economy means exchange of wealth, which could be traditional money or cryptocurrency, and exchange of goods, which obviously are going to be virtual goods. And a prerequisite for an economy is having both the currency and a sense of identity, so that two entities can exchange wealth and goods. And in order to have identity, you need some sort of mechanism to be able to say that you are you and to prove that another person is the person you want to exchange a good with. And right now, those services are provided by walled gardens: you have Facebook accounts, you have Google accounts, you have Steam accounts, and there's nothing truly decentralized. And I think if we want the free metaverse that we think the world deserves, we need open and distributed alternatives for those kinds of services, like identity and currency, that work across borders, across companies, and are not controlled by a single entity. And those are things we want to experiment with. There are a lot of startups around blockchain trying to pull this off, but I haven't seen anything so far that is really easy to use and solves exactly the problem that I think we need solved. But to be honest, I've just read documentation and websites. And if someone can point me to something that we can actually use today, I'm more than happy to integrate it into A-Frame and start to explore those worlds.
[00:24:39.144] Kent Bye: I'm looking forward to catching up with Philip Rosedale of High Fidelity. I think he's been thinking deeply about a lot of these things. I don't know if he has anything ready to use yet, but yeah, I don't know if you have any other sort of technologies that you see are kind of going to be fitting into this ecosystem.
[00:24:52.763] Kevin Ngo: So Web Payments is an API that's still nascent and still under development, but I don't know where Bitcoin or decentralized currency fits into that. We'll probably be depending on the blockchain, definitely for payments, and for avatars as well, for user identity. I've heard of one called Web Profiles, which is like JSON information about yourself that you store in the blockchain, so people know who's who, as well as knowing what you own in this virtual space. We've seen some experiments within the A-Frame ecosystem where someone created an experiment called Sense of Promise. It's this very relaxing area with different rooms, and within each room is an elixir, and you have the option to pay, like, 50 bucks for an elixir. Once you buy that elixir, that payment is processed on the blockchain and it's gone forever. Once you buy it, no one else can see it anymore, because you kind of own it, in a sense.
[00:25:45.461] Diego Marcos: The sense of ownership that is associated with the concept of identity is: I have an object, I own it, it's mine, it's unique in a sense, and I can transfer that ownership. So yes, being able to register what things you own is also a mechanism that doesn't exist today on the web. The blockchain is pointed to as the solution for that, but there's really not much action on that front yet.
[00:26:09.735] Kent Bye: But there's also the element of permissions around the objects, because right now on the web, you could copy an image and put it on your site. There's also talk of a distributed file system, and how do you add a sense of ownership to something that's distributed? How does that work? There's identity, but there's also the aspect of locking down the permissions and the DRM around what people can and cannot have. And so I guess the larger, challenging question, when you talk about an avatar and what you look like, if that is your expression of your identity, is: how are you going to prevent people from capturing that?
[00:26:46.862] Diego Marcos: Yeah, that's interesting, because with music we've seen a little bit of that. People had to buy the physical medium, and owning that thing had some value. But as we've moved to digital media, the sense of ownership has totally changed. You are paying to access a stream of music, but you don't own the music anymore. And the concept of ownership in the metaverse, like owning an object, owning a pair of very expensive shoes in the metaverse, if you can copy it at zero cost, then differentiating yourself by owning a digital asset maybe isn't that easy anymore. And I don't know whether DRM, or the concept of uniqueness, is something we want to preserve in the metaverse. I think it's ingrained in human nature to be able to differentiate yourself from others, but I don't know if the ownership of things is going to be the basis for that differentiation. Because, yeah, owning doesn't have that much value anymore. I don't know. That's, again, very hand-wavy.
[00:27:52.913] Kevin Ngo: People buy hats in Team Fortress 2.
[00:27:57.217] Diego Marcos: Yeah, but that's in Team Fortress 2, where Steam is actually the source of truth that decides that you own this and that person doesn't. But if you have a system that is completely open, like the web is, you can grab any image, right? And just copy it automatically. There's no real protection of those assets. I don't know if you can actually put a mechanism in place that isn't super restrictive to protect those assets. Maybe we should just give up on that kind of concept of ownership and replace it with something else, right? But I remember, what was the book? Was it Ready Player One, or the one where the wealthier people have access to better simulations? So actually the scarcity is not in virtual ownership, but in access to computing resources. Maybe your avatar will have better reflections or special effects because you have access to more computing power, and at the end of the day you have control over a physical good, which is the actual machine that is running that simulation, but the virtual object itself doesn't have any value. I have no idea.
[00:29:04.278] Kevin Ngo: Maybe you'll be publicly shamed if you're wearing fake virtual Nikes you didn't authenticate. Does that person actually own these Nikes?
[00:29:11.942] Kent Bye: Yeah, these seem to be some open questions. There's also a lot around digital copyright management, the DMCA, copyright, and the laws that are on the books. So I think we're entering these early phases, before things get locked down, when some of these questions are really being asked and answered. Just in terms of multiplayer, I've seen some early experiments with multiplayer within the context of A-Frame. What are some of the underlying technologies that are actually allowing multiplayer collaborative experiences? Because to me it seems like that's going to be another hugely compelling thing.
[00:29:43.428] Kevin Ngo: So one API is WebRTC, which enables peer-to-peer transmission of data, video, and audio. It's supported on desktop browsers, but unfortunately mobile browsers aren't really supporting it yet. But we'll see a lot of multiplayer experiences start to depend on WebRTC. Another service that people use is Firebase, which is sort of like a WebSocket database service. WebSocket is another way to transfer data, but it goes through a server, so it's not peer-to-peer.
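The distinction Kevin draws between the two transports can be sketched as a simple feature check. This is a minimal, hypothetical sketch (the function name and return labels are mine, not from any real library), assuming a browser-like global object is passed in:

```javascript
// Hypothetical sketch: pick a multiplayer transport by feature detection.
// RTCPeerConnection is the real WebRTC entry point; WebSocket connections
// are relayed through a server, so they are the non-peer-to-peer fallback.
function chooseTransport(globalObj) {
  if (typeof globalObj.RTCPeerConnection === 'function') {
    return 'webrtc'; // peer-to-peer data, video, and audio
  }
  if (typeof globalObj.WebSocket === 'function') {
    return 'websocket'; // goes through a server, not peer-to-peer
  }
  return 'none'; // no real-time transport available
}

// In a desktop browser this would typically return 'webrtc';
// on a mobile browser of the time, often 'websocket'.
const transport = chooseTransport(typeof window !== 'undefined' ? window : {});
```

In practice a multiplayer layer would still need a signaling server even for the WebRTC path, since peers have to exchange connection offers somewhere; this sketch only captures the capability check.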
[00:30:14.517] Kent Bye: Cool. And finally, what do you each see as the ultimate potential of virtual reality, and what it might be able to enable?
[00:30:23.058] Diego Marcos: Yeah, I think this is my second time answering this question, and I'm going to keep almost the same answer. For me, it's transcending the physical reality of human society. Sometimes people say that AR is going to be the future, because they see current workflows and tasks that they carry out in the physical world improved by AR. But I think VR is going to open up a completely different kind of human interaction, one that's not attached to the physical world and is not limited by the laws of physics. You can fly, you can be a giant, you can be very, very small. You can teleport yourself anywhere instantaneously. It's going to open up mind-blowing kinds of societies and new habits and social conventions that are completely unthinkable today. So that's, for me, the ultimate potential of VR: transcending physical reality and seeing what's on the other side.
[00:31:20.189] Kevin Ngo: It's going to be like what science fiction mostly describes: this shared collective virtual space that everyone interacts in, plays in, does their business in. And it's kind of like going to space, only we won't even need to go into space anymore, because this virtual universe is the next frontier.
[00:31:37.265] Kent Bye: Awesome. Well, thank you so much.
[00:31:38.767] Diego Marcos: Yeah, thank you. Thank you very much. Thank you.
[00:31:41.333] Kent Bye: So that was Diego Marcos and Kevin Ngo of Mozilla. They're both working on A-Frame, which is an open source framework for building WebVR experiences, for putting VR on the web. So I have a number of different takeaways from this interview. First of all, A-Frame is super exciting. As I look at the different communities that are online, there's a Slack channel for WebVR as well as a Slack channel for A-Frame, and the A-Frame one actually has more people in it, and participating in it, than the WebVR Slack channel. So it's really a powerful framework for quickly prototyping and making WebVR experiences. In talking to Brandon Jones in episode 357, the previous episode, I got a lot more insight into why I think A-Frame is so compelling and interesting. One of the things Brandon was saying is that a lot of the aspects of WebVR deal in imperative code. In other words, WebGL is like a black box: you give it the input of what you want to see, and then it basically paints out pixels. You have no ability to go in there and see any sort of semantic information about what's actually being painted into that scene. And in order to drive that WebGL experience, you're using something like Three.js, which is basically a JavaScript framework to feed into that black box. But for a lot of people who want to create an experience, that's still a lot of complexity, just to do all the JavaScript bits. So essentially, A-Frame is driving a Three.js interface to create the WebGL. At the very top level, you have A-Frame as this declarative layer where you can start to lay out objects, in a way that is very similar to how you would use the Unity editor: you put a 3D object in there, you assign some scripts and behaviors that you want that object to have, and then you create your scene that way. 
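The entity-component idea described here can be illustrated with a toy sketch in plain JavaScript. To be clear, this is not A-Frame's actual implementation; the `registerComponent` and `createEntity` names below only echo the pattern, in which declarative attributes on an entity determine which registered behaviors get attached to it:

```javascript
// Toy entity-component sketch (assumption: illustrative only, not A-Frame's
// real code). Components are registered by name in a shared registry.
const components = {};

function registerComponent(name, definition) {
  components[name] = definition;
}

// An entity is created declaratively from a bag of attributes. For each
// attribute whose name matches a registered component, that component is
// initialized against the attribute's value and attached as a behavior,
// much like A-Frame's component lifecycle hooks.
function createEntity(attributes) {
  const entity = { attributes, behaviors: [] };
  for (const name of Object.keys(attributes)) {
    if (components[name]) {
      entity.behaviors.push(components[name].init(attributes[name]));
    }
  }
  return entity;
}

// A "position" component, analogous to placing an object in a Unity scene.
registerComponent('position', {
  init: (value) => ({ type: 'position', value }),
});

// Declaring the entity; 'color' has no registered component here, so it
// stays plain data while 'position' becomes an attached behavior.
const box = createEntity({ position: '0 1 -3', color: '#4CC3D9' });
```

In real A-Frame the same idea is expressed in HTML markup like `<a-box position="0 1 -3" color="#4CC3D9">`, with the framework translating those attributes into Three.js objects behind the scenes.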
It even has a visual editor where you can go in and move things around. So A-Frame is like this interface for you to drive all these black boxes. And because A-Frame as a framework is agnostic as to what the actual specification is, you don't really need to worry about the nuances of WebVR 1.1 versus 2.0. You can just create your scene with A-Frame code and it takes care of everything else for you; you just need to update the JavaScript file once the official versions of WebVR are out there. There's been a little bit of debate within the WebVR community as to when this should be made available, because right now it's a huge pain to actually get WebVR up and running on your system. You have to go download experimental web browsers if you're on a PC, and enable all the right flags to get it to work. If you try to see some of these experiences on mobile, you can be in Gear VR, and then if you try to go into a WebVR scene, it already sees that you're in VR, and so it doesn't always take. Oftentimes the best way to see a WebVR experience is to not be in VR at all, and to click the little icon in the lower right-hand corner, which will jump you into a Google Cardboard-style view so you can see the scene. If you're already in VR, there are all these layers of complexity where you have to get out of VR and click it, and sometimes it works, sometimes it doesn't. There are two browsers on the Samsung Gear VR. There's the Samsung browser, which I found to be probably one of the best ways to look at different WebVR experiences, though I found that my S6 phone was overheating; I've heard the S8 phone is a lot better. And the Carmel web browser that Oculus has released is now also in Gear VR, so there's a little tab for the internet; if you open up Gear VR, you'll see an internet tab. 
Sometimes the WebVR experiences will work there, such that you click the button and it kicks you into the scene. But I think right now probably the most solid way to be a developer for WebVR is to do it with an HTC Vive or an Oculus Rift, where you just have your development environment, you can go in and out quickly, and you're using one of these experimental web browsers. For most of the people trying to see some of these WebVR experiences, though, it's not that easy. So that's part of the reason why Mozilla wants to push out a 1.1 version. They have it slotted to come out in August, in Firefox release 55. They've got all the code there, and they're willing to do the work required to maintain the technical debt of this interim period between 1.1 and the next version. They're fine with that. They want to get it out there, they want the tooling to develop, and they want people to start making experiences, because you need the content to inspire people and drive them to actually use WebVR. If it launches without any content, then people will just shrug: okay, no big deal. And I think this content is what is really going to drive WebVR. The amazing thing about WebVR is that you're able to pull in all these different dimensions of the power of the web, stuff that people have already been creating, whether it's data from different places, different visualizations, or different JavaScript libraries, whether that's jQuery, D3 visualizations, and AngularJS, or ReactJS. Facebook has React VR, which has a tighter integration between ReactJS and React VR, but there have also been integrations between A-Frame and React, where you can start to have these different combinations. So basically, with the power of the open web, you're combining all these different open source libraries as well as the data that's going to be coming in. 
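The patchy browser support described above is why WebVR pages typically feature-detect before choosing how to render. Here is a hedged sketch, assuming a navigator-like object is passed in; `navigator.getVRDisplays` is the real WebVR 1.1 entry point, while the function name and the `'magic-window'` label are just illustrative:

```javascript
// Hypothetical sketch of the feature detection a WebVR page would do.
// navigator.getVRDisplays is the WebVR 1.1 API surface; when it is absent,
// the page falls back to a flat "magic window" / Cardboard-style view,
// like the little icon in the lower right-hand corner described above.
function pickRenderPath(nav) {
  if (nav && typeof nav.getVRDisplays === 'function') {
    return 'webvr'; // native WebVR 1.1 support (e.g. Firefox 55 on desktop)
  }
  return 'magic-window'; // no headset API; render flat with touch/drag look
}

// In a real page you would call pickRenderPath(navigator) and, on the
// 'webvr' path, await nav.getVRDisplays() to enumerate connected headsets.
```

Libraries like the webvr-polyfill took this a step further by filling in `getVRDisplays` on browsers that lacked it, which is part of how A-Frame could stay agnostic about which spec version the browser shipped.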
And I think we're having these experiences of the early days of what we could imagine will become the metaverse. Again, I think the visual fidelity of Unity and Unreal Engine experiences is going to be unlike anything else. But the power of WebVR is going to be in being able to go to a single place and link out to all these other experiences through portals and link traversal, which should be coming out in the 0.6 version of A-Frame within the next week or so. One of the things I'm also excited about is that at some point, I think Unity is going to figure out a way to export some of their projects into something that is compatible with, and maybe looks a little bit like, A-Frame. Tony Parisi was recently hired there as the head of AR/VR strategy. He's one of the original authors of VRML, and so he's been thinking about how to create the declarative open web for over 20 years now. And Vladimir Vukićević, one of the early pioneers of WebVR, is now working at Unity. So these are two key players at Unity working on these emerging technologies who believe very firmly in the open web. I expect at some point Unity is going to be outputting something that looks very similar to A-Frame, if not A-Frame itself. The final point I wanted to bring out from this conversation is that Diego said something like, he's really looking forward to transcending the limitations of our physical world. And I wanted to elaborate a little bit, because in some ways you could hear that and think, oh wow, that's really dissociative; we need to be connected to this earth and this planet. 
But I think the essence of what he means goes back to the interview I did with Cory Doctorow in episode 536, where he talks about a post-scarcity reality: you live in a world where there are no limitations on what kind of access you have to virtual objects. For a lot of the digital assets that you're going to have access to in these virtual experiences, I think there's going to be less of an emphasis on owning those objects and more on just having access to them. We already kind of see this with something like Spotify, for example: you don't have to own the entire music library, but you can have access to it, and have the experience of it, through a subscription model. So we're already moving through this paradigm shift from an information age to an experiential age, where you're actually paying for the experience of having access to different objects. And I think that's where things are going to go online as well: rather than buying things to be yours, it's more likely that we're going to pay for access to have a different experience, kind of like going to a movie theater or a concert or a show. It's going to be that type of model, where you're paying to go have some type of experience online, and it's going to be less about what you're owning and carrying with you, whatever digital artifacts you gather from that, and more about the experiences that you're able to have. You're not going to be as attached to these digital virtual objects, because you can essentially let go of them at the end of the day. 
Now, I do think there are exceptions to that, because I do think there is going to be a sense of personal property within the metaverse, and it's likely going to be connected to you: your image and likeness, your avatar, and your clothes. I think the one place where you'll want to remain unique is your identity and your expression of who you are and your values. With that in mind, I think avatars and clothes and these different signals to express to other people who you are as a person are not going to go away, and there are going to need to be different ways of accounting for that kind of personal property. So I'm personally super excited about WebVR as well as A-Frame, and I'm starting to dive in and create some of my own experiences with it. And at Google I/O, Google mentioned that they were starting to use A-Frame to create Google Expeditions. Of the four different types of presence that we have, emotional presence, social and mental presence, active presence, and embodied presence, I feel like WebVR is going to be strongest when it comes to mental presence and learning and education. With the power of the open web, there are going to be a lot of ways to pull down information from all over the place to learn and educate yourself. And I think that in the long trajectory of things, Google is all about mental presence and learning, and basically organizing all the world's information. So with the combination of WebVR and eventually WebAR, you're going to be able to start to overlay information onto the real world. And I think the open web is the best metaphor for the future of these augmented reality applications that give you additional context and information about the world around you. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. 
And if you enjoy the podcast, then please do tell your friends, spread the word, and become a donor. Just a few dollars a month makes a huge difference. So you can donate today at patreon.com slash Voices of VR. Thanks for listening.