#884 VR for Good: WebXR, Design Challenges of the Wider Web, & the Future of Web Assembly with Trevor Flowers

In the final interview of this VR for Good series, I’m featuring the WebXR Device API and how this represents a new open standard that allows for the easy creation and distribution of immersive content.

I had a chance to sit down with one of the members of the Immersive Web Community Group, Trevor Flowers, to talk about his involvement in helping to shepherd the open standards process with the likes of Google, Microsoft, Amazon, Facebook, and Apple.

Flowers compared the shift from dynamic 2D web pages to either fully immersive worlds or a portal into a virtual world as being as big, if not bigger, than the shift from print layout to cross-platform, cross-device, cross-form-factor, fully responsive and reactive design. It's taken 20-30 years to evolve and formalize the design frameworks to move from static print to dynamic and context-dependent layouts. Now adding in a lot of new display types, control types, and input types creates a combinatorial explosion of design decisions that need to be made. Based upon the TodoMVC Rosetta Stone of implementing the same app in different frameworks, Flowers estimated that existing design frameworks can help reduce the number of decisions for a flat app to around 200. But for an equivalent spatialized version of the app, it explodes out to around 3,000 decisions. Flowers estimates that it will take another 20-30 years for the design patterns to normalize to the point where we start to get bored with the options.

Flowers also has a background in networking, and had some super insightful comments about how WebAssembly may help to catalyze edge-device protocols for rendering and delivering content on local mesh networks. It's still very early days, but he expects that there will be a lot of compelling WebXR use cases for WebAssembly.

Finally, we covered some of Flowers' projects with his art, design, prototype engineering, and product consultation company Transmutable, where he's creating a digital production studio in WebXR for independent content creators. We also talked a bit about his "Wider Web" responsive design framework called PotassiumES, and some of the projects he wants to see to help ensure a lot more trust and privacy verification services for XR devices.

It’s certainly an exciting time for the open web, and there are so many new design problems that are being opened up by the spatial web that will require expertise beyond existing web development & design teams. Flowers says that future spatial web teams will likely have expertise from 3D modeling, environmental design, lighting, theater, architecture, industrial design, and beyond.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR Podcast. So continuing on my series of looking at the VR for Good movement and specifically looking in this last section at different architectural or technological innovations. So this conversation is with Trevor Flowers, who has been a part of the W3C Immersive Web Working Group, which was announced back on September 24th, 2018, for the open standard for WebXR. He's also got his own company, Transmutable, where he's working on this video production studio for WebXR. He's a prototype engineer and working on PotassiumES, lots of really great things within the overall WebXR ecosystem. So just to set a bit more context as to why I think WebXR is extremely exciting. You have different models of closed and open, and there's a huge value for these vertically integrated systems where you can get an Oculus Quest and you have all the systems of the Oculus Home. You buy an experience, it's downloaded, and you are able to have this native application to play. It's all seamlessly integrated. But at the same time, if you want to get an experience onto the Oculus Quest, then you have to kind of jump through all these different hoops and maybe not have specific things in your experience. You can't have any cryptocurrencies, nothing that, you know, violates the terms of service. And, you know, there's a lot of content restrictions about what you can and cannot have. And, oh, you also have to, like, have something that's completely new and innovative that is attractive to the curators because, you know, they don't want to have anything that pollutes VR or anything that's going to be a bad experience. And so they're going to be super restrictive as to what actually gets out there. So we're in the situation where one of the most popular VR platforms, potentially, of the future is going to be extremely highly curated and very difficult to actually get your project onto. So WebXR creates this almost like backdoor, where there's the Oculus Browser, which is based upon the Chromium web browser, which back in Chrome version 79, which was launched back on December 10th, was officially implementing the WebXR standard, which gives a way to allow you to deliver your virtual reality experience onto a web browser, onto a website, and people can go to a web browser through a link and be able to jump right into your experience. So that's the dream of this, you know, getting away from just the closed walled gardens. You know, it's great to have Steam. It's great to have Oculus Home to have that curation. It's great to have itch.io for the indie developers. But, you know, for these smaller experiences, or maybe even like the VR for Good projects, where you want people to have a variety of different ways to be able to get access to this immersive content. Maybe they don't have a VR headset, but they still want to have this portaled view. And WebXR was originally WebVR, but you're able to eventually have the AR capabilities. So you're going to be able to actually deliver AR capabilities through WebXR as well. But that doesn't come with this initial launch. That's going to be coming later in the future.
But VR is being implemented, and you can have other frameworks to use, whether that's A-Frame or Three.js or BabylonJS, to be able to actually put your content onto a website, and then people could, through a variety of different headsets and ways, be able to have access to that content, which I think is a huge push forward of where immersive computing is going to go. So, we'll be covering all that and more on today's episode of the Voices of VR podcast. So this interview with Trevor happened on Friday, January 10th, 2020 in Seattle, Washington, while I was in town for the Impact Reality Summit. So with that, let's go ahead and dive right in.
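To make that concrete, here is a minimal sketch (an editorial illustration, not code from the episode) of putting a WebXR-capable scene on an ordinary web page with Three.js. The import paths, version, and the spinning-cube scene are assumptions about a typical Three.js setup.

```typescript
// Minimal sketch: a Three.js scene served as a normal web page that offers an
// "Enter VR" button via WebXR. Import paths assume a standard Three.js install.
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;                                   // let Three.js manage the WebXR session
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));   // shows "Enter VR" when supported

// A single object so there is something to look at in the headset.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.3, 0.3, 0.3),
  new THREE.MeshNormalMaterial()
);
cube.position.set(0, 1.5, -1);
scene.add(cube);

renderer.setAnimationLoop(() => {                             // runs per frame, in or out of VR
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```

Anyone with the link can open this in a browser and, on a supporting headset, jump straight into the experience.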

[00:03:36.543] Trevor Flowers: Hi, my name is Trevor Flowers, and I run a small studio called Transmutable. And we do mostly prototype engineering, but we're also doing a little bit of product development for something called Transmutable Soundstage, which is a video production suite for fully digital soundstages and sets. There's no building. There's no physical cameras. It's all accessed through the web using WebXR and web browsers. And it's designed to create basically a full television talk show style quality system for people who have small teams and don't have the budget to pick up something like a big soundstage rental and all of the fees associated with that.

[00:04:27.166] Kent Bye: Yeah, and so maybe you could give me a bit more context as to your background and your journey into virtual reality.

[00:04:33.678] Trevor Flowers: Sure. Well, I mean, the first thing I did on a professional level was way back in 95, I worked on VRML internally for a university; the vet school at the University of Georgia needed someone to work on a virtual surgery application. And VRML was a brand new thing. And so I sort of caught the bug in that project. And like most projects of those days, it didn't really go anywhere, but it sort of inspired me to dig deeper into the technology itself. And then the web sort of happened around me when I was sort of coming online as a person who made things. And so I ended up, after my CS degree, I ended up in San Francisco commuting down to Menlo Park to work for Be, which made the Be operating system. And there they had a browser called NetPositive. So I sort of started working on the guts of browsers themselves and thinking about how those two things were related: the early days of VR and the early days of the web. And it's just sort of grown out from there. I did a stint at PARC in the ubiquitous computing group there as a prototype engineer. And then I've sort of since then split my time either doing tech startups or doing what I'm doing now, which is prototype engineer for hire, project manager for bigger projects that need to pull together lots of different types of skills and types of people, with sort of a dash of product design and development as well.

[00:06:11.561] Kent Bye: And maybe you could catch us up on your involvement in helping develop the WebXR spec and the modern resurgence of VR, and how it came to be that you started to work in the W3C Working Group in order to help solidify what the WebXR specification even is.

[00:06:28.169] Trevor Flowers: Sure. So at some point I was hired onto Mozilla into their mixed reality team to be an AR researcher. So mainly to work with Blair MacIntyre, who has been working in AR for decades. So I was really lucky to be able to sort of join him and do some prototyping. And of course, it's at Mozilla, and it's a web-first sort of oriented place. And so we were lucky to work with some of the original people who maybe five years ago or so started working with WebVR, which is this thing that you can use in browsers to talk to headsets, VR headsets. And so as part of that work, we were building prototypes. We have an application called WebXR Viewer that was sort of an early playground for us to play around with ways that people could do portal augmented reality. So you hold up a tablet or a mobile phone and you look through it into some either fully virtual space or an augmented space. And in doing that work, I ended up working in the W3C, in the Immersive Web Community Group. And so, as I'm sure you have discovered, any time you show up in a group and you're excited about working on things, eventually they make you the chair. Because you're the one who's willing to do the annoying grunt work of keeping a community flowing, keeping people happy working on the things that they're happy on, bringing in new people, making sure they fit into the group. So a little bit of community organizing on my part there. But mainly the WebXR Device API sort of flowed out of the original WebVR API, and we expanded the scope to include augmented reality. So that's why it's WebXR instead of WebVR 2.0.

[00:08:20.739] Kent Bye: Yeah, I know that I've been tracking the evolution of WebVR, WebXR, since the very beginning of the Voices of VR, where I talked to Vlad Vukićević and Tony Parisi, you know, Tony Parisi, one of the co-founders of VRML, and then Vlad Vukićević, who helped create WebGL, as well as worked on the very early iterations of WebVR, before it was even called WebVR. But my tracking of the story was kind of like, oh, they were moving towards the DK1, DK2 era, developing the WebVR spec. And then they were going to maybe sort of ship it. And then it was almost like at the 11th hour, Microsoft came in and was like, oh, hey, what if we could make this for AR as well? And then it was a bit of a stepping back and having to re-architect it so that it was a little bit more abstracted, so that it wasn't so specific to VR, but it was going to be a specification that you could build upon. So there was a bit of this hesitation to push out an official version of WebVR, because you didn't want to have to have backwards compatibility forever of something that was really kind of half-baked based upon the very early versions of the DK1 and DK2, and letting it play out a little bit to see where the XR industry went in terms of the default input controls and what type of aspects around safety and security. I mean, lots of different things that you had to take into consideration. And so it's been a huge waiting game for having a shipped version of something that could interface with a VR headset without having to do special flags or be able to just have some sort of interface. And now that Chrome has shipped with some of the early draft specifications of WebXR, it can actually go into Chrome or the Quest, and I started to play around with it. And I was like, OK, finally, I'm able to put code on Glitch and then be able to pull it up in a browser and start to do this iterative process. But it's been a long journey for the last four or five years that I've been tracking this. And so from your perspective, what has been happening in that time?

[00:10:22.011] Trevor Flowers: Sure. So, the thing about web standards that I think is both a benefit and frustrating to a lot of people is that they are slow. And so, this has lots of side effects. Some of them positive, some of them negative. In the case of the WebVR to WebXR sort of standardization process, a lot of things changed around the environment. Like, as you know, hardware is changing rapidly, even the way that people think about delivering experiences, from whether they're sort of single sessions created by a single organization or person, and it's sort of all encompassing, to something where you have multiple experiences running at the same time. There's just many pieces and parts of the culture and technological underpinning of XR that rapidly changed. And so a lot of the work has been actually the balance between scope creep, you know, obviously if the community group, and eventually the working group when it was created, if we spent all of our time sort of taking every new technology that was coming to the domain and trying to create an API for it, we would never ship. So that's one side of the urge. And the other side of the urge is to have something that we believe is a foundation that won't break in a year or two when some brand new technology comes out and changes it. So the time that it took to get this done was mainly around trying to honor the promises of the web, promises that people often don't know why they believe, but they're just sort of baked in. Things like, if you build a webpage now, in 10 years, that webpage will work. If you point your web browser at a web page, it won't take over your machine, and it won't take over your microphone without you knowing that it's taking over your microphone. Things like that that are sort of built into the web as sort of core assumptions are actually really tricky for XR. You know, if you're talking about a headset that has six forward-facing cameras and two cameras pointed at your eyes for eye tracking to see where you're looking, the open question of how you communicate that as a user using a web browser to a web page is actually a really complex thing. So scoping and then also maintaining what we expect for the web has turned out to be much more work than I think many people expected.

[00:12:55.700] Kent Bye: And as I was getting ready to talk to you this morning, I was just reading through the different W3C documentation, the different explainers. And it feels pretty low level. It feels like this is something that a library like Three.js or Babylon.js or even A-Frame would use, A-Frame being a further abstraction where you can do declarative markup language, but underneath, A-Frame is interfacing with it. But they have their own ways of interfacing with the WebXR specification. This feels like a method for the web to be able to interact with an immersive device, VR, AR, whatever ends up going in the future to these spatial computing devices, that is able to take some sort of spatial information, pose detection, frames, all these different things that are really kind of abstracting what the elements of an immersive experience are, like the spatial computing, and then trying to create this interface. But for most people, they're probably never going to actually touch any of the WebXR code. It's maybe the lower level libraries that are doing that. But if you want to have your hands deep into the code and do the integration yourself, the specification is there. But it allows the ability for many different types of frameworks to be able to start to use the web to interface with XR devices. That's sort of my take after looking at it.
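As a rough sketch of the low-level flow described here (an editorial illustration, not code from the spec documents or the episode), the core loop is: request a session, get a reference space, and read the viewer pose each frame. The feature names are the ones defined by the WebXR Device API; everything else is an assumption.

```typescript
// Rough sketch of the raw WebXR Device API flow (no framework). Rendering setup
// (the WebGL layer) is omitted; libraries like Three.js or Babylon.js wrap all of
// this. The cast is only because default TypeScript DOM typings may not include WebXR.
const xr = (navigator as any).xr;

async function startImmersiveVR() {
  if (!xr || !(await xr.isSessionSupported('immersive-vr'))) {
    return; // no XR device or browser support: fall back to the flat page
  }
  const session = await xr.requestSession('immersive-vr');
  const refSpace = await session.requestReferenceSpace('local-floor');

  session.requestAnimationFrame(function onXRFrame(_time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace); // head position/orientation this frame
    if (pose) {
      for (const view of pose.views) {
        // draw the scene once per view (one per eye on a headset)
      }
    }
    session.requestAnimationFrame(onXRFrame);
  });
}
```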

[00:14:18.283] Trevor Flowers: Yes, that's right. So I sort of equate the WebXR Device API and the associated modules as sort of the equivalent of what you would send to a printer. So if you're an artist, you open up your graphical editor, you create the beautiful image that you have, and there's sort of a whole layer of manipulation and interaction around that data, and at the end of the day, that data gets sort of sent off in a very static, standardized way to any variety of printers so that they make something happen. In that case, they print it out on some sort of paper stock. Similarly, with the WebXR Device API, you open up some application, a modeler, a social application, and it sort of decides up in the application space how to present things and what needs to happen. And then it communicates with this lower level API. So I think the WebXR Device API is sort of step one of XR on the web. I do a lot of reading about the history of design, and I truly believe that the step from print design to interactive graphical design, like we have on flat screens today, is actually a similar scoped jump as we'll make from the dynamic web that we have today to what I'm calling the wider web. So, I had this realization a few years ago. There's this site called TodoMVC, which is sort of like the Rosetta Stone of web frameworks. There's a specified design for a one-page to-do list application. And it's relatively simple, but it's a fully specified visual design and interaction design, but there's nothing in the design that specifies how it needs to be implemented. And so what different application framework writers do is they implement this design in their framework, and then they upload it to todomvc.com. And so if you're shopping around to see how different app frameworks approach these problems, you can go and see them implement the exact same thing over and over. So, of course, my idea when WebVR came about and we started playing with augmented reality was, okay, I'll do that for the wider web. I'll do something that handles all three display modes. So, it handles flat, like the current web. It handles portal display mode, where you're looking through a tablet. And it handles immersive modes, like you would get in a headset. Well, okay. If we're going to implement those three modes, then we actually need three types of controls. We need page controls like the current web. We need overlay controls that sit on top of portal displays. And then we need immersive controls, or spatial controls, that sit in a VR space or an AR space. So there's three display types, three control types, and then there's just a mass of input types that are coming along. So there's keyboards and mouse and touchscreens like the current web, there's tracked controllers, there's voice input, there's eye tracking, eventually there's brain activity sensing. There's this sort of exponentially growing complexity around input alone that makes it radically different to design for than the current dynamic graphical displays. So, when I was looking at this, it seemed like a huge leap to try to take an industry right now that, frankly, has just figured out how to make cross-platform, cross-device, cross-form-factor responsive and reactive designs to then add in what is essentially 10x, 20x more complexity. I started figuring out the number of decisions I would have to make as a designer to make the TodoMVC app on a flat display versus all three displays of the wider web with all the control types and input types.
I think I had, the flat display was about 200 decisions, so doing it in something like React or Vue, you have to make kind of 200 decisions as a designer about how you're gonna lay it out and think about it, and then all the decisions are sort of built into the app framework. To do it manually for the wider web, it was something like 3,000 decisions, because it's combinatorial. It's display type times control type times input type. And it's clearly beyond the bounds of a manual capability. The idea that we would build one-off, from-scratch wider websites is just not a realistic opportunity. Yeah, so one of the things I realized early on was that WebXR was sort of step one. And then the second step is wider web frameworks that have these core capabilities built in so that it's not necessary to reimplement everything from scratch, because that will never happen. It's too massive. And I think that's a new thing for the web. We've never really had that situation before. We've had complexity, but it's all sort of driven towards the same kind of usage. And when you add in WebXR, it's a completely different story. So I think, in some ways, we're in a position to rethink a lot of things about the web, take the lessons we learned from the existing flat web around social interaction and harassment and abuse, like all the things we've sort of baked as an experiment, you know, on a massive scale. We take those lessons and sort of maybe review them, because the use of immersive devices makes it just that much more powerful and intimate, in a way that it's an opportunity for us to either make the web a much better place or make it a much worse place.
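Restated as back-of-the-envelope arithmetic (the multipliers below are illustrative assumptions, not figures quoted in the interview, but they land in the same order of magnitude Flowers describes):

```typescript
// Illustrative arithmetic only: restating the combinatorial estimate above.
const flatWebDecisions = 200;   // Flowers' rough count for a TodoMVC-style flat app

const displayTypes = 3;         // flat page, portal (phone/tablet), immersive (headset)
const controlTypes = 3;         // page controls, overlay controls, spatial controls
const inputGrowth = 1.7;        // rough fudge factor for the widening set of inputs:
                                // keyboard/mouse, touch, tracked controllers, voice,
                                // eye tracking... (an assumption, not a quoted number)

const widerWebDecisions = flatWebDecisions * displayTypes * controlTypes * inputGrowth;
console.log(Math.round(widerWebDecisions)); // ~3,060, the roughly 3,000 Flowers cites
```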

[00:20:00.269] Kent Bye: Yeah, well, I spent some time working in the realm of content management systems and Drupal open source, in the realm of making websites, and looking at these different frameworks and looking at the history of the evolution of the web, which was originally just like static HTML pages. And then you have more: you know, CSS, cascading style sheets, to be able to control the design, and then JavaScript, the scripting language. But then, you know, you had this metaphor of taking a newspaper and doing a one-to-one port, like, this is a newspaper, we're going to make our website look like a newspaper. And that was sort of like the first stage of propagating the previous medium in order to discover the affordances of the new medium. But then once mobile came along, then you started to have folks do mobile-first designs, completely redesigning it so that it could do mobile. But then you would want to have the responsive design, so that whether or not you were looking at the same HTML content on a web page on your PC versus on your mobile, it would sort of restyle it so that you would have an optimized experience for whether or not you were in the context of your desktop or on your mobile. And then with the native apps, then you have like all the different affordances of native apps, and then you have this sort of movement towards progressive web apps, so that you could have an HTML page but could make it feel like a native app. But it feels like in order to get to the progressive web app or the responsive design, in some ways you have to surrender to doing the mobile-first design, to seeing what are the true affordances of these new mediums. And so as I think about this progression into a progressive web app for the immersive web, it's like, well, if you're designing it to make it a good experience on the flat screens, either on a mobile or PC, without taking into consideration the unique affordances of a spatial web, then you're kind of missing out on the magic of the spatial web. So it feels like before we create an abstracted framework to be able to reduce everything down into a single system, it's almost like completely surrendering to the immersive-first design to see what is going to be unique to this design, and then eventually have this consolidation of all of the other mediums to be able to have uniform progressive web apps that could be seen on any of these different systems. But like you were saying, I think we're at this phase of really trying to figure out what are all the new affordances that we haven't really even explored fully yet.

[00:22:23.068] Trevor Flowers: I think that's right. There are some basic... well, first let me back up. There's been a few decades of work in various mostly academic groups, Georgia Tech and Stanford are the two that pop to mind, on how we take the lessons that we understand about graphical design and move them into a more spatially oriented system. So, for example, in graphical design, the program of the design is usually oriented around a grid, a two-dimensional grid. And all of the visual relationships, the use of white space, the balance, all of the things that we think of as the core elements of a visual design are oriented around this two-dimensional concept. And so the idea that we would take that, which is a finite rectangle on a grid, and then try to abstract out a lot of the design lessons that we have for a dynamic spatial display, it's a radical piece of work that needs to be done. There's some work to sort of explore it, but we frankly have not had enough people who could actually put on an immersive display and do a design experiment. So I agree with you fully that there's an entire realm of design experimentation that needs to happen before we'll settle into a design pattern in the way that we have with the flat web. I mean, it took us almost 15 to 20 years to go from what were essentially text-oriented web browsers, like Lynx, through the first Netscape browser that showed graphical, you know, actually showed images, all the way through to sort of a consistent design language now, where people are getting to the point where they're complaining that most corporate websites look the same, because they all use what's essentially a design system that more or less looks like Bootstrap, which is the design system that was open sourced and just sort of took off because everyone was tired of solving all the same problems. So to get to that point where we have enough of an internalized design system and metaphor and visual design that we can actually be bored with it, I think we're 20 years away from that for spatial computing. But one of the things that's clear is that if we want to maintain the ability of the web to meet people where they are, so if they're coming in from a library computer on a desktop, or they're coming in from a $1,000 mobile phone, or they're coming in from a $2,000 headset, if we want to just sort of meet people globally where they are, and not just a very niche part of the global population, then we have to work really hard from the beginning to have enough automation, enough sort of captured concepts in app frameworks, that we can even run these experiments. One of the things I tell people who are considering either going for a specific platform with a native application versus a web application is that with the specific platform native application, whether it's flat or spatial or portal, you will learn lessons about the design process for that specific thing, but you won't learn what is the more complex design question and how those things flow back and forth. The example I use is, you know, I'm on the bus, I'm commuting home, and I'm reading the news, and I find something that's a news story. I can read it on my phone, and it may offer me a way to bring up an augmented reality, sort of portal view, so I can get a sense of scale of some object that's in the story. And then it also may indicate that there's an immersive experience. And so I can sort of send off that link to my headset.
So when I get home, I put on the headset and I go to essentially the same webpage. But it's responsive and reactive to the hardware that I have in the moment. So even with that story, I have three different modes. They just sort of flow back and forth depending on my interest level and engagement level. And that sort of design is just not informed by solving it for one mode. And so the bigger question of how we actually get global reach, and how we make it so that people know that these new experiences are available to them, I think, is sort of a unique capability of the web.
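As a hedged sketch of what "responsive and reactive to the hardware I have in the moment" might look like in code (the mode names and fallback order are assumptions; 'immersive-vr' and 'immersive-ar' are the session types defined by the WebXR spec):

```typescript
// Sketch of per-device presentation: the same page probes what the current
// browser/device supports and picks flat, portal AR, or immersive VR.
// The cast is only because default DOM typings may not include WebXR.
type PresentationMode = 'flat' | 'portal-ar' | 'immersive-vr';

async function pickPresentationMode(): Promise<PresentationMode> {
  const xr = (navigator as any).xr;
  if (!xr) return 'flat';                                                  // library desktop, older phone
  if (await xr.isSessionSupported('immersive-vr')) return 'immersive-vr';  // headset
  if (await xr.isSessionSupported('immersive-ar')) return 'portal-ar';     // phone or tablet AR
  return 'flat';
}

// The page can then swap in page controls, overlay controls, or spatial controls
// without being rebuilt per platform.
pickPresentationMode().then(mode => console.log('presenting as', mode));
```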

[00:26:47.500] Kent Bye: Yeah, one of the ways that I think about what is happening is that you're kind of mashing up a bunch of different individual mediums, each of which have their own design philosophies and design frameworks, whether it's video games and game design, or film and the cinematic storytelling language, or theater and the spatial storytelling that comes with architecture, or being able to stimulate different aspects of your body with different embodiment and biometric data, different inputs there. But then there's the web and all the things that have happened on the web, but also literature and storytelling that happens with the abstractions of written language and communication. A lot of what happens on the web is the transmission of information and communication, and you have these whole other design frameworks for web applications and social networks that are kind of being blended into video games and the cinematic storytelling language, on top of theater and architecture. So it's like each of these have design frameworks that have been very well developed over long periods of time, and now we're kind of mashing them all together and figuring out what is the new way to look at presence, or to look at agency and story and symbolic communication, and how to use the full 360 space. For me, there's going to be the fusion of all these different design frameworks, but also perhaps a new framework to be able to take on the new affordances of mashing all these things together.

[00:28:13.585] Trevor Flowers: I agree. Much the way that people who grew up in print design are often mildly horrified by the dynamic layout nature of the web, especially moving from desktop to handheld. You know, we have to break a lot of rules. That mashup is going to be painful for a long time. We'll definitely, I mean, I'm excited about it. We'll find interesting and lovely things along the way, some of which will stick around and some of which we'll decide were terrible ideas. But yeah, I think it's not going to be a comfortable process in many ways, not just because immersive headsets are inherently uncomfortable for a pretty big percentage of the people in the world, but because the way we'll figure out design with this level of complexity and interaction is, I mean, it's a thoroughly chaotic system, especially because this is the first sort of transition for the web that has this scale of population. When we transferred from the desktop to the handheld web, the web was pretty small, honestly. You had to have quite expensive PCs, and Internet access was really limited compared to the billions of people who have access to it one way or the other right now. I agree with you. I think that this agglomeration of different design communities is going to be interesting. One of the things that's interesting about hiring for this kind of work is that if you're a web-centric team and you realize you need 3D modeling and environment design and lighting design, well, that's theater people. It's game design people. It's architectural students. It's industrial designers. It is very cross-domain requirements, and I think it takes a lot of teams who are used to being scrappy, you know, flat web design teams and build teams, a while to wrap their heads around the idea that these new people who are not steeped in web culture need to be a part of the conversation.

[00:30:16.726] Kent Bye: One of the things that I see is this tension and dialectic that happens between closed and open. I talked to Neil Trevett, and he said something to me back at GDC 2015: he said, every successful open standard has a proprietary competitor. So there's a dialectic between pushing forward and seeing what a closed platform can do with completely integrated software and hardware and design frameworks, and then trying to abstract that out into having an open system. But then it can have the potential to be more fragmented, but also more diverse, and have more decisions, and have perhaps more innovation that happens with the rising tide for all the boats, to have a framework for people to kind of build on top of. And so I see that for the longest time, a lot of the spatial computing has been driven primarily by Unity and Unreal Engine, which have been great for gaming, but they're not necessarily inherently architectural tools, or the same workflow that architects use, or for storytellers, or for different aspects of communicating. I mean, just to even draw a line in Unity, it was like... this type of stuff that I wanted to do, I couldn't do, because the tools just weren't there. And once the WebVR actually shipped, I was able to do mathematical drawing. And I feel like there's going to be a lot of exploration of going beyond just the 3D geometries and maybe looking at different mathematical visualizations and data visualization, but also maybe non-standard geometries that get beyond the Cartesian approach, and hyperbolic geometries. I mean, there's a lot of ways to experiment with new ways of putting content into the scene. But also, I feel like there's this opportunity to do small, bite-sized experiments, design experiments, that don't require a whole app download, but could just be a quick idea where you take an example from Three.js, or A-Frame, or BabylonJS, you upload it onto Glitch, or some of these ways to share code, you make some modifications, and then you have a little design experiment. And I feel like those bite-sized design experiments, we haven't had. We had Oculus Share at the very beginning to kind of experiment with different ideas. But now, in order to get anything onto the official platforms, you have to have such an overwrought process that I feel like it inherently kills the innovation that happens, because you can't take a lot of risks. It has to be a consumer-ready product. And there's so many ideas that are not consumer-ready. They just need to get out there to sort of inspire a more fully fleshed out version of that. But I feel like in this phase now that there is an Oculus Browser, you can put on the Quest and go in and put code on Glitch and then actually have an iterative process where you can start to do these little design experiments. I mean, just for me, doing a week's worth of just kind of experimentation, I was able to do that full process. And I feel like for the larger community, it's an opportunity to start to develop the open problems that need to be solved, and experiments trying to solve them, and to just start to rapidly iterate on the web to start to figure out some of these underlying design principles of the spatial web.

[00:33:21.710] Trevor Flowers: I think that's totally true. We have some really good examples from the lower-level graphics side of things. You know, when WebGL was released, it was generally just sort of a slower mirror of OpenGL, so everyone was questioning what it would be used for. But very quickly, we came to have sites like Shadertoy, as one example, where in a very unique way that is specific to the web, we were able to share low-level graphics programming with each other in a way where I didn't have to ship around an executable file. I could basically just write my shader code in a web browser and run it in the web browser, and people could see the effects. And as a result, we've actually had a sort of huge growth in people who feel like shader development is something that they can get into. You know, they don't have to write 400 lines of boilerplate C code in order to start playing around with these really fast, beautiful visualization techniques. And so I fully expect, and that's one of the reasons I work on WebXR and play the role that I play in it, is that I fully believe that in the same way that Glitch, you know, you open up glitch.com and you start typing and then you have a web application, I think we're going to see that for WebXR. And that's incredible. One of the paths that I've been exploring is this concept of what it would take to create a full XR environment and stack and technology base that we could truly call trustworthy. And part of that is that there's some aspect of control. So right now, all of the app stores, they have their submission process, and they have policies about what can and can't be on their app stores, for very good reasons. But the web doesn't have that, and as a result, like you were saying, smaller experiments that don't necessarily need to make money can just sort of go up on the web and be shared. But also things like political action that can't take place in app stores, any kind of content that just wouldn't make sense in the context of an app store, makes sense on the web. And so with that as sort of the core idea, okay, so we have this way to distribute content. How do we actually have a way to wear something that's very personal to us on our face that we actually trust isn't sending information back to somebody that we inherently don't trust, because they have different motives than we do? For example, if the device is made by a company that makes most of its money on advertising, and they win by being more and more targeted, well, you know that the product that you're wearing is going to feed that targeting information back into their system. So, that's something that's sort of inherently untrustworthy, because you don't really have control over it. The example I like to use is... So, I have a Pixel phone. And when I bought it, I sort of made a deal with Google that says... Okay. For this piece of hardware, you'll give me about $100 off of what I would pay for it. And in exchange, Google fully controls the operating system. They fully control basically every aspect of it. And I have some settings, but at the end of the day, I have no idea what's going on under there. And there's no way I can know. So that's a deal that I think we've made with arm's length computing, because it's not a big deal if something you're holding in your hand necessarily is owned by Google and controlled by Google. But when you put it on your face, I think it's a different story.
These AR glasses have cameras that are always on in order to do the job that they need to do for tracking. And so the ability for us to sort of think about that hardware as something owned by an advertising company, Facebook, Google, or even a company that just has different motives about controlling their media ecosystem, like Apple, I think it's really problematic. I've been working on how to create trusted hardware, how to work with the supply chains and the manufacturers to create a system that is verifiably trustworthy, that it's not doing anything we don't expect. And there's other companies that are doing this. So Bunnie over in Shenzhen has a project called Betrusted. I think it's betrusted.io. And this is a new hardware platform that's individually verifiable, so that an individual can receive the device, and they can run a set of verification steps, and then they know 100% that the software on the device is doing exactly what they expect and is trustable by them. And then you sort of seal the device, and from that point forward, it's a trusted device. And its main purpose is for instant messaging. So you have a secure way to do chat with people in a believable way. It's actually trustworthy. So that's one approach, where the burden is on the individual who uses the hardware to do the verification. Another approach is to have interoperability through things like web standards, WebXR, open source code, like the Firefox Reality browser that I worked on a little bit at the beginning, and then also third parties who do verification. So they do sampling. In the same way that we do polling, where we do statistical sampling of people's opinions, we can do security sampling, where we hire a third party to go through and buy some number of these pieces of hardware and do verification tasks on them to say, okay, we tested 5%, 2%, whatever the percentage of risk reduction is. And so we're pretty sure that when you buy this piece of hardware, it's doing exactly what you expect of it, and it's very clear, and if you don't like it, in fact, you're completely able to blow an entirely new software stack onto it, because you just don't trust this one. So that sort of interoperability and trust, I think, is completely lacking, but entirely necessary for where we're going with WebXR, because we can't make the same deal as we did with arm's length computing, or we're just going to live in sort of a panopticon surveillance situation that frankly is pretty scary.

[00:39:28.852] Kent Bye: And it seems like some of that was baked into the specification's privacy considerations, in that it's one thing to have the native hardware and the software be controlled by the curators of, say, PSVR or Oculus Home or Steam. Steam is a little bit more of the decentralized model, where it's like the third-party developers can do quite a lot of stuff. And what's the auditing that's inherent in that? So there's a lot of vectors for safety, security, and privacy. There's aspects of our biometric data that could be revealing of either personally identifiable information or what we're interested in. And if that is being aggregated, then that is essentially weakening our protections to privacy, and could be used for exploiting different aspects of our private information, or for different attacks to actually put people's lives in danger, if you overtake a chaperone system and have people operate in a way that could actually cause them harm. I'm just curious how that has been baked into the actual specification, because you are having this variety of different trade-offs. And one of the trade-offs on the web is that you have the capability of having people, anybody who puts up the code, suddenly having access to the most intimate information that's coming out of your body. And so how do you navigate that, either at the spec level or, like you were talking about, other ways to solve that problem?

[00:40:49.548] Trevor Flowers: Right. I mean, I look at the kind of data for things like head motion and hand motion as medical data. You know, you can actually do a fair amount of diagnostic work on someone's health using a really fine-grained, high-precision feed of someone's hand and head movement. Things like neurological degeneration are detectable with that data. So the path that WebXR has taken, the sort of balance beam that WebXR has walked, is that the specification gives a lot of guidance to browser implementers about the types of threat vectors, but then each browser implementer makes the decision on how they communicate those threats to their users. So someone like Mozilla may make a very different decision about communicating things like hand tracking information than a company like Google would do. So, browser choice, in some ways, is more important than ever. You know, it's interesting to see Microsoft and Google start working more together on the Chromium core, and then to see the difference in how they take those core technologies and then implement them in different ways. You know, they have different goals, they have different customer bases that they're oriented around. And so as WebXR becomes more and more common, as we see what the different browser vendors decide, we'll know where they stand on things like personal privacy and sharing the sort of intimate data that you're talking about. One step up the stack, in the app frameworks, there's actually a lot of really interesting things that can be done. So one of the things that I'm working on for my personal application framework, called PotassiumES, is the idea of fuzzing data. So this is a security trick from long ago, which is to just reduce the precision of some stream of data so that it's still usable for the purpose that you need it to be usable for, but the data that actually goes out to the rest of the world is reduced in precision. So you can't do things like detect hand tremors in a way that would actually be useful for diagnostic purposes. But again, that is made on a per-site basis. So the browser can do enough so that the user knows the kind of data that they're sharing. The app framework can do work for organizations who are building a site. In the same way that they have privacy policies and they have policies for what data they'll track and how long they will keep it, they'll have policies around what amount and type of data they'll use for WebXR. So right now it's that sort of three-tiered decision-making process. So it's going to be a little abstract and a little fuzzy and confusing for a while, honestly.
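As a toy illustration of the fuzzing idea (this is not actual PotassiumES code; the quantization step sizes are arbitrary assumptions):

```typescript
// Toy illustration of pose "fuzzing" (not actual PotassiumES code): quantize a
// tracked pose before sharing it, so it stays useful for interaction but is too
// coarse for things like tremor analysis. The step sizes are arbitrary assumptions.
interface Pose {
  position: [number, number, number];                 // meters
  orientation: [number, number, number, number];      // quaternion
}

const quantize = (v: number, step: number) => Math.round(v / step) * step;

function fuzzPose(pose: Pose, posStep = 0.01, rotStep = 0.01): Pose {
  return {
    position: pose.position.map(v => quantize(v, posStep)) as Pose['position'],
    orientation: pose.orientation.map(v => quantize(v, rotStep)) as Pose['orientation'],
  };
}

// Full-precision data stays on-device for rendering; only the fuzzed pose is
// handed to the page or sent over the network.
const shared = fuzzPose({
  position: [0.12345, 1.61803, -0.3333],
  orientation: [0, 0.7071, 0, 0.7071],
});
```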

[00:43:39.122] Kent Bye: Well, one of the other things that reached a 1.0 specification in December was WebAssembly. And all of the web that has been built has only had three languages: HTML, CSS, and JavaScript. And this is a brand new language that's like byte-level code, binary code that you can compile other programs down to, to be able to run, potentially things that get closer to the native code to have faster computations, whether that's physics or other things that I can imagine will be used for WebXR. But at the same time, you have the ability to take that and put that into edge devices. And so it's sort of like this new decentralized architecture that I see is going to happen with WebAssembly, that's going to go into cryptocurrency, and to be able to run on containers, virtual machines, all sorts of ways to be able to have self-contained ways to go to a site and maybe, within that, pull up some sort of virtual machine that has code that you can isolate and know what is happening based upon the WebAssembly. That's just sort of the conceptualization that I have. I've just been, I guess, trying to get in my mind what it means to be able to run a database within Wasm, or to run these other programming languages. With the LAMP stack (Linux, Apache, MySQL, and PHP), you have all these different programming languages that are essentially being abstracted from the server into the browser. So you have this true serverless, decentralized future. So I'm just curious to hear some of your thoughts on the implications of WebAssembly, and how specifically you see it as interacting with WebXR, if at all.

[00:45:09.437] Trevor Flowers: Yeah, I think WebXR is actually one of the key places where I think WebAssembly enables an entirely new type of experience, through a couple of factors. So at its core, WebAssembly is just a way to run code faster in a browser. So that's sort of its core capability. So instead of running it in a JavaScript engine, you're running it sort of closer to the machine, like you were saying. So you can get basically native-speed code computation in the context of a browser in a safe way. The safety of it is actually the real trick to it. We've had VMs before, but the fact that it runs in browsers in a safe way, and has more or less access to all of the same APIs that JavaScript does, sort of changes the game for XR. Like you're saying, physics can be more complex, a lot of the preparation work for graphical display can happen before that data even moves to the graphical processor. So that's sort of the core thing. But I think that there's another aspect to it that a lot of people miss, which is that we are very quickly taking the idea of this global standard that's not controlled by a single corporate entity for sandboxed, fast computation, and we're putting it basically everywhere. So we're putting it on the back ends of the servers, we're putting it, like you were saying, on the sort of edge compute nodes, which is just a fancy way of saying computers that are spread around the world so that they're basically one network hop away from any device you happen to be using. So, for example, it's really interesting to think about what web browsers will become, especially spatial web browsers. So you're in your glasses, you fire up Firefox Reality or one of these other spatial web browsers, you load up a web page that has maybe four different things that need to happen all at the same time. And two of them, like physics and interaction calculations, are really high energy users. So if you ran them totally on your headset, you'd run out of batteries in an hour, which is a bummer if you're trying to wear these glasses all day. The idea that we could sort of explode out the web browser and sort of fuzz it out so that it's not just running code on the glasses themselves, but it's also taking advantage of any edge compute nodes that are nearby, so you offload a lot of the heavy lifting of computation that sucks up all the power to somewhere that's one network hop away, so it's just incredibly fast to reach... that sort of stuff is game changing to the complexity and the quality of the experiences that we can offer people.
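A minimal sketch of that core capability, loading a compiled module and calling into it from a page (the file name "physics.wasm" and its exported `step` function are hypothetical placeholders, not anything Flowers or the spec names):

```typescript
// Minimal sketch of "near-native code in the browser": fetch a compiled module,
// instantiate it in the WebAssembly sandbox, and call an exported function each
// frame. "physics.wasm" and its exported `step` function are hypothetical names.
async function loadPhysicsModule() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/physics.wasm'),
    { env: { now: () => performance.now() } }  // imports the module is allowed to call
  );
  const step = instance.exports.step as (dtMs: number) => void;

  // Advance the simulation every animation frame; the heavy numeric work runs
  // inside the sandbox instead of the JavaScript engine.
  let last = performance.now();
  function tick(now: number) {
    step(now - last);
    last = now;
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

The same compiled module could, in principle, run unmodified on a server or an edge node, which is the portability Flowers is pointing at.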

[00:47:52.642] Kent Bye: So it starts to get into things like 5G, which has potentially low enough latency to be able to take the computation and offload it into a server somewhere, let's say like Amazon web servers, or maybe just, say, in your home you have a computer that's doing the processing. So you want to be able to essentially use your headset as a way of receiving whatever rendering is happening on a more powerful PC that has more efficiencies, so that you can wear your headset for longer, and then have this data transfer that goes back and forth. But WebAssembly is a part of the interface to be able to actually offload that and ingest it back in, as you were saying.

[00:48:32.238] Trevor Flowers: That's right. So, I mean, I think we'll look back 10 years from now at 5G, and the most interesting part of 5G will not be faster transfer and lower latencies for information transfer. The most interesting part of 5G, when we look back, will be the new model of these widely distributed edge compute nodes and edge storage nodes, where we begin to be able to really treat our individual devices as part of these really localized, geographically localized meshes of computation. And that, honestly, feeds back to the stuff I worked on at PARC in the ubicomp group. The main project that I contributed to is called recombinant networking. And the idea was that as devices showed up on a network, over time there's these new categories of devices that have new capabilities and communication protocols, and they need to sort of land in a network, discover who else is there on the network, and communicate things like, well, you don't happen to speak my flavor of this data transfer protocol, but here's a little implementation that you can use to then learn how to do these new capabilities. And so we sort of had inklings of this sort of dynamic, fuzzed-out use of information and computation in sort of crude forms, but now we actually have the network topology and the technology to make some of them finally happen. So it's an interesting time, for sure. And I think one of the reasons we see a lot of the 5G network maintainers and creators excited about XR is exactly this. XR does require some bandwidth. LTE is actually fine for most cases, but the faster transfer speeds and the lower latency are great for XR. And then sort of the second half of 5G is the use of these fuzzed-out device clouds or meshes or however you want to talk about it. I always say if I had the ability to fork, the other me would go work on how to create global-scale, decentralized simulation of mirror world sort of situations using this new network capability. But there's just me. So I'm working on the things I'm working on.

[00:50:52.736] Kent Bye: Well, maybe you could elaborate a little bit more on the different types of use cases that you see with this type of distributed studio. Just this morning, I was checking out Upload VR, which is a distributed company. They've been gathering in VR to be able to have interviews, but also to talk about the stories of the week. And so you have what is essentially like a TV show, but they don't have an actual production studio. They're not even actually co-located with each other, so it would be very difficult. So with VR, you have the capability to have these different types of shows where people can have this shared sense of embodied presence, but also potentially go beyond just the studio and bring in these different worlds. And so I'm just curious to hear, you know, some of your thoughts on either the use cases and, you know, how you can maybe use the virtual aspects to be able to do stuff that you can't even do face to face.

[00:51:38.277] Trevor Flowers: Yeah, I mean, I think Upload VR is a great example. Their show is sort of a delight in that I know the people individually, and it's super fun to sort of see their digital presence. So just on a personal level, it's exciting. But also, it's exciting to see what a small team can put together using Unity and desktop PC-based tethered VR for that sort of experience. They're VR experts. They hired a person to put together this bespoke application for them, and I think they're leading the way in that sort of communication that we'll see more of. A while ago it used to be called synthespians when people were talking about this; you know, the film and TV industry has this idea of digital actors, which is sort of the phrase that people use now. But the idea is that you can have someone who's really talented, but they age over time, they want to play characters who have different body shapes, and the CG for taking practical camera effects and captured data and then running it through the current process for creating a television show is just wildly expensive. So if you want something that sort of looks like it was shot in a real soundstage, you know, you're easily talking $500,000 just sort of to get started. And those prices will come down over time, but it doesn't actually solve the core problem, which is that it's incredibly resource intensive. So you fly everybody to LA, you get them scanned in in these scanning stages, and then you have a huge crew of people who are responsible for running the tech. And then there's a massive sort of post-processing stage where you bring in really, frankly, talented people, who are hard to find and expensive to hire, to then make it look pretty good. And so I think the thing that we'll see is that, as with many sort of disruptive technologies, the ability to use a fully digital soundstage like the one that we're building, it won't look as good, it won't be exactly the quality that people expect from something like Late Night with James Corden, but it'll actually be pretty close, and over time the rendering tech and the pieces and parts of it will improve so that we're satisfied that it's kind of the same quality of show that you can make on a physical soundstage. And so that's the place that we're sort of aiming. The use cases that we see for it are actually people like you, where you have an existing audience. You might be able to get a crew of a handful of people to do things like work cameras and lighting and sort of do the crewing of a soundstage like you would on a physical stage, but they can be anywhere, and they're doing it all through their web browser. So decentralizing, delocalizing, and then also reducing the initial cost of doing a show like that opens it up to a lot of voices that I think aren't currently able to create the kind of quality talk show with the physical presence that you expect from television. Honestly, I think you're an ideal candidate for the kind of show that we're aiming for. But I also think of people who have active audiences already in video streams, like people who are big on YouTube. They have a creative team, you know, it's a business for them. It's sort of bigger than one person getting up and doing a single-camera talking-head sort of show. I think there's a lot of interesting things to be done when you have a tool that's specifically made for this kind of video production. It's different than building on top of a general purpose XR tool.
I've made a couple of different distributed XR space tools before, and they're general purpose. So they're not built for any one thing. And so as a result, any one thing you try to do with them, you're sort of trying to fit it into this general purpose solution. And so this time around, we're building one that is specific to video production. And in the short term, it's specific to production of two-dimensional video. So the mass of people who will take advantage of watching these shows will be watching it through YouTube Live or Vimeo or one of these over-the-top systems where they don't need a VR headset. Again, they can watch it on their phone, they can watch it on their desktop PC, or their smart TV. And I think over time we'll switch to having more and more of a live audience scenario, so people will come in via VR to be a live audience on set. And then eventually, you know, we might get to bigger events where we have big speaking events, like keynotes at a conference sort of thing, or musical performances, or things like that. So we're taking the tack right now of sort of shooting for small teams of like a dozen or so people who already have their audience, and they want to mix in this idea of video production into something they're already doing. And I think that's a really interesting space, and I think my personal goal for it is to hear a lot of, you know, thoughtful people talking about interesting things in a way that can't get funded through the current system, because it's just too expensive and the bigger distribution channels aren't interested.

[00:56:41.809] Kent Bye: Would you be able to have like a green screen where you could kind of swap out a Three.js scene in the background, so you feel like you're kind of immersed into a spatial world? I mean, I'm thinking in the future when you actually have a lot of audience, but to kind of cut through different worlds and be able to have people talking, but to give a little bit of a guided tour through these different spaces.

[00:57:01.854] Trevor Flowers: Yeah, there's such a wide variety of interesting things that we can do to make talk shows visually more interesting and to tell a narrative in a more interesting way than you can do on a physical set. We're already seeing things like the Weather Channel, or these companies who are building deeply rich visualizations of the things that they're describing. Anything from the weather presenter saying, oh, it's five feet of water in this downtown area, or two meters, and you sort of hear it. But then if you see a visual representation of the water actually flowing up to that level around the weather presenter, that's a viscerally different way to communicate information; people understand it on a much deeper level. And so those sorts of possibilities open up when you're fully digital, in a way that's just hard to do in a mixed reality situation, where you need good tracking on the cameras and a good flock of engineers and designers who are making it happen. So opening it up so that somebody who is a 3D artist can open up Blender, create 3D models with animation that are sort of the quality you would get in a CG movie, and then bring them on set, and the hosts can interact with them, trigger animations, move sliders to show the progression of things. All these interaction techniques that we're learning around sculptural design within the context of XR are applicable in the soundstage, and that's really exciting, for sure.

[00:58:33.738] Kent Bye: Great, and finally, what do you think the ultimate potential of spatial computing might be, and what it might be able to enable?

[00:58:42.737] Trevor Flowers: Sure. I mean, the story of computing has been this sort of unstoppable movement from far away. You know, it started off in a far-away room with a mainframe, and then it landed on your desktop, so it lived in your office or home office. Then it landed in our hands with mobile phones, and it's just sort of getting closer and closer. And now it's pretty personal, in our hands and on our wrists. And the next step is into glasses on our faces. And then sort of the final step is to bring the artificial computation and communication into the place where we do our biological computation, communication, and interpretation, and that's our brains, with things like neural laces. We're reaching the last couple of steps of that progression. And I think the next large step for us is to make a decision as a culture about what our relationship needs to be with those core capabilities. Because by the time it lands actually inside of our brains, it becomes sort of a second imagination. The same way that we use our biological brains to picture scenes and daydream about things that could happen, we'll have this artificial ability to have this other type of imagination that takes advantage of the things that computers are good at. So I think for us the ultimate destination of XR, that we need to start working on now, is how to make it trustworthy and how to make it something that is a global good.

Is there anything else that's left unsaid that you'd like to say to the immersive community? The Immersive Web Community Group that the W3C runs is the place where we make a lot of decisions, frankly, and you don't need to be a part of a big org. In fact, I often encourage people who aren't part of big orgs to show up, because that's usually who shows up. And if you want the future of the immersive web to be informed by people who aren't at Google, or Apple, or Facebook, or wherever, then show up. Find us on GitHub. If you search for immersive web GitHub, you'll find us. And join the community. Help us make the future.

[01:00:49.395] Kent Bye: Awesome. Well, Trevor, I just wanted to thank you for all the work that you've been doing in this realm. I'm super excited about the potential of WebXR and where all these immersive and open web technologies can come together and what's going to be possible. So thanks for helping make it happen at the lowest level and for building some applications as well. Thank you.

[01:01:09.230] Trevor Flowers: Thank you, Kent. You know, the podcast that you run showed up kind of when the latest wave of funding happened. And I think it's been such a huge thing. I actually generally don't do podcasts or onstage appearances, but I'm such a fan and I've learned so much from the people that you've brought on and the conversations that you've led. So I really appreciate your work. Thank you.

[01:01:29.294] Kent Bye: So that was Trevor Flowers. He's a part of the Immersive Web Working Group, and he's also got Transmutable, where he's working on this production studio using WebXR. He's also working on PotassiumES, a wider web framework for using immersive technologies to implement different responsive design implementations for the wider web. So there are a number of different takeaways from this interview. First of all, I am super excited about WebXR. In fact, over the holiday break, I started to play around with it a little bit. And it's great to be able to just go and type code within glitch.com. You get a glitch.me URL, and you put that URL into your browser. In fact, you can send it directly into Firefox Reality, or you can use the Oculus Browser on the Quest, and start to do rapid iterations: put on your headset, push code changes out onto a website, and then just directly look at it. So I think the tool set is going to get a lot better. We have A-Frame, we have other frameworks that Trevor's working on, and there's Three.js and Babylon.js to do the lower-level interaction with WebGL. So I'm super excited to see where this is going to go. I expect to see a whole lot of innovation now that it's easy to publish out the content and see it on the different browsers. It's going to be coming out on the Firefox browser soon. I don't know what's happening with Safari, but there are all these different polyfills that you can start to use to kind of fill in the gaps for what isn't actually implemented yet. So I'm not sure how far that can take different aspects, especially with Safari, where you have to actually provide permission in order to take over and give out the sensor data. There are lots of different privacy implications for the future of WebXR. And I think part of what Trevor is trying to work on is how you build trust with these different immersive technologies, especially with augmented reality, which is going to be putting this technology on your face, with access to all this really intimate information. Like Trevor said, it's like medical information. So once you start to make that available to just any random web developer, then that's potentially a lot of either personally identifiable information or medical information that starts to get into specific hands. What are the implications of that? So I think that's actually one of the big things that probably slowed down the process of pushing out WebXR. There are certain things where you're trying to disclose: okay, now we're going to be taking over your camera; now we're going to be taking over your microphone. Ordinarily, as you use your phone, there are apps that ask for all sorts of different information all the time. And the metaphor that Trevor used is arm's-length computing. He's willing to make all these different agreements with arm's-length computing and live with those consequences, but once you start to put it on your face, then that's a whole other ballgame. And so I think there are still a lot of different things to figure out, but he's wanting to find other ways to have trusted spatial computing as we move forward. And what type of things can you start to do?
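(As an editorial aside, not something from the interview: a rough illustration of the polyfill approach Kent mentions above, wiring up the Immersive Web Community Group's webxr-polyfill package. This is a minimal sketch; exact setup can vary by version and bundler.)

```typescript
// Minimal sketch: patch in the WebXR Device API on browsers that don't ship it yet.
// Assumes the `webxr-polyfill` npm package is installed; details may differ by version.
import WebXRPolyfill from 'webxr-polyfill';

// Creating the polyfill installs navigator.xr and related interfaces
// only when the browser doesn't already provide native WebXR support.
const polyfill = new WebXRPolyfill();

console.log('navigator.xr available?', 'xr' in navigator);
```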
Trevor also pointed to Bunny, where they start to do auditing and other stuff like that. So just to get a little bit more of the basics of WebXR, in case that wasn't necessarily clear: it's basically a low-level API that tells your browser that there is some device that can take spatial input. If you have your Oculus Quest on, it's already going to automatically assume that. But if you just have a computer and don't have any VR headset connected to it, then you're not necessarily going to see that thing come up. Once you have it plugged in, it detects that there's a VR device there, and if you go to a website, you can click the button that allows you to enter into VR, put your headset on, and all of a sudden you're in this immersive experience. A-Frame is really great because you start to see these different ways to implement it. Even if people don't have VR, you can start to have kind of a spatialized experience of that. So you can imagine a 360 video or a spatialized website where you move your phone around and, using the AR features of your phone, are able to walk around a spatialized space. It's more of the portal device interface. And then eventually you just have the whole fully immersive, spatialized experience. And so this whole responsive design conversation came up as Trevor was tracking through the evolution of design on the web. He said that people who are designers started with print design. So just imagine all the typesetting, all the theory that went around print design, and then throwing all that out the window to do dynamic design, where the size is different all the time across devices, the tablet versus the phone versus a PC. So you have to create a design that works in all these different contexts with responsive design. And now all of a sudden you're adding yet another element, which is the portal spatial dimension, and then on top of that a whole other class of device with augmented reality or virtual reality where you're fully immersed, with all these new input devices and ways of interacting. He's comparing this shift: from print to the dynamic web, and now to this new emergent spatial web, which is a similar if not even more complicated jump from the dynamic web that we have, with all the responsive design, the cross-platform, cross-device, cross-form-factor, responsive and reactive web designs. That has evolved to the point where there are all these different frameworks and boilerplates. He pointed to TodoMVC, which was like this Rosetta Stone of taking all these different design frameworks and basically implementing a to-do app, seeing how you can optimize the design process so that you can make about 200 different decisions in order to actually create your design. But as you start to have the combinatorial explosion of adding new control types, new input types, and new display types, multiplying all those things together, he calculated it out to around 3,000 different decisions. Not only that, we still have to figure out what good spatial design even is. It sounds like there's been a lot of work happening in academia already to try to translate these different 3D user interfaces.
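(To make the "enter VR" flow described above a bit more concrete, here is a minimal sketch against the WebXR Device API. The `enter-vr` button id is a hypothetical example, and in practice frameworks like A-Frame, Three.js, or Babylon.js wrap most of this up for you.)

```typescript
// Minimal sketch of the "enter VR" button flow using the WebXR Device API.
// Assumes an HTML page containing <button id="enter-vr">Enter VR</button>.
const button = document.getElementById('enter-vr') as HTMLButtonElement;

async function setupEnterVR(): Promise<void> {
  // navigator.xr only exists when the browser (or a polyfill) implements WebXR.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-vr'))) {
    button.disabled = true; // no WebXR support or no headset detected
    return;
  }

  button.addEventListener('click', async () => {
    // requestSession must be called from a user gesture, such as this click.
    const session = await navigator.xr!.requestSession('immersive-vr');
    // A framework or your own render loop would now create an XRWebGLLayer,
    // request a reference space, and drive session.requestAnimationFrame(...).
    session.addEventListener('end', () => console.log('VR session ended'));
  });
}

setupEnterVR();
```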
I know that 3DUI is a big gathering that happens before IEEE VR in the academic sense, but it's been difficult to actually get a lot of that research out. I think part of it has been the lack of a really good distribution method. I would love to see, in the future, a lot more of this 3D UI research actually put into WebXR, so that you could start to share a lot of the best practices that have been developing in academia but that we don't actually have much access to. And so I'm excited about WebXR as a way to start to share these little design innovations. I think there's a lot that can be done with a good concept video on Twitter. Keiichi Matsuda had this whole Hyper-Reality video that went viral, showing all these augmented reality design interfaces. Greg Madison translated his room at home, kind of spatialized it, and then did this whole concept video that went super viral in a number of places, just because he's walking around his apartment showing all these different new user interfaces that are actually going to be a part of our whole new immersive future. And so there's a huge opportunity for designers to start to prototype what these different design interactions are going to be and to try to break down what the different components are, looking at all these different input types: your hands, the controllers, brain control interfaces, your biometric data, your eyes and eye tracking. It goes on and on. Like Trevor said, he expects that it's going to be another 15 or 20 years until we're bored with the different designs that are out there, just because there's so much innovation that has to happen across all these different contexts. And so to me it's very exciting, but also very daunting, because there aren't a lot of established methods for doing this type of spatial design. The Torch app is an AR experience that I think is starting to take a much more embodied approach with some of these different spatial designs. But communicating these different design principles out to the broader community is going to take specific use cases of people solving specific problems and then trying to abstract that out into deeper principles. Trying to figure out how to navigate a lot of that is, I think, a huge opportunity. For me, in this conversation with Trevor, there's a lot of this type of design thinking that has been going on for a long, long time within the web community. And I think it's actually going to add a lot of specific perspectives from people who are coming from responsive design or progressive web apps, people trying to solve these very specific problems of the transfer and communication of information. So I'm excited to see what happens over the next couple of years as we start to merge the existing web communities with these spatial design communities in 3D UI. And what Trevor said is that it's going to take 3D modelers and lighting directors, environmental creators, storytellers, industrial designers, architectural students, all these people who have spatial-native-first perspectives, who are now going to be involved in the creation of these websites, which are essentially going to become these spatialized worlds. So it's an exciting time.
I think it's just at the very beginning. And the last point that I wanted to bring up here was WebAssembly. Trevor has this incredible background of not only all this design work and rapid prototype engineering, but also a heavy-duty networking perspective, seeing the network topologies that we have with 5G and everything else, the future of edge computing, and how he sees that WebAssembly is going to start to create a way to quickly transfer information in and out, and to have these protocols spread out not only onto your edge device, onto your WebXR computer, but also onto these other devices that are actually broadcasting that information. Maybe you have a server at home that's broadcasting that information out. You know, at Microsoft Build, there was a demo from Epic Games where they were trying to push out this highly rendered version of the Apollo 11 journey. The demo actually failed during the beginning of the Microsoft Build keynote, but they were able to show the actual demo later. At SIGGRAPH, I got a chance to actually see it and then talk to someone about it afterwards. They're able to take the positional information of your head, render the scene on a remote computer, and push those pixels out in real time. And so having that low-latency ability to offload the rendering onto another computer and push it onto your device is going to make it easier for your battery to last longer, and to handle all sorts of complex situations that the mobile Qualcomm processor in your headset is not going to be able to keep up with on its own. So to take the level of fidelity to the next level, I think having stuff like WebAssembly and this edge compute, and, as Trevor said, all of the networks that are around you creating these different ad hoc mesh networks that become distributed compute that can be sent into your device, is going to matter, and I think WebXR is going to be a big part of that, because I feel like there's a larger ecosystem that's developing with the web. So anyway, that's all that I have for today. And I just wanted to thank you for joining me on this big, epic 21-episode series looking at the VR for Good movement and all the different varieties of economic funding opportunities, the actual stories and cultural expressions of the different pieces of VR for Good, a few of the different experiences that are trying to change political and policy issues, and then a deep dive into some of the different experiences that I think show the technological architectures that are going to be a big part of the future of immersive storytelling. Honestly, I've done hundreds and hundreds of other podcasts that could just as easily fit into this series. This is kind of the ongoing look into the different story applications, the deeper context, the different technological innovations from an architecture perspective, and also how people are actually using it to tell the different stories that they want to get out into the world. So hopefully this gives you a good overview of some of the different projects. And there are a lot of other previous interviews that I've done that also dive into lots of different applications of XR.
So if you've enjoyed this series, then I really do need your support to be able to continue to do this. I'm going to be doing more lectures and more opportunities for deep dives into experiential design; that's something I'm going to be doing this year. I asked my Patreon members what they wanted to see, and that was some of it: doing more lectures, doing more workshops, and trying to synthesize a lot of these different concepts to give some real practical advice for experiential design, where the whole future of immersive storytelling is going, the ultimate potential of XR across all the different spaces, and how you can get involved and participate in the immersive community. So please do become a member and donate today at the Patreon. I really could use your support. Five or ten dollars a month is a great amount to give. So please do become a member at patreon.com slash Voices of VR. Thanks for listening.
