#1660: Enabling JavaScript-Based Native App XR Pipelines with NativeScript, React Native, and Node API with Matt Hargett

I did an interview with Rebecker Specialties’ founder Matt Hargett at Meta Connect 2025 about the alternative open-source, open-standards, JavaScript-based pipelines for developing XR applications that he’s been working on, including React Native for visionOS, NativeScript for visionOS, and work to bring Node-API support to React Native. Also be sure to check out his Git visualizer Factotum, an app that is built with some of these alternative production pipelines.

Hargett also mentions a couple of recent React Universe Conf talks covering this work, including “Hermes + Node API: A Match Made in Heaven” and “Bringing Node-API to React Native.”
You can also see more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my coverage of Meta Connect 2025, today's episode is with Matt Hargett, who's the founder of Rebecker Specialties, and he's been really focusing on this intersection of trying to create these professional productivity and enterprise applications using a blend of different open source, WebXR, and open standards technologies. And so a couple of years ago, Meta had an event for a lot of the WebXR developers that I happened to sit in on and listen to a lot of the different concerns. So I had a chance to talk to Matt after this meeting of WebXR folks. And Matt at the time was working a lot on bringing React Native to XR projects, because he wanted to essentially create a pipeline for XR projects using technologies that could eventually be used in the browsers. But the browser is a little bit more unstable in terms of delivering production-ready software, because there are still changes that happen to the browsers that are beyond his control. And so he wanted to build a foundation on something like React Native, where you can start to build out these open-stack technologies and eventually maybe deploy to WebXR, but still have a pipeline where you're using all these open source technologies. So he was going down the React Native pathway for a long time, and has since pivoted over to NativeScript. And so he talks a little bit around the work that he's done with bringing React Native onto visionOS, but also some of the things that made him switch over to NativeScript. And also he's working with Callstack and other developers to bring Node-API to XR. So in other words, bringing any of these Node.js modules into XR projects so that you could start to use them, both from React Native but also NativeScript. And so he's trying to bring over what's been happening in the web development community with all these different JavaScript-based technologies and trying to create a pipeline where you could start to more seamlessly bring in these different libraries and different technologies. And so he's launched Factotum, which is basically a Git visualizer for being able to see different flows of commits. And it's a whole application that you can go check out and see for yourself the NativeScript technology stack that he's been able to deploy out both to Meta but also Pico. So he wants to create basically a way of writing once and deploying out to all the different XR platforms, and he wants to build out a solid open standards and open source technology stack where he can start to do that. So I'm super excited by all this work that he's doing, because anything that gets away from Unity and Unreal Engine as the only way opens up a lot more opportunities to kind of leverage what's been happening in the broader open source technology communities to bring in more libraries, more capabilities, and more folks from the web developer communities to start to get into creating XR projects. So I think that kind of represents a broader move towards more diversity in terms of the different types of experiences that can even be possible within XR. So that's exciting. I also happened to ask Matt, like, what do you think this new engine is?
Because are they using something like React Native? It's unclear. They haven't announced anything. I figured Matt might know, since he's been so involved in React Native, working with it across Sony and Roblox and other companies over his whole career. He didn't have any specific information, but he did work for Roblox for a while, and so he had a lot of very specific insights and opinions about what Meta is starting to do in trying to expand their vision of Meta Horizon to have more of a mobile focus, basically to try to create a competitor to Roblox and Fortnite through what they're doing with Meta Horizon. And so he had a lot of thoughts both from the engine perspective, but also on some of the different considerations and technology constraints that you have to follow in order to do that. And being a lifetime gamer, he's got a lot of reference points for what level of graphics quality maps to which console generations. So it's a question that I asked, and I honestly received a much more in-depth answer than I was expecting, based upon his reaction and his thoughts, both observations and more speculation as to what they might be doing, since we don't have any specific details about this new engine that they announced. But I figured he'd be a good person to ask and see if he had any thoughts on that. And it turns out, he did. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Matt happened on Thursday, September 18, 2025, at the Meta Connect conference at Meta's headquarters in Menlo Park, California. So with that, let's go ahead and dive right in.

[00:04:32.325] Matt Hargett: My name is Matt Hargett, and I'm the founder of Rebecker Specialties. And for the last couple of years, we've been focused on delivering a professional productivity product using WebXR and open-source and open-standard technologies.

[00:04:45.612] Kent Bye: Great. Maybe you could give a bit more context as to your background and your journey into the space.

[00:04:50.293] Matt Hargett: Yeah. My mom was a programmer, and so she got me into programming and gaming actually quite early. And let me see, the first VR experience that I had was actually with a VFX1 helmet. I'm from Illinois originally, but I moved to California in 1997. And I think in 1999, someone gave me this VFX1 headset, and it absolutely made me sick and gave me a headache. But, being technology-oriented, I was like, if this gets a little bit better, if these individual pieces all get better, this could be very viable. And that was a consumer headset, by the way. And so, yeah, I backed the original Oculus Kickstarter, and I was like, maybe this is the inflection point of these things. So I've been kind of tracking this across my career, but my career didn't get into graphical or gaming or platform sort of stuff until 20 years in. And so eventually, after shipping many different kinds of products in Silicon Valley across many different kinds of technologies, I was a principal engineer at PlayStation from 2017 to 2020. And then I was a principal engineer at Roblox from 2020 until late 2022. And then when I left Roblox, I thought I was going to retire. But instead, I was like, you know what, I don't see anybody else doing things in XR that I would like to see. And I think if those things are going to happen, I'm going to have to be the one to get them done. And so I came out of retirement and started Rebecker Specialties.

[00:06:19.131] Kent Bye: I remember talking to you after the WebXR meeting that happened here at Meta Connect a couple of years ago, and that you were talking about how you wanted to create a pipeline for WebXR experiences, but using something like React Native, because that was giving you a more consistent production pipeline and continuous integration. You could do unit tests and all the things that you want but can't necessarily do when there's a third party that is mucking around in the browser code, not making it reliable enough for you to deliver enterprise software. And so it sounds like you've continued on this path, and I've been really excited to see the development of React Native for Apple Vision Pro. But also you launched a React Native, or I don't know how you describe the technology stack, where you are starting to do something beyond Unity, let's say, and using open tech and open web standards to produce these types of immersive experiences. So maybe you could set a little bit more context for React Native and these other production pipelines that you're doing in order to create these XR experiences.

[00:07:22.478] Matt Hargett: Yeah. Well, I think the place to start is really with an open source project called Mozilla Hubs. It ran in the browser, it ran on the Quest 2, and I had a real epiphanal experience joining a Mozilla Hubs metaverse session with 15, 20 other people on the Quest 2, rendering in XR, and I was recording video the whole time. What I was really trying to figure out is, okay, yeah, this works, but what about battery life? And I was in that session with those people for two hours before the battery ran out. And I was like, this works. This is ready. The hardware is there. The open standards are there. The implementations of the open standards are there. And then we started prototyping. So I worked with an outsourced engineering vendor, an engineering partner that I'd worked with at Roblox and PlayStation and BlueJeans, which is another company I was at, called This.Labs. And I said, I have an existence proof in Mozilla Hubs, but I want to find the ceiling and floor and boundaries of what we can do in the battery life and compute that we have. And we did stuff on, actually, Oculus Go and HoloLens. And I was really trying to figure out how low of a hardware spec we can target and still get this big data processing workload done. I had a proof that the rendering stuff could be done. But my point of view was to not make another cloud service, software-as-a-service sort of play. I wanted to produce software that runs on the sovereign device so people can buy it and then go offline forever. And that was how I organized my first startup that I started in 2003, which was not XR, it was a security thing. But that was the same deal where people bought an appliance and they could take it offline forever. And that made it very compatible with very restrictive environments like the Department of Defense and the military and those kinds of folks. And that turned out to be a very profitable endeavor for me. And so I wanted to kind of recreate that sales hack of just making it easy to buy by being compliant by default. And so we did some prototyping, and by the end of that prototyping, a couple things happened. In March of 2023, we had one JavaScript bundle that ran on Oculus Go, on HoloLens 2, on the Quest 1, on the Quest 2, and an HTC device, I don't remember which one it was. And I'm like, this works. We can write the code basically once, and it runs just about everywhere with different levels of fidelity and performance. Those kinds of software development and distribution economics were the thing I was hoping for and that I saw achieved. But then by around June 2023, a couple of things happened. We had been using all those headsets and testing things and found that, across vendors, the browser runtime quality was extremely variable across firmware updates, and even in between firmware updates when the browser gets its own updates. That was not fantastic for enterprise customers. Like, we're not making a game. I love games, obviously. But that's not what we were doing. And those customers have demands. And the other thing was that WebXR on the HoloLens 2 stopped functioning around that time, and it never fixed itself. And eventually, I divested myself of the HoloLens devices I had personally bought, because WebXR was not working on that platform anymore.
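
For a concrete picture of what a single cross-headset WebXR bundle with "different levels of fidelity" can involve, here is a minimal sketch of session setup and capability tiering. The tier heuristic, feature list, and configureRenderer helper are illustrative assumptions, not Rebecker's actual code, and it assumes WebXR type definitions (e.g. @types/webxr) are available.

```typescript
// Hypothetical sketch: one WebXR bundle, tiered by device capability.
// The tier heuristic and renderer options below are illustrative assumptions.
async function startSession(): Promise<XRSession | null> {
  const xr = navigator.xr;
  if (!xr) return null; // no WebXR at all; fall back to a flat 2D view

  // Prefer pass-through AR where the runtime offers it, otherwise immersive VR.
  const mode: XRSessionMode = (await xr.isSessionSupported("immersive-ar"))
    ? "immersive-ar"
    : "immersive-vr";

  const session = await xr.requestSession(mode, {
    optionalFeatures: ["hand-tracking", "local-floor"],
  });

  // Dial scene density down on lower-end headsets; this heuristic is a placeholder.
  const lowEnd = /Quest 1|Oculus Go|HoloLens/i.test(navigator.userAgent);
  configureRenderer({ renderScale: lowEnd ? 0.7 : 1.0, maxAgents: lowEnd ? 200 : 1000 });

  return session;
}

// Stand-in for the app's own renderer configuration.
declare function configureRenderer(opts: { renderScale: number; maxAgents: number }): void;
```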

[00:10:41.191] Kent Bye: Sounds like they had some layoffs, and then the team went away, and then something never got fixed.

[00:10:46.022] Matt Hargett: I will accept your conjecture or anybody else's. I don't have the facts on that, so I don't know. All I know is that I couldn't get an answer from anybody, from any of my professional contacts. Whatever they did, can they revert the change? What even happens to make that happen? That was pretty disappointing, I would say, but kind of a forward reference here. I would say that us putting our design system around HoloLens, which has a very narrow field of view, and thinking about what we can do in that field of view, has actually really teed us up quite well for AR glasses. Everything is practice for something else. Sometimes you don't know what the something else will be, but it turns out that all that work to go, like, can this run on the Oculus Go with very little compute? What can we show the user in a pass-through AR, whatever kind of thing, with the narrow field of view? Can we deliver insights to the human that are still really profound and very valuable from a business perspective? The answer to that was all yes. And so with the AR glasses stuff coming out in the next year or so, all that stuff turned out to be just practice. So at the time, it was this huge bummer. And then we were self-funded. So each one of these setbacks, I really felt in a visceral way, because it was just like, all right, well, that was three months of runway that could have been spent doing something else. But we were talking about...

[00:12:03.426] Kent Bye: So we were talking about, just the last time we talked at Meta Connect, you were wanting to have this pipeline for producing React Native content.

[00:12:12.662] Matt Hargett: Yeah, and so that experience with the HoloLens 2 browser, and I'm not trying to be mean or whatever. I'm just stating facts as they were. Awesome device. I was super excited. It had gaze and hand tracking and room understanding. It was really ahead of its time. I was really rooting for it, and I was prepared to use whatever gravitational pull I have to try to make that device a success and be out there, feet on the street, telling people, this thing rules. But that made me realize, OK, the experience that I had in Mozilla Hubs, if I had done it two weeks later, I would have hit a critical bug where the browser just would have crashed on startup of any immersive experience. But there's still this existence proof. This can be done. It's just, does the team or organization or company culture around that thing support executing with consistently high quality and reliability in a sustainable way? And I want that to be true. And technically, we have the proof point, so it can be done. But I didn't have a lot of transparency with some of those other platform holders and their browser teams. I have a little bit more now, but we still get surprised when there's an update that breaks almost all immersive experiences, including ours. That's kind of a bummer. I can't build a scalable business around that. Even if I'm thinking philanthropically, which I'm not, that's just not what I came out of retirement to do. In parallel, we started working with another engineering partner that I'd partnered with across a few companies called Callstack, out of Poland. They had helped me bring React Native to Windows 7 and 8 when I was at BlueJeans. They helped us bring React Native to PlayStation when I was at PlayStation. I basically called them and said, we want to bring React Native to visionOS. WWDC had happened. The simulator was available in the Xcode beta. I'm like, we could piece this together. We could pay down some technical debt in the React Native ecosystem for adding new platforms. I don't know if Apple Vision Pro will be successful, I don't know if our app will be successful, but for them, their brand is not building apps, but doing this deep tech, major platform expansion stuff. That's their unique calling card. So, we made a deal, and they gave us one and a half engineers for, I think, about 11 or 12 months. And so we shipped React Native for visionOS. That's all in open source and free and whatever else. We also made it easier for React Native maintainers for other Apple platforms at the same time. And as a result of all of this, on day one of Apple Vision Pro being available at retail, there was a React Native visionOS app right there on the App Store. It was not an SDK recompile or whatever. It's not a complex app. But we shipped that with that partner to just de-risk. And we had crash reporting, so we're like, okay, when people start this up across however many devices get sold, how stable is it? This kind of data's very important to me. One, respecting the user's time, but also to make sure I have a capital-efficient business. But also, we're doing things that haven't been commoditized yet, and I need to know, like, this seems like a good idea, but does it play out well at scale as hundreds or thousands or however many people install the application? So we did that, and we kept React Native visionOS up to date with that engineering partner, Callstack in Poland.
And then we did kind of a roadshow, if you will, where we went to React Conf, Render ATL, and a few other conferences. We put Apple Vision Pro on, let's say, 600 or 700 developer heads across all of those conferences that summer, and showed them our in-progress application on it, as well as Callstack's sort of reference React Native visionOS application. That was to drum up interest, but everybody wins, so it was time well spent. But by the end of it, and working in the React Native ecosystem, I felt like that might not be the mid-term to long-term technology bet for having a native app container where we can still use web standards while we wait for browsers to mature. So we switched gears to partner with this company called nStudio, I believe is their name. And they're kind of the stewards of a very similar technology to React Native called NativeScript. And so they had already brought NativeScript to visionOS. And we had a conversation about how we can share code and infrastructure to make it cheaper for both React Native and NativeScript to be on visionOS. And then we had finished a very large project for a corporate sponsor of our work, basically, kind of a POC, if you will. And we were like, instead of jumping right back into our product, let's kind of do a sort of a hackathon thing. There's a game company called Double Fine, makers of Psychonauts and Brutal Legend and whatever. They do this thing called Amnesia Fortnight, where for two weeks, everybody stops working on the big project and does a different project. So I wanted to try: let's ship a NativeScript application that uses RealityKit and SceneKit directly, but from JavaScript. And let's ship that and de-risk that and understand what the parameters of that are. And we did that. So that shipped in May, I want to say. And now we understand NativeScript is probably the way forward for us. But the nice thing for people on visionOS is that there's options. There's React Native, there's NativeScript. I'm pretty sure it's obvious at this point that ByteDance has their own version of this called Lynx, which is kind of a competitor to React Native. One of the original React Native engineers from Instagram went to ByteDance, and this is like his re-roll of this thing. That's going to be available on visionOS and other XR platforms. I think that's very obvious in code that's on GitHub as well. So there's a lot of great options here, and I think creating options for application developers, whether it's something for fun, some serious business or whatever, means that if one of those technologies is not having a good year or whatever, there's another option where they can write cross-platform JavaScript code, leverage that entire JavaScript ecosystem, and ship something, but also not be trapped in whatever that application framework is. And I feel like that kind of portability across those runtimes is something that you don't really see in other technology choices with mixed reality and spatial applications.
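
For readers curious what "uses RealityKit and SceneKit directly, but from JavaScript" can look like, here is a minimal hedged sketch in the NativeScript style, where the iOS/visionOS runtime exposes platform classes as global symbols. The SceneKit class names are real, but the marshalling details and the buildDemoScene wiring are illustrative assumptions rather than the shipped app's code.

```typescript
// Hypothetical NativeScript (visionOS/iOS) sketch: building a SceneKit scene
// straight from TypeScript. With NativeScript, Objective-C/Swift APIs are
// exposed as globals, so there is no bridge module to write for this call.
// The `declare` lines stand in for the runtime-provided type declarations.
declare const SCNScene: any;
declare const SCNSphere: any;
declare const SCNNode: any;

export function buildDemoScene() {
  const scene = SCNScene.new();

  // A small sphere half a meter in front of the viewer.
  const sphere = SCNSphere.sphereWithRadius(0.1);
  const node = SCNNode.nodeWithGeometry(sphere);
  node.position = { x: 0, y: 1.5, z: -0.5 }; // SCNVector3 marshalled as a plain object (assumption)

  scene.rootNode.addChildNode(node);
  return scene; // hand this to whatever SceneKit/RealityKit host view the app uses
}
```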

[00:18:51.076] Kent Bye: Okay, so this is the first I've heard of NativeScript. Does NativeScript also allow you to compile down to say iOS and iPhone?

[00:18:58.686] Matt Hargett: Yeah, NativeScript is an open-source project, but unlike React Native, which has the corporate sponsorship of Meta, NativeScript is actually stewarded by the OpenJS Foundation, which stewards a bunch of key open-source JavaScript ecosystem projects, including Node.js and quite a few others. You can think of OpenJS as the non-commercial Mozilla, almost, where they don't have a specific browser or JavaScript runtime or whatever. Their role, and I don't speak for them, but to me, their role is really making sure that there is competition, there are multiple implementations of these standards, and that everything is very vibrant and healthy. And so I think that even though it can be really nice to use a project where the partner on the other side of that does have corporate backing and a stable salary and staffing and stuff like that, that can be very advantageous at times and be very constraining at times also. I think potentially for the individuals involved as well. And so I'd say NativeScript is not as popular as React Native. And I wouldn't say that it's for every application, but for our application in particular, where we want to have sovereign compute, we want to be using open standards, we want to be able to choose the JavaScript runtime and not be funneled into a specific one, and those kinds of things, it gives us the flexibility to pick from a broader inventory of open source that already exists, to make sure that we can deliver on the vision of the company, but also just deliver the best user experience possible without having to go ask permission from somebody whose boss's boss's boss's boss doesn't like what we're doing or doesn't like whatever. So nStudio has been kind of a great partner on that. And we did ship that NativeScript application to the Apple Vision Pro App Store; I think it was May when it finally deployed. That was, again, a positive learning experience. We made a cool toy application that is not a game exactly, but let us feel out, again, for me this is all about what are those distribution economics and what are the software development economics. And if we can deploy to web and deploy to native apps and get app store placement with 98% shared code, and, and, and, that's exciting and interesting to me. Versus, okay, we compiled it once on this one platform, but oh, a new device came out. Now we have to upgrade our engine and upgrade all the stuff. And oh, when we upgraded the engine, all of our stuff broke. And now we're spending a million dollars just so we can deploy on a new device. My full sympathy and empathy to the people that are in that situation. But again, I was trying to find a different path. And that means that the initial journey here is definitely playing on hard mode for me as an entrepreneur and a technologist. But I think the mid-term is already showing itself to be a pretty big leap forward over the incremental economics other people have been able to get.

[00:22:07.031] Kent Bye: Yeah, I always wanted to develop some web-based apps that could also be XR apps. And so there's an ephemeris project based on the Swiss Ephemeris, and it has a Node.js module. And then I kicked up a potential React Native workflow to see if I could integrate this Node.js module into a React Native project across iOS, Android, and web. And of course, it didn't work on all three. And so right off the bat, it was like, OK, well, this is a non-starter. I'm going to have to either do it this way to do only web, or do it this way to do... And then at that point, it's like, well, at this point, I should just develop native modules, because I'm going to have to end up customizing it so much anyway. So I don't know if NativeScript makes it easier to integrate Node.js modules across all these different ecosystems, or if there's still this kind of like, oh, it'll work in this context but not in that context. Yeah.

[00:22:59.457] Matt Hargett: Yeah, that was a, and I would say that basically what you're describing is the JavaScript ecosystem between browsers and then sort of server-side stuff like Node.js or Bun or Deno, and also NativeScript versus React Native versus Electron versus these other things. It was pretty fragmented. You would see the same native module to do, I don't know, thumbprint authentication or whatever. You'd see eight different repos with eight different maintainers. Nobody there is doing anything bad or whatever. They did what they had to do. After shipping React Native visionOS with the partner Callstack, I was like, what's the next layer here that gives me optionality as a technologist and entrepreneur, but also defragments the ecosystem a bit? And at the Chain React conference in 2023, I was chatting with an employee of Microsoft who was working with React Native internally on a product name that everybody would recognize. And he was lamenting that in the React Native Windows thing I'd worked on at BlueJeans, they had adopted it internally, it's been deployed to Xbox, it's been deployed into Office, it's been deployed into all these places. The Windows 10 start bar is a React Native application. And he was like, yeah, we're actually doing a big architecture shift and we're going to have to go out into the community and rewrite all the native modules in the open source community that support React Native Windows. And I was like, that's a big, I mean, it's just, man, it's not the technology piece of it. It's how do you submit a PR? How do you get review? How do you get merged? And a lot of those folks out in the community don't use Windows. I have a Windows laptop. I have a MacBook as well. I use everything, but no disrespect to Windows. It's been very good to me over the years. But I was like, that kind of sucks. And also, looking at React Native visionOS, where it's like, yeah, every module that already exists in React Native, we had to kind of go into, not every repo, but the modules I needed, even for the React Native modules, we had to go in and add visionOS support to a bunch of them. And that was worthwhile. It helps me, it helps the whole ecosystem, helps Apple Vision Pro, helps visionOS, et cetera. But it was work, and I'm not indefatigable in terms of finances and resourcing. When I talked to that engineer at Microsoft, I was like, hmm. I also knew that there were actually going to be several new platforms being added to the React Native ecosystem in the next two to three years from then. And I was like, this seems like we need to make a step function change to make this easier for everybody. Because for the people having new operating systems for their new devices and ecosystems, them going out to 100 repos also, it's not that they don't have the money, it's that it puts more load on the open source maintainers to have to learn or understand something. They're already on a volunteer basis, most of them. This doesn't make sense. So, I wrote up a proposal to Callstack and to those Microsoft folks. I'm like, here's what I think we should do in the broader ecosystem to defragment the whole thing, make your job easier, make my job easier, and everybody wins. Because I do like to facilitate these win-win-win situations, even if it's a little bit harder. So what I proposed was, let's make the Node API, or the ABI, the application binary interface, the standard sort of module system.
And let's try to get this into as many JavaScript runtimes as we can. And what I found, and I'm no genius, I'm not Nostradamus, is that other ecosystems were already doing this. Deno was kind of already on this track. Bun was already on this track. Great. And then NativeScript is also already on this track. I don't think they had shipped it yet. Maybe none of them had, but people had already arrived at this for totally different reasons. I'm like, great validation. We just have a specific point of view on this that aligns with everybody else's points of view. And that's awesome. We worked on that project. Callstack worked on it. We had partners from Microsoft, from MongoDB, and the OpenJS Foundation all working, contributing code, and facilitating all this stuff. That released to public open source in, I want to say, July. And at the React Universe conference last week, the folks from Microsoft did a presentation about it, and the folks from MongoDB did a separate presentation about it with complementary content. I believe those talks are on YouTube now. So this exact problem that you had came up for me in this particular way. And this is why going to conferences is so great. You hear other people's problems, and I can't not try to solve things. And so, again, a win-win-win. The whole ecosystem gets defragmented. If people start in React Native or want to move to NativeScript or vice versa, they don't have to rewrite the modules or whatever else. They will just kind of go with them. And it makes the switching costs much lower. The big thing for me was it deduplicates this effort in open source from volunteer people, where instead of them making a React Native WebGPU or whatever, we just make one repo. And they can have multiple maintainers if people want to have their thumbs in the pie, but it's not mandatory for them to take on more part-time work outside of their working hours just to provide this to themselves and to the community. So the thing I'm more excited about, beyond the business enablement and how much more capital-efficient it makes everything that we're doing at Rebecker and for the ecosystem partners that I talked about, is I'm most excited that I think it will lighten the load for these open source maintainers, where mental health and open source maintainership is this intersection of challenges that is getting more visibility in the last year or two. If this whole thing fails, the whole business fails, I think that that's probably the lasting impact I would be the most excited about and proud of, personally. So that particular problem you mentioned is, I don't want to say it's solved, but it's on its way to being solved. And that technology that we worked with those folks to develop is how we'll be taking our current Babylon.js WebXR app, putting it into NativeScript, and shipping it while making a WebXR module that people can use in Bun, Deno, Node.js, NativeScript, React Native. We'll just make one module to go everywhere. And that gives us optionality as well. But I think that there'll be a lasting impact and multiplier effect beyond just our single product and our single sort of platform vision.
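
To make the "one Node-API module, many runtimes" idea concrete, here is a hedged sketch of the consumer side only. The addon path, its exported function, and the createRequire loading pattern are placeholders and assumptions; the point is just that a single prebuilt .node binary targeting the Node-API ABI can, in principle, be loaded unchanged by any runtime that implements that ABI.

```typescript
// Hypothetical consumer of a single prebuilt Node-API addon.
// "my-xr-addon.node" and its export are placeholders; the same binary could,
// in principle, be loaded by any host that implements the Node-API ABI
// (Node.js, Bun, Deno's Node compat, or NativeScript / React Native hosts
// with Node-API support), without per-runtime rewrites.
import { createRequire } from "node:module";

const require = createRequire(import.meta.url);

interface XrAddon {
  // Placeholder signature for whatever native capability the module exposes.
  createSession(mode: "immersive-vr" | "immersive-ar"): number;
}

const addon: XrAddon = require("./build/Release/my-xr-addon.node");

const handle = addon.createSession("immersive-vr");
console.log("native session handle:", handle);
```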

[00:29:34.605] Kent Bye: Yeah, to me, it's super exciting just because we've had the promise of WebXR bringing in new developers that have all these libraries, basically bringing in an entire ecosystem of what has happened on the open web since Tim Berners-Lee and the launch of the World Wide Web in 93, 94, whenever it kind of hit the inflection point of actually really taking off. And now we're at the point where we have a whole set of tens of millions of developers and professionals that have invested in these open source projects, and yet they've been largely blocked off from being tied into the future of XR. And WebXR always sort of had this theoretical, yeah, this is how that could happen. Apple was dragging their feet, going really slow. Chrome was really on the bleeding edge. But WebXR as a standard never really lived up to that promise up to this point. I'm sure it's still possible, but it sounds like some of the stuff that you're working on is to maybe bypass some of the browser issues and focus on some other open standards like WebGPU and OpenXR, and to start to produce XR applications through this kind of alternative production pipeline that allows you to bring in some of the best of these open source projects and to play with data visualization or other types of libraries and resources that are out there, but to start to integrate them into really cutting-edge immersive experiences. So when I saw what you're working on and talked to you a couple of years ago, I got really excited, but then with my own experience of React Native, I got super disillusioned, like, oh, well, duh. I'd really need a team of developers if I really want to do this, because it's not really ready for someone who's more of a vibe coder-esque, even though I'm not using AI, per se.

[00:31:18.675] Matt Hargett: I'm just like, that's insane. A higher-level application developer, yeah. And that's the thing with React Native and why I chose it in so many different jobs: because you can write JavaScript code and be very high level, but if you need to do innovation, you can go low level into C++ or whatever you want. And so that's one of the awesome things about both React Native and NativeScript. You want to stay in JS? You can do that. But if you want to do something that isn't quite commoditized yet, you can also do that. And I think that's a little bit harder in not just browsers, but even like Electron, which is also browser-based. And again, not dissing Google or Chromium, they do tons of great work and have done so many things to move viable web applications forward. However, comma. But you're kind of like, well, not even that build of Electron, but what was Google focused on shipping for that Chrome version? And what's enabled and what's not enabled? And what's not compiled into that binary versus is? And then you've got people downstream of Chromium, like some of the XR headset browser teams, that also then put more filters or changes on top of that. And I think I have enough sample data from the last two years now of continuously giving people feedback, trying to operationalize things, trying things at scale, doing dozens and possibly hundreds of demos and stuff like that, where it's like, for some reason, that flywheel just isn't turning for WebXR in particular. That said, I think the main point here is that you can be using a web standard like WebXR made by the W3C, where anyone can go and participate in that standards body, but for some people, if you're using open source and open standards but it's not in a process called browser.exe, then they're like, well, that's still not cool. And to me, I'm like, well, I think that we can still use our breath to blow into the sails of open standards and open source. But the thing I'm really excited about is that there are some things where, and again, I'm not throwing any shade or whatever, but there's some stuff where it's like, oh, here's this proposal spec, but we've got to wait for Chromium to implement it. And then possibly for the downstream device manufacturer to then incorporate that change. And with these native modules, we can kind of lead that a little bit more. And one of the reasons the browser folks are very careful about this stuff is because even if it's behind a flag, or in an origin trial like WebGPU, once it's enabled, it can't go away, otherwise a bunch of apps break. It's understandable why that's slower and more cautious, I would say. I totally get that, and I think that's maybe the right thing to do. But that does not explain this instability problem that seems to be quite persistent on a couple of XR platforms. Being able to take these open standards into native modules and give another separate reference implementation besides Chrome, again, nothing against Chrome, but when there's an open standard but there's only one real deployed implementation, I don't feel like you get the diversity of ideas that leads to a better, more efficient implementation, et cetera. But the cool thing is that we can use our Babylon.js-based application, which already implements a bunch of standards from Khronos and the Metaverse Standards Forum. And we can take that JavaScript with our WebGPU module and our WebXR module.
We can kind of implement what we need ahead of even what the browsers have and give feedback into those standards bodies to say, well, for our 10,000 users, they did this much stuff from the telemetry, and here's our implementation that's open source. Mozilla could adopt it or someone could adopt it or not, but we built it for us, and either this all works and there's no feedback into the standard, or we found that without this extra little piece, this extra API, this extra Wixel thing or whatever, we actually can't do this efficiently enough. And that's still a great feedback loop into the standards body. And whether they act on it or vote on it or whatever is a whole different Pandora's box of interesting stuff. But we can get that feedback loop before it goes out to, you know, a billion Chrome users. And again, that's nothing against Chrome; what Chrome does in the ecosystem, and evergreen browsers and whatever, is pretty amazing. Thinking back 20 years ago to the web, before this was the case, Firefox and Mozilla deserve a lot of credit for evergreen browser updates as well. And I don't want to overstate, but it's hard to kind of understate how much more rapid that iteration time is. And so for me, it's not the iteration time. It really is just that stability. But also, if we have the good fortune to be capitalized to engage a little further upstream, instead of going, oh, there's the spec, we'll implement it and then we'll give some feedback, to go further upstream and participate in the spec formation itself, that would be great. And I've done that before with IETF and some other standards stuff a long, long time ago. Then I think that that's another way to get a flywheel turning here for that feedback loop on the standards body itself. Not that they're doing anything wrong or whatever, but just more viewpoints and more things that are operationalized that aren't constrained by the very understandable cautiousness of being deployed in a browser, or relying on an individual team inside one corporation's four walls getting the staffing or getting the roadmap item or whatever to execute on it. I think it kind of opens that wider for a greater diversity of voices and a greater diversity of implementations of those standards. And with my nerd hat on, I think that's pretty exciting. Again, it just injects even more vibrancy into this global ecosystem, even beyond XR. But it's definitely a strategic thing. It's very tactical for us, because we've run into this browser instability. That said, we shipped our browser-based progressive web application on the Meta Quest store last month. We're the first third-party immersive PWA to do that that charges money. We just submitted that same code base to the Pico store today. And so I didn't give up on that. And we will see what happens in terms of crash reporting and stability, especially as firmware updates roll out across both of those vendors. And I think that there's every opportunity, from a technical perspective, from a technical follow-through perspective, and really a developer success perspective. I don't see why it couldn't be awesome. And if all of those platforms get so much better with immersive PWAs, then we won't have to resort to this native approach for nearly as many platforms. But having that native approach with NativeScript and these W3C-aligned Node-API modules is this parallel plan that we've been executing on the entire time.
And so we won't die tomorrow if all the browsers become or remain unstable. We've got a very awesome plan B that has even more potential upside for us and the ecosystem and our users. But it's a little more staffing- and capital-intensive, which is OK.
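
As a rough illustration of what the browser-side half of an immersive Babylon.js PWA involves, here is a minimal sketch. createDefaultXRExperienceAsync is the real Babylon.js helper for negotiating a WebXR session, but the scene contents, function names, and options shown are assumptions for illustration rather than Factotum's actual code.

```typescript
// Hedged sketch of booting a Babylon.js scene and requesting an immersive
// WebXR session; the scene contents here are placeholders.
import { Engine, Scene, FreeCamera, HemisphericLight, Vector3 } from "@babylonjs/core";

export async function boot(canvas: HTMLCanvasElement) {
  const engine = new Engine(canvas, true);
  const scene = new Scene(engine);

  // Minimal placeholder scene: a camera at roughly head height and one light.
  new FreeCamera("camera", new Vector3(0, 1.6, -2), scene);
  new HemisphericLight("light", new Vector3(0, 1, 0), scene);

  // One call sets up session negotiation, controllers, and the enter-XR button.
  const xr = await scene.createDefaultXRExperienceAsync({
    uiOptions: { sessionMode: "immersive-vr" },
  });

  engine.runRenderLoop(() => scene.render());
  return xr;
}
```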

[00:38:44.890] Kent Bye: Yeah, well, some of the broader context of the discussion that we're having here is that in the last couple of years, Unity introduced the potential for a runtime fee, so that if you're using their rendering engine or their game engine, you would have to potentially start to pay based on how many people download the app, even if it's a free-to-play app. That threw a lot of the independent game developer community into a sort of existential crisis, just because there are so many big, huge apps, like VRChat, as an example, which is built on Unity, and Rec Room. You have these huge projects that are free-to-play. It could potentially completely tank their business model if the game engine decided to implement this. So I think people were starting to look at these alternative rendering pipelines. And even Meta, with their Horizon Worlds, had a Unity runtime. And they're starting to extract that out and replace it with their own new engine. They haven't specified anything about it. They've been emphasizing the ability to also play some of these Meta Horizon Worlds games and experiences on mobile, and Meta themselves are proponents of React Native. Do you have any sense as to what might be underneath some of this new engine that they're starting to talk about?

[00:40:00.818] Matt Hargett: One, I will officially tell you I have no proprietary knowledge, so I could make some sort of hyperbolic conjecture here. But I think what I would say, from what we saw in the video during the keynote yesterday with Mark: what I saw there, looking at pixel counts, looking at the lighting and the shadows in particular, is a thing that I really look for, where it's like, what approximate console generation does this represent in terms of rendering? And part of that is from my experience at Roblox, but also because I'm just a nerd that likes games that look nice. And I think about sort of metaverse concerts, if you will. Concerts have lighting. They have colored lighting. They have many different light sources. Another big thing, and this is not my business, this is not what Rebecker is doing, but just putting that hat on for a second: the other thing I think about is expressiveness. I think about clothing, layered clothing, fabrics with different shaders, with different friction coefficients, so they kind of glide against each other in a photorealistic way, but in a way that doesn't look like The Sims 2 on PlayStation 2 or something. And my perception from what was said in the keynote and what I saw, which is what everybody else saw, is that they do seem to be optimizing for the maximum number of sort of interactable avatars. And something I didn't quite see in the video, but maybe this is one of their goals, is just really awesome body and facial animations, maybe. I didn't quite see the evidence of that in that particular video yesterday. But when I think about metaverse concerts or a club or, you know, a discotheque sort of thing, I think I need to see more videos to understand how a sort of Billboard top 10 artist, a touring artist that has an amazing light show at their concert, would replicate that to their artistic satisfaction, based on the video that we saw yesterday. Now, the other thing I would say is that with all things like this, what we saw yesterday is probably the worst it'll ever be. So there's always room. There are all these problems. It's just work to make it better. And the kind of lower bound for it, if they're really determined to have parity on mobile, is what is the lowest mobile generation that you support. And I can tell you from my previous professional experience, when you are targeting sort of under-13s, people that are under the age of 13, they tend to have hand-me-down devices. That audience is typically two generations behind the lowest generation. And that can be fine. That can be cool, depending on what you're trying to do. And I don't know that this is a constraint for them. I'm just speaking from my prior experience. That's what the bound is. Now, can anyone make a graphics engine from scratch that can scale up with lighting effects, for instance, and scale down when it needs to? Absolutely. Does it require skill? Absolutely. Is it unknowable, or does it require new computer science? I don't think so. It's just time and brains. And that's not saying it's easy. It's not. You've got to find that talent and you've got to give them the time and the resourcing and all that stuff. And that's hard work. That by itself is hard, but is this something that no one's ever done before? I don't think so. I think that what they might be doing here is, to give them props, the thing they showed where I'm like, those are Nintendo 64 shadows. It's like a little gray circle underneath people.
But I appreciate that that was probably authentically in-engine for where it is right now, and that they showed us, I think, because otherwise I would think they would have made it look a little bit better. They showed us what they have right now. And I'm going to use the word authenticity here for this particular thing. I think that's commendable. So many other platform holders would make, they're called bullshots, like fake screenshots, where they kind of spruce it up. It's in-engine, but they spruce it up. Or so many game trailers these days are not in-engine. They're made by some other trailer company that made their own 3D models, so they don't even share the models. That's not what I saw yesterday. At least, I hope that that's not what I saw yesterday. So I think that if they can have that kind of transparency about what their goals are with that technology, and maybe that's what the sessions today will elaborate on. But I could see how they're choosing to put their engineering, their time and brains, into hundreds of avatars with facial animations and body-tracked animations that are unlike anything anybody's seen before. That might be what they're doing. And that's why other parts of this that are more obvious don't line up to direct comparables like The Last of Us on PS3 or Uncharted 2 on PS3 or God of War on PS2 or whatever. Those might be very explicit choices, and they've got a roadmap that's credible. You know as much as I do, I don't have any proprietary knowledge, but based on the video yesterday, that's kind of my top-level analysis. Is it TypeScript? Is it for the web? Is it for whatever?

[00:45:10.558] Kent Bye: I don't know. It did mention TypeScript in the developer keynote.

[00:45:13.119] Matt Hargett: Yeah, yeah, yeah. I think I heard that. But you can write in that language but not compile down to JavaScript bytecode. Like, there's a bunch of, it probably does, but I don't know. And does that mean it's deployable to web and whatever? And does that even make sense? Because similarly to kind of targeting low-end mobile to reach under-13s, exporting to web is another set of constraints on top of that. And frankly, I think this is kind of what Rec Room ran into, where they went from trying to be XR only to getting into mobile, and they had all these constraints. And then you can make a choice. You can either fragment your user-generated content ecosystem, where it's like, well, there are experiences that are XR only, and only some are on mobile. Where does it monetize the most? That's where everyone will go. And then the other thing will be left to atrophy. Logically, that's what would happen. So you have to kind of say, okay, your experience has to work in both places, both in XR and on an iPhone 8 or whatever, whatever the low end was. And now all of your UGC developers, your user-generated content community, have to be extremely smart about how to optimize their stuff for low-end phones. And in doing that, I think the low-end phone is even more constrained in terms of scene density than whatever the modern XR headset is, like a Quest 2 and above or a Pico 4 and above. And so then all of a sudden, all the experiences get dumbed down a generation just to be cross-platform and reach more people. And I'm getting into conjecture here, but I think with Rec Room, they kind of lost some mojo trying to get that expansion there because of those constraints. And I don't know anything about Rec Room. When it launched on PSVR, I used it there. I'm like, this is interesting. Let's see where it goes. So I did use it once quite some time ago. But a big thing with Rec Room was the Unity basis. And not that Unity is bad. My first startup used Mono. I was a big fan of open source Mono. Just like the WebXR stuff I do now, I spoke at conferences about open source .NET. I wrote a book that featured open source .NET and Mono stuff very, very heavily. And when I first heard about Unity using Mono, I'm like, that's genius. This makes so much sense. Same thing with XNA and MonoGame and stuff. But a thing I kept hearing about at Game Developers Conference and being in Silicon Valley is, up to some point, Unity is awesome, and then people would have to get a source license and deeply customize it. And the engineering overhead and effort on that to produce a game would break the bank for a lot of people. And I don't know that Rec Room had a Unity source license and had to do deep customization. That's more conjecture on my part. But I think that once you modify Unity enough, even Unity can't help you, assuming that their developer relations are great. At some point, you've gone so far off the reservation that they're like, this is your own engine now. You've customized it to this degree. And I definitely saw that on the original PlayStation VR for PlayStation 4: to get Unity performing well and having a locked 60 frames per second on PlayStation 4 PSVR, people often had to go into Unity and tweak a few things to make it more optimal for their specific experience to get it to run on that hardware. So I'm interested to see where it goes.
If they announce that like, hey, this is all open source and we're working with the Metaverse Standards Forum and it's all- They didn't mention any of that today.

[00:48:42.956] Kent Bye: Even if they are doing it, they didn't mention it today, which I was sad to see.

[00:48:45.758] Matt Hargett: Totally. And so I'd say everything we've seen so far, last year at Connect, I sat with some random people at lunch and chatted with them and they were MetaHorizon creators and they were very up on a lot of stuff and that's cool. You know, I think about the original Facebook games like Farmville or even Cow Clicker or whatever. Tons of creators got to practice their craft, make money and whatever else. Great. But in terms of me, Matt Hargett personally, did anything about that make me want to jump out of bed and start creating for it? Not me personally. I'm thinking about broader vibrancy of open ecosystems where any corporate entity can come in and come out, and if somebody has a bad quarter, the whole ecosystem doesn't suffer. But I would say that a thing that could get me super jazzed about it is if they're like, we've worked with the Metaverse Standards Forum. And so in Horizon Studios, is that what they called it?

[00:49:40.592] Kent Bye: Yeah, Meta Horizon Studio is their new studio that's in early access. And then, yeah, the new Meta Horizon engine, I think, is running it. But they haven't given any more specifics, other than when Boz talked about it today during the developer keynote, he was more like, oh, it's got physics-based rendering, and more of like, you can use source control. But he didn't mention, like, oh, yeah, it's part of the Metaverse Standards Forum, it's using WebGPU, or anything on that level.

[00:50:07.355] Matt Hargett: Yeah, so, you know, I'm a particular person with particular things that I like and that energize me more and that energize me less. So I do not paint with a broad brush here. This is just my own opinion. If they came out and said, it's all open source and we are going to be the reference implementation for all these Khronos and Metaverse Standards Forum standards, then I would go, this is interesting. Then this speaks to the Matt Hargett brain. I just need more information to know whether to be excited or not. And I'm not being pessimistic necessarily. I just don't have information. But also from what I see, I can kind of triangulate a little bit and go, like, I don't know, I'm not a VRChat person either. In the 90s and early 2000s, I was addicted to IRC. So I know all about being on a social thing, chatting with people, interacting with people all the goddamn time. I totally did that. But I don't do it anymore. So, like I said, I think as a developer, I don't think that I'm the audience for that. And as a player or a user, I also don't think that I'm the audience, as a 47-year-old man who's not a furry. Right.

[00:51:15.948] Kent Bye: Yeah, so I agree. I'm on the same page. And I think when I look at what Snap's doing with the Snap Studio, it's kind of like a pared-down approach. In order to get things onto the form factor of the glasses, they can't be running Unity runtimes. And so I feel like they're probably moving towards that path.

[00:51:33.036] Matt Hargett: But Snap Spectacles, and Snap OS is what it's called, I think, does have a WebXR runtime and does have a browser team that runs completely on that self-contained device. And XREAL, I believe the public name is Project Aura, which they announced at Google I/O, is an Android XR device that runs Chromium and WebXR. I'm very excited for those form factors as well, but it was very heartening to me to chat with some folks at Snap and understand what their goals are and what the contrast is that they're trying to provide. I love that what they're doing can function as a sovereign device that doesn't need to be tethered to cloud, so that if you go through a cellular dead zone, the whole thing doesn't stop working. As a consumer, I like that a lot better. And as a developer trying to ship consistent experiences to users, I like that a lot better also. In terms of both of those devices, which are not released to consumers yet, it remains to be seen, can they hit that sustainability and that consistent quality? And I would say, similar to what I said before, to me there's no technical reason why not. I think that the underlying question of can it be done or not is probably more a function of team and company culture than anything else. And so I'm excited to see what both of them do. And as I mentioned before, we started out prototyping in Three.js on HoloLens, for a very narrow FOV. And so a lot of the early UX design work that we did was for all new ways to generate insights for people. Like, walking around a 3D bar chart is useless. Why would you put on a headset to do that? You can see a bar chart. So we had to think of all new ways to show very high density things and deliver those insights to people. And we did it on HoloLens. So there's that narrow FOV and the incoming AR glasses, and the fact that there are two publicly announced devices where one explicitly has WebXR, that's the Snap Spectacles, Snap OS 2.0 or whatever it was. And with the XREAL running Android XR, it seems very unlikely that they would remove the Chrome browser from the operating system.

[00:53:49.630] Kent Bye: Their antitrust thing threatened that, but I think that got squashed. So yeah, there was sort of a decoupling thing that would have potentially happened. But yeah, I don't think we're at that point.

[00:53:58.274] Matt Hargett: Sure. Or who's going to buy it? Perplexity was going to buy Chrome off of Google or something like this. Whatever. But anyway, I think that there are still these, not glimmers of hope, but the nice thing is when there is competition, there are multiple vendors on the hardware side. Hopefully more on the software side, but on the hardware side, in terms of IHV diversity, it's just going up at the end of this year and then going into next year. I think at CES, there'll be even more players that are coming at it from a unique hardware perspective as well. I'm super excited for all of that. It would be really funny if, well, I don't want to get into more conjecture, I guess. But I'm also very excited for the Steam Frame, I would say. And I think that basically every person that has been in the Steam ecosystem, or the Apple ecosystem, or the Google Play ecosystem, and that's where all of their apps are, all of their logins are, all of their whatever, all of those people are about to have a cornucopia of different price tiers of devices to choose from, where they don't have to go into a new ecosystem with new payments and new whatever. As a consumer, I like that diversity, because I can pick and choose, because not everybody's perfect, and different products make different choices. Some, as a consumer, I like more. Some, as a consumer, I like less. That diversity of price points is one of the reasons why the existing IDC and Mordor Intelligence compound annual growth rate for XR is around 38% to 42%. I think we're already seeing the signs that, at the very least for a short amount of time, that compound annual growth rate of the total XR market is about to shoot through the roof. It's a great opportunity. I didn't have a crystal ball to know that, but it looks like we're shipping our application on the existing headsets and the existing ecosystems, but also being teed up for this, because I thought about distribution and these things from the inception. That was why I was like, if we could crack that nut, then this would be a very interesting, worthwhile thing to spend time on. With this kind of proliferation of diversity and vibrancy in hardware and in software, and the kind of choices we made where we were playing on hard mode, we got asked a lot of times by a lot of investors, like, you should just use Unity. Why aren't you using Unreal? Or why are you doing this the hard way? And I was like, I think medium to long term, there won't be two vendors. There'll be 10 vendors. And if we can be in more places at once with little to no cost to us in terms of engineering and support resources and staffing, then those economics are the exciting part for me. In addition to all the tech, which is also cool, but the tech is the means to, can you reach these awesome economics? Even though we've had a tough time getting our app deployed onto Meta Quest and onto Pico, we had to do a lot of hard learning there. I'm speaking tonight at the Silicon Valley VR meetup about some of those difficulties so other people can skip all the difficulties and get to the good part.
I also feel like all the hardship of HoloLens 2 being erased from the planet, and whatever else, all those hard learnings, have turned into a real springboard for this moment, when there's about to be a ton more diversity on the device side and almost all the devices are going to be in a general shape where we can deploy our very high-value products. If customers are on just about anything, we'll be able to meet them where they are. Or if they're like, now that this exists from this one vendor, they're already in our procurement system, so now we're going to buy 50 headsets. For some Fortune 50 companies, that's what's been holding them back from XR: they literally can't get new hardware vendors into their procurement system, because reasons. That's about to get resolved for those people as well. I just feel like all this work to create these win-win-win situations means the ecosystem effects will be lasting. But also, from the Rebecker Specialties perspective and our product Factotum, it's really teed us up to be in the right place at the right time for this mass proliferation that will reach, I think, a lot more consumers than it did in the previous year and a lot more enterprises than it did in the previous year.

[00:58:37.072] Kent Bye: Great. And finally, what do you think the ultimate potential of XR and all these open standards technologies might be, and what they might be able to enable?

[00:58:51.604] Matt Hargett: Yeah, so one of the pieces of feedback that I prepared, in a slide deck and a document for the folks here at Meta and some of the other hardware vendors, is that there's definitely a ceiling on what we can do in WebXR right now, because it is backed by effectively OpenGL, and OpenGL was awesome for decades, but there is a ceiling on what we can do. In our digital twin, how many agents or avatars can we effectively render and animate on screen at once? There's a ceiling on that which is not utterly low, but it matters at the performance at which we do it, because we play back the digital twin at two days per second, so that you can watch a year's worth of activity in six minutes. That also allows you to see ebbs and flows a lot more clearly. Things that are seasonal, that you never really would have noticed before, become a lot more obvious. That's why we do that. So there's a couple of things. One is that there is a sort of proposal or draft specification for WebXR on WebGPU that will dramatically raise that ceiling for our WebXR experience and others. And that is in active development in upstream Chromium. In Chromium Canary on Windows, you can start to play around with WebXR on WebGPU. And we have been testing our product against that implementation when connected via SteamVR with the Pico 4 Ultra, for instance, using Virtual Desktop, and giving feedback to Chrome when it's like, hey, Layers doesn't work. And they're like, yep, because it's not ready yet. So that's the thing I'm excited about. But even on the WebGPU side, there are some things that are already in not just upstream Chrome but also upstream Chrome for Android, that are already enabled there and already enabled on Android XR. Multi-draw indirect, MDI, is one of them. These are things in WebGPU, some of which already have solutions, that just have not been adopted by the downstream Chromium integrators on XR headsets. That's probably the one where, if they can literally flip on the same switches that are already in upstream Chromium for Android and already enabled in Android XR, we could probably not just double the number of agents slash avatars that are rendered at once in a scene, but also effectively double the frame rate from 45 FPS to native 90 FPS, which gets us close enough to 120 without running the battery down. And this is informed by the fact that we had a corporate sponsor last year who paid us to do a bunch of R&D that overlapped with our needs, which is awesome. And so we ported a whole bunch of their WebGL and WebXR stack over to WebGPU. But then we also went another step further: anything that was not quite performing well in WebGPU, we took and converted into low-level Vulkan and Metal to ask, OK, is there something available in Vulkan and the lower-level graphics APIs that WebGPU doesn't expose yet that would make a difference for these kinds of scenes? And the answer was yes. For those things, I have not delivered that to the folks at Google or the W3C yet, but we've got hard data on that aspect of things, those gaps where the work hasn't been started yet or that's not quite in the spec yet.
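For scale, a quick back-of-the-envelope on that playback rate (the figures are simply as quoted in the conversation):

    365 days at 2 days per real-time second ≈ 183 seconds ≈ 3 minutes
    365 days at 1 day per real-time second  = 365 seconds ≈ 6 minutes

Either way, the point stands: a year of digital-twin activity compresses into a few minutes of playback.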
And those are kind of the mid to longer term things, and they depend on how we're capitalized. Right now we're self-funded, and we do get paid to do R&D for people that align with our goals. We've shipped our products, so if those start selling really well, we'll be able to not wait around, I would say. We'll be able to forge forward on some of these things and then come to Khronos or the W3C with: here is the ceiling, here are the things that we tried, here's what we found works, could it be part of the standard or not? And for all of these things, our intent is to release that stuff as open source so that anybody can integrate it anywhere, including in browsers if they want to, or whatever. So I think those are the kinds of ceilings on the current WebXR API specification. There are some things in flight right now, like WebXR on WebGPU, that raise that ceiling, and I believe that will land, I'm not on that team, but I would imagine it would land, in time for, let's say, CES in January. And then there are some things that WebXR doesn't do and WebGPU doesn't do, but the underlying hardware does, that we would really love to see supported, because we've done the work to know what we need if we want to scale this up to this many avatars in a WebXR scene, whether it's in the browser or in a native module. There are just some things that the hardware supports that aren't exposed yet. Those are the three tiers of it. The easy stuff is, hey, upstream Chromium for Android already has these things implemented, you just need to enable them. I would really love it if all the hardware vendors would jump on that faster. That would help everybody, including us. But again, we aren't beholden to that schedule, or to whether they do it or don't. There's a bunch of things in Chromium for Android that still aren't enabled in some of these downstream Chromium implementations on XR headsets, and that's pretty disappointing, because it could enable a lot of developers and a lot of monetization for developers. It would be nice if this happened in the browser, but we've got this other path of a native app container that means we're not beholden to that if they choose not to enable developer success.
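To make the "flip on the switches that are already upstream" idea concrete, here is a rough TypeScript sketch, not from the conversation, of probing what a given headset browser's WebGPU implementation actually exposes. navigator.gpu, requestAdapter, features, and requestDevice are standard WebGPU API; the experimental multi-draw-indirect feature string is an assumption, so treat the printed feature list as the source of truth for your particular Chromium build.

```typescript
// Rough WebGPU capability probe (illustrative sketch). WebGPU typings come
// from @webgpu/types; the `any` cast keeps this self-contained without them.
async function probeWebGPU(): Promise<void> {
  const gpu = (navigator as any).gpu;
  if (!gpu) {
    console.log("WebGPU is not exposed in this browser build.");
    return;
  }

  const adapter = await gpu.requestAdapter();
  if (!adapter) {
    console.log("WebGPU is exposed, but no adapter was returned.");
    return;
  }

  // adapter.features is set-like, so it can be spread to list everything enabled.
  console.log("Enabled WebGPU features:", [...adapter.features].join(", "));

  // Assumed, experimental feature string for multi-draw indirect; check the
  // list printed above to see what your build actually calls it, if anything.
  const mdi = "chromium-experimental-multi-draw-indirect";
  if (adapter.features.has(mdi)) {
    // Requesting the device with the feature makes it usable by render passes.
    const device = await adapter.requestDevice({ requiredFeatures: [mdi] });
    console.log("Multi-draw indirect enabled on device:", !!device);
  } else {
    console.log("Multi-draw indirect is not exposed by this adapter.");
  }
}

probeWebGPU();
```

The same pattern works on-headset in the device's own browser, which is where the downstream-integration gaps described above would show up as missing entries in that feature list.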

[01:04:26.622] Kent Bye: Anything else that's left unsaid you'd like to say to the broader immersive community? Any final thoughts?

[01:04:34.181] Matt Hargett: I think it's very exciting, again, this vibrancy and diversity that's coming on the hardware side and the operating system side. As a consumer, I'm super excited. As a technologist and entrepreneur, I'm also very excited. I think there are a number of these hardware vendors that do want to offer a contrast in terms of developer support and success, which would be fantastic. In the investment community, a lot of people went all-in on VR for a hot minute around 2018 or 2019, and did not have great results for the IRR of their fund from those investments. The economics of this are changing dramatically. You don't need $20 million minimum to hire 3D environment artists and all this stuff. We haven't said AI in this conversation, and I'm not an AI startup, but I think there's a bunch of things, like the diversity of hardware providers and this other stuff, that make this a really exciting time to rethink the economics of IRR for investors, whether they're angel investors or institutional firms. It doesn't mean that there'll be a billion people wearing headsets or AR glasses tomorrow, and I think that kind of thinking is still not appropriate. But I do think that we are at the smartwatch era of this, where there weren't 100 million people wearing smartwatches in a day or a month or a year. It was kind of a slow roll until an inflection point where people just bought one, because they were cheap enough, from enough places, and they tied into the ecosystems they're already in, like Google Play or Apple or whatever. When I worked at LG, we actually did a watch project there for webOS. So I think it's just really a great time, with all this stuff proving out. And again, it's not one company making a choice. I see a bunch of companies on the hardware and software side making choices that I think will yield better results for me and Rebecker, but also for the other technologists in this community, and for the investment community.

[01:06:46.417] Kent Bye: Awesome. Well, Matt, thanks so much for joining me today on the podcast. I always enjoy hearing what you're working on. There's a lot of overlap with my own aspirations of the dream of being able to write code once and have it be everywhere. In the web community, they have the responsive design idea, where you could have multiple formats and be able to appear on all of them, and we're maybe starting to see some of the beginnings of that with the approach that you're taking. Because I guess Unity has been that for a lot of people, but even there, there are still custom things. I'm sure there will always be custom things for each of the different platforms that will have to be tuned or changed. But at least it's a production pipeline that presents a possibility where you can move into more of an open standards format to be able to produce things that will eventually lead to something like the Open Metaverse, which I think is what the Metaverse Standards Forum is trying to do, and live into a future where we can start to build the foundations of technology and platforms, the production pipelines like what you're doing, with good practices of software engineering, let's call it, all the ways that you want to make software. The existing Unity and Unreal Engine didn't really fit into that, so you're creating your own way that ties together all these open standards. To me, it's super exciting, and I'm looking forward to diving into some more of the references and potentially even spinning up my own workflows and apps to see if I can start to push some of the code out into these different platforms, especially as we have more AR devices with a WebXR interface, to have multiple formats across these different platforms. So anyway, it's super exciting what you're working on. And thanks for taking the time to share a lot more details for where you see it all going here in the future.

[01:08:24.732] Matt Hargett: Awesome. Thanks, Kent. Thanks again for taking the time. It's always a pleasure to talk to you.

[01:08:28.612] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
