
Virtual land speculation within cryptocurrency-based Metaverse platforms has gone from boom to bust over the past couple of years. I sat down for a deep dive with Voxels (formerly Cryptovoxels) founder Ben Nolan to interrogate the premise of buying and selling virtual plots of land. Nolan is not all in on every aspect of cryptocurrencies: he has reservations about the carbon footprint of proof-of-work chains like Ethereum that Voxels uses, and, surprisingly, about buying and selling virtual plots of land as a speculative investment. Nolan says that “Voxels isn’t a way to make money. Voxels is a way to build the Metaverse and to capture the value that they create. It’s not an investment vehicle.” He’s increased the supply of virtual plots of land over the last few years to keep them affordable, but also to capitalize on the NFT boom and crypto Metaverse hype to the tune of $22 million (New Zealand dollars) of 2021 revenue.

Even though Nolan’s intention is that plots of virtual land should not be seen as speculative investments, the very nature of cryptocurrencies and NFTs means that inevitably a number of parcel holders are treating them as exactly that. Voxels has sold over 7,300 CVPA ERC-721 tokens since June 6, 2018, out of a total supply of 7,863 parcels, and it has not been able to escape the preferential-attachment crypto whale dynamics that have been documented in Ethereum and Bitcoin: over half of the parcels are owned by the top 10% of the 2,380 holders as of July 30, 2022. Excluding unsold plots held by Cryptovoxels, 22% of Voxels CVPA parcels are owned by the top 1% of holders, 44% by the top 5%, 56% by the top 10%, and 68% by the top 20%. The numbers are similar for Decentraland LAND holders, with 20% of virtual land owned by the top 1%, 41% by the top 5%, 52% by the top 10%, and 65% by the top 20%.
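If you want to reproduce this kind of concentration breakdown from your own snapshot of holder data, the arithmetic is simple. Here is a minimal sketch in Python, not my exact script; the `holders` mapping of wallet address to parcel count is an assumed input that you would export yourself rather than a field the Voxels API hands you directly:

```python
# Minimal sketch of the top-N% ownership math.
# Assumes `holders` maps wallet address -> parcel count, with unsold parcels
# held by the Cryptovoxels treasury already excluded.

def ownership_concentration(holders, top_fractions=(0.01, 0.05, 0.10, 0.20)):
    counts = sorted(holders.values(), reverse=True)  # biggest holders first
    total_parcels = sum(counts)
    shares = {}
    for frac in top_fractions:
        n = max(1, round(len(counts) * frac))        # number of holders in the top N%
        shares[frac] = sum(counts[:n]) / total_parcels
    return shares

# Example with made-up numbers:
# ownership_concentration({"0xabc...": 120, "0xdef...": 35, "0x123...": 2})
```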

[Chart: Cryptovoxels parcel ownership concentration among the top holders]

Nolan said that one of the key differentiators for why you’d buy a plot of virtual land is the neighbors you have around you on the Voxels map. But one of the things I noticed while exploring around Voxels is that there are a lot of empty plots of land. After doing a feature analysis of the 7,863 parcels, I estimated that around 2,530+ plots of land are relatively empty or completely undeveloped. This is a rough approximation based on plots of land with only 0-1 features and voxel hashes of less than 200-400 characters, listed parks, and about 4 days of spot-checking empty plots of land that I came across.
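To make that heuristic concrete, here is roughly the kind of filter I’m describing, sketched in Python. The field names (`features`, and `voxels` for the voxel hash) are placeholders rather than the exact keys the parcel data uses, and the thresholds are the same rough cutoffs mentioned above:

```python
# Rough sketch of the "relatively empty" heuristic described above.
# Field names are placeholders; the real parcel export may use different keys.

def looks_empty(parcel, max_features=1, max_hash_chars=400):
    feature_count = len(parcel.get("features") or [])
    voxel_hash_chars = len(parcel.get("voxels") or "")
    return feature_count <= max_features and voxel_hash_chars <= max_hash_chars

def count_empty(parcels):
    return sum(1 for p in parcels if looks_empty(p))
```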

Doing a comprehensive assessment of empty or undeveloped plots of land would likely take a larger crowd-sourced effort to define the parameters and validation process, but my rough estimate translates to around a third of the parcels being empty. On average, for every 2 plots of developed land, there is 1 plot that’s empty and relatively untouched. That means a lot of absentee Voxels land owners have opted to maintain ownership of their virtual Cryptovoxels Parcel (CVPA) token without improving or contributing to the virtual experience of their neighborhood in any meaningful fashion. I think it’s reasonable to assume that a portion of those holders of undeveloped virtual land are treating it as a speculative investment without any regard to the experiential aspect of that investment.

This is where I found my experience of the neighborhood aspect of Voxels to be lacking. When there are so many empty and undeveloped plots of land, it creates a disjointed and fractured experience that reads as Metaverse Blight.

There have also been a number of critical articles and video essays about cryptocurrencies over the last year that have tipped me over to being more skeptical than optimistic, including Marlinspike (2022), Kondor et al. (2021), Olson (2022), münecat (2022), White (2021), Zhang (2022), and Bauwens & Pazaitis (2019). [Full references down below]

I also found it difficult to find compelling experiences by just roaming around neighborhoods, as there seemed to be a lot more noise than signal, and there weren’t many ways to search or filter recommendations built into the website. I was able to do a more comprehensive analysis of the parcels of land via the Cryptovoxels APIs, and one indicator that seemed to help filter out interesting worlds was a high number of features or a broad range of different types of features in the parcel. As an example, The Real Vision Headquarters has 2,009 features, and it is one of the more elaborate worlds with a lot of experiential design considerations. Of all of the available features, the most popular feature was vox models, then images, then cubes, and then NFT images, which speaks to the popularity of NFT art galleries and images within Cryptovoxels.
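As a sketch of what that kind of filtering could look like, one could score each parcel by its feature count and by how many distinct feature types it uses. The field names below (`features`, `type`, `name`) are placeholders for whatever the parcel data actually exposes, not a documented schema:

```python
# Sketch: rank parcels by feature count and feature-type diversity,
# the two signals that seemed to correlate with the more elaborate builds.
# Field names are placeholders.

def rank_parcels(parcels, top_n=20):
    scored = []
    for p in parcels:
        features = p.get("features") or []
        n_features = len(features)
        n_types = len({f.get("type") for f in features})
        scored.append((n_features, n_types, p.get("name") or "unnamed"))
    scored.sort(reverse=True)  # most features (then most variety) first
    return scored[:top_n]
```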

In doing a survey of all of the parcels of land, there were 153,033 images and 41,187 NFT images, which averages out to 19.5 images per parcel and 5.2 NFT images per parcel. Many times the images are voxel textures or promotional images that you might see on a marketing brochureware website, and so many parcels of land read like 3D websites aiming to share information. Sometimes the spatial architecture creates a unique volumetric experience that amplifies the message, but more often than not the voxel architecture serves as a utilitarian space for maximizing how many images can be shown.

Here’s a full accounting of the total number of each of the other types of features, along with the average number per parcel: 184,325 vox-models (23.4 per parcel), 153,033 images (19.5 per parcel), 92,133 cubes (11.7 per parcel), 41,187 nft-images (5.2 per parcel), 18,503 signs (2.3 per parcel), 12,376 lanterns (1.6 per parcel), 9,675 groups (1.2 per parcel), 8,349 collectible-models (1.1 per parcel), 7,104 megavoxs (0.90 per parcel), 5,464 polytexts (0.69 per parcel), 4,352 videos (0.55 per parcel), 2,920 youtube (0.37 per parcel), 2,905 richtexts (0.37 per parcel), 1,939 particles (0.25 per parcel), 1,561 audios (0.20 per parcel), 1,031 spawn-points (0.13 per parcel), 919 portals (0.12 per parcel), 853 guest-books (0.11 per parcel), 687 buttons (0.09 per parcel), 399 boomboxes (0.05 per parcel), 266 text-inputs (0.03 per parcel), 135 polytext-v2s (0.02 per parcel), 82 vid-screens (0.01 per parcel), 44 poap-dispensers (0.006 per parcel), 23 slider-inputs (0.003 per parcel), 20 screens (0.003 per parcel), and 1 discoverable (0.0001 per parcel).
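Those totals and per-parcel averages fall out of a simple tally across all of the parcels. A minimal version of that aggregation might look like the following, where the `features`/`type` keys are again placeholder field names rather than the documented API schema:

```python
from collections import Counter

TOTAL_PARCELS = 7863  # total CVPA supply at the time of the survey

def feature_type_stats(parcels):
    """Return {feature_type: (total_count, average_per_parcel)}."""
    totals = Counter()
    for p in parcels:
        for feature in p.get("features") or []:
            totals[feature.get("type") or "unknown"] += 1
    return {t: (n, n / TOTAL_PARCELS) for t, n in totals.most_common()}

# Sanity check of the averages quoted above:
# 153,033 images / 7,863 parcels ≈ 19.5 per parcel
# 41,187 nft-images / 7,863 parcels ≈ 5.2 per parcel
```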

The number of characters in the voxel hash is another indicator of how much a parcel owner has altered their piece of land: the higher that number, the more work they’ve put into it. The largest voxel hashes belong to DERAGELAND and then ME Swing Tower. Whether the parcel name and/or description was customized is yet another indicator of how much the land has been modified: 53.8% of parcels modified the name and 24.5% of parcels added a description. One of the more reliable recommendation signals is probably the “womps,” the screenshots that are shared on the front page of Voxels.com. It’d be nice to be able to search and filter worlds by the number of womps, or by the number of times someone favorited them (a new feature that was added after my main evaluation period).
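The same pass over the parcel data can produce those modification indicators. A rough sketch, with placeholder field names again, and treating “has a non-empty name or description” as a stand-in for “was customized”:

```python
# Sketch of the modification indicators: share of parcels with a name or
# description set, plus the parcels with the longest voxel hashes.
# Field names are placeholders; "non-empty" is only a proxy for "customized".

def modification_stats(parcels):
    n = len(parcels)
    named = sum(1 for p in parcels if p.get("name"))
    described = sum(1 for p in parcels if p.get("description"))
    by_hash_len = sorted(parcels, key=lambda p: len(p.get("voxels") or ""), reverse=True)
    return {
        "pct_named": 100.0 * named / n,
        "pct_described": 100.0 * described / n,
        "most_built": [p.get("name") for p in by_hash_len[:5]],
    }
```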

Getting personal recommendations of favorite worlds would likely be one of the most reliable methods, as would finer-grained user tagging and searching. Some of the worlds Nolan recommended checking out were the Museum of Crypto Art worlds (searchable via the MoCA user), the Hexeosis Museum, The Rose Nexus in Gangnam in Origin City, Architect Island, Ogar Production [described as a cannabis cafe], and 3 Bus Estate [described as a church that's black in Maker's District].

Some of the worlds I enjoyed were jin’s 2 Proto Gardens hacker space and 3 Alva Fork, and the VRON Plaza by ross had some great speculative architecture with baked lighting that worked really well, with a great payoff at the top. ME Lost Temple is a vast place. One of the most consistently impressive design studios is the Metaverse architecture firm Voxel Architects, who built The [Jedi] Temple (Dark Junction), One BC, House of M, $WHALE Pagoda, zonted Gallery, Token Smart [Roman Colosseum], @westcoastbill 21x: glass age, and the @westcoastbill 21x: space age SpaceX installation that you see when you spawn.

One thing that I did enjoy about Voxels is the independent spirit that gives me a Geocities vibe from the early World Wide Web. These are lo-fi virtual worlds with a low barrier of entry to create and modify. Given that nearly a third of plots of land are untouched and undeveloped, it’s not too far of a stretch to say that the majority of the nearly 8,000 parcels of land are not very inspired or have only a bare minimum of modifications. I also found it challenging at times to escape the mass consumerism of crypto meme culture trying to market and sell any number of digital goods. Still, I was able to come across enough weird, indie spatial art that has some insights for what a spatialized, 3D web and Metaverse may evolve into. There were a number of times I spawned into a location and found something beautiful and unexpected next door to my destination, but this didn’t happen as much as I would have preferred in my weeklong deep dive, and I also had more mixed results when I went on a number of extended walks through different suburbs.

I can’t claim to have seen every parcel of land, but I feel like I saw enough to get a representative sample, and on the whole I didn’t find as many compelling experiences as I regularly do in VRChat or Rec Room. Nolan recognizes that Voxels may never be as compelling as these other higher-fidelity social VR worlds, but he says that Voxels is trying to do something different and is an experiment in data sovereignty where users take ownership of the virtual land. If there wasn’t such a disproportionate concentration of ownership, with 10% of crypto whales owning over half of the parcels and roughly a third of parcel owners being absentee speculative virtual land investors, then I think the idea of virtual land ownership might have more legs. This, on top of the broader cryptocurrency and NFT critiques listed below, puts me firmly in the skeptical camp when it comes to crypto-based Metaverse platforms.

However, the technical architecture of Voxels is very impressive. There is a beauty in using a completely open web stack to build a vision of the open Metaverse, and it’s doing a better job of living into the interoperable values of the open web than most projects. There are WebXR implementations, but I had mixed results, with it either crashing on the Quest 2 or not fully loading all of the plots of land. So while I’m still not convinced by many of the underlying aspects of cryptocurrencies or NFTs, I have to give Voxels credit for finding a way to bootstrap millions of dollars of funding without having to be beholden to VC investors. Let’s hope that they can translate more of that revenue into lessons for an open and interoperable Metaverse, but also build in-world creation tools that help democratize the process of creating virtual worlds and give a sneak peek of how the 2D web might start to be translated into immersive, 3D spaces.

Also, the jury is still out for me on what role cryptocurrencies, web3, and NFTs will play in the future of the open Metaverse. A lot of my more critical takes have been formed by the following articles and video essays that I’ve come across over the past year:

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Photo of The Beast Prototype courtesy of Rony Abovitz
I had a chance to sit down with Magic Leap founder Rony Abovitz for two hours to unpack some of the many threads behind Magic Leap’s origin story, including some of the underlying philosophical questions and aspirations towards what he calls “Neurologically-True Reality.” Considering the many tens of billions of dollars of XR R&D that companies like Meta, Apple, and Microsoft have been spending, the multiple billions of venture capital raised by Magic Leap seems like a relatively efficient and tight budget.

We reflect upon the potential communication strategy missteps of Magic Leap, the changing media landscape from 2014-2020, how Magic Leap was part of the conversation with a broader open source XR movement with Valve, unlocking the magic of digital lightfields in their Beast prototype, their three-phased portabilization process, the role of sci-fi and story, the persistent dual track of enterprise apps, a brief history of XR, and why Abovitz thought VR was not going to get them to the mountaintop of neurologically-true reality.

A comprehensive history of Magic Leap would require a lot more interviews in order to get multiple perspectives and compare them to the public record of events. I’d recommend checking out the Clubhouse conversation with former Magic Leap employees that aired as episode #989, my wrap-up from 2018 Leap Con, and 7 years of my Tweets about Magic Leap.

Hopefully this extended, oral history interview with Abovitz will help provide some additional context on the underlying philosophical motivations and inspirations behind Magic Leap, as well as some of the broader media context and the tradeoff challenges in his journey, up until the point where he had to lay off 700 people in April 2020, leading to his own departure.

ILMxLAB co-founder John Gaeta worked with Abovitz as a Senior Vice President at Magic Leap, and calls Abovitz a “real deep thinker and incredibly instinctual and intuitive on these [XR] things.” He was certainly tapping into a broader zeitgeist in the early 2010s as Sony, Microsoft, Oculus, and Valve were all independently working on similar ideas and technologies, and hopefully this conversation fills in some of the gaps for how Magic Leap came out of stealth from nowhere with a $542M Series B led by Google Inc. that was announced on October 21, 2014.

Full transcript is below.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

ABOVITZ: Hi, I’m Rony Abovitz, and I’m the founder of Magic Leap.

BYE: Yeah, maybe you could talk a bit about your journey into augmented reality.

ABOVITZ: So that’s a great open-ended question. By the way, Kent, thanks for having me. The journey to augmented reality. It really did not start out with this deliberate thought of there’s this field called augmented reality, and I want to go into it because I’m not even sure what the name of the thing was that I wanted to make yet.

But I started asking the question, How does our brain make the pictures that we experience as we move through the everyday world? So I was comparing the idea of like if you look at an Apple II or your current computer, your MacBook, your phone, there’s a display and you know, it’s got glass, it’s OLED or LCD, there’s televisions and movie screens or there’s an external display and you compute stuff to it.

And then when we walk around, we’re not looking at a display. We just have this display that seems to work, which seems like a really weird, obvious question. You wake up. But we don’t really ask the question like, how does that display work? I did biomedical engineering in college and grad school, so I’m sort of wired to ask those questions.

So I was thinking, “Well, how is it that we see all of this?” And like, “Do we see it or are we model building in what’s going on?” So I started asking those questions, which is how does the brain render this rich, detailed volumetric, amazing, you know, retina-resolution world? It seems to be head mounted because we all seem to have a head and whatever’s working, that display is mounted in there.

So I was thinking like, there’s that thing and could that ever be used as a display for computing rather than the external display? So it was more that question than going down, like maybe a lot of people today where they think there’s this field called AR and the field called VR or XR, you know, a lot of different definitions flying around or mixed reality or spatial computing or extended reality. They all kind of sort of mean the same thing, but not always.

But I was asking a more fundamental question, like, which is how do you visualize the world when you open up your eyes? Like, did you see it? Is the brain a rendering engine? How does it work? And is it one day accessible? Could that be the thing that computers plug into?

Anyway, so that was like maybe a first step into asking this almost Zen like question, like it’s the best display — I think I actually wrote this down in an early notebook, like early pre-Magic Leap notebook — Like, “Is the best display, a no display?” That was sort of one Zen question. I was like writing these like Zen koans, I guess. “Is the best display, the no display?” Because it seemed to actually be the best display no matter how good of a 4K thing you buy from anywhere when you throw it away and you just walk on a beach or go to California and like, you know, just wander along the cliffs somewhere, or go to a desert. It’s always better than anything you could ever experience in a movie theater, anything you could ever buy. And you’re like, “Well, that’s pretty damn good. What’s going on there?”

And then I asked the second question, like, if we know how that works, do we ever see the outside? Or is it all happening inside? So is the inside = outside and the outside = inside? It was like another one of those Zen questions. But like — By the way, I hope that’s a very short answer to — because I could spend two days talking about it, but about like, how did I get into AR? I wasn’t thinking about the term AR or, you know, I want to make an app for an iPhone or an Android phone. It was like asking that fundamental question and trying to work back from there. Hopefully that makes some sense.

BYE: Yeah. I’m wondering if you could take me back to the moment that you actually decided to start Magic Leap, or to start a company, or to really seriously pursue this? If it was like 2010 timeframe or what was that turning point when you decided, okay, I’m going to go from what you were doing previously, which was biomedical engineering, and then into actually pursuing this idea of the no display or chasing down answering these questions that you’re asking.

ABOVITZ: So it’s not very linear in the way that people would want it to be, in the sense of — if you go back to probably 2010, I had founded a company previously called Mako Surgical that was actually having some real success in robotic surgery. We went public in 2008 and I wanted to see what life was like in a public company.

You know, I started Mako Surgical literally in a dorm room apartment on the campus of the University of Miami, now my wife’s apartment, then sort of pre-fiancee’s. But more this $300 a month apartment on the campus, and we start this company. And you know, I had no idea like how to even start a company or anything like that. So I’m just like, we had no money. We’re like living on 29 cent, 19 cent burritos from Taco Bell, being we’re vegetarian.

So we’re eating these like cheap burritos. And I’m dreaming about building a company that uses robots for surgery because I was inspired by Star Wars. So I figured I’ll just go do that. I have no idea how much money it's going to take. I’m just going to go wander off and try to figure that out. [A] very long story not for this podcast, but we in 2006 did our first surgery.

So that was like a big milestone. FDA cleared the first of its type in the world. We built this really sophisticated robot that used — and this will relate back to your question — It was a haptic robot. So haptics, if you’re not familiar, is this idea of like basically digitizing force feedback. So if you think about our senses, there’s like, you know, visual and sort of the physics term for like the whole visual world and its complete mathematic description would really be this like the idea of a full light field. And then the sonic world would be sound fields. And then touch could be like haptic fields.

So in surgery we were using normal displays to visualize three-dimensional imaging like CAT scans and MRIs and X-rays that we were reconstructing, using the techniques from video games and computer graphics companies. So I’m going to SIGGRAPHs and hanging out with animation people and video game people and bringing that technology to surgery, which at the time was super weird and foreign and alien. We were kind of inventing computer-assisted surgery.

I was thinking it was like video games for surgeons because then growing up with computers and video games and like noticing this is bereft in medicine. Let’s bring it in there. Star Wars said we should do it, therefore Lucas, therefore do this. I was like, I was sort of thinking that way. And then robots, of course. But we didn’t just make a normal robot, we made a cooperative, haptic robot that would guide you — almost impossible to describe here. This is, like, even worse than the TV-on-a-Radio Problem. Like no one believed anything we did at Mako, until they came in and put their hand on the robot and felt invisible force fields guiding them into the right spot of surgery.

It really felt like total magic and it worked. It’s used all over the world. Coming out public had great success. The thing that was confounding me is like you could feel these shapes. It’s almost like if you had a pencil and there were glass tubes guiding your pencil to not make a mistake and the shape of the tube would constantly change so you’d always do the right thing.

BYE: Are these co-located robots? Or are these telepresence robots?

ABOVITZ: They’re in the room. And the surgeon would hold the robot, and the robot would act almost like a master surgeon guiding an apprentice to do the right thing. So this weird thing of a human and a machine working together, we were one of the first to actually show that and this really elegant way. So this idea of like an AI robot with amazing A.I. and computer vision and sensing and this three-dimensional reconstruction of what surgery it was, but not doing it autonomously, doing it in this cooperative way, which took a long time for people to understand what that meant.

And it’s actually been one of the most successful implementations of robotics in medicine, period. And there was another company called Intuitive Surgical, which is a giant public company. We got acquired for about $1.65 billion in 2013. Intuitive is still public. But I kind of came from that world into the normal computing world. So I brought computing and AI and game engine stuff and all these things into medicine, which is this whole fight.

Because I thought it could be really incredibly helpful to surgeons. And the haptics where you have these force fields guiding you. You couldn’t see them, you could only feel them. So it was literally like, “Use the Force, Luke.” It was totally awesome. I actually got to say that to heads of surgery at Harvard, and all kinds of things. It was really fun. And it’s used all over the world today.

I mean, people that have Mako implants in them. And then the robots, they have birthday parties today, like you’ll have a team celebrate its first surgery, its 50th, its hundredth, its 5,000th. So they befriended the robot. My vision was “Could you have this, like, R2-D2-like friend that would help you?” Not HAL from 2001. Not the Terminator, but a totally different vision of like how machines and people can work together.

And one of the things that was seeding in my mind was “Why could we feel this?” Like, we really did like virtual reality of objects in space, but you couldn’t see them. So that began an exploration of trying all kinds of — I think I could safely say this time — super big and expensive and horribly bad heads up displays and VR devices that either made me puke or were gigantic and horrible.

I was just like, “What’s going on here? Why does it suck?” But I could make force fields work. Like we had these elegant, beautiful, force fields. You could do sound fields really well. What’s going on with, like, “Why is it such a nightmare?”

So that question of like, walking on the beach and asking, why do we have this great display? How does it work? Was also seeded by being in operating rooms. This was maybe one of the earlier seedings and asking the question, “Why can’t we relate that question to seeing data and to seeing visually — like right where you needed to be not on a screen, but unlock it from the screen, and it’s there, at the right size, at the right opacity, perfectly registered in space.”

What’s kind of cool is there’s a company in Germany called Brain Lab, who I competed with at Mako and now I’m friends with the CEO. He became a really early enthusiast of what we did at Magic Leap. And now he’s got a few hundred sites around the world using Magic Leap in my original health care thing that we did at Mako, he’s using them for brain surgery and spine surgery. It’s actually kind of amazing.

So it’s funny to see that loop closed, which was one of the original questions. What was originally a big competitor of mine, now a really good friend at Brain Lab, he’s doing that thing. He’s actually implementing that last gap, which I wasn’t able to solve at the time of Mako, but we did solve at Magic Leap, and Brain Lab is using it. So anyway, that’s a bit of a messy answer to your question. I don’t know if that makes any sense.

BYE: Yeah, well, that helps set the context of the time. I guess, when you think about the founding story of Magic Leap, when was the moment where your co-founders came together? And was there a moment where you went from full-time at your previous job at Mako? And then it got acquired, you said, around 2013. What was the founding story of Magic Leap, the way that you tell it? Given that context of all the stuff that was happening, bringing all the different pieces together, when did you start to focus more full-time on actually trying to bring out this augmented reality headset?

ABOVITZ: So I think of like the zero-hour — like, I had a — I turned my garage at my house and one of the rooms of my house into a recording studio. So I’m going to take you to a — like it’s got a few messy threads that integrate together. So we’ll go down this road. So I met this producer in London, his name is Mick Glossop, and he had done records with like King Crimson, Van Morrison, Public Image Ltd, like really cool stuff.

He’s a British producer. And I sent him a tape of my weird, grungy band. We were playing like late night — like the CBGBs of South Florida is this place called Churchills. We’d play there. You know, I’d be, I mean in robotic surgeries in operating rooms during the day and playing with my kind of crappy band at midnight, getting bottles thrown at us.

And I had kind of a homemade recording studio in part of the garage and bedroom, but it wasn’t really well set up. It sounded kind of grungy. And I sent a tape of stuff we were doing to Mick, and he actually thought it was cool. There’s a whole story of how I met him, but let’s just leave it at that.

He was like, “That’s really cool. I’d like to help you guys. I want to produce your next album.” I was like, “Oh my God! This is great.” So you’re like, you’re like, “How does this have anything to do with Magic Leap?” But it loops back. So Mick comes down, comes down from London and he hangs out with us for a few weeks. And he says, “All right, before I can record, we’ve got to fix this up.”

So we cleared out a bedroom. Cleared out the garage. And he’s like, “I know an amazing sound architect. Guys that build like studios and like top studios, like Criteria and others.” He’s like, “Before I record, we’re going to turn your house into a recording studio into like a proper room.” So at recording studios, you might have like an A Room, a B and a C. So he’s like, “This is gonna be like a B room or C room, but like good quality, but not quite like a full A room, which is gigantic, like at Abbey Road, but maybe a B or C room, like full-on everything.”

He calls up one of his friends who had designed mixing boards. This guy, I think Malcolm Toft, he’s like, We’re going to get this 32-channel mixing board. Now this is — what’s cool is like this is not long after Mako went public. So we put most of the money away. But I was like, “I’m going to do a couple of fun things, post IPO.” And building the recording studio was one of them.

So he comes in, he helps build it, set it up. And I set up the control room with this 32-channel analog board. Get this cool gear. Upgrade some guitars. Basically every milestone at Mako, like my main mental reward was “I’m going to get a guitar.” Like that was like the main splurge. So I have a whole bunch of guitars in the house.

I don’t really need cars or anything fancy, but like cool guitars. Yes. So that was like every time we got FDA clearance for surgery, tenth surgery, this thing happened, went public. That was an excuse to get a guitar. So I needed to name the thing. So I named it “Magic Leap Studios.” And that was kind of the founding of — So that studio was called Magic Leap, and it was like painted on the door. My mom is a painter. She studied art at Kent State. She was there during the shootings, like, you know, the classic ones that you hear about in the Crosby, Stills, Nash song. She was walking on campus at the time. So she paints this like Magic Leap Studio thing. I tack it on to the door. That’s Magic Leap Studios. And I was working in there, like in the control room and in the garage, thinking about what I’m going to do next.

So I have the studio, we’re working on music. And I’ve got like a bunch of ideas all at the same time, comic books, gonna make a film, going to do something really cool, technology, going to try to solve that problem, “Why can’t we see stuff?” You know, that whole visualization thing. And I’m kind of thinking about them all at the same time. I’m like, that’s my incubation space.

You know, I had no co-founders — like it was — that was like the beginning of Magic Leap. I call it the wandering in the desert. But I, I brought like, I’d say, “Fellowship of the Ring Members.” You know, when you think about co-founding a company. It’s like you’re hanging out, and here’s the idea for the company. Boom! Like Larry and Sergey — Actually, there’s a whole story there, which we won’t get into, but like Scott, who was one of our investors, he was actually like a real co-founder of Google, too.

But I think of a co-founder as someone that actually really is there with you at that moment of inspiration, like Lennon and McCartney writing the songs. So I start this thing called Magic Leap Studio, it’s the idea of multiple threads. And I start wandering, looking for others who want to go on the journey. I think the early members, I think of them as like, you know, they’re “Fellowship of the Ring Members.” You know, they become that fellowship. That doesn’t like diminish their status. I think it’s just as key. But there wasn’t this like moment where we’re hanging out, “Let’s go start this company called ‘Magic Leap.’” It’s a little like me in a garage with the door painted “Magic Leap Studios,” because I had just built a studio. And that’s how it all got started.

BYE: Well, when I talked to Tom Furness, he mentioned the work he had done during his time working on virtual reality headsets for the Air Force throughout the sixties, seventies, and eighties. And then he eventually went off and started the lab at the University of Washington. And at some point he created a –

ABOVITZ: The HIT Lab.

BYE: Yeah, he created the HIT Lab, but also created a virtual retina display patent that then expired at some point. So at what point did you come across the core technology pieces to start to build together some of these digital light field ideas and augmented reality headsets?

ABOVITZ: So I have a friend who is kind of a physics genius. He went to Caltech. We were friends for a long time, and I begin talking to him about how do we unravel how the brain is decoding this stuff? So I’m thinking about it from a neuroscience, and he’s thinking about a physics perspective. So a whole long story there. I’ll leave it for this. We come up with an idea that we actually think is viable and good. And then we get scared. “Is there anyone else who’s close to that idea thread?” Like, how this works? What’s this mix of physics and neuroscience that would really unlock the key to this? Which, forget about how difficult the engineering might be like, what’s the theory behind it?

So we start to do all this research. We regroup for lunch about a week later, and we both go, There’s a lab at the University of Washington. And there’s a guy there, Brian Schowengerdt, who was a scientist who was doing some work that was adjacently close to what we were thinking. And we ended up — Brian actually got the title “Technical Co-Founder.” Because what I did was I flew out to Seattle, sat down with them, said, “We’re going to start this crazy company. I’d love for you to be part of it. You’re one of the few people on the planet who’s even in the same direction.” Like it’s like you’re heading West and you look on the plains of Nebraska, and there’s like nobody there. And you’re walking — and suddenly you see like, “Wait, who’s over there? There’s a guy walking there.”

And I’m like, “Do you want to join our party? We’re going West. We’re going to Oregon –” Like, we thought, “We’re on the Oregon Trail. It’s going to be intense. It’s very early.” And Brian was in this lab from Tom Furness. So they were looking at virtual retina displays. And there are pieces, but our theory was like — our theory did not negate some of the work they were doing.

It had a couple unlocks that they had not done, but it was going to be incredibly important to work with them. So Brian became our chief scientist. So he’s officially a technical co-founder. So it was basically me then, Brian, and I would say like “The Fellowship of the Rings” people like Sam, Richard Taylor, Randall, others who came along the journey.

But then we did an important deal with the University of Washington where we licensed a bunch of IP. And we had some IP from NASA, original IP, and UW. And that really created almost like the kernel, like the yeast and the dough. Where we’re like, “We have this like systemic thoughts with a few original things that no one had ever done, just like totally new things.”

But we’re like, there’s this whole thread at UW which is too close. And then we brought Brian in, and he was awesome. And then we brought in the IP there. And there was this stuff from NASA. And then we found this brilliant engineer, Sam. A guy I had worked with at Mako Surgical, Randall, who is now working at a supercomputing center for the Department of Defense.

So he came in. I think the order might have been, like — I don’t know which one came first. I’ve got to go back and look. But they were all arriving around the same time, and then one signed first and then the other. But that was kind of like a very early picture of that group. So you have this like brilliant software guy, Randall. This like NASA systems engineer, Sam. You’ve got the physics guy, Graham. And then you’ve got Brian coming in from UW. And it’s almost like, you know, they’d be the middle of film one of the trilogy, if that makes sense.

BYE: Yeah. And I know that when people recount the public history about virtual and augmented reality, Ivan Sutherland is often cited as one of the first people to put stuff together. But I know that Tom Furness was working on it also at the same time, but it was more of like a secret track of some of the stuff that he was working on that may not even be declassified yet. But some of the stuff that he worked on in terms of the virtual retina displays was the idea of just essentially shooting photons directly into your eyeball.

And so when I talked to Tom, it was like, I made the connection that some of the stuff that he had been doing there was connected to the stuff that Magic Leap was doing. And so I know that eventually, you have the creation of the big prototype that — well, when I first got into the field, it was in January 1st, 2014, when I got my headset.

At that time, Oculus had already been out for a year. And I think it was later that year, in 2014, where there was that funding announcement that put you on the map. So it was sort of in this mix of before the HTC Vive had been announced at GDC later that spring. But it was after the acquisition of Facebook that you come out from stealth to say that you’ve been working on this for a number of years.

So help me understand what was happening before you had come out and became public. And if you had heard of all these other strands, because Sony was working on the PlayStation VR, you have Microsoft working on the HoloLens, you have Palmer Luckey that’s working on Oculus. You have folks at Valve, they’re starting to work on stuff with Jeri Ellsworth.

And so, were you aware of all these other things that were happening? And maybe you can contextualize what Magic Leap was doing. Because you’re all independently with these different strands coming up with the same type of either virtual or augmented reality. So yeah, I just love to hear a little bit of what was happening in that time period when you’re still in stealth?

ABOVITZ: So Kent, I gave you two postings, which I posted in public, which you can read later. One of them, and I’m just going to use that — When you talk about the history, I got really deep into the history of it all, XR and reality, as I kept going. I went to like, okay, there’s Tom Furness, and there’s Jaron Lanier, and there’s Ivan Sutherland’s lab.

And I just kept going. And there’s like the stereoscope, and Phantasmagorias. And it just got weirder and weirder, and deeper and deeper. And you go back like thousands of years. And I’m like, “Whoa!” So I realized that whatever we were doing was going to be part of a multi-thousand year, not somebody in the 90s to us, or someone in the 70s to us.

But it goes back a really long way. Like there’s a thread which I think I’m going to do a documentary film about – we could talk about that one day. I wrote this thing “The Brief and Incomplete History of XR,” but it goes back. You can really go back thousands of years to this line of thinking that leads to a lot of these things and branches. And one branch turns into cinema. And one branch turns into like the View-Master. And another branch turns into like surgeons wearing x-rays on their head that look just like VR & Oculuses, but they’re beaming fluoroscope x-rays in their head, to a branch that turns into holography in the early 1900s, and MIT plays a really big role.

So as you keep following that, I was like, I want to go to the root of the tree, and go back and back and back. And end up like shamans and Buddhists and Hindu religions and like the idea of Maya, which is like “all reality is illusion.”

We actually brought Tibetan monks to the building at some point because I was like chasing — I was going all the way trying to figure out like “What is really going on here?” So it wasn’t “We want to build a VR system.” It wasn’t “We want to build an AR system–” And we’re like, we’re trying to understand the nature of reality with this early group.

And this was pre- any real investor coming in. This was like “I’m angel funding it.” I got a little bit of capital I could put in post-Mako. My wife made me put almost the rest of it away. So we had this finite pool to do these like really weird, out there, almost philosophy meets phase-one, research grant, kind of stuff. So the early time was super fun. Like we’d have whiteboards where like, “We need a black hole that floats over here to create dark matter.” Things like any investor would not be happy with, but it was pre all of that.

And I think that early exploration was important because we were trying to trace back the whole history. So we became like hyper aware of like all of the different things and what their failings were and where do they go astray. For example, we became obsessed with holography. Like, I actually felt that like stereoscopes were a dead end, meaning that if I take two flat planes and try to convince my brain that is the real world, no matter how high of a resolution — I’m missing some fundamental things.

So then we went to holography, where like holography is capturing the totality of light field physics on film. But it’s frozen in time. And we’re like, “Well, that’s not it either, because the real world is not frozen in time. It’s dynamic. There’s a feedback loop with your brain. It’s rich. It’s got all the parameters of light fields.” And we’re just going around trying to go, “How do we actually unlock all of this?”

So one thing that’s now — you know, we’ve published patents. And I wrote a small white paper on it, so I could talk a little bit about it. We really were going after what I call “Neurologically-True Reality.” Like, I was at SIGGRAPH when I saw Jaron Lanier speak — he’s a genius — about the whole possibility of virtual reality. Read all the SNOW CRASHes, RAINBOW’S END, all of that kind of stuff.

But I wanted to chase like, “What’s the end game look like?” Saw The Matrix. I never thought I’d work with people like [John] Gaeta or Neal [Stephenson], which was kind of insane. But the idea of like, “Could the magic of the brain — how it displays — could that be unlocked?” And the first thought was — To do that, you had to recreate the entirety of a lightfield.

Now I think this is where there was like confusion both internally and externally, because you have so many people in this field who are hardware engineers, optics engineers, display engineers, and that’s all they think about. And then the other side of it is like, “Well, how does the eye-brain system process these signals?”

So I’ll describe it this way, if you give me a minute, because I think it’s probably the root of the whole idea of why I started the company. And it’s still something to be chased, because it’s like an impossibly-difficult problem. And we did make a lot of progress in that direction, but it’s one that I think still needs to be chased, because I think it is the end game.

And it’s basically this — is like, I didn’t think the eye sees, I thought the eye was the filter. And I thought the retina acted like a CCD. And what it would do is it’d take the full complexity of the universe, like the physics of lightfields coming at us, then there’s sound and touch — we can get into those later. But just — if you just focus on light fields. What’s outside of us is that full, dynamic, photonic wavefront definition of light energy. And it’s everywhere. And it’s all over the universe. And it travels at the speed of light. And it’s super complex. And it has all this information.

And we came up with this idea that the brain does not need all of it. So it was the idea that the brain could not possibly intake all of the lightfield physics. So then we — through this notion of evolutionary biology, which is like, “Well, we evolved to survive. So we’re here. So somehow we took in enough of what we needed to survive.”

And there’s something about the human body, our design, how we do things, even the distance of our fingertips to the eyeball, which tells us clues. If you start to look at how the brain works, there’s like little weird footprints of human evolution to deal with the fact that we took in what we needed to survive.

Another thing I haven’t talked about in public, but it’s worth time on here is — We thought about self-driving cars, and how they work. And if you think about a self-driving car — like, Sebastian Thrun is a friend of mine who won the DARPA race across the desert. He was the first to win that race. And it was kind of an amazing thing. And opened up the door for the whole self-driving car world. So I don’t remember the exact year he did it, but it wasn’t that far away from the time when we were starting up Magic Leap.

So we started to think about his car. And this is a bit of a conceptual, philosophical leap for a second. You’re a car driving through the desert. Now the desert is beautiful. It’s got the golden light of the sun. You see all the mountains, the cactuses, or maybe like a fox running around. And it’s just beautiful. The car doesn’t see that. The car’s covered with sensors. And it takes in sparse samples of that desert scene to add to the 3D map it has of the world that it got inherited from like other scans. And all it’s doing is driving in a line.

So that poor car never sees the beauty of the desert. It never sees that light. It never sees any of it. It’s just like in its Plato’s Cave. But it’s evolved through our engineering to do what it needs to do, which is get across the desert and win the race.

Then we have this idea that “We are just like that car.” Like human beings are in Plato’s Cave. Our eyes and our ears, our sensors are taking in and throwing out most of the information, meaning we take in just enough to have our brain build upon an existing model of the world. So we had a theory that there is an existing model of the world, #1. That comes in our brain that we’ve inherited, that you pass on through every generation. And we keep improving it.

So somehow our brain structure has a model of the world. And that model of the world gets reinforced when you’re waking up as a baby. You’re touching things. You’re seeing things in it like — almost like a machine learning, which is based on the human brain to an extent. We keep adding that model and shaping it, makes it like person-specific, human-specific.

But you have this like built-in model of the world. And we thought if that was true, that could explain why we only needed sparse data. And we kept looking through the structure of the eye, the structure of the brain, and where the visual cortex is, and sort of the density of parts of the brain, and how it does things.

And we’re running all these interesting ophthalmologic experiments on how the brain might take data in, and throw out what it doesn’t need. And the idea was — We throw out only what we needed to survive as human beings. So it was this collision of like evolutionary biology of people versus like this very intense, exotic, lightfield physics.

And then if you think about it for a while, it does match up. There’s this like nice harmony between what the human brain evolved — for many, many generations, you know, millions of years, tens of thousands of years, whatever number of cycles — enter this harmony with this like infinitely-older, lightfield physics of the universe. And we’ve adapted and evolved into this. And this is like stable system of equilibrium. And I was thinking, if you insert anything into that, that’s going to stress out the human brain, that’s going to be a problem.

Like, we needed to do something that would be like inert and then eventually find its way to be biomimetic. So before we thought about device, it was like, what’s the theory going on there? And if you then look at the whole history of like all these attempts at augmented and virtual things, every time you insert one of them and they break that equilibrium, there’s some kind of stress: there’s nausea, there’s dizziness, there’s something. There’s like lack of visual quality.

So we started to build a model of parameters on everything we would need to do to ultimately get that equilibrium. So I’m going to stop there for a second. Because that was just a lot of stuff. But does that make any sense? Because that was an opening theory to why we were even doing this thing.

BYE: Yeah, I think that helps set the broader context. And I guess part of the question that I was asking in terms of Magic Leap is — You’re doing this in stealth, but there’s also other people doing this in stealth. And I’m just curious at what point you become aware of some of these other projects, whether it’s Microsoft HoloLens or Valve with their AR lab that Jeri Ellsworth was setting up, or Palmer Luckey with Oculus that eventually had the Kickstarter in August of 2012. And so that’s a pretty public thing that happened within that month that put VR on the map. But there is all these other projects of AR that both Magic Leap and Microsoft were working on. And so, just trying to get a sense of this period because it’s like really an interesting turning point in the history of XR. All of these things are independently happening at the same time, even with Sony and their PlayStation VR. They’re all pushing forward the technology, incrementally. And I’m just wondering how you came across any of these other projects or if you had any connection with them before Magic Leap came out?

ABOVITZ: Yeah. I’m going to tell you about our intersection with Valve, and almost intersection with Oculus, which is fun. But also, you might get a kick out of this. I kick myself over it. If you look at our designs, like 2010, 2011, we designed a bunch of like — if you’ve seen those renderings of what people are projecting the Apple VR system to be. We had done all these like systems of like “Well, should we do VR or AR?” And then we had designed this thing which was VR with Video Passthrough.

We actually filed a bunch of IP on that. And I have all these drawings that are sitting in Magic Leap, owned by the company, of like video passthrough, and all these sophisticated optics to do that. And it was like a thread. And I thought, in the end — and you could say, “I was stupid” — In the end, I thought that thread was not the way to the mountain top.

It was the way up the mountain. You can go up the mountain quickly. Maybe you should have done that because it was probably a multi-billion dollar fast path. But I was more of a theoretical purist on trying to solve this physics-meets-the-brain problem. And I’m like, “We do that to cheat to get somewhere, but you won’t get to the mountaintop.”

And the goal of Magic Leap was, we were going to figure out how to go mountain top and try to climb up that mountain on the right path, the one that really takes you there. And I knew the other one would be like this quick hit, I called it “drinking seawater.”

And literally, all the stuff you’re seeing come out now, in ’22, like VR with video passthrough displays. I have this whole bunch of renderings of those things, and they would have worked. We could’ve been very early in that. I know Valve did some things a couple of years ago on that. But I felt like it wouldn’t ultimately meet the goal of actually using the brain as it evolved and as it was designed through its evolution.

I kind of felt like there was something beautiful about the way the brain lives in the organic world, and it’s perfect in how it images with the physics of the real world. It’s just amazing. And I felt if we did this other thing, we would break that equilibrium. One’s billions of years old. One’s tens of thousands and millions, and like, why would we break that? That would be like arrogant of us. So I wanted to do this, like, “Can we somehow find a way to slip in between those things?” Which is like a much harder path, like an insanely difficult path we took.

So anyway, we decide that’s our path. We hear about Oculus. And we saw all this stuff going on. And you know, Brian was living in Seattle, Brian Schowengerdt. So we’re on our way to San Francisco. We’re going to Sand Hill Road meeting with investors. I think doing our first significant round of funding after me angel funding the company for a bit, like we need more. And I had hit a limit, like if I go any more, then everything I had put away from Mako would be gone and I would be like a poor college student again. And I think my family was like, “You’re not going to do that twice.” Like, “You bet everything on it. It worked. That’s put away so you can like have a secure life, but you’re not touching that anymore. You can use this much to get the company going. Now you can go see if other people believe in the idea.”

So we go out to the Bay Area, and then when we’re there we get invited to go out to meet Gabe at Valve. And I’m like a huge Valve fan, like love Valve. Michael Abrash was there, Carmack was collaborating with Palmer Luckey. And we’re hearing about, you know, we’re reading everything and friends are in the network about this amazing open source VR community that’s being formed.

And what I knew about what was happening with Oculus at the time was there’s like this group of people working on it: Carmack from ID, Abrash from Valve, Gabe is kind of like the Yoda pulling everyone together, You know, there’s this whole super commune feeling like it felt to me, like the Homebrew Computer Club, like everything’s coming together. We’re going to reinvent computing. Very idealistic, you know, Gabe in particular was like, open and idealistic.

I thought it was amazing that id Software’s collaborating with Valve and all these open source people. And then no one cares about IP or copyrights or anything. I’m like, “It’s just going against the grain of how the whole world operates.” And it felt incredibly utopian. I was like, super excited by this. Like, maybe this is the Dawn — this is the Age of Aquarius, Part Two. Because I was really into the idea of the Homebrew Computer Club and this idea of open computing. I thought that was awesome.

So we get invited up there. I remember hanging out with like, I think it was Brian, Sam, a couple of the other guys in San Francisco. We’re at this hotel near Stanford, and we’re wondering — We just have this late night talking like, “Well, let’s just think this through for a second. There’s this whole open source community. And I think they wanted us to be the augmented guys.”

Gabe is like, “You guys come up, and do this.” You know, Brian and UW had a great reputation. They sort of knew that we were partnering up and like, “You guys solve that problem. We’ve got this amazing game engine. You know, the Oculus guys had VR handled. We’re just going to change computing. We’re just going to catch everybody by surprise.”

And, you know, if we could go back in time, like in a Doctor Strange movie, and make that reality happen, one where they did not sell to Facebook, everyone hung out with Gabe. I think that future would have actually been kind of awesome. Like that would have been this great benevolent XR, and everyone in one nice big open platform. You know, it was very Web3. It was very decentralized. It was very super cool. This is like 2012 maybe, you know, before Facebook bought them.

And then kind of in the end, we decided — We had the Spidey Sense Alarm. Something did not feel right about this utopian notion, because we’re like, “No one’s signing any agreements. Everything’s wide open. Like, has the world really gone into this, like, post-capitalistic, hippie commune, techno, like solar punk world?” You know what I mean? Like everyone wants that to be the case. And I was like, “Well, yes, let’s do it. We’re all very idealistic.”

And it turned out that the Spidey Sense was right. Because not long after that, Oculus was acquired by Facebook. And we’re like, “Oh my God! Like, Valve’s going to go nuts.” And so we end up, I think one of us called Abrash and we’re like, “Oh my God! You know, what happened?” And he’s like, “I’m joining — I’m joining Facebook.” And we’re like, “What?!” Like it was like this — you know, is that a bad thing? Or a good thing?

But it just felt weird that this utopian dream that was hovering around Valve and Gabe was kind of shattering. And this new element came in — was it Jedi or Sith? I’m not going to say that right now. You can form your own opinion. But it felt like the Jedi Alliance was getting slurped into something else.

And then we heard John Carmack went there. And Gabe stopped talking to people for a while. This is my view of it. You can have them on your podcast. They could say, “I missed it.” But it felt like this moment in time where we were all going to hover around Valve as the center, as the new utopian capital for this new form of computing — Gabe, by the way, had the scale. He had the credibility. You know, he had the funding to probably pull it all together.

Like, again, if I had a time machine, I think that would be such an awesome future. Everyone would be really happy about it. And then that didn’t happen and it went one way. The device was named properly, a great “rift” happened. And one group of people went that way. And then others scattered — like, Jeri scattered from Valve into forming her own company –

BYE: Well, she was — She was fired from Valve, more accurately. It wasn’t that she just “scattered,” that she was actually forced out because of some of the things that were happening with Abrash. I mean, I have her telling that story, but –

ABOVITZ: For the record, I’m a huge fan of Jeri, super genius, one of the great hackers in computing. She’s underrated, and she got — On the record, somebody should give her a lot of funding. So putting it on the air there. She should be one of the real, like — I think the fact that she’s made it so far, it’s been too difficult for her relative to other players. And I think she should get her due, because I think she’s got a view of how to do it, which is different from others. And I think it sits in the ecosystem. So anyway, that’s my plug for Jeri. She should get her due. She’s great.

BYE: Yeah. I had a chance to do an interview with her at AWE 2021 where she told her version of the whole history and story. And one of the things that she was saying was that her perspective was that “AR happens before VR.” Because she wanted to have — what Gabe’s initial vision was, to have people that were at home sitting around the dinner table, all playing games together.

And the technique that she has with Tilt Five, with the retroreflective material, works perfectly for that in terms of having like a real experience of — like you were saying — the real experience of the physics of the light that you see. But in that context, it’s very fixed and static and on a tabletop scale, and mostly for gaming. And so for that context, that works great.

But when you’re moving about the world, I feel like there were probably, in hindsight, a lot of innovations that had to happen with computer vision and artificial intelligence to get to the point of having it fully mobile and portabilized. All of these core technologies are, in a lot of ways, what Magic Leap was working on, as I look at the company and see the many different things that you were trying to innovate in terms of the optics, and the digital lightfields, and the operating system, and the whole portable mobile computer that you have on your head.

Really ahead of your time in some sense. But at the same time, there was a complex of all these exponentially difficult problems that had to all come together, all at once. And I think at this point, we’re at the point where the technology is maturing to be able to really pull it off at scale. But at the point when you were doing it, VR has proven that some of those foundational bits and pieces needed to be in place to really grow out the ecosystem, like the Quest tracking and the computer vision aspects.

So you had your own track of all that stuff, but I think there’s a very Hegelian dialectic process of looking at history and seeing how things develop and how they play off of each other. And it did go towards having these companies with a lot of access to capital. And I think, you know, it’s hard to say whether or not that open source utopian vision would have been able to be at the scale we’re at now.

It may have been like super small and still kind of like a hobbyist thing that wasn’t really on the radar of people talking about the Metaverse in the mainstream. I think the history that we are living with in this version of the multiverse is that it did go the corporate route, and it was highly capitalized with billions and billions and billions of dollars –

ABOVITZ: Tens — Tens of Billions –

BYE: Tens of billions. –

ABOVITZ: Easily, tens — By the way, Kent, I got to talk to you about the tabletop gaming, because if you go back to the very beginning of it, again, I got introduced to these two comic book writers, Anthony Williams and Andy Lanning, who had done all these great comics for Marvel and DC. They knew a friend of mine, Richard Taylor, who co-founded WETA Workshop, the guys that did The Hobbit, Lord of the Rings, all of that. Richard, by the way, was a founding board member with me of Magic Leap. So he was kind of one of the early fellowship. You can think of him as Aragorn or something, you know, maybe our Gandalf.

So he introduced me to Anthony and Andy, and they’re the guys creating comics on projects I was doing out of Magic Leap Studios. Like we made comic books and we were brainstorming around. I think I had a friend who was a film director who might have been part of this in Los Angeles. And one of our first visions for what Magic Leap would do was a bunch of people sitting around a table at home playing a game we called Monster Battle.

The idea was like — We had this rendering. This is what we showed early investors. You’d have this pair of glasses, each person would have their own monster. So I’d have this, like, Godzilla, you’d have a King Kong. So it’s like all the classic movie monsters. You dropped them on the table and they would fight on the table. But the thing we wanted to do at Magic Leap was, like, where it got really serious. If you wanted to level up, you would take it outside. You’d grab your little Godzilla. I’d grab my King Kong. We’d go out to the playground. You’d drop it on the ground and suddenly it gets full-sized, like 20 stories tall. You know, King Kong goes to full-scale size. And with what we wanted to envision, you would still see it.

So we wanted to have this idea that you could start with like the Star Wars kind of chess game of, like, you know, famous movie monsters, King Kong, Godzilla, Mothra, and all of that. And what we did was we reinvented all of them into classic characters we called Monster Battle. We ended up using those characters as all of our optics tests for every version of Magic Leap from the very first prototype onward. We named one Gerald, one Al. Like, Gerald might have been the King Kong one, and Al was the Godzilla — I might have them mixed up.

And they were a little bit more Pixarized. But anyone who came by early would see these things there. But the dream was you’d have this, like — kind of like Jeri is doing with Tilt Five, these tabletop-sized creatures in a Star Wars chess kind of thing, which I thought would be awesome. And they’d battle with each other.

The leveling up was our big dream, that you could somehow take it — and the whole world would be digitized — and you could drop, let’s say, your Godzilla outside of your house. And it would just scale to, you know, 30 stories tall, like way past any video game, any board game. But it would also know where the whole world was. And we designed a system that would ultimately get to human field of view, but seeing the real world directly. And you would be able to see this Godzilla running around, and it could even run away, like it might run from your city and you might find it in Chicago three years later. And you could look for it on Google Maps or in a satellite view, and it would be there running.

And if you went there and you had your system, it would be in that place. So we were envisioning what I called at our Leap Con, the Magicverse. We’re envisioning that like in 2010, 2011, this idea of an entirely scanned world where every single object could be known and co-located. And the whole world would be like one big storyboard, one big game board, so that you can go from a table to the inside of a house to the outside of the world.

We realized that was a very ambitious program. It’s a bit like SpaceX’s Occupy Mars. But we had this like very big idea that you’d have infinite layers on the world, objects could be of any size, and infinite amount of players, which had all these complex computer problems. But it’s the same scale as like, “How do you go occupy Mars?” It’s going to be a really hard problem.
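[To make the “one big game board” idea above a bit more concrete, here is a minimal, hypothetical sketch of how world-anchored content (like that Godzilla dropped outside your house and found again in Chicago) could be stored by location and layer and resolved for a nearby device. The WorldAnchor fields, the anchors_near helper, and the 200-meter radius are illustrative assumptions, not Magic Leap’s actual Magicverse design.]

```python
# Hypothetical sketch only: world-anchored content keyed by location and layer.
import math
from dataclasses import dataclass

@dataclass
class WorldAnchor:
    name: str
    lat: float      # latitude in degrees
    lon: float      # longitude in degrees
    scale_m: float  # rendered height in meters (tabletop vs. 30 stories)
    layer: str      # e.g. "monster-battle", one of many layers on the same world

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def anchors_near(anchors, lat, lon, layer, radius_m=200.0):
    """Return anchors on a given layer within radius_m of the device's position."""
    return [a for a in anchors
            if a.layer == layer and distance_m(lat, lon, a.lat, a.lon) <= radius_m]

# Usage: a 90-meter "godzilla" left in Chicago, rediscovered by a device nearby.
world = [WorldAnchor("godzilla", 41.8781, -87.6298, scale_m=90.0, layer="monster-battle")]
print(anchors_near(world, 41.8783, -87.6300, layer="monster-battle"))
```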

But you’ve got to go step by step. You have to go Mercury, have to go Gemini, you have to go Apollo and then keep going. So that was the view. But we were intersecting at conferences and through knowing friends with what was happening. And I think at times we felt — We kind of looked at ourselves, “Are we making a huge mistake?” Because the VR path was quick, and much easier because you could use off-the-shelf optics. You didn’t have to solve all the cockamamie problems that I’m describing. You could literally just whip the thing together and go. I mean, probably with a 50th of the problem sets we needed, you could just build a decent working VR system. Because there’s just a whole different bag of problems, like it was a much more consolidated problem and I think that’s why it came earlier.

But I call that “The Early VR.” If you want to go to the ultimate version where you put something on and it really is indistinguishable from our actual reality, you have to solve all the problems Magic Leap tried to solve in what I called “Neurologically-True Reality.” So our view was you would play the long game, the really long game, which could take into the 2030s. Then you get AR, you get VR, you get everything, you win the whole game.

So I think there’s like this — it’s like I think of it as a very long race where there’s like, there’s like winners and there’s different spurts. I still am not sure what happens when we get to the finish line of like it all gets resolved. My guess is around 2035. Like you’ll see like the really big winners like and you’ll probably hit what I call “Neurologically-True Reality” in augmented mode, in virtual mode. People are like, “Oh my God! This is like, It’s Ready Player One, Matrix level.” I think we get there by then,

But that’s like SpaceX finally setting up a base on Mars and achieving the Occupy Mars that they wear on their sweatshirts. So I don’t know if that makes sense, but that was like our mentality at the time.

BYE: And as I hear you speak, some of the themes that come back again and again is the focus that you have on these entertainment or immersive experiences, being inspired by sci-fi, the storytelling. When I went to Leap Con — I have to say that I had been going to Microsoft’s HoloLens — Microsoft’s Build. And a lot of the demos there were like, nothing to write home about. They were all like just how to use the AR technology in an enterprise context, which wasn’t really a consumer product. But when I went to Leap Con, there was a lot more focus on immersive experiences, and Meow Wolf had a whole mech robot there, and different entertainment experiences.

And as I see all the different focus on alternate reality games, ARGs, and hiring Sean Stewart, you know, one of the creators of ARGs, and also Neal Stephenson, you know, the author of Snow Crash, as a science fiction author. I’d love to have you just maybe take a moment to reflect on not only how the stories and IP like Star Wars have influenced your thinking on this vision of the future of Neurologically-Real Computing and augmented reality, but also how all the other sci-fi may have been inspiring this vision that you were going down.

ABOVITZ: Well, that’s great. And I also want to like unpack and maybe correct a couple of things that are just general misperceptions in the market. So there was no doubt a deep love of creativity and storytelling and games and film and music and comics, not only from me, but from a lot of the early folks who joined. It was like, absolutely no doubt.

And we didn’t have a problem letting that shine. I call that “The Yellow Submarine.” You know, like the Beatles Yellow Submarine. There was also the aircraft carrier of Magic Leap. And the part that people have a hard time characterizing is that “Could you be left-brained and right-brained simultaneously?” And I think that was not a clean story in the news. Like, they just can’t grasp that idea. And most people are not both at the same time.

But I had this idea that we could be both. And what I mean by that — from graduate school right through to starting Magic Leap, I had been in an FDA-controlled environment, doing robotic surgery, which is highly regulated, incredibly safety oriented, you know, the same spaceflight level of risk. You know, people die if you don’t do things right. So I had just come out of that. I was trying to inject all these imaginative game engine and animation things into that field, and it worked. So some of that followed me immediately into Magic Leap. So the outward part that some people saw was what you described. But the day I started the company, immediately, like, health care companies started to work with us.

Like a friend of mine was a big competitor of mine at Mako, named Stefan, CEO of a company called Brainlab. They’re like the most advanced medtech, computer software company of their era. And they’re in Munich. I’ve been to visit them. Stefan’s kind of like a Steve Jobs type figure in Germany. He’s a real interesting character. If you go to the Brainlab site, he ended up becoming a very early partner.

He loved what we were doing. And my view is in enterprise stuff, we were going to find strategic partners like this. Now those partners were not going to be broadcasting their work to consumers. They were going to be working with us and building an advantage in their own businesses. So we had a huge array of like health care, automotive, industrial, defense, all kinds of things that we couldn’t really talk about too loud.

Some of it would appear once in a while, like, I know there was a YouTube video of, like, Navy soldiers running around with Magic Leap doing something. I can’t comment on any of it other than what’s on that YouTube video. And they put that out. We didn’t put that out.

So we had, I would say, this equally-weighted effort where lots of companies would reach out to us, many of them quietly, and entities, some of them universities, some of them large, large automotive players, aerospace, all kinds of stuff, all interested — CAD companies, math, physics, who were not on that storytelling side.

And my view is — If we build a platform, we’ll let everyone build on it. I had a love for the storytelling part and the creative part. Like I envisioned the company as kind of a weird mash-up of Disney and Apple. But I also had this idea — and you might have grown up in a similar way of computing. My computer did not care if it was running a spreadsheet or a video game or an adventure game and then going back to like watching a video stream and then going back to a CAD. Like, it did all of it.

So I had a thing, like the first thing we would do would be like a stem cell. It would test all those areas. It wasn’t meant for one field or the other. We wanted to see — What’s going to make this group happy and that group happy? What did everyone want out of it? So we put something that we thought was like a first version to get all of these different entities trying it.

So we had so many cool things. Like I remember being at my office, sitting in a folding chair, and a vehicle appeared around me. A full-size vehicle, like, this was done with an automotive player. And you’re sitting in the car and, like, parts worked. You could change the color and change things. And a CAD engineer somewhere else could make shape changes. Could you stretch this out? They would stretch it. And multiple people could be in that car at the same time, people in the back seat, people in the front seat.

And it was really awesome. I’m like, this might be the future of how you sell cars, or how you design cars. And we had people that did sculpting and CAD and all sorts of prototypes and pilots, because I realized, like, it wasn’t only consumers, it wasn’t only health care, it wasn’t only enterprise. It’s going to be all these sectors that are all going to explore computing.

I think the one that the media picked up the most was the shinier objects, you know, like the cool shiny looking things, because sometimes industrial CAD stuff or a boring enterprise thing is not sexy for an article. And it doesn’t make for a good video roll, and doesn’t bring eyeballs. I don’t know if that makes sense, but I had that experience of, you know, in terms of being in the company, there was a lot of non game, non story stuff going on, but it was with people who wanted to keep it close to the vest. It was their company, their work, not something they wanted to show their competitors yet. But we had hundreds of them, thousands actually.

BYE: Yeah, that makes sense. And I’d love to have you maybe comment on the science fiction and the other ARG elements because, you know, you hired Neal Stephenson, a science fiction author, to come work at Magic Leap. And I’m just curious to hear a little bit of the other stories or science fiction influences that had an impact on you in the creation of Magic Leap.

ABOVITZ: It was weird, because I’m still friends with Neal. He’s awesome. It’s one of those weird things in life where someone who’s a mythical figure, you end up signing their expense report for lunch. You know, I was like, this isn’t really happening. And it added to the surreal nature of that company, of Magic Leap. But Neal joined early on, and one of the things to say about him is he’s not just a brilliant author, he’s also a scientist and engineer. And he likes to be one of the guys rolling up his sleeves, almost in a very blue collar way. Give him a hammer, give him some tools, and he wants to get in there and build stuff. Which is amazing. Like he’s much more of a get-in-there engineer than anyone maybe fully realizes, and actually practical and smart on so many levels.

So that was just like — you know when they say, “Don’t meet your heroes.” In the case of Neal, it was “It was great to meet him. He’s amazing, and a friend and cooler than you’d imagine.” So that was all. I met a lot of other people and that bubble was burst and I don’t want to mention who they were, but it was like, I can’t watch their movie, listen to their music, or read their books anymore. But Neal, the opposite.

But there was this interesting thing where “Where was Neal going to go?” At the beginning, like was he going to go to the O company — or the F company, which might have been super lucrative, like, you know, “Here’s a big check. Come join us.” A lot of big names are going there, but for a variety of reasons he joined our thing. Because I think he liked the purity of the theory and the rebel alliance kind of nature. We were a little bit left of center.

And I created something called “SCEU: Self-Contained Existence Unit.” That was going to be Neal’s team. It was going to have 12 people in it, The Dirty Dozen. And he was going to be Lee Marvin. It was — Everything had to have a story. So Neal was Lee Marvin. It was The Dirty Dozen.

The name of the team was called SCEU. My job was to shield Neal and the team from any bureaucracies that would sprout as the company got bigger. And they would do the most extreme edge of ARG exploration of storytelling, which was awesome. Like take everything we’re building, you know, bring in the smartest co-writing authors and builders that you know. And let’s figure out how do you tell stories? What does that future look like? And some of the public stuff that they did, like the Goats, was like a bit of a sense of humor. But also, if you actually look at what goats was — I don’t know, did you ever see goats, Kent?

BYE: I saw the demo that was shown at LeapCon. So there were, like, virtual pet AI bots that were maybe emergently existing within a space, with mixed reality interactions with them.

ABOVITZ: Well, there were like two big projects. There was like the genius-level, Neal, full-unfolding story on the World Project, which I think is somewhat still confidential and secret. I have to check with Neal on that, which was just awesome. And then there was goats, which was the public-facing, look-over-here project, but it served its purpose.

And the funny thing with goats — I’ll tell you about one example of it. We took a room that was completely digitized, so we had a full digital twin of the room. And you could actually walk into the room with VR, take off the VR, put on the Magic Leap. And we were just like going, we have a VR version of this room with everything in place. When you feel like it, take that off, put on the headset. Okay, now it’s the real room. And we were trying to see: could the AR and VR elements commingle?

But goats was — Could you create sentient creatures that knew the world? Could they find their way around objects and interact? And it was a step towards, like, if you’re going to tell stories one day across the whole world, well, you need that. You’re going to need all kinds of sentient things running around, and they need to know where the world is. They need to go under tables, over tables, on things, respond to you. And there was a version of goats that actually was pretty damn amazing. Like, it was just, like, very next level. Something’s going on behind a couch, on the couch, jumping on a table, jumping off a table, and there it is. And then you punch a hole in the wall, you see, like, a window. And with the team’s sense of humor, there’s like a 300-foot goat out in the field, and then they go to jump in through the window, into your room, onto a chair. So they were sort of exploring the mechanics of sentient creatures in both AR and VR environments and how they would blend back and forth.

And then we wanted to open source the whole thing, make it a big open platform and have lots of devs. You know, they created this, like, Goat SDK, and the idea was they’d have lots of devs remix it for any device all over the place. So Neal was going to be like the punk Seattle indie band version of our dev kits, open sourcing goats and trying to create this idea of, like, story tools for everywhere. That was sort of the goats mission.

And then the behind the scenes one was like full, novel storytelling, using every bit of technology trick we had learned. I’ll say this about it, and then maybe one of these days you’ll convince him to come on and see what he’d tell you, because it’s — I don’t want to mess with, like, that piece because maybe it gets unlocked one day and Neal will do something.

But we rented this building in Seattle, and I think it was a third or fourth floor. And we digitized, I think, several miles around the building. So you have this digital twin. And the idea was you’d sort of find your way somehow into this thing, like you’d find a way into this building and you’d unlock all this stuff. And when you looked out the window, all these things would be happening. And he actually made that work. It was just next level. It was really cool to look out a window and see real people and real traffic, and then other things that really weren’t there, knowing where all the traffic and people were, and jumping around over stuff at this wide scale.

And you can imagine that, knowing something like the Magic Leap 2 now exists, but if you imagine you’re Neal, you know what’s coming after the Magic Leap 2, multiple generations out, and you’re designing for that. So he wasn’t thinking there’s only going to be a Magic Leap 1. He knew multiple generations of where we were going and, like, what these end state things would look like.

So he’s like as an author and a science fiction storyteller, how awesome would it be to tell stories across the world using the best versions of these things where we want it to feel seamless, the device to get lighter and lighter, and less like you have anything on at all, and still have like amazing resolution wide field of view, the sound and the subtlety. So you just forget there is technology. And then if that was true, the storytelling, you could do with that was amazing and next level.

So we had, like, Sean Stewart, just a next-level, brilliant guy. I hope that what they learned and prototyped there makes its way out, either in that form or in some other manifestation — because it was just incredible. It was one of the things where I’m actually biting my tongue, and I hate it, because I wanted millions of people to see it. If the pandemic had not happened, we were probably going to be doing something with one of the major studios and taking a lot that was, like, maybe 40 or 50 acres. Take an entire giant studio lot, digitally twin the whole thing, drop Neal’s story on top of all of it, and bring like tens of thousands of people a month to beta test that next level.

And then if that worked, we were going to unlock it across, like, a regional part of the United States. And if that worked, we were going to unlock it across the whole U.S. So, like, publishing a story from Neal and his team across the whole country. And that kind of thinking, which Neal was doing with his SCEU team, is totally next-level awesome.

I really hope somehow, despite the blip of the pandemic, it’ll find its way there because like, it was just cool. And it’s not like, “Oh, there’s a little creature on the sidewalk, click it, get ten gold coins.” It’s like, “No, it’s like full, novelistic way of telling a story across the whole country,” which I thought was just sick and amazing and super cool. And the world needs to see that one day.

BYE: They also had an ARG, or at least a novel, that Sean Stewart, Neal Stephenson, and one other coauthor did, I think from a project that had maybe started at Magic Leap, and they released, like, an audiobook version of some aspects of it. And I don’t know if that was separate, but it’s some sort of innovation that had come out of their collaboration there that is out and available for people to listen to. I haven’t had a chance to check it out yet, but I don’t know if that’s connected to all of that.

ABOVITZ: Well, it’s connected to the SCEU Stew, you know, like the stew of fun things they were brewing. That novel escaped out of their lab and made its way to an Amazon partnership. I think Neal’s had a good relationship with the Amazon team. And it became an audio book.

By the way, there were a lot of things seeded in the world by that team that nobody fully realizes. It’s out there and soaking. So if they ever get to activate the whole thing, it’s there. But I won’t go more than that. But it was really cool. Sean is clever and Neal’s clever and Austin and those guys, they just — So you know, you could drop things and it’s soaking and soaking and you’re like, “Wait a minute.” And then they can activate the whole thing. But you know, it could have a long game. Like maybe that whole story will come together. I hope it does.

BYE: Yeah. ARG people discovering things and the story unfolding there. So it sounds like there’s a lot of stuff yet to be unfolded there.

ABOVITZ: I hope so.

BYE: But there was a metaphor that you’d used when you were talking about the different iterations you have, the Mercury, the Gemini and Apollo, where it sounds like you’re having to have these multiple tracks of innovation. Because the big lore came from the people that I was able to glean some information from; there’s, you know, lots of NDAs and secrecy around Magic Leap. So people weren’t even supposed to really acknowledge that they had seen any demo. But in the course of talking to enough people in the XR industry, and people who have had the privilege of going down and seeing the demo, there was a big, massive refrigerator demo. But then eventually you had to portablize it into what eventually became the Magic Leap 1 and then Magic Leap 2, and then further iterations.

But you had these multiple tracks — As much as you can tell me, what were people seeing? What was the demo and the content if they would come down? Because you were flying all sorts of different investors, celebrities, people from around the world coming in to see this magical demo. And by most accounts that I heard, it was pretty mind-blowing. I even did an interview with Paul Reynolds, who told me about an experience that had this synesthesia effect, where he was able to see a light field but also felt like there were some haptics. And so you get sort of this phantom touch type of phenomenon that you’re able to evoke using this digital light field technology. But what can you tell me in terms of this state-of-the-art demo that you were able to put together at Magic Leap that has been shrouded in secrecy for years and years?

ABOVITZ: Yeah. So that — that’s great — Because there was that era where we left my garage into a — I call it somewhere between a strip mall and a warehouse complex. And we were in this innocuous looking place with, like, a glass door. And I think there was like an accounting firm nearby. And there was this gun range. There were shots all the time; it was grandfathered in, like it shouldn’t have been there. But, like, in Florida, this gun range got grandfathered in. It’s been there since, like, the 30s. So it’s like a half mile away from the parking lot. So you hear these pops all the time. It was very grungy.

It really felt like an X-Files or Men in Black kind of place. Like you’d never, ever expect any of the stuff we were doing to be happening in that place. So it was like you’d expect it to be a laundromat, maybe a bad diner, a low-grade accountant, and then, like, it was like Magic Leap Tourist Agency or something. Like it really felt like we were completely out of sorts, and the kind of tech company we were would make sense in Palo Alto. It would make no sense in Hollywood, Florida, like, that place was where I grew up, in Hollywood, and it was like there’s zero chance of a tech company being there.

So first of all, we were really safe because no one had any idea what we were doing. So I think there was also this notion of coming to Florida, coming to Hollywood, having no expectations, looking at this super modest, grungy strip mall warehouse thing, walking in through a door. And you’d expect that to be like a tourist agency trying to sell you a ticket to, you know, go to the Bahamas.

And then you go to the back room. And there’s like this contraption that literally looks like something out of Brazil or, you know, the movie Brainstorm. It was really gigantic. We made this thing, and I’ll tell you the purpose. I won’t tell you all the details because there’s still IP and trademarks that Magic Leap will use, and patents on it.

But the purpose was we wanted to test our theory of modifying digital lightfields. So I separated the idea of the pure analog lightfield — and I used to say this in public, which I did not think you could ever reproduce — from coming up with a way of digitizing lightfields. And like, “What is a digital light field? And how do you make it?” So we built a machine that made digital lightfields. And it had all these parameters that we could play with; it was big and it had all kinds of weird knobs and computing things, so we could play with those parameters.

And the general notion was, I mean, very simplistically, think about your original PCs. You had the computer and you hooked it up to a monitor. Well, here — There is no monitor. We’re going to hook it up into us. So when you put yourself in this rig, you actually had this thing that mounted to your head, and I’m like, “You’re now completing the circuit. You’re plugging in. And you’re the monitor.” So I was like, “Can we build a computer with no monitor? And we’re the monitor.” That was the experiment. And we wanted to do that without jacking in like The Matrix, where we’d actually have to wire into your visual cortex.

And remember, I came out of computer-assisted surgery, where I actually saw surgeons stick electrodes into brains for Parkinson’s treatment and things like that. So I did have a notion that maybe our thing could not work, and you’d have to do that, like, you know, electrodes in the visual cortex, which I did not want to do at Magic Leap. I thought that’s horrible and invasive, and we shouldn’t do that. The only thing you should do that for is, like, someone that has Parkinson’s or dystonia, a real disease, and then you do brain surgery to help them. But don’t take healthy people and throw wires in there. I thought that was just awful and bad.

So the idea there was we wanted to prove we could get in — I called it “Knocking on the Front Door.” And I’ll try to explain it this way. The idea was the retina is a doorway, a keypad, and it had a programming language — that was the theory. And that if you could unravel that, that would give you access to the brain’s GPU and display system in the natural way. That if we could figure that out, that would be the right way up the mountain. That would be the ultimate.

So instead of warping the interface between your brain and whatever signal is coming in, you send the brain the signal that it wants, the right one. And there are two questions about what the right one is. There’s the raw analog signal, and then there’s the digital signal that we were chasing. And the theory was there would be a digital signal that would cause no stress, no nausea, no nothing. It would just be like sipping water, or just like standing in a forest in Oregon, and nothing. It would just be perfect. And if you got that perfect, you could use it all day, every day, forever. Everyone can use that. You’d have no stress on the brain. None of the things that you experience, that people still have to tolerate with all these devices, still.

We were hunting for that, and that machine was like our SETI experiment, or like going to one of those CERN labs where you have all this complex stuff to try to find that set of parameters that unlocked it. And I think the moment of breakthrough was we actually came to a point where we felt like we had unlocked some things. That was the point where we started to bring people by. And we’re like, “Are you also seeing this?” And we brought by hundreds of people. And then thousands of people. So it wasn’t just myself and, you know, the original Fellowship of the Ring folks.

We brought by all kinds of partners and investors — and of every type. Like, I can’t even mention all of them. But it was pretty incredible. Especially when you walked into this innocuous place, you saw this machine, and then you saw this very special thing happen.

The weird part was, there’s no way to take a picture of it. This was not something that wanted to be photographed, because the CCD of the camera would not respond to our signal the way your brain would. The signal was only meant for the human brain. Not for a cat, not for a dog, not for a camera, not for an iPhone. So when we would try to, like, take photos of it, it wasn’t the thing that we were seeing. Immediately, it lost all of its magic.

The way to explain, if you go to the beach, go to the woods, and there’s this beautiful spatial perfection of reality. And then you look at every other thing. It’s just not that reality. And I think for a moment there on that first device, people were seeing maybe this is the way. We were opening up, “How does the brain really generate or render reality?” And that’s what made that machine magic.

That’s what we showed, you know, Larry and Sergey. They thought it was super cool. Now, that machine was gigantic. But could we have taken a shortcut and not done that and built a VR system? Yes, you’d avoid all of that. But I think at the beginning of the company, I was chasing this like almost Platonic Ideal of “How does the brain render reality?”

And I think we touched it for a moment. You know, it was like this brief moment where we go, “That’s it! That’s the math, that’s the parameters.” And it was like a lot of folks all working together at the same time. We had these brilliant optics engineers, I think Jan, Ethan, Randall, Sam — just a lot of brilliant folks all together — Brian.

And like at some point working very selflessly to make this thing happen. And many, many failures. Like the coolest part about that early period was we felt like we were the Wright brothers. We crashed 700 times, but once we had that first 50-foot flight, we were like, “Did you just see that?” And then the next person would come running into the room like, “Holy crap!”

And then the next person would come around the room like — It felt like something new and magical had happened. And the part we were all freaking out about was, “We should probably publish this.” Like, this is an interesting moment in engineering. And we’re like, “Are we building a company? Or are we doing a Scientific American article?” So we did bring enough people by who saw it, who knew it was real. But it took on this lore because we didn’t publish it and alert every single big company in the world — which is what I was worried about.

It turns out my worries were real, because they did all come after what we were doing. And they did all come after this field full force with tens of billions of dollars, you know. So, I mean, I was worried that this would happen, because if you unlock this possibility in computing, it could dislodge everybody. It could unseat the biggest companies in the world if computing goes that way. So we thought this was a glimmer into the next decades.

It was really awesome — I wish I could bring you back in time to that machine, because it was really cool. And it almost felt like a one-off Stradivarius. It was so tuned, so taped together. Everything was just hanging in this moment of like, it wasn’t like tuning a six-string guitar. It was like tuning an 18,000-string sitar. And we just got it right and it was like, “Whoa!”

But then we’re like, “How do we then take this 18,000-string sitar and make it into a compact, manufacturable, six-string guitar?” I don’t know if that metaphor makes sense. But then the problem became going from the theory to — How do you scale and manufacture this whole thing? And how do you still keep that magic in something smaller at a much different price point? That can’t be the size of a room, you know, all those kinds of problems.

BYE: Yeah, well, for no lack of trying, I tried for three years from 2015 to 2018 to get permission to come down and see it. But I was unsuccessful.

ABOVITZ: By the way, Kent, knowing you now, I will apologize to you from my former self. And now I would have brought you by. I apologize for whatever parameters did not let you go there and other things.

BYE: Well, I mean, maybe that’s a good transition. There’s a number of hooks there that you just mentioned, I think that — You know, I did an interview with Paul Reynolds in 2015 because he was a listener of the podcast and he wanted to get more information out about what you were doing there, to just encourage people to come to Magic Leap to work on whatever you were working on — because there wasn’t a lot of information that was out at the time.

And at the same time, I feel like, as you started to come public, like — So I got my DK1 in 2014, you came out later that fall with the funding from Google Ventures [CORRECTION: The funding was actually from Google Inc., and not Google Ventures.], another big funding round that you had that was really coming out of stealth. And then there was this period leading into GDC 2015, where the HTC Vive was shown to all the game developers; it had just been announced at Mobile World Congress just a few days earlier in March of 2015.

And so it was like this period where from 2015, up until like the Leap Con in 2018, you were kind of still in a pretty stealth mode. And there is this, what I would say, this challenge between the secrecy that you were talking about in terms of not wanting to give away all the stuff that you were working on and have all these big companies just swoop it up and just have billions of dollars to pour into it to leapfrog whatever you were working on.

But also this challenge of — if you go back to 2010 and 2012, where you had this independent strand of thinking about the future of computing from this “Neurologically-True Reality,” as you call it. But VR was on one end putting forth the visions of, like you said, the Homebrew Computer Club, with Oculus representing this open source movement with dev kits going everywhere, flooding the market with innovation that happened from 2013 to 2014 and onward, that really the key turning point was having those dev kits in the hands of developers.

And, you know, I just wanted to make a comment on this thing that happens within VR, which is the vergence-accommodation conflict, which means that you have a flat screen that you’re looking at, but our vision is actually dependent upon looking at different depths and focusing at different depths. And there’s something about the virtual retinal display technologies, and probably the stuff you were doing with digital light fields, that avoids this vergence-accommodation conflict. So getting to this lack of eyestrain that happens when you use this different technology. So you’re trying –

ABOVITZ: Well, that’s one of the issues, not the only one. Like, vergence-accommodation conflict is one of the well-published ones. But there’s a lot more. And one of the things that we unpacked over the years was how many other things you needed to get right. So the one that was most public was that. But then if you think about all the errors that cause strain, nausea, discomfort, and all these things, there are actually a lot of them.

And we had this brilliant team that was measuring clinical and scientific impact. We worked with all sorts of neuro-ophthalmologists and institutions to try to understand what was safe. First of all, there’s like “What’s safe and healthy?” That’s like one border. And then “What’s, like, perfect?”

So we realized the playing field was: first you have to be safe. I wanted to make sure everything we did was safe and would not harm any user. So there were VR systems — I think fewer of them do it now — but they were aggressive and they were actually pushing this kind of diopter mismatch to a point of harm. And we actually studied that. I think we even did some papers, but there’s the whole thing of, like, you get that too high and there’s harm. So I think there’s a place where you get that good enough that you’re safe, not going to cause harm, a bit of discomfort, but not harm. And then there’s, like, neurologically true.
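[For readers who want the arithmetic behind the vergence-accommodation and diopter-mismatch discussion here, a minimal sketch follows. The fixed 2-meter focal plane and the roughly 0.5-diopter comfort threshold are illustrative assumptions drawn from the vision-science “zone of comfort” literature, not figures from this conversation.]

```python
# Illustrative sketch only: vergence-accommodation mismatch for a display
# with a single fixed focal plane (not a description of Magic Leap's optics).

def diopters(distance_m: float) -> float:
    """Optical demand in diopters for a target at distance_m meters (D = 1 / meters)."""
    return 1.0 / distance_m

def vac_mismatch(fixation_m: float, focal_plane_m: float) -> float:
    """Difference between where the eyes converge and where the display forces focus."""
    return abs(diopters(fixation_m) - diopters(focal_plane_m))

if __name__ == "__main__":
    focal_plane_m = 2.0  # assumed fixed focal plane at 2 m
    for fixation_m in (0.3, 0.5, 1.0, 2.0, 10.0):
        mismatch = vac_mismatch(fixation_m, focal_plane_m)
        flag = "outside ~0.5 D comfort zone" if mismatch > 0.5 else "within ~0.5 D comfort zone"
        print(f"fixation at {fixation_m:>4} m -> mismatch {mismatch:.2f} D ({flag})")
```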

So I wanted to be safe, and then get into neurologically true. And our first machine showed neurologically true is possible. And then I realized that Mercury, Gemini, Apollo would go — You’d have to be tiny, small, safe with all this other stuff. Computer vision and AI and a wearable computer and like all this other stuff that had nothing to do with the optics anymore, but you couldn’t be not safe. But you always needed to be safe. And then how could you go from safe to neurologically true? And then neurologically to perfect?

So that was the idea of Mercury, Gemini, Apollo. If you were on the engineering team at the time of the leadership, you kind of knew that was our goal. And it was this constant trade off of like I’m always pushing for neurologically true. And then the engineers are pushing back. It’s like, “Can we live with safe right now? Because we got to pack ten other startup companies worth of stuff into this device.” You know what I mean?

Like, which you mentioned at the early part of the broadcast, we had to do computer vision that sensed the world in real time. We had to make pixels stick without wiggles. We had to do sound fields. We had to make things light. You had all the thermal issues of something so small generating so much heat if you put so much computing power in it. So it was issue after issue after issue — and then you’ve got frame rate that needs to be really high, which creates more heat, but then the computer needed to be bigger, but you want to make it smaller.

So there’s this endlessly maddening set of opposing forces. And I’m in there screaming all the time for Neurologically-True Reality. And I think at times some of our engineers were pulling out all their hair because they’re like, “How are we going to make all of this happen?”

And if you talk to some of the folks who were there in the heat of the time when we were shrinking the big machine to the gen one, and then what’s now public as the Magic Leap 2: it was one of the hardest engineering projects any of them have ever worked on in their life. And some of these folks came from the biggest companies in the world. Some folks came from NASA, some folks came from Apple, Microsoft, Google. I think we were the hardest thing they’d ever worked on, ever.

BYE: Yeah. So it sounds like you had in your secret lair this neurologically-true, big, giant refrigerator-sized machine that you needed to split up into the Mercury, Gemini and Apollo — the Magic Leap 1, Magic Leap 2, and future iterations, going from the safe and aspiring towards that neurologically-true vision. And at the same time, coming in cold in terms of everybody: at that point when you came out in 2014, VR was still emerging, but AR wasn’t really on the radar as much. I don’t think the HoloLens had even been announced yet. That came a little later. But then –

ABOVITZ: We didn’t even know that they were up to something. I just kept hearing rumors about this thing called Natal, I think. And we were just, like, you know, verging on paranoia about what is Microsoft up to? Because we’re running. And then they’re trying to nick some of our people. And we don’t know what it is. And I remember I was in the Bay Area with a couple of our team and then we hear about HoloLens getting announced — the HoloLens 1. And we had the Magic Leap One PEQs already built in our building — a PEQ is a Production Equivalent.

We’re like, “Oh crap!” It felt like the Russians had gone into orbit first, you know, when they put the first man in orbit. We thought of Microsoft as like our arch enemy in that race. And when they put out the HoloLens one, it was like, “Oh no! Now we’ve got to scramble. And, you know, we’ve got to get our first mission into space.”

That mentality was so — It felt like a race on technical proofs. So our thing was that we’re going to have a bigger field of view and this and this and this on our first one. But they beat us to the punch. I have to give them credit. They’re also a thousand times bigger than we were. But, you know, give the team credit. They got out there with the HoloLens one.

But we didn’t really know about each other. That’s the interesting thing. You don’t really know who’s doing what, because they were in real secrecy too. And who the heck knew what was going on at Apple? And, you know, it was like cars racing in the dark against who you think might be there. It was a very interesting period.

BYE: So as you started to come out, I guess from my perspective of someone in the XR industry, going and talking to lots of people, there was this challenge that I see from your perspective of having this experience of approaching this neurologically true experience, but yet you have to sell it and explain it to the world as to why this is even interesting.

So you have what I see as a lot of aspirational visions of what types of experiences might be possible, that are like these renders or at least visions, because, like you said before, the TV-on-radio problem means you can’t even really capture what’s happening and what the experience is. You know, you can’t capture and recreate the full immersive experience of what we see and what it means to be — the Place Illusion and Plausibility Illusion from Slater, to really feel like you’re in another place.

But here you’re grounded in reality. So it’s more like you are already in another place, but you’re giving the plausibility illusion of all the stuff that’s coming in, believing that there are these other layers of reality. So I feel like the big thing that I saw, at least from Magic Leap, was these aspirational visions of different renders. And I remember going to IEEE VR in France in 2015, and people were reacting to the video that had been released.

ABOVITZ: Was that the whale?

BYE: They had to make a little disclaimer that said that this was a true capture or whatnot. And people are like, “No, this is a render. There’s no way that whatever they’re doing, it looks like that.” So there was a distrust, I think, that had been starting to build up in terms of the promise of what Magic Leap could be versus the dialectic that you were talking about, the idealism of the neurologically true versus the pragmatic reality of what the first iterations of that Mercury could achieve with what the direct embodied experience of that was versus what, you know, was possible in a long trajectory of things.

I think that’s from my perspective, the kind of mismatch between what was being said would be possible versus the actual reality of what the experience was. And I don’t know what your reaction is to that, but that’s sort of, from my perspective in the VR industry, what the narrative and buzz was about Magic Leap.

ABOVITZ: So let me pick a couple of specific examples and unpack them. One that has entered like a low-grade myth was this whale in the gym. And I’ll just give you the background on that. So we were working with Lucasfilm, with ILMxLAB. We had announced that partnership, and John Gaeta was there and then he joined Magic Leap. John Gaeta won the Academy Award for The Matrix, visual effects genius. He worked on just crazy movies, including The Matrix and all kinds of stuff. Just a genius guy to work with. It was super exciting.

So Lucasfilm did a concept video of the whale in the gym, with the idea that we were going to actually make it on the Magic Leap. And I had this naive assumption that, like with film, you could do a trailer and you could tease people, with the idea that you were going to actually make it. And just to close the loop: at the University of Miami in the fall of 2018, not long after we did the LeapCon, we took over the UM campus. Thousands of people came by. We took over the University of Miami gym and we had a full-sized whale jump in the gym. And when it splashed, we had these fans that would throw air on you. And it was incredibly awesome. And we had these giant speakers for when it hit the floor and the water came up. And John Gaeta, the same ILM team, did do that. So we had a chip on our shoulder where every single thing that we made a concept for, we were going to make real.

Another one that we did was the Dr. G. Invaders, which Richard Taylor’s five-time Academy Award-winning team down in Wellington did. They put out like a concept trailer of this thing we were going to build. And every version of Magic Leap prototype, including the very first giant beast prototype, had elements of Dr. G on it.

They were like one of our best test objects, because they were so high fidelity, so well crafted. Like Gimbal, the robot, looked like shiny metal with propellers and smoke. So it would push everything we were doing to go. “How real? How neurologically-real is Gimbal?” In fact, we came up with something called the Dr. G. Gimbal Test. And a very famous film director, I’m not going to say who, but one of the top three of all time in the world came by and we used him to grade where we were on a neurologically true reality test.

So a perfect Gimbal was a “1″. And he’d come by and say, “What level of the Gimbal are we?” So is it a 0.7, 0.8? It was like this really — like, we created this unit of trueness to neurologically true reality. And maybe one day this film director will let me say who he was out loud. But it was really cool, because the Dr. G. Gimbal was the basis of that reality test. And we called it The Gimbal Test. But we actually built and shipped that one. And the actual Dr. G, I thought, was much cooler than the video.

But I’m going to talk about the one that Lucasfilm did that I thought was the most astounding. On the video, which we and Lucasfilm/ILM released, it said, “Shot through Magic Leap.” Because we realized the world needed — basically, that my idea of, like, film trailers wasn’t going to work. They want to know what was shot through and what was concept. So we then started to label stuff. So we put “Shot through.” And the one I thought was most exciting has C-3PO and R2 looking shiny, rendered solid. People thought that wasn’t real, but that was shot through, and it was incredible. The ILM folks thought it was incredible.

And then R2 sprays out, what do you think, that classic Star Wars hologram on a table with, like, holographic X-Wings. And people thought there were two real metal robots, like built by the model shop, and then R2 spraying something out. R2 was actually a neurologically true reality droid, solid, shiny, metallic, incredible. And then we shot through it, and it was one of the very best things in terms of fidelity we ever did.

But I think in that co-mingling of how do you communicate this stuff to the world? Like, can you send a trailer? That was the very early days, like the whale and the robot. And we knew we were going to release this to the world, but the world didn’t know that any of this stuff was actually real.

So we had this kind of — You’re right, we had this friction with people not believing. And then I’d have all these people fly into the building. Investors, partners, athletes, movie stars, because everyone was trying to — basically, demanding their way in. I mean, the people I turned away, it was just insane. Like, I’d have senators call me up: “I’m demanding to come in. I need to see this.” I’m like, “Sir, I can’t let you in.” But we had enough people come by who had this “this is not real” attitude. And they’d come in and we’d show them all kinds of stuff. And they’d walk away going, “Oh, that was pretty awesome.”

You’re right though, that 2015 through 2018 was a weird time. Because in the building, we had developed many more mature prototypes, so people would see all these phases. We had generations past Magic Leap One happening so people can see the next level stuff. And it was really cool. It’s like walking into like an Imagineering, early Apple on steroids kind of place.

Again, I apologize. We did not bring you there. I’m so sorry about that, because it sucks that, like, you didn’t — That was such a fun time to have visited. And I’m sorry that you did not go there, which is probably my fault.

BYE: Well, I was in contact with the PR rep that I worked with. And yeah, just, everything was shut down. There were some people that made the trip out, like an MIT journalist, but for a while, yeah, I was always below the radar for, I guess, the strategy of whoever was running the comms team and PR team. That was, like, one of my other complaints: the trade journalists like UploadVR or Road to VR or myself were just kind of left feeling like, yeah, I’d see people from Rolling Stone who would have access. But yeah, it just felt like the people that were really the closest to the industry weren’t able to come in and vet, or be able to say what their perspective was, beyond whatever the mainstream media, or WIRED, or other people that would get access to come down and see stuff.

ABOVITZ: I’d say — I’ll give you the sum total of my experience in media before starting Magic Leap: I was a cartoonist and a writer for my college paper, which is whatever that is. And then at Mako, we had, like, minimal interactions, because the tech world was occasionally interested, but it wasn’t like Big Tech, like capital “T” Tech. And I had no idea what the tech world really was when I started Magic Leap, and what that would become after Google invested.

So I was kind of clueless about all of it. So that was a learning experience. And I don’t know if I was always surrounded by the right folks who — I had really brilliant tech people. But if you think about, like, “Did we have the right folks on how to interact with tech media and all that?” I don’t think we did the best job. I think we could have done better.

We had some good relationships, like, you know, Steven Levy would come by. Kevin Kelly was brilliant. Like these people I — they were legends and I knew about. I think it was like Rachel from MIT, you know we had some of that, but like I think we didn’t quite understand how to work with — and particularly me because my experience was super limited, like coming out of Mako Surgical. And basically being a college newspaper cartoonist does not prep you for basically growing up in the explosion of how social media changed media.

So the weird thing about Magic Leap, when I look in hindsight, 2014 to like the 2020s, the tech media world changed. It went from like this positive, optimistic view of the world to something else. And also social media and the business models of journalism change. And we were trying to hack reality, and all that. It was like this weird, weird thing of colliding with that.

And I had this naive view that all tech journalists were like, you know, Kevin Kelly and Steve Levy, these like tech-forward, positive optimists about the world. And some of it became more like political media.

You know, Kent, one thing you said earlier, I want to be able to address. We had this challenge of like wanting to reach partners and developers and great new team members as we’re growing up, but also not revealing to very large, difficult-to-compete-with companies because they’re so big, they have so much capital. Not all of them follow any sense of ethics at all. So we’re like, how do we send smoke signals to the people we want to hire, the awesome developers we want to work with without tipping off the biggest predators in the jungle? And that was a really complicated thing.

I don't know. You probably felt some of it from whatever you were observing, but now knowing how much capital these guys are spending — I was getting data points, like people are spending five times what we were, ten times what we were. Now it's public. And you realize they've spent maybe 30, 40, 50 billion. What we spent over our whole decade of life as a company, they spend every three months.

So that was intense. I mean, you know you're going up against really ferocious, tough competitors who are, by the way, raiding your team. You know, as a startup, you can offer a certain amount of pay, a certain amount of equity. They would offer some of my best engineers — guys and women — like five times what we could pay, you know, NBA-athlete-salary kind of thing. Here's a $2 million signing bonus, $3 million of RSUs kind of stuff. And it was just really difficult.

You know, the playing field as a startup, trying to change computing is not level when you’re going up against — like we weren’t competing with other startups, we’re competing with companies that had half a billion to multitrillion dollar market caps. And, you know, and I kept feeling as a startup, there should be companies like us, like Tilt Five, and others. We should be able to exist. We shouldn’t be squished by the biggest giants in the world. There needs to be a way for us to come forward and exist. But they make it really hard. They make it almost impossible.

You’re fighting every day at all levels, and they don’t fight in the way that you’d think. It’s not only — You go, if I patent, if I invent first, if I’m fast, if I got really smart people. But then you’re like, “Wait, they do all these other things to compete with you.” It’s tough. But look, that’s the free market. You have to be really tough to fight it. We did a lot. We did some amazing things.

BYE: So yeah, I think as you're saying that, reflecting upon that — because there's a lot of talk around how many billions of dollars you had raised. But when you compare it to how much Microsoft and how much Facebook and who knows how much Apple has been spending, it's orders of magnitude more, because you're really talking about the future of computing here. And then, like you said, trying to do this as a startup.

I'd say one of the frustrations I had from the outside was seeing how successful it was for Oculus to be able to make a dev kit available to all of the different developers, and observing how much that was really a catalyst to innovation for the VR ecosystem. And how much even AR had benefited from that, and to see how much the approach for Magic Leap was much more locked down. Even people who had early access to the headsets had to go through all sorts of secrecy procedures. When I went to Leap Con, my impression was that there had been a long, long history of that secrecy, and that the company was trying to overcome some of it to really cultivate the openness, knowledge sharing, and cultivation of those communities.

And I ran into that personally because of the need for secrecy — not even being able to go to the opening night party because there were fears that I would overhear something. And so there was this locked-down nature of the company versus trying to transition into that more openness, but also to make the hardware available to cultivate the ecosystem.

And with the first Magic Leap One, there were debates about whether or not — you were saying at some points that you weren't going to have a developer kit. But then, you know, the Magic Leap One was kind of like the Creator Edition. It wasn't entirely clear whether or not this was a consumer device. And then if it was a consumer device, it'd be like thousands of dollars.

As I was watching it, I was seeing what happened with Oculus and I was wishing for as much of what would be possible to get these into the hands of as many developers as possible. But given the price dynamics, it may have not even been feasible to be able to do that. So yeah, that was just from my outside looking in in terms of wanting to see a broad, robust, diverse ecosystem of people tinkering and pushing it forward, but also focusing on what would be the thing to really catalyze the augmented reality as a movement.

And in hindsight, not knowing whether it was just a matter of VR needing to come first, and whether that same model — without the same economies of scale, given all the different innovations and the nature of being a startup versus what Oculus was doing, eventually getting acquired by Facebook — would have been sustainable as a startup.

So anyway, those are some of the reflections I had as a journalist covering it. And I know there’s a lot of tradeoffs and decisions that you had to make in order to navigate this as a company, taking all that into account, the secrecy, and the concerns from all these big players, and how things have continued to play out.

ABOVITZ: Well, if you unpack a little bit the complexity of building a VR headset of that era, it's a lot of just off-the-shelf stuff. And stuff that was basically benefiting from the mass availability of mobile phone components. So it's more like, can I make a not-super-proprietary piece of hardware, put a lot of software on it, get a lot of people hacking on it — that's a strategy. And it is not a bad strategy to create a whole community of VR developers, because you're not worried about that hardware.

They weren't really pushing the envelope that much on the hardware. It was about the software development environment, just getting them something. But actually there's zero way for a startup to subsidize that. As a startup, you go out of business right away doing what they did. It was super important that they got acquired, because if you don't have a parent like that, every one of those units is a massive loss — hundreds of dollars per unit. So you just go out of business very, very quickly, no matter how much money you're raising, at the scale that it ultimately became. It's all at a massive loss. You can sort of see those losses now with Facebook.

But they ultimately had that parent. And I think it's very hard to see what that world would look like without the parent. But the thing I've got to give Mark a lot of credit for: he is subsidizing, to the tune of tens of billions of dollars at massive losses per system, the creation of that whole ecosystem of developers.

By the way, I wanted to do the same thing. I had an idea to give out 50,000 systems of the Magic Leap One Creator Edition. I just wanted to seed the world and see what all kinds of people would do. Now, when you don't have a parent company with someone as ambitious, who also sees the vision of where all this is going, à la Mark — and you've got this multitude of institutional and strategic investors of all types who all have different opinions — they didn't really like that idea.

I actually think, if I could go back in time, I should have fought harder for that. Because I think it would have been kind of awesome to deploy: "Here's 50,000 Magic Leap One Creator Editions. Boom!" Then I would have been like — I think that was the right idea. That was my gut. And you know, when you're shot down, you can't always fight back against people that are multi-trillion-dollar funds, you know? So I feel like —

By the way, could we have been under a parent? There were a number of opportunities where we could have rolled up under a company. You could ask, should that have happened? Like, is it better to develop like this as an independent startup, with all this ultimately going public or something? Or be under the wing?

You know, I think about if you're under the wing of someone like Facebook. I'm not going to make any comments on the morals or ethics of their business model or any of that. But you have to give Mark credit for his commitment, his long-term thinking, and the amount of capital. He understands that. I've met him over the years. He's one of the few people in the industry who totally gets how much capital you've got to put in, and the duration and the intensity to do it.

I also understood that, but I had to bring in all these different small, medium, and large-sized partners and institutions who were not always aligned around this vision. And the nice thing about Oculus is that they had this one guy they had to convince, and they convinced him early, and from that point forward he goes, "I'm going to do this, period." And then they have this unabated flow of capital. Nobody has to run around getting funding anymore. So that is a plus on their side, in that they understand the space race and they're going to push for it.

I don't know if that makes sense, but I really would have loved that. That was my original idea: drop 50,000 of these, get a lot of learnings, and then the Gen 2 would have been the thing to really go commercial with.

BYE: Yeah, and I guess to start to bring this conversation to a close with a few more questions. I’d love to hear anything you can share in terms of the end of your time working directly with Magic Leap. Because I know that — The big event for me, at least from the outside, was having to lay off half the staff, which I can only imagine how difficult that would have been to have to make a decision like that and then continue to move on with the company.

But what can you say in terms of what happened? And how do you wrap up this phase of the story? You know, we're talking about all the constraints and limitations of trying to operate in this realm as a startup against all these big companies, and the whole economics of it. But what was it like for you, as a person, as the leader of this company, to reach that point, to make this shift away, and to pass the baton to others to carry that forward? And also to go through that more waning aspect of your time at Magic Leap?

ABOVITZ: I’ll say a couple of things, and I’ll talk a little bit about that. But — One, the company is there and very much alive and kicking and thriving and putting out a system that many, many people at Magic Leap pre-pandemic worked on for many years, which is the Gen 2, which is the Gemini. And that same team also worked on Apollo, which hopefully the world gets to see that one day, but — Super proud of that.

So I want to clear the air that like the company was never not going to do the Gen 2, and then, you know, hopefully the next thing. So one, I think it’s important like — I think everyone who is either there now or was there is probably super proud of it.

But the pandemic — I just can't overstress that — was such a fracture, not just the world shutting down but the financial markets crashing. So I'll say a couple of things, which I can say with the company still operating. I helped recruit Peggy Johnson from Microsoft because I realized the immediate direction was going to be enterprise. We had tested it for years, by the way. I just want to make that point. We had tested enterprise with hundreds of partners, and we knew that was the near-term revenue.

We were actually announcing enterprise in 2019. If you go back and look at the news media, we'd launched Magic Leap One Enterprise Edition in 2019. So it wasn't a new change of direction. We knew it was enterprise first, and then it was going to be consumer. And if you look at the Magic Leap 2 design and performance, it was for professionals and enterprise. We realized it was not quite yet a consumer form factor, but it was good enough and sleek enough for many use cases — healthcare, industrial, all sorts of things. So it was designed with that in mind; it wasn't an accident that it came out that way.

I'd say this — one, it was incredibly painful, because you spend a decade building up an amazing team. I think just before the pandemic, we probably had the world's best team in place — just the smartest, best people, who had learned how to work together. And we were humming on all cylinders. The Magic Leap 2 was coming off the factory floor in early 2020. It's amazing. I had our chief product officer on Bloomberg talking about this in early 2020. We were all super excited.

And then we’re in the middle of a significant capital raise so that we can launch the Gen 2, and scale the company. And then fuel the Apollo, the Gen 3, which we also had in the basement. And we had generations after that too. So we’re like humming on every cylinder. All kinds of cool things are going on.

So the pandemic shutdown was significant, as it impacted people's ability to go to the building and work together. You know, this is such a complex system — everyone having to go home, and very few people being able to be in the factory, was like a nightmare.

But then the financial markets went through just a complete catastrophic crash. And many friends of mine who were running companies — those companies are gone. They went bankrupt. They died in that period. Some of them were just utterly gone; many of them survived in mutated ways. If you're a very big company with a parent protecting you, like a division — if you're inside a Facebook or an Apple or a Google or a Microsoft with 50 or 100 billion in cash reserves — you can kind of sail through that unaffected.

So that's not us. We were an independent company in the middle of a capital raise. But what I can't complain about is the company survived. What did break for me as a leader was I had spent a decade building the team, building everything, and ultimately — we did not cut half the team. We cut 700 out of 1,700. So we ended up going to about a thousand to keep enough critical mass to keep the Gen 2 going and everything else that needed to happen.

But that was such a psychically difficult thing for me to do. I knew that, you know — I talked with the board. I needed to bring someone else in. If the pandemic hadn't happened, if the crash hadn't happened, we wouldn't have done that, and I wouldn't have made that change. But I think, in order to save the company, if I had to exit all those people who were my friends — and we had all built this thing together for years — I was going to go too. I couldn't do that to them and not do it to myself. I don't know if that makes any sense, but it was like —

One era was ending, and I wanted to bring in someone who could lead the next phase, which was to just go after the enterprise. Maybe my, you know, Yellow Submarine and Aircraft Carrier together — like, that was maybe too much in the post-pandemic world — but it was fully supported by all of our investors up until that point.

They liked the dual track. We were going to win enterprise and we were going to win consumer. They really — That was the support we were getting. Go do both. Go take on the biggest companies in the world. That whole thing came to an end because the markets crashed. The world changed in many ways.

And there was good fortune, actually, that Peggy, who was like the right hand of Satya, when she joined Magic Leap — think about this — a company I started in my garage, she's like the number two at a $1.65 trillion company, came in to be my successor. So I told her, "I just pitched the first seven innings. You get the save. I'll get the win. I'll do everything I can to help you. And you know what? We're going to have to fight Microsoft and everyone else in the enterprise. And I know you understand that."

She was at Qualcomm. She was an engineer. She's going to bring in that understanding. And she's not going to have that same feeling that I have of having to let all these brilliant people go — who are my friends, and like — I just — To still stay there was not going to be possible for me. If I didn't do that, there would be no more company. I had to scale it down in order to get the funding, in order to get the momentum going again in the post-pandemic world. So that sucked, but it had to be done.

BYE: Yeah. And I guess finally, what do you think the ultimate potential of augmented reality and Neurologically-True Reality might be, and what it might be able to enable?

ABOVITZ: I think, Kent — and by the way, I'd love to do a part two or three with you at some point in the future, because I know we've got to wrap up real soon — but I'd say this: I feel like both AR and VR are converging into the same thing in one system. Like, the ML 2 is native AR that turns into VR if you want it to, through this kind of segmented dimming that's super cool and very novel.

And the Cambria from Facebook is VR that lets you see the world through passthrough, which gets you AR. So we're now approaching the mountain from two directions, but we're waving at each other. And I'm pretty sure whatever Apple does probably feels a little Cambria-ish, or maybe sits in between both. But I think now the two sides of the coin — like the chocolate and the vanilla — are coming closer.

So I feel like, knowing what I know about not just what's happening now, but where we were looking — at the next ten years; we were seeing all the way into the 2030s with our R&D — I sincerely think it's going to be awesome. Some of the hardest stuff is behind us. We've solved some of those difficult problems. The commitment to investment has happened. I think we're at an interesting tipping point. There's so much commitment from both the Magic Leap team and its ability to do that, but also the Facebook/Meta team, Apple, and others. You're just going to see some amazing stuff coming out.

So I feel like there's amazing potential. It takes a little bit of time for it to scale into the world, so I feel that the technology will be ahead of people. I think by the 2030s we'll have a billion-plus users, and near the end of this decade I think we'll be in the hundreds of millions of users. But it will take time and dedicated patience, step by step. Because we're moving from an addiction to the phone and television to ultimately something that really does look like what you and I have [i.e. glasses], but performs at the ML 2 level and better, which is really incredible. If you get to see the ML 2 at some point this year, you'll realize the gap between that thing and what you're wearing is not far away. It's very exciting.

BYE: Nice. And we're both wearing glasses, for anyone who's listening. So is there anything else that's left unsaid that you'd like to say to the broader immersive community?

ABOVITZ: No, look. I really appreciate every developer who put in time and energy and effort. We had many thousands of developers working with us, and they were the most awesome people, building some of the most inventive stuff. The OG team of Magic Leap — amazing folks. They sweated blood and tears into making a lot of technology really come out of nowhere and making it work. And I really want to wish the new team, the new group at Magic Leap, to just completely hit a home run with the ML 2. It's amazing. So you guys have the ball, and we want you to rock it. So we'll leave it at that.

BYE: Awesome, well Rony, thanks so much for all the work that you’ve been working on with Magic Leap over the years, and coining the term spatial computing, and helping to push forward this vision of the “Neurologically-True Reality,” and where we’re moving with the future of digital light fields, and everything else. And yeah, I look forward to seeing how all these things converge. And I appreciate you coming on today to help tell a little bit more about your story and some of your thoughts about where things have been and where we might be going in the future.

ABOVITZ: Yeah, if we do another one of these, we should talk all things Metaverse, which is a whole other conversation — you know, I've been spending a lot of time with Neal trying to figure out how to actually build those. It'd be fun to introspect on that one day.

BYE: For sure, awesome. Well, thanks again.

ABOVITZ: Awesome. Thank you, Kent.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

time-detectives-ar

Time Detectives AR by Picture This Productions is a site-specific AR trail experience meant to be seen at the Mary Rose Museum in Portsmouth, UK, but you can also experience it remotely as a table-top experience at home. Time Detectives AR asks users to investigate the sinking of the Mary Rose in July 1545 using a mobile phone as “a magical spyglass to reveal secrets from the past and complete your mission for King Henry VIII.”

The site-specific version has a multi-sensory component, using a backpack scent-dispersal device at 3-4 spots on the AR trail. I spoke to Charlotte Mikkelborg about her journey into immersive storytelling and the process of developing this AR storytelling experience, which is designed to give families a more interactive experience with additional layers of story.

Rough transcript is down below.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

MIKKELBORG: My name is Charlotte Mikkelborg. I have been working in immersive storytelling since, I think, 2015 or 2016, around there. I started making 360 films, and then I kind of played around with 360 films that would also work as multisensory experiences, using more senses than just the audiovisual. And then I moved into interactive VR with a big piece I did for British Airways to celebrate their centenary, called Fly, which was very exciting.

It won best domestic experience in the UK, awarded by the UK government in 2020. And then coming into COVID, I was just looking at the fact that there weren't going to be the kind of budgets we'd had for making something like Fly available, because obviously everyone was tightening their belts, and we had just got to a point where people were getting to experience amazing immersive experiences.

The technology was there, and so I didn't want audiences to lose that, effectively, or for us to lose our ability to tell these stories. So I started thinking about delving into AR, because AR — since you're obviously not building those entire worlds, just elements of those worlds — is naturally going to be less expensive to make. So yeah, I started exploring with a prototype of this experience what we could do with augmented reality. And this was my first sort of experience of storytelling in augmented reality.

BYE: Okay, yeah, maybe you can give a bit more context as to your background and your journey into XR.

MIKKELBORG: Yeah. So I was originally a BBC foreign correspondent — I was one of the BBC's China correspondents until 2009 — and I left the BBC to start making longform work, because I was really just doing news and current affairs. So I left the BBC to start making longer-form film, mostly documentary. I made my first sort of feature-length documentary film in 2009.

I also worked on a fiction film, although I was not directing — I was producing it — starring Matthew McConaughey, which we made around the same time. And I was on a grand jury at South by Southwest. But I was working largely in documentary film between leaving the BBC and moving into the world of XR in 2015-16.

And the reason I moved into XR, I guess — or the kind of tipping point — was I was having a meeting with a producer, Tragedian Connolly, in LA. He's the producer for Lucy Walker. He's an Oscar-winning documentary maker. And I asked what he was working on, and he mentioned she was working on this 360 film. And I said, "Oh, what's a 360 film?"

So he explained it to me and showed it to me, and I said, "Oh, wow, this makes a lot of sense." So obviously, as we know, the tech wasn't quite there yet, but just that idea that instead of experiencing film on a kind of rectangular screen — just because that's where the technology had gotten us to — we could now experience it in the round, the way we experience everything else in life, just made sense to me. So I figured that this was going somewhere and would be worth exploring in more depth.

BYE: Yeah. And the first piece of yours that I had a chance to see was Fly, which I saw remotely during the pandemic. Everything that was going to be at these film festivals started to be in these virtual film festivals. So I saw a virtual representation of it, which is just the pure VR part. But I understand that there was a whole other installation component that gave it this whole multi-sensory experience.

So maybe you could give a bit more context as to that project of Fly and how you started to move from working in 2D storytelling into more of this spatialized, experiential, and multi-sensory storytelling.

MIKKELBORG: So yeah, the idea of Fly was we wanted to take you on this journey through when humankind first really started to think critically about whether it was going to be possible to get humankind into the air. So we decided to start that with a young Leonardo da Vinci, who, of course, did not — well, arguably did not — succeed in sustained human flight, but did make some serious inroads.

And so we start with him, and we know from his childhood diaries that he had this recurrent childhood dream that he would turn into a kite, or a kite would land on his chest — the bird, that is, not the one that you fly in the wind — and that he would kind of take on its spirit and take flight.

So in plotting out that storyline — all the way back to Leonardo and eventually, you know, 100 years into the future of flight at the end of the experience — what I knew was that I wanted people to actually feel that they were flying at key moments in this narrative. Otherwise it just wasn't going to have the awe and wonder, I guess, that I wanted it to have.

So I was looking at different sorts of flying rigs, and I came across this thing that Sandra Bullock was put in for Gravity. So I got talking to Neil Corbould, who won Oscars for Gladiator back in the day — actually one of my favorite films — and Gravity more recently. And we looked at that rig and, of course, it wasn't ever going to be practical, actually, because it took ages to get into and out of.

So then we started looking at motion platforms, and whether we could actually make a motion platform genuinely feel like you're flying — because quite often they're used more for, as you know, the kind of special effects in the movies where you've got a helicopter crashing and the platform spinning or whatever — but whether it could actually give you that sense of elevation.

So we started playing around with a platform that Neil's team had one of. Neil's right hand is a man called Glenn Winchester, who I worked with most closely, and we found that yes, when we did certain things, we could give you that sense of levitation. And we started designing a kind of lean board that you would lean your body down into when we wanted you to be prone and, obviously, flying like a bird.

That was for the opening sequence with a young Leonardo. But then later, as planes evolved, you went into this more sedentary position where you could sit back — but you didn't come out of the headset. We just plotted those things within the virtual space so that you could accurately find them and sit back down. And around that, I envisaged this large egg, and the egg was playing on several levels.

Obviously, from a health and safety perspective, it was kind of keeping you away from the moving platform — we didn't want people getting hands on things. But from a storytelling perspective, it was kind of a metaphor for how, when human energy is directed in the right ways, we can be capable of incredible things. It's not always directed in the right ways, but that's another story.

But, you know, the fact that we could go from obviously being land-based animals to flying was quite incredible. So I wanted this to be kind of a metaphor for humankind making flight possible. So if you approached and put your hands on the front of the egg, it would warm to the touch, like you were effectively incubating it with your hands.

And we had this kind of — for lack of a more technical expression — rainfall shower of sound that came down on you, and you could hear that incubation and that heartbeat of life within, and then eventually the sort of cracking open of the egg. And the color of the egg also changed from this, you know, sort of frigid blue to a warmer red as you were warming it, and eventually to a kind of bright white as light flooded in.

But the egg was also a little bit of a play on Leonardo: when he went into his real fever-pitch obsession with flight, he was also painting a painting called Leda and the Swan, which has since been stolen — we don't know where it is in the world, but the sketches still exist. And, you know, it was this human woman birthing, like, you know, eggs.

In the picture there's an egg — you know, the eggs at her feet — and the cracking open and revealing of human children. So it was, let's say, a bit of a play on that. And there was a second part that you wanted me to answer, Kent, and I'm just trying to remember what that was.

BYE: Oh, well, you know, I guess we're speaking here in the context of your journey into XR and immersive storytelling and the multi-sensory part of it, and you're releasing a new experience about the Mary Rose, the ship that sank nearly 500 years ago. And so, the whole multi-sensory aspect of it — what do you think the multi-sensory dimension adds to the process of telling stories as you're moving from 2D to the more spatial aspects, but also the multi-sensory aspects?

MIKKELBORG: Yeah. So as you know, the beauty of VR is that you do have someone's complete attention. They're in the headset, you've got their entire peripheral vision cone, which happens very rarely these days in any aspect of life — most people are on their phones while watching TV, while doing something else, I guess. And the beauty of that from a multi-sensory storytelling perspective is obviously that they don't see it if you've placed cunning little devices to make it feel like the sun is shining on their faces or that the wind is blowing.

But one really important scene in Fly is the Wright Brothers scene, where you become Wilbur — you take Wilbur Wright's place on the wing of the Wright Flyer and you get to pilot that first powered flight. So I wanted to try, like I was saying before — because we didn't expect the kind of budgets that had been available for Fly to be available coming into COVID — to see if we could still play with the other senses while actually just playing an augmented reality kind of game on the phone.

So originally we tried to R&D these sort of wearable scent badges, effectively, and actually the R&D on those wasn't successful. That's not to say that it couldn't be — I'm sure that we'll get there — but within the time frame it wasn't possible. So we went with a device that goes in a kind of school backpack, hidden under the strap, and the scent kind of emanates from it. It's a dry scent.

It doesn't spray in your face; it emanates from here on the chest. And scent was important to me in particular because, obviously, all of our senses are absolutely key, but scent is the only one that speaks directly to the limbic part of our brain. And the limbic part of the brain is the part that forms memories.

So it was that opportunity to possibly, quite literally, create more memorable experiences. I mean, obviously the other senses do make it to the limbic part of the brain, but they just do it more slowly and more indirectly. And so we wanted to see if we could bring that sort of personalized sensory experience together with the audio-visuals of the game to make for a more immersive augmented reality experience, basically.

BYE: Yeah. And so as we start to talk about this new experience that you have, I'm wondering if you could give a bit more context as to the Mary Rose and the museum, and how this is both a site-specific experience but also something people can see at home if they want to. I personally saw it at home; I have not been able to see the site-specific version in Portsmouth, United Kingdom. So maybe you could give a bit more context as to the museum and how this project came about.

MIKKELBORG: Yeah, sure. So the Mary Rose was Henry VIII's favorite warship, and she was perfectly seaworthy for the best part of 30 years and then sank all of a sudden, very quickly, on a summer's day, the 19th of July 1545, when she was literally just starting to go into battle against the French. And she sank killing almost all of the 500 soldiers and sailors on board.

And why she sank has largely remained a historical mystery. So that was quite exciting for something that we'd ultimately branded Time Detectives. We wanted to tell these kind of time-traveling detective stories, and a historical mystery is pretty much what we were looking for. So it was a great fit from that perspective. The boat itself was raised.

There'd been a few diving missions down to the wreck, including one immediately after the sinking in 1545. But she was brought up out of the ocean in 1982 — 40 years ago this year. And, in fact, the lead historian on the boat is still there today; the lady who brought her up is still the lead historian at the museum.

Her life has been the Mary Rose, and she knows everything there is to know. So she's been an incredibly valuable resource, obviously, in developing the game. When we got the funding to make the first episode of Time Detectives, we actually didn't get it specifically for the Mary Rose. And so we were talking to a few different venues, including the usual suspects like the Tower of London, Hampton Court, Shakespeare's birthplace, and some others.

And actually, it really became the Mary Rose because we were given quite a limited timeline to develop the game — we had six months if we wanted to make use of the funding we were offered. And the Mary Rose, being a slightly smaller team, were able to be a bit more nimble. And so, yes, you can visit the Mary Rose Museum, which is out in Portsmouth's Historic Dockyard, right on the seafront. They also have the Victory, which was Lord Nelson's ship, there.

And a couple of other historical monuments. They built this sort of custom-built museum around the wreck of the Mary Rose. She is obviously humidity-controlled, and she's kept sprayed down with chemicals to preserve her, but she's within this custom-built, very modern museum space — a very dark museum space as well. So that had its own interesting challenges with developing AR and AR markers and all of that.

But, you know, the wonder of her is that, despite some damage to the site because she was in such a busy shipping lane, the half of her that sank under the sands got incredibly well preserved. And she is pretty much half a ship these days, because a lot of what was above the sand then got battered away by the combination of the waves and other boats going over. And also the people, unfortunately, who died on that side —

— and obviously that was probably more than half, because of the way she sank — were also kind of down there. The skeletons were down there. They found 179 skulls of the 500, anyway. But also lots of objects, like a lot of the cannon on board, a lot of personal items. So she really is such an amazing time capsule to develop a game around.

Did you want me to talk about how the museum game is different to what you would have experienced at home beyond the obvious scent component?

BYE: Well, to set a little bit more context first, because I do want to know a little bit more in terms of — as you go into the museum, this feels like it might be an add-on, like where you would download the app and maybe buy the in-app purchase to get access to the content. But also there's a backpack component.

So as I go to different museums, there are often objects, and I understand that the Mary Rose has a lot from the Tudor period in England — lots of different objects from the boat. And so there's a lot of historical capture of moments from history that people are able to look at. But there's also the ship itself that's there.

But I guess if you're a kid, or if you're someone who is going in there, maybe you don't have a lot of interest in all this. But it seems like this app would maybe be a way for either young adults or youth to have an interactive AR app with a bit more of a narrative component, so they could have an interactive experience of this museum rather than just going in and looking and reading all the placards.

And so maybe you can give a bit more context on whether that was the catalyst, or if you came to them saying, hey, we want to use this new technology. Or if you're targeting a specific demographic of people who may not otherwise want to go to the museum and read all the stuff — because if you go to a museum, there's plenty of stuff to look at.

But maybe this is a new way of using immersive technology to give new access to it. So, yeah, if you could maybe start there and explain the workflow, and what's different from what people would experience if they were to see it at home.

MIKKELBORG: Sure, lots of things there. Yes. So when we were designing this, we were aiming to create a family-based experience, largely because we started that development during COVID. When we were prototyping, we were actually prototyping it around an outdoor trail at a different site, a place called Otford Palace, which was Henry VIII's lost palace. And it was an outdoor trail.

And you could go as a family — or it doesn't have to be a family group, it could be a group of friends — but we kind of, you know, modeled it around that family experience. And in fact, when we ran tests on that site, we expected families with kids between 8 and 14, and we actually had a lot of requests from 16-, 17-, 18-year-olds, which really surprised me — pleasantly surprised me.

And when I was asking them, "Well, why did you come?" they were like, "Well, we'd just never heard of a multi-sensory experience like this" — I think they loved this idea of scent. So that was interesting. It felt like it tapped into a demographic that is notoriously difficult, and we were kind of aiming at that audience.

But also, I think the US has a similarly bad record to the UK in attracting as diverse an audience to museums and cultural heritage sites as the diversity of society at large. Here in the UK, our record for that is appalling — we see much less diverse audiences in museums. And interestingly, if you look at somewhere like the National Gallery, their online following includes a lot of young people.

But in terms of the people that come through the doors, that percentage is tiny — you know, the majority are over 55. So there's a real disconnect, right, both for young people and, I think, for more diverse audiences. So we wanted the app to help. Obviously it is one small part of, you know, moving towards solving these massive issues.

But if we could tell stories that felt more diverse, and in a way that was exciting and immersive and that appealed to younger audiences, then we could, like you say, bring new people through the doors. Because it is a really cool thing to explore, but like you say, it's not necessarily the thing that you're automatically going to think of doing on a Saturday.

So yeah, but the Mary Rose is also being quite forward-thinking, I think, as a museum. I mean, they've already got the very modern space, because it was only built 11 or 12 years ago. And as you go in now, the company Figment Productions has produced an immersive experience which kind of replicates the experience of the sinking to a degree — obviously it's not super distressing, but it's more of a projection mapping.

It features a hologram of Henry the Eighth. So they've invested in, you know, immersive experiences already. But the game, like you say, you can download either before you come down or when you get there — it is kind of an add-on. A lot of people who visit at the moment are people who have an annual membership for the dockyard as a whole. So you might come and do the Mary Rose one day and Lord Nelson's Victory another. So now they can just come again and play the game, because it shows the whole museum in a completely different light. You're really on a story journey; you're not just on that looking-around-the-museum journey, because it is just picking out certain elements.

It obviously isn't showcasing huge amounts of the collection; it really is one storyline that kind of leads through. But in terms of what the game does for the museum — I mean, we looked at examples like, obviously, Pokémon Go and Harry Potter: Wizards Unite and other trail-based AR stuff that was out there. And certain aspects of those games are great.

But what was dissatisfying for me was that I really wanted more photorealistic characters, for example. So that was one thing we wanted to do in the museum: to try and place these characters from history as accurately as we could. We had details like the captain, who's one of the key characters — we know a fair bit about him that we could base our storyline and scripts on.

We had the skull and the skeleton of a 17-year-old, who became the other character in our game. His job was to keep the ship watertight, and both of his parents were North African. And in fact, a third of the people on the boat were not white English people, basically. Well, everyone on the boat was male, so there was no gender diversity.

But in terms of that, you know, there was about a third who were from the Mediterranean or North Africa, which was different to, I think, what people perceive. So it was great to be able to bring out Henry's story, both because, from a storyline point of view, he gave you a bit more of what was going on below deck when the captain wasn't around, but also from a sort of diversity angle as well.

So you've got those two key main characters, who are obviously represented in a 3D photorealistic way. But then the other characters — when you play the game at home and you're looking at these characters, they're all just represented as these kind of 2D videos. In actual fact, they are photofits, again recreated from the skulls of people who were on board, and their pictures are all around the museum.

And so when you hold the phone up to those pictures, the pictures come to life and they tell you their backstory, tell you what's going on. And yes, the other additional element you get — well, there are a couple more additional elements, actually, that you get from being in the museum. One is that a lot of the artifacts obviously look old, because they are, but in the game you're traveling back in time to examine the moment when it happened.

For example, the Maltese Cross, which in the museum is a kind of very dull, barnacled thing that you can just loosely identify as being a cross — now you can see it in all its glory, how it would have looked, you know, like a very nice piece of jewelry, for example. And then obviously you've got the scent element: you wear the scent backpack, it releases scent at certain key moments in the narrative and just kind of further immerses you in the gameplay.

So I think those are the key differences from the home game. But we're hoping people will still have a lot of fun with the home game, because we're really keen that it reaches digital audiences way beyond those who can come to Portsmouth in the UK and experience the on-location game. And I'm sure we'll learn as we go, from this episode to hopefully others, you know, other ways in which we can make the home game even more immersive — to make it as immersive as the on-location game, which hopefully will be the case.

BYE: Okay. Yes, I had a chance to play the at-home version, and it's basically a mystery genre where you're trying to figure out why the ship sank. And so you get introduced to either the captain or someone who's below deck, and then you make a choice as to who you want to see. You see volumetric captures of these character actors who are giving you context as to who they are, and a little bit of information about the ship and what was happening.

And then you start to go through different objects that I presume are going to be in the museum from the Tudor period, which help both to put you into that time and place of what was happening on the ship, but also to give you clues as to what may have been happening. Because I did it at home, that was obviously a very linear experience.

But I'm wondering, as you go through it spatially, if it's taking you through the museum in a similar way, going from object to object to help bring that object to life and give a little bit more context. And then maybe you can speak a little bit more about the scents — as you're going on this journey, how you're using smell to amplify these objects and to amplify the story.

MIKKELBORG: Yes, sure. So yeah, when you're in the museum it's a trail-based experience, and you activate the different clues with AR markers. And in fact, it's a really good point that you make that the at-home version is much more linear. And I think, you know, one thing that we could easily do to change that, potentially, would be to reveal the ship in AR as your kind of anchor.

Sorry, so many ship puns — they'll just keep coming. But then have all the clues represented spatially around the ship at one time, and you can kind of choose the order that you experience them in, as one simple change. But yes, the way that we've designed it in the museum is there is a single trail, and you are kind of supposed to follow that route, but there's nothing forcing you to follow it.

It's just that the museum, because they are very busy at times, kind of has a one-way system that takes you along the ground floor, down under into the hull of the ship, and up onto the top deck. So we've designed the trail and the storyline to follow that line, but if one part of the museum were super busy, you could skip it and do it in a slightly different way.

But in terms of the scent: when you enter the game, when you meet those characters for the first time, we release the smell of ocean air, of kind of sea spray, to immediately put you there with them on the ocean. And then it depends on which character you're playing as. For example, if you're playing the captain's side of the narrative, there's a clue related to the gun, because we know the guns had been sabotaged in the past and there was a suggestion that the guns had been sabotaged at the time of the sinking.

And so at the point when you get to interactively fire the gun, we release the smell of gunpowder and gun smoke. So that's another one. And then if you're playing the Henry side of the narrative, when you're down with him fixing a leak in the ship's hold, they used this kind of molten tar, almost — it was actually made from tree sap, it was called pitch, but it smelt much like tar, a slightly more organic version of tar.

And so we release that smell. And then what else have we got? We've got beer, because the Tudors drank a lot of beer — they drank beer on board instead of water because it was generally considered safer, because the brewing process kills some of the bacteria. So they drank, I think, about 14 pints of beer a day to keep hydrated.

But also, this could have contributed to the whole sinking issue, because that is actually more than they were drinking on board the other ships. So we did do a bit of a comparison, and they did seem to be drinking more on the Mary Rose. And so we release the smell of beer when you're in the hold where the beer was stored, but also when you're at the backgammon table — you know, one of the ways in which they took time off: backgammon was common, dice were common, playing instruments.

There were a few musicians on board, and quite a few instruments were actually found on board as well. And then also there's the smell of this Tudor pomander, which is something that, if you could afford to, you would have — a pomander is like a necklace with a sort of ball on the end. You could actually make one: if you were not from the noble classes, you could make a homemade one with, like, an orange — if you could get an orange — and, you know, stick some cloves in it the way kids sometimes do nowadays at Christmas. But otherwise, if you could afford one, you had a fancy silver ball that stored rose water or whatever it was, and you would waft it and it would help clear unpleasant smells from under your nose. And on board with 500 soldiers and sailors who probably hadn't washed in some weeks...

So I'm guessing that was quite a nasty smell. So we didn't quite give you the full experience of being on board the Mary Rose, because we figured you probably wouldn't want to go around smelling beer for the entire game. But, yeah, we gave a flavor.

BYE: Yeah. It's interesting to hear your approach to this, because as I've done other immersive VR experiences, one of the challenges with smell is that once you introduce a smell into a space, it's hard to remove it. But because you're walking through a museum, and the smell basically comes from a device attached to an individual who is locomoting through the space, you're able to do the editing of the smell by walking into different site-specific locations that release the smell, and then you move on to the next location. So you don't have the challenge that you usually get with VR.

MIKKELBORG: No. And also there's the fact that we could release the smell so close — well, I'm going to make it sound unpleasant, but we've got it so close to your face. But like I say, it really is a dry scent; it's not something that's spraying. So it's kind of emanating, and getting the amount of scent right was quite key. But because it is emanating quite close to your face, even the person next to you shouldn't really be able to smell it.

And because it's so close, we don't need to release huge amounts of it. So the dissipation of it is not really an issue, because the amount that's being released is relatively small. So we have learned about that kind of static scent compared to mobile scent. And let's just say this: although it's harder to get through health and safety to have people walk around with scent devices on, it was also much better for the scent experience.

BYE: I wish I was able to fully experience this, because I only saw the at-home version, and obviously I don't have the scent pack to be able to experience that part. But I think, yeah, overall my experience of it is that with these cultural artifact sites, when you travel to places, you are trying to tap into the lore and the stories of the past.

And I feel like museums like this, like the Mary Rose, are able to take this ship that’s been recovered and pulled out of the sea and all these objects and create a whole museum around it that becomes a tourist destination. But to add these other layers of story on top of it. So it’s interesting to see places where immersive technologies are starting to be used.

You mentioned that there was already an immersive experience, because I saw a video of it where you see these big screens and the ship sinking. And so it's trying to give you a sense of the experience of the ship sinking, and then you're going into that. So is that something that's automatically included for everybody that goes in? Is it —

MIKKELBORG: Yeah, everybody who comes to the Mary Rose Museum goes through that immersive experience, which lasts about 8 minutes, I think. You kind of walk in and you actually have the actress Judi Dench telling you a little bit of the backstory. And then in the first room there is a holographic Henry the Eighth, and he's setting the scene, giving you a bit more of the background story.

And then you kind of walk past Henry and you're onto the gun deck as they're just heading into battle, just before she sank. And the walls are slightly curved to give you something of a sense of being on board. And that's, like I said, a projection-mapped experience that we did not make — just to make it clear, that was, I think, Figment Productions in the UK.

So we didn't make that one. And so yes, after you experience that, the ship then goes down under the water in the projection mapping experience, and then you exit into the museum. So yeah, it's nice — it kind of sets you up for a slightly more immersive visit to the museum. And our game doesn't really interfere with that, although you can have downloaded it before you go in to do that. It really kickstarts in the next space that you enter after that.

BYE: So the Time Detectives app that you have is an iOS app where you're able to do these in-app purchases. Do you plan on expanding that out to other museums and having one app with a variety of different experiences? Is that kind of the same idea — being able to have an interactive experience where you're trying to solve a mystery, but having a way of adding story on top of the existing objects around these cultural heritage sites around the UK? I'm just curious if you have plans to expand it beyond the Mary Rose.

MIKKELBORG: Yeah, definitely. Like I say, we originally prototyped it around a different site, and when we got the funding to make this one, we were speaking to various sites — some of whom are still interested, but they just couldn't work with our six-month timeline, because it was quite a tight timeline. We wanted to really see this one launch.

See how it does, see how many new visitors it can attract, all of those things, and kind of get a sense of all of that, and then go back to these sites and say, look, this is what we think the game can do for you. Not just because we want to commercialize as a company — because actually, as a storyteller, that is less my objective, to be honest, than other aspects.

You know, I just enjoy the storytelling. But also there was an element of this that wanted to help these sites, because I'm a great lover of history and they'd obviously seen their numbers decimated by COVID. And so it was an opportunity to also see what we could bring to help those sites, not only to get new audiences but also to expand their digital reach and all of that. Which is why, you know, I'd love to hear — please send me an email, Kent, if you feel like there are any adjustments or improvements to make to the game; I'd love to hear about it. Because the better we can make these for our home audiences too — even though for now the scent element won't be at home — the better, because it's just great to take this history further afield, way beyond the people who can come to Portsmouth in this case.

But yeah, we definitely want to expand it to other UK sites, and I'd love to take it to some Italian sites. And who knows, maybe also US sites, although we haven't really started thinking about which ones yet. So again, any input is welcome.

BYE: Yeah, two quick pieces of feedback. One is that I initially set it on the ground, and I feel like it probably works a little bit better if you set it on a table, just because —

MIKKELBORG: Yeah.

BYE: It's a little bit low if you're trying to see these objects. And also, there's a choice you make between the two storylines. I chose the captain up front, and then I wanted to go through the other storyline. It'd just be nice to have a reset button to kind of do it again through the other perspective, because it kind of got stuck — because I'd already done it and already had all these objects in there.

MIKKELBORG: So I think it might be the way that the inventory displays, possibly, because in actual fact the objects are in the inventory but grayed out until you collect them. So if you go quickly into the inventory, it might look like you have everything, but actually some will probably be grayed out and others not, because certain objects are on the other character's side of the narrative, not on the captain's.

So let me know on that. But it should be that Henry’s objects would still be grayed out, even if you could see them, because you wouldn’t have collected them yet. And also let me know which phone you own, because, you know, we’ve been having pretty smooth inventories on the phones that we’ve been using. But every phone represents a new challenge, and that’s one of the hardest things I discovered about AR. Luckily my developer had worked on AR before, but it’s just that you’re not developing for one or even two devices anymore. You’re developing for like 1,600 Androids and, you know, luckily only like eight iPhones or something. But it’s a marvelous debugging challenge.

BYE: Great. And finally, what do you think the ultimate potential of immersive storytelling might be, and what might it be able to enable?

MIKKELBORG: Oh my goodness. That’s a big question, Kent. I think the possibilities for immersive storytelling are almost limitless. I mean, personally, I quite like the idea of working in that direction, and I think it will likely catch on more with big audiences when you can make immersive experiences less tech-heavy, actually, because I feel we open out to, you know, people who might be less tech-friendly.

And I’m developing an experience right now which is an LED, cave-based experience where, you know, you walk into that space and the walls, ceiling, and floor are LED — interactive LED — and you interact with the story through your physical movement and the sounds you make and suchlike. You know, I love the idea of that because everybody knows how to move around a space and how to make sound if they’re asked to make sound.

Whereas there’s always technical challenges with everything you put in somebody else’s hands. But yeah, for sure, I think there’s so much potential. I also have a great idea in development — well, I think it’s a great idea. We’ll obviously see what others think. But it’s for something interactive that I would love to bring to someone like Netflix, something like a Bandersnatch, but kind of taking that a step further and also designing it in such a way that it doesn’t give you that kind of FOMO.

Like, I need to go back and relive 50 different routes through this narrative. I just think there’s so many different formats, different mediums within immersive right now, and they’re all kind of exciting and going in different places. And I don’t want to predict that there’ll be one golden nugget of an idea that leads to the success of immersive at large.

But I think augmented reality does for now just mean that we can open out to some much bigger audiences than VR, and that’s exciting. Just as I think more interactive things on the likes of Netflix and other platforms are exciting, because it just means immersive storytellers can also bring their work to a bigger audience. And I don’t think the likes of Oculus and such are quite there yet in terms of the ease with which you can get onto the platform. It’s become much easier, but it’s still — the way in which you can start to experience things, you know, it’s just not quite as clever yet as Netflix. And so yeah, I think we’ve still got a way to go, but there’s a lot of exciting stuff happening and we’ll get there.

BYE: Is there anything else that’s left unsaid that you’d like to say to the broader immersive community?

MIKKELBORG: I don’t think so. Thank you very much for the opportunity though. Thanks so much. Really enjoyed it.

BYE: Awesome. Yeah. Thanks for joining me today on the podcast.

MIKKELBORG: Thanks, Kent.

living-cities-nine-principles
Living Cities is a digital twin XR startup announced on May 26th, 2022 with an article titled “Reality is Scarce…and The Metaverse is infinitely abundant” that shares a list of “9 principles for connecting the real & virtual.” The founders of Living Cities include XR luminaries Matt Miesnieks (who sold his previous startup 6D.AI to Niantic), John Gaeta (who did visual effects on The Matrix, worked on the HoloLens, co-founded Lucasfilm’s ILMxLAB, & was a Senior Vice President at Magic Leap), as well as Dennis Crowley, who co-founded the geospatial social networks Dodgeball and Foursquare.

I had a chance to catch up with Miesnieks & Gaeta on July 6th to unpack each of their principles for connecting the virtual and the real, including how they’re trying to digitally capture the spirit of a place and the lore of specific locations, allowing the physical and virtual realms to be combined in unique ways. They’re still working on their initial demo, and so there is a lot of reading between the lines of their guiding philosophical principles to understand what exactly it is that they’re building. But they have a lot of deep ideas for what the next steps should be in building out an AR Metaverse that blends world-scanning technology with various social XR communication features and self-expression tools.

See below for the audio interview and a full rough transcript.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

MIESNIEKS: I’m Matt Miesnieks, and I’m the CEO and founder of a company called LivingCities.xyz. We are building a digital twin of the real world and working to figure out how to make that a living copy of a real place. It’s a new company. We haven’t really launched or said anything about our product as yet. And the company kind of builds on what I have been doing over the last 12 years or so in augmented reality.

Most recently, my company, 6D.AI was acquired by Niantic and 6D had developed some technology to crowdsource a 3D map of the world so people can capture 3D scenes on their phones and build them into a map. And Living Cities is kind of taking that one step further in the sense of what happens when you’ve captured this map. You know, what can you see? What can you do and how do you actually use it?

GAETA: My name is John Gaeta, and (laughs) what do I do in the realm of spatial computing? Well, I do a lot of personal computing in space. And I think a lot about how other people might be doing that. I think today we’re here to talk about, you know, how to harness the potential of people in space and project that into new forms. So the long answer to that one is perhaps another chat.

BYE: Okay. Yeah, maybe for each of you, you could give a bit more context as to your background and your journey into doing this work that you’re doing now with Living Cities.

MIESNIEKS: Sure. My background goes — you know, I’ve been in tech my whole life. I started just out of college, or in college, working on the Internet in its early days, and traveled around Asia, helping build that out. Then I spent about a decade in mobile for the company that invented the mobile web browser — you know, started out technical, ended up commercial.

And about 12, 13 years ago now, I was thinking about what comes after mobile and landed on augmented reality as being the direction that everything was going to head, which I still feel is correct, but I was definitely way, way too early. I probably should have waited ten years or so before jumping in. And yeah, you know, working in AR, I’ve been very interested in infrastructure and enabling technologies, and what are the platforms and technologies that are going to make all these amazing AR experiences possible.

GAETA: My background is that I began in cinema, known for designing visuals and concepts in the original Matrix trilogy, where I worked with a number of colleagues on some methodologies that at the time were pretty experimental, but were created in a way that we thought might be relevant in future times when things like virtual reality might actually be plausible.

After the Matrix trilogy, I did more cinema with the Wachowskis and then eventually started to see a lot of colleagues leave to move towards real-time graphics and more experimental media. I was intrigued by a lot of what they were doing. Some of them were going into gaming. Some were starting to go into the labs of Silicon Valley and such.

So I started a period of time where I was experimenting with new media. I worked on things before the Kinect, around human interfaces, and then, following that, the HoloLens — the pre-HoloLens — with Microsoft. And after that I went back to film and entertainment and worked for Lucasfilm as they were about to launch a next generation of Star Wars.

I helped begin something called ILMxLAB, which is one of the first immersive entertainment labs that was pushing some boundaries in real-time graphics. And there was a lot of exploration on behalf of Disney and future Disney ideas and the Star Wars universe. Following that, I was SVP at Magic Leap for a couple of years, which was interesting, and I could probably write two books about that particular experience. It was like living in the future in a way, with very interesting sort of early foundational thinking from there.

And that’s around the time — just around that Lucasfilm and Magic Leap time — that Matt and I became friends. We were exchanging a lot of theories and appreciation of each other’s work and vision. And since that time, we found reasons to come together, along with Dennis, on some ideas that were early in those days, and they seem right for us now. So I guess I’ll stop there.

BYE: And you just mentioned Dennis Crowley, one of the co-founders of Foursquare, which was a geospatially-located check-in app that was a pioneer in the space of trying to map out the social layer on top of where people were located. So Matt, maybe you could pick up — since Dennis is not here — how you and John and Dennis came together collectively: a little bit of the origin story of [yeah] where you’re at with 6D.AI and with John’s whole journey from The Matrix to Magic Leap. So yeah, maybe you can pick up what was the catalyst for Living Cities?

MIESNIEKS: Yeah, sure. I think what’s been awesome about working with John and Dennis is all of us have spent decades really, you know, looking at the same problem, but from three very different perspectives, you know. So, you know, in Dennis’ case, I met him through a mutual friend a couple of years ago and was really just interested in talking to him as a potential advisor to whatever I was going to do next.

You know, just as someone I was curious to know. And as I described, you know, what we were hoping to achieve with Living Cities, he was really excited and said, look, I’m negotiating my exit from Foursquare right now. I will remain as chairman, but I won’t have any day-to-day role. And I’m thinking about what I want to do next.

And what you’re talking about is kind of the direction I always hoped, you know, Foursquare would get to. And he’s like, “How can I be involved?” And I’m like, “Do you want to join us and run product?” And he was like, “Yep, I’m keen.” So you know, it took about a year from that conversation until we finally had a company, but it was just great to tap into his experience — like, he didn’t really know anything about AR, he hadn’t lived in that world at all.

But he’d spent his whole career figuring out how software intersects with the real world and how that can shape and influence our real-world behaviors. And, you know, he has a sort of line about how he wants people to get online in order to get offline. That the way that, you know, these online services and experiences can actually work — it’s not about getting sucked up into it and living your life online. It’s all about, you know, we have this whole life which is partly online, partly offline, and the online pieces should support and leverage and encourage the offline part of your life too. So philosophically, that was just so aligned with what we’re doing. And obviously he had a lot of experience running a large company, and it was just amazing sort of having someone else on the team who’s kind of been through that as an entrepreneur.

And it’s just nice, you know, from my point of view, to have his experience behind us. And then in terms of how the three of us came together, it was, you know, like what John said, we’d been talking for five or six years, you know, about the same sorts of ideas, same sorts of vision. We talked about working together a little bit at 6D just before Niantic acquired us and stayed in touch and realized that, you know, we all wanted to do the same thing.

We just needed to turn that into a company and then put some product focus around it so that we can actually bring something out. And that idea of what happens when you’ve got a digital copy or a digital layer of the real world — you know, I called it the AR cloud back in the 6D days. Nowadays it could be one little facet of the metaverse, the part of these virtual worlds that are mirror worlds of reality. And the potential for all different sorts of novel interactions, novel economies, novel use cases starts to get unlocked when you realize that a virtual avatar and a physical person can be in the same place at the same time.

BYE: Yeah. And if my memory serves me, I remember going to the Magic Leap LeapCon in like 2018. And I don’t know if it was you, John, that was on stage with Rony talking about the Magicverse.

GAETA: Yep.

BYE: Magic Leap was basically like ten startups all in one, doing lots of different things ahead of its time in many ways as an independent entity, trying to push forward the state of the art on so many different levels, and there was this idea of bringing augmented reality out into the world and having these different layers of what Rony was calling the Magicverse.

But I’d love to hear some of your reflections of what got you excited about that Magicverse and how you see that you may be continuing that idea of trying to bring — or what was it about the Living Cities that was recapturing your imagination for where you wanted to go?

GAETA: I mean, yes, there’s a thread line through all of it. I mean, for all of us, I would say, as long as we’ve been curious in these spaces and realms, each chapter, each relationship kind of leads you a little closer towards something. The idea of the Magicverse — you know, even before the Magicverse, Neal and I — well, Neal was on stage with us, right?

And so the Magicverse was kind of a product of Neal and Rony and myself thinking about stuff. And even before then, when I was experimenting at Lucasfilm, we got deep into trying to understand destinations — virtual destinations that are based off of places with real histories. You know, because we were thinking about the universe of Star Wars and how each place is special and has its own history and contains many different types of characters and relationships that it carries. It just goes quite deep. And the greatest of fictitious universes tend to do that. And fans thrive and engage deeply in those things. But oddly, those things often are based upon real-world places and people, and one sort of extracts from those things. So most fiction represents something that’s real and true in terms of people, places, and events and such.

At any rate, you know, I sort of came into Magic Leap with a lot of thinking along those lines — if we could try to understand a sense of place and a destination based on the lore. At Magic Leap I had the luck of working with someone like Neal, you know, thinking about things like that. And Rony, who is a real deep thinker and incredibly instinctual and intuitive on these things as well, where it was pretty quick to imagine that you could have any one layer over the world at any given time and they would have to intertwine, which leads towards spatial computing: in what way do you intertwine?

How do you intertwine? And the layer, of course, could be anything the mind could conjure, from entertainment and creative to pure utility. You know, like, here’s my health layer, here’s my tourism layer, and on and on, right? So these could be endless layers created by infinite amounts of people over time. So the layers thing is coming. It’s coming still.

And the combination of those two things is stuck inside of me. And Matt was talking about many of the same things. You know, when we met years back, he understood the way that reality, or the real world, was going to couple to a virtual layer — it would need to be fused by way of an understanding of not just the shapes of things, but the goings-on, what’s happening at that time.

So essentially the kinds of elements you would need to feed into a simulation of a place, right? What’s happening there? What is the light like? You know, what is the purpose of the place and the people within the place? And we always brought this back up as we talked over the years. Because, right, people were wondering about augmented reality — to what end, you know? Could it benefit people?

Could it amplify people in interesting ways? Could it lead to heightened capabilities, you know, in terms of expression? So we dragged all of that stuff from even before Magic Leap, through the 6D days and the Magic Leap days, into today. And I’ll let Matt sort of riff on this. But I think that there’s a lot of confusion, of course, about what the metaverse is.

We could talk about that endlessly. But the one thing that’s true is that a virtual container with no purpose or meaning or history, not defined by way of the people inside it and their community and their purpose, is sort of a desert to wander through. But as soon as you have an understanding of place, which is defined by people, then suddenly the value starts to show itself.

And I guess I’ll throw it back to Matt with regard to that.

MIESNIEKS: Yeah. And Kent, you got any questions? You want to jump in with others? I can just ramble on for a good hour or two on this.

BYE: So right now — you published an article back on May 26 called “Reality is Scarce: And the Metaverse is Infinitely Abundant.” So you’re starting to be a part of this larger metaverse conversation, which John just spoke to in some sense. And when I think about the differences between VR and AR, I think about how in AR you’re in the center of gravity of whatever existing context you’re in, and you’re using the virtual information to either modulate or subtly shift the context, or maybe try to change the context of what you’re in.

But with VR, it’s a lot easier to do a complete context shift, where I could be at home, I go into an immersive experience, and I could be at my doctor’s office, I could be on a date with my partner, I could be visiting family. And so the context there is much more of a stark context shift. So I see that there’s something with being grounded in this gravity of the existing context and being able to use the virtual layers to either connect people on a deeper layer, or to maybe subtly shift whatever the context is, or maybe create a new context, kind of a liminal space that doesn’t have an established context. So I’d love to hear –

MIESNIEKS: Yeah.

BYE: Some of your thoughts on where you start with the Living Cities and what context you bring in and how you start to iterate there?

MIESNIEKS: Yeah, well. One thing I’m really trying to avoid, I guess, is the semantics of “Yeah, this is VR, this is AR, this is XR, this is spatial –.” We’re really trying to think about what is the user experience, and what is the potential of this product we’re trying to make? And you know, the device you use to look into this place could be anything — it could be a phone, a browser, a VR headset or an AR headset.

What we think is most interesting for us is kind of what John was alluding to there. Like, if you build a virtual place just from scratch, it’s really, really difficult. And John sort of educated me on just how difficult it is to create a universe, a history, a lore, a culture, all these things — and this isn’t just something that exists for, like, VR space creators.

It’s everyone who’s ever made a film or written a novel or anything that sort of creates a world. And it’s very, very difficult to do that from scratch. Nearly every time someone does it, as John said before, it’s rooted in reality somehow, in the human condition somehow. So what we’ve found really amazing, I guess, is that if you go to a real place and say, look, we’re going to somehow replicate this place and bring it online, you get all of that history and culture and the lore and the types of clothes you wear there and the type of music you listen to and the type of people that go there.

Kind of all of that you get for free, you know, because it’s a real place. You know, it’s had – hopefully you pick a place that’s got a lot of history and it’s interesting, you know, in reality. And when you can then bring that online, you get this kind of different thing. Like it’s not quite AR. It’s not quite VR. It’s definitely aligned and connected with reality and it’s essentially being updated in real time by what’s going on in the real world.

But it could be experienced entirely virtually, and it could be tweaked and modified and adapted and changed around with all the tools that are available for virtual world creation. So that sort of concept of bringing the world online in that way is kind of what we’re zeroing in on and building out. It’s because we think that if you can do that — we’ve all seen, like, photogrammetry captures of places or 360 videos of real places, and you go in there and you look around for a bit and that’s it.

You’re kind of done. But this idea of like, how do you bring that place to life? How do you make it feel like you’re really there? How do you tap into that spirit of the place? How do you connect to the people that are in that place? All these aspects of it, the magic that no one’s ever really – we haven’t seen anyone do this before and we think it’s something potentially really magic and big that could be unlocked if we can solve it.

BYE: Yeah, I guess the question that comes up is the matter of scale, because I think of something like Google Earth VR, which has replicated the entirety of the Earth with the different types of coverage that are in Google Maps. It can’t be the same resolution universally everywhere because of resource limits, and you don’t need high-resolution captures if it’s just an empty cornfield.

So you have the whole range of the entire world. So where do you start with creating a digital twin? Do you start with urban cities? Do you try to recreate the entirety of urban cities? Do you try to take, like, an Ingress approach where you pick areas of interest and start to organically build out based upon wherever the early adopter users are? Or how do you start to boil the ocean in that sense?

MIESNIEKS: Yeah, well, you try not to is the main thing. You know, one problem with building an AR product, any type of software product, is this idea of population density — you know, like Pokémon, or like Niantic with these global games, how do you put a Pokémon on every street corner in every town, and how do you get more than one player in your neighborhood?

You know, that’s a really difficult problem to solve. And if you take this idea of AR as something where I look through something and I see something digital in my physical world right now, you’re going to have that population density issue. We’ve kind of flipped that on its head, rather than figuring out how do I get content to everyone on Earth, or how do I get everyone on Earth to the content.

And so we’re consciously choosing a starting location that’s in an urban environment that is very diverse and creative and reasonably well known globally, and tapping into the specialness of that place and trying to bring that to the web, to the metaverse. You know, one thing I often talk about when anyone brings up scale and AR in the same paragraph is, in all my years, I’ve never met anyone who has ever said my AR app is too popular on iOS,

how do I port it to Android? Or, it’s so successful here I’m struggling to scale. Like, everyone’s always had the problem of how to actually get engagement and get someone to come back to it repeatedly. And that’s the problem we’re really going after. We think that if we can get that working in one very small, constrained location, then as for the question of bringing that to multiple locations, whether they’re public spaces or private spaces, we can start replicating that.

And then, as I know from my 6D experience, the potential to crowdsource and build those maps of the places — you know, realistic 3D maps — is still not quite there today in the highest possible quality, but it’s coming pretty fast, and a lot of the mapping infrastructure is already in place, with everything from OpenStreetMap to every major platform that’s out there.

And they’re all working towards building 3D versions of their maps, but no one knows exactly what to do with those maps once they exist.

BYE: And John, did you have any thoughts on that?

GAETA: Of course, I mean, we’ve been — we’ve been inside those thoughts for a lot of it here. Yeah. I mean, to again reinforce some things that Matt just said, trying to boil the ocean is really going to be a slow, incremental exploration, and it probably is the domain of the giant map companies, right, to try to do that.

But what we’re talking about is more of a strategy — a creative strategy, a social strategy — of going compact but deep. Back to trying to understand, you know, what’s inside the fabric of a universe? What is actually the spirit of a place? We use that term a lot. And the spirit of a place generally is: in this place, these types of people congregate to do these things, and they’ve done that, in the case of the real world, for many, many years.

So in older parts of the world, it could be like a lot of years, centuries even, right? But to try to understand what happens in a compact area — it’s mostly really about knowing the people. And we don’t want to just sort of suggest that our interest is creating a copy, a precise copy, because, you know, interesting places and people and events appear in books and in movies and all sorts of other kinds of expressions of the same place.

You know, we don’t have any rules among ourselves about how precisely we really feel like replicating things. We know that it’s possible to take a picture of you and your family, you know, in a place. And that’s an expression of you in a place, right? And it’s framed by you. And you made a choice in how you did that.

So there are a lot of ways that one could reflect what’s happening, reflect the spirit of a place. There’s different media forms that could happen in. And so our interest is essentially reflecting the real world up into some form of itself, right? A virtual form of itself. But the form factor of the things being reflected could potentially fall anywhere on the spectrum of totally real and volumetric to totally expressive.

And to use that Star Wars example again: if, for example, a bazaar on Tatooine is really based off of a similar type of place in Morocco, right? You could look at it as an abstraction of that place in Morocco, right? It’s a sort of fanciful abstraction of that place. But underneath it, you see the elements — right? — of the real place.

And that’s interesting, right? So you can also think about it as you can fall on a spectrum of like it’s completely real to it’s an expressive or abstraction of the real. So these things are all in balance, I think. Right? And interesting, just like all different types of social media that exist today. I mean, you can put something literal up there and, you know, sort of something you’re sharing about yourself. Or you could be creative with it and you can abstract on it, but it’s still an expression of yourself.

So we love this idea that future social media will probably again still be people expressing themselves. But in this new virtual destination type of container. And potentially, again, interesting people in places export things outward like, “Hey, this is what they do in Shibuya, Japan.” It’s very interesting. I see it in videos and on the Internet and pictures and all that stuff.

Hey, that’s really cool. I’ll be influenced by that and I’ll make some art or create something, and it was influenced by that. And I put that out there now. Right? And so there’s sort of a progression, right? So the influence came in or the expression came to you, you did something and then you’re trying to participate in a way right from afar, out of appreciation or inspiration.

That stuff can happen. So there is this relationship that can happen between those that are there at the source of something and those that are everywhere else. That’s a big thing that we’re wondering about, right? How do things reflect in both directions?

BYE: Yeah. And in the Medium article, I know, Matt, you had written up nine different principles that I want to bring up and dig into. And John just mentioned one of them, the spirit of the place, and also talked about the virtual and the real being reflections of each other. But before we dig into those principles and values, I did have one clarifying question, which is, as we’re talking about this, the question for me that comes up is: is this something where you’re starting with recreating these different spaces just from the outside, publicly accessible places?

Or are you actually doing any internal depictions of these spaces? There are these boundaries between public property and private property, and I’m wondering if you are sticking with stuff that’s in publicly accessible view or if you’re going inside of any.

MIESNIEKS: Yeah, we’re definitely focusing on a public place to start with. You know, one of the things that we found — and we’re still finding, you know — is that when we’re trying to describe what we’re doing in words, particularly to anyone who’s not familiar with VR or AR, or even sometimes if they are, it’s a very abstract, amorphous concept for people to get their heads around.

And so we’re working to soon, you know, have a demoable example of what we mean. And then people will be able to look at it and go, “Oh, I get it. That’s what you’re talking about.” If we were to go to a private place, you know, a high-profile concert venue or theme park or stadium or something, we’d need to be able to show them what we do before they would really buy into what we would want them to buy into.

So this first site — I wouldn’t call it a proof of concept. It’s definitely gonna be a product, but it’s very much a stake in the ground of saying, look, this is what we can do, and once it’s working here, we can do that for other places as well. If you’re in a private place, or even a different country, you know, there’s different laws around what you can capture and record in real time of what’s going on in that place.

You know, everything from security cameras to swiping your wrist on entry doors and all that sort of stuff. Potentially we could take all of that data in and use it to create a very accurate, real time simulation of everything that’s going on in that real place. But the ability to do that is shades of gray from completely public in an environment that’s got pretty strict laws to a private place where potentially you could do anything.

BYE: Yeah, that makes sense. I think up until the point where I’m able to see it, there’s a lot of unanswered questions and stuff that I’ll probably understand a lot more once you have created that proof of concept and released it. So I think with the time we have remaining, it might be worth just going through some of these principles that you’ve written, because this article you wrote was part of the reason why I reached out — I thought there was some deep thinking in trying to articulate the underlying philosophical principles that are going to be really driving what you’re moving forward with.

And so I’d love to hear some extrapolation of these different principles. We already talked a little bit about the spirit of place, but I’d love to hear a little bit more of a riffing on some of these nine principles that you brought up here.

MIESNIEKS: Sure. Yeah. I mean, either of us can sort of go into these. Let’s start with “reality is scarce.” You know, everyone’s thought that it’s a big world — it’s a huge world we live in. But what’s interesting to me is, if you go to a place in the real world — you go to the center of Times Square in New York — in the physical world, there’s only one thing in one place at a particular time.

And when you’re looking at a virtual space, you can potentially have infinite things represented in that same space. You know, that’s all the different layers that we’ve talked about. So that idea of scarcity and how that connects to the abundance of being online is really interesting — from an economic point of view because we’re not quite sure exactly how that will play out.

But the idea of one physical person in a place, and maybe 100,000 virtual people in the same place, gives a really interesting sort of imbalance: what power does the virtual person have that the physical person doesn’t have, and vice versa? And that interplay between scarcity and abundance and value is ripe for us to explore. I don’t know, John, did you want to pick another one? Just to…

GAETA: Okay, we can play ping pong. So, you know, “people are the killer app.” That’s always been true, hasn’t it? Since the beginning of time, before they used the word app. We can experiment and explore all sorts of fantastical things to do, places to be, layers of reality. But at the end of the day — I think it’s proven time and time again — what matters is who you’re with or who you’re interacting with and what it is that is happening between you, right?

So at the end of the day, that’s got to be the center, right? It’s the sun at the center of the universe. And so we need to appreciate and understand that if people are going to stay engaged in experimental realities and mirror worlds and fantastical things like this, it has to begin and end with the relationship to people, the engagement of people.

So we’re trying to orient things in that particular way because the idea of the metaverse is an abstraction. It can be like infinite universes. They still are just places, but they’re empty places. They’re empty places until they’re defined by the people within them, and the things that they’re doing. So that’s the point of that. People are the sun at the center of the universe. We’re focusing on that and that’s why we feel assured in how we’re prioritizing things.

MIESNIEKS: And one thing I’d just add to that — it’s probably my biggest mistake, the thing I missed with AR. I’d been working for years, you know, looking at world-facing AR, like looking through the phone or through the glasses to see the content in the world. And what was successful was, you know, face filters, and it was that ability of, how do I point the camera at myself?

And it’s about people and how do I augment people, not rooms, you know? And so, yeah, that really drove home to me that people should be the center of all the interactions that we’re exploring. We talked a bit about Spirit of Place. We talked a bit about how lore is built into real places. But the next one is about communication, social, and self-expression being the same thing.

What we see — I guess what you’ll see today — is that nearly everything in at least the virtual world half of the metaverse, as opposed to the crypto half of the metaverse — nearly all of those products are entertainment or gaming, and they’re hard, and you know, that’s good. But something I always believed is that when you look at markets and market opportunities, and people and what we do with our time and our lives and our energy — like, we like to be entertained, but far more than that, we like to communicate with each other.

And, you know, we spend far more money on communicating, you know, everything from our mobile phone bill to traveling than we do entertaining ourselves. And so I think when we look at our piece of the metaverse that we’re trying to create, we really wanted to tap into this idea of how do you enable people to communicate? Because that is social.

You know, Facebook is a communication platform. They just rebranded it as a social network. And a lot of that communication is just how do we express who we are? And when you go through a platform shift, that use case doesn’t change. All that changes is that you now have some new forms of media, you know, in this case 3D media, to start doing those same things.

And what is this new form of media? How does that let you express yourself, or communicate in a way that wasn’t possible before? So we’re far more interested in that as a use case than we are in building, like, a game like Niantic has done.

GAETA: Which is super exciting because we’re, like, in the midst of a near paradigm shift in the capture and generation of things. So self-expression is really going to evolve and move fast in the next year or years. And it’s a very exciting area for us. And we’re lucky, right, to have begun now, because we don’t have a lot of legacy that we’re bound to with regard to what’s the new form of social media and self-expression.

So we’re kind of free. Oh, we’re lucky. “AR is the wrong place to start” — just to mention that one. Basically, I don’t think anybody who listens to this incredible program of yours, Kent, would argue with the idea that augmented reality is really going to itself be a paradigm shift in the way that we perceive and consume content. But right now, it’s a window.

The way we’re looking at AR and VR, or any device, any screen, is that they’re windows upon the actual thing that matters. And the thing that matters is what it is that people are coming to and engaging with, and that’s where our focus has to go, because we’ve all been spending a lot of time, you know, joyfully inventing powerful windows onto the thing that matters.

So AR will come. VR will come. You know, the spatial web is going to come. But what matters is what’s actually before our eyes, after all. So that’s what that means.

MIESNIEKS: Yeah. The next one is “The real world being a dynamic living place.” We’ve talked a bit about that. Like, if you want to capture this feeling, you know, the spirit of the place, if you want to tap into the people that are inhabitants of that place — this digital twin, like the idea of a digital twin in architecture, engineering, whatever, is always like a static model.

And we think that’s just a first step. Like it’s nowhere near enough. So you need to somehow bring all those dynamic aspects of reality up into the virtual twin. Not much else to say there, but it’s closer to a simulation than it is to a digital twin.

GAETA: Yes. And a simulation that can have its parameters tweaked at some point. So, “Get on the metaverse to get off the metaverse.” Again, after the honeymoon of virtual worlds and all of this stuff, at a certain point most people will realize that real life was always so much more interesting, so much stranger than fiction, and more beautiful than any digital experience that we could have.

However, what we were sort of implying there is that, like great art or something, it can catalyze your interest in pursuing something — you know, that is a function that can be served by things like the metaverse. So this is more about the remote co-presence aspect. We could get a taste of a place and all that goes with it. And really, hopefully what it would cause is a drive for us to sort of get to the real place.

So the metaverse could serve to catalyze greater appreciation and engagement with the real world and reality itself. And the weird thing about essentially working in these areas, in these mediums, is that the deeper in you go — and visual effects did this before world stuff — the deeper you go into trying to replicate reality, the more it makes you actually appreciate and look at reality and engage with reality harder.

To perceive it — to actually not just sort of let it go by, but actually try to use your senses to perceive what’s happening around you and appreciate it in a deeper way. So hopefully what will happen is, as we dabble in the metaverse, it’s going to make us get out of the metaverse and really take in our real lives or real places a lot more than we realize.

Not for all. Maybe some people get lost. But I do think a lot of people will suddenly look at reality freshly after dabbling in the metaverse.

MIESNIEKS: And the last phrase is about reflections. That’s a word we use a lot. Like, how do this virtual world and the real world reflect into each other? And I think the phrase mirror world is kind of a misnomer in that it kind of implies a perfect, literal twin of the real place, like looking in a perfect mirror at yourself.

And, you know, it’s not too difficult to imagine, once you go down that path, that it’s impossible to do. How do you get every blade of grass perfect? So the interesting thing is, if you think about reflections as a term, it has a much broader meaning than just a literal mirror, and it can mean anything — you know, like a funhouse mirror can give you a slightly warped reflection.

You can have a pond or a lake where you get a reflection that’s kind of affected by the ripples and the color of the water right through to something like an impressionistic painting, like a Monet painting of his water lilies that is totally impressionistic, but it is definitely like you go to the place that he painted and you can tell it’s the same place and it is an interpretation of that real place.

And so that broader definition of reflections is really what we’re leaning into, and we’re not going for the whole pixel-perfect, real-time copy of the atoms, but more a sense of what does it feel like, and how does it feel similar? But it can also be a little bit different as well, and create something that’s more interesting and more engaging or more fun or more peaceful — you know, whatever you feel it needs to be, once, you know, all these creative tools are out there.

BYE: Awesome. Well, I think that actually gives me a really good sense of your intentions and the philosophical principles that you’re basing your company on, extrapolating out from there and building it. I’m really excited to see where it goes as it continues to evolve and progress down the path of building Living Cities. So I guess just to wrap things up, I’d love to hear from each of you what you think the ultimate potential of these immersive technologies — virtual reality, augmented reality, or just the blending of the virtual and the physical together — might be, and what they might be able to enable.

GAETA: Well I’ll go first and Matt can like land the plane. How long have you been doing these podcasts, Kent?

BYE: It’s been over eight years now.

GAETA: Yeah. They’re amazing, and I’ve heard many of them. And so much has happened and changed and evolved. And it’s like, this is a road, right? We’re like trying to get on a road. And the road is a long road. It might not ever end. You know, imagine this road 20 years from now, 50 years from now. Right? But the idea of the road we’d like to get on is one that’s actually intertwining the real world and reality with the virtual, a sort of an extension and amplification of the real.

And these technologies that we have — it’s just remarkable and impressive, you know, what’s actually been done in the last decade. Incredible, really. Everything from like computer graphics depictions of real things to ways of seeing and being inside and touching these things in a sense. But we’ve literally just gotten a bunch of colors of paint and some brushes just in the last few years, couple of years.

You can make an argument that like this year and the next few years are like the beginnings — the beginning now. We’re finally almost ready to get going. So the answer to the question is it’s a road we’re stepping on, I think everyone’s going to step on it. And we’re really looking forward to collaborating with people, having others really tell us what they want to do, what we should do. I mean, we just have a general sense, right, of the elements of the formula of this alchemy. But we absolutely need people to sort of mix these things together with us and learn from them.

MIESNIEKS: Yeah, for me, again, if I think long term and get away from products and devices and things, it really is about giving people superpowers to let them influence and change their perception of reality. That’s kind of at the heart of it. And hopefully those powers are used for good, or at least incentivized for good, and they make the world a better place.

BYE: That’s awesome. Well, is there anything else that’s left unsaid that you’d like to say to the broader immersive community?

MIESNIEKS: We want to talk all about our product, but we can’t yet. We’re just going to wait until we get something to show off, and we’re really looking forward to starting to show it off. Internally it’s super exciting — just in the last week, you know, we had some milestones where everything kind of hooked together, and we’re getting a sense of what it’s going to look and feel like. So, you know, for me, I just can’t wait to start talking about that.

BYE: Well, John and Matt, thanks so much for joining me to help give a little bit of a sneak peek. I know it’s still early days, and we’ll have a lot more to talk about once you’re able to show the world what you’ve been creating. But I loved just hearing your journeys up to this point and getting a little more context as to what’s inspiring you and the different principles that you’re building upon. So I’m really looking forward to seeing where you take this. So yeah, thanks again for joining me here on the podcast.

MIESNIEKS: Yeah, thank you. Thanks for having us.

GAETA: Of course, it’s great to be here.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

The-Metaverse-Matthew-Ball-1920x1080

Matthew Ball’s “The Metaverse: And How It Will Revolutionize Everything” book (launching today) is the most comprehensive articulation of the Metaverse technology stack, as well as its foundational principles catalyzing a paradigm shift from 2D to 3D and into spatial and immersive computing. What started as an essay on Fortnite (Feb 5, 2019) evolved into an essay on the Metaverse (Jan 13, 2020), then into a series of essays with his Metaverse Primer (Jun 28-29, 2021), and has now tripled in size with the publication of Ball’s The Metaverse book (Jul 19, 2022), which dives into specifying each layer of the Metaverse tech stack, where it’s at now, and where he expects it might go in the future.

[See the full transcript of my podcast interview with Ball down below].

I find Ball’s definition of the Metaverse on page 29 of his book to be the most complete and satisfying I’ve seen so far.

[The Metaverse is] “a massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

He spends the middle section of the book elaborating in great detail on each layer of the Metaverse stack, including networking, computing, virtual world engines, interoperability, hardware, payment rails, and the potential opportunities and perils of the blockchain. This middle section was a lot of wonky technical detail to sort through, but it’s necessary to understand the bigger picture and larger argument that Ball is making in his book.

Ball sees that the aspirations of real-time, spatial computing and immersive 3D virtual worlds are pushing the technological cutting edge of each layer of the Metaverse stack. He lays out in the middle part of the book how there are many yet-to-be-solved challenges that are preventing the Metaverse from living up to its fullest potential.

The Metaverse will be a confluence of all of these foundational elements coming together, which will then have the potential to usher in a complete paradigm shift in computing. But we won’t know what that will eventually look like, as it’s an iterative and emergent process. He points to paradigmatic examples such as Fortnite, Minecraft, Roblox, EVE Online, and Microsoft’s Flight Simulator to give us some ideas of how these underlying principles have been playing out in a gaming context, but there are many more unknowns than knowns for how these underlying principles will get translated into other contextual domains.

Ball elaborates in the first section on how the Metaverse faces many of the same challenges that tech futurists faced at the advent of the World Wide Web in trying to predict the full scope of its influence, and how the new technological, cultural, legal, and economic dynamics all fit together. Ball documents many of the wrong predictions about the Internet and the WWW that we can look back on and laugh at with 20/20 hindsight, but this makes it clear that we’re still well within a similar occluded zone with the Metaverse, where it’s really difficult to know how all of these new capabilities will combine together and impact society.

He includes a brief survey at the end of his book of some of the ethical and moral challenges of the Metaverse, and Ball says that ethics is one of the open questions that most often comes up. We’re still in the process of trying to understand and deal with the many ethical fallouts of the 2D and mobile web focused on text, pictures, and video, let alone how the movement from 2D to 3D and from asynchronous to synchronous creates new and even more challenging problems.

There’s still obviously lots of work to be done in the realm of ethics for the Metaverse, but there’s also existing prior work worth mentioning, like Madary and Metzinger’s “Real Virtuality: A Code of Ethical Conduct,” XR Access & XR Association’s XR Accessibility GitHub, XRSI’s Ethical Research and Standards, and the IEEE Global Initiative on the Ethics of Extended Reality’s series of eight white papers [published in 2022 after Ball's 2021 writing deadline]. I’ve also given a number of talks on XR Ethics, XR Privacy, the need for robust responsible innovation frameworks, and the need for Ethics within the Metaverse. [Update: Philosopher Evan Selinger outlines more XR ethical issues in his book review.]

My personal process of peering into the future has been to focus on the direct, embodied, phenomenological, and subjective experiences of VR and AR applications across many contextual domains for the past 8 years, and to listen to key developers through oral history interviews. My episode #1000 of the Voices of VR podcast elaborates on why I’m personally all-in on spatial computing as a foundational paradigm shift, as I’ve gathered 120 answers to the question of the ultimate potential of virtual reality since May 2014.

Ball takes a much more objective, distant, and quantitative approach in telling the story of the Metaverse through the economic, cultural, and gaming trends that he’s been tracking. He takes the view that the Metaverse is hardware agnostic, and so it’s not predicated on specific VR or AR technologies. But he also expects that XR devices, and the emerging, embodied neural interfaces that are coming along with them, will likely be the most compelling ways to access these real-time, interconnected virtual worlds to fully appreciate the new levels of immersion and presence.

I’ve seen a lot of wrong and overly optimistic predictions about the projected size of the VR and AR market over the past eight years, and so I suppose that’s one of the reasons why I’ve paid more attention to the direct experience of XR technologies and to listening to what the artists and pioneering XR developers across multiple domains are doing with the tech. There’s not a lot of Ball’s own direct experiences of the Metaverse within his book, or original on-the-record reporting on the companies or experiences he’s featuring. Ball told me that’s because he wanted to show how these broader trends of the Metaverse are happening independent of any one singular company or technology stack. It’s more of a survey and synthesis of publicly-available reports and tech insights from the gaming community on how Fortnite, Minecraft, Roblox, EVE Online, and Microsoft’s Flight Simulator hold some key insights for the future of computing.

The end result is that Ball’s Metaverse book helps to set a broader context for many of the same technological shifts and trends above and beyond what’s happening in XR. Despite my more narrow focus on XR, Ball and I come to many similar conclusions about how the shift towards real-time, interactive virtual worlds is a provocation for the next phase of computing, as seen in the major trends of social gaming and the technological roadmaps of some of the biggest Big Tech players in Silicon Valley. There’s also a clear convergence between what’s happening in the most popular virtual world platforms, which are currently primarily accessed through 2D portals, and how XR tech will be one of the most compelling ways to interface with these 3D virtual worlds.

I’d highly recommend checking out Ball’s book to hear more about the specific examples and underlying principles that he lays out for each layer of the Metaverse tech stack. Ball’s writing is helping to tell the broader story of the Metaverse to leaders of the tech industry, while helping to bring a more grounded and clear definition of the Metaverse that reflects what’s already happening in gaming right now and where it’s likely to go in the future. It also helps set a broader context for how the future of networking, computing, virtual world engines, interoperability, hardware, payment rails, and potentially the blockchain will play into this Metaverse ecosystem, which will very likely include XR devices and new neural HCI input devices that help make the transition from 2D to 3D.

I had a chance to do a half-hour interview with Ball on Tuesday, July 5th, 2022, and you can read the full transcript below or listen to it here.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

FULL INTERVIEW TRANSCRIPT from July 5, 2022.

MATTHEW BALL: My name is Matthew Ball. I’m the author of “The Metaverse: And How It Will Revolutionize Everything”, which comes out on July 19th. But I wear a different set of hats. That is, an investor, an entrepreneur, a producer in TV, film and video games, usually in and around this theme, but sometimes in more traditional media, and so forth.

KENT BYE: I read an interview you did with SIGGRAPH that traces the evolution of this as an idea, where you had some essays about Fortnite, then you moved into the Metaverse and did a 35,000-word series of essays on the Metaverse, and now you’ve tripled that into over 100,000 words on the Metaverse. And so maybe you could give a bit more context as to how you started to write these series of essays on the Metaverse.

BALL: Sure. So you’re quite right. I talk in my book about the fact that the Metaverse is a nearly 30-year-old term. Of course, anyone who’s listening to this podcast knows it originated in 1992 with Snow Crash. But the ideas it describes can be traced back nearly a century. That starts in the 1930s, with the first known discussions around VR goggles, and continues through immersive VR environments, A.I., and holography through the thirties and fifties.

And so it’s not new that we’ve considered the Metaverse. I’ve been familiar with the term since the late nineties, and I’ve known about games that aspired to build it since around that same time. But my personal experience relates to my last job, which was head of strategy at Amazon Studios, which at the time ran nearly everything that we think of Prime Video to be.

And for much of the last decade, that was the frontier. The new frontier in media was direct to consumer streaming services. And I started to get the sense that gaming really was ready for prime time. Of course, it had been growing slowly for 50 years, but it felt like it was on the cusp of cultural domination that really didn’t even seem possible at the start of the last decade.

And then in 2018, I started playing a lot of Fortnite, a lot of Roblox, and I could start to get the sense that it wasn’t just that gaming was moving to the front lines. It’s that this fantastical idea of the Metaverse was starting to become a practical opportunity, and that some had actually started to build it, that the foundation was in place.

And so I wrote a piece at the end of 2018 called Fortnite is the Future, but Not for the Reasons You Think. And the goal of the piece was to break down many of the narratives about Fortnite that were running at the time: “Wow, I can’t believe this game is free. And yet it generates more revenue than any other.” “Wow. Look at this game’s cross-platform nature. Wow. Look at the idea that this is being patched on a weekly basis.”

And I was trying to articulate that none of these things were new, actually. What was new was their creative implementation and the enormity of their popularity. But more important was how it was leading up to the Metaverse and what was starting to become a visible Metaverse strategy at Epic, though they hadn’t said as much.

A year later, I wrote a dedicated Metaverse piece and this was pre-pandemic but became quite popular as the pandemic began to take off. Then a year later, I wrote The Metaverse Primer, which was an effort to really encapsulate my learnings over the past year and a half about the technical requirements. And then, of course, in 2021, we saw the Metaverse popularized as a theme.

Mark starts talking about it in July and October. He renames the company by the end of the year. Unity and Roblox are the two largest gaming IPOs ever. Roblox is the largest gaming platform in the world. And so I decided to write this book, which really culminates five years of writing and thinking on the topic.

BYE: Yeah, I just had a chance to finish it and I think it’s a pretty authoritative history assessing the confluence and concrescence of all these technologies coming together. And what I really appreciated about the book was pointing towards paradigmatic examples in each of the different sections, extracting the philosophical principles from those examples, and extrapolating them out as they continue to diffuse to larger and larger scales throughout the culture.

But before we start to dive more into the book, I wanted to get a bit more context on your background and your journey into this, because your writing style is very interesting. I get the sense that through the process of writing your essays — I know you’ve had conversations with people like Mark Zuckerberg, and I’m not sure if you’ve been in conversations directly with Tim Sweeney, but you’re certainly well-informed and at the center of a lot of it.

But your writing style is very distant in the sense that I’m not seeing any direct quotes. You’re using quotes from other news articles, so I’d love to hear a little bit more about your process for how you put this stuff together. And are you doing consulting with some of these companies behind the scenes? Or is it just more of your role as an analyst to immerse yourself to the extent that you do and then try to gather all the publicly-available information to draw out the larger economic story that makes this argument that you’re seeing all these different technologies come together?

So I’d love to hear a little bit more about your process for how you do this sensemaking process of what’s happening in the realm of technology and where it’s all going.

BALL: So you’re right. In the book, I don’t use any direct quotations, and in fact that was partly a reflection of the constrained writing environment. I wrote the book over three and a half months at the end of last year. There really wasn’t the opportunity to go deep, to do investigatory pieces, to understand the multi-year history of many of its leading individuals.

And so that was partly a constraint. But the goal here really wasn’t to get deep into any one company. In fact, one of the things that I personally take a lot of happiness from is you’ll note that there are oppositional or competing endorsements for the book. So I have the Epic CEO and the Unity CEO. I have the Sony CEO. And then I have the Microsoft Gaming CEO.

And that was because my goal with the book was not to talk about a specific instantiation of the Metaverse or a specific philosophy or ideology. It wasn’t to go into the history of Tim Sweeney’s efforts to build the Metaverse, nor those of Second Life. It was to provide a survey of the technologies and the multiple different theses. I got one piece of feedback from the CEO of a large tech company saying, I loved your blockchain section, but I came away not sure whether or not you were pro or con.

And I said, “That’s the best compliment I can imagine.” Because I wasn’t trying to advocate for this. So when it comes to your question as to the writing process, most of the education came from entrepreneurs and founders. I have a venture fund. I’m a partner at MAKERS Fund, I’m an industry advisor at KKR. And so I do early stage investing, kind of mid-stage and then late stage growth equity investing.

And those entrepreneurs are outstanding because they have 20 years of experience. Sometimes they’re disgruntled, sometimes they feel like they wasted ten years of their life trying to solve a problem at a big company that was incapable of doing it. And so there’s this bounty of information, but I think I’ve always been good at doing at-a-distance summaries of the marketplace.

And I see this very much as the manifestation of five years of work on theme, but 15 years of writing as a blogger.

BYE: Yeah, yeah. I think my assessment of the overall book is that you get the overall story completely nailed and correct — in terms of my own assessment. And I really did appreciate your critiques of the blockchain, because I come to a lot of similar conclusions: there’s potential for some of it, but there are also a lot of challenges and risks and other problems that you elaborate on in your book.

But I guess one of the other differences I’d point to is that how I approach covering the industry is much more direct, phenomenological, and immersed in the embodied experience of a lot of these technologies. And I had a harder time seeing where you’re oriented when it comes to the direct experience of virtual reality or the direct experience of AR.

Because you talk a lot about Fortnite, and Roblox, and Minecraft and EVE Online as these kinds of leading indicators, and there’s maybe just one mention of VRChat or Rec Room, which, you know, for me, when I think about the future of the Metaverse and immersive technologies, I think about my own experiences I’ve had in Rec Room and VRChat and the qualities of presence.

And in your definition of the Metaverse, you mention presence, but I wouldn’t say there’s a deep elaboration of the concept of presence within this book. So I’d love to get a little bit more context from you on your own journey into VR and how you assess the roles of VR and AR technologies in the continued evolution of the Metaverse.

BALL: So I love this question. I really like VR. I get super excited about AR. There are many people in my network who just believe that this technology is so far off that it’s actually absurd that we spend much time focusing on it. I certainly think the technologies are hard, the problems are hard, they’re worth solving. But I am also of two perspectives.

One is that they’re not nigh, which is to say, I don’t think that we’re going to be replacing our smartphones within this decade. I don’t think we’re going to be doing it probably within the first few years of the next decade. The second thing is, I don’t believe that they’re a requirement for the Metaverse. There was this great tweet thread I’m sure you saw from Neal Stephenson last week or the week before.

And he talks about the idea that when he wrote Snow Crash, centering it around AR and VR was what he called a good hypothesis, especially if you were a science fiction author at the time. And he said what he couldn’t have imagined was that decades later, you would have billions of people inside 3D-rendered, real-time environments interacting purely via touch or even WASD, right? Keys on a keyboard for forward, back, right and left.

That would have been an unintuitive estimate. And so I think about VR and AR devices as more intuitive and as furthering immersion in the Metaverse. They’re doubtlessly destined to become the best, most popular, and preferred way to interact with these environments. But I don’t think they are requirements. And in fact, this is probably one of the reasons why you see them as relatively non-focal: I think that separating the relevance of 3D simulation in the Metaverse from that hardware helps to explain to many people that the Metaverse is not 2050.

The Metaverse doesn’t need you to believe we’re going to replace our smartphone tomorrow. It’s actually about the more underlying technologies, a persistent virtual network that is relatively endpoint-agnostic, which doesn’t mean you’re not going to have different endpoints with different experiences. As for my personal experience with VR, I mean, at Makers Fund, we’re early investors in VRChat. I use it a lot.

I find it really fun. I’m really lucky that I don’t experience nausea. And I’m also, I think, fortunate in the sense that I just have a more intuitive sense and feel for virtual immersion. And to the extent I might be predisposed to nausea, I think my mental expectations help. But I think it’s an incredible environment for young people.

I’m ranting now because I get excited about this topic. But you know, Chris Dixon talks about the fact that when a person realizes that their active brain can’t overcome what they know to be a fantasy in VR, it’s a pivotal moment, right? You put someone at the edge of a cliff in VR, and even though it’s low risk, even though it’s low frame rate, they struggle to jump.

That’s the first time you can tell: this technology may be far from prime time, but it’s not far from substantial immersion that deeply affects us. The second is when you give it to children and they have this natural, instinctive feel. You see this a lot in Rec Room and VRChat, where, to some extent, I still know I’m in VR when I’m in VR.

But you look at a six-year-old and they’re just playing around, and that idea that they’re turning their head to the right and it’s not a physical thing they’re looking at, that boundary just seems totally diffused at this point.

BYE: Yeah. Yeah, I think that makes total sense, and I also really agree with that, because I know Tony Parisi came up with The Seven Principles of the Metaverse, and one of them was that it’s hardware agnostic. And I think another big point that’s made in your book is how Fortnite was catalyzing this cross-platform play, and that Fortnite, Minecraft, and Roblox are all available on all these different platforms. And so Rec Room is another one that I think is taking that same path of being on all these different platforms.

So I think that’s actually a key part: to not just tie it to VR. I’m wondering if you can maybe — I’ve watched the video where you just rattle off your authoritative definition. I’m wondering if you could just share that definition and then how you’ve structured the book, because you’re breaking down each of the different chapters, going into great detail on your definition. And so I’d love to have you share that, and I have some thoughts.

BALL: Sure. So I actually cheat a little bit. I describe the Metaverse primarily around technical keywords and concepts. I’m not actually defining it. The reason why I do this is that if you take a look at the definition of the Internet, you’re talking about it either as a network of networks (again, a little bit of a description), or as the Internet protocol suite, which is not a very helpful definition.

It’s more technical. It’s talking about the protocols. A description of the Internet tends to be more intuitive. So I describe the Metaverse as a massively-scaled and interoperable network of real-time rendered 3D virtual worlds, which can be experienced synchronously and persistently by an effectively-unlimited number of users, each afforded an individual sense of presence, while supporting continuity of data such as communications, identity, history, entitlements, objects, and so forth.

What we’re really doing is describing the technical requirements and experiential observations to have a proper parallel plane of existence. Right? That’s essentially describing the things that exist in the real world today. And so the middle third of my book is a deep dive into what’s required for that to work. We’re talking about networking capabilities, computing requirements, the actual tools to create and render virtual environments, interoperable standards to exchange information coherently, comprehensively and securely.

Talking about the payment systems that are required to actually build a thriving Metaverse, not just a functional one. An examination into blockchain: why many believe it’s essential, why others believe it’s useless in all applications, and what potential middle grounds might look like. That’s the middle third of the book. It’s building the Metaverse. The first third is focused on “Why now?” What is it?

Explaining that definition in great detail, but also getting into some of the fundamental questions as to why do people believe that there’s a war here? Why is it important? Who wins? And then why — and this was one of the most fun things for me to examine — does it seem that the forerunners of a multitrillion-dollar transformation are gaming companies, which have otherwise been a relatively-trivial part of the economy focused just on consumer leisure?

And then the last third of the book is the more speculative elements. It starts to talk about what the value of the Metaverse might be, who might win, which technologies are likely to prevail, and what sorts of businesses will be built. And then, for consumers concerned about what the future might be: how do they get involved, and what should they do?

And then lastly, I should note I finish by trying to remind people what you can and can’t know about the Metaverse today.

BYE: My experience of reading each of those chapters was that there’s a lot of really low-level technical detail, but as you get through it all, the big picture comes together. I think as you see the confluence, just like you talk about near the end in terms of the iPhone moment and the concrescence of all these technologies coming together. And so I think similarly, you’re telling the story of all of these things that are coming together right now, and why it’s important.

Now, a couple of things I wanted to ask. One is that I know that the challenge of writing a book is that there’s always stuff that happens afterwards. And just a couple of weeks ago, there was the Khronos Group and the Metaverse Standards Forum that had this big coalition.

In the book, you’re talking about how it’s very difficult to have people collaborate. And I guess when I was reading those sections, I was like, “Yeah, but they have already collaborated on OpenXR. There already are a lot of these interoperability efforts. There are already open standards in terms of object formats, in terms of glTF, and there’s USD.”

So I’m just curious what your reaction is to some of this latest news from the Khronos Group and the Metaverse Standards Forum, and whether that supports a lot of this larger thesis that you have about the difficulties of coming up with those standards of interoperability. And do you have any additional thoughts that you didn’t have time to include in your book because the forum hadn’t even been created yet?

BALL: So I think I’m very optimistic in the book about the establishment of interoperable standards. It’s the single biggest area of pushback I receive: do you actually believe this can happen? And I do. And I believe that the primitives or pressures or gravity of expanded networks will deliver. In that regard, the Metaverse Standards Forum is an important first step.

At the same time, I’d say the following: the easiest part about establishing standards is always getting a bunch of people in a room together. We’ve seen that numerous times. You’re right, we have OpenXR and WebXR. But very few people support them, and, more importantly, their antecedents OpenGL and WebGL are very rarely supported. None of the major consoles support them, for example.

Actually, Oculus is the console that uses these the most. You’ll also find that there are multiple other standards forums of some way, shape, or form, in some regard. Khronos exists to do that, but then you have WebXR and the Metaverse Standards Forum. You’ve also got the XR Association. There are multiple different groups. Most people sign up for them because they would rather be heard than not heard, and they would rather shape the standards than have their competitors shape them.

I use this XKCD comic in the book that basically jokes that a bunch of people get together and say, “There are 14 standards. We should have just one.” That’s how you end up with 15 standards. And so that’s true. But I’m still hopeful, right? You have to have communication to actually end up aligning on something. At the same point, we can see some important omissions from the Metaverse Standards Forum.

Most obviously Google and Apple; neither is participating in the forum. But I think more notable is the fact that there aren’t really other application- or content-layer members. Epic is in there and Meta’s in there, but Activision isn’t in there. EA’s not in there. Roblox isn’t in there, Ubisoft isn’t in there. And so what ends up happening is lots of technologists can agree on what the best protocol or tech standard will be, but if it’s not then deployed into application-layer services and content, it’s just a technical standard, right?

In some regard, it’s like Esperanto. We come up with better artificial languages all the time, but if they’re not actually adopted and deployed, it doesn’t matter. And so, again, I’m optimistic this is right. I actually think that the Metaverse Standards Forum has far more participants than I would have originally guessed, and they’re bona fide from top to bottom.

But no one has been asked yet to make a concession. No one has been asked to deploy a standard that doesn’t optimize for their system or that might advantage one of their competitors. And we don’t yet have an operating network. There’s also another point that many have made, which is that there wasn’t much on ethics, if anything, as to how those standards are going to manage the softer issues across different platforms rather than just technical interoperation.

But again, it was just announced. The hard work is yet to come.

BYE: Yeah, I was involved with the IEEE Global Initiative on the Ethics of Extended Reality, where we’ve been digging into some of those issues. But yeah, it’s a huge, huge issue. One other big point I wanted to get in — because I know we have limited time here — is that the anti-trust aspects of both Google and Apple seem to be really, really key. You know, there’s the lawsuit from Epic Games and Tim Sweeney going against Apple.

There’s a lot of information from discovery that was made available that’s in your book that helps paint the picture of some of the potentially anti-competitive dynamics of these duopolies, with both Google’s Android and Apple’s iPhone iOS, in terms of the 30% tax. It seems to me that there likely needs to be some sort of government intervention to break this up.

Otherwise we’re going to have the same type of thing with Meta, which seems to be wholeheartedly adopting the same thing that they’re critiquing — Zuckerberg’s critiquing Apple around the 30% tax, but then they’re turning around and doing the exact same thing, and then adding even more in terms of Horizon Worlds and their taxes. So it seems to me that in order to really have this open, interoperable Metaverse that even has the potential for an open web manifestation, we need to break apart this 30% tax that’s at the hardware layer.

Otherwise, we’re going to be living in this world where just a handful of companies control our digital future. So I’d love to hear some reflections on that, because that seems to be, for me at least, one of the big takeaways. I’m glad that you are articulating that, because I have similar frustrations with how Apple drags its feet with implementing WebXR, with not really great implementations of WebGL, and it all serves their own ecosystem.

But in order to break that apart, it feels like you need to have at least some level of government intervention.

BALL: So I wholeheartedly agree. I think it’s nice that you and I are speaking today on, what is it, July 5th, when the EU started to announce more of their digital markets reforms. And they’re coming pretty firmly for a lot of the concerns that we have, or you and I have, which generally seem to be shared by many in the developer community.

Epic Games, as one example. The challenge about the Metaverse is we’re talking about a virtual platform, a persistent network of experiences which exist irrespective of any execution, any hardware, any platform. And yet we have to access them through a platform, a hardware device. There are essentially two of them. And as for the others, they’re all contending to be the payment gateway.

Why wouldn’t you? Right? Visa’s one of the best businesses on Earth, and the Apple App Store is an even better one. And it’s not just that it increases payment fees, it’s that, as you’ve also astutely observed, they cripple competitive technologies, they stymie competitive business models. And we know that it’s ultimately transferring money from the pockets of independent creators to the largest and wealthiest companies on Earth.

At the end of the day, I think the challenge here is we’re actually talking about, in some regard, penalizing companies for having been so extraordinarily successful, for having built extraordinarily great products. Right? The Apple iPhone’s integration and verticalization is why the mobile era was so accelerated. We wouldn’t be where we are today without that device. And yet we can now start to feel that, as we shift to this next platform, it’s impeding us.

Sometimes, I think, you can very justifiably say it’s doing so in a maliciously, deliberately self-preferential and externally punitive way. But again, talking about the EU, we see evidence of regulatory action. And you can see it — like, I’m Canadian, and Tim Cook did not use to do interviews with the Toronto Star on privacy. I think that they’re doing that because they understand that they now need to win hearts and minds very differently to maintain their stewardship of the world’s most important computing platform.

But yeah, we need a lot of regulatory action. I’m hopeful there as well. I spent a lot of time at the end of the book saying that we’re disappointed with the last 15 years of digital regulation, but many of us, especially in the millennial generation, assume that that’s the pattern for regulators. And of course, we have political dysfunction that’s new today.

But through all of the 20th century, and in the 19th century, new technologies were very vigorously defended by regulators: telecommunications, energy, rail, steel. The government was usually pretty early. The Internet Engineering Task Force comes from the DoD. The Internet comes from the DoD. And the U.S. government had the foresight to relinquish control of those working groups and birds-of-a-feather sessions and standards bodies, understanding the criticality of doing so.

And so I’m hopeful, but the challenges are tough.

BYE: Yeah. Yeah. I guess the last question I have is: what do you think the ultimate potential of the Metaverse might be, and what might it be able to enable?

BALL: Well, so this is always a fun question because you have Jensen Huang, the founder and CEO of NVIDIA talking about the fact that he believes that the GDP of the Metaverse will essentially, eventually exceed that of the physical world. The physical world economy today is roughly $70 trillion. We have another $20 trillion that’s coming from digital. And so we’re looking at $50 – $60 trillion over time that might go to the Metaverse.

You can describe it differently as, say, billions of individual people, reaching almost every consumer, every country, every sector, globally. But the humanist perspective is, look, I believe the digital era has had a lot of bad things: dis- and misinformation, data rights, data security, the role of algorithms, toxicity, abuse, harassment, radicalization. But I think that technology is fundamentally agnostic. And overall, I believe that the Internet has been a profoundly good thing for the world, especially when it comes to the democratization of information.

And so I’m hopeful that the Metaverse will allow us to correct many of the problems of the last 15 years, and that we as developers, users, and consumers can positively affect the trajectory of the Metaverse, who leads it, with which philosophies, and why. But fundamentally, it will continue that transformation. I talk in the book about education being an area of extraordinary importance, but one which has been barely impacted by technology, especially when it comes to access to educational resources.

And so I’m hopeful that the Metaverse can really improve that, while also bringing more job opportunities to those who, unlike myself, are not born upper middle class in Canada, in a large city, with access to many of the best jobs in the world.

BYE: Awesome. Is there anything else that’s left unsaid that you’d like to say to the broader immersive community?

BALL: Well, well, Kent, let me ask you a question. What are you least certain about when it comes to the Metaverse?

BYE: Well, I mean, I really appreciated your elaboration on the blockchain, because, you know, the Peer-to-Peer Foundation came up with an Accounting for Planetary Survival white paper, and what they described was the difference between what they see as blockchain based upon libertarian values versus blockchain based upon commons-based, shared resources and values.

So every single compelling aspect of blockchain use [in your book] was around things like the RNDR token, or shared use of resources to do distributed computing. So I think there’s a real compelling use case for the future of distributed computing. But in terms of the libertarian value exchange that is doing rent-seeking behaviors and basically replicating the scarcity model of the existing economy, I’m not convinced that buying and selling virtual land plots is going to be the future.

I think it’s going to be much more experiential-based and more along the lines of what Tim Sweeney and Fortnite have been doing to create really vibrant ecosystems and economies. So I really appreciated that you elaborated on some of those different use cases of the blockchain, but for me at least, I think there are a lot of engineering flaws that I don’t know at this point can be overcome. Because there are things like Sybil attacks, and with the move over to proof of stake, you could have basically one whale that overtakes and controls everything.

So it’s not actually decentralized. So every functional utility that’s coming out of these decentralized systems has some sort of centralized point that I feel can be vulnerable to being taken over, replicating the existing power dynamics of the existing economy. So that’s my hot take in terms of where cryptocurrency is, but it’s actually not a matter of the technology [being] agnostic, it’s more the values that are underneath the technology.

So whether it’s being driven by a libertarian scarcity model that’s trying to do rent-seeking behavior, or it’s generating more communal, shared resources to create something that would not be possible as an individual. And I’m more excited about the Decentralized Web efforts from the Internet Archive and more of those technologies, the RNDR token and other things that are going to actually bring utility that is experiential, rather than being for speculation and creating backdoors for fraud and abuse and money laundering and all the stuff that are the challenges with that.

So when I’m reading your book, those are the things that I appreciated, that you’re articulating those perspectives, because I felt like my perspectives were being reflected in your book. But yeah, to me that’s the biggest open question in terms of that.

For me, I’m all in in terms of spatial computing. And I also appreciated the call-out to CTRL-Labs and the change from 2D to 3D, meaning moving from a keyboard and a mouse to user input that’s using more electromyography and wrist-based inputs, and neural inputs, noninvasive neural implants, and brain computer interfaces.

For me, the shift from 2D to 3D has to do with the types of embodied and spatial computing that come from completely new human computer interaction interfaces that seem so much like science fiction, being able to detect the firing of individual motor neurons, meaning that it can detect your intentions without you actually moving. And how that’s going to be translated into how we interface with computing, I think, is going to blow people’s minds.

So I appreciated that you included CTRL-Labs in there as well, because I do think that the ways that we interface with computing are going to be so radically different in the next 15 to 20 years. And just from what we’ve seen already in terms of the neuroscience, these new methods of interfacing with spatial computing that we’re both kind of identifying, I think, are going to be so revolutionary that once people really look into the neuroscience and the technology trends, it’s going to get really weird.

But yeah, the biggest questions are the centralization of power, having just a small handful of companies that are controlling everything, and privacy and the ethics are the big things — and NeuroRights are the other things. For me, that’s the thing where I don’t know how it’s all going to play out: if we’re actually going to come out the other side of this and figure out how to, as a collective, overcome some of the power law dynamics of having a small handful of companies that are controlling [a] disproportionate and asymmetrical [amount of] power.

And what the business models are actually going to be that go beyond surveillance capitalism, [which] seems to be fueling the existing methods. And if there are going to be ways for people to take more ownership of their data, but also business models that move completely away from that more extractive model of surveillance capitalism. So anyway, that’s a little bit more of my thoughts.

BALL: No, I’m — I’m totally aligned with you. I mean, look, the virtual scarcity model, virtual real estate, I don’t believe in. I want to reserve the right to change my mind, but I don’t see something there yet. It seems like, you know, we always start every new computing wave by trying to recreate the thing closest to reality. Right? The skeuomorphism, the Game Center on the iPhone, the yellow lined notepad in your iPhone Notes app.

This seems like the worst possible instantiation of trying to reproduce the real world, with more tools, virtually. But everything else that you said, I’m really aligned with, so I think we can leave it there.

The only other thing that I’d say is that the thing that excites me most about the advent of new hardware is that the accessibility improvements are extraordinary. The Xbox accessibility controller, what we’re starting to see with CTRL-Labs, and hopefully BCI can bring so many people into the digital era, not to mention virtual experiences, who simply cannot take part today. And I think that’s a really important good. We talk about the Internet and games bringing people together, but often forget how many people just can’t because of physical ability.

BYE: Well, Matthew, thank you so much for writing this book and taking all the time to detail all these things. I think you’ve provided a really nice mapping of where things are at now and where they could be going, with a lot of examples that give some insights into the deeper patterns and trends of this Metaverse whose structures and forms you’re helping to specify.

So thanks again for writing it all down in the book — I know it’s not easy to do that — and to join me here on the podcast to help unpack it all.

BALL: Thank you. It was my pleasure.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

open-bci-project-galea
I had a chance to drop by the Brooklyn office of brain-computer interface start-up OpenBCI in order to get a hands-on demo of Project Galea, which integrates a range of different biometric and physiological sensors such as EOG, EMG, EDA, and PPG sensors, in addition to 10 EEG channels and eye tracking, into a single VR headset. I previously spoke to CEO Conor Russomanno about Project Galea in 2021 and before that about OpenBCI in 2016.

It’s still early days in terms of what the real power and potential of fusing together many different biometric and physiological data streams will be, especially as most of the demos that OpenBCI has developed only use a couple of sensors at a time, and nothing yet combines the raw data streams from multiple types of sensors in a novel way. But access to time-synchronized data across multiple data streams will likely open up lots of new experiments and data fusion insights, since it has otherwise been difficult to combine this many physiological data streams.
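To make that data-fusion point a bit more concrete, here is a minimal sketch of what aligning time-synchronized streams could look like, assuming each sensor stream has already been exported as a timestamped table. The column names and sampling rates below are illustrative assumptions on my part, not OpenBCI’s actual data format or API.

```python
# Minimal sketch: aligning hypothetical timestamped sensor exports onto a
# common clock with pandas. Column names and rates are illustrative only.
import pandas as pd

# Hypothetical per-sensor exports, each with a 'time_s' column in seconds.
eeg = pd.DataFrame({"time_s": [0.000, 0.004, 0.008],                 # ~250 Hz EEG
                    "eeg_ch1_uv": [12.1, 10.4, 11.7]})
emg = pd.DataFrame({"time_s": [0.000, 0.002, 0.004, 0.006, 0.008],   # ~500 Hz EMG
                    "emg_rms": [0.03, 0.05, 0.04, 0.09, 0.07]})
ppg = pd.DataFrame({"time_s": [0.00, 0.01],                          # ~100 Hz PPG
                    "ppg_raw": [2048, 2051]})

# Use the highest-rate stream as the master clock and attach the most recent
# sample from each slower stream (nearest-earlier match within 20 ms).
fused = emg.sort_values("time_s")
for other in (eeg, ppg):
    fused = pd.merge_asof(fused, other.sort_values("time_s"),
                          on="time_s", direction="backward",
                          tolerance=0.020)

print(fused)  # one row per EMG sample, with aligned EEG and PPG values
```

Once the streams share a common timeline like this, cross-signal features (say, EMG bursts that coincide with EDA spikes) become simple column operations rather than a bespoke synchronization problem.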

The EMG sensors on the face can be used as a real-time neural input control, which was the most notable sensor from an experiential perspective. The other data is harder to get an intuitive sense about, although I did quite enjoy their Synesthesia app, which translates brain wave frequencies into colors and tones within an immersive environment, providing lots of super immersive, multi-modal biofeedback for signals that are otherwise difficult to develop a feel for.
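As a rough illustration of the kind of mapping the Synesthesia demo describes, here is a minimal sketch that estimates relative alpha-band power from a short EEG window and maps it to a color hue and a tone frequency. The band boundaries (8-12 Hz alpha, 13-30 Hz beta), the sampling rate, and the hue/pitch ranges are my own assumptions for illustration, not OpenBCI’s implementation.

```python
# Minimal sketch: EEG band power -> color hue and audio pitch.
# The mapping constants below are illustrative assumptions, not OpenBCI's.
import numpy as np

FS = 250  # assumed sampling rate in Hz

def band_power(window_uv: np.ndarray, lo_hz: float, hi_hz: float) -> float:
    """Mean spectral power of one EEG channel within [lo_hz, hi_hz)."""
    freqs = np.fft.rfftfreq(len(window_uv), d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(window_uv)) ** 2
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(spectrum[mask].mean())

def to_feedback(window_uv: np.ndarray) -> tuple[float, float]:
    """Map relative alpha power to a hue (0-240 degrees) and a tone (220-880 Hz)."""
    alpha = band_power(window_uv, 8, 12)
    beta = band_power(window_uv, 13, 30)
    ratio = alpha / (alpha + beta + 1e-12)    # 0..1, higher = more alpha-dominant
    hue = 240.0 * ratio                       # red-ish (0) toward blue-ish (240)
    tone_hz = 220.0 * (2.0 ** (2.0 * ratio))  # up to two octaves above 220 Hz
    return hue, tone_hz

# Example: one second of synthetic EEG dominated by a 10 Hz alpha rhythm.
t = np.arange(FS) / FS
fake_eeg = 20 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS)
print(to_feedback(fake_eeg))
```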

I had a chance to catch up with Joseph Artuso, OpenBCI’s Chief Commercial Officer in charge of partnerships and commercialization, as well as with co-founder and CEO Russomanno, about the pre-sales that started on May 31, 2022 for Project Galea with hardware partner Varjo. We talked about the development process for Galea, some of the target markets of academia, the XR industry, and game developers (the price will be well beyond the price range of consumers), some of the possible use cases so far, and an update on their collaboration with Valve that was first reported by Matthew Olson in The Information.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Tribeca-Panel-Making-a-Difference-with-Non-Fiction-Immersive-Stories

I moderated a Tribeca Talks panel discussion on “Making a Difference with Immersive Non-Fiction Stories” on Sunday, June 12, 2022, where we focused on the structure and forms of immersive storytelling, and then reflected on the challenges and opportunities for creating and distributing immersive stories for social good.
The panel featured the following panelists:

  • Meghna Singh (co-creator of Container) [See Voices of VR Episode #1005 on Container from Venice 2021]. She’s a researcher and visual artist based in South Africa, and a visual anthropologist with a focus on migration and immersive arts.
  • Ingrid Kopp, who is based in South Africa and runs a non-profit organization called Electric South that works across Africa to help develop, produce, and distribute VR & AR.
  • Brenda Longfellow (co-creator of Intravene), a documentary filmmaker out of Toronto who works across linear film and interactive docs and does co-creation with communities. Intravene is an immersive audio piece set in an overdose prevention site in Vancouver, created in collaboration with Darkfield and Crackdown.
  • Glen Neath (co-creator of Intravene), co-founder of Darkfield, which makes binaural audio pieces for shipping containers and has been making shows for the Darkfield Radio app since the pandemic.
  • Charlie Park (producer of Please, Believe Me), who has a background in sculpture and film and has been working for Emblematic Group making VR & AR content since 2017.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

ReachYou
ReachYou is an AR story that receives transmissions from the future in order to connect to the tenderness of the now. It’s a slow media project that aims to create a portal into a story world, allowing the audience to access emotional states of deep reflection and contemplation on topics like grief, what’s worth preserving on the human record, and the full spectrum of what it means to be human.

I had a chance to sit down with husband-and-wife team Jonah Goldsaito (creative technologist and visual artist) and Katrina Goldsaito (writer and performance artist) to talk about their journey and process of creating ReachYou. The ReachYou AR app is available to download from ReachYou.Space, where the creators are planning to push out more transmissions in the future. I’d recommend experiencing their first transmission, and then tuning into our conversation to get more context for how this project came about and where it might be going in the future.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

planet-city-vr
Liam Young is a director and speculative architect who is designing regenerative futures via provocative thought experiments. Planet City VR is the provocation of centralizing all 10 billion humans on the planet into a city the size of Texas so that the rest of the world could go back to seed and serve to counter climate change. It’s not a literal proposal, but more of a futurist, worldbuilding design prompt that pushes the cutting edge of regenerative science, architecture, culture, and anthropology up to or beyond the edge of what’s possible. Planet City is a multi-media project: there is a Planet City film, a Planet City book, a Planet City TED Talk, and now a Planet City VR experience that premiered at Tribeca Immersive 2022.

I had a chance to unpack the Planet City VR provocation with Young at Tribeca to hear more about his intention with the project, collaboration with science fiction authors, scientists, anthropologists, and artists, his trip around the world to paradigmatic examples of renewable energy at scale, and how this project allows us to reflect on who we want to be in the future and how technology changes us.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Liam Young: Planet City — a sci-fi vision of an astonishing regenerative future | TED

Trailer for Planet City

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

LGBTQ-Plus-Museum
The LGBTQ+ VR Museum won the inaugural Tribeca Immersive New Voices Award for curating a number of really touching virtual objects, art, and stories from LGBTQ+ people. Antonia Forester started the project after discovering that there was not a physical LGBTQ+ museum that she could visit in the UK, and so she turned to her skills in VR development, production, and direction to create the LGBTQ+ VR Museum in collaboration with VR developer Thomas Terkildsen.

I spoke with Forester at Tribeca to get more context about her journey into VR, the process of creating and curating the museum, and some of the emotional reactions she was receiving at Tribeca. It takes anywhere between 30-60 minutes to see everything in the museum, and I found that there were some objects and stories that really hit me emotionally.

Overall, I left the experience with a much better understanding of various themes of identity, shame, acceptance, rebellion, and hope, but also of normal human experiences that went beyond the LGBTQ+ specific themes. Forester allowed each LGBTQ+ subject to select any object or theme they’d like without any restrictions, and so the end result reflected the full breadth of human experience and hopefully allowed visitors to find some stories they could identify with and relate to regardless of their own identity. I found the LGBTQ+ VR Museum to be a very emotionally-evocative experience that shows the power and potential of designing virtual spaces filled with individual stories that can tell a larger collective story that goes beyond any one singular narrative.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality