Philip Rosedale has an ambitious vision for how to create a sustainable and open metaverse with High Fidelity. As the founder of Second Life, he talked with me last year about some of the lessons learned there and how he's applying them to create a more sustainable and scalable model of interconnected virtual worlds.
This year his focus was on how important it will be to create hyperlinks between virtual worlds. He says that the history of the Internet provides a valuable lesson in the fate of the walled-garden platforms AOL and CompuServe. Even though the content on those platforms was of much higher quality, over time the links between other, more rudimentary and less polished websites ultimately won out. Philip cites Metcalfe's law, which states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. As each virtual world links off to portals into other virtual worlds, that virtual world becomes that much more valuable and compelling.
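Metcalfe's law is easy to make concrete: the number of possible pairwise links among n connected worlds is n(n-1)/2, so value grows roughly as the square of n. A quick illustrative sketch:

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise links among n connected nodes.

    Metcalfe's law: the value of a network grows roughly as n^2,
    because that's how the count of possible connections grows.
    """
    return n * (n - 1) // 2

# Doubling the number of interconnected worlds roughly quadruples
# the number of possible links between them.
for n in (10, 20, 40):
    print(n, metcalfe_value(n))
```

Going from 10 to 40 worlds multiplies the possible links from 45 to 780, which is why every new portal makes each existing world more valuable.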
This insight is at the core of High Fidelity's approach of releasing open source-licensed technology so that people can stand up their own hosted servers running the code for their virtual worlds. In fact, part of Philip's long-term vision is to harness the distributed computing power of the world's many desktop computers and mobile phones to create a virtual world equivalent in square footage to the entire Earth, one that could concurrently host virtual experiences for all 7 billion people on the planet. Building the technological backbone to make that happen is what keeps Philip going to work every day.
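Rosedale's earth-scale claim can be sanity-checked with back-of-envelope arithmetic. The PC count below is the midpoint of the range he cites later in the interview (half a billion to a billion broadband-connected machines); the average uplink speed per machine is my own assumption:

```python
# All figures are rough estimates, not measurements.
desktop_pcs = 750e6          # midpoint of Rosedale's 0.5-1 billion broadband PCs
people = 7e9                 # world population he cites
uplink_mbps_per_pc = 20      # assumed average broadband speed per machine

aggregate_mbps = desktop_pcs * uplink_mbps_per_pc
per_person_mbps = aggregate_mbps / people
print(f"{per_person_mbps:.2f} Mbps per person")
```

Under these assumptions, the pooled machines could serve everyone on Earth at roughly 2 Mbps, consistent with the "couple of megabits per second" figure quoted in the transcript.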
Currently, linking between virtual worlds is a bit of an open problem; it's unclear how to pull it off in a seamless fashion. Text on a website can unobtrusively link to other websites, and there are metadata cues that add contextual information about where a link will lead. There are no equivalent standards within a VR environment full of 3D objects, but the closest metaphor is using a door, a portal, or a completely different building to navigate between virtual worlds. There are potential perceptual hacks that could be exploited, but Philip cautions that there may be hard physiological limits on how we can navigate virtual worlds, the equivalent of the disorienting effects of feeding contradictory information to our perceptual system and thereby causing simulator sickness.
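One way to imagine what link metadata could look like for a 3D object is to model it after a web anchor's href, title, and preview. The sketch below is purely hypothetical; the field names and the `hifi://` address scheme are illustrative, not drawn from High Fidelity's actual API:

```python
from dataclasses import dataclass

@dataclass
class Portal:
    """Hypothetical 'hyperlink' metadata attached to a 3D object,
    mirroring what a web anchor tag carries (href, title, preview)."""
    destination: str    # address of the target world (illustrative scheme)
    label: str          # human-readable name shown when interrogated
    preview_image: str  # thumbnail a client could show on hover/gesture
    trigger: str        # "walk_through" (doorway) or "activate" (click)

# A doorway that teleports anyone who walks through it.
door = Portal(
    destination="hifi://example-world/entrance",
    label="Example World",
    preview_image="https://example.com/entrance-thumb.png",
    trigger="walk_through",
)
print(door.label, "->", door.destination)
```

A client could use the `label` and `preview_image` fields to answer the "where does this door lead?" question discussed in the interview, without visually defacing the world itself.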
Philip was also really excited to have created some shared physics simulations that enable games like air hockey to be played in VR. This adds a level of physical reality that contributes to the coherence of the virtual experience, and it also provides a lot of opportunities for engaging in fun and playful activities with other people within High Fidelity environments. A common theme amongst social VR applications, from AltSpaceVR to Convrge to VR Chat, is that all of them have been adding more social gaming experiences to their apps.
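To see why a shared, authoritative physics simulation matters for something like air hockey, here is a minimal single-puck sketch: one authority steps the state each tick, and in a real system it would broadcast the result so every player sees the identical bounce. This is an illustration only, not High Fidelity's engine:

```python
from dataclasses import dataclass

@dataclass
class PuckState:
    """Authoritative state of an air-hockey puck on a unit table."""
    x: float
    y: float
    vx: float
    vy: float

def step(state: PuckState, dt: float) -> PuckState:
    """Advance the puck one tick, bouncing elastically off the walls.

    In a shared simulation, only the authority runs this and then
    broadcasts the new state, so all clients stay in agreement.
    """
    x = state.x + state.vx * dt
    y = state.y + state.vy * dt
    vx, vy = state.vx, state.vy
    if not 0.0 <= x <= 1.0:   # bounce off the left/right walls
        vx, x = -vx, min(max(x, 0.0), 1.0)
    if not 0.0 <= y <= 1.0:   # bounce off the near/far walls
        vy, y = -vy, min(max(y, 0.0), 1.0)
    return PuckState(x, y, vx, vy)

puck = PuckState(x=0.9, y=0.5, vx=0.3, vy=0.0)
puck = step(puck, dt=1.0)  # puck reaches the right wall and rebounds
```

The contrast Rosedale draws is with purely local simulations, where each client integrates the physics independently and players quickly see divergent worlds.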
If I were to bet on who has the most viable and sustainable approach to creating the metaverse, my money would be on High Fidelity's strategy and open source technology stack. I'm really excited to see how Philip Rosedale and the rest of High Fidelity continue to evolve over time.
Become a Patron! Support The Voices of VR Podcast Patreon
Theme music: “Fatality” by Tigoolio
Subscribe to the Voices of VR podcast.
First Contact! from High Fidelity on Vimeo.
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast.
[00:00:11.955] Philip Rosedale: I'm Philip Rosedale, and I'm one of the three co-founders of High Fidelity. And prior to that, I'm the founder of Second Life. And so I love this stuff, and I've been doing it all my life. In terms of stuff we're excited about, there's a lot. I have so much to show in my demo tomorrow. I hope I get through it all. If even a good portion of it works, it'll be a lot of fun, I think, for everybody. We've made a lot of advances in shared physics systems, which is a really important part, I think, of virtual worlds, where you can see things moving and interacting in a way that's believable and compelling. You know, I've always been a believer that physical interaction is the language that we understand, and so virtual worlds, you know, games that make that work correctly are great for us. They're things we love, but it's super hard to design the physics systems in a way that is both generalized, so you can build almost anything with them, and fast, and looks good, feels good to people. And we've made a lot of progress there, so I've got some stuff to show about that. We've also gotten our alpha systems up and running so people can deploy their own servers, and there are a number of servers that have gone up now, and we'll take a look at them. You know, we've got the ability to hyperlink between virtual worlds, which is something that I think is going to be a key piece of how the metaverse gets built, you know, how these generalized systems work. Another thing we've got going now from the last time we talked is tracking your facial expressions just using a 2D camera. That's something we've been working on for a while, and what it means, now that we have it, is that even if you're on a PC, just a standard PC with a camera on it, we're going to track your head movement and your body movement in the same way that somebody with a head-mounted display is seen as moving.
So now people that are just on their PC or are wearing an HMD that are in the collaborative space together, they're all physically animated, they're all moving. And so that's another exciting thing I'll be showing a little bit of tomorrow.
[00:02:10.534] Kent Bye: And is that using some of the research of Hao Li in terms of doing some of the statistical regression database of just being able to see a little bit of the face and being able to extrapolate that based upon very little or occluded information?
[00:02:24.933] Philip Rosedale: Yes, the technique that we're using for the 2D camera is a regression-based approach. It's from a different research team than Hao Li's, but we're well apprised of his work. I love his work and have been talking to him about all this stuff. We're actually using a different researcher's work; he's another guy who has presented at SIGGRAPH multiple times and will be there again this year. His name is Kun Zhou.
[00:02:47.198] Kent Bye: Yeah, I think one of the most exciting things for me about what High Fidelity is doing is the whole open source approach of creating Apache 2.0-licensed content, so that whether or not High Fidelity survives, at least you'll leave a legacy of code for people to continue to build upon. I suspect that it's pretty compelling to model it off of the decentralized nature of the internet, and when we do think about the metaverse, that does seem a lot more viable and compelling than a walled garden approach. So in the last year, what kind of uptake have you seen, or what kind of developments have you had, in building out this infrastructure for creating these virtual worlds that are interconnected with each other?
[00:03:29.791] Philip Rosedale: Well, we've gotten to have our first people putting up their own open source software and then interconnecting with each other. So it's been fun. We've been able to actually start working on this sort of hyperlinking problem, or the portal problem, between spaces. I think that hyperlinks between virtual spaces will absolutely be one of the key things that we're all going to be amazed at, looking back. If you look at the internet in its early days, there were great walled gardens for online content at the time that the internet sort of came around. Those were AOL and CompuServe, and they were great. They offered a rich set of features, much richer than the early internet, in fact. But the early internet had hyperlinks, and all web pages were the same enough that you could put a hyperlink between them. The result of that was this concept which is, I think, known as Metcalfe's law, which is that the utility of a collection of objects, if they're connected together by wires, basically by hyperlinks, increases as the square of the number of objects that are in the network. What that meant in the days of the early internet was that the webpage content that had hyperlinks radically outstripped things like AOL and CompuServe in its utility. And I bet we're going to see the same thing here, where whoever is able to build a system, and of course this is very much what we're trying to do at High Fidelity, that enables hyperlinks between these virtual world or 3D experiences, where there is both the availability of a hyperlink and a rich enough experience when you get to the place that it's fun and compelling and useful to you. Whoever does that will eclipse everything else that is happening at the time, in the same way that the internet eclipsed all the data retrieval systems that existed at the time that it really became a consumer phenomenon, you know, in about 1995.
[00:05:17.916] Kent Bye: And with text, you have normal text, and then the hyperlink is sort of underlined text. And there was quite a lot of metadata there in terms of trying to describe what the link was; you kind of highlight the words that describe where you're going. In a virtual world, you basically have objects that have little to no extra metadata contextualizing where you may be headed. You may have a door, but is there a photo on the door? What is the indicator to logically set the context in terms of where you're going to be going from one room to the next?
[00:05:52.331] Philip Rosedale: Well, I think that's going to be fun to design, because just like you said, there's going to have to be some sort of almost out-of-band signaling mechanism that shows you what a hyperlink is, where you're in a world that visually wants to be completely under the control of the person who created it. So you can't underline a chair or a door. But maybe you can have something where, with a particular gesture or emotion, you as a user of the system are able to see where that path leads, basically. And I think it'll be something like that. There's still a really rich design exercise around what the essence of the hyperlink in 3D is: what will be the simplest way both to implement it as a content creator and then to render it to a user.
[00:06:33.273] Kent Bye: Yeah, I mean, two things that come to mind immediately are a doorway or an elevator, where you can go into this space, it's sort of a neutral place, and then you come out and you're there in a new world. I guess with the doorway you almost have to render the other room as you're walking through it. But yeah, I don't know, what type of things have you guys experimented with in terms of starting to play with this idea of going through a portal or a doorway to another world?
[00:06:56.961] Philip Rosedale: Well, certainly I think a doorway is a very rich symbol to us as humans, and it could be what we see the most of. But there's two ways to pass through a doorway, or anything in 3D. One is to click on it, and of course "click" is going to get redefined, because we're not always going to be clicking anymore; we're going to be using, for example, these hand controllers that have some concept of point-and-shoot or something. So I think there's an issue around exactly what a left-click even is in 3D. But by acting on or clicking on something, you could mean "I want to go there," or you could mean it by moving through the actual doorway itself. We've implemented both of those things, and experimentally, certainly, passing through a space and then instantly being sent to another space is very compelling. It's very fitting; it makes sense to us. But I think also that, even more simplistically, just clicking on something, or just saying "do that thing," may work. And then, as I said, I have a suspicion that there will be a kind of a way of interrogating the world around you that'll become a standard. In much the same way as with the web, we have the idea that the left click is tied to the page you're on and whatever it may do, which could be very sophisticated, but the right click, almost always with the web, means I want to bring up my own menu, which is outside the context of the experience I'm having. I bet you that there's going to be a similar behavior in virtual reality, where there's going to be some action we come to take as meaning I want to know where that goes, or whatever. You know, maybe this chair right here glows, and I click on it, and I see a little something telling me where that's going to go if I actually click with my left finger or whatever on it.
[00:08:36.591] Kent Bye: Yeah, and I wonder if there are limits of our own perceptual system that we can exploit, like change blindness, for example, where you turn your head and there's suddenly something new that appears. I've done some of my own experimentation with using change blindness as a locomotion technique: you look down at a map and have three rooms to choose from, and when you choose one of the rooms, it automatically switches you. By the time you look up, it turns out to be this kind of virtual change blindness: you're looking down, you switch, and then you look up and everything's different. And the mind, I think, just accepts it. So there could be other perceptual hacks to be done in order to make it feel more fluid to go from one place to another.
[00:09:19.973] Philip Rosedale: I like that idea so much, I think I'll play with it with our own stuff. I can imagine doing that in High Fidelity: you'll have an object, and you can have an action on the object, like a click meaning teleport, meaning go through a portal. I like what you're saying; it would be a simple modification to our code to make it so that the object that you clicked on to invoke the action of teleporting persisted while the world changed around it. And I bet that would achieve the same effect you had. And I like that idea, and I'll try it, and I'll tell you what we find. But I think you're right. Fixation of the eyes on an object creates a kind of a, you know, a state where you can move things that are outside that fixation point without notice. And then when you look back up, you perceive them. So yeah, it sounds very interesting.
[00:09:59.117] Kent Bye: Yeah, I did that for my VR Jam entry called Crossover, to be able to have a Sleep No More-style, multi-threaded narrative where you go between different rooms, so you kind of look down. It worked pretty well. I also just think of the idea of going into a building (we're now in a building; we walked from one building to another in the course of this interview), and maybe the building is representative of this other world. So do you go out to the street to then decide which building to go into? And then maybe when you go back out, that building has other streets or other ways for you to go to other worlds. That's a very physical metaphor, but I imagine that there may be more elegant solutions that only work in virtual reality and can't actually work in physical reality.
[00:10:43.110] Philip Rosedale: Yeah, we're starting to learn about this in the brain; there are some interesting articles lately you may have read about place cells. One of our advisors, Adam Gazzaley, was sending me something about this the other day. Place cells in the brain are used to actually physically map a three-dimensional space. So you actually have cells in your brain which are arranged in a grid, which are activated as you move through a physical space, which is fascinating. This is something that there hasn't been a lot of research on, and I think it'll be interesting to see; maybe there'll be some rules we can't break. Like, as you said, if you went outside of a door and then you made a navigational decision and then you went back in the same door and it was a different room, that might actually be problematic for us because of the use of these place cells by the brain. I think many things in VR that we're going to have to figure out are things that accommodate our weaknesses as human minds. I mean, we are very capable, but we also have some very strong systems in place that we can't violate. You know, I mean, obviously this is why, at the simplest level, nausea is such an issue in simulation systems. We have to present people with stimulus patterns that they have seen before. And if you violate that, you generate nausea. So I think we may have interesting problems around navigational design and hyperlinking and interaction that depend on that stuff.
[00:11:57.299] Kent Bye: And are you going to be showing any other exciting things tomorrow in your demo? Is OAuth going to be a thing that's going to be too far in the weeds, or is that something that is actually going to have something visual to show in terms of some of the integrations you've done with Identity and OAuth?
[00:12:11.882] Philip Rosedale: Oh, OAuth, well that's just all working. I mean, yeah, that's how you log in, and if you're running another server, that's, you know, what happens when you go to somebody else's. No, I mean, I'll be touching on a number of different things. I think it'll be, hopefully it'll be fun to see. But I think some of the work we're doing with physics, the ease with which somebody can launch their own server, is another thing more at the systems layer that I'm going to be demonstrating. You know, how easy it is to basically deploy, and then install content on a server of your own that you've just downloaded from our site. That's one of the things we're going to demonstrate, because I'm going to hopefully, with a little luck, I'm going to do it right there at the podium on my own machine.
[00:12:46.859] Kent Bye: As you were talking about some of the different art styles, I found it a little ironic that you're called High Fidelity, and yet some of the art styles that you're using are actually low-poly, low-fidelity. I've definitely found that the low-poly, low-fidelity aesthetic is actually more comfortable as a VR experience. So I'm just curious how you see trying to stay out of this uncanny valley, where going for high fidelity can actually make a VR experience less comfortable because it doesn't quite meet our expectations of actual reality. So you have to stylize it a little bit to make it a little bit more believable.
[00:13:22.875] Philip Rosedale: Yeah, I mean, that's a huge topic and a rich one. We're obsessed with the sort of sense of belief or presence, or, I was using the word earlier, honesty. I think there's some set of decisions we can make around design, or that creators of content in High Fidelity will end up making by sort of market pressures, that will result in the right trade-off between technology as we have it today and the nature of the content that is in these virtual worlds. And I think that, yeah, you know, if you take a particular channel of input to the brain, like visual verisimilitude or texturing, you can always define high fidelity to be something that is beyond the capabilities of the system when combined with other things. But I think there's also this idea of believability, or the sort of belief that one has in the content they see before them and the people they see before them, across the whole board of interaction, you know: rendering, physical contact, acoustic properties. And I think in that regard, at least for me, the name High Fidelity means kind of getting to the point where it's real enough, you know, where it's real enough for us to believe, and so you have to make this rich set of trade-offs. You know, you can put 2K by 2K textures on everything, but do those pixels mean anything? Do they have any deeper meaning beyond the pixels themselves? They don't if they're just a texture that's folded over a cube. But if you dial it back a little bit and you have a situation where every vertex corresponds to a different physical object, well, that has a kind of a fidelity or a realism to it that it didn't before. So again, I think the design exploration of this is totally uncertain. I mean, there's going to be a lot yet to be learned. And the benefits to those who figure it out early are going to be large.
[00:15:04.598] Kent Bye: Is there any content that's been created by some of the alpha and beta users that is out there that's been surprising or delightful for you?
[00:15:12.101] Philip Rosedale: Well, I've seen some very hard-line structures that have been simplistic in their treatment, like some houses and forests and things like that that people have done, that have been very powerful. It's still early. I think there's also the idea of embedded context. Like, there's a guy who does live music, and he's got a stage, and that's where he plays the live music, and he's got instruments, and it's a very compact space, very small. And the knowledge of, you know, having seen a video or having seen people standing crowded in that space creates a context that is remarkable and very appealing. So I think, again, there's a lot yet to be learned around what combinations of different types of content and constructions work. You know, I think scripting is going to be this huge thing, where it's really about interaction, and, you know, what interaction can you have with every pixel, so to speak, that you see in a scene. We're going to learn a lot from that. But, you know, for us, it's still early. I mean, we only put our alpha stuff up, intentionally with little or no press, like a month ago. So we're still watching people start to play with it.
[00:16:09.825] Kent Bye: Yeah, and what I've heard other social VR apps talk about is starting to implement games. So having physics-based interactions and having all sorts of different things to actually do with each other seems like it could be a really big deal in terms of how people interact or play with each other in these spaces.
[00:16:27.049] Philip Rosedale: I mean, the basic set of physics stuff that I'm going to show tomorrow is scripted objects that can have a script and, in a very, very believable, 60-frames-per-second way, engage in physical interactions with each other for multiple people. And that is a big deal; there's a huge difference between that and a local simulation that nobody else is seeing the same way you are. With the basic capabilities that we have in that regard, I just sit there and think that if I had 24 hours a day to goof around and build little toys with this, to just explore what you can do, you know, it would be nothing compared to the depth of things that can be done. So I'm very excited to get our system stable enough to have people start to do that experimentation, you know, get some people building the JavaScript and playing with different ideas. And we're basically trying to create a tool set of basic ideas that will at least give people enough of an overview of what the features are that, as a developer, you'll play with something like a gun or something in High Fidelity and say, okay, I understand, let me go and see what I can do. It's great.
[00:17:24.009] Kent Bye: There's something about physics that brings that sense of cohesiveness and coherence that makes it plausible, and I think that there's something just really compelling about it. And finally, what do you see as the ultimate potential for virtual reality and what it might be able to enable?
[00:17:39.119] Philip Rosedale: Well, if you think about it, I'll put it in a framework. I mean, we have deployed a system at this point that allows you to take a machine at home and deploy it as a virtual space. Today, there are somewhere between half a billion and a billion desktop PCs of considerable power that are connected to broadband internet connections worldwide, and there are 7 billion or so people on the planet. If we used the desktop machines that we have today to run software, something like High Fidelity, and create this hyperlinked, shared virtual space, that virtual space would be comparable in size to the surface of the Earth. There's a lot of ways to calculate this, but it would certainly be enormous. And it would have sufficient capacity in its network, taken together as a decentralized system, to provide concurrent access at, say, a couple of megabits per second to everyone on Earth. And that's true today. That's true today, if we can build a technology that enables us to use all our machines, not just hosted server machines, but all our machines, which are about a thousand times more connected devices than there are server machines available in co-location facilities. So what I'm excited about is just that staggering thought: if anybody can get something to work that's, you know, an open, decentralized system, which it would obviously have to be to operate at that scale, then there is the potential that we could all be hanging around in a virtual space that was incredibly compelling to experience, that was literally that size, and that was big enough for all of us to be in it at once. That's the thought that, you know, really keeps me working every day. Awesome. Well, thank you so much.
[00:19:14.223] Kent Bye: Great. Thank you. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash voicesofvr.