#376: Building the Metaverse on Internet-Scale with High Fidelity

Philip Rosedale has been working with virtual worlds for a long time, having founded Second Life back in 2003, nearly 13 years ago. He co-founded High Fidelity in 2013, and he previously told me that he wanted to fix a lot of the things that were holding Second Life back. One big limitation was having all of the virtual worlds hosted on Linden Lab's servers, and so Philip wanted to flip that model on its head by going with an open-source approach. Users are able to host their own virtual spaces in a more scalable fashion, like the Internet, and High Fidelity also has plans to leverage the idle GPU power of millions of machines to help render a high-resolution metaverse.

I’ve spoken to Philip at SVVR 2014 & 2015, and I had a chance to catch up with him again at SVVR 2016, where High Fidelity announced beta access to their Sandbox client. We talk about physics in VR, body tracking, 3D audio & social presence, experiments with SMI eye tracking, scaling the metaverse, avatar continuity for group collaboration, and the future of using VR as a neutral meeting ground to interface with AI robots.


Here’s the Live Demo that Philip Rosedale gave at SVVR 2016

You can download the Sandbox client from here.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Support Voices of VR


Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So back on June 23rd of 2003, Philip Rosedale and Linden Lab launched Second Life, and so their 13-year anniversary is coming up. But in 2008, Philip left, did some other projects for a while, and eventually started what he really wanted to create with the metaverse, which is High Fidelity. So back in 2014 at the Silicon Valley Virtual Reality Conference, I had a chance to catch up with Philip to talk about a lot of his visions of specific things that he was trying to do differently from all the different lessons that he learned from doing Second Life. And so High Fidelity to me has really got this comprehensive vision in terms of how to create a distributed network that is able to scale the metaverse up to the level of the internet. And he's really taking an open approach, which is in contrast to Linden Lab's Project Sansar and a lot of other approaches to creating the metaverse, which are a lot more like closed walled gardens. And so Philip is proposing a whole technology stack in order to create an open metaverse. And so we'll be talking about his vision and some of his latest thoughts about what they just launched at the Silicon Valley Virtual Reality Conference in 2016, and some of his other ideas about AI and body language and some of the other more subtle, nuanced components of what makes a really good social VR experience. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Unity. Unity has created a tool set that has essentially democratized game development and created this lingua franca of immersive technologies, allowing you to write it once in Unity and be able to deploy it to any of the virtual or augmented reality platforms. Over 90% of the virtual reality applications use Unity.
To learn more, check out Unity at Unity3D.com. So this interview happened at the Silicon Valley Virtual Reality Conference at the San Jose Convention Center in March, and this is a little bit of a ritual that Philip and I have gotten into, checking in on his latest thoughts about the future of the metaverse. So with that, let's go ahead and dive right in.

[00:02:32.611] Philip Rosedale: I'm Philip Rosedale, and I am the co-founder of High Fidelity. I'm also the former founder of Second Life. And what we're doing in VR is building an open source platform for shared VR applications.

[00:02:47.232] Kent Bye: Great, so I had a chance to talk to you two years ago at SVVR and also last year, and so what are some of the big things that you're launching this year at the SVVR Conference and Expo?

[00:02:56.976] Philip Rosedale: Yeah, today actually we're making our beta launch of High Fidelity. What it's called is Sandbox, and it's a single download that you can get for Mac and Windows on our site right now. The Sandbox allows you to put up your own server and invite people to come to it immediately, and on that server you can talk to people, communicate, use your hands, build things, do things that are physical with physics, and pretty much start building anything you like. It's intended for early adopters and content developers to start putting up shared servers. And we hope that the capability that it's providing right now is something like Unity has for single-player VR experiences, that it should be a platform where people will very rapidly be able to try out ideas. You know, if you wanted to make a beach ball game or something and put a bunch of people in there and try it out, this is something that you'd be able to do in a couple hours.

[00:03:49.541] Kent Bye: And so are you really trying to optimize for world building while you're in VR, or is it better to build in high fidelity on a 2D screen?

[00:03:57.292] Philip Rosedale: That's a great question. I mean, the 2D tools for building 3D content are highly evolved. I think in the long term there's no question but that it's going to be 3D building. If you look at things like Tilt Brush, Medium, all the different experiments that are being done on object manipulation with hand controllers, you absolutely can see without question that in the long term we're going to use hands and heads in the environments to actually build things. There's no question in my mind: there are 18 degrees of freedom there, and only two with a mouse. Long term, people are going to build that way. In the short term, though, people are going to import things that they're already working on in a 2D editor directly in. And so, for example, we provide direct OBJ and FBX support. So if you want to use Maya to edit a complex scene, you're going to do that editing in Maya, and then you're going to drop it into High Fidelity and maybe move it around and position it a little bit there. So I think the complicated part of the building in the near term is going to be done in 2D. And the positioning and equipping and some of the other parts of it will be done in 3D. And that's what we're trying to provide support for.

[00:04:57.538] Kent Bye: And so are you expecting that people are going to start building vast landscapes? Or do you foresee a little bit more building interactive, highly dynamic environments that have interesting physics going on or games? Where do you see this going to start off?

[00:05:12.428] Philip Rosedale: Well, the very fact that these hand controllers are coming out, you know, the Vive controllers are out, the Oculus controllers will hopefully come out later this year. Those hand controllers are so fun that I think there's going to be this sort of Cambrian explosion of experimentation around highly dynamic, complex interactive objects that are used with the hands and examined with the head, basically. That said, again, the big picture around VR, around shared spaces, around the metaverse, is definitely that of being able to create vast connected spaces, infinite terrain, I think as John Carmack said a year ago or so. And so we're trying to design a system that can do both. I think we'll see people putting up kind of room scale experiences in the near term, I hope, that are gonna be very interesting. I mean, that's where I hope we can be of use to people to quickly do that, go beyond what they've been able to do in the single player environments. In the long term though, I think everybody's going to demand large-scale, beautiful, vast connected systems and that's also what we're trying to design for in terms of how we build.

[00:06:11.986] Kent Bye: I think one of the biggest challenges with creating an environment like this is getting all the physics just right, you know, and all the collisions and, you know, physics is something that we live with every day. It's something that when we drop something, we can predict what's going to happen. And so when you go into virtual reality environments, having these physics that actually feels real is something that can cause a lot of presence, but it's also a lot of challenges in actually doing that and simulating all that. And so for High Fidelity, what are some of the innovations that you've made in terms of trying to make the physics feel right when you're in these environments?

[00:06:47.252] Philip Rosedale: Yeah, it's a huge one, I mean, you got your finger right on it there. Physics is incredibly essential and immersive when it works, and when it doesn't work, it's jarring in the same way that bad facial expressions on avatars are. We've done a lot of work in that regard, but there's as yet a lot more to do. The fundamental thing we've done is a shared physics engine, so that when you have multiple people looking at the same toppling pile of blocks, they all see the same thing. And that is a considerable engineering problem. The solution that we've made for it is fairly unique and difficult, so we think it's one of the valuable things that we're giving people on this platform to work with. Some of the other physics problems are what happens as you grab things, as you hand things to each other, as the body or the avatar interacts with the object. And there, there's just a tremendous amount of, you know, there's too long a list of fundamental, interesting problems. You know, when I grab something, does it pass through things while I'm holding it? When I hand it to somebody else, what happens? What if we're both holding it and trying to pull on it at the same time? Questions like those have to be dealt with. We think we've done an okay job. Right now, I think the beta, the Sandbox that we just put up, is pretty good, and we're going to keep working on it just like everybody else, as quickly as we can, you know, finding the edge cases. I mean, the design space around hand controllers is so vast that it's going to take years for people to get to the point where we are with a modern windowed operating system today, where we know what left-click does, we know what right-click does, we know what a context menu is. That stuff hasn't been figured out yet in VR.

[00:08:11.522] Kent Bye: Yeah, and I think that in the long term, I would expect that things like the Vive would be able to have tracked controllers for your elbows or your knees to really capture your body movement. Because right now, having just your wrist tracked and trying to figure out the inverse kinematics of your elbow is a really hard problem. And when it's wrong, I think a lot of VR developers have taken an approach of only showing what you're tracking. So if you're only tracking the hands, just show the hands. In High Fidelity, you have kind of like this embodied avatar presence, but if the inverse kinematics start to get off, then there's, I guess, an interesting trade-off there.

[00:08:48.887] Philip Rosedale: An important distinction to be made is whether it's you seeing yourself or somebody else seeing you. We believe the right strategy is different for those two cases. We've built a sophisticated IK system that can take inputs from multiple points. As you said, eventually we're going to have torso inputs and foot inputs and that, I mean, I think people are going to come out with hardware for it. We've built our system to allow that and sort of guess at the joints that it doesn't know about. As you said, probably for a lot of applications, you won't show your own guessed-at joints to yourself, because that's what's most psychologically jarring, but you do show the guessed-at joints of someone else to you, if you think about it. And so that kind of like, I don't see the same thing that everybody else sees, is I think one of the tools that we're all going to use. Because seeing somebody else's full body move is very expressive. You know, if they're spreading out their arms in surprise and you see that whole arm coming with them, that tells you a lot more about what they're doing. And so you'd like to see the other people, but not yourself. Because if you look down at your own elbow and it isn't where your actual elbow is, you get this loss of body presence, basically.
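The elbow-guessing problem Rosedale describes usually reduces to two-bone analytic IK: given the shoulder-to-wrist distance and the two bone lengths, the law of cosines fixes how much the elbow bends, and a separate heuristic picks the swivel plane. A minimal sketch of that core step (the function name and parameters are illustrative, not High Fidelity's actual API):

```python
import math

def elbow_angle(upper_arm: float, forearm: float, shoulder_to_wrist: float) -> float:
    """Interior elbow angle (radians) for a two-bone arm, via the law of
    cosines. pi means the arm is fully extended; smaller means more bent."""
    # Clamp unreachable targets to a fully extended arm.
    d = min(shoulder_to_wrist, upper_arm + forearm)
    cos_elbow = (upper_arm**2 + forearm**2 - d**2) / (2 * upper_arm * forearm)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))

# With 30 cm bones and the wrist 60 cm from the shoulder, the arm is
# straight; pulling the wrist closer bends the elbow.
```

Picking where the elbow points around the shoulder-to-wrist axis (the swivel angle) is the part that needs the heuristics or the extra trackers Rosedale mentions.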

[00:09:46.866] Kent Bye: Yeah, it's very delicate, I think, you know, having presence within these environments. And so doing all the things that you can do to maintain that presence. And I think there's another dimension of social presence that happens when you're with other people. And so what have you found in terms of, you know, being in these spaces and interacting with other people and what that does to your level of presence?

[00:10:06.205] Philip Rosedale: Well, 3D audio presents a huge set of design challenges. There again, I think we've done pretty good work so far, in some cases the best work in the industry. Obviously, the spatialization and reverberation in the environment is key to creating a sense of social presence when you're talking to people. There are other issues like attenuation, for example. This is subtle, but as I demonstrated today in our demos here at the show, when there are a lot of people near you in an environment, because we don't have as much dynamic range in our voices in VR as we do in reality, it's very easy to either not be able to hear anyone at a distance or to hear everybody so loudly that you're just completely cognitively overwhelmed. So that's an example of a design problem that's present there. Again, I think there are big challenges, but big opportunities. We feel like we've got a pretty good 20-25 person environment right now where you have a tremendous sense of presence. Three people is a lot more than two in VR, which is interesting also. Like, the third person and the idea that there's a separate set of people you're talking to gives you a very powerful sense of social presence as compared to just one-on-one.
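A common way to tame the crowd-loudness problem Rosedale describes is a clamped inverse-distance falloff applied to each voice. This sketch shows the general shape of that curve (the parameter names and default values are assumptions for illustration, not High Fidelity's actual tuning):

```python
def attenuate(gain: float, distance: float, ref_dist: float = 1.0,
              max_dist: float = 20.0, rolloff: float = 1.0) -> float:
    """Clamped inverse-distance attenuation: full volume inside ref_dist,
    a fixed floor level beyond max_dist, smooth falloff in between."""
    d = min(max(distance, ref_dist), max_dist)
    return gain * ref_dist / (ref_dist + rolloff * (d - ref_dist))
```

Tuning `ref_dist`, `max_dist`, and `rolloff` is exactly the dynamic-range compromise he mentions: too steep a rolloff and distant speakers vanish, too flat and a nearby crowd is overwhelming.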

[00:11:07.998] Kent Bye: Yeah, and I think that once you go beyond a one-on-one conversation, then it seems like a virtual reality environment starts to have unique affordances in terms of carrying a conversation amongst multiple people. So I know that you've written an article comparing and contrasting like 2D video Skype with virtual reality telepresence. And so what were some of the main points that you're trying to make there in terms of some of the unique qualities of what VR as a telepresence tool gives you?

[00:11:34.846] Philip Rosedale: Yeah, if I could just pick out one because I think it's a rich area and I wouldn't want to bore you or lengthen this conversation with all the different ones, but one really interesting one is agency. That is the ability to move where you are when you're speaking in a group. So in a video conference, we don't do that because we have to array the participants, you know, in a line or in a circle or something, because we don't have any choice. I mean, everybody is sort of seeing a different set. In a VR environment, you can move around, you know, you can move closer to somebody when you want to emphasize to them what you're saying, or you can space yourself back because you're addressing everyone. And that idea of agency in a conversation is very significant. We use that word specifically and we've explored it quite a bit as we've looked at the alternatives. But I think that's one thing, the ability to move your location as a participant in a group conversation that is absolutely crucial to human group conversation and is present in VR, is not present in video conferencing.

[00:12:27.978] Kent Bye: And so why does a conversation with 25 people work in VR and it doesn't work in Skype?

[00:12:33.144] Philip Rosedale: Well, one thing is that you have 3D audio, so you can separate the apparent locations of the speakers. Our brains use the 3D information about a speaker's location to better understand them. So if two people are talking at the same time and they are coming out of the same point in space, say, your speakerphone in a conference call, you actually cannot understand both their voices at once. Your brain uses the 3D spatialization information to kind of separate where in the brain it processes their voices, and that allows you to hear them both at once. So that's another simple example that is like a huge win for VR that we just don't have any other way.
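The separation effect he's describing (often called the cocktail-party effect) depends on each voice arriving with distinct spatial cues. Real spatializers use HRTFs, but the simplest stand-in, constant-power panning by azimuth, already illustrates the idea. This is a toy sketch, not High Fidelity's audio pipeline:

```python
import math

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power stereo pan. Azimuth runs from -90 (hard left)
    to +90 (hard right); total power stays constant at any angle."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)  # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

# Two simultaneous speakers at -60 and +60 degrees land mostly in
# opposite ears, which is what lets the brain stream them apart.
```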

[00:13:06.132] Kent Bye: It seems like people who are doing virtual reality development have to learn a lot about our perception, how the body works. What's been some of your most interesting discoveries or insights about some of this stuff? Because we're simulating reality, but by doing that we're actually kind of understanding our real reality more.

[00:13:21.576] Philip Rosedale: You know, I think again, just to pick one really simple thing, I'd sum it up as: we're not as smart as we think we are. There are a lot of things where the literal way that we understand something can't be stretched very much, and we've really learned that. I think a lot of VR designers are learning right now that you can't break the rules of the natural human experience by as much as you would have thought you could have. So for example, when we have our group meetings, we have both avatars that we've made that are creative, you know, fanciful avatars that don't look anything like us. We also have avatars that look exactly like us. Your ability to understand a group conversation is much higher when you use the photoreal avatars. Why? I don't know. I think it's just that when you kind of glance out of your eye and that person is that person that you work with every day, your ability to kind of understand the meaning or contextualize the meaning of what they're saying is much higher. So one of the things we found is that under many circumstances where you're working with people that you frequently work with in real life, you really want them to use avatars that at least in the facial details look like them. I can't say exactly why that is, but it's a really interesting finding, and I think we've seen that echoed in a number of other cases where breaking presence, breaking what you would literally have expected to happen in the real world, is a dangerous thing to do, even though you have all this compute power and ability to kind of break the rules if you want to.

[00:14:38.496] Kent Bye: Are you talking about like avatars that are like non-humanoid at all or like robotic? Because for most research that's been done into the Uncanny Valley, they've done these different studies to look at what sense of either familiarity or comfort that you have. And it turns out some sort of stylization is what we want. But the more photorealistic it is, the more it doesn't meet our expectations of them.

[00:15:01.045] Philip Rosedale: Yeah, that's great. What I'm referring to is the case where, whether it's a stylized or a photorealistic avatar, you choose an avatar that doesn't look like you when I do know what you look like, and I mean dramatically different, you know, a guy being a girl, or you having completely different hair or skin or something like that. In those cases, it's kind of harder to interact with you if I know that you're not that person. And I think you wouldn't have expected that. You know, in a group of your co-workers, you would have thought, I'll kind of remember as a bookmark that Ryan is this female avatar, because we've been testing that for the last couple of months. I'll just remember that's Ryan. I can see him right there. It doesn't work. There's a dissonance there in the brain where you just kind of don't understand what's going on, and that affects the quality of the communication. There are a lot of things in VR that are so subtle, I've been fascinated by this, that we need to do many, many user tests of them, and it's going to take years to really bear them out, because you don't have an immediate answer. Like, you don't immediately say, oh, that's the wrong way to do it, or that's the right way. You just realize after several sessions together that you're like, eh, this isn't quite as good as the way we had it before. And it's going to slow down the development cycle on VR, because it's very subtle stuff compared to what you would have thought of in the early years of computer science, where things were far more obvious, you know, buttons have got to be on the left side kind of thing.

[00:16:17.280] Kent Bye: Well, it seems like eye contact is going to be a pretty huge component of these social interactions. And right now, there's no solution to give you any sort of more specific detail other than kind of approximating based upon the center of where someone's looking. But yet, when you're in these social environments, you're going to actually want to see the precise eye tracking. So what are some of the things that you've been doing in order to prepare for integrating more sophisticated eye tracking solutions?

[00:16:43.773] Philip Rosedale: Well, we built avatars that have highly visible, expressive eyes. So, for example, stylized avatars have been kind of our go-to from the beginning, because we felt that the eyes were so important. We also have eyes that are fully mobile. So we actually have a test rig where we use one of the SMI eye-tracking-equipped Oculus headsets, and you are able to see the full eye movements of your partner in conversation. And it's a remarkable experience, just as you would think it would be. So we've engineered our system to have support for full mobility of the eyeballs. Right now, we program it, as you said: when you roughly look at somebody, we lock your avatar's eyes onto their eyes, so they can feel that sense of presence from you. But we just do that for the person that your nose is pointing most at, basically.

[00:17:27.000] Kent Bye: And I know there's been a number of different experiences that start to program, like eye blinks, and our eyes are kind of moving around. And if you're just kind of like staring at somebody, it's like they look like a zombie. So what are some of those things that you have maybe added additional layers in order to make it feel like it's less uncanny?

[00:17:42.963] Philip Rosedale: Yeah, well, first of all, there's just the idle sway or breathing of an avatar. The devices are not always able to pick up the slight sway of your torso. So a lot of times, like you're doing right now, you'll move your body a bit while holding your hands and your head still. So we do move the body a little in a regular fashion. As you said, blinking is a critical part of communication. We're actually looking at how to capture blinks, maybe even before people are able to capture eye movement, because blinking is a way of kind of emphasizing or starting and stopping your communication phases. You won't realize it until you pay more attention, but it's really interesting. Random eye movements, what are called saccades, the slight small movements of the eyes between fixations as you look around a scene, we actually simulate those as well. So we do a lot of things with the avatars to make them feel lifelike, kind of in these years before we have all the data to apply from the real you.

[00:18:32.792] Kent Bye: And so, going out into the future, maybe three or four years, I'd imagine that there may be some, let's say, governance issues that, you know, something like Second Life had a lot of, like, walled garden. They had control. They were able to set different policies. But yet, something like High Fidelity doesn't seem like it has a singular governance system, or is there a governance system? What's kind of the future of making the rules of this whole system?

[00:18:58.642] Philip Rosedale: Well, our expectation with High Fidelity is for it to reach the scale of the internet itself. I mean, we believe that either our software or something like it will be as widely used as web servers are today. So in anticipation of that, we have less governance than, for example, we had in Second Life. And I think that's the right call. However, the individual server operators, running what we call domains, who put up High Fidelity servers will have the sort of capabilities within their own domains that you saw, for example, with Second Life. And of course, we've thought about that a lot, because we saw a lot of these governance issues borne out in Second Life.

[00:19:33.292] Kent Bye: It sounds like you're kind of metaphorically the bar owner or the owner of your domain, of your actual virtual experience. And so are there certain admin privileges that you would have that you'd be able to execute because of that then? And what does that look like?

[00:19:46.476] Philip Rosedale: Absolutely. There's a variety of interesting things that I think server operators, domain operators, will have. One is certainly the ability to kick people out or to let people in on a list. And we're looking at a variety of different ways that that can easily be done. Some of those features are in the software already. Other things, though, that are more subtle are regulations on the type of content that can be brought into or out of a domain, or, more importantly, into it. So domain operators will probably have a desire, and this is one of the businesses that we hope we can provide for people as a company, to regulate the types of content that are allowed in their space. So they might want to have stuff that's kind of on a whitelist of content, or they might want to have things that are guaranteed not to have objectionable content of one kind or another. We can probably be a fair arbiter, simply a designator, of what that type of content is, and then allow those domain operators to make their own choices about what they want to let into their space.

[00:20:44.353] Kent Bye: So artificial intelligence, machine learning is something that's really exploding right now in all sorts of different businesses. How do you see the future of high fidelity and VR in these social spaces? And where does AI fit into that?

[00:20:57.374] Philip Rosedale: Well, AI is just obviously fascinating right now, and there's probably no more fascinating application of AI, I think, than embedded agents. You know, imagine walking up to an avatar in High Fidelity and having that avatar be Watson from IBM, folks that we've been talking to. We're trying to design the avatar system so that it's very easy to connect AI systems into it, so that you can have natural interactions with an AI. My personal belief is that we are really at the beginning of an interesting decade or so around AI, and that we're going to see some just staggeringly human interactions that we're able to have. And I think that we're going to need virtual worlds, because virtual worlds provide an easy, lifelike, real environment in which we can interact with these AI creatures that get made.

[00:21:45.673] Kent Bye: Yeah, talking about the future of virtual reality and artificial intelligence, to me, I just find that people kind of have this inner conflict of all the amazing potentials, but also the dangers. Are there any dangers that you see in terms of VR or AI that we should be a little cautious about?

[00:22:03.072] Philip Rosedale: Well, first of all, VR manages some of the danger issues around AI, and I think in that sense it could be a real force for good. You know, we're sometimes afraid of robots that could harm us in the real world. And, you know, industrial robots don't walk around on legs yet. But one of the neat things about VR is that we're all on a level playing field in there, meaning that the machines can't hurt us. And to some extent, and I actually think this may be even more important, we can't hurt the machines. So virtual worlds may represent a kind of safe playing field for a lot of human-AI interactions. And they may also represent a kind of home for different types of AI that can create their own kind of backups and safety and personal space inside the virtual world. It's an idea that hasn't been talked about much yet, but I do think it's coming.

[00:22:50.288] Kent Bye: Yeah, I do too. Especially with the birth of virtual reality right now, there just seems to be a lot of big innovations in AI that are kind of coming into this confluence right now. And I think a lot of speech recognition as well, of being able to do voice input and machine learning in that way. And so do you foresee the GUI of the future being voice input? Or do you see that we're going to have some sort of 3D user interface with hand controllers?

[00:23:16.005] Philip Rosedale: Yeah, VR represents a couple of really big changes, and one is the presence of a microphone. Because VR involves wearing a headset, and because it so often involves interacting with others, it means that we all have microphones, and that is a big change. As you said, the use of AI for voice interaction has primarily been on mobile devices. A big thing that's driven that is all of us sort of typing once in a while by using our voice, but it's into our mobile devices, because they always have microphones. VR headsets will always have microphones. And so I believe that VR will change computing by giving it a microphone. And then, as you said, I think that a significant portion of the interaction that we do, and maybe even the control, the authoritative or executive sort of functioning that we do, will be using our voice. Because if we have a headset on and we're talking to others, it's also probably more socially acceptable to use our voice, another thing that has not been true in the desktop age of computing.

[00:24:07.396] Kent Bye: And so what's it going to be like in three to five years? Right now I can go hang out at a bar on Friday night in Portland. What do you foresee people going into virtual worlds for? What are the things you predict people are going to want to do in these spaces, based upon all of your experience spending time in virtual worlds?

[00:24:27.403] Philip Rosedale: Well, I think there's going to be just a lot of stuff to marvel at. I mean, as we've seen VR come online and we've seen, you know, one demo after another that's fascinated us, I think demonstrative spaces that have unusual social experiences in them are just going to be innumerable in a few years. I think we're still going to be in that early phase, though, the pioneering phase of just trying to figure out what works best. But I bet that on a Friday night, you'll have the option of just putting on your headset and marveling as you follow one link to another to another, just kind of taking a look at what's out there. Not entirely unlike the internet in the late 90s, where we would all just kind of sit down and say, man, what new strange website is up now that I can try out or play with? Distance education, historic recreation, taking something that can be put in a rich format into 3D and then exploring it. I mean, we're going to see that over and over and over again, and I bet that's a lot of what we're going to be doing in the next few years.

[00:25:22.054] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:25:28.535] Philip Rosedale: Well, one of the things we're doing with High Fidelity is we're designing the virtual servers to run on your own machines. So we're not just designing them to be able to run in the cloud or on, like, an Amazon server. We're designing them to run on your home desktop machines. And the reason we're doing that relates to how big we believe the virtual world is going to be. There are a thousand times more desktop machines available today with similar power as there are server machines. And those thousand times more machines, about a billion machines right now that are broadband connected worldwide, are capable of simulating a virtual space even today that has the level of detail and size of the entire planetary area of Earth. And so I believe that in the long run we're looking at an interconnected set of virtual spaces which, taken together, represent a larger and more interesting landmass than the world that we're currently standing on having this interview. So I believe that we are almost certainly going to move the majority of our creative and intellectual and constructive and work activities into virtual spaces. And, you know, at High Fidelity, and in my life's work, I've been trying to just help build the software that would enable that, because I think it is fascinating to watch unfold and, in general, a force for good for humanity.
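Rosedale's planet-scale claim is a back-of-envelope calculation. Sketched here with illustrative numbers (the per-machine simulation budget is an assumption chosen to make the arithmetic concrete, not a figure from the interview):

```python
# Earth's land area is roughly 1.5e8 square kilometers.
machines = 1_000_000_000        # ~a billion broadband-connected desktops
km2_per_machine = 0.15          # assumed simulation budget per machine
land_area_km2 = 1.5e8

simulated_km2 = machines * km2_per_machine
print(simulated_km2 >= land_area_km2)  # prints True
```

The interesting part of the argument is the ratio, not the absolute numbers: harnessing desktops instead of data centers multiplies the available compute by roughly a thousand, which is what moves the achievable scale from city-sized to planet-sized.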

[00:26:39.602] Kent Bye: But don't you see that eventually all these augmented and virtual reality headsets will be mobile, and we won't need the desktop computers? Then where will the computing power come from?

[00:26:49.117] Philip Rosedale: But by then, the devices that we have in our pockets will perfectly well be able to do the same job. My point is simply that we want to use as much computing as we can to build as large a virtual space as we can. In fact, our designs regarding this kind of shared computing do include mobile devices as well. And the mobile device I have in my pocket right now is more powerful than the minimum machine that we required you to have as a desktop when Second Life was launched, believe it or not.

[00:27:12.521] Kent Bye: Great. Anything else that's left unsaid that you'd like to say?

[00:27:16.980] Philip Rosedale: No, it's been great. I think it's just going to be an unbelievable few years for virtual reality and for High Fidelity, and it's going to be a lot of fun.

[00:27:24.105] Kent Bye: Great. All right, Philip. Thanks a lot. Thank you. So that was Philip Rosedale. He's the founder of High Fidelity, as well as the original founder of Second Life. And so a number of really interesting ideas have stuck with me since this interview. Number one is probably this concept that virtual reality could be a kind of neutral meeting ground for entities that could hurt each other in physical space, whether it's a human and an artificially intelligent robot, or even humans who may be dangerous to each other. VR could provide this neutral, safe meeting ground for discussions. But also just this idea of AI as a technology that we don't know if we can fully trust, or more importantly, whether the AI would be able to fully trust us. And of all the different companies that are out there talking about the metaverse and trying to put forth a platform and network, I think a lot of companies are trying to own the platform, first of all. And by doing that, they can create an experience that's going to be at a certain level of quality. But I think that, in the long run, if people are willing to go through the pains of setting up this technology and running it themselves, and if they're able to create a very compelling experience, then I, as an independent business owner with the Voices of VR podcast, would be much more motivated to create and host my own High Fidelity spaces for people to come and visit, because I'd have more control over the different revenue streams that could flow through them. Whereas in something like Project Sansar there may be other cuts, like paying both an income tax on the money that's exchanged and a property tax on the different spaces that you're owning.
So I didn't get a chance to really dive into the economics with Philip Rosedale, or their business model, to see whether or not they're going to have some sort of sales tax and property tax. It's certainly possible. But they may be taking more of a software-and-services approach, almost like a Linux model, where they're giving away their technology stack for free but perhaps focusing more on building worlds. It'll be interesting to see where the center of gravity of our attention starts to go, whether it'll be to some of these more walled-garden projects or to something that's more open, like High Fidelity. I think another point is just the importance of audio in these different situations. It was really interesting to hear how Philip made the distinction that when you're talking in a group on a conference call, you're getting all the sound filtered into one location from your speakers, but in High Fidelity, having spatialized sound allows your brain to listen to multiple people talking at the same time. And anecdotally, I sense that's true. It's easier to have a group conversation with people talking over each other than to have that same type of conversation in a video conferencing tool like Skype. There is something that's lost within 2D, and it sounds like within High Fidelity, spatialized audio is a huge component of creating the realism of those interactions. Also, I do think it's a really insightful VR design principle that you may only see your own hands and head in order to preserve presence, but when you're looking at other people, there's a lot more expression that can happen if you render full-body avatars for them. In some of the latest demos they were showing at SVVR, they were showing a full body, and the inverse kinematics wasn't quite right. And it did feel pretty weird.
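The spatialized-audio point above, that positioning each voice in 3D lets the brain separate simultaneous speakers, can be illustrated with a minimal stereo panner. This is a hypothetical sketch, not High Fidelity's actual audio pipeline; the function name, the equal-power panning model, and the flat 2D coordinates are all my own simplifying assumptions (a real system would use HRTFs and full 3D positions).

```python
import math

def spatialize(listener_pos, listener_yaw, source_pos):
    """Return (left_gain, right_gain) for one voice source.

    Equal-power panning by azimuth plus inverse-distance attenuation,
    a crude stand-in for a real HRTF pipeline.  Positions are (x, z)
    pairs on the horizontal plane; yaw is in radians.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dz), 1.0)  # clamp so gain never exceeds 1
    # Azimuth of the source relative to the listener's facing direction.
    azimuth = math.atan2(dx, dz) - listener_yaw
    # Map the left/right component onto a pan angle in [0, pi/2]:
    # 0 is hard left, pi/2 is hard right.
    pan = (math.sin(azimuth) + 1.0) * math.pi / 4.0
    attenuation = 1.0 / distance
    return math.cos(pan) * attenuation, math.sin(pan) * attenuation

# A voice straight ahead lands equally in both ears, while a voice to
# the right is louder in the right ear, so two simultaneous speakers
# stay separable in a way a mono conference-call mix never is.
ahead = spatialize((0, 0), 0.0, (0, 1))
right = spatialize((0, 0), 0.0, (1, 0))
```

Mixing each speaker's stream through their own pair of gains, and updating the gains as avatars move, is the basic mechanism that makes a crowded virtual room sound like a crowded physical one.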
I think eventually, once we get enough data points, tracking at least the feet as a minimum, and perhaps the elbows and knees, will help alleviate some of those problems. But to invoke the virtual body ownership illusion with a full avatar, you really do need a couple more data points to make it believable and not break presence for the person who's embodied. Now, Philip did say something that was kind of interesting about their early findings: matching up somebody's gender and appearance with what you'd see in real life helps with the continuity of discussions and identity within virtual worlds. And so if you're changing your identity too much, then it could create too much cognitive dissonance. That said, I imagine that people who have different preferred gender pronouns or different identities may be able to express those more fully within virtual worlds. And so I'd imagine there could be certain cases where it's actually better for people to embody their own sense of themselves, and not being able to fully express their identity in that way would itself create cognitive dissonance for them. So there are different contexts; from the High Fidelity perspective, he's really talking about people who know each other's identities and who are working together. And it sounds like there are a lot of really interesting open research questions in terms of how you'd even measure that, or how you would go about trying to determine what the best approach would be there. But it sounds like stylized avatars are something that they're definitely doing at High Fidelity. He did mention SMI, SensoMotoric Instruments, for eye tracking. They do actually have some eye-tracking add-ons for the Oculus Rift DK2, as well as perhaps for the consumer version of the Oculus Rift.
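The inverse-kinematics guesswork described above, where joints like the elbow or knee have to be inferred from too few tracked points, can be sketched with a minimal 2D two-bone solver based on the law of cosines. This is an illustrative sketch with assumed bone lengths and coordinate conventions, not High Fidelity's avatar code.

```python
import math

def two_bone_ik(upper_len, lower_len, target_x, target_y):
    """Solve a 2D two-bone chain rooted at the origin, e.g. shoulder to
    elbow to hand.  Returns (shoulder_angle, interior_elbow_angle) in
    radians, where an interior angle of pi means a straight limb.

    With only the hand position known, the bend direction is ambiguous
    (this solver just picks one side), which is exactly why extra
    trackers on the elbows and knees would help.
    """
    dist = math.hypot(target_x, target_y)
    # Clamp the target into the annulus the chain can actually reach.
    dist = max(abs(upper_len - lower_len), min(dist, upper_len + lower_len))
    dist = max(dist, 1e-9)  # avoid division by zero for degenerate targets
    # Law of cosines gives the interior elbow angle.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target minus the triangle offset.
    cos_offset = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow
```

A forward-kinematics check, placing the elbow along the solved shoulder angle and walking down the lower bone, recovers the hand position, which is how you'd sanity-check a solver like this; the uncanny poses at SVVR come from the solver's guesses (bend direction, reach clamping) disagreeing with what the real body is doing.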
Very exciting to hear that eye tracking is going to be coming. Within group situations and social interactions, someone's eye gaze, their blinking, all of that is going to be super important for conveying some of these subtle body language cues that at this point in VR are kind of lost and have to be approximated and mimicked through generic computer algorithms. Another really interesting point is just this distributed computing paradigm. I think High Fidelity is a company that's really thinking about the future in a way that makes a lot more sense to me in terms of this trend of decentralization and putting control and freedom back into the hands of individuals and the community. Instead of relying on huge clusters of cloud computing to render these experiences, their vision of using the idle computing power of all these machines when they're not being used is a really forward-looking idea, and something to pay attention to as a viable approach and model for scaling up to the level of the Internet. So that's super exciting, and it'll be interesting to see if people are willing to spend the extra bandwidth, power, and energy to make their experience better, and whether High Fidelity can tell the story of having them contribute their computing resources to drive this vision of the metaverse that they're trying to create. And one final thought that wasn't mentioned explicitly on the podcast: I had a chance to catch up with Philip Rosedale a number of weeks after this interview, and he had a number of really interesting ideas in terms of, well, what's it going to do to culture when we start to meet in these virtual spaces?
Because there's this kind of paradox: as we start to form a unified internet culture, that has the potential to threaten the cultures of different subcommunities. So there's a tension between how you preserve the cultures of geographic communities that may be very specific to a country or a value system, and how we start to create a new universal culture within these virtual metaverse spaces. It's just another example of the different types of things that Philip thinks about when he's talking about the future of virtual reality and the metaverse. He's always one, two, or three steps ahead of other people in terms of thinking through the implications of all of this. I always enjoy hearing his thoughts and ideas, and I look forward to seeing where High Fidelity goes in the future. And I am getting more interested in creating my own social spaces to have gatherings and virtual meetups of Voices of VR listeners and patrons. So if you are interested in that, then go to episode 376 on the Voices of VR website and find a link to sign up to my email list so that you can keep informed for when I start to have these virtual gatherings and meetups for people who are listeners and pioneers and creators and visionaries of this whole new realm of virtual reality. So, with that, thank you so much for listening, and please spread the word, tell your friends about the podcast, and if you would like to support the podcast, then please do consider becoming a contributor to the Patreon at patreon.com slash Voices of VR.

More from this show