Oculus Medium may be the first professional 3D modeling tool created for the Oculus Touch controllers. It was built with the goal of democratizing the process of creating 3D art and prototyping 3D-printed objects. The Medium developers didn't set out to build a high-end industrial art tool, but early beta testers have already started integrating Medium into their professional 3D graphics pipelines.
LISTEN TO THE VOICES OF VR PODCAST
There have been a TON of improvements and UI developments to Medium over the last year, and it's in the process of final polish before being released with the Oculus Touch controllers on December 6th. I had a chance to talk with Oculus Medium lead Brian Sharp about some of the design intentions behind Medium, as well as some of the surprising ways that it's already being used by professional artists like Goro Fujita. We talk about the evolution of the 3DUI, and how they wanted to stick to physical metaphors that anyone could understand rather than abstract concepts that only graphics professionals can grok. Medium is a great example of a VR program that demonstrates the power of immersive computing and how it can be much more intuitive and easy to learn than previous 2D methods.
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR podcast. So one of the applications that I'm the most looking forward to coming out with Oculus and Oculus Touch is Oculus Medium. So Medium was announced back at Oculus Connect 2, and it basically provided a way for you to be in VR and to be able to create 3D objects through more of a metaphor of sculpting and working with clay and kind of forming these different 3D objects. So it's kind of like 3D modeling for the masses, and it's not meant to be this super high-end professional tool, but yet lower the bar of entry to be able to make it just more accessible to be able to create 3D objects, because frankly it's pretty difficult to learn Maya or Blender, and the learning curve for those programs is just huge. It takes weeks and weeks and weeks. But within just a few minutes, you can start to go into Oculus Medium and start creating 3D objects, which I think personally is probably one of the most revolutionary things to come out of Oculus so far. And I'm super excited that they're going to be releasing it for free with the Touch controllers when they come out in December. So today, I have a chance to talk to Brian Sharp, who is the lead of Oculus Medium. And he walks through the design process and deeper intention with the program and what he's already starting to see with professional artists trying to use Oculus Medium within their production pipeline. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Fishbowl VR. Valve suggests that game developers make a good game, price it right, listen to your community, and update, update, update. But sometimes you need to get candid feedback before you release it to the public because you don't want to have to dig yourself out of too many negative reviews.
So Fishbowl VR does on-demand private user testing videos at an affordable price. You can watch a broad range of users play your experience in their natural environment. and get that valuable feedback that will help you make your experience a success. So start getting feedback today at fishbowlvr.com. So this interview with Brian happened at Oculus Connect 3 that was happening in San Jose from October 5th to 7th. So with that, let's go ahead and dive right in.
[00:02:29.479] Brian Sharp: I'm Brian Sharp and I work at Oculus. I'm the project director for Oculus Medium. And for the past couple of years, I've been running the team that is building Medium, which is a VR sculpting experience that we are debuting with Touch.
[00:02:40.219] Kent Bye: Great, so last year at Oculus Connect 2 was the premiere of Oculus Medium, and so maybe you could talk a bit about how this project came about, and then where it is now.
[00:02:50.401] Brian Sharp: Yeah, it's interesting, right? We debuted it last year at Connect 2. We kind of didn't know what to expect. It's funny, right? Story Studio, who made Henry and Lost, are our sister team, and we share an office. I always criticize them for, I think artists have this habit of keeping their work close to their chest. And then when they finally show it to someone, people are like, this is amazing. And then it's this huge morale boost. And it's like, well, why don't you show it to people more often? But we're guilty of the same thing, right? We showed it at Connect 2 and we're like, you know, we'll put it kind of in the back. We don't really know. And then it was like this really big deal and it went over really well. Since then, I would say the big things we've done are we rolled it out to a group of artists, internal and external. Some of them were here at Connect 3. Landis Fields, who works at ILM on Star Wars, and I think Doctor Strange and stuff like that, was just here sculpting. And that's, again, been this huge boost. We just had this tool where we're like, we think it's pretty good. We don't really know. And we gave it to these artists. And they started doing stuff that, honestly, I am surprised the tool in an unreleased state is able to do the kind of work they're doing in it. And that's great, both because then they get super excited about it and the future and modeling and VR and all this. It's a huge boost to the team, right? It's very energizing for us, especially coming into, you know, we're landing OC3 and then we're shipping in just a month and a half or so and all that.
[00:04:05.181] Kent Bye: Yeah, I've seen just in my Twitter stream a number of artists who have access to their Oculus Medium and some of the sculpts they're showing, and I get really envious and be like, oh man, I want to play around with that. But, you know, the thing that's really striking to me is to be able to create VR within VR, and that a lot of these programs like Maya, these 3D modeling programs, have done this like crazy abstraction to be able to work in a 3D environment with all these 2D planes, and they've figured out all these ways to do that, but it seems like a bit of a hack or a constraint to be able to do that. But yet, within VR, you kind of remove all those kind of arbitrary constraints of working in 2D planes in a 3D medium, and you're able to kind of unlock the creative potential is what it seems like what I'm seeing so far.
[00:04:49.900] Brian Sharp: I think you could make a VR modeling app that has all of the technical complexities and hurdles on the monitor we do. But one of the big design goals with Medium is to avoid doing that, right? You never go into some mode where you're drawing out a grid and you're measuring, you know, it's really deliberately very free form, and we design it so that most of your time is really spent working directly on your sculpt. Creating digital objects is just not something that the average person has ever been able to do before. I mean, I guess you could download Blender if you were really motivated and watch like a bunch of YouTube videos and figure it out, but you had to really have a goal in mind, and it's been really satisfying to see people using Medium, especially to speak to the progression from Connect 2. A lot of people had a lot of fun with it at Connect 2 and mostly made kind of like toothpaste-in-the-air kind of stuff, right? And we've done a bunch of work. We added stamps, so there's kind of these prefab shapes that you can use, and we have libraries of parts as a starting point that professionals use to mass out stuff, but even a total beginner can use to basically piece together things that look pretty good just to get started. And then also at Connect 3 is the first time we've showed our new-user-experience kind of tutorial, which is, if you've used Toybox, you know, you're kind of used to the head and hands in the air. We have that recorded next to you, basically walking you through, you know, little short chapters of here's how you do this and here's how you move. It was really satisfying to see that the sculpts that came out of just the walk-up machines right behind me are significantly higher quality from total novices than we've had in the past. We don't want it to be a thing that's just for high-end artists to make content that other people look at from afar. We really want people to be like, I can make an object on my computer now.
I know how, it's easy, and I can just do that.
[00:06:40.462] Kent Bye: Yeah, I think that was the thing that I remember from last year, is that there was a lot of people that wanted to try it, and then once you got to try it, then most of the first part of the experience was being trained how to use this completely new paradigm for how to interact with these 3D immersive environments. I mean, the thing that was really striking for me is just the touch controllers that have two buttons and then, you know, a couple triggers and then, like, a joystick, and you're really kind of using all the different buttons that are available to be able to... To what I see is, like, if you were a professional and you were trying to use, like, the most non-fatiguing interfaces to be able to interface with the user interface, with all those buttons that are available, you're really kind of using all of them in different ways, and so that... just like as a professional who's using Maya might use shortcut keys, you have different ways to be able to, rather than pull up a menu and use your hand to kind of move around, you've figured out some of the most important tasks that you're trying to do, assign a button, and then it's gonna take, I think, a learning curve for people, maybe two or three weeks to really get up to speed with being able to really use it, but then after that, really just get into the flow state of just unlocking their creative potential.
[00:07:48.221] Brian Sharp: Yeah, we've tried to. Ever since last year's Connect, one of my strong directions to the team is like, we should always be able to give someone who's never used VR at all before like a seven-minute demo and have them able to use, you know, a few tools and not be tripping over themselves in that time, which is frankly quite aggressive, right? If you imagine saying that for like Maya or ZBrush or any of these programs on monitors, like within seven minutes from zero, someone should be able to use it. It just doesn't happen on a monitor. I'm pretty happy with our UI. I think it would be total hubris to say that I think that we've really figured out anything particularly magnificent, because this is like the early days of VR. In 10 years, we will look back at this and be like, it was a good first shot. And that's kind of how I think of it right now. But I think that's inevitable, right? Like, you don't come on these things the first turn of the crank. One of the challenges is as we add more features to Medium, right, if you want to be able to do more things and have more tools, fundamentally, we try to make it really accessible. But even if you just imagine, like, a Star Trek-style giant panel with buttons on it, when you've got 10 buttons, a person goes in there and is like, hey, cool, there's, like, 10 things. As soon as you have, like, 500 buttons, it's intimidating, even if every button is something super comprehensible. It's not like they're in, you know, some other language or something. It's just a lot to take in. So that's, we've spent a lot of time on that too, is trying to make it so a completely new user can get in there and get just one thing at a time. You don't need to learn a ton of crazy menus just to like draw your name in the air. And I think we did all right. I think we did all right with that.
[00:09:24.253] Kent Bye: Yeah, I think that there's a lot of lessons from existing graphical user interfaces with the 2D, with the menus and the drop-downs, and I know that even like Photoshop or Unity, where you're able to kind of have little panel windows that come up, seems like you're able to pull up some of these windows with just a lot of sliders and checkboxes and things that you can select, and then even tabs that you can go between things. Oftentimes I would be doing something and the person that was there helping man the station would come up and say, oh, hey, you can do this one thing. Click here, here, and here. And then they would point out a thing that I could do that I didn't know was even possible. And so it feels like, like I said, there's probably a couple, at least two or three week learning curve for people to be using this every day to really get to the point where they really know all the options that are really there. And then once that is there, it feels like it's a type of tool that I'm really looking forward to just because within VR, it's a very high bar for a lot of people to be able to create these 3D models, which is kind of like the foundation for a lot of VR experiences. And so you're really talking about this creation tool that's going to unlock a lot of creation of assets that are then going to go into larger VR experiences.
[00:10:36.127] Brian Sharp: Yeah, right. I think when you talk about how long it's going to take for someone to get familiar with it, there's this implied question of, well, what are they trying to do in it in the first place? So we think about that. So we have like a case back there that, well, we just packed them up, but had a bunch of 3D prints of objects that some of the artists have made, which came out really well. And for someone who's making something and wants to 3D print it, I think that's the kind of thing where we can make a little tutorial that's like, here are the things that you do. Whereas conventional programs make you do all of this stuff that, as just a normal person, you don't, A, care about, or B, even understand, right? It's like, well, you have to weld those verts together. And you're like, what's a vert and why would I weld it? Like, that's not a verb that I need or want. So we've tried to keep the verbs to things that if you have a goal, you're like, oh, I would like to add some clay. And there's like, well, use the clay tool to add clay. OK, I want to kind of smudge it here. We'll use the smudge tool to smudge it. And I think, yeah, I think if you really want to appreciate the full feature set of the whole thing, yeah, like a couple weeks of kind of using it and messing around. But if you go in there with a goal, if you're like, I want to make an object that I can put on my friend's wall for their birthday and take a screenshot of it or whatever, like learning the few things you need to do that is pretty quick, right? Oh, I want to make something I can 3D print. Learning the few things to do that, pretty quick.
[00:11:51.434] Kent Bye: Well, I think that it sounds like one of the big use cases would be to create a 3D object that you could export and then have a 3D mesh with textures and everything to be able to put into a VR experience, maybe rig up and animate. And so is that something that is taken into consideration? I mean, I know that there is a certain amount of affordances you need to do, perhaps some shortcuts when you're actually making the model. But then when you're exporting and optimizing it for a VR experience and being able to hit 90 frames per second, there's other considerations to be able to, like, really create it in a way that perhaps if you were to do it in Maya, maybe it would be more optimized in a certain way, or maybe it's the same. I'm curious to hear a little bit about the use case of being able to create something within Medium and then put it into a VR experience.
[00:12:32.835] Brian Sharp: Right, we actually have some of the artists on the REX team up in Seattle that did Farlands and the Avatar stuff and all that, have been using Medium in their production pipelines, which I did not expect and still find surprising. Medium supports kind of industry-standard export formats. You can get the mesh out there. That's how we 3D print them. For applications like 3D printing, basically you're done, right? You export the mesh, you put it on the printer. I mean, there are limitations. Obviously, if you made a butterfly floating in the air, you can't 3D print that because of gravity. But for all reasonable purposes, if you want to bring it into another application, Medium isn't really trying to solve, like when I talk about all those fiddly technical things, oh, I want it to be only this many polygons, or, you know, I want good topology out of Medium, all that kind of stuff. What artists have been doing is they take it out of Medium and then that's like the beginning, right, of the pipeline. It's so much faster, like if you saw Goro's talk, you can get something done so much, like 10, 20 times faster than you could in any other program. Then you take it out of there and you do some cleanup in ZBrush or you retopologize it or, you know, texture it in Substance or something like that. And we've gotten really good feedback. That isn't honestly something that we really designed for because we're kind of more concerned with making sure that it's a really broadly accessible and usable tool for casual people. So I was surprised to hear that apparently that flow works pretty well for a lot of people right now.
[00:13:50.679] Kent Bye: So you can export it and do some low-level optimizations, and rather than sort of recreate the entire thing in Maya, you could just take the thing that you created in Medium and potentially even use it within a production experience.
[00:14:02.841] Brian Sharp: Yeah, and Medium makes some nice guarantees, right? Like, part of trying to make sure that you don't have to understand a bunch of deep technical stuff that a normal person doesn't care about is the technology we use for the object means you can't have a hole in the object, right? Like, you can carve tunnels through things, but you can't, like, mess up the mesh or end up in a situation where it's this kind of like bad polygon soup. Like it's always a nice, clean, closed, solid object. So it gives you that. The mesh you get out of it is a pretty good mesh to begin with. You know, but it's pretty high resolution, right? It's not something that you would want to throw into a Gear VR app because it's going to be like three million triangles to start and that's way, way more than you want. So it gives you a really, really good starting point that you can get to substantially faster. When we look at, like, using it as a production tool, which again is only kind of one of the many uses that we think about for it, it really feels like it slots in well at the beginning of that, right? Like for sketching, for prototyping ideas, and for getting a first pass of a mesh done, even at pretty high fidelity, right? Like you can do a mesh that really doesn't take much when you get it out of there pretty fast, but for all the fiddly technical stuff you need to do to get it into other stuff, that isn't a problem that we're currently trying to solve.
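[Editor's note: The "closed, solid object" guarantee Brian describes is checkable on any exported triangle mesh: a watertight mesh has no boundary, so every undirected edge is shared by exactly two faces. A minimal sketch in plain Python; the helper name and sample data are illustrative, not part of Medium's actual export API.]

```python
from collections import Counter

def is_watertight(triangles):
    """Return True if every undirected edge is shared by exactly two faces.

    `triangles` is a list of (i, j, k) vertex-index triples. A closed,
    solid mesh has no boundary edges, so each edge appears twice; an
    open mesh (or "polygon soup") will have edges that appear once
    or more than twice.
    """
    edge_counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            # Normalize edge direction so (u, v) and (v, u) match.
            edge_counts[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron is closed; a lone triangle has three boundary edges.
tetrahedron = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetrahedron))   # True
print(is_watertight([(0, 1, 2)]))   # False
```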
[00:15:13.485] Kent Bye: Yeah, to me it feels like it's one of the most fully fleshed out professional applications that I've seen so far in terms of a type of user interface and a sophistication that has a fully featured way to be able to interact with a lot of different options with a pair of six-degree-of-freedom controllers. But I also realize that as you're designing it, those use cases are really kind of shaping how you're moving forward. And so I'm curious to hear all those different use cases that you're really trying to focus this for then.
[00:15:41.706] Brian Sharp: Yeah, so I think, broadly, my direction to the team is, I want Medium to be the way that people make digital objects in virtual reality, right? Like, period. And sometimes we get into these conversations about, oh, well, is it a toy for casual people or is it a high-end thing for professional artists? And my stance on that is, if we ever have to pick one of those, then we have failed, right? That's the promise of VR. There are two challenges when you're making art on a computer, right? And one of them is craft. Like, are you a well-trained artist? You know, if you're making figurative art, do you know bone structure and anatomy and all that kind of stuff? We can't solve that problem with virtual reality. It doesn't really help. But the other problem, which is huge, is just the basic technical hurdles of, like, how do you rotate your camera? How do you focus on the point you want? How do you select things when you're looking at a 2D window and there's a stack of things and you want the face that's the third one back? All of that kind of stuff really falls into the category of things that nobody wants to care about. They just have to care about it on monitors. And VR eliminates all of that. You want to move the camera around? It's your head. You move and you look at the side of the thing. You want to zoom in? You grab the thing and you move your hands apart like you do on your smartphone. And so my perspective on that is it should be possible for us to make an application that is satisfying to the highest-end artists and still super accessible to casual users. And I think we're doing all right. I think we've got a ways to go. We'll never really be done. I think it's coming along. In terms of concrete use cases, I think that, yeah, 3D printing stuff, using the meshes in other applications, bringing them out of Medium and into other stuff, sharing things.
We had a team, one of the social VR teams at Facebook, one of the team members sculpted a faceversary cake for one of their co-workers who'd been there for a year or two or something like that and posted a screenshot to his wall. So we think, like, A lot of social sharing and community and that kind of stuff is also pretty important. Those are kind of the touchstone ones we come to.
[00:17:45.128] Kent Bye: Yeah, when I was at SIGGRAPH, I had a chance to talk to the founder of Blender, and I was asking him, like, oh, are you starting to integrate into VR? And, you know, his take was kind of like someone who's been in the graphics industry for a long time and saying, you know, we've had a lot of different experiences here on the show floor of SIGGRAPH of a lot of people thinking that they're going to start to do 3D interface, and I think that there's been a bit of a bad rap of seeing it as just a toy for people who are professionals. There is a certain amount of shortcuts within Maya, people who use the mouse, and there's just sort of workflows and pipelines for people to kind of do it right the first time without having to do a lot of cleanup afterwards. I'm curious how you see that ecosystem for people who are already doing this all day, every day, for six to eight hours a day, whether or not you see that Medium is going to kind of fit into their pipeline, or how those two work together.
[00:18:38.081] Brian Sharp: Yeah, I mean, I think the Oculus artists on other teams are the best example of this. And I swear we did not, I didn't even suggest to them that they try using Medium as part of their pipeline. But the REX artists have been using it for, I think, environment mass-out, for some of their prototypes. I think Goro's the most visible. Goro Fujita is on Story Studio. He just started showing up to work like an hour to an hour and a half early, literally every day for the past several months, just because he's super into using Medium and making these sculpts. I think that, just to kind of talk artist pipeline for a minute, if you look at tools like 3ds Max or Maya, right, those aren't the kinds of tools that the industry is building these days, right? They're these giant monolithic things that try to do everything. It's like the Inspector Gadget of modeling tools. Really, even on monitors, you kind of see things that are a little more focused, right? Like, you don't do literally every part of a character that's in a movie in ZBrush, but you do a lot of the actual sculpting work, right? And then you take it into something else to do the next step, kind of the same way that a sculptor has a bunch of different tools, and you don't expect any one tool to do everything, and you switch tools when it's appropriate. So I think it actually fits in pretty well, both the response we've gotten from professional artists. I like to tell this story, my friend Jake, who worked on Rez and Space Channel 5 and stuff like that, the first time he used Medium, he took the headset off and he was weeping. And I was like, oh my god, like, are you okay? And he said, you know, I went to art school, but since then, you know, it's all been digital art on monitors, and I've been a production designer, so I really haven't been doing hands-on stuff for so long, and I didn't miss it, because digital art feels so... I don't remember his exact words, but like workman-like, right?
It's kind of tedious. And he said, using this makes me feel like I'm back in art school, right? Like it makes me want to make art again. And I think that's super powerful for a lot of professionals where you just view it as your day job and then you use this thing and you're like, oh my God, like this is super refreshing. I'm going to start showing up early to use this. So I think there's the incentive. I think they're going to want to use it in their pipelines. And I think it actually fits in pretty well, honestly. Again, that's not, you know, I mean, Facebook's goal is not to own a high end industrial art tool that doesn't make any sense with our mission of connecting the world. But again, I think it's one of those things where I really believe we ought to be able to make something that does both ends of that spectrum.
[00:20:54.216] Kent Bye: Yeah, when I've talked to different artists, you know, there's a phrase of you can't really get high on your own supply of like as you're creating the art. And the thing that I'm kind of hearing from different artists is that as they're actually creating their art in VR, it does feel like they're kind of getting high on their own supply in a new way. They're able to start to really step into it and be embodied in their own creations in a way that they've never been able to really fully experience before because it's been through the lens or window of a 2D objectified screen and now when they're actually embodied and co-present with it, I think it's something that's qualitatively different and the creative process to be able to actually see all the different spatial relationships in a way that just, in our brain, I think just processes it differently, but also kind of unlocks a certain amount of just intuitive, natural movements as people are creating stuff, rather than through some sort of abstracted way through a mouse and keyboard.
[00:21:46.404] Brian Sharp: Yeah, right, I mean, the place the name Medium came from was this idea that it's both an artistic medium and a social medium, right? So there'll be a bunch of that community stuff as part of it too, but on the artistic medium side, where, like, any traditional artist who works in any traditional medium, whether it's oil paint or clay or whatever, has, like, a really intimate relationship with that medium, right? Like, you know, you use Roma No. 1 clay, and you know exactly the setting on the toaster to pre-warm it, and you know exactly how it feels and the smell of it, and it's like you spend so much time with that medium that it's, like, really a significant relationship in your life. And I don't claim that VR is at the point now where we can do it. I mean, obviously, there is no, like, smell to the substance in Medium yet. But I think compared to work that you do on monitors, I don't know that a lot of artists would necessarily say they feel like they have a really intimate relationship with, like, the thing behind the sheet of glass that they're looking at when they're just doing their day job. And one of our goals with Medium really was to make it feel like you are there in a place where time is passing, like it's a real place where things happen and you're with the substance and the substance has these properties to it and you can have this relationship with it because you're really there with it and you're working on it directly with your hands. And I think we've been pretty, you know, I think there's room down the road. You listen to Abrash talk about stuff and in the future when we've got, you know, whatever, full haptics and full everything, and you can feel whether the surface is fuzzy and it pushes back on your hand. I mean, obviously, we're going to take advantage of all that stuff as we can.
But even now, in these kinds of early days, I'm pretty proud to say that it feels like a lot of the artists that use it describe it in a way that sounds just like they would a real kind of physical medium.
[00:23:31.280] Kent Bye: Yeah, in talking to Saschka Unseld about Quill, one of the things he said to me was that, you know, the goal of Quill was not to create something and say, oh, hey, wow, that looks like a great Quill piece, but rather say, oh, wow, that piece looks like an amazing Wesley Allsbrook piece, where they actually identified more of the artistic style rather than sort of the technical limitations of the tool to really have that come out. And so it feels like, you know, and just talking a little bit about Quill and the difference between Quill and Medium is that Quill feels like it's a little bit more of, like, if Oculus Medium is kind of like Photoshop, it feels like Quill is kind of like Illustrator in a way, of like a little bit more vectorized 2D way, or maybe Oculus Medium is more like Maya, a 3D modeling tool, in that Quill is a little bit more like Illustrator, but kind of built for VR to be able to do all sorts of other time-based animations and kind of very unique in its own way. But just in terms of style, there seems to be more of a 2D painterly illustration type of feel rather than 3D sculpting. So just curious to kind of hear your thoughts about how you think about the differences between these two tools.
[00:24:36.147] Brian Sharp: Yeah, the way I put it is fundamentally Medium is about making objects, right? You're making a solid object that you are sculpting with your hands, and then Quill, I mean, they call them Quillustrations, which is kind of a cute portmanteau, but there's not really a word. Some of the pieces that I've seen in Quill really defy comparison to anything that you've seen outside of VR because it's like, okay, this is a thing that you can only experience in VR because it's really important that you be immersed in it. Some of them are very impressionistic where it's not just like, oh, you made a photorealistic scene. It's like, no, this is, I'm having an experience that an artist, you know, really wanted me to have in here. So they feel really quite different, right? I think Medium with the object thing, again, I think we're going for accessibility, right? Like as people, you know, I envision a future where 3D printers become more and more commonplace. And I want Medium to be the thing where like, if you're just a normal person at home and you like break something or you need a doorstop, like even mundane stuff, right? Like it's the way that you'd be like, all right, I can just make one of those. And then I'll just, like, in the morning it'll be done. And I think there's a lot of power there. And I think the experiences, I would say that one of the scenes that I've seen in Quill was the first time I saw something that felt like a piece of art that was native to VR, not just in the sense that it was made in VR, but in that you had to experience it in VR. And it was very much kind of a moving and impressionistic piece of artwork. So I think it's kind of in its own league in that regard.
[00:26:02.545] Kent Bye: Awesome. So what's next for Medium? What happens next?
[00:26:07.402] Brian Sharp: What we're gonna ship on December 6th. So, you know, with Medium, it's been a few years, and I would say the overall vision has been very consistent, right, of kind of 3D modeling for the masses, making a really intuitive and engaging way to make objects. But the specifics of that have really changed a lot as the landscape has changed. So I try not to make too many predictions, because we're gonna ship it on the 6th and we're gonna see what people do with it. We've already been so surprised by what just a handful, I mean, we only have tens of artists working with it right now, really, and they've already surprised us so much that when we get it in the hands of thousands and thousands of people in a couple months, if there's something that comes out of that that's super compelling and people are super excited about, obviously we're going to try to work with them on that, right? We really want it to feel like a community. I think that's true of any kind of art or practice: even if you're not trying to be a professional sculptor, if this is just a thing that you go into every once in a while, and there's nothing else there, and you don't really interact with anyone or get any feedback or affirmation, I don't think people are going to stick with it very much. And so we're building a bunch of community stuff. And a big part of that is us being involved in the community too, right? Listening to what people are saying and trying to work with them, hopefully getting updates out there on a regular basis so we can have a kind of ongoing dialogue with people. We also have a wish list of items that's like a million things long. Obviously we could work on it forever in a vacuum, but that's kind of pointless, right? Because we don't know which of those things matter.
We'd rather, hopefully, do it well, where we're able to test out these ideas, get them in people's hands, and see what people think. We'll learn more from our audience than we could possibly figure out ourselves, because we're a tiny team. So I would say that's what's next, right? Involvement with the community, seeing what people think, and working with them to make something awesome.
[00:27:59.059] Kent Bye: And finally, what do you think is the ultimate potential of virtual reality, and what might it be able to enable? That's a really broad question.
[00:28:10.101] Brian Sharp: The thing that I think is most important about virtual reality is its potential to be the first computing platform that actually inspires empathy. I think Mark touched on this a little when he was talking about the social VR stuff and some of the psychology, you know, how we read emotions off twitches of eyebrows and things like that. I do think a way that technology has not really failed us, but just hasn't helped us yet, is that it's really easy to have an argument with someone online and not really empathize with them at all, right, and get kind of entrenched and dug in and believe that you're right. There's that Louis C.K. bit where he talks about how, now that kids have cell phones, they don't have to see the other kid's face when they call them fat. That kind of stuff. At the highest level, I think the importance of VR is that if we can make you feel that you are there with another person, and you are accurately seeing them and their body language and their reactions, then hopefully we can help people empathize more with each other, especially people who are dissimilar, who live on the other side of the planet. I'm not going to pretend that Medium is my bid for world peace. I'm making it because I spent 20 years making video games, and most of that time I was making tools for artists, and I really love working with creative people. But I think that goes to some of the community stuff we're building too: having an affirming community that's supporting you, having a way that you can teach other people and learn from other people and have those interactions. That's not something you do on a monitor, because you don't feel like you're in a place with anyone else, right? And hopefully that experience is really empowering to people. So we're trying to do our small part of that.
But at a high-concept level, the reason I think VR is important is empathy, like, no question.
[00:29:57.917] Kent Bye: Awesome. Well, thank you so much.
[00:29:59.077] Brian Sharp: Of course. Yeah, thank you.
[00:30:00.941] Kent Bye: So that was Brian Sharp. He's the lead for Oculus Medium, which is going to be launching for free with the Touch controllers on December 6th. So I have a number of different takeaways from this interview. First of all, the ergonomics of the Touch controllers are awesome and they feel great. They kind of disappear in your hand. I love the fact that there are extra buttons and joysticks, and I think Oculus Medium, of all the programs that are out there, makes use of all the different buttons and possibilities that are available more so than anything else that I've seen so far. I think the types of professional applications that are going to be enabled with the Oculus Touch controllers are amazing, and they're going to allow a lot more sophistication than the Vive controllers, which are a lot more limited in the types of buttons that are available. So there are going to be a lot more non-fatiguing interfaces with the Touch controllers, especially when it comes to these kind of hybrid, somewhat professional applications like Oculus Medium. So, I am probably the most excited about Oculus Medium and what it means for the overall VR industry. I think it's just going to democratize the process of creating 3D objects, and I think it's also going to make people realize how much they need to go back and learn the principles of making art. With something like Tilt Brush, people can go in and make something and feel like an amazing artist. But if you go into Oculus Medium, you have to have a little bit more skill to actually make something that looks amazing. That's just something that is beyond the control of the creators of Oculus Medium. But it's starting to get to the point where it makes me want to be a lot better at creating digital objects.
It's a fun process to actually go into Medium and start to mess around. They have a lot of templates that you can use to start to put in different hands and feet and just start to clone different things. And so it's going to make it easier for you to make something that is a lot better than what you might be able to create within a 2D program, especially if you don't know anything about the process of using those programs. As Brian said, you have to have a very specific thing in mind when you start to use tools like Maya or 3ds Max or Blender, but Oculus Medium just allows you to play and create a lot more easily. Some of the art pieces that have come out do look really quite amazing, and at Oculus Connect 3 they had a glass case with all these 3D-printed objects that people had created. I think one of the really important things Brian was saying is that there are no abstractions getting into the weeds of vertices and such; it's using very physical and intuitive metaphors, something that everybody in the experience can intuitively and logically understand. Now, I think there's going to be quite a bit of experimentation that has to happen as you start to go in there and say, okay, what does this tool do? How could I use this? And so there's a little bit of that. But I think over the course of a day you can learn a lot, and over the course of a couple of weeks I think you'll be able to create some really amazing art. And I expect to see a lot of that art put into other VR experiences. Now, it's important to note that the objects being created are not going to be fully optimized for all contexts; the team isn't really concerned about performance per se.
So it's not necessarily a tool where you're going to be able to create some assets and just dump them into a Gear VR experience, just because they're not going to be optimized for that. So there are still going to be use cases where the 3ds Max/Maya/Blender workflow, for some people, is just going to be more efficient. A lot of times when you use something like speech-to-text recognition, for example, if it gets 95% of it right, the 5% of cleanup that you have to do can make it faster to just do it right the first time. And I think this might be the case in some situations: existing professional artists are still going to use the tools of the trade, because they are professional tools with a lot more capabilities, and you can automate and do all sorts of amazing things. Once you get into the process, it may actually be faster and more efficient to do it with the existing toolset. But the thing that really stuck with me was Brian talking about the artist at Oculus Story Studio named Goro Fujita, who would come into work early and just sculpt for a couple hours. It seems like when you're actually in Oculus Medium, you get this sense of embodied presence within the piece of artwork that you're creating, and it does create a higher level of intimacy. I think that in itself is going to be a huge revolution in the process of using these immersive computing platforms, and it's pretty obvious that one of the first programs you're going to want to use for that is one that lets you actually make 3D objects within a 3D environment. So, for me personally, I'm super excited to see where this tool goes, because right now they are trying to make it more of a consumer tool.
But I think Brian and the whole team at Oculus Medium have been surprised at how professional 3D artists are already starting to use this as a tool within their pipeline, just to be able to go into these environments and use their artistic skills to create these different pieces of art. So whether it's just used for rapid prototyping or to unlock an artist's creativity, I think it's still yet to be seen how it'll fully be integrated into existing pipelines. But I think it's going to open the doors to a lot of people who wouldn't otherwise be creating anything at all. And that, to me, is what is the most exciting: it's just going to unlock all this creative expression and potential. So I'm super excited about Oculus Medium and where it goes and what people end up doing with it. I think it's going to be similar to how Tilt Brush has started to produce all these amazing creations that we never even thought of making before. Brian also talked about environment creation, so not just characters and objects, but entire VR scenes being created from Medium. Again, optimization isn't necessarily one of their top priorities, because, as Brian said, Facebook is not necessarily in the business of creating a high-end industrial art tool; their goal is to connect the world. So, if anything, a lot of these digital objects are going to be optimized for the use case Brian described: creating virtual birthday cakes to share, maybe taking a 2D photo of them, and maybe having a 3D immersive experience of them, but not necessarily fully optimized for professionals to come in and start using right away.
So, you know, in my interview with the founder of Blender, the thing he said was: look, if you're going to make a table, you're going to do it in these existing 2D tools. And I think that was also reflected in what Brian was saying; it's not necessarily going to be like pulling out a ruler and getting all precise with different specs. So when it comes to precision, actually making something with a set size, using specific circles, precise squares, and other things like that, I think using a 2D program is going to be more likely. So thanks again for listening, and I really do appreciate all the support that's been coming in, whether it's emails or tweets or different Patreon donations. Sometimes when I do this, it doesn't feel like there's much of a safety net. It's a little bit like Tarzan economics, where I have to grab one rope, let go of it, and find the next rope, and when those next ropes aren't clear, the Patreon just helps me make sure I can continue to pay the travel bills, pay the server fees that it takes to serve all these audio files for free, as well as pay my mortgage and buy food and stuff like that. And the Patreon isn't enough for me to do all that alone, so I still need to get other sponsors and things like that. But in the long run, I'd love to be able to serve the community as my end customer and consumer, to serve the needs of what the community wants. That's what I want: to be of service to a larger community. If you want to see this podcast continue and have a healthy foundation, then become a donor to my Patreon at patreon.com slash voicesofvr.