Paul Mlyniec has been involved in computer graphics and virtual reality for over 30 years, chasing his dream of creating an immersive, 3D painting medium. He started his career at Silicon Graphics before joining MultiGen, where he helped develop what became the de facto standard in modeling for flight simulation. He's now with Sixense Entertainment as the Head Developer on MakeVR, continuing his career of making highly interactive, immersive applications.
Paul provides a great piece of historical framing for where we are today. Whereas the space program of the 1960s drove a lot of technological innovation, the "space program" driving technology today is immersive gaming within virtual reality.
Paul also talks about some of the user interface lessons for manipulating objects in 3D. For example, he explains how the two-handed interface is like 3D multi-touch and how it helps prevent motion sickness. He also shares the surprising insight that twisting a VR knob while pressing a button with the same hand turns out to be a straining action. And he talks about when physical input devices with buttons make sense versus when it makes more sense to use gesture controls with camera-based, visual input.
Finally, he talks about some of the lessons learned from the MakeVR Kickstarter and their plans moving forward.
Reddit discussion here.
TOPICS
- 0:00 – Intro and 30-year background in VR & computer graphics
- 1:37 – LEGO VR demo at SIGGRAPH 1996 with MultiGen that was head-mounted & hand tracked
- 3:00 – Space program of the 60s was to get to the moon, and today’s space program is gaming in virtual reality
- 4:46 – What benchmarks do you look at when moving computer graphics forward? How easy it is to create 3D graphics, with speed of creation & fluid flow.
- 6:39 – What are the differences between using 3D input devices vs. a 2D mouse and keyboard & how that changes the 3D graphics creation process?
- 7:58 – Who would be the target demographic for something like MakeVR? Would you see professionals who use Maya using this tool? How well rapid prototyping works with MakeVR
- 9:24 – How are you evolving 3DUI with the two-handed, fully immersive 3D user interface? Making the most out of limited real estate, and 2D buttons vs. 3D cube widgets.
- 11:19 – 3DUI experiments that failed? Twist control being a straining activity for your wrist. Pressing the button with the opposite hand seems to be a best practice.
- 12:38 – What are things that work with 3DUI? How the two-handed, 3D user interface is like 3D multi-touch & how it avoids the motion sickness of driving a viewpoint through the world.
- 14:18 – Physical controls vs camera-based motion controls in 3D. Physical controls meet the professional demands for speed and precision where you can have rapid input without physical strain.
- 16:13 – MakeVR Kickstarter lessons learned and plans moving forward. Too high of a price point, no early access, misgauged the target market.
- 17:33 – Best way to track progress of MakeVR
Theme music: “Fatality” by Tigoolio
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast.
[00:00:11.919] Paul Mlyniec: I'm Paul Mlyniec. I'm the head of development for MakeVR at Sixense, and I've been in the industry for 30 years now. The reason I got into the industry was I saw some holography. I actually took a course in holography, the way you'd take a photography class, back in 1975, and saw an opportunity, imagined I saw an opportunity, to build a 3D painting medium, an immersive visual medium. So I got into the industry and got lucky, because it was the right time. As I graduated in engineering, SGI was coming out with their first pre-production platforms. Virtual reality was still a little ways off, but there was already 3D tracking. And just around the corner, maybe five years later, the head mount showed up. And these were all the pieces that would be part of that 3D medium that ultimately turned into MakeVR. There's still lots to do in that area, but the reason I got into it was a very specific and narrow and potentially, unfortunately, impossible thing to do, and I just got lucky. It could have been 1950 when I graduated, and I might have been trying to create a virtual reality platform with steam power or something like that.
[00:01:37.443] Kent Bye: And Jason Jerald mentioned this Lego demo that you had worked on that was mind-blowing to him, and maybe you could talk a bit about what that demo was.
[00:01:46.742] Paul Mlyniec: When I was with a company called MultiGen back in the 90s, we hooked up with Lego itself. Lego saw us at SIGGRAPH, I think 1995, and proposed that we do a demo together. And so we hired modelers, they sent modelers over, and we worked with their creative group to create an immersive New Orleans, all of New Orleans, with the river and the bridge and the cemetery and the Superdome and Bourbon Street, all made out of Legos. We also added the ability to put kits together, to actually have Lego kits and Lego parts that you could snap together while surrounded by this broad Lego world. And it was head mounted and hand tracked, and it really was something. But it was very targeted as a demo rather than as a platform or a tool; it was more of a visual experience. But it was fun to be in, and we showed it both in the SGI booth and in the Digital Bayou, which was kind of the emerging technologies area at SIGGRAPH that year.
[00:03:01.838] Kent Bye: I see. And since you've been in the computer graphics industry for so long, I'm curious if you could point out some of the turning points that you saw along the way in the evolution of computer graphics to where it is today.
[00:03:14.360] Paul Mlyniec: You know, I think about it almost as a space program model. That may date me, but in the 1960s there was this set goal: we're going to get to the moon. And lots of innovation and technologies grew out of that. The transistor came out of that. Miniaturization came out of that. And I think the space program that we face now, the one that's driving us now, is gaming. It's what took the computer graphics industry from SGI as a $125,000, a quarter million, a half million dollar platform, down to a $100 card that you put in your $300 PC to drive much more than you ever could before. Virtual reality is also being driven by that same force. As you probably recall, the virtual reality industry got a very good running start in the 90s and then collapsed at the end of the 90s. It was just too expensive, and I think that Palmer Luckey just nailed it. The thing he did that is going to help bring down prices and just explode deployment out there is he said, I want VR for gaming. Once gaming is the space program for that and grows the industry, then it's going to be available to everybody who's doing rehab and modeling and education and all the other things you can do with virtual reality.
[00:04:46.640] Kent Bye: Yeah, it really seems like now there's this arms race of the technology driving the need for higher frame rates. So maybe you could talk about that trade-off in terms of where we're at, where the computer graphics are, and where you see it going in order to get this completely immersive, photorealistic virtual reality.
[00:05:04.963] Paul Mlyniec: For my purposes, I'm less of a taskmaster, not when it comes to frame rate, I am with frame rate, but with content complexity. I'm not so much thinking in terms of photorealism. I think in terms of the modeling experience in MakeVR. It's not so much about how complex the scene is; it's how easy it is to create it. Ultimately it leads to a complex scene, and you might reach a point where, wow, my frame rate is starting to come down, I've kind of gotten to the limit of what I can do in MakeVR because now I've got too many polygons in the scene. But it takes a long time to get there in MakeVR. So it's more about the speed of creation, the fluid flow of building something, so that it's so fast that you can create about as quickly as you think up new stuff, and you never feel like, oh god, it's bottlenecking again, there's traffic ahead and now traffic is slowing down, and I'm going to have to think this through and wait it out until I can complete this operation. Instead, you're kind of free to make that left turn, to let one thing lead to another. You make a mistake and you say, hey, that's kind of cool, it's not what I intended, but that's sort of cool, and then work from there instead of having to plan out a modeling session from the beginning. So for me at least, and I'm probably way in the minority, it's more about speed and fluidity of creation than anything else.
[00:06:39.707] Kent Bye: Yeah, so maybe talk about that process of working with the STEM controllers and MakeVR, and maybe how you benchmark in terms of what you can create in this immersive virtual reality environment versus making the same thing in a 2D environment.
[00:06:54.078] Paul Mlyniec: Obviously, using a Maya or a Max, you can create anything. You can create beautiful scenes, but the process is so different than it is when you're working in a real sandbox. Effectively, MakeVR is a sandbox. If you're making a character, for instance, in Maya, you have to think 20 steps ahead, and if you make a mistake, if you miscalculate, you're going to hit a dead end. It's a very different process; you're always thinking about something and you're juggling. Whereas in MakeVR you have this direct interaction; it's very concrete. And that sort of explains why you know what to do when you walk into MakeVR, because it is concrete. It's the way you do things, and you're using your hands, and nobody has to tell you how to do that. So you strategize through a modeling session in a way that's familiar, the same way you would play in a sandbox: I've got some sticks, and here's some sand, I'm going to put these sticks in here. You kind of think about it in the same way, very physical and direct.
[00:07:59.139] Kent Bye: And so would you say that the target demographic is more novice 3D modelers who are just getting into it for the first time, or do you foresee professionals, people who are very familiar with Maya, using MakeVR for any applications?
[00:08:13.847] Paul Mlyniec: It really is twofold, so that a new user who's never done any modeling before, never thought about it, can be very effective. But also an experienced modeler, an architect, or an industrial designer can build a prototype very quickly and very directly, and kind of have a blank slate and work from a blank slate much more quickly and much more fluidly. Say a Frank Gehry would feel very comfortable. In fact, I've modeled up the Bilbao museum in Spain in like an hour or two, just because there's a very natural translation of the vocabulary of MakeVR into his vocabulary. I've heard he works by crumpling up a piece of paper and throwing it on the floor, and oh, that's a good start, and he'll crumple up another one and start thinking in that way. MakeVR is sort of that environment, that very early concept environment, for a creative person to get a start in. And then, because it has a CAD engine built into it, you can export that into SOLIDWORKS or CATIA or one of the other advanced CAD systems and use it as a starting point.
[00:09:24.752] Kent Bye: In terms of the user interface, right now we have this menu-based 2D method of selecting how you want to do things. Can you talk a bit about whether you're trying to evolve that within MakeVR, or what kinds of interfaces you're trying to build there?
[00:09:39.717] Paul Mlyniec: We are. The two-handed interface has been around for a little while now, and we have a fully immersive 3D GUI. It was designed initially for full immersion, for head-mounted use, and that's why you'll see a panel floating over your left hand. It's fully self-contained, and the idea is you should never have to tip up your head mount to see the key that you need to press. You should not need any other aids inside to run a GUI. And it has a natural two-handed interface, but that only gets you so far; it gets you object and viewpoint manipulation. There are always going to be modes and content and other kinds of control that you need to bring in, so a GUI that lives with you in 3D is kind of essential. And, like I say, the two-handed interface has been around for a while, and we've been working with it, so we're sort of at that first horizon where there are lots of lessons learned. Now we have a good handle on it, and we've gotten it into a lot of people's hands. So now we sort of know what the next steps are: how we make the most out of limited real estate, which widgets really make sense in 3D, and which are really just naturally 2D. Putting those in 3D and floating them in space is not necessarily the right thing to do. A button is a 2D thing. But a color cube, that's a 3D thing. And there are other navigation and abstract manipulation widgets that really do make sense. We've kind of got a grip on those now and kind of know where to go with it next.
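To make that hand-anchored panel idea concrete, here is a minimal sketch of how such a palette could be posed each frame, assuming 4x4 column-vector pose matrices from a tracking API. The names and offset values are illustrative, not MakeVR's actual code.

```python
import numpy as np

# Fixed offset in the hand's local frame: roughly 8 cm above the palm,
# tilted back about 20 degrees so the panel faces the user.
PANEL_LOCAL_OFFSET = np.array([
    [1.0, 0.00,  0.00, 0.00],
    [0.0, 0.94, -0.34, 0.08],
    [0.0, 0.34,  0.94, 0.00],
    [0.0, 0.00,  0.00, 1.00],
])

def panel_world_pose(left_hand_pose: np.ndarray) -> np.ndarray:
    """Parent the panel to the tracked left hand by composing poses.

    Because the palette rides the hand, it is always in view inside
    the head mount when you raise your hand -- no tipping up the
    headset to find a key. (Illustrative sketch, not Sixense's API.)"""
    return left_hand_pose @ PANEL_LOCAL_OFFSET
```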
[00:11:20.188] Kent Bye: And can you talk a bit about some of the things that you've tried with that 3D user interface that just didn't work at all?
[00:11:25.791] Paul Mlyniec: Well, one surprise was we really saw a twist control as being a real natural, and it's something you can't do with a mouse. You can't take a mouse and twist it the way you would a dial. So we thought, well, that's going to be really great. And it turns out that, at least with the configuration we were using at that point, the Hydra, pressing a button and twisting is a straining activity for your wrist, and you really do feel it. You can only do so much of that, and it totally blindsided us. We thought we had the answer to the limited vocabulary of things you can do, and as it turns out we don't, or we have to rethink it, maybe do the twist on one hand and push the button on the other hand, because there are some times that we do that. For instance, our execute button: when you're executing a cut, you'll set up and say here's the thing that's going to be cut, and you'll grab the thing that's going to do the cut and place it where you want it, but you press the button on the opposite hand, because you don't want the placing hand to shake as it presses the button. So sometimes the opposite hand as the effector is the right thing.
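The "opposite hand as effector" pattern he describes can be sketched in a few lines; the controller and method names here are hypothetical, not from MakeVR:

```python
# Cross-hand confirmation: the placing hand positions the cutting tool
# continuously, while the execute button lives on the OTHER controller,
# so pressing it can't shake the pose being captured.

class CutTool:
    def __init__(self, scene):
        self.scene = scene
        self.cutter_pose = None  # pose of the object doing the cutting

    def update(self, placing_hand, confirming_hand):
        # The dominant hand tracks the cutter every frame...
        self.cutter_pose = placing_hand.pose
        # ...but the commit comes from the opposite hand, so the button
        # press never jolts the hand whose position matters.
        if confirming_hand.button_down("execute"):
            self.scene.cut(self.cutter_pose)
```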
[00:12:38.862] Kent Bye: I see, interesting. So yeah, what are some of the things that really do work in the 3D interfaces then?
[00:12:45.649] Paul Mlyniec: The two-handed interface, which you see in MakeVR, is a two-point interface, effectively. It's very much like 3D multi-touch. On your iPhone, you just press with one finger and slide it over, and you've done a translation. You press with two fingers and squeeze them together, and that pinching action scales down or scales up. That's a very natural paradigm, and now you don't even have to teach people how to do it, because they know how to do it on their iPhone. You tell them it's just 3D what you do on your iPhone. But it has an effect that I personally didn't see when we were originally doing it. The two-handed interface was just a great interface and made it so you could move around the scene very quickly, grab objects, and do anything you wanted. But when you wear a head mount, as you know, you can get sick. Motion sickness is a big issue, and it's sort of a hard wall: the human system gets sick if you drive a viewpoint through a world without any physical feedback. The two-handed interface turns that inside out, so when I grab a point and pull the world towards me, I'm not driving my viewpoint, I'm grabbing the world. And so if I were to grab a coffee cup and shake it around, I'd be no more likely to get sick than I am in MakeVR. So the two-handed interface has solved a problem that honestly I didn't see was there until I saw other people getting sick in other applications.
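Here is a minimal sketch of that two-point "grab the world" math, an illustration under stated assumptions rather than MakeVR's source: one grabbed point translates the world, and two grabbed points give the 3D analogue of pinch-to-zoom, applied to the world transform rather than the viewpoint.

```python
import numpy as np

def one_hand_delta(p, q):
    """One grab point: slide the world by the hand's motion, the 3D
    analogue of a one-finger drag (translation only)."""
    return q - p

def two_hand_delta(p1, p2, q1, q2):
    """Two grab points: (p1, p2) are last-frame hand positions and
    (q1, q2) current-frame. Returns (scale, translation) to apply to
    the WORLD transform, not the camera: the user moves the world,
    so the motion is self-generated, which is what avoids sickness."""
    old_span = np.linalg.norm(p2 - p1)
    new_span = np.linalg.norm(q2 - q1)
    # Pinch apart to grow the world, squeeze together to shrink it.
    scale = new_span / old_span if old_span > 1e-6 else 1.0

    old_mid = (p1 + p2) / 2.0
    new_mid = (q1 + q2) / 2.0
    # Chosen so a world point at the old midpoint lands exactly on
    # the new midpoint: x' = scale * x + translation.
    translation = new_mid - scale * old_mid
    return scale, translation
```

A full implementation would also derive a rotation about the axis between the two hands; that is omitted here for brevity.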
[00:14:19.311] Kent Bye: Interesting. And one of the things about the STEM controllers is that you're holding these physical things in your hand and pressing buttons. Is there any way to use just the STEM module and start to use motion tracking of your fingers and use your hands a little more interactively?
[00:14:34.088] Paul Mlyniec: I think for certain applications that really does make sense, or even going purely vision-based, beyond the STEM, just tracking the hand. But I've focused a lot of my career on professional applications like modeling, like radiology, like training, and other areas where a user typically could click 20 times in a minute, click and drag. If you watch a modeler, it's constant. If you watch a radiologist, even though their moves are subtle, they're very much like a gamer. You'll see they have one hand on their keyboard, the other hand on the mouse. The scroll wheel is barely moving; they're using their shortcuts and constantly adding input to the system. And like I say, it could be 20 clicks a minute. If you're trying to replicate that with a gesture-based system, it's a strain on the hands and it's slow. And for a radiologist or a modeler, if there's a deal breaker for those guys, it's speed. You expect me to do this and it's going to take four times as long? I'm not going to do it. I'm just not going to do it. So from a professional standpoint, physical controls are great. For a casual walk-up, a gesture-based system makes sense. You'll see surgeons in the operating room are happy to reach a hand up and just wave to scroll through images; they might go forward and back to scroll in depth, and side to side to get to a new image. So it makes sense for some applications and not for others.
[00:16:13.557] Kent Bye: I see, yeah, that's really helpful. Can you talk a bit about the state of the Kickstarter that Sixense had for MakeVR, what was learned from that, and where you're going moving forward?
[00:16:23.965] Paul Mlyniec: Lessons were learned quickly, as you know. Kickstarters happen in the first 24 hours, maybe the first 48 hours, and we saw immediately that we had made some mistakes. We were asking too much for MakeVR. We were not providing early access. Maybe we misgauged the market: we were going for 3D printing, particularly for 3D print services, rather than people who own their own 3D printers. Working with Shapeways is great, and that was a real coup to begin working with Shapeways, but we sort of ignored that other side of it, and someone who's just spent $1,500 on a 3D printer is probably more likely to want a modeling system for that 3D printer. So we learned a lot. Going forward, we may go up on Kickstarter again, but we are looking to carry it forward, to move forward. MakeVR, we've got a lot in it now, and the reactions to it are always very strong. We really think we've got something. I will personally drag it along, if that's what it takes.
[00:17:33.723] Kent Bye: Great. And finally, what's the best way for people to kind of get more information and keep in touch?
[00:17:40.089] Paul Mlyniec: The Sixense website. I think there's a MakeVR area there that kind of followed from the comments section of the Kickstarter, so there's still kind of a vibrant dialogue going on there. Okay, great. Well, thank you. All right. Well, thanks a lot.