#446: Ken Perlin on VR as a Social Interaction Technology

Ken Perlin is a professor of computer science at NYU, and is researching how to use VR to enhance social interactions. I had a chance to talk with him about that, as well as how he's combined his passions for math and art with his procedural textures.

Here’s a demo video of NYU’s Holojam project:

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR podcast. On today's episode, I have Ken Perlin, who is a professor of computer science at NYU. Ken has been involved in the graphics industry for a long, long time, combining his love for math and art and coming up with procedural textures that have enabled more realistic-looking computer graphics, for which he eventually won an Academy Award. Ken is also working at NYU doing some different social interaction experiments within virtual reality, and is interested in VR not because of the technology, but for how it can help us enhance our social interactions. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. This is a paid sponsored ad by the Intel Core i7 processor. If you're going to be playing the best VR experiences, then you're going to need a high-end PC. So Intel asked me to talk about my process for why I decided to go with the Intel Core i7 processor. I figured that the computational resources needed for VR are only going to get bigger. I researched online, compared CPU benchmark scores, and read reviews over at Amazon and Newegg. What I found is that the i7 is the best of what's out there today. So future-proof your VR PC and go with the Intel Core i7 processor. This interview with Ken happened at a VR meetup in New York City on July 16th. So with that, let's go ahead and dive right in.

[00:01:46.835] Ken Perlin: My name is Ken Perlin. I am a professor of computer science at New York University. And we're doing a lot of work in VR, but ironically not because we are interested in VR. But mainly because virtual reality right now provides a very good technical vehicle for us to explore what we're really interested in.

[00:02:07.726] Kent Bye: So is that what you're interested in is humans?

[00:02:11.193] Ken Perlin: Well, hopefully everybody's interested in humans. We like to ask questions like, in 10, 15, 20 years, when everybody has the glasses or the contact lenses, and we're not even thinking about it, and kids grow up in a world where mixed reality is just reality, what will that be like? And how do we prototype what that might feel like, how we might hang out together?

[00:02:36.284] Kent Bye: So it sounds like you've created this HoloJam. Maybe you could tell me a bit about what the HoloJam project is.

[00:02:40.833] Ken Perlin: HoloJam is a project we started about two years ago. We started in 2014, and we showed it to hundreds of people at SIGGRAPH 2015 in LA last August. You all put on a headset, strap on some wrist and ankle markers, grab a wand, and then you're all physically in the same room, and you see each other as avatars of yourselves, and everybody can collaboratively draw in the air. In a sense, it's a kind of experience of going together onto the holodeck.

[00:03:13.436] Kent Bye: You know, there's a lot of different social dynamics that it sounds like this is enabling. And so what do you think is different between people just hanging out together and people doing this type of interaction together within VR?

[00:03:24.546] Ken Perlin: I don't think there's a fundamental difference. I think that to be human means to use technology to enhance our social interactions. If we are talking and we're standing in front of a whiteboard and I hand you a marker, that's a kind of virtual reality. Like we're using technology to change the world around us to suit our own purposes. Even if we both order from a menu in a restaurant, that menu is a form of virtual reality. I mean, you can't taste or touch or smell or hear or feel the words on the menu. And yet we've all adapted to a reality that only exists inside of our brains, where those squiggles are meaningful. The things we're doing are not fundamentally different from that. We're just extending that idea that people create a consensual hallucination that we all share using whatever technology we have, and we're just taking it one step further.

[00:04:21.622] Kent Bye: What do you think some of the biggest open questions that you have around that area that you think are really driving that research forward?

[00:04:29.627] Ken Perlin: Yeah, I think the thing that's fixed is this: within the last 30,000 years, since say the time of the Cro-Magnon, what a baby's brain is capable of growing into and learning has been pretty much genetically fixed. There hasn't been a tremendous amount of genetic macroevolution. So now it's all selection from existing genes, and the same thing with these bodies. So the tools we have to work with, which are fixed, are brains and bodies. We're not going to, in the near term, reach in and change these brains and make them into different, essentially non-human brains. So all of our technology is kind of doing a dance around the big power-up, which is this biological evolutionary endowment of being human in the first place. Our job, when a new technology comes along, is to understand how to use that power as well as possible.

[00:05:22.131] Kent Bye: And to me, when I look at history, I kind of see the Gutenberg Press as kind of like this seed that then kind of spurred the Renaissance by being able to capture information and knowledge in a new way. And it feels like, in a lot of ways, the computer is like the Gutenberg Press of the 21st century, both with just computing in general, with the World Wide Web, the internet, with augmented reality and virtual reality and artificial intelligence. All these things seem to be like this new platform that's introducing a lot of new capabilities. I'm just curious your perspective on this.

[00:05:50.940] Ken Perlin: So if you replace the phrase Gutenberg Press, which you're using as a catchphrase, with what you're really talking about, which is magnifying humans' means of distributing information between each other: the evolution of writing, the replacement of papyrus with the codex, you know, bound books, about 2,000 years ago, the telephone, movies, recording onto phonographs. It's amazing how many of these innovations were actually done by Thomas Edison, all pretty much in the same place at the same time. Computers, the web. There's the internet and then the web, and the web made the internet more democratic because you didn't have to be a programmer anymore to be able to use it. And every one of these things is basically saying, just as Twitter is, just as Snapchat is, we're making it easier for communities to form. And that's going to keep happening.

[00:06:52.968] Kent Bye: So I just want to take a step back into your own history. I understand you have a background in math, and I'm just curious what your path was into computer graphics, and where you see your entry point into a trajectory towards virtual reality.

[00:07:06.575] Ken Perlin: When I was a little kid, I really liked art. I was always very visual. I didn't know I liked math until I got to high school and had some really great math teachers. And it turned out that in my high school, the only good teachers were the math teachers, and they were great. So I ended up becoming very excited by math, and I tied it in with my other passion, which was art and visuals, and add computers to the mix, and you have computer graphics.

[00:07:34.643] Kent Bye: And so it sounds like there were a lot of different computer graphics innovations that you were working on with Tron, some of which you could say are now feeding into VR in real life. So what's the connection there, between what happened with Tron and what you were working on there?

[00:07:50.290] Ken Perlin: Well, actually, I'd say probably the first real contribution that I made to the field was in response to Tron. We had finished Tron, and I was unsatisfied by the fact that everything you saw in that movie looked like it was made by computer-aided design software, because it was. The entire state of the industry had been repurposed from automotive and aerospace, and it was all boxes and spheres and lines. And so the year after Tron, I just started hacking on trying to create procedural textures, things that look natural, trying to get some of that feeling of, you know, when a painter picks up a paintbrush and starts making things that look like they could be in the real world. So I developed this entire field of procedural textures, which is still used now, which is a way for artists to use a kind of programming to create naturalistic-looking things in graphics.
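For readers curious about the technique Perlin is describing, here is a minimal Python sketch of the gradient-noise idea behind procedural textures; this is an illustrative reconstruction, not his actual implementation, and all the names in it are the author's own. The core trick: assign pseudo-random unit gradients to integer lattice points, take dot products with the offset from each surrounding corner to the sample point, and blend the four results with a smooth fade curve so the texture has no visible seams.

```python
import math
import random

# Illustrative setup: a shuffled permutation table hashes lattice
# coordinates into a small set of unit gradient directions.
random.seed(0)
PERM = list(range(256))
random.shuffle(PERM)
PERM += PERM  # duplicate so nested lookups never index out of range

GRADS = [(math.cos(a), math.sin(a))
         for a in (2 * math.pi * i / 8 for i in range(8))]

def fade(t):
    # Smoothing curve 6t^5 - 15t^4 + 10t^3: zero first and second
    # derivatives at t=0 and t=1, so cell boundaries blend seamlessly.
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad(ix, iy, dx, dy):
    # Dot product of the hashed corner gradient with the offset vector.
    g = GRADS[PERM[PERM[ix & 255] + (iy & 255)] % 8]
    return g[0] * dx + g[1] * dy

def noise2(x, y):
    """Gradient noise in 2D, in roughly the range [-0.7, 0.7]."""
    x0, y0 = math.floor(x), math.floor(y)
    dx, dy = x - x0, y - y0
    u, v = fade(dx), fade(dy)
    # Contributions from the four corners of the containing cell.
    n00 = grad(x0,     y0,     dx,     dy)
    n10 = grad(x0 + 1, y0,     dx - 1, dy)
    n01 = grad(x0,     y0 + 1, dx,     dy - 1)
    n11 = grad(x0 + 1, y0 + 1, dx - 1, dy - 1)
    # Bilinear interpolation with the faded weights.
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)
```

Summing several octaves of this function at doubling frequencies and halving amplitudes is what yields the natural-looking marble, cloud, and fire textures he alludes to, and the same evaluation runs unchanged per-pixel on a GPU, which is why the technique carried over so directly to real-time shaders.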

[00:08:47.410] Kent Bye: You eventually got an Academy Award for your lifetime achievements. What were some of the big things that were contributing to visual effects for what you did?

[00:08:56.252] Ken Perlin: Oh no, I got an Academy Award specifically for the development of some of these procedural texture technologies. From the moment I started making them in the mid-80s, they immediately started getting used in special effects. Like right away. Because they were fairly simple, easy-to-use techniques. I'd say probably, my guess is that the major reason I ended up getting the Academy Award was that the year they put my name forward was when Ed Catmull, who had founded Pixar, was the head of the technical committee. This is only a conjecture. But he knew, and he has told me, that my procedural textures were all over every frame of Toy Story and every computer graphics film ever since. And he thought, well, this should be recognized. And he was in a position to recognize that.

[00:09:42.934] Kent Bye: And it seems like a lot of your techniques kind of feed into the demo scene as well. Is that correct? Or some of the people that try to make really elaborate scenes in very small size?

[00:09:52.978] Ken Perlin: Yeah, it turns out that the demo scene, which is a whole later generation, is people who think like me. As computers have become faster, you can start doing things in real time, especially with the rise of GPUs and the absolutely brilliant work back in 2013 from the ShaderToy people, who took my techniques, applied them to GPUs, and created an entire new revolution. I think there's a new generation of young people who are building on Moore's Law, computers continually getting faster, to create new forms of real-time art. I didn't have the luxury of creating real-time art when I started, but all the same techniques work perfectly in real time.

[00:10:39.421] Kent Bye: And maybe you could talk a bit about some of the stuff that you did in terms of flocking behaviors, in terms of having a way to have kind of simulating the movement of animals and flocking.

[00:10:48.609] Ken Perlin: Oh, right. Back in the 90s, I started applying procedural techniques to movement, and we formed a whole team at NYU, just about 20 years ago now. We did a whole set of work which we called Improv, improvisational animation, where you have semi-autonomous characters and, you know, subtle secondary movements. Now it's become very, very popular. You've seen it since the rise of Massive and various AIs, but I think that when we started, people didn't believe you could do that. There was an interactive animation I did in 1994 that I think was the first time we started showing that you could have a real-time interactive character that would convey personality in a way that people would buy into the character.

[00:11:39.085] Kent Bye: And when you think about the future of graphics, I sort of see the LCD screen as perhaps having a limit in terms of the fidelity that we can see. You have this other technology with Magic Leap, which is sort of like this, shooting photons directly into your eyeball with this virtual retina display. So I'm just curious, from your perspective, the future of graphics, if you kind of see it eventually kind of moving beyond the screen-based media, and if it's really going to be kind of more of just direct photon injection into the eyes.

[00:12:06.452] Ken Perlin: Well, since the dawn of time, everything has been shooting photons directly into our eyeballs. That's kind of all we do. So the fact that Magic Leap is doing it in a more clever and subtle way that presumably helps with focus and accommodation isn't a radical shift. If they manage to do all of that with very high dynamic range, in a form factor that will eventually fit within your glasses, then I think it'll have enormous impact.

[00:12:36.980] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:12:44.066] Ken Perlin: I don't know what the eventual potential will be of what people are currently thinking of as virtual reality. Obviously, the location-based entertainment of Pokemon Go that's just gone viral, which some people are calling augmented reality, but I just think of as location-based entertainment, because you're not really seeing objects superimposed as though they're in the world, you're just seeing a Google Glass-like experience through your phone, shows that people are interested in social interactions that involve the power of the computer. And that tells us that there is a path where this will become more and more relevant, but it does not tell us which paths will succeed and which will not. I guess we're just going to have to find out.

[00:13:34.149] Kent Bye: Awesome. Well, thank you so much.

[00:13:36.064] Ken Perlin: Thank you.

[00:13:37.601] Kent Bye: So that was Ken Perlin. He's a professor of computer science at NYU. So I have a number of different takeaways from this interview. First of all, I was really struck by Ken's definition of technology and virtual reality, because when you look at technology as anything that humans are creating, then you could say that there are all sorts of ways that we augment our ways of communicating and of visualizing abstractions. And so you could call a book a virtual reality. So whether you have that loose of a definition of virtual reality, or whether you look at VR much more specifically in terms of its immersive capabilities, head tracking, and all these different benchmarks for whether you're able to achieve a couple of illusions, essentially the place illusion and the plausibility illusion, essentially tricking your perceptual mind that it's in another world, which I think is what most people would categorize as virtual reality, the main point is that, regardless of that definition, these are still abstractions of reality. And in a lot of ways, what's driving Ken and his research into VR is how this technology can enhance our social interactions with each other. The other thing is that our brains and bodies are kind of the common thread through the evolution of humanity. In essence, the only thing that's consistent through all these different modes of looking at the technology is our human brains and bodies, and these are the tools that we're working with that are fairly fixed. But essentially, he's just really trying to figure out how we can magnify human means of transmitting information between each other. So that's the main gist that's really driving a lot of his research into VR. And his contributions to procedural textures, like he said, have been used throughout every frame of a lot of different animated films.
They've also been driving a lot of real-time interactions and graphics. The whole demo scene is pretty amazing if you've never taken a look at it: some of the projects they're able to do with just 64k worth of code, which is essentially just procedurally generating a lot of these shaders that then give you a full interactive type of experience. So it's really showing that in the future we're moving towards real-time interactive graphics, so that when we go into these virtual reality experiences we're able to see really full and rich procedurally generated environments in virtual worlds. Also, I just noted that he cited that back in 1994 he created a real-time interactive character that had a personality. So in the history of computer graphics and interactions, I think that's a pretty interesting marking point for the first time that Ken, at least, created one of those real-time characters, and it could have actually been one of the first ones ever created. So that's all that I have for today. I'd like to thank you for joining me here on the Voices of VR podcast. And if you enjoy the podcast, then spread the word, tell your friends, and become a donor at patreon.com slash Voices of VR.
