#139: Henry Fuchs on the early history of Virtual Reality with Ivan Sutherland & the Sword of Damocles

Henry Fuchs has been involved with virtual reality technologies for over 45 years, since 1970, when he first heard about Ivan Sutherland's Sword of Damocles from Sutherland's 1968 paper titled "A head-mounted three-dimensional display." He talks about traveling to the University of Utah to study with Sutherland, and how watching some of Sutherland's students hand-digitize Sutherland's VW Beetle into polygons inspired his dissertation on using lasers to automatically capture the 3D shape of objects.

Fuchs has also recently been working on telepresence applications of VR, and he talks about some of the open problems and challenges in creating a compelling telepresence implementation within a virtual environment.

In this interview, Fuchs provides a lot of really interesting insights into the history of virtual reality, ranging from those first interactions with Ivan and how the Sword of Damocles came about to how VR research has been sustained over the years. He points out the importance of flight simulation in the history of VR, and how much more flexible computer-generated flight simulators were than the model-train style of flying television cameras over physical scale models.

Overall, Fuchs is full of fascinating observations about the history of computer graphics and some of the major milestones that virtual reality has passed over the years.

Become a Patron! Support the Voices of VR Podcast on Patreon

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:12.076] Henry Fuchs: So I'm Henry Fuchs, University of North Carolina. I've been involved in VR since before the term. I've been doing things in VR technology that's all the way from the tracking systems to the image generation, to the displays, to the application.

[00:00:28.894] Kent Bye: Yeah, I heard that you actually were one of the students of Ivan Sutherland. Is that true? And maybe you could talk a bit about, you know, the origin, the genesis of the Sword of Damocles and where you kind of fit into the history of VR.

[00:00:40.719] Henry Fuchs: Sure. I went to the University of Utah in 1970 because one of my other professors as an undergraduate showed me the paper that Ivan Sutherland wrote in 1968 on a head-mounted display, so I thought it was just the coolest thing since sliced bread, and so I made a beeline for Utah. The disappointing thing was that when I got there, and I got there in 1970, Sutherland had already started a company with his colleague Dave Evans, Evans & Sutherland Computer Corporation, and he was spending a large part of his time at the corporation and not as much time at the university. So those of us who were interested in this were more on our own than I think we expected to be. So I remember the head-mounted display and the Sword of Damocles, of course, but there were very few of us who were actually using that in that lab.

[00:01:33.288] Kent Bye: I see. Did you get a chance to actually use it then?

[00:01:35.989] Henry Fuchs: Sure. What I was interested in, in fact what I did my dissertation on, was how to automate the acquisition system so you have some content to display in this head-mounted display. I remember the conversation with Ivan because he had assigned to the class of graphics students after me, so in 1971 or 1972, the task of digitizing his Volkswagen. And I remember coming into the mailroom in the department and looking out the window and seeing his old Volkswagen Beetle with lines being drawn on it by a group of students, and people taking measurements with tapes and yardsticks and writing down the coordinates of these vertices and then evidently making the list of polygons that connect these vertices. And as I was staring at this sight out there, with pity, since I had taken the previous class and he didn't make us do this, Ivan walked in to get his mail. And I said, Ivan, this is really cruel and unusual punishment to make students manually digitize your car. And Ivan, nonplussed, said, yes, but Henry, while you and I are idly chatting here, they're actually getting some results, aren't they? Oh, that hurts. So shortly thereafter, some guy from the Stanford AI lab came to talk about some robot they were building, which was able to avoid obstacles, you know, in a room. And he tossed off that it had this sort of laser scanning system that would then detect the distance to various objects. And so, to me, that's what you need. What you want is just a laser scanning system that detects the whole 3D depth map, and then from different points of view you could scan an object and you could put it together. It's sort of like you run a visible surface algorithm backwards, because a graphics rendering system basically takes a bunch of polygons and points and surfaces, and then figures out how to go scan line by scan line, you know, to get a Z-buffer and then to get an image, and you just run this whole pipeline backwards. So I went to Ivan and a couple of other people and said, how about a dissertation in which you do that? You could automatically scan objects and put the various scans together, and then you'd have 3D data, and then you could display it anywhere. So that's how I got started doing stuff.
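(To make Fuchs' "run the visible surface algorithm backwards" idea concrete, here is a minimal sketch in modern Python, not anything from his actual dissertation: a Z-buffer renderer turns geometry into per-pixel depths, so a depth scan can be back-projected through a pinhole camera model into 3D points, and scans taken from several known viewpoints merged into one point set. The camera intrinsics and poses are assumed inputs.)

```python
# Minimal sketch: invert the rendering pipeline's projection step.
# A renderer computes pixel (u, v) and depth z from a 3D point; here we
# recover the 3D point from (u, v, z) instead, per scan, then merge scans.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # inverse of the projection u = fx * x / z + cx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[z.reshape(-1) > 0]  # drop pixels with no laser return

def merge_scans(scans_with_poses):
    """Rigidly transform each scan by its camera-to-world pose (R, t) and
    concatenate, assembling one 3D point set from many viewpoints."""
    return np.concatenate([pts @ R.T + t for pts, (R, t) in scans_with_poses])
```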

[00:04:26.886] Kent Bye: Wow, so you were able to actually build sort of a LiDAR scanner or a scanner using lasers to be able to actually kind of create a depth map and create objects in 3D without having to manually draw polygons on 3D physical objects.

[00:04:40.706] Henry Fuchs: That's a nice way of putting it. But it was so crude that when I scanned my girlfriend, I'm not sure that you could tell the gender. You could tell that it was a human being. And I would scan other objects that would happen to be in the lab, you know, like our industrial vacuum cleaner or something like that. But it was certainly not very high resolution.

[00:05:04.263] Kent Bye: And so to me it's pretty mind-blowing to think about how, back in 1968, all these technologies, from the graphics and the head tracking and everything, came together in the Sword of Damocles to be really the first virtual reality head-mounted display. What was the inspiration or the genesis, or what were they trying to do back then, for all these technologies to come together like that?

[00:05:25.534] Henry Fuchs: My understanding of Ivan's inspiration was, at least I've always thought, his visionary 1965 paper on the ultimate display. And even in 1965, he said that we should think about the computer display not as something that draws dots and lines, but as a window into a wonderland created by the computer, and that that wonderland could be as realistic as our real world. And so I've always assumed that when he built the head-mounted display, he had that in mind, that this was a first step toward creating a realistic experience of a wonderland created by a computer.

[00:06:14.163] Kent Bye: And since you've been involved in the realm of virtual reality since the 1970s, it sounds like, you've probably seen a lot of waves come up in the 90s, and now there's another surge. But what is it that's been sustaining you through all of that? Has it been government research, like the army funding training simulations, or what is it that you've been sort of doing since then?

[00:06:36.065] Henry Fuchs: Well, we opportunistically do the things where we think we can make a contribution. And then, of course, as you say, you're dependent on support for this, that, or the other. And that's opportunistic: as NSF or DARPA or ONR or companies or individuals think that this is worthwhile, they will then support it. But it goes through phases, as you say. And I'm really excited that this is the best phase ever.

[00:07:07.019] Kent Bye: Yeah, it sounded like in the keynote that you gave last year at IEEE VR, you had kind of thrown away your existing keynote and completely rewrote it to take into consideration the Facebook acquisition of Oculus VR. And so why do you think these breakthroughs didn't come from the academic community, versus everything being synthesized by a private startup?

[00:07:30.907] Henry Fuchs: Well, the breakthroughs, not very much recognized, come from all over the place. But the reason that we don't see it as a big splash is because no community until last year put serious money behind it. As an example, DARPA, which is probably the richest funding agency, at various times has put what they think is a lot of money into it. What they think is a lot of money is, as far as I can tell, about tens of millions of dollars. Which is more, by the way, than as far as I can tell any other funding agency has put in before. And what you could get for that is maybe an improved design for a head-mounted display. That's what you could get for tens of millions of dollars. And that doesn't make much of a dent in the world. But when you put two billion dollars into it, you might be able to change the world. And no funding agency has beforehand put two billion dollars into it. That's the big change.

[00:08:38.151] Kent Bye: And so I know you've been looking lately at a lot of the issues of telepresence. And so what are some of the open problems when it comes to being able to put your sense of self and your avatar into a room, to be able to kind of collaborate with other people and to have, like, a really immersive telepresence experience?

[00:08:56.910] Henry Fuchs: The problems that remain in order to have a meaningful telepresence experience are still some of the same basic problems that we've had for decades. That is, we need to have a good enough display that gives us the illusion of somebody being next to us. And that means a wide field of view, see-through, high-resolution display. We need to have a good tracker so that the system knows where we are looking, so when we look to the side or move a little bit, the system knows with very high precision and very low latency where we've moved. Then we need some image generation to respond to that with very low latency, and a display. And then most of all, we need the acquisition system, so that where this person is who's far away, this acquisition system can continuously update the environment map of that faraway place, from which we could do the display at our local place. So those four things are still a challenge. There are different levels of difficulty for those. In some ways, you could think of it as the pipeline: the acquisition, the tracking, the rendering, and the display. And those all remain. The rendering is in pretty good shape, actually, because the GPU manufacturers, driven by the gaming market and others, have that in pretty good shape. The other three are not in good shape at all.
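(As a rough illustration of the four-stage pipeline Fuchs lays out, here is a minimal, hypothetical sketch of the per-frame loop. The stage functions are placeholders rather than any real system's API, and the 20 ms motion-to-photon budget is an assumed target, not a number from the interview.)

```python
# Sketch of the telepresence pipeline: acquisition -> tracking -> rendering
# -> display, checked against a motion-to-photon latency budget.
import time

BUDGET_MS = 20.0  # assumed motion-to-photon target

def acquire_remote_scene():      # acquisition: depth cameras at the far site
    return {"environment_map": None}

def track_head_pose():           # tracking: local HMD pose, low latency
    return {"position": (0.0, 0.0, 0.0), "orientation": (1.0, 0.0, 0.0, 0.0)}

def render(scene, pose):         # rendering: the stage GPUs already handle well
    return {"frame": (scene, pose)}

def present(frame):              # display: wide-FOV, high-resolution HMD
    pass

def run_frame():
    t0 = time.perf_counter()
    scene = acquire_remote_scene()   # continuously updated remote environment
    pose = track_head_pose()
    present(render(scene, pose))
    ms = (time.perf_counter() - t0) * 1e3
    if ms > BUDGET_MS:
        print(f"frame over latency budget: {ms:.1f} ms")

run_frame()
```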

[00:10:27.265] Kent Bye: Yeah, and in terms of the display, it seems like if you're wearing a head-mounted display, then you're occluding, you know, the transfer of your own facial expressions in some ways. And so, with these types of telepresence systems, would you imagine that it would be like some sort of CAVE-like environment, or televisions, or what do you see? Or do you see that they would be able to overcome this problem of being able to actually track your eyes and your facial expressions within the HMD?

[00:10:55.065] Henry Fuchs: I think for different applications, there'll be different solutions that will be possible. So for certain kinds of applications that are not very demanding, for example, meetings, I think that large displays like TV sets will continue to be perfectly okay. You're sitting there and you could see me on a TV screen, basically, and it will just look more realistic than current TV screens. It will look stereoscopic, and if you move your head a little bit, you'll get a strong impression of depth, more so than current stereo so-called 3D TVs. But for most applications, I think that you're going to need to have some kind of a head-worn display, because no matter how good that display is that is fixed in the environment, when you look away, you won't be able to see anything that's augmented. And CAVE systems, which are surrounding systems, make too many demands on the environment. People are not going to dedicate a whole room in their house to something like that. And although we and many others have worked on so-called spatial augmented reality, that is, projecting onto surfaces wherever they are, like in your home, and then warping the geometry to compensate for the irregularities, and adjusting the colors to compensate for the reflected color, it's very limiting what you could do in an un-instrumented environment. So my guess is that within 10 years, certainly, many, if not most, of the applications will be on head-worn displays.
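(For readers curious what "warping the geometry and adjusting the colors" involves, here is a minimal sketch under a simple assumed model, not UNC's actual system: a precomputed projector-to-image pixel map handles the geometric warp, and a per-pixel linear reflectance model, observed = reflectance × projected + ambient, is inverted for the color compensation. The final clipping step is exactly where the "very limiting" part shows up: dark surfaces would need more light than the projector can supply.)

```python
# Sketch of spatial augmented reality compensation under a linear model:
#   observed = surface_reflectance * projected + ambient   (per pixel, per channel)
import numpy as np

def radiometric_compensate(target_rgb, surface_reflectance, ambient):
    """Solve for the projector input that makes reflected light match
    target_rgb. All arrays are H x W x 3 with values in [0, 1]."""
    proj = (target_rgb - ambient) / np.clip(surface_reflectance, 1e-3, None)
    # Clipping is the fundamental limit: dark or strongly colored surfaces
    # may need proj > 1, which no projector can deliver.
    return np.clip(proj, 0.0, 1.0)

def geometric_warp(image, pixel_map):
    """Resample the image through a precomputed projector->image pixel map
    (pixel_map[y, x] = (src_x, src_y)), compensating for surface shape."""
    xs = np.clip(pixel_map[..., 0].astype(int), 0, image.shape[1] - 1)
    ys = np.clip(pixel_map[..., 1].astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]
```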

[00:12:32.275] Kent Bye: Yeah, and there's an applied scientist, Oliver Kreylos, who's put together, like, just three Kinect sensors to kind of triangulate and put entire-body avatars into virtual spaces. And I'm curious, with the tracking problem, if you see an array of Kinects being some solution to be able to track and put people, with 3D representations of humans, within a meeting context, and to be able to share that within a virtual environment.

[00:12:59.942] Henry Fuchs: So we've been working on that for several years. We could put about 10 or 12 Kinects in a room and run them all in real time through a single PC. And the limitation is really in the interface and the processing of single PCs right now. I think that's a really good way of doing things in the short run. The quality that you get from Kinect and other similar depth cameras is not good enough to give you a really strong sense of things. And so you need to do more, I think, in order to have it be acceptable to most people. Because for most people, the standard that they're looking at is video. Even if it's very limited quality video, like in an application like Skype, nevertheless, it is pretty good video. And when you look at what kind of quality you get from the 3D reconstruction when you scan a room with even a dozen Kinects, it is often not as good as video, because there are problems with occlusion and missing data and noise. So when you compare side by side a video to a reconstruction from a Kinect, most people on the street would say, whoa, that one with the video looks a lot better. But of course, they don't realize that you could walk around in the 3D reconstruction, where you cannot walk around in the video.

[00:14:27.180] Kent Bye: Yeah, and it seems like there is an uncanny valley on all dimensions of virtual reality, but in the specific context of telepresence, it seems like you may have a low-fidelity, like, stylized avatar that maybe captures the gestural information of the facial expression. And then in the middle, that seems like maybe where the Kinect might be, with sort of, you know, not super high quality. And then the high fidelity, where you're getting more realistic, and the video seems to be on that end of the spectrum, but lacking the sort of stereoscopic cues and the ability to kind of walk around. But have you experimented with trying to do low-polygon, stylized characterizations of people's faces within telepresence apps?

[00:15:10.212] Henry Fuchs: At various times we've experimented with very low quality characterizations, even as low as just getting an image of them in the right place and the right orientation. So we've built collaborative environments with colleagues at Brown and at Utah years ago in which we would construct a 3D virtual environment, and then we would have multiple people in that environment, sort of like you would think of Second Life now. And they would then walk around in various ways in that environment, like work together on some CAD design. And then they would be represented only by, like, a cube with their face on the front of it. So really, really low fidelity, except that you see a portrait of them, and you of course recognize them, and you see where they are in that environment, and you see where they're moving, and you see where they're looking, and it gives you a strong sense of presence. But of course it's not very realistic. It's just a box with their face on it. We've done that and lots of other things in between, all the way to doing a capture in 3D, like with multiple Kinects and multiple higher quality cameras, and then doing the reconstruction offline to do it as well as possible, and then integrating several frames or a whole succession of frames in order to get the geometry as good as possible, and then playing it back in real time. And it does look, of course, dramatically better. But you don't get the interaction then.

[00:16:52.073] Kent Bye: Right. And there seemed to be like a workshop here, you know, before the big IEEE VR conference starts tomorrow. There was a whole collaborative virtual environment workshop with a number of different talks and lectures. And were you at some of those talks and maybe could talk about some of the discussion that was happening there?

[00:17:10.151] Henry Fuchs: So I was asked to give the keynote to that workshop, on 3D collaborative environments, I think you might be referring to. Yeah, so I talked for an hour, and there were a bunch of other papers and position papers and experiences. And they were really, of course, I think they were fascinating. I think some of the most interesting ones were experiences that people had with using collaborative virtual environments for training. There was a person from Norway who showed some experiences in training graduate nurses in various complicated situations using a system built on top of Second Life. He also showed some experiences with training Norwegian soldiers who were about to deploy to Afghanistan in cultural sensitivity. So, like a video game, except that it's really immersed in Second Life, and there are multiple players, like in Second Life, playing the various roles. So some were playing the roles of Afghan villagers, and those were, of course, the teachers and experts who've been there and know the cultural sensitivities. And if the trainees weren't doing the right things, then points would be deducted, which would make their goals harder to reach, like interacting with a village chief and getting useful information.

[00:18:49.191] Kent Bye: Ah, interesting. With virtual reality, it seems like a lot of the pieces are coming together and getting ready to kind of hit the mainstream with consumer versions of virtual reality, but along the way there have been many years of research, with incremental improvements along all different dimensions, and you've been involved with the academic side of that. And so I'm just curious, from your perspective, about different milestones or insights along the way, like latency being really important, or low persistence, or some of these other things that you've personally been working on over the years, that you see have kind of contributed to where we are today.

[00:19:26.369] Henry Fuchs: So first of all, we shouldn't credit just the academic community. There's a tremendous amount of work in industry over the last 30 or 40 years. And we don't hear about them as much because people in industry tend not to publish as much. But there is a tremendous amount of work in industry. Yeah, so I could tell you some of the milestones as I see them. I think that the first really significant milestone was in image generation that was real-time. That was something that, when Ivan Sutherland built his head-mounted display in 1968, wasn't around for any amount of money. And now, of course, we just assume real-time image generation in our cell phones, right? We could play 3D video games that let us navigate in complicated environments, and we take those for granted. But that was not possible for any amount of money until the early 70s. And then about 1985 or so, it became possible to do that with sort of $100,000 units, machines that were available to laboratories and research groups. And then by the 90s or mid-90s, it was available in graphics cards that could be bought by people who were really aficionados of real-time games. But I think that's the area which has had the most development. And when we look back on it, the reason I believe it had development is that it had paying customers earliest on. And those paying customers in the 70s were people involved in flight training. And so the commercial airlines were willing to pay money, sometimes millions of dollars, for real-time image generation, because they realized that it was useful to them and cheaper for training pilots than either the simpler simulators they had before or real airplanes. And so when you want to look back on virtual reality, I think the first thing that you should realize is that much of the success owes itself to the training and simulation in the airline industry. Now the other areas, like actual head-mount design, like tracking and scanning for content creation, are much farther behind, because there was no market that was willing to pay money for them. Does that give you some idea? Yeah, for sure. I mean, I could tell you for each one where I saw the milestones. So in the actual displays, the milestone started around 1980, when there were small pocket television sets that some manufacturers thought they could sell. Sinclair in the UK and Sony, if I recall correctly, in the early 80s had these pocket TV sets that sort of sold. And it became possible then for people to buy these TV sets and build head-mounted displays out of them. Before that, it was really hard. I mean, I tried to build them, but there was nothing. You could buy vacuum tubes, you know, like small cathode ray tubes, and that's what Sutherland did. But most of us wouldn't go there, because you'd have to actually build the scanning circuitry for that. But with pocket TVs, mere mortals could build head-mounted displays. So that was the next thing. In tracking systems, there was a very small market for tracking the direction of gaze of pilots in fighter aircraft for heads-up displays, and a small company called Polhemus built a magnetic tracking system for that. And of course, they'd be happy to sell it to anybody. And so they sold a few units in the 80s to people who were building head-mounted displays. But because it was made for tracking the direction of gaze of a pilot, it was not meant to work very far away. The distance from the source to the target was less than a meter. And so if you walked more than a meter from where you started, the tracking was really bad, because it wasn't made for that. In content creation, it was the worst, because as far as I recall, nobody was interested in 3D digitization. And so people just made do with cameras and laser scanners and various other things that people put together for various purposes. So that's my sense of where it comes from. It's not a very dense history of development, because the amount of money in the marketplace was very small until now.

[00:24:23.134] Kent Bye: Yeah, I think the thing that's surprising is that a lot of people coming from consumer VR may put the origin at the Oculus Rift without really even realizing that there's this whole history behind it, partly because it's in academia, but also because the people in industry who are using it don't always necessarily, you know, talk about it or advertise it or give up their secrets about what they're doing, so as not to give their competitors a competitive advantage. There seems to be, even within the industry here, people that are doing stuff, you know, they kind of keep it under wraps, so they don't broadcast or advertise what they're doing very much.

[00:24:55.520] Henry Fuchs: Right. And also, when an industry or a segment of a large company is not doing so well, there's not a lot of effort made to publicize it. So, for example, Canon in Japan had a significant research lab in mixed reality that they ran for four or five years, together with funding from the Japanese government. This was maybe 10 years ago, and people could look up the Mixed Reality Laboratory and Canon. And they did some really interesting work, for example, in head-mounted displays with video see-through, that is, with a video camera in front of each eye, and some nice optics, both for video see-through and non-video-see-through, what we now call augmented reality ones. But they never sold. They've put them on the market. They've had them on the market, I think, since that time. But unfortunately, they're expensive, and their field of view is pretty narrow, and the tracking system is sort of rudimentary. So the market is not very big for them. So as a result, people who know about Canon cameras and lots of other things don't know that Canon has been in the virtual reality business for many years. But it hasn't been a big player, because the market hasn't been very big.

[00:26:18.329] Kent Bye: Yeah, interesting. Yeah, and just looking at the history and reading about Sutherland and some of the papers that he was putting out, it looked like he was at ARPA at the time, which is the predecessor to DARPA, and it looked like there was also funding from Bell Helicopter or other military sources. What do you know about some of the funding that came about, or as to why he was able to kind of put together this crazy system?

[00:26:43.809] Henry Fuchs: My recollection is that Sutherland was at ARPA only for a couple of years, after he finished his doctorate at MIT and before he went to Harvard as a faculty member. It's not just the predecessor of DARPA, it is DARPA. It just changed its name, put the "Defense" in front of it, I think, for political reasons. It is exactly the same organization. It celebrated its 50th anniversary a couple of years ago. It's a really interesting funding agency, but he was only there a couple of years, and then he went to academia at Harvard for a couple of years, and then he was attracted to Utah by Dave Evans, I think because Dave said we could start a company together and do good things. So as to why Ivan Sutherland did this, you could ask him. He's still around. He's in Portland now. If you ask him about graphics and virtual reality, my guess is he will say, I have not done virtual reality in 40 years, ask somebody that's been in the field. But you can certainly ask him about the history of his inspiration. My recollection as to the linkage with Bell Labs is, I think he once said that he had done some consulting for Bell... no, not Bell Labs, Bell Helicopter. I think Bell Helicopter had a helicopter that was deployed to Vietnam, if I recall correctly. You should check all this out. And I think it was a small helicopter that was just used for scouting missions. So it was not one of these heavily armored warfighters with lots of people on it. But it would get shot at at various times. And I recall they wanted something so that the single person flying this helicopter could return fire as a defensive action. And I think they put a machine gun on the belly, below the belly of the helicopter, and had it controlled by the gaze of the pilot. And so, if I recall, this was, I think, we're talking 40 years ago, I think that they had a reticle on the front of the pilot's helmet, so that anywhere the pilot looked, the gun would swing in that direction; it was calibrated so it would point to where the reticle was pointing. And so if he wanted to shoot in that direction, there was a button on the control sticks of the helicopter so he could return fire wherever he saw it coming from. So he looks in that direction, he pushes a button, and the machine gun would return fire. So it's the best that they could do at the time. And I think Ivan said that was some inspiration for him for head-mounted displays. But that's all I remember. Call him and check it out.

[00:29:40.266] Kent Bye: Yeah, I actually tried to track him down for a while, but I couldn't find him. But yeah, I've definitely been wanting to. I'm just curious about how this came about, because, you know, given where it is today, just knowing about the history and the origins is interesting, to see where it came from.

[00:29:53.994] Henry Fuchs: What I can tell you is I think it's quite correct that he would say that he has not worked on virtual reality in 40 years or whatever because I know that when I and other people got there around 1970, he was already doing things principally in the company. And some of the technology that he developed in the head-mounted display, like the image generation, found its way to some of the products that Evans & Sutherland made. You know, real-time line drawing systems for computer-aided design. But as far as I recall, they did not even contemplate making a product out of this head-mounted display. You know, it was way too far out.

[00:30:33.709] Kent Bye: Yeah, really ahead of its time, if you think about it, back in 1968, given where we are today. I mean, even in the 90s when it came up, it was sort of like the technology wasn't there yet. So just imagine the technology not being there back in '68 to be able to pull that off. But, you know, because you've been involved with virtual reality, and from what I see there's sort of a dark history in terms of not knowing a lot of what was happening in industry and such, what was industry doing with VR since the 60s? Or when did it really get picked up, beyond the airline simulations? I get the sense that there was perhaps military training, but what else was happening in the context of VR in this kind of foggy history from the 60s up to the 90s and up to today?

[00:31:17.004] Henry Fuchs: So, the short answer is that different people and organizations were doing different things, but nothing with a sense of community. So, the flight simulator industry was the first one that developed, and it was already around before Ivan's head-mounted display. As I understand it, the state of the art at the time for flight simulators was basically using television cameras on model boards. So imagine building a model of an airport that you want to train pilots to land at and take off from, just like a model railroad style model. And then you would have basically like an XY moving platform above it on which you would put a camera, which of course was big, a big TV camera, with some optics and small mirrors so that it would be pretending to be very close to the surface if you wanted to land. And then you would have this XYZ motion platform that would be slaved to a computer that would then calculate where the airplane would be, given that the trainee made various moves from the cockpit mock-up. And so the cockpit would be a physical cockpit with the same kind of controls that would be in a real airplane. And there would be basically like a computer screen outside the cockpit window. That computer screen was directly driven by the TV camera that was over the model board. And then as you, the trainee, say, would bank to the right or would take off, whatever, then the model board camera would make that same motion. And that worked pretty well. A crash, by the way, would be a real crash, I understand, because, you know, then you'd crash into, like, the tower. And then they'd have to rebuild that model tower. But it was very realistic. You're flying over this model that's, you know, that you would build like a train model set. So that was the competition, but they realized that if you could do this with a computer, you'd have a lot more flexibility. And so as computer capabilities developed in the 60s and 70s, this became the new kid on the block for training systems, and eventually completely supplanted the model boards. That was a really great market. Because they were so expensive in the 70s, basically no laboratory could afford them. As far as I remember, when I finished Utah in like '75, I think there was only one university in the country that had one of these real-time image generation systems. I could tell you who, I think it was Fred Parke, P-A-R-K-E, who graduated I think maybe a year ahead of me, and he was then at Case Western. For some reason, through some grant from some federal agency, they got money to buy a one-channel digital image generation system. And I remember talking to him and how jealous I was that he could do these things. And he says, well, Henry, it's actually not that great as a research machine, because it's made as a flight simulator. And so if you want to change anything, like change how the colors are calculated, you know, it's not made to be programmed. It's made as a set of boxes to generate imagery in a flight simulator. So he kept telling me that it wasn't great as a research engine, but he had one. It was in real time. And so I remember working on the research of how to generate images in real time, because I thought that was the first thing that you really needed to do. And around 1975, there was already the first of the 8-bit microprocessors.
And so a bunch of people, lots of us around there, would be thinking, if you could only figure out how to put a dozen or two dozen of these microprocessor chips together to generate images in real time, wow, you could actually build your own real-time system, for which then you just need a head-mounted display, and the rest of it we'll figure out. But you see, it's like there was no community as such, so people were individually doing things that they thought they could do intellectually and that they had money to actually explore. But the entire graphics community, for example, or 95% of it, didn't have this vision. And if you look at even the best textbooks in computer graphics, they didn't give much coverage to Ivan Sutherland's 1968 head-mounted display. And I've asked them at various times now. Like, the best textbook in the 70s, or at least the most popular, was by Newman and Sproull, both people that had very close ties to Ivan Sutherland. Not much coverage of his head-mount. Then in '81, there was a textbook by Foley and van Dam that was like the Bible for a generation of people. There was a page or two near the back on various esoteric displays, and there was something on head-mounted displays and Ivan Sutherland's, but no description of this being a vision of what the future was going to hold. It was just sort of an esoteric one-off, like this strange butterfly. A bunch of other ones also. So people didn't appreciate or didn't buy into or didn't think about the notion that virtual worlds and virtual environments were going to be the next big thing, even if it's 20, 30, or 40 years away.

[00:37:05.148] Kent Bye: And so what type of experiences do you want to have in a virtual reality then?

[00:37:10.443] Henry Fuchs: I think telepresence is what I want to have. I want to be able to be with friends who are far away. I think that's going to change my life for the better. There are people I'd really like to be with that I just don't get to be in the same room with. People that are across the country, across the world. I want it to be that we could hang out together sort of in the same way that we can now talk on the phone. So I think it'll be a much richer experience to just sort of hang out together. That's what I want.

[00:37:40.059] Kent Bye: And finally, what do you see as the ultimate potential for what virtual reality may be able to enable?

[00:37:47.180] Henry Fuchs: I think it's virtually impossible to answer ultimate potential. I think it's easier to think about how other technologies that are not so magical have changed society. So somebody asked me a couple of years ago, when they came to our lab and saw what we've been doing, this kind of telepresence. The guy said, wow, this is the closest I've ever come to having a conversation with a hologram. He said, are we like in the Model T age? I said, oh, nowhere near the Model T. We're like 20 years before the Model T. That's what I think people should realize. We're at such an early stage that it's impossible to think about what impact this might have on society. Think about how we are 20 years before the Model T. The Model T was a mature product that you could buy, you could have for years, you could do whatever you want with it. You could drive it to the store, you could drive it across town, you could drive it across the country. It worked, right? Do we have anything like that with a VR system? No. I mean, what we have is a situation like 20 years before the Model T, in which there were tinkerers and a couple of people in their backyards building engines or, you know, trying to see if they could put one onto some horseless carriage. We don't have a single product that is integrated, that somebody could use for years without knowing something special about it. I mean, even Gear VR, for example, which I think is wonderful, is really just a developer's edition. It's not as if someone's going to use it daily like they would use their Model T. So I think, rather than think about what ultimately will change, we should think about how exciting it is to live perhaps 20 years before the Model T, and see how that may change society, both individual lives and societies around the world. OK, great.

[00:39:55.777] Kent Bye: Well, thank you. You're welcome. And thank you for listening. If you'd like to support the Voices of VR podcast, then please consider becoming a patron at patreon.com slash Voices of VR.
