#4: Jason Jerald on VR latency, simulator sickness & presence + Conference highlights from 3DUI, IEEE VR, Neurogaming & SIGGRAPH

Jason Jerald of NextGen Interactions has been involved with creating computer graphics and next-generation 3D computer interfaces for 20 years. His virtual reality consulting client list ranges from Oculus VR, Valve and Sixense Entertainment to NASA Ames Research Center, Battelle Pacific Northwest National Laboratories, Naval Research Laboratories, HRL Laboratories, DARPA & NIH.


We talk about some of his research and thoughts on VR latency, simulator sickness, presence and VR input devices & 3D user interface constructs. We also cover highlights from the IEEE VR, 3DUI, SIGGRAPH & Neurogaming conferences.

Topics

  • 0:00 Intro
  • 1:58 Consulting work with Oculus VR
  • 2:46 Jason’s Ph.D. work in reducing latency, leading to work with Valve & Oculus VR
  • 4:08 The 20ms latency threshold target
  • 5:41 Research process for measuring VR latency
  • 7:37 Other VR user studies comparing 3D user interface tasks with 2D equivalents
  • 9:00 3D User Interface (3DUI) conference contest
  • 10:46 The importance of VR hand input, point-to-fly UIs, & going beyond 2D menu constructs
  • 12:43 VR input options of vision-based systems, physical based devices and data gloves
  • 15:01 Comparing and contrasting the strengths and weaknesses of VR input devices
  • 16:19 IEEE VR highlights including the Head-Mounted Display panel that Jason moderated
  • 19:07 IEEE VR perspective on the Facebook acquisition, and Henry Fuchs’ inspirational keynote
  • 20:24 The biases towards low-risk dissertations that prevented academia from making a VR breakthrough
  • 22:25 IEEE VR Unity 3D workshop, MiddleVR, Virtual Human Toolkit, and AutoVerb binaural audio plug-in
  • 25:27 Adoption of Unity in Academia
  • 27:04 Academic VR frameworks & toolkits and UE4
  • 28:04 Unity Asset Store and the Impulsonic AutoVerb Unity Plug-in for binaural audio
  • 28:54 SIGGRAPH computer graphics conference and its connection to Virtual Reality
  • 30:27 Jason’s background in real-time 3D graphics
  • 31:24 Neurogaming conference impressions
  • 32:34 Tradeoff of consumer EEG interfaces: ease of use vs. more powerful EEG signals with more electrodes & paste
  • 33:48 Using palm sweat and heart rate to measure VR presence
  • 36:34 Quantitative and qualitative measures for researching simulator sickness
  • 37:39 Sixense’s serious game grant for “Motion-Controlled Gaming for Neuroscience Education”
  • 39:39 Potential of getting a VR dream job in academia
  • 42:28 Keenly interested in the open problems of 3D user interfaces, researching simulator sickness best practices & moving towards higher-level VR problems rather than implementation
  • 44:50 Wrap up and conclusion

Music: “Fatality” by Tigoolio

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast.

[00:00:11.857] Jason Jerald: So my name is Jason Jerald. I'm currently at my own company, NextGen Interactions. And essentially what we do, our primary purpose, is consulting for virtual reality applications, a little bit on the augmented reality side and also the user interface side. It's not necessarily having a tracked head position with a head-mounted display or a stereoscopic display, but that's really what I like to do most. And I've been doing virtual reality since about 1996, when I was first using a head-mounted display up in Washington State, actually. I got an internship for a summer at a place called Battelle Pacific Northwest National Laboratories. And I don't know, maybe a year or two before that, I had another internship doing computer-aided design, doing some very basic modeling and animation, which I don't know if you would call modeling and animation today; it was pretty ugly, but I thought it was pretty cool stuff. But I heard of someone doing virtual reality, and that you could actually study that in school. And that blew me away, that you could actually get a degree in something like that, which eventually kind of set me on my path. And that was at UNC, the University of North Carolina at Chapel Hill, where they kind of specialized in that. So it took me a few years to get there, but I kind of had in my mind that's what I wanted to do, and I eventually ended up studying there at the University of North Carolina, and I've been just loving the journey. It's been a blast, seeing the technology evolve and playing with a lot of toys and creating some cool experiences. Well, some cool experiences, and some that didn't work and some that did.

[00:01:58.794] Kent Bye: And so on your client list, you list Oculus VR. Are you at liberty to talk a bit about what you're consulting with Oculus on in terms of their head-mounted display?

[00:02:07.728] Jason Jerald: A little bit. So the biggest challenge, and a bit of a frustration, is being under NDA. There's so many things I want to talk about, and I have to be careful of, you know, what I can talk about and what I need to keep private. And that's actually a great question, because I've been meaning to, you know, next time I see Palmer or Brendan or someone, ask about that, now that the work that I've done has in a way continued and gone public. But, you know, can I say the specifics of what I worked on? I really think it's safer to say, you know, I can't discuss that at this point. Right. But maybe at SVVR I'll have a better answer for that.

[00:02:46.473] Kent Bye: Well, if I look at your PhD, it's in latency, so I can make a guess as to what it might be. Can you talk a bit about what your PhD thesis was in?

[00:02:54.694] Jason Jerald: Yeah. That's actually what led to the Valve work and initially caught the interest of Oculus VR. I somehow was able to have breakfast with Brendan, the Oculus CEO. He showed me what they were doing. I was like, wow, you guys are doing everything right, like what I would hope a company in VR is doing. And at the very end, I mentioned, oh, I brought you a gift, because I knew they were interested in latency reduction. And I handed them a copy of my dissertation, which, for those of you that don't know about graduate school, is basically a book on research you did over several years. That work was on reducing latency to a very small amount, and then testing the perceptual thresholds of how much latency there can be in a head-mounted display without the person noticing, or when they start to notice that latency. I handed that to Brendan, and, you know, it was an okay meeting, but it wasn't clear, you know, if the conversation would continue or whatever. He had to jump in a cab to catch a flight, and about five minutes later, he, I forget if it was a text or he called, but the gist was, you know, we need to talk again. So I think it was the dissertation that caught their interest at that point.

[00:04:09.449] Kent Bye: And I know Oculus has put out a best practices guide, and they are sort of putting out this number of 20 milliseconds as kind of like this magic threshold of latency where you don't really notice it, or that's the goal that they're going for. How did that threshold come about, that specific number of 20 milliseconds?

[00:04:26.657] Jason Jerald: So I agree with that number. I think about 20 milliseconds is when you can start predicting really well to get down to zero. And most people might not even notice 20 milliseconds. Now, in my research, when I was looking at those perceptual thresholds of latency, the biggest factor, at least in what I looked at, was the users. Some users, I put 150 milliseconds in there, and they just couldn't see it. They couldn't tell the difference between 150 milliseconds and 20 milliseconds. My most sensitive subject was actually more sensitive than myself. She was able to notice differences in latency of 3.2 milliseconds. I was blown away; I was expecting it to be much higher. But in that case, it's something where, like, you really have to be focused, you really have to be looking for it. You know, ideally, that's where you want to go. You can always have lower and lower latency. I mean, someday maybe it will be under a millisecond. Maybe you'll have some superhumans out there that can detect less than a millisecond, or under some conditions you might be able to detect it. You know, that threshold was 3.2 milliseconds without using prediction and while paying attention, and it was only one of the users that was able to perceive it. So I think 20 milliseconds is a very reasonable level to shoot for.
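
The 20 ms figure is usually framed as an end-to-end, motion-to-photon budget. As a rough illustration of how the pieces add up (a sketch with invented stage numbers, not figures from the episode):

```python
# Illustrative motion-to-photon latency budget (hypothetical numbers,
# not figures from the episode). End-to-end latency is roughly the sum
# of each pipeline stage from head motion to photons on the display.

stages_ms = {
    "tracker_sampling_and_filtering": 2.0,
    "application_and_render": 13.3,   # one frame at 75 Hz
    "display_scanout": 4.0,
}

total = sum(stages_ms.values())
print(f"motion-to-photon: {total:.1f} ms")  # ~19.3 ms, near the 20 ms target
```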

[00:05:41.443] Kent Bye: So what kind of scenario do you have to set up to be able to have the person be able to detect the latency and then be able to report on it? I mean, 3.2 milliseconds sounds like that's not even fast enough to kind of hit a button with your thumb or say anything.

[00:05:56.062] Jason Jerald: And so this was actually what took me a while to develop, the actual method for that. And so there's a field of study called psychophysics. And basically, well, you can think about it like if you go and get your hearing tested. And they say, do you hear something? Do you not hear something? And they ask you that question many times. So imagine going into a research lab and someone saying, okay, turn your head and tell me if you perceive some latency. Of course, we explain what latency is. And actually, they're not detecting latency directly. They're detecting the results of latency, which is scene motion. The world kind of swims around; it's not stable. Latency results in an unstable world if you're rotating your head. So I had them rotate their head in various ways at various speeds, and asked them, with different amounts of latency, do you think this has latency in it? Do you think it does not have latency? There's different ways of doing that. There's ways of reducing bias, so they don't just say, yeah, I always detect it. And I love virtual reality, but this is probably the most boring task I've ever created, for sure. Just over and over again, looking at a very plain, boring image, to remove all confounding factors, for like two or three hours at a time. And so for some subjects, it was really tough. Now, this most sensitive subject, she was so excited about it. Like at the end of the experiment, she was clearly motivated. She said, you know, you've got to let me know when you run one of these other experiments. And she came back for further experiments. So I guess it's subjective what is a fun experience versus what is a boring, put-you-to-sleep experience.
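
For readers curious what such a psychophysics procedure looks like in practice, here is a minimal sketch of a yes/no, method-of-constant-stimuli experiment. The latency levels, trial counts, and the simulated observer are all invented stand-ins for a real subject and apparatus:

```python
import random

# Minimal sketch of a yes/no constant-stimuli procedure: inject a known
# latency, ask "did you perceive latency?", repeat many times per level,
# then estimate a detection threshold from the response proportions.

LATENCIES_MS = [0, 5, 10, 20, 40, 80, 150]
TRIALS_PER_LEVEL = 20

def simulated_subject(latency_ms, threshold_ms=20.0):
    """Toy observer: detection probability rises with injected latency."""
    p_detect = latency_ms / (latency_ms + threshold_ms)
    return random.random() < p_detect

results = {lat: 0 for lat in LATENCIES_MS}
for lat in LATENCIES_MS:
    for _ in range(TRIALS_PER_LEVEL):
        if simulated_subject(lat):
            results[lat] += 1

for lat in LATENCIES_MS:
    p = results[lat] / TRIALS_PER_LEVEL
    print(f"{lat:4d} ms: detected {p:.0%} of trials")

# Crude threshold estimate: first level detected on more than half of trials.
threshold = next((lat for lat in LATENCIES_MS
                  if results[lat] / TRIALS_PER_LEVEL > 0.5), None)
print(f"estimated detection threshold: {threshold} ms")
```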

[00:07:37.131] Kent Bye: Wow, interesting. And one of the services that you provide in NextGen Interactions is user studies. And so maybe you can talk a bit about some of those other studies that you do with virtual reality.

[00:07:48.334] Jason Jerald: Sure. So that was an example of one study, sort of on the perceptual side. For another study, I worked with the company Digital ArtForms. We're working on creating a virtual-reality-like game for education. But a study we did with them was, we looked at their interface and said, okay, how can you compare this with a mouse and keyboard? Which is a very difficult thing to do, because they're just completely different paradigms. And it's not like there's just a difference in one factor; there are multiple differences there. So that's a tough thing to study. But what we did is we compared three different interfaces. We compared a mouse and keyboard with a one-handed interface system and a two-handed interface system. And we showed that for fundamental 3D tasks, such as putting together a 3D puzzle or docking some objects into a specific location, it was about four and a half times as efficient to use that interface for those fundamental 3D tasks. And when we compared experts, it was even more pronounced: the expert with the two-handed interface was about nine times as fast as the mouse and keyboard.
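
To make the efficiency numbers concrete, here is how speedup ratios like those would be computed from mean task-completion times. The times below are invented for illustration; they are not the study's data:

```python
# Hypothetical mean completion times (seconds) for the same 3D docking
# task, invented to show how "4.5x" style efficiency ratios are derived;
# not the actual data from the Digital ArtForms study.

mean_time = {
    "mouse_keyboard": 270.0,
    "one_handed_3d": 90.0,
    "two_handed_3d": 60.0,
}

baseline = mean_time["mouse_keyboard"]
for interface, t in mean_time.items():
    print(f"{interface:>14}: {t:6.1f} s  ({baseline / t:.1f}x vs mouse+keyboard)")
```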

[00:09:00.846] Kent Bye: Interesting. So you really get some quantitative data in terms of the efficiency that you get through these 3D user interfaces, it sounds like.

[00:09:08.411] Jason Jerald: Exactly.

[00:09:10.192] Kent Bye: Now, you are on the committee for the 3DUI conference, and maybe you could talk a bit about that conference in terms of what type of insights you get from it.

[00:09:18.803] Jason Jerald: Yeah, so that's the 3D User Interfaces conference, and it's co-located with the IEEE Virtual Reality conference, which was about a month ago. And so what I did this year, there were actually four of us that chaired a contest. And it was basically, here's the task. The task was to annotate or mark up volumetric datasets, and actually annotate them in some hierarchical form. What I mean by that is, say you had a volume. A volume is like a medical data set or a point cloud data set. And the task was to annotate that. So for example, if you had a scan of a human face, you'd want to be able to annotate, say, okay, this is the mouth, these are the eyes. And then at a higher level, this is the entire face that encompasses both the eyes and the mouth. Part of the challenge is judging something like that. But I think our criteria were something like innovation, efficiency, and something else, you know, whatever it was. And so that was really cool, to see different groups come in and try new things, you know, be able to be creative without having to worry about, oh, is this actually going to be turned into a product, but instead just being able to say, you know, if we fail, it's okay. It's not like, you know, we're depending on selling a lot of these systems or anything to that effect. So that resulted in some pretty cool systems.

[00:10:46.144] Kent Bye: And do you see things like data gloves that have positional tracking, or what do you see as sort of the next wave in terms of doing 3D user interfaces?

[00:10:56.350] Jason Jerald: Yeah, so I believe, like, what Oculus is doing with head-mounted displays, they're doing everything perfect, and it's very impressive. And that's half of the issue. I think the other issue is you've really got to have your hands. To have a really compelling virtual environment, you've got to have your hands in the environment, at the minimum. A full body is even better. But at a minimum, you can do quite a bit with the hands, because you can estimate where the elbows are. For walking forward, you can fake it with your feet, for example, if it's a seated type of experience, versus physically walking. I believe that area is wide open. There's some pretty cool solutions out there, but a lot of them are really simple. The common one is point-to-fly. And so if you have a glove, you can recognize when the person is pointing with their finger, and then you just kind of fly in that direction. And that works great in some situations, but to be able to generalize those interfaces is something that's wide open, and it's not obvious how to do it. It may be obvious once someone comes up with the solutions, but we're so stuck in this world of, you know, 2D menus and dialogue boxes and such that it may be someone that's completely outside the field that comes in and revolutionizes the user interface, because, you know, those of us that have been using computers for a long time, our minds automatically go to those 2D menu systems. And so it's really going to require some out-of-the-box thinking. The interface that Sixense Entertainment has with their MakeVR application is kind of moving in that direction. They're doing some very interesting things as far as navigating through 3D spaces and interacting with the environment inside.
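
As a concrete picture of the point-to-fly technique, here is a minimal sketch of the per-frame update. The function names and the pointing-pose test are placeholders, not any particular SDK's API:

```python
import numpy as np

# Minimal sketch of "point-to-fly" travel: while the tracked hand is in a
# pointing pose, translate the viewpoint along the direction the finger
# points. A real system would get hand_forward and is_pointing from a
# glove or controller SDK; here they are supplied directly.

FLY_SPEED = 2.0  # meters per second

def update_camera(camera_pos, hand_forward, is_pointing, dt):
    """Advance the viewpoint along the pointing ray while the pose is held."""
    if is_pointing:
        direction = hand_forward / np.linalg.norm(hand_forward)
        camera_pos = camera_pos + direction * FLY_SPEED * dt
    return camera_pos

# Example frame update: pointing roughly forward and slightly up, at 75 Hz.
pos = np.array([0.0, 1.7, 0.0])
pos = update_camera(pos, np.array([0.0, 0.1, -1.0]), True, dt=1 / 75)
print(pos)
```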

[00:12:43.865] Kent Bye: Yeah, I guess there's two components: there's the hardware component of the sensors, and the software user interfaces. And I'm curious, I know that the STEM sensors seem to be, I guess, one of the potential leading ones at this point. How do you see the field developing in terms of something like a Leap Motion versus something that may have less occlusion problems, like the STEM controllers, and what other options are out there?

[00:13:09.282] Jason Jerald: Yeah, so there's definitely, those are the two big ones right now: camera vision-based systems, and then the actual physical devices that have the buttons or the controls on them, you know, the joysticks or whatever. So, you know, the Sixense STEM, the Razer Hydra, the Sony PlayStation Move, they did a really good job of that type of controller. Then there's sort of the vision-based systems. There's the, you know, Microsoft Kinect, there's the Leap Motion, there's others as well. And I don't think necessarily that one's better than the other; it depends what you're trying to do with your application. So if you have a dance game, then the Microsoft Kinect obviously is, I mean, it's just perfect for that. If you have a game that requires precision, then depending what you mean by precision, you know, the Leap Motion has a lot of precision as far as accuracy goes. And then with something like a physical controller, you have the precision of a button press. And so some games will require a button press. In some games, maybe you don't care. Playing Superman in a game, maybe you don't need a button. Maybe you're just kind of making these gestures and such. So I don't think there's necessarily a single answer for that. Oh, the other one you mentioned is gloves. So that's something else that's, you know, appropriate for some applications, but maybe not other applications. And, you know, some of these are not obvious, you know, what would a glove be good for versus, you know, just a physical controller like the STEM. Maybe you don't need to try anything. I can imagine something like a serious game, and what I mean by that is, you know, an educational game. I would think something with a glove, like teaching sign language, or, you know, a game that requires gestures, maybe a military game where you're giving hand signals to your, you know, fellow soldiers, then a glove might be very useful.

[00:15:01.665] Kent Bye: When you say there's some of the differences that are not obvious, what do you mean by that in terms of the advantages and disadvantages of each?

[00:15:08.838] Jason Jerald: Well, some games, it's not really clear. They're kind of in the middle of, do you need a button? Do you not need a button? Is a gesture good enough? One of the challenges with gestures versus buttons is that it's sort of like voice recognition: it works great 95% of the time, but that 5% of the time can be extremely frustrating. It's sort of like when I'm talking on the phone and I'm talking to a computer, and it recognizes me 90% of the time; that 10% of the time can be extremely frustrating. So, ideally, I do think the camera-based systems are maybe the long-term solution for a majority of the games. For games where occlusion would be an issue, something like a controller is great, because you can hold the controllers in your lap and you don't have to worry about your legs getting in the way, or, you know, you can kind of just hold your hands comfortably to the side instead of having to worry about whether the camera can see your hands. But as these systems add more cameras, and maybe you miss frames and that's okay, then the camera-based systems may be better, at least for the interactions that don't require that physical feedback of, oh, I'm pushing the button.

[00:16:19.578] Kent Bye: Right. And you're also involved with the IEEE VR conference, and maybe you could talk a bit about your experience at that conference this year and kind of what the vibe was.

[00:16:29.724] Jason Jerald: Yeah, we had a head-mounted display panel this year at the conference. We had some amazing speakers. We had David Smith, who has just done some amazing things. He's the founder of Red Storm Entertainment with Tom Clancy. He worked with James Cameron on The Abyss with really one of the first virtual sets, or virtual camera systems, similar to what was used in Avatar. He had created the first-person shooter, or at least the precursor to first-person shooters, back in the late 80s. And he had a pretty amazing head-mounted display that's 180 degrees field of view. And so that was really cool, to get his perception of where things are headed, what's important, those sorts of things. We had Yuval Boger from Sensics, which has been creating head-mounted displays for, I don't know, the last 10 years or so. And we had Steve Ellis, who has been a pioneer at NASA. He's one of the guys I worked with while I was working at NASA, and he also helped me with the latency studies. He's like the world-leading expert on perception and head-mounted displays and latency and all those things. And so to get that perspective from those that have been doing it for so long was pretty neat. One of my favorite slides Steve Ellis showed, he showed like, I don't know, 50 head-mounted displays, how they've iterated over the years. And one thing he pointed out was that a lot of people, you know, kind of consider the first head-mounted display to be Ivan Sutherland's, back in, I think, 1968. He created a head-mounted display, you know, fully head-tracked. Very simple visuals, but I mean, it was really virtual reality, actually more augmented reality at that point. But we kind of all consider that the world's first head-mounted display. Steve Ellis was saying, no, the first head-mounted display was by a man named Galileo. You know, he created these sorts of devices that you put your head up against to figure out ship navigation. It wasn't virtual reality, but it was a head-mounted display. So I've been going to that conference for years. I love that conference. I was joking with Karl Krantz, the chair of the SVVR conference coming up next week, that I'm hoping SVVR will turn into my favorite conference and beat out the IEEE Virtual Reality conference. I think those are probably going to be my top two conferences. The IEEE Virtual Reality conference is more academic, whereas SVVR is more industry, kind of startup-focused, entertainment.

[00:19:07.983] Kent Bye: Right. And I'm curious, you know, because the IEEE conference happened right after the Facebook acquisition, what was discussed there about that?

[00:19:15.584] Jason Jerald: Well, Henry Fuchs, one of my, I guess you could call him a mentor, from the University of North Carolina, gave the keynote. And this, maybe I'm biased because, you know, I'm a big fan of virtual reality, you know, and I know him and such, but this was like the best keynote I've ever heard. He gave the most inspirational talk about what's happening with virtual and augmented reality. He said he had his keynote completely ready, and he got to the conference and started talking with people about the state of virtual reality and about the acquisition of Oculus by Facebook. He got really excited, stayed up all night, and completely changed all his slides. He started off the talk with, you know, what does the Facebook acquisition of Oculus mean for us, you know, in the industry? And, you know, he didn't give away what he was going to say, because, you know, there have been some negative opinions on that, and there have been positive opinions. And he took it to, you know, one extreme. He said, this is the best thing that's ever happened in the field of virtual reality, and this gives us the chance to take what we as researchers have been doing for years to the next level. He said one of the problems with dissertations and such is that if you focus on such a narrow topic that you think you can get a dissertation out of, it kind of keeps us from innovating. You couldn't have gotten a dissertation by building, well, maybe you could have, but it'd be harder to get a dissertation on something like building the Oculus Rift. And he said that was one of the challenges in academics, is that we can't take risks. We can't go out and do these great things that Oculus has done, because the question is not, can we get this out so people can experience it, but can you get a dissertation on it? And so he was kind of saying, maybe we should move more towards what industry is doing, because they're making some pretty amazing breakthroughs. And so his conclusion was that with this takeoff of virtual reality, and it looking like it's going mainstream, these technologies are out there now so that anyone can use them. This allows us to influence and change the world, and that opportunity is very rare in any field. If you're in pharmaceuticals, if you're in film, if you're in traditional software development, whatever it is, it's often iterative development. But with virtual reality, we really can go out there and change the world. And a lot of people, through their entire lifetime, don't have that opportunity. And this is our opportunity. Anyone's opportunity. Palmer was very young when he started the company, and look what he's done in a couple short years. That opportunity is out there for any of us to take. And then, of course, he said at the end, so let's not mess it up.

[00:22:15.829] Kent Bye: Wow. Yeah. That sounds really, really inspiring. And it gets me all charged up to be like, yeah, let's, let's do this.

[00:22:22.030] Jason Jerald: Oh, we were all charged up after that. That was, yeah.

[00:22:25.851] Kent Bye: Huh. And I'm curious if there were any other talks at IEEE VR that really stuck out for you.

[00:22:30.532] Jason Jerald: Oh, so I put together a tutorial on using Unity for virtual reality. So I spoke a little bit, showed some things I've done. Some portion of it was on ABC's Shark Tank; I was working with the Virtuix Omni, putting together a game for Shark Tank. And some of the things were taken out because they weren't appropriate for a television audience, such as, we have blood splatters and such in there, and some of the user interfaces that would just confuse someone watching it on television. So we kind of simplified it in some cases. But I talked about some of those concepts, such as ideas for reducing simulator sickness. And I also talked a little bit about reducing latency. I showed some examples of avatars and how compelling they can be. Avatars don't necessarily have to be photorealistic. Just very simple motion is huge as far as feeling like that person is actually present with you. Or adding an avatar for your own body, for self-embodiment, so you look down and you see some representation of your body. It might not even be the same color of jeans that you're wearing, but that's not so important. As long as you see some legs there, and maybe your hands are tracked, then you move your hands, and it's like a surreal experience that you just can't get from a video game. And there's been a lot of talk of presence in the community, how important that is. My very strong belief is that self-embodied avatars are huge for presence. Right now, in a lot of the applications out there, you kind of have this viewpoint floating in space. If you look down and don't see your own body, it kind of takes away the presence. So even if you have something simple, that can add a huge amount of that presence. For that tutorial, we also had some great speakers. We had Arno from the University of Southern California Institute for Creative Technologies talking about virtual humans and their Virtual Human Toolkit plug-in for Unity. He's gone far beyond what I've done with virtual humans. We had Sébastien Kuntz talk about his MiddleVR solution; his software allows you to run your Unity applications in any type of virtual reality system. So different outputs, such as different types of head-mounted displays, or caves with stereoscopic images displayed on the walls. With input, he supports all the traditional tracking systems, as well as the Razer Hydra and some of those systems. So it kind of gives you a tool to put everything together. And then we had Anish from Impulsonic talking about 3D sound simulation. So instead of just kind of faking it and putting a pre-recorded sound in the environment, you know, he's simulating the sounds echoing across the room and off walls. So if you go down a hallway, it sounds very different than it would in an open room.

[00:25:28.003] Kent Bye: Do you see Unity being adopted within the academic community as the tool of choice for creating research-grade applications?

[00:25:37.407] Jason Jerald: I was very skeptical of Unity for far too long. And then I finally said, you know what, I'm just going to learn it. I gave myself a deadline, an unreasonable deadline, of showing some stuff at a conference, you know, with a couple weeks' notice, without ever having used Unity before. And I was able to learn Unity and actually show something; I was using the Razer Hydra with the Oculus Rift. And I'm just blown away. With Unity, like, literally, you can put together a demo in half a day that would have taken months, you know, a few years ago. And so these tools, like Unity, are just so easy to use that it really allows anyone to create these experiences now. And so, super exciting times. I don't know, it's interesting, it almost feels like Unity is cheating because it's so easy to do things. And so I know some universities are teaching Unity game development. I use pretty much Unity for most of my projects when I'm actually doing implementation now. I'm not so sure about an introductory computer science class, because, like I said, it's so easy to do things. You don't necessarily need to know, maybe you don't need to know, those core concepts of, you know, outputting text to a console. I think it kind of depends on the class. If it's a game-focused school, I think it'd be very appropriate. If it's maybe a more traditional computer science school, maybe that's a later course instead of a first course.

[00:27:04.912] Kent Bye: Yeah, I know there were some talks about different VR frameworks there at IEEE VR. And within the Oculus subreddit, I've just noticed that there's a lot of excitement around Unreal Engine 4, which has open-sourced its code as well. So I'm kind of curious to see where that goes. But yeah, it sounds like you're able to get enough performance out of Unity to do certain applications, especially with something like MiddleVR to be able to plug in all the sensors.

[00:27:32.080] Jason Jerald: Yeah, yeah, exactly. It's pretty amazing. Now, I haven't actually used the Unreal Engine myself, other than just briefly playing around with it. But my understanding is that's a super great tool as well. For indie developers, or for people first starting off, Unity's probably the better choice. Another option is to prototype things quickly in Unity; then, when you want the super-high-quality rendering, when you decide what you want, and you know after you've experimented, move to the Unreal Engine.

[00:28:02.442] Kent Bye: Yeah, the Asset Store in Unity seems pretty compelling, especially when you see things like the binaural audio plugins that are coming in, or, I know that the Kinect v2 has a plugin for Unity, but not Unreal yet. But I'm curious about the binaural audio plug-in that you mentioned. What type of thing was he offering? Was it a plugin, or what was he able to do there?

[00:28:22.266] Jason Jerald: Yeah, it was a Unity plugin. And so my understanding is it's similar to the way you can have, I forget the name now, the lighting in Unity, where you have, I'm blanking now, sort of the light points that are placed in the world. They're doing it in a very similar way to how that dynamic lighting can be done, kind of in the middle between static and dynamic lighting. Maybe I shouldn't go further into that, because I'm not an expert in that area, so I'll reduce the risk and leave that part to the experts.
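
Jason seems to be gesturing at something like Unity's light probes: values baked at fixed points in the world and blended at runtime. As a toy illustration of that probe-interpolation idea applied to sound (my own sketch under that assumption, not Impulsonic's actual technique), one could bake a reverb parameter at probe points and blend by distance:

```python
import numpy as np

# Toy probe interpolation: bake an acoustic property (reverb time here)
# at fixed probe points, then blend between probes at runtime by
# inverse-distance weighting, much like light probes blend baked lighting.
# The positions and reverb values are invented for illustration.

probe_positions = np.array([[0.0, 0.0, 0.0],    # open room
                            [10.0, 0.0, 0.0]])  # hallway
probe_reverb_s = np.array([1.2, 0.4])           # baked reverb times (seconds)

def reverb_at(listener_pos):
    d = np.linalg.norm(probe_positions - listener_pos, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)               # closer probes weigh more
    return float(np.dot(w, probe_reverb_s) / w.sum())

print(reverb_at(np.array([7.5, 0.0, 0.0])))     # closer to hallway -> drier
```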

[00:28:54.495] Kent Bye: Cool. Well, one of the things that you also have been involved with is SIGGRAPH. Maybe you could explain what SIGGRAPH is and how it relates to virtual reality.

[00:29:02.318] Jason Jerald: I haven't missed a SIGGRAPH since 1995. When I went in 1995, I was blown away, because I was at school at Washington State University in eastern Washington. And there were a couple of computer graphics people, but I was kind of, you know, there in isolation by myself. And I went to SIGGRAPH and I was just like, wow, I found my people. You know, I grew up in a town of 1,500 people. And so it was like people like me, you know, interested in computer graphics and, you know, cool technologies and all that. And I went back in 1996, and that's where I first saw Fred Brooks speak, and Mark Mine, who's now doing amazing things at Disney. And that further verified what I wanted to do with my life, which was virtual reality, because they were doing the coolest thing, the way they were interacting with these virtual realities. I also saw Paul Mlyniec, who will be at SVVR. He was giving this virtual reality demo at the Silicon Graphics booth. And it was like this virtual reality Legoland, and he was a Lego character, flying through this Lego town and putting pieces of Legos together and laying down neighborhoods of Legos. It just blew me away. To this day, that's still one of the coolest virtual reality demos I've ever seen. That really, you know, set it in stone: this is what I'm going to do with my life.

[00:30:25.528] Kent Bye: So you've also listed one of the services that you provide as real-time 3D graphics. Can you talk a little bit about what that actually means?

[00:30:36.216] Jason Jerald: Yeah, so that's my background. That's what I, you know, was doing back in the 90s: computer graphics, starting with modeling and animation, but then moving on to my true interest of rendering scenes, games, virtual reality, whatever it is, at, you know, 60 frames per second, so you can go interact with that environment. So I don't really focus on that nearly as much anymore. Right now, I mean, virtual reality is so hot, which is great, because that's what I'm truly passionate about, but that requires the computer graphics. And so that's really where my background comes in. That relates to the latency reduction, for example, optimizing systems to make sure they're running at a high frame rate, getting things working fast, which is so important in virtual reality.

[00:31:23.713] Kent Bye: Right. And you were also at the NeuroGaming conference and I watched a little bit of it and I kept hearing Oculus Rift being mentioned. So it just brought home to me that the emergence of virtual reality technology and NeuroGaming seemed to go really hand in hand. Can you kind of speak to what the scene was like there at the NeuroGaming conference?

[00:31:42.865] Jason Jerald: Yeah, it was. I love that conference. I went last year as well, because neuroscience is something where I understand the basics, but I'm largely, you know, not an expert in it. And it's so fascinating to me, and I learn so much when I go to a conference that I don't know a whole lot about. So, to try the different sensors, and, you know, kind of similar to virtual reality, you have to kind of experience it or try it for yourself to, you know, see how the computers will sense your state and such. And so it's similar in some respects to virtual reality, in that it can't be explained. And I think putting those together, the neurosensors and stuff, along with virtual reality, is like wide open. There's been so little research done in that area that it just seems like a huge potential for, you know, companies to get involved with that.

[00:32:35.022] Kent Bye: I supported the Kickstarter for OpenBCI, the open brain-computer interface, where you're putting on electrodes and paste. You know, there's this range of whether or not it's, you know, kind of passive, like an Emotiv, or actually putting pastes on your head, and somewhere in the middle where you get a little bit higher signal-to-noise, but it's a little bit less user-friendly, I guess. So I'm curious how that trade-off showed up in some of the products that were there, in terms of the ease of use of just throwing it on, or, you know, something a little bit more involved, but with better quality.

[00:33:06.028] Jason Jerald: Yeah. So some of the, you know, higher-end devices probably have a ways to go before consumers start putting those crazy things on their head. Although, you know, putting a head-mounted display up to your face, I mean, that seemed like a hard sell a few years ago as well. So maybe that will end up going more mainstream. But I suspect the initial commercial devices, I mean, there's commercial devices now, but I mean the ones that go big, that everyone wants one, I suspect they're going to start very simple. Now, on the other hand, you, of course, can't do as much with a simple device. And so there's going to be that trade-off, but that's, I think, going to be one of the challenges: getting people to put these crazy devices on their head.

[00:33:49.474] Kent Bye: And so what are the types of things that you can do to trigger a threshold of EEG events to be able to have something happen within virtual reality?

[00:33:59.702] Jason Jerald: You know, I don't think I'm qualified enough to talk about EEG, but what I can talk about is something like palm sweat or heart rate. When we were at the University of North Carolina, we actually showed this at SIGGRAPH one year, and ran a test studying how latency affects simulator sickness. But what we did, to get a sense of, to try to measure presence, is we measured palm sweat and we measured heart rate. And we had users go into a virtual environment, and they'd say, okay, this is cool. This is a virtual reality environment. And then we'd say, okay, now reach out and touch the table. And they'd reach out and touch the table. And what we actually had was Styrofoam blocks in place of where the virtual table would be. So the virtual world was calibrated with the physical world. And so then they were blown away. They were like, oh, wow, maybe this is real. And we put different things in there, like we put some curtains that you could play with, and we put a fan in there so you'd feel the breeze as you got near the window and such. And by the way, this was a physical walking space with a one-to-one mapping: if you walk five feet, in the virtual world you really walk five feet. And so we then opened up a door, and after they'd played with some simple physics and such, and kind of played around in the room, we'd say, okay, go ahead and walk through this door. They'd walk through the door, and there'd be a big hole in the floor, a big pit. And some people would freak out. Because if they didn't have that sense of touch, or passive haptics, being the Styrofoam blocks, then they'd say, okay, yeah, I get it, there's supposed to be a hole in the floor, whatever. But adding that sense of touch really increased that sense of presence. Some people still weren't convinced, however. And so what we did is we put an inch-and-a-half-thick piece of plywood on the floor over the hole and matched that up with a virtual piece of plywood. We called it the diving board; it kind of went out over the pit. And we'd say, okay, walk out on the diving board. They'd say, okay, yeah, this is pretty cool, but maybe still not think it's real. And then we'd say, okay, put your toe over the edge. And they'd put their toe over the edge, and that's when they'd freak out. You'd see the spike in the heart rate. And so that's definitely not EEG, but it's a way of sensing the human system as a form of input that's very different than what we think of as traditional input of a mouse and keyboard.
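
As a minimal sketch of how a physiological signal like that heart-rate spike might be flagged against a participant's own baseline (the data and the threshold here are invented for illustration):

```python
import numpy as np

# Flag moments where heart rate spikes well above the participant's own
# resting baseline, as a crude objective marker of a presence response.
# Sample values (bpm) and the 3-sigma threshold are invented.

heart_rate = np.array([72, 71, 73, 72, 74, 73, 95, 102, 98, 74])
baseline = heart_rate[:6].mean()      # calm exploration phase
sd = heart_rate[:6].std()

spikes = np.where(heart_rate > baseline + 3 * sd)[0]
print(f"baseline {baseline:.1f} bpm; spikes at samples {spikes}")
# Samples 6-8 (stepping a toe over the pit, say) stand out clearly.
```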

[00:36:34.547] Kent Bye: Wow, so you're looking at both the quantitative, you know, heart rate and palm sweat, but also their behavior as to whether or not they believe it's real, it sounds like.

[00:36:43.029] Jason Jerald: Yeah, and you know, that brings up a great point. That sense of presence, as well as, even more so, the sense of simulator sickness, is such a hard thing to test, because typically you give a questionnaire after the experience. But if it was a 10-minute-long experience, you don't have that immediate feedback; you don't know what affected them, what caused them to feel nauseous. And so if there's a way to, and I've had this discussion with some people, I don't know if it's EEG or what it might be. I was talking to one researcher out of Australia that is measuring the sweat on the forehead: if you're sweating on the forehead, you're probably getting nauseous. And so correlating that with simulator sickness symptoms. But as we come up with these measures, you know, more direct, objective, quantitative measures, then I think that's a really good way to measure simulator sickness and allow us to better understand it.
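
The correlation Jason describes, lining up a continuous objective signal with periodic self-reports, is straightforward to compute once both are on a shared timeline. A sketch with invented data:

```python
import numpy as np

# Line up a continuous, objective signal (forehead sweat, say) with
# periodic nausea self-reports and check how strongly they track.
# Both series below are invented; real data would come from a sensor
# log and in-experience ratings sampled on the same intervals.

sweat = np.array([0.1, 0.2, 0.2, 0.5, 0.9, 1.3])   # sensor units per interval
nausea = np.array([0, 0, 1, 1, 3, 4])              # 0-4 self-report scale

r = np.corrcoef(sweat, nausea)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong correlation would support the measure
```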

[00:37:39.780] Kent Bye: Oh, interesting. So you've mentioned serious games, and I know that you mentioned on the NeuroGaming panel that Sixense had a serious game grant that you were involved with. Can you talk about what serious games are and some of the things that you've been involved with?

[00:37:52.830] Jason Jerald: Yeah, absolutely. So, in this case, what we're doing is neuroscience education for 3rd through 5th graders. And the idea is to put the users sort of in the world so they can learn by doing. And so we use the Razer Hydras, and of course we'll be moving to the Sixense STEM at some point. But it's not just using the visual cues that are in a game. As, you know, game developers know, audio is very important as well. But there are really three primary senses: the visual, the auditory, and the kinesthetic, which is that sense of touch, or where your body is in space. And so if we can put the user's hands in the game as well, then we can do things like have them crawl with their hands through the brain. And the idea is to teach kids how their brains work, essentially. So you can imagine if a kid is, I don't know, bullied, or there's some stimulus in the environment that causes them to get upset. Us adults can use this as well. When someone screams at us or whatever, we naturally react. We have the stimulus-response thing going on. However, as we become more and more aware of ourselves and aware of how our brains work, you know, imagine a fifth grader being able to stop and say, wait a second, that's just my amygdala, or, you know, whatever part of my brain responding. And I don't necessarily need to take on the behavior of that first kind of instinctual response, that may not be empowering; I have the option to respond in a more empowering way and make different choices. And so that's kind of the high-level vision of what we're trying to do with that project.

[00:39:40.515] Kent Bye: Interesting. And going forward, since you are a consultant, do you foresee yourself continuing to consult or would there be an opportunity that comes up that would be so amazing that you would consider going full-time in virtual reality in that way?

[00:39:54.080] Jason Jerald: Yeah, so it's interesting you bring that up, because a recent opportunity came up. I wasn't really expecting it to be in academics, but I had an all-day interview yesterday at Duke University about being a visiting professor there. And I was blown away. The faculty there, it was such an open, collaborative environment, and they really had this passion for improving computer science education. And that was something that I always considered, oh, maybe someday I'll go back and teach, but kind of thought of as 20 years away. And so over the last month, I've been thinking about this more and more and become more excited about it. And that's kind of been simmering in my mind. So that may happen. Now, that doesn't mean I won't continue my consulting business. It's more that, you know, I'll be doing my consulting part-time instead of full-time. And I guess the nice thing about that is, you know, because unfortunately my time's limited, and I'm one of those guys that wants to do everything, I have to be very careful, so, you know, I just choose the coolest projects that I really want to work on, the virtual reality projects. The great thing about Duke as well is that they have some amazing virtual reality researchers. And they have a six-sided cave. I guess maybe some of your listeners may not be aware of what a cave is. CAVE stands for Cave Automatic Virtual Environment. It's very different than a head-mounted display, although some of the results are similar. You have multiple walls, say the front wall, a wall to your left, a wall to your right, and the floor, for example, that have stereoscopic images projected onto them. And your head is tracked. And so the walls kind of melt away, and you're able to perceive a virtual world that you're surrounded by. You see those objects as if they're actually in front of you. Or if the cave is large enough, you can actually walk around objects within the center of the cave. So it's a similar experience to a head-mounted display, and of course it has advantages over head-mounted displays, and head-mounted displays have advantages over it. It kind of depends what you're trying to do. And they have a six-sided cave: you are completely surrounded 360 degrees by this cave. They close the door behind you, and you turn around, and you're in the world. So you no longer have to virtually rotate with the analog stick on the controller. You just physically look around. I think it's a 10-by-10-foot space, so you can physically walk around as well. And so I'm pretty excited about that.
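
The "walls melt away" effect depends on rendering each wall with an off-axis projection driven by the tracked head position. Here is a minimal sketch of that math, following Kooima's generalized perspective projection formulation; the wall corners and eye position are hypothetical:

```python
import numpy as np

# Off-axis frustum for one head-tracked CAVE wall: given the wall's
# corners and the tracked eye, compute asymmetric frustum extents at the
# near plane (the l, r, b, t you would pass to a projection matrix).

def wall_frustum(pa, pb, pc, pe, near):
    """pa, pb, pc: lower-left, lower-right, upper-left wall corners.
    pe: tracked eye position. Returns (l, r, b, t) at the near plane."""
    vr = pb - pa; vr /= np.linalg.norm(vr)          # wall right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)          # wall up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn) # wall normal (toward eye)
    d = -np.dot(vn, pa - pe)                        # eye-to-wall distance
    l = np.dot(vr, pa - pe) * near / d
    r = np.dot(vr, pb - pe) * near / d
    b = np.dot(vu, pa - pe) * near / d
    t = np.dot(vu, pc - pe) * near / d
    return l, r, b, t

# Hypothetical 3 m front wall, with the viewer's head tracked off-center.
pa = np.array([-1.5, 0.0, -1.0])
pb = np.array([ 1.5, 0.0, -1.0])
pc = np.array([-1.5, 3.0, -1.0])
eye = np.array([0.5, 1.7, 0.5])
print(wall_frustum(pa, pb, pc, eye, near=0.1))  # asymmetric extents
```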

[00:42:28.508] Kent Bye: Interesting. And so if you were to describe the ultimate type of projects or problems that you want to solve, just so that people kind of know what you're really interested in and could help you work on, what are the open problems that you're really interested in continuing to research and develop?

[00:42:43.546] Jason Jerald: Well, one is 3D user interfaces, you know, how do we interact in virtual reality? It's very different than in, you know, a computer game or a 2D application, and taking that to the next level. And you know, some of those, I think I have a pretty good idea of what works and what doesn't work, but I certainly in no way come close to knowing all the answers. I mean, that's just wide open for anyone to get involved and take to the next level. I'm very interested in better understanding simulator sickness. We understand the basics of what's good and what's bad to do. I have some ideas for research studies that maybe I'll be able to do if I end up at Duke University, using neurosensors, whatever it might be, EEG, and relating that to simulator sickness for better measures, so that we can create these better worlds. For my consulting business, like I mentioned earlier with Unity, these tools make it so easy to build these virtual reality experiences right now. You can literally just plop in a model of a house and use the Unity plug-ins that Oculus has created, and they have done a great job of that, and put yourself in a house or in some sort of model in literally minutes. It's so easy to do. And so it's really easy to do virtual reality now. However, to do it really well, that's extremely difficult. And so it's about being able to make sure that these things that Oculus talks about in the document that you were mentioning earlier, their guidelines for virtual reality, those sorts of issues, what's really important to focus on, you're doing right. And especially if I end up taking this position at Duke, I may move to more of the high-level consulting versus the lower-level consulting of implementation and such, because right now I'm maybe half-time implementation, half-time higher-level consulting. So I think where I could offer the most value is helping create experiences that are engaging, exciting, and that don't get users sick.

[00:44:50.063] Kent Bye: Right. And finally, what's the best place for people to find you on the web and online?

[00:44:55.336] Jason Jerald: The best place is LinkedIn. It's pretty straightforward. If you, you know, just mention something about virtual reality, I accept the connection on LinkedIn, you know, as long as it's obviously not spam or some, you know, "we think you'd be good at selling car insurance" sort of thing that occasionally comes in. As well as my website, NextGen Interactions. Just, you know, click on the free consultation link there, and, you know, we'll get together and talk for a bit. I always love talking about these technologies, so I always love talking with anyone that wants to, you know, talk shop. Oh, I should give out my email, I guess. It's jason@nextgeninteractions.com.

[00:45:36.671] Kent Bye: Great. Well, thanks so much, Jason, again, for joining me today.

[00:45:40.414] Jason Jerald: Absolutely. Thank you, Kent.
