#796: VR Artist Omer Shapira: Prototyping Imagination & Designing for Vulnerability

Omer Shapira is a developer for VR who does both art and software development, and he’s interested in the reciprocal relationship between humans and technology. Shapira started working as a graduate student with Ken Perlin at NYU in 2012, and started developing for virtual reality in 2013 when an Oculus Rift DK1 arrived at Perlin’s lab. Shapira created immersive virtual experiments and art using high-end motion tracking, hand tracking, and vibration motors. He quickly realized that these were some of the richest experiences he had ever made, and he has continued exploring the frontiers of human-computer interaction, artificial intelligence, and robotics.

I had a chance to talk with Shapira back in July 2016 after a VR meetup in NYC, while I was traveling to New York to cover the International Joint Conference on Artificial Intelligence for the (still nascent) Voices of AI podcast. I talked with Shapira about his background in mathematics, linguistics, and visualization. He explains his rapid prototyping system for immersive design that involves a blindfold, Post-it notes, and your imagination. We also talk about the importance of designing accessible systems, which forces creators to home in on the affordances of specific modalities of input. He’s also very interested in virtual experiences that allow him to feel powerless, vulnerable, or unfamiliar, since he sees these as more interesting constraints, and because they make it more likely for him to cultivate empathy and awareness for people who don’t have able bodies.

After NYU, Shapira had some of his creative coding work appear in Jonathan Minard & James George’s CLOUDS documentary, which premiered at Sundance New Frontier 2014. Shapira then headed up the VR department at Framestore, where he worked on a number of cutting-edge VR advertising experiences including Interstellar VR, Merrell Trailscape, and Avengers VR: Tony Stark’s Lab. Soon after this interview was recorded, Shapira went on to NVIDIA, where he worked on systems that allowed you to train neural networks within a virtual environment and then deploy them to an actual robot. I interviewed him about that work in episode “#623: Training AI & Robots in VR with NVIDIA’s Project Holodeck” at SIGGRAPH 2017.

I’m looking forward to seeing how Shapira continues to apply his artistic sensibilities to the cutting edge of human-computer interaction, virtual reality, and artificial intelligence.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s a video of Omer presenting his thesis project, a game that uses space-time as a game mechanic to solve puzzles by altering objects via scrubbing time:

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So continuing on in my series of talking to VR artists about their process, this is a conversation with Omer Shapira. He's a developer for VR who's doing both art as well as software development. And this was way back in 2016, after a meetup that I went to after going to the International Joint Conference on AI. I had just done about 60 interviews about artificial intelligence. And Omer actually has a background in mathematics, and he's looking at different things like linguistics and natural language processing, and trying to think about these higher-order dimensions of mathematics and how you could give these different spatial metaphors within virtual reality, but also looking a lot at human-computer interaction. He spent some time in Ken Perlin's lab at NYU in the early days of the Oculus Rift in 2013. He was a graduate student there and just started playing around with some of the motion-tracked controls, and did lots of really interesting experiments at this cross-section of human-computer interaction, haptics, and accessibility. And Omer has a whole design philosophy that he uses that is trying to reduce the friction between your imagination and being able to actually design these experiences, and what are the best practices for being able to do that. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Omer happened on Saturday, July 16th, 2016 at a VR meetup in New York City, New York. So with that, let's go ahead and dive right in.

[00:01:46.410] Omer Shapira: So my name is Omer Shapira. I'm a developer for VR. I do both art and software development. And most of my art is based on a reciprocal relationship with computers. So it's either computers involved in an emotional or thought process with the person inside an experience. Other than that, I'm highly visual. So a lot of things that matter to me is having good vistas when it comes to VR. These are things I like a lot.

[00:02:17.788] Kent Bye: So maybe you could talk about some of your first projects of getting into VR.

[00:02:22.169] Omer Shapira: I think the first project I ever worked on did not actually start out as a VR project, but it started out as a creative tech project. I was working with a guy called James George on a project that included interactive animations, and we didn't exactly know where it was going. It was supposed to be a documentary film. At a certain point, we realized that all of the things that we were making are easily translatable to virtual reality or, like, 3D surfaces, and so all of our animations just landed nicely in a 3D world. And so the first VR project that I had actually shipped, unintentionally, was CLOUDS. But before that, for years, I was making a lot of things that I would call virtual reality, just not in head-mounted display-based media. I was working on a glove that let you play Pong when you were blindfolded, as in you would just hit air and it would respond to you with a small shockwave. And I'd be building a lot of interaction tools that would enable you to extend your thought, mostly because I was studying math, and one of the practices that you do a lot when you study high-dimensional spaces is you try to make them tangible in some ways. So the thing I was doing for a long time was, like, lying down on the floor trying to think about how do I visualize this complex function. And I would sometimes just reach out into the air and start twisting my hand. Oh, this is what the surface feels like. What if I had a tool to make that surface really react that way? And I started building tools that made those surfaces react that way. And a lot of my first experiments ended up being haptic devices to enable you to imagine something with your body. And that's what I really think got me into virtual reality. It's the ability to imagine things that are literally unfathomable.

[00:04:06.940] Kent Bye: Yeah, just on coming from a math background: having just come from doing about 60 interviews about artificial intelligence for the Voices of AI, there's a lot of math, a lot of math that I don't really understand. And so I was kind of forced to have these conversations with people and try to get this intuition about some of these mathematical equations, because there's all these posters with math. And so I would go up to them and be like, I have no idea what this is, but maybe you could describe it to me in your language. And one of the things that one of the professors of cognitive architecture told me is that we learn by analogy, by having some sort of idea of what something is. It's kind of like this. And I feel like math opens your brain up to all sorts of sophisticated analogies that are pretty much impossible to describe outside of math sometimes.

[00:04:57.132] Omer Shapira: Oh man, yeah. So this relates to my entire history of how I came to be a creator. For many years before even studying math, I was a filmmaker. So I started out doing films when I was 15, and because I'm from Israel, I did that in the Israeli army. And somewhere around the age of 25, I completely burned out. I didn't feel like I could make anything more significant, even though I was very good at manipulating emotions and I felt like I had a good game there. I knew the gestalt very well. But I really had a strong feeling that I was lacking the ability to describe. And this is before I knew about deep learning and about, generally, the semantics of description systems. And I sort of had the hunch, so at the age of 25 I went to school for the first time and studied math and linguistics. And it took a few years, and when it clicked, it really massively clicked, because I understood that actually a lot of the knowledge systems that I'm trying to describe with pen and paper do not have physical representations. So I am not on a lost journey for doing something that someone had already done before. This is really something that every single person going through studying math and physics, and even linguistics and parts of AI, builds; these things are definitely also capable of being described this way. This is something that you build in your head. You build this sort of memory maze inside your head of how you attach the problem to a metaphor, and how you attach a problem to a smaller set of descriptions. Because a lot of the things that you talk about when you talk about deep learning, they're like spaces that have 4,000 dimensions. How do you even describe that? What's the physical analogy for 4,000 dimensions? Four is hard, right? So what you do is you use things like scale. You use things that are easily translatable to a single dimension, or behave locally the same in all dimensions, and that actually happens a lot in math. You have stuff that may be a wild roller coaster when you look at it in very high-dimensional spaces, but actually when you look at it in small regions, you can imagine it being smooth and pretty much the same, and you can imagine the slope with your hand, because you know that most dimensions will act the same way. And there's a metaphor, I forgot the mathematician, I'm probably doing this a great disservice by saying this, but there's a mathematician who said that complex analysis is like the Disneyland of mathematical analysis, or the Disneyland for mathematicians, in the way that there are a lot of things that look fun but are also easy to describe and anticipate to a visual person. And I'm not the first school of thought to think that. There are books about visual complex analysis. And you can see how that makes you want to give it a physical description, a physical representation, or a visual representation. And VR comes naturally out of it because you can explore in three dimensions. You can explore with subtly moving your head forward. You can explore with using motion parallax, which is the subtle head nods that you have just by moving back and forth. You can already visualize depth. And the more that you do that, the more that your body starts to accept that analogy, and you can understand that you're there, and actually the way that your body responds to those things are different cycles of thought than what you have when you just apply rational thought to a process. And that's something I really like.
When you have visualizations that really function as activating different parts of your brain, that leads to insight. And that's what I mean by having the computer be part of the thought process.
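One way to make "it looks smooth and pretty much the same in small regions" precise is the first-order Taylor approximation (a standard statement of local linearity, not Shapira's own notation):

$$
f(\mathbf{x} + \mathbf{h}) \;\approx\; f(\mathbf{x}) + \nabla f(\mathbf{x}) \cdot \mathbf{h}, \qquad \mathbf{h} \in \mathbb{R}^n \text{ small},
$$

so no matter how large $n$ is, a differentiable function looks like a flat plane in a small enough neighborhood, which is why "feeling the slope with your hand" is a faithful local picture.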

[00:08:42.680] Kent Bye: Yeah. I feel that passion and excitement because I had similar kinds of breakthroughs this past week, talking to these people who are experts in knowledge representation. And they said this thing of, like, you know, when you speak your language... you know, I was asking, how do you visualize knowledge? And it's like, well, there's actually many different ways that you can represent it in nodal graphs. But he actually said there's a semantic model of language, that how we actually represent and store knowledge has a semantic structure as one dimension of it. There's a visual dimension and sort of other kinds of nodal connections and associative memory, but, you know, there's a big part of our memory that comes from language. And so the fact that you're studying linguistics and math, you're sort of, like, unlocking these different, you know, the mechanisms and architecture of the mind. And through that, I'm just really curious how you've been able to kind of use that drive of visualization and expressing these concepts, how that's come out through your creativity and virtual reality.

[00:09:37.247] Omer Shapira: Yeah, so the linguistics bit is... I will say that the way I was exposed to linguistics in university, on the theoretical side, did not assist a lot, because linguistics, the way it's taught in some universities, is pretty much locked down to theories that date back to Chomsky and are not easily provable with science. Like, there's a joke about linguistics being applied to fMRI studies, where you validate a crackpot theory by throwing an fMRI study at it. So I can't really relate a lot of these things to what I know about linguistics, but the parts that I do know about, like natural language processing, which is something that I've dealt with a bit, immediately call for visualization, because in natural language processing, many of the approaches that are available right now that are not through neural networks are still high-dimensional vector spaces where, when you are developing them, you do not have a lot of insight about what you're doing. You really need a good way of seeing what you're doing. So through the process of actually making my final undergrad project, the first thing I did was I created a visualization for this 150-dimensional space that I was trying to solve my syntax learner, which is the project I was working on, in. And I realized that actually the visualization was helping me develop the algorithm significantly faster than anything else. And it's available online. It's called Syntactic. You can look it up attached to my name and see if you can understand the algorithm. I bet most people can, actually, after a while. And through that, I was able to debug a lot of the issues that I was having with the way it was parsing Wikipedia, which is what I was working on at the time. And I think that was, like, the first point where a lot of my theories that this would be really useful if I could just have some tangible representation of it really manifested, because immediately I saw the problem, and I was able to address it, and that was amazing.
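As a rough illustration of what a visualization like that buys you, here is a generic sketch (not Shapira's Syntactic tool; the 150-dimensional vectors are random stand-ins) of projecting a high-dimensional space down to its three largest principal components so you can simply look at it:

```python
# Generic sketch: project a high-dimensional vector space down to 3D with PCA
# so its structure can be inspected visually. Not the Syntactic tool itself;
# the 150-dimensional vectors here are random placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
vectors = rng.normal(size=(500, 150))        # 500 items in a 150-dimensional space

# PCA via SVD on the mean-centered data: keep the 3 directions of largest variance.
centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:3].T              # shape (500, 3)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(projected[:, 0], projected[:, 1], projected[:, 2], s=5)
ax.set_title("150-D vectors projected onto their top 3 principal components")
plt.show()
```

With real feature vectors instead of random ones, clusters and outliers in a projection like this are often enough to spot where a learner is going wrong.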

[00:11:35.398] Kent Bye: And maybe you could talk about where you went from there, from your first projects, what came next?

[00:11:40.367] Omer Shapira: Oh yeah, so I came to the United States in 2012 to study human-computer interaction at NYU, and a lot of it involved haptic devices, because I was really into this idea that I couldn't explain at the time of virtual space. Mind you, this was a year before the DK1 had come out, and I was completely oblivious to it, and I thought virtual reality was dead at the time. I thought it was still far away in the future and still far away in the past. It didn't make sense to me that this was the right time, but I was looking for ways of doing essentially exactly that. I was looking for ways to feel something while being blind. And actually, the way I was describing it at the time was either using the mathematical representation of saying, oh yeah, I put my hand in space in order to feel the extra dimensions, or, and this is the analogy that I still think is most relevant, I want to be able to learn the granularity of the space near my body, which was at the time a largely unexplored region in media art, because media art at the time was largely, hey, this is a Kinect, do something really wild with your hands and see what comes out. And I was severely uninterested in that. I really wanted to make granular interactions. So I was building a lot of projects around that. And that led me to join a lab that was led by Ken Perlin at NYU. Ken Perlin is this computer graphics legend who now deals mostly with human-computer interaction. And in the first week that I joined his lab, he said, oh, look what I just got in the mail. And it was a DK1. It was an Oculus DK1. At the time I didn't know Unity at all. He just told me, try something out with it. And I took a bunch of stuff that his lab was building with motion tracking, with ten-year-old motion tracking equipment, and just added some of my own devices. I added an encodable turntable, and I added some tangible objects. And we made sort of an experience with tracked hands that did not involve the Kinect at first. We eventually put it on a Kinect because it had to be, like, portable, but we started out with really fancy, fine-grained motion trackers, and I felt that was really so good, because, like, for the first time, I can feel and see something. And this is, mind you, with DK1 technology, the pixels were super rough, and I would still skip lunches to develop because you would get sick every day. But it was really promising because you could really feel the sort of stuff that you're touching. And so I eventually started adding stuff to it, right? I made some art and put it inside Unity, but then I took the tweezers that we were using as encoders, as, like, you know, the grabbing devices, and I added special vibration motors that we ripped out of Nintendo DS cartridges. And just, like, you know, an Arduino network protocol from Unity to some haptic shock that you get. And it was super useful because you can start grabbing objects in space and learning where to grab them, because if you put your hand through them, you would feel something different. You would not be able to grab them. So all of a sudden these things clicked, and we made these, like, ultra-rich VR experiences that you couldn't really share because nobody had the devices, but whoever tried it was like, oh man, this button feels like a button, and you can grab this cube because it's real. And that's where I felt triumph.
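The grab-feedback loop he describes can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the lab's code: the original setup was Unity talking to an Arduino, and the serial port, baud rate, pulse command, and helper names here are made up.

```python
# Hypothetical sketch of the idea described above: when the tracked hand enters
# a virtual object's bounds, send a vibration pulse to a microcontroller driving
# a small motor. The port, protocol, and "V200" command are invented for the
# sketch; the original used Unity and an Arduino rather than Python.
import serial  # pyserial


def inside_box(point, center, half_extents):
    """True if a 3D point lies inside an axis-aligned box."""
    return all(abs(p - c) <= h for p, c, h in zip(point, center, half_extents))


def haptic_loop(get_hand_position, port="/dev/ttyUSB0"):
    """Poll the tracker and pulse the motor on the frame the hand enters the cube."""
    arduino = serial.Serial(port, 115200, timeout=0.01)
    cube_center, cube_half = (0.0, 1.2, 0.5), (0.05, 0.05, 0.05)
    was_inside = False
    while True:
        pos = get_hand_position()            # (x, y, z) from the motion tracker
        now_inside = inside_box(pos, cube_center, cube_half)
        if now_inside and not was_inside:
            arduino.write(b"V200\n")         # vibrate for ~200 ms on first contact
        was_inside = now_inside
```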
And that was, like, the first moment that I thought, okay, this is by far the richest experience that I've ever made while pretending to be an artist. I better focus on this because I'm onto something. So I built a few more experiences. My thesis in that HCI program was a four-dimensional video game that allowed you to navigate time and space together in order to help this female physicist escape from this simulator that she built. The game's now being developed by my wife. It's called Horizon. And one of the things that I kept being obsessed with was exploring higher dimensions, and that was a realization of that. And now I've pushed her into doing that in VR.

[00:15:56.800] Kent Bye: So just to expand on that, the higher dimension being that you go through the scene on one time frame, but you can fast forward to another time. Maybe you could talk about how you're scanning through time through the same scene in a story, how that's constructed.

[00:16:10.512] Omer Shapira: Oh, man. OK, I'll talk about Horizon. OK, so Horizon is based on the idea that we are looking at time in a very constrained way. We're looking at it as an arrow, right? When I was doing film, I was really interested as a video editor in what would happen if I were able to see a span of pixels across many points in time. Because I thought, hey, this is actually productive. I can understand when an actor goes in and out of a frame. This is something that I've been obsessed with since I was 15. And all of a sudden, after having studied math and having gained a lot of ability in programming, I thought, wait, I can create this. And the first thing I created was an interactive slit-scan experience that you can do with video. You take a bunch of video and, instead of just representing the pixels in x and y, you represent the pixels in x and y and over t, which is time, and the pixels are now a volume, right? They're not just a sequence of frames, they're a volume that you can interpolate. So instead of taking the intersecting plane that is fixed to one point in t, and showing all the x and all the y at that point, you can shift that plane a bit so it varies through x, y, and t all together, and shows you the intersection. And that led to some beautiful results. I then published the code for that, and realized that, actually, you know what, I did that in one afternoon after three weeks of research; maybe I should focus on something else, because I really want to make art at the end of this. So I started thinking about ways I can do that with not video, not pixels, but three-dimensional objects. Which is, I'm already in x, y, and t land, so I'm already in three dimensions, but now that I'm in x, y, z, and t land, I'm in four dimensions. So... that was a big problem for a while. And I developed a game that essentially was recording objects that went through the physics engine at different points in time. And it was just saving them in both space and time. And you can rewind through the objects, but asymmetrically. You could rewind through objects that were closer to you faster than through objects that are further away. So it led to stuff like: an I-beam is falling from a crane, and you can rewind one part of the I-beam while keeping the other on the floor, and all of a sudden the fallen I-beam has turned into stairs, and you can walk on those stairs. So, you know, it's sort of a puzzle game that enables you to create objects from actions that have already happened; you just need to explore an existing space. And I found that really cool because it's actually an observation on causality, and it's looking at time and space from a different angle than most video games did. So I released the video for the game. It got a lot of attention. I had to get a day job, so my wife started developing it into a video game, and that's what she's still doing.
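The slit-scan he describes, sampling a tilted plane through the x-y-t volume instead of a single frame, fits in a few lines of NumPy. This is a minimal sketch assuming the video has already been loaded as an array; it is not the code he published.

```python
# Minimal sketch of the slit-scan described above, assuming the video has already
# been loaded as a NumPy array of shape (T, H, W, 3). Rather than showing one fixed
# time t for every pixel, each output column samples a different frame, so the
# result is a plane tilted through the x-y-t volume.
import numpy as np


def slit_scan(video, max_shift=None):
    t, h, w, _ = video.shape
    max_shift = (t - 1) if max_shift is None else max_shift
    out = np.empty_like(video[0])
    for x in range(w):
        # Time varies linearly with x: frame 0 at the left edge, max_shift at the right.
        ti = int(round(x / (w - 1) * max_shift))
        out[:, x] = video[ti, :, x]
    return out
```

Sweeping max_shift over successive output frames animates the plane through the volume, which is what gives the interactive version its scrubbing feel.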

[00:19:05.413] Kent Bye: And so you mentioned that you're going to be giving a presentation at Art&&Code. What are you going to be presenting there?

[00:19:11.395] Omer Shapira: Oh, so I'm going to be talking at Art&&Code. I'll give a little background on this. Art&&Code is a conference that is focused more on art. Beautiful. Keep it that way. Just edit that in. So Art&&Code is a conference that's focused more on art than on code. It's using code as an expressive medium. And it's led by one of the OG media artists, a guy called Golan Levin, who's a Media Lab alum, and he's now leading the Studio for Creative Inquiry at Carnegie Mellon University. And he has collected a very, very rich and diverse set of artists working on virtual worlds. It's not necessarily the sort of thing that you would imagine is based on headsets, because virtual reality is richer than headsets. You know, you can do a virtual reality experience for the blind and for people who are both blind and deaf. And he is looking at the broader spectrum of how to describe a reality that's weird. And invariably, it has been called Weird Realities. That's the name of the conference now. And I'm going to be presenting there my prototyping methods, which I can talk about. So during my time working, like, from working on math problems through my actual work inside VR, I was always finding ways to design around my body. I attribute this a lot to hearing a story when I was a kid about one of my favorite bands, called Pulp. The singer, Jarvis Cocker, had tried to impress a girl by jumping between two windows when he was 15, and he ended up in a wheelchair. And in a lot of the early shows that Pulp had, for many years, it was him dancing in a wheelchair, only with his upper body. And I realized that, wait, this is actually really, really granular. You can do a lot of things by just keeping your hands close to your face. You can draw and describe, and touching your face is so granular. I want to study that; that seems so cool. Because, you know, I didn't even realize that locomotion would ever be a solved problem in VR. Like, to be honest, I thought that the motion capture spaces that we created at Ken Perlin's lab were pretty much going to be outside the mainstream for very long. And I started working on prototyping mechanisms for making good VR, and this is my first job after school. I worked at Framestore, I did some interactive experiences, and I figured out some methods of developing quickly that I just carried on through the departments that I ran after that, that are all based on designing around a person's body and being sensitive to their body. And my favorite way of showing that to people instantly is by getting them a stack of Post-it notes, a blindfold, and a friend. And what they do then is they blindfold themselves, their friend is not allowed to speak, and they just point at things and say, this is what I want here, this far away from me. And the exercise goes this way: while you remove the blindfold, you're not allowed to say anything, you're just allowed to look around. And you can only comment on things while you're imagining them, i.e. with the blindfold on. And I limit the amount of time that you can lift your blindfold. So suppose, you know, after the third time you're done. And I let people design, like, whichever experiences with that. I typically tell people, okay, design a magic carpet ride. What does it look like? Right? And they start saying, okay, well, really close to me I want a panda bear that's pink. And I want it to be encouraging me to jump into this lake that's made of candy.
And you see people just scribing notes and putting Post-it notes everywhere. And these are really, really rich rules that they're creating that they could not be creating by just drawing them, because your imagination does not work that way in 2D. And they couldn't be creating those even in Tilt Brush, because really, you think much quicker than what your tools currently enable you to do. And you also can't really be doing those in game engines, per se, because you're still limited to a mouse and keyboard or whatever expression tool you have there. So really, just labels and shorthand for your imagination is the quickest way to go, right? So that's what I try to show other people. This is actually how I design, by the way. So, like, yes, you can hold me to it. You know, I put Post-it notes on things, and I spend a lot of time with my eyes closed. And actually, when I am designing haptics, I black out my headset so I can focus on what I'm experiencing, not my sensor fusion. Yeah, so, and I think that's a really quick way for artists to get, like, a direction of where they're going, and then they can work on putting stuff in game engines. And I think that, like, you know, game engines currently, their state is pretty limited relative to what we could actually be achieving in VR. So, while they're cool and they're enabling a lot of art, they're not enabling the ultimate form of art, which I don't know if it exists, but, like, they're definitely limited. And I think that working with your imagination and the quickest tools possible is definitely the way to work it. And there's another thing about that, where, you know, if you're trying to find a way to measure how bad a tool actually is in VR, then you can think about it this way. How quickly do you imagine something transitioning from A to B, versus how quickly you can do it in real life, versus how quickly you can do that with a tool in virtual space, right? So let's take the example of: I would like to move the Arc de Triomphe in Paris to the top of Everest. In my imagination, it pretty much happens instantly, right? I don't even have to imagine constraints. I just lift it and put it elsewhere. In real life, there are probably some physical constraints. So let's say you can build a machine, like, you know, with the help of Elon Musk, you can get something that will take you X amount of time. And in a game engine, you can do that in probably X divided by 1 million. So you have, like, sort of an X-divided-by-one-million tool, right? A one-to-one-million efficient tool. But if you try to do the same with, like, I don't know, I want to solve these Operation-style games where you can't touch wires, and whenever you touch a wire, a buzzer hits really hard. It's significantly harder to do that with virtual tools than it is in real life. So this is, like, I don't know, a 100-to-one tool, right? And I try to focus on the opposite. I try to focus on making tools that are 1 to 10, or 1 to 100, or 1 to 1 million, making the quickest way between you and your imagination, right? Not the quickest way between real life and the virtual world. And that's the sort of stuff that I really encourage people to be using.
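Put as a rough formula (this notation is a paraphrase of the heuristic, not Shapira's own), the measure he is describing is the ratio of how long a change takes with the tool to how long it takes against a reference:

$$
R \;=\; \frac{t_{\text{tool}}(A \to B)}{t_{\text{reference}}(A \to B)},
$$

where in his examples the reference is real life (moving the Arc de Triomphe: roughly $1{:}1{,}000{,}000$; the wire-buzzer game: roughly $100{:}1$), and his argument is that the reference worth optimizing against is the imagination, i.e. good tools keep $t_{\text{tool}}$ as close as possible to how fast you can merely imagine the change.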

[00:25:51.390] Kent Bye: Yeah, and just from my early experiences with programs like Tilt Brush, I think that these tools that allow us to lower the barrier between what we are thinking and being able to express it are going to be one of the things that makes VR very sticky and super compelling for so many different reasons. So I wanted to ask about your approach to VR, and whether or not you kind of see it as a reflection of our humanity. And you had also mentioned just kind of a focus specifically on vulnerability.

[00:26:20.267] Omer Shapira: Oh yeah. I guess there's this... you know, in the realm of cultural criticism of technology, VR has had, I believe, an unfairly bad rap at first for being something that looks like a proxy for male fantasies. And I really don't subscribe to that notion, because, yes, that is a possibility, but I don't really see that as an enjoyable possibility. It could be just me, but I don't... You know, if I have something like Doom in a video game, I don't really feel the need to be empowered inside VR. Why? Why is that good, right? I am also a white male in this world, so I am not the target audience for feeling empowered anywhere, pretty obviously, in our culture of technology. But the sort of experiences that I could craft are experiences that are based on my observations of other people and my observations of experiences in real life, and I really want to make tools for making people feel powerless or vulnerable or unfamiliar. So, like, having known some people with disabilities, I keep trying to imagine what it is like to build an experience for a person who can't use the entire spectrum of a Vive. What if I don't have two functioning limbs? Choose which two, right? Like, if you can only walk around but you're afraid of your balance because your hands aren't functioning too well, then that's a problem. If you can only move around using your hands but not your feet, that's not what current mainstream VR is being utilized for. A lot of AAA games just made it really obvious that you want to move around as a shooter. I don't subscribe to that at all. I want to have experiences in which I can be fixed in place, where I don't know my control scheme, where my control scheme is either limited or learnable. I can learn to empower myself after overcoming my vulnerabilities. Or having experiences that I can't have IRL. Like, for example, the sort of things that we see now in... I'll try to phrase that gently, because I'm here on a visa. The sort of inequality that you see in the United States in subjective experiences as different people is something that I don't fantasize about experiencing, but I definitely would like to experience that, because that's what art is about. It's about putting yourself in a perspective that you cannot see in your daily life, right? So I definitely think there is more power in VR in those things, and actually the potential of good VR by making you more powerful and more ominous is kind of limited. So these are the things that I'm looking to explore. And, you know, one of the things I keep thinking about is how can you make a compelling handicap. And I'm talking about handicap not in the way of, like, people being handicapped; I'm talking about the professional game term of handicap. Like, there's a subset of board games that you can play with a subset of their rules and make it equally hard for an experienced player and a non-experienced player to play, and they will have a good time, right? They will be activating all of the parts of their brain, and they will be thinking longer and harder because they know fewer tricks. And in the general typology of games, that's a very compelling type of game that I would like to extend to other areas of my life. So I'm thinking, what's a good VR experience that I can enjoy whether or not I'm blind? What does that look like? What does that sound like? What does it feel like to have a VR experience where you don't need either your eyes or your ears?
And one of my favorite... I really think it's a VR peripheral; one of my favorite VR peripherals is this device made by Hiroshi Ishii in 1998 called InTouch. Hiroshi Ishii is now a professor at the MIT Media Lab, for the group called, I think it's called Tangible Media. And he built a device made of three wooden rods, just kind of like a massage contraption, that he uses for just, like, moving the palm of his hand on them. And there's another replicated device that replicates his motions. But he can put that, not in the same room, he can put that on a different continent. And he built that in order to do something while talking to his mother on the phone. So they're almost holding hands. He's moving something and she can feel his exact motions. And if you think about it, this is very high granularity for a very low granularity potential, right? You can only do one thing. You can move something in a circular motion, but if you do it right, then you get very, very high detail from that one thing. So you're actually creating a sense of immersion. You're actually creating a virtual reality. And I really think that talking about headsets as the ultimate medium of virtual reality, and note that I'm saying that headsets are a medium and virtual reality is not, I think it's silly, because virtual reality is not tied to one form of output. In that way, it's not a medium. Virtual reality is not a medium. It's the result that you get from putting someone in an environment or in a situation they confuse with reality, right? And that's what I really like to focus on. So these devices are actually more important to me because they are a better representation of reality than the limited amount of pixels that we currently have in displays.

[00:31:48.473] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:32:00.085] Omer Shapira: So what I haven't seen done well yet, and what I have very serious doubts about, but what my ultimate experience would be, is recreating the same experience that I had watching music videos as a 15-year-old. As a hormonal 15-year-old, watching those same amazing music videos, like Michel Gondry music videos and Spike Jonze music videos, and having the equivalent done in virtual reality. And the equivalent, by the way, is not, like, high-paced cuts. It is the ride, right? And that's how I always think about immersive media, because screen-based media can be immersive. You think about taking people for a ride, right? If you control the pace that their psyches go through, if you control the cycle of their emotions, then you have given them an immersive experience, because they were with you throughout the ride. The ultimate form for me still is music videos. I just love that format so much. And there is no equivalent right now in virtual reality that can immerse you that much, because I want to dance in that when I'm in virtual reality, or I want to be completely incapacitated. I want to have a very physical experience of that. But what I really think the ultimate form is, and I hate saying this because I sound like a nerd, is an extension of my proprioception in mundane everyday things, right? I want to be able to have a safe space for myself. I want to be able to walk around in a safe space, be able to understand how far objects are from me in the very short term, and, like, in the very, you know, the few meters away, to know that nothing is a threat to me, because virtual reality will inevitably be real enough for that to become an issue. And I see that, like, you know, just making a calm, precise experience that allows you to focus on your internal trail of thought is significantly more important than any sort of forced-upon storytelling that a lot of creators are trying to make right now.

[00:34:08.153] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say?

[00:34:11.795] Omer Shapira: I really urge people to be careful when designing for virtual reality and to use better metaphors. A lot of what I see currently, when I judge hackathons or look at finished virtual reality products, is the vast difference in approaches between well-thought-through projects and hot takes. And I think the vast difference is the fact that people don't think about the senses first. They don't think about care for the body and the emotional process that you're going through first. And I really want to urge people to think that way. To start from their internal systems, and then go on to visuals or whatever the thing that's supposed to function is. And the other piece of advice that I normally give to people is to think like cave dwellers. Start with the simplest tools that you can. Start with whatever connects to your hand. Explain why it should connect to your hand and why it's a good tool, and start developing on that, because you will invariably develop a sensibility to it.

[00:35:17.838] Kent Bye: Awesome. Well, thank you so much. Thank you. So that was Omer Shapira. He's a VR developer who is doing both art and software development. So I have a number of different takeaways from this interview. First of all, well, just first off, this rapid prototyping technique that Omer is talking about, it sounds pretty simple, but it's actually, I think, pretty profound to think about how he's trying to create a process by which you're really just trying to use your imagination. And so he has this game where you put on a blindfold and you can't look at the environment. You just have to point at things. And then somebody is there with a Post-it note who's just making these notes and annotations. And then you can lift up your blindfold and maybe look at the scene, but you only have a limited amount of time to be able to do that. And the idea is to try to reduce the friction between your imagination and being able to get everything out of your mind, and then from there, to be able to then jump into a game engine. Because he's looking at something like what it would take to actually do things in real life; there's a huge difference in what it takes to do something in a virtual environment, but then even more so if you do it just with your imagination. And so he's just trying to create this technique to be able to reduce the friction to getting into the full breadth of your imagination. And so he's been developing a series of different rapid prototyping techniques to be able to actually do that. The other thing that was really striking was just the connection to mathematics and Omer's background in both linguistics and mathematics and some of the stuff that he's been doing in artificial intelligence. I had just come from the International Joint Conference on Artificial Intelligence, where I did about 60 interviews for the Voices of AI podcast, which I've launched about five episodes of over three years later. I've still got over a hundred episodes of that that I haven't gotten into. And I hope to get into a little bit more of a rhythm like I'm doing here. I'm publishing these 10 interviews about VR artists talking about their process. And I hope to kind of dive into a number of different other podcast projects that I've been working on in both mathematics and philosophy and artificial intelligence. But I can definitely point to this conversation with Omer, coming from the International Joint Conference on AI, that started to really rekindle my interest in mathematics. I think also talking to Jaron Lanier, who also has a mathematics background, and seeing how his mathematics brain has led him to a lot of different insights into virtual reality. And if you go back to Ivan Sutherland and read The Ultimate Display, there is quite a big mathematics inspiration for even Ivan Sutherland, who wanted to walk through this mathematical wonderland. So it sounds like a lot of the work that Omer has been doing over the years is trying to find different ways to do natural and intuitive visualizations, whether that's through actually visualizing things in these higher dimensions and trying to find ways to make a balance or to collapse it in some ways, but also thinking about other things like haptics and human-computer interaction, and just a lot of the work that he's been doing with HCI and haptics, and also this whole perspective of accessibility and trying to reduce the amount of ability that you have.
For him, being able to actually focus very clearly on some of these other multimodal inputs within virtual reality, but also just taking this whole perspective on what it means to create art that's going to build empathy for a perspective that you may not have. He was saying that he doesn't want to go into VR and feel empowered. In fact, he wants to have this opposite experience of feeling powerless and vulnerable and unfamiliar, just to build the types of empathy for people who have those different types of experiences in their life already, but to play with what it means to go into these different environments and to work with the constraints in order to allow you to either amplify some of the senses that you have available, or to just think about, in general, this accessibility design, to say, like, what would it mean to design a virtual reality experience that people who are blind and not blind could enjoy equally? Lots of interesting stuff about space and time and the game that he was developing called Horizon, which was his thesis project, in which it sounds like you were able to scroll things through space and time and be able to create new objects by moving things through space and time, which sounds like a really interesting game mechanic, but really breaking through this concept of an arrow of time that is only going in one direction. Also just playing with concepts of causality, which I think is a fascinating concept and something that you can do quite well when it comes to virtual reality. I think of something like SUPERHOT, which is drawing this connection between how you move your body through space and how that impacts what happens in the environment, which I think is a super compelling mechanic that's uniquely suited for something like virtual reality. So Omer had been working on this project with James George. He was doing some creative coding types of artwork that ended up getting featured in this CLOUDS documentary that got into Sundance. And then they had realized that a lot of the foundation of their work was very well suited to be able to just port into virtual reality. And so the CLOUDS documentary that debuted at Sundance 2014 actually had a virtual reality component to it. I remember when I got my Oculus Rift DK1, I bought it on January 1st, 2014, I was seeing that there were these different virtual reality experiences that were at Sundance, and it just was really striking to me to see that specific project, the CLOUDS documentary. And I think actually after I talked to Omer, I went to go do an interview with James George about the CLOUDS documentary and Depthkit, this project they'd been working on, which was a lot of technology of using depth-sensor cameras to be able to do this type of new volumetric filmmaking. And that has been a solution for volumetric capture that has continued to develop and evolve. But it sounds like this was a project that Omer was working on, and it had a virtual reality component that didn't start as a VR project. And then he went on to Framestore and did a lot of different experiences. Framestore was working with Hollywood and doing lots of different experiences as well. And then he went on to work at NVIDIA, where he ended up working with robots and artificial intelligence. And I ended up doing an interview with him about a year after this at SIGGRAPH, where he was debuting an artificial intelligence network that was trained within virtual reality.
So it was an AI that had a virtual simulation, and then they were able to train this robot to do these different tasks. And then they took that neural network and put it into an actual robot that was able to actually perform a lot of those tasks that were trained within these virtual simulations. And so he's doing a lot of really interesting work, pushing the limits of robotics and artificial intelligence and the cross-section there. So that's all that I have for today. And I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
