Omer Shapira is a VR developer who works in both art and software development, and he’s interested in the reciprocal relationship between humans and technology. Shapira started working as a graduate student with Ken Perlin at NYU in 2012, and began developing for virtual reality in 2013 when an Oculus Rift DK1 arrived at Perlin’s lab. Shapira created immersive virtual experiments and art using high-end motion tracking, hand tracking, and vibration motors. He quickly realized that these were some of the richest experiences he had ever made, and he has continued exploring the frontiers of human-computer interaction, artificial intelligence, and robotics.
I had a chance to talk with Shapira back in July 2016 after a VR meetup in NYC, while I was traveling to New York to cover the International Joint Conference on Artificial Intelligence for the (still nascent) Voices of AI podcast. I talked with Shapira about his background in mathematics, linguistics, and visualization. He explains his rapid prototyping system for immersive design, which involves a blindfold, Post-it notes, and your imagination. We also talk about the importance of designing accessible systems, which forces creators to hone in on the affordances of specific input modalities. He’s also very interested in virtual experiences that allow him to feel powerless, vulnerable, or unfamiliar, since he sees these as more interesting constraints, and because they make it more likely for him to cultivate empathy and awareness for people who don’t have able bodies.
After NYU, some of Shapira’s creative coding work appeared in Jonathan Minard & James George’s CLOUDS documentary, which premiered at Sundance New Frontier 2014. Shapira then headed up the VR department at Framestore, where he worked on a number of cutting-edge VR advertising experiences including Interstellar VR, Merrell Trailscape, and Avengers VR: Tony Stark’s Lab. Soon after this interview was recorded, Shapira went on to work at NVIDIA, where he built systems that allowed you to train neural networks within a virtual environment and then deploy them to an actual robot. I interviewed him about that work in episode “#623: Training AI & Robots in VR with NVIDIA’s Project Holodeck” at SIGGRAPH 2017.
I’m looking forward to seeing how Shapira continues to apply his artistic sensibilities to the cutting edge of human-computer interaction, virtual reality, and artificial intelligence.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s a video of Omer presenting his thesis project, a game that uses space-time as a game mechanic, letting players solve puzzles by altering objects while scrubbing through time:
This is a listener-supported podcast through the Voices of VR Patreon.