
Alex Schwartz is the Chief Scientist and Founder of Owlchemy Labs, and he talks about the process of developing Job Simulator with Valve’s SteamVR and the HTC Vive. They wanted to create a series of mini-games, like a WarioWare of VR, and he shares a bit more of the backstory for Job Simulator. They have a number of different experiences planned beyond the chef and bartender simulators that include engaging two-handed interactions.

Some things didn’t work, like artificially locomoting the player, so they had to design spaces that players could physically walk around in. They also had to figure out how to grab objects, how to handle departures from reality like object penetration without it feeling fake, the downfalls of infinite chopping, and how to conserve angular momentum on objects that are thrown (a sketch of that last idea follows below).
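
For the thrown-object behavior, here’s a minimal sketch of one common approach: sample the object’s recent tracked poses and carry the estimated linear and angular velocity over at the moment of release. This is an illustrative Python sketch under my own assumptions, not Owlchemy’s actual implementation; the class and parameter names are invented.

```python
import numpy as np

class ThrowTracker:
    """Toy velocity estimator for a grabbed object (hypothetical names)."""

    def __init__(self, window=5):
        self.window = window   # number of recent pose samples to keep
        self.samples = []      # list of (time, position, axis_angle) tuples

    def record(self, t, position, axis_angle):
        # Record one tracked pose; axis_angle is a 3-vector whose direction
        # is the rotation axis and whose length is the angle in radians.
        self.samples.append((t, np.asarray(position, float),
                             np.asarray(axis_angle, float)))
        if len(self.samples) > self.window:
            self.samples.pop(0)

    def release(self):
        # Finite-difference the oldest and newest samples in the window.
        # (For a small window, differencing axis-angle vectors is a
        # reasonable approximation of angular velocity.)
        if len(self.samples) < 2:
            return np.zeros(3), np.zeros(3)
        (t0, p0, r0), (t1, p1, r1) = self.samples[0], self.samples[-1]
        dt = t1 - t0
        return (p1 - p0) / dt, (r1 - r0) / dt   # linear m/s, angular rad/s
```

On release, the estimated angular velocity gets applied to the physics body along with the linear velocity, so a flicked wrist visibly spins the object instead of it leaving the hand dead.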

I asked about the Chaperone System, but he couldn’t share any more details about that yet. Some of the other open questions are how to adaptively scale the room size while also accounting for the ergonomic and structural design problems that come up in walkable VR experiences. The goal is that it should be fun in a wide variety of spaces.

Alex said that the experience and crunch leading up to GDC felt like a war with all of the API updates, iterations, and bug fixing. It was like “a band of brothers coming out the other side of a traumatic experience,” but at the same time it was the coolest thing that he’s ever done in his development career.

Alex also helped to organize a positional tracking VR game jam, and Valve came out to preview some of their hardware prototypes.

He’s very grateful to have been a part of the process of developing a demo for GDC, and he sees the potential for VR to change every industry and in the end change the world.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Ben Lang is the Executive Editor at RoadtoVR.com, and he shares a lot of his impressions from the many VR announcements and news items from GDC. Ben has been exclusively covering VR industry developments at GDC for the past three years, and he says that this year has been by far the most exciting.

Ben is also currently holding a Reddit Ask Me Anything thread today, and so go ask him a question here.

Here’s all of the news that we discuss coming out of GDC:

  • 0:52 – The Lighthouse tracking system, Valve’s HTC Vive, and the SteamVR controllers. Valve is planning to release Lighthouse to whoever wants to use it. It’s the best tracking for VR in large spaces that we’ve seen so far, and the Lighthouse system is flexible enough to track multiple items.
  • 3:41 – More details on how the Lighthouse tracking system works with an X & Y sweep. It’s 100Hz tracking that gets fused with other sensor data (see the timing sketch after this list).
  • 5:06 – Laser sensors are also on the HMD, and more about how the laser sensor timings work
  • 6:20 – Tracking multiple people and potential occlusion issues. Can always add more laser base stations
  • 7:27 – More details about the HTC Vive (here’s Ben’s hands-on write-up about it). It’s better tracking than anything else out there, and using a 15×15 space is something totally new. The Vive is visually very comparable to Crescent Bay.
  • 9:37 – Cultivating a sense of presence being immersed into another place plus being able to act within a VR scene. Vive combines these two components of presence to the next level with their tracking and input solutions. Adding natural movements for what humans do instead of an abstracted button press. Get immersed by not worrying about your surroundings.
  • 12:50 – Three tiers of design: mobile experiences ranging from Cardboard to Gear VR, sit-down experiences like Morpheus and Oculus, and stand-up experiences like Vive and Survios. Devs carry around Gear VR because it’s easier to demo VR experiences, and then when you come home you have positional tracking. Will people start to have a dedicated VR room?
  • 16:53 – Either people will have to move their rooms around, or dedicated VR arcades will have Lighthouse systems set up.
  • 18:03 – Progressive enhancement ideas and how they may apply to VR design: from mobile with no input, to sit-down positional components, and then on up to the walkable VR experience with two-handed interactions. It’s more difficult to do progressive enhancement design in VR. It’ll be easier to design for it later once the solutions become more standardized.
  • 21:05 – Difficult to do two-handed interactions without the controllers.
  • 23:07 – Sony Morpheus and the Sony Move controllers were not as performant as the SteamVR controllers. Sony’s VR narrative storytelling demo had him standing up and crouching, but not walking around. It worked the vast majority of the time and was really fun.
  • 27:53 – A Q2 2016 release date for Morpheus, with the Vive coming out in Fall 2015. Oculus had new Crescent Bay demos, but not a lot of announcements beyond an Audio SDK and the Mobile Game Jam. They had a big physical presence, but they were focusing primarily on developers. They also had some new blog posts about time warp.
  • 30:09 – Developers do need some time integrating input controls, and so the launch looks to not have input. A lot of the demos were very passive. Is Oculus pivoting towards more cinematic VR and passive VR? Ben thinks that they’re still focusing on gaming market. They’re working on input solution, and devs need time to know what they’re working with. Perhaps Lighthouse will become a standard solution.
  • 32:59 – A tour of the history of VR hardware development at Valve. Oculus is looking for people with optical experience. There aren’t a lot of extra wires with the Lighthouse solution. Oculus likely had known about Valve’s laser-scanning solution.
  • 34:59 – Striking to see how much VR was happening at GDC; over 22 different booths had a VR HMD. Haptech is working on electric feedback for guns and using STEM to track the gun.
  • 36:47 – Haptics are a key component to immersion, and abstracting out other haptic devices. Tactical Haptics and their Reactive Grip Controller
  • 39:03 – Low-poly scenes for VR and the uncanny valley problem, and using stylized art to avoid the uncanny valley.
  • 40:57 – Lucky’s Tale and its diorama approach, using a 3rd-person perspective that hits the sweet spot of VR’s stereoscopic effects. Things that work better in VR, like body language for telecommunications. Google Earth data being mapped in the SteamVR demo, which will help visual learners.
  • 43:42 – John Dewar’s educational demo of airplanes, and how the Oculus demos used scale a lot. Other cool indie demos included ConVRge, which broadcast a livestream of the party into the VR space and then broadcast the VR scene onto a screen at the party, as well as the WebVR experiences that Mozilla was displaying.
  • 45:52 – Mozilla’s WebVR experiences are really exciting, and they’re rearchitecting the browser to be more optimized for VR and for ephemeral experiences. The web is great for quickly navigating information without having to download a lot of data.
  • 48:10 – Google Cardboard experiences and ecosystem for ephemeral photos and videos, and using Gear VR to show people 360 videos very quickly
  • 49:46 – Eye tracking from Tobii and FOVE. Eye tracking can add a lot of useful things to VR, like depth of field, knowing where the user is looking for selection, better chromatic aberration correction, and foveated rendering for more optimized rendering.
  • 52:49 – Augmented reality like Project Tango, Qualcomm’s Vuforia, and Meta’s AR hackathon. Microsoft and Magic Leap are the two big AR players at the moment. AR isn’t there yet and needs sub-millimeter tracking.
  • 54:32 – Meta’s AR hackathon: a small field of view, about a 60ms delay, rudimentary demos that weren’t really interacting with the environment, and being more tethered to a computer. They have $23 million in funding and some interesting team members. AR is still in the really early days, and computer vision is not a solved problem. Convincingly placing things in the room is a difficult problem, and VR is a lot further along in terms of the experience.
  • 57:15 – Magic Leap is looking to the VR space for innovation. The OSVR and Razer booth, having a unified SDK, and Unity’s integration of VR input. As long as OSVR’s system just works, then it doesn’t matter as much whether Oculus, Sony, or Valve is involved or not. It’ll allow the third-party manufacturers to collaborate.
  • 1:00:12 – The Khronos Group’s Vulkan announcements and the collaboration that’s happening there. Need to pay attention to performance and not just throw GPUs at the issue; there’s a lot of focus on speed and reducing latency in the future.
  • 1:01:45 – Unity, Unreal Engine, and Source 2 are now free, and that’s huge. VR has been a grassroots movement with a lot of experimentation from a lot of small upstarts. Ben Lang really wants a virtual pinball game.
  • 1:04:25 – AAA shops are being really cautious with VR, and it’s a great time for indies to jump in and experiment. There are lots of open problems in narrative storytelling for VR.
  • 1:05:34 – Other highlights were the Sony Morpheus narrative experiences, and the WETA experience from Epic was pretty memorable.
  • 1:07:03 – Unreal Engine for cinematic VR with Oculus Story Studio and interactive games from Unity
  • 1:07:40 – A lot of important announcements from GDC for the future of VR
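
As a supplement to the Lighthouse discussion above (3:41 & 5:06): the publicly described idea is that a base station emits an omnidirectional sync flash and then sweeps a laser across the room at a known angular rate, so the delay between the flash and a photodiode being hit encodes that sensor’s bearing angle. Here’s a minimal illustrative Python sketch of that timing-to-angle math; the 60Hz rotor rate and all names are my assumptions, not Valve’s spec.

```python
import math

SWEEP_PERIOD_S = 1.0 / 60.0  # assumed time for one full rotor revolution

def hit_time_to_angle(dt_seconds):
    """Convert the sync-flash-to-laser-hit delay into a bearing angle."""
    fraction = dt_seconds / SWEEP_PERIOD_S   # fraction of a revolution swept
    return fraction * 2.0 * math.pi          # radians

# Each sensor yields one horizontal and one vertical angle per base station
# (alternating X and Y sweeps). With many photodiodes at known positions on
# the HMD, a pose solver can recover position and orientation, and the IMU
# fills in motion between sweeps (the "sensor fusion" mentioned above).
azimuth = hit_time_to_angle(2.1e-3)          # e.g. a hit 2.1 ms after sync
print(f"bearing ≈ {math.degrees(azimuth):.1f}°")
```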

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Below is a 16-part visual history of Valve’s SteamVR and what’s since become the HTC Vive:


Denny Unger is the President and Creative Director at Cloudhead Games. He talks about his experience of seeing the HTC Vive prototype for the first time with other developers, as well as what 360-degree, full-room walkable capabilities and two-hand interactions mean for VR game design on The Gallery: Six Elements.

Denny & Joel from Cloudhead Games along with Alex from Owlchemy Labs talk about the following:

  • How it’d change the gameplay for The Gallery. They always wanted to let you use your hands and spin around 360 degrees, but the technology wasn’t there yet. The HTC Vive matched the vision of what they wanted to do, and it gave them the freedom to design the VR experience as they’d always intended. They started with designing for a sit-down experience, then moved to a standing experience, and are now designing for a full-room, 360-degree turning experience. They have to account for each of these scenarios in their designs.
  • VR locomotion is still an open problem because you don’t want to warp around. They’re trying to build systems to also locomote the tracked volume with a joystick.
  • Some of the other experiences chosen by Valve. Cloudhead Games had a vision of where VR could go with interacting with a 3D space in an adventure game context.
  • First time that he’s experienced presence for an extended period of time. Joel from Cloudhead Games talks about his own experience of presence in VR
  • Simulating jobs with Job Simulator by Owlchemy Labs is all about two-handed interactions in VR. They watched people and what they did. People expect natural experiences within VR, and if you account for that, then it’s delightful, like throwing something at a robot and having a reaction that is accounted for.
  • User interactions that are new, and starting to build interactions into the VR world that parallel what people want to do in real life. Gameplay in VR that requires two hands. People tend to just use one hand at first, and it’s hard to understand that you need to use your whole body. Helping people realize that they can use both hands in VR, and that they finally have the freedom to do so. Leaning and crouching are now natural body movements.
  • It’s going to apply to so many experiences. Devs will enable what you’ve always wanted to do in VR.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.


This is the 100th episode of the Voices of VR podcast marking 33 hours of interviews with the leaders of the consumer VR revolution over the past 10 months.

I wanted to celebrate the 100th episode with an interview with Sébastien Kuntz because he helped inspire me to start this podcast. Sébastien has been working in virtual reality for 13 years, doing everything from training simulators to 3D engine development to producing a middleware VR solution with MiddleVR.

I first discovered Sébastien’s work during the IEEE VR conference last year because he was tweeting about different presentations talking about the academic community’s response to the Facebook acquisition. Here’s a couple of examples of his tweets that captivated my attention:

I wanted to hear more from Sébastien and attendees at IEEE VR, but there weren’t any consumer VR publications covering what was happening in academia or with VR researchers. In fact, there was hardly any coverage from any publication of last year’s IEEE VR conference beyond tweets from attendees, with the most prolific being the ones from Sébastien.

Because of this lack of coverage, I decided to start my own podcast. I reached out to interview a couple of other attendees of the IEEE VR conference including Eric Hodgson and Jason Jerald. I also really wanted to hear more from Oliver “Doc_Ok” Kreylos who was a respected commenter on the /r/oculus subreddit, and also happened to be working in VR within an academic context.

I also wanted to hear more from D of eVRydayVR, who is a computer science graduate student and made a number of amazing tutorial videos on barrel distortion, low persistence, and time warp.

The Voices of VR podcast was born and seeded with these more academic insights and perspectives into VR, and so I’m really looking forward to being able to travel to France at the end of March to cover the 2015 IEEE VR conference.

Sébastien has a great blog on VR with summaries of a lot of interesting VR research. He pointed me to Mel Slater’s research into virtual bodies, and this video summary of Slater’s research into the Positive Illusions of Self is one of the most fascinating VR videos that I’ve seen:

I also came across some quotes about presence in VR from Mel Slater on Sébastien’s blog that inspired me to write an extended essay about presence in VR, which I’ve referenced in a few tweets:

A number of people have asked me where that excerpt was from, and it’s from an unpublished essay that I’ll share here so that people can more easily link and reference to it.

THE FUTURE OF IMMERSIVE VIRTUAL REALITY
Just as the Internet and mobile computing have changed every aspect of society over the course of the last 20 years, Virtual Reality is poised to have a similarly pervasive effect on our lives.

The types of visceral experiences that VR can provide are truly unique, and on the whole they constitute a new communications medium that will amount to the Gutenberg Press of the 21st Century. Felt experiences will be able to be captured and shared just as books were able to capture and share information and knowledge. Just as new insights into perspective catalyzed breakthroughs in Renaissance art, adding a new immersive dimension to computing will likely spur a similarly revolutionary change in the types of experiences that can be shared through virtual reality.

In order to investigate what types of new doors and interactions VR will be able to provide to society, it’d be helpful to first dive into the key components of what constitutes “virtual reality.”

WHAT IS IMMERSIVE VIRTUAL REALITY?

There’s a pretty big difference between 3D virtual worlds that are experienced through a 2D screen and a completely immersive virtual reality experience. There have been 3D environments in computer games since the early 90s, and interactive virtual worlds like Second Life since 2003. While immersive virtual reality includes some components of virtual environments, it also provides other elements that are completely new and different, elements that transcend what is possible through a 2D medium.

Sébastien Kuntz defines Immersive Virtual Reality as “the science and technology required for a user to feel present, via perceptive, cognitive and functional immersion and interaction in a computer-generated environment.”

There is a sense that you’re transported into another world where the unconscious parts of yourself are fooled into believing that the computer-generated reality is actually real. Within the VR community, this is widely referred to as the experience of “presence.”

VR researcher Mel Slater defines two key components that are necessary for people to have a realistic response to a virtual reality environment. He says, “The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this Place Illusion (PI). Second, Plausibility Illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring… when both PI and Psi occur, participants will respond realistically to the virtual reality.”

Some of the elements that trick your perception into believing that you’re in another place are real-time interactions like head tracking where your physical movements are mirrored within a virtual 3D environment to the point that they match your expectations. Even though your rational mind may realize that you’re not in another world, your unconscious perceptions and primitive limbic mind will react as if the environment was real.

The second component of plausibility is when your cognition is fooled. As Kuntz says, “Everything that happens is coherent. You actually believe you’re there, your actions have a credible impact on the virtual environment and your sensations are affected by it.”

Attaining a sense of “presence” is the ultimate goal of a VR experience, but there is no fixed set of ingredients that reliably produce it. What is known is that there are a combination of components that together constitute “virtual reality,” but taken individually are merely a part of a virtual world.

If I were to boil down presence to an equation, then I’d say it’d be this:
Presence = Place Illusion + Plausibility Illusion

Sébastien and I talk about how the Crescent Bay demo really only had half of this equation with the Place Illusion. Without input controls, there’s no ability to feel like your actions have an impact on the world, which makes it more difficult to achieve the illusion of plausibility. So for both of us it wasn’t able to achieve that full sense of immersion and presence.

This interview happened at Oculus Connect last September, and so neither one of us had seen the latest Valve Vive demo yet. Based upon the reactions to the Vive, it’s clear that having an accurate tracking and input control system takes VR presence to the next level. We discuss our understanding of presence, insights from his extended experience in VR, and his reaction to the Crescent Bay demos.

  • Middleware abstraction of the input trackers to use a wide range of input controllers. Also be able to determine things about the user
  • Input controllers and input devices, how developers should approach implementing input controls, and natural hand interactions
  • Presence has two levels: being immersed cognitively, and a lower level of immersion at the subconscious perceptual level.
  • Virtual body being transferred to a virtual world for virtual therapy for phobias. You can also have marketing studies within VR because people act naturally. Engineers can test the ergonomics of physical designs & architectural spaces
  • Connecting the space to your interactions. You can simulate products and spaces at scale.
  • Don’t use VR to escape reality, but use it to improve reality. Use VR to test an assembly line with a virtual body. You can have training simulations where you can actually practice the gestures that you need to do. You can have therapy for phobias. You can use it to build empathy for others.
  • Crescent Bay VR HMD was the best that he saw up to the point of the Oculus Connect. The input device was missing from the experience.
  • Lightsaber demo with Sixense built even more presence. VR presence is about both Immersion into the Virtual world as well as Interaction with the virtual world where your actions make a difference.
  • Reflection on the VR community and how much it’s changed over the past couple of years. It’s easier to see your own body and collaborate with other people in a CAVE environment. Having your avatar in VR lets you play around with your identity, which will make it easier to collaborate.
  • Telepresence within VR and social experiences in VR with haptics. Experiment within a virtual bar where two avatars are arguing, and they look at you and ask you to weigh in. You can tune into emotions with their body language in these virtual environments.
  • His most compelling VR experience put you into the body of a wounded soldier where your legs have disappeared and you’re waiting to die. Then an avatar comes to help you, and people smile and are really grateful. Presence wasn’t broken because the interactions were limited. If a virtual hand goes through a wall, then that also breaks cognitive presence, which is harder to maintain because it’s like a house of cards. It takes a long time to rebuild this sense of presence at a subconscious level once it’s broken. You have to make sure that your brain accepts the rules of these virtual worlds, which is more difficult.
  • No bodies were in the Crescent Bay demos, and you were standing. You want to be able to track your entire body. A seated experience is limited by the hardware in what you can do with creating a sense of presence. We’re trying to recreate a whole reality, but we can’t simulate everything yet because we don’t understand everything yet. Realistic rendering makes the mind demand that everything else is realistic, including the sound and haptics, otherwise everything will fall apart.
  • Showdown demo where you’re moving through the space through slow motion. Felt like a ghost, and so anything could happen. That helped him accept it. If he could interact and see interactions within VR, then it’d be even more believable.
  • Really liked interacting with the alien in the Crescent Bay demo. Also really liked the T-Rex coming at him because it was beautiful and scary.
  • New consumer VR vs. old VR communities. A misconception with VR is that everything VR died 20 years ago, but it’s still being used in professional situations, as well as in a lot of VR research. We need to do new research with the new hardware, and it’s great to see Facebook and Oculus investing in new research. It’s about working with the human mind and perception and not just hardware. We don’t understand how we perceive the environment.
  • If controlling perceptions takes us into another world, is that just escapism into fantasy worlds? VR can be used to improve reality. Stanford research shows that seeing your virtual avatar, and having nicer avatars, can improve your sense of self-image and confidence. You can transform your self-image. It can also be used to escape reality as well; it’s up to society to decide. TV is probably worse than VR because it’s more passive.
  • Crescent Bay demos were pretty passive and not very interactive. Adding a tracked joystick is easier compared to achieving a low-level sense of presence. Then we can start a new frontier of how to interact with a 3D interface
  • Mel Slater’s research into the impact of the virtual self, and how your brain accepts a virtual body. Your brain accepts it in seconds when your hands move. Experiments of putting your virtual body into different ages and races reduce racism because they build empathy.
  • The 3DUI book by Doug Bowman. Go to the IEEE VR conference. There are a lot of papers online. They’re friendly and they want to help the VR community.
  • The last IEEE VR conference happened right after the Facebook / Oculus acquisition. How could Palmer create a VR HMD when the academic community couldn’t do it? The academic community can be too reductionistic, and they have to be cautious and make incremental progress.
  • The Metaverse will change the world because it’ll provide a new way to communicate with your friends. People will recreate reality in VR because that’s what happens with a new communications medium. Unlock our brains to embrace the new possibilities, and looking forward to being a part of the VR development community.

I’m looking forward to many more episodes of the Voices of VR, and I hope that you’ve been enjoying them.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Andrew-Eiche

Andrew Eiche was in the Digital Interactive Group leading the Serious Games division at Booz Allen Hamilton at the time of this interview at Oculus Connect. He talks about CelluVR, which is an interactive educational experience through the circulatory system. In this VR experience, you ride through a vein inside of a nanoprobe to the heart, into the lungs, back to the heart, and then back to the body while educational commentary is provided throughout the journey.

Here’s what Andrew discusses:

  • The mandate at Booz Allen Hamilton was to push innovation with cool technology like VR.
  • A key lesson was that the lead-in to the experience was key in order to orient you into a virtual space. Their target audience was students and teachers.
  • Frame rate is king. They also minimized fear by having calming sound effects like a heartbeat with ethereal music.
  • Lessons in VR locomotion & minimizing motion sickness through a lot of testing and by being inside of a nanoprobe ship to minimize vection effects.
  • How being at a microscale enables seeing individual cells, and other surprising insights
  • Strategies for testing and getting feedback to ensure that they’re creating comfortable VR experiences. They can see that it’s successful because their demo booth is always packed; they did 1,800 demos in three days at one conference.
  • They changed the path of the nanobot a lot based upon feedback. They had to ensure that the doors don’t close too soon, so that you don’t go through a wall, which breaks presence.
  • It is a predetermined path, but they’re working on making it more interactive
  • Also investigating how to visualize data within the Oculus in a way that makes it interesting to look at. They’ve been calling it “Standing in the River of Data.” Trying to see if it’s possible to create a sense of presence with the data rather than making it a numbers game.
  • They’re looking at migratory patterns across the US, but trying to create a toolkit to create abstract experiences based upon the input data.
  • VR input controls while working with data. Integrating with Leap Motion to get your hands into VR
  • Inspired by Darknet & Owlchemy Labs, and they use their rules of VR for minimizing motion sickness

Finally, Andrew sees that the ultimate goal will be people interacting together within a virtual space.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Keep Talking and Nobody Explodes is a cooperative party game where one person in VR has to defuse a bomb, but has to be instructed by friends outside of VR who are reading a paper manual based upon the descriptions provided by the player in VR. The two or more players have to communicate with each other to describe the bomb in order to know which section of the instructions to read to defuse it.

I think that Keep Talking has the potential to be a breakout VR party game experience that helps introduce VR to the mainstream. After hearing developer Ben Kane talk about how people have been reacting to it, I think it has the potential to really take off. It’s certainly built up a lot of buzz in the VR community.

It was developed as a part of the 2014 Global Game Jam, and the developers noticed that there would be a crowd of people that formed around someone having a VR experience. This inspired the developers to create a game that would allow the crowd to play a game with whomever was in VR.

Ben talks about the gameplay mechanics of both the VR bomb-defusing experience as well as reading the instructions. The goal was to have interaction be the key component in this game design. They focused on fostering interesting and silly communication because they needed to have a reason for the players to talk to each other. If either side goes silent, then something has gone seriously wrong.

Part of the gameplay is being able to isolate which portion of the puzzle they’re working on. The best strategy is to identify the puzzles, and then work through them sequentially. It is possible to do them in parallel with a large enough team reading through separate sections of the manual, but they found that the most efficient and effective approach is to maintain consistent and granular communication with each other.

Space Team is a collaborative game with synchronous real-time communication, and sometimes the first rounds of Keep Talking gameplay for new teams resemble Space Team. As people get more experienced, they start to develop more sophisticated strategies and become efficient at communicating.

They found that the difficulty in communicating the visual concepts was difficult enough to avoid having to resort to misdirection or tricks within the instructions.

He talks about all of the new puzzles, pacing events and other gameplay elements that they’ve been adding since the game jam back in January 2014.

They’ve also designed the levels so that it’s virtually impossible for someone to defuse the bomb by themselves because of the procedurally-generated and random nature of each bomb as well as each set of rules (a toy sketch of this idea follows below). It’s hard even for the developers to defuse the bomb at the hardest settings.
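
To illustrate why memorizing a solution doesn’t help, here’s a toy Python sketch of seeded procedural generation in this spirit. The module names and parameters are invented for illustration; this is not the game’s actual module set or code.

```python
import random

MODULE_TYPES = ["wires", "keypad", "simon", "memory"]

def generate_bomb(seed, num_modules=4):
    """Build a bomb layout deterministically from a seed (toy example)."""
    rng = random.Random(seed)
    modules = []
    for _ in range(num_modules):
        kind = rng.choice(MODULE_TYPES)
        if kind == "wires":
            # 3-6 colored wires; the manual's rules depend on the colors.
            config = {"colors": [rng.choice("RGBWK")
                                 for _ in range(rng.randint(3, 6))]}
        elif kind == "keypad":
            # Four symbols drawn from a larger pool.
            config = {"symbols": rng.sample(range(30), 4)}
        else:
            # Simon/memory style: a short sequence to echo back.
            config = {"sequence": [rng.randrange(4)
                                   for _ in range(rng.randint(3, 5))]}
        modules.append((kind, config))
    return modules

# Every seed yields a different bomb, so the defuser must describe what
# they actually see, and the manual readers must look up the matching rule.
print(generate_bomb(seed=42))
```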

Ben shares some insights from watching people play it, as well as the emotions that people get from it. It was also surprising to see the parent and child relationships, and it’s interesting to hear about strangers defusing a bomb together.

People laugh a lot to cope with the stress. Even if they fail miserably at defusing the bomb, people still have so much fun that they’ll often line up to play it again.

They originally used the Razer Hydras to defuse the bombs in the game jam, but they’ve simplified the game controls so that they don’t need to use motion controllers.

Finally, Ben talks about how they’ve been sustaining themselves while working on the game for more than six months, and that in the end they’re happy to break even and gain a lot of experience in creating a VR game.

It was also recently announced that Keep Talking will be coming out for Gear VR.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.


eleVR is a three-person research and design group doing a number of different spherical video experiments as well as developing their own WebVR-compatible web video player. eleVR includes Andrea Hawksley as the developer, Vi Hart as the director, & Emily Eifler as the producer.

eleVR is doing some of the most cutting edge and innovative content experiments that I’ve seen, especially the WebVR integrations that I think are going to be huge.

They also discuss some of their concerns about diversity within Virtual Reality with what they see as a gross underrepresentation of women at the invite-only Oculus Connect developer conference. Emily also talks more about the sexual assault that she experienced at Oculus Connect.

Some of the measures that the larger tech industry has been implementing to help prevent this are things like codes of conduct, as well as diversity statements like the one implemented by O’Reilly Media.

I also mentioned the work of Ashe Dryden who has written a couple of really great blog posts about fostering diversity at tech conferences. In this post on “Increasing Diversity at Your Conference” she says:

The easiest way to get feedback on your efforts is to publicly state what you’ve tried and ask for constructive criticism. Be transparent and truthful. I’ve seen many conferences write blog posts about what they’ve done to address the issue of the lack of diversity and the positive or negative results that they ended up with. This is important for a few reasons: it signals that this is important to you and that you are open to more ideas as well as letting people within marginalized groups know that you are considering their needs and the reality of their situations.

Here’s another great excerpt from a post titled “So you want to put on a diverse, inclusive conference”:

How do you advertise that you want to see a diverse community at your conference when you don’t already have one?

  • Admit you have a problem. There is nothing wrong with going to colleagues or to twitter and saying “We want to provide an inclusive, diverse conference experience, but we need help. Can you help us?”
  • Explicitly ask for constructive criticism. Write a blog post on your conference’s site explaining what you have done and ask where you are going wrong or what you might have forgotten. Maybe you didn’t notice that all of the pictures on your conference site are of white people or that the language you use in your CFP is gendered.
  • Be gracious, humble, and kind. It’s hard to hear that you may have misstepped or made a mistake, but it happens to everyone. Before responding to criticism (constructive or not), take some time to examine the truth in it. For best results, ask an unbiased third party to examine the evidence and the criticism and help you understand the problem. Then, humbly apologize and make known the steps you’re taking to correct the situation.

I’d agree that there’s a lot that the VR community can collectively do to help foster more diversity, and I’m really glad to see that Oculus is starting to take steps towards being more deliberate about diversity concerns. They now have a Diversity Lead with Brandi House, and tonight at GDC, Oculus is co-sponsoring a party with Women in Games International. The Eventbrite page says,

VR is still very young, and now is the best time to define what’s possible. To help VR reach its enormous potential, we need a diverse and talented community of developers to make it a fun and engaging experience for everyone. As part of that effort, we want to welcome and support women developers both to the VR community and to the Oculus team.

I’ll be there tonight and look forward to meeting and featuring more women within the VR community.

Theme music: “Fatality” by Tigoolio

Subscribe to the Voices of VR podcast.

Magic Leap has been relatively quiet about the details of their digital light field technology after raising $542 million from Google, but they’re starting to be more public about what they’re doing. The most detailed information comes from a hands-on from the MIT Technology Review, and there was also a recent AMA with CEO Rony Abovitz.

I noticed that Paul Reynolds, a Senior Technical Director at Magic Leap, was following me on Twitter. I reached out to see if he’d be available for an interview for the Voices of VR podcast, and I was surprised to hear that he was interested.

Magic Leap is very supportive of what’s happening in the VR space because there’s a lot of innovation that’s good for the entire mixed reality ecosystem. They’re also very much interested in recruiting developers to create content for Magic Leap, and so I suspect that they see VR developers as being on the forefront of helping to create this new immersive world. Paul told me that they’re definitely interested in creating a healthy developer ecosystem where it’ll be possible to create Magic Leap content using the most popular 3rd-party developer tools, ranging from Unity to Unreal Engine.

Magic Leap is not quite ready to talk about specific dates or product plans right now, but they’re starting to release more information as they grow their company. Their presence at GDC will be more focused on recruitment because they see that people with a gaming background know how to create highly interactive, real-time 3D experiences.


The press refers to Magic Leap as an augmented reality start-up, but they’re cautious about labeling themselves as doing AR. They prefer calling it “cinematic reality” because they feel that digital light field is something quite different. Magic Leap’s technology actually has the capability to do fully immersive virtual reality, but the uncanny valley effects are leading them to focus on creating magical moments by embedding virtual objects into the real world.

There’s a lot of new information about Magic Leap in my interview with Paul Reynolds, and I’ve provided a full transcript below with some cleanup.

Here’s the topics that we discussed:

  • 0:00 – Introduction to Magic Leap & how they create a digital light field to untap the model-making function of the brain
  • 1:48 – Range of hardware and software positions that they’re hiring
  • 3:31 – Support for 3rd party game development platforms like Unity & Unreal Engine
  • 4:27 – How Magic Leap sees the differences between virtual reality & augmented reality & how VR is benefiting the mixed-reality ecosystem
  • 6:27 – Whether or not Magic Leap considers itself to be creating augmented reality or whether their “cinematic reality” should be considered something different
  • 7:52 – How Magic Leap has the capability to do fully immersive virtual reality. How they define presence, & how their virtual content created from the digital light field feels more natural the closer you get to it
  • 9:39 – Magic Leap has the capability to completely overtake your perceptual system by putting you into a virtual world, but they hit an uncanny valley effect where it’s more magical & believable for their technology to put virtual content into the real world
  • 10:52 – Rather than creating a sense of presence into another world, how Magic Leap is more concerned about creating a magic moment where people casually accept virtual content without thinking about the technology
  • 12:47 – How Magic Leap is addressing the privacy implications of augmented reality to avoid some of the backlash that Google Glass faced
  • 15:33 – How the future of 3D user interfaces has the potential to make technology more accessible to humans. Magic Leap sees that it will make power users more powerful and make it easier for others to access the power of technology. Paul mentions the specific example of search
  • 17:55 – The hands-on and practical approach of how Snow Crash author Neal Stephenson is contributing as Magic Leap’s Chief Futurist. He’s not just a “philosophical spirit guide” or a “head on a stick”; rather, he’s actually more interested in making things.
  • 19:58 – That it’s up for debate as to whether the metaverse will be a sole artifact of virtual reality, or whether Magic Leap would be able to provide a window into what’s considered the metaverse. Paul talks about some of the social & multiplayer use cases for Magic Leap with a virtual tabletop game example.
  • 22:13 – What Paul sees as the ultimate potential of cinematic reality, and its capability to change the world by making technology more accessible to everyone.
  • FULL TRANSCRIPT

    PAUL REYNOLDS: I’m Paul Reynolds, senior technical director at Magic Leap. And at Magic Leap, we’re making a new way to interact with technology — a human-friendly way to interact with technology.

    We’ve been in the news a lot lately talking about our ability to put virtual content into the real world. What we firmly believe is that the reality we see is a function of the model-making function in our brain. So basically, our brain is the highest-resolution renderer you’ll ever have. That renderer takes a lot of inputs, including a light field. Everything our eyes do is basically scan the light field to create signals for our brain, to help it render what we see for us. We feel that if we can create a digital light field that is similar to how we process the real world, we actually untap the power of the model-making function of the brain, basically tapping into perception.

    So the interesting thing about being so natural in our output is that it’s an incredible engineering challenge. So we’re doing that as well as commercializing this, making a device, and developing an ecosystem where we can create content for this light field technology.

    My title is senior technical director. I’m in the content group, and at the intersection of all of these things between the engineering work we’re doing across many disciplines of engineering, the developer ecosystem, and the content development that we actually make the most of this technology with.

    VOICES OF VR: So you’ve recently posted a slew of different job postings looking for “wizards” to come work there at Magic Leap, with a wide range of hardware and software positions. So maybe you could go over some of the highlights of what types of positions you’re looking for.

    PAUL: Yeah. Magic Leap is a really interesting place to work in, especially from an engineering perspective, because we have an incredibly complex and diverse engineering effort. So you’ll see job postings open for optics, opto-mechanical, embedded software, which is like firmware and drivers, mechanical engineering, perception and machine learning, all across the board. So much so that we actually have literal rocket scientists that work in our group, and they’ve even said that the stuff we’re doing is more complicated than launching a rocket, which is kind of funny.

    But that means that everything we do — because we’re putting virtual content in the real world, everything we do is real-time 3D. The philosophy behind the company with regards to that is that everything must be highly interactive and real-time 3D, and who’s the most experienced group of people? It’s video game developers.

    My background is in video games. I’ve been in the video game industry for over 15 years now. We also believe that gaming helps push all of the APIs and technology that we have across the board because it’s got to be high-performance, interactive, and beautiful. And that’s a really hard thing. So you’ll see crazy engineering jobs across the board. And we’re just a scaling company in general. So we’re scaling quite a lot across the board this year. So it’s very diverse.

    VOICES OF VR: Yeah, I even saw some like cloud engineers and also Unity developers. So would you say that Unity is your primary platform? Or are you also trying to integrate with Unreal Engine? Since it is speaking to a gaming audience, you’re explicitly calling out Unity.

    PAUL: Yeah, I wouldn’t say that we have any particular primary platform. We basically want to support third-party development as much as possible. And as someone that comes from that world, I want to make sure that we put out a developer ecosystem that’s as friendly as possible to existing developers. So we’re looking to support any gaming engine across the board. Some engines may have nicer integrations than others just because they’re more popular. But our current architecture, we want to be accessible to anybody that has an existing game engine or investment in any kind of technology. We don’t want to be closed off to anything.

    VOICES OF VR: There’s a mixed reality spectrum that ranges from fully-immersive virtual reality to a mix between augmented and virtual reality, and then to augmented reality, and then on down to sort of just normal, mundane reality with no augmentation at all. And so how do you see and understand this mixed reality spectrum ranging from virtual reality to augmented reality?

    PAUL: Yeah so, we actually are really excited about all of the progression that’s happening in VR because that actually benefits us as well. I think anything AR gets unfairly judged as non-immersive, which I can firsthand testify that there can be extremely immersive experiences in an optical view-through experience.

    I think that there will always be things that clearly better happen in a VR environment where you want to basically to take someone away to a completely different world. And just vice versa, they both have their strengths and tradeoffs. There’s going to be things that are much cooler that are integrated within the real world.

    Some of the interesting things are, especially if you want to think in terms of game development, there’s some tried-and-true game mechanics that have served us well over many years through the frame of a monitor that we’re figuring out even in VR versus an integrated context. Do those just port over? Or do we have to actually distill down what makes those things fun? And then how do we innovate those things in this new platform? So there are challenges.

    And then there are some things that they just weren’t being done any justice on a 2D screen. Some of these games and game mechanics should exist in this new kind of platform.

    So I think that there’s tradeoffs in both. And I think there’s strengths in both. And like I said, all of the technology innovation and all of the user interaction exploration that’s happening right now is beneficial to the entire market, as far as I’m concerned. So I’m very excited about all of these new things that are happening.

    VOICES OF VR: Where would you classify Magic Leap on that spectrum? I mean, I know that it’s been labeled as augmented reality, but then you’ve also said that it’s something different like “Cinematic Reality.” Would you classify Magic Leap as augmented reality?

    PAUL: Well so, from a purist’s standpoint our light field is being integrated into the real-world light field. So we feel like we’re actually fulfilling the dream of what AR has been these past years. We’re very excited about that. I think that we’re actually implementing it. So yeah, I mean, this is virtual content in your real world. If I had to pick one line of that, on the overall view of integrating into the real world, we’re definitely on that side of things.

    But we also feel like that’s the harder problem to solve. It’s really hard from a performance, and fidelity, and technology standpoint. It’s a harder problem to have content solidly located in the real-world light field.

    What we feel like is that that could make for a really awesome VR experience too. You know, that same technology.

    That’s why we’ve been trying to define this whole new thing, because we think it’s a whole new thing. I think it’s inspired by the visions and the dreams of what has happened in the past.

    VOICES OF VR: At the IEEE VR conference last year, they had a slide of the spectrum between Augmented Reality to Virtual Reality. And what they said is that as you get closer to the end of Virtual Reality it’s more about cultivating the sense of presence into another world. Whereas when you’re in augmented reality, you have a presence of being in the [real] world, and your body is there and so you don’t have to recreate your virtual body. And so, would you say that Magic Leap has the capability of doing fully immersive, virtual reality then?

    PAUL: Yeah. I mean, I think we definitely — By understanding more about the perception system and by presenting a natural light field, we certainly have that ability.

    Our focus is putting virtual content in the real world.

    We do talk about presence though. So the interesting thing is that’s kind of an overloaded term for us. When we talk about presence, one of our huge strengths is that the virtual content feels like it’s there, as opposed to feeling like you’re somewhere else.

    Our presence is more like: this thing, you have a sense of its mass, its scale, its location relative to you. That’s presence in our sense, and it’s really strong, especially in the near range, the closer it is to you. Typically, especially in video game type of stuff, the closer things get to you, the more they start to break down.

    One of our strengths is that it’s a really powerful moment when you see virtual content really close, and it feels even more that it’s there instead of getting more blurry or lower resolution.

    So yeah, we do talk about presence, but in a totally different way. So we have that ability, you know; immersing even further is something we could do.

    VOICES OF VR: Right, so would it be able to, like, overtake your entire visual field? I guess that’s the difference between augmented reality, where you still see reality, and virtual reality, which takes you into a completely different world. So would it be able to just completely overtake your visual system and have you believe that you’re in another world?

    PAUL: I mean from a technical perspective, it certainly has the ability to. From a what-we’re-trying-to-accomplish perspective, I don’t think that’s what we want to do.

    Because what we found is you get a much more magical experience when this thing appears in your real world.

    Basically, our perception system is so tuned for the real world. You kind of hit an uncanny valley the more you try to overtake that. We’ve happened upon being able to generate a light field that feels very natural and is perceived as natural, and we get a lot of benefits, like our brain being able to fill out a ton of detail when you give it the right signals.

    So our focus is definitely on bringing magic into your real world. It’s just where our strength is. But as for our technology capability, we could certainly do that. I just don’t think it’s something we really even consider right now.

    VOICES OF VR: Yeah, I see. And that makes sense. I’ve found that the most immersive virtual reality experiences are actually the ones that are low-poly, because there is a lot of ability to project, and your mind accepts it more.

    When it’s closer to reality, it starts to reject those little differences of not being quite there. So it sounds like you’re able to do that by taking that object and putting it within our reality.

    And so with this cinematic experience, I’m wondering what you guys use internally, if it’s not presence — like within VR, it’s like putting you into another world and being “present there.” What is for you the ultimate yardstick for creating a successful cinematic reality experience?

    PAUL: So for us, it really is about creating a magic moment. A successful moment is when someone casually accepts that the virtual object is really there in the room with them or in their space.

    So for example, if we were to put a virtual dragon into the room: the success is the person talking about the dragon as if it’s there, instead of about how it’s rendered or the technical details of it. The casual observer is just talking about, “I want the dragon to do this,” and all the things they want to be able to do.

    To us, that’s when you’re no longer fighting disbelief. People are just delighted. It’s no longer you trying to explain to them what this could be. They already get it, and they just accept it. And then it’s more talking about all of the other things they want to happen.

    So the big bar is when people no longer consider the technology. It’s just this magic that’s in their world. We don’t talk about photons and pixels. We talk about this object, the things I can do with it now that I have this magical ability to put whatever I want in the room.

    So for us, that is the crossover moment, which kind of ties into the presence thing as well, like when you feel this thing is in there with you.

    VOICES OF VR: And with Google Glass, that was something where people were wearing it out in public, it had a camera, and it had the capability to broadcast the lives of other people onto the web without their full consent. And I think that we saw that there was a lot of backlash to that because it was something that kind of violated the privacy of other people.

    Virtual reality seems to avoid that by being more of a private experience, but with AR it’s something that you’re wearing out in the world. So I’m wondering if you see AR needing to be more of a private experience at first before you take it out into the wider world? Or do you see that it needs to demonstrate more value before it’s released onto the streets of mainstream society?

    PAUL: Right. So I’m actually really proud that internally we’re extremely sensitive to these privacy concerns. It’s part of why we’re being so careful in a lot of ways.

    So from my perspective, there’s two major privacy concerns. There’s the user’s first-person data, and then there’s the data of the people around them.

    I don’t think we can control where our users go. So I think that we just have to assume they’ll be in public places. So given that, I think what we have to be sure to do is that any sensing of the world that we do that we do it in an ethical way. That we don’t take more information than we need. And that any data that we do have for that user is transparent and accessible to that user, and we give them some control of it. It’s basically their data. That’s how we feel about it.

    So to make virtual content even more integrated into the real world, you have to be able to know more about the real world in terms of sensing and context.

    So there’s the line of, “Is it worth it to take this much more potentially private data to make the experience more magical?” Toeing that line, and giving the user access and control to it, I think is a better approach as opposed to trying to restrict usage.

    We do feel like we’re basically reinventing the way humans and technology interact, and we want that to be an all-day, wherever-they-are basis. I think it’s better now to be respectful, and aware, and ethical with data than to ignore it. And just try to make the coolest possible stuff, and then have to retrofit privacy and things like that. I’m really happy that we’re conscientious of that.

    In addition to the fashion issue of people wearing these things in public, we want to make something people aren’t afraid of but are excited about, something they’ll want to wear proudly, and give them a reason, content- and experience-wise, to do that.

    VOICES OF VR: What gets you the most excited about the future of 3D user interfaces?

    PAUL: I think, honestly, the most exciting thing is that UI as we know it will probably go away. I think the graphical user interface and the mouse and the keyboard — it was a huge step towards making technology accessible to humans. And the same with touch screens. I think mobile touch screens made these things even more accessible.

    But there is a limit to a 2D, flat plane interaction. Touch screens gave this a second wind with multi-touch gestures and such. But I feel like we’ve kind of stretched the limits of — we’re basically starting to limit how much technology we can utilize by how we interact with it.

    So I really like the idea that the best UI is not the one that’s in your face, and that the user interface only interacts when you’re ready for it. And so one thing in our design thought is that we don’t want to be this Times Square, head-locked, heads-up display in your face all the time. Really, if you’re not using it, it’s not in your face.

    That’s where the magic part comes in and permeates beyond just the experiences we create, into the interaction with the technology: when you call upon it and you need it, it’s there. And when you don’t need it, it goes away. It’s a natural interaction.

    So to me I think that is going to be the most exciting thing. Because as someone who has grown up through the boon of the modern web and search, I have pretty good Google Fu. If I need to research or learn something, I can drill down pretty quickly.

    But it kind of goes back to the mom test we talked about this week. My mom, if she had a more intuitive way of accessing this power of search in a more natural way, I think it’s going to accelerate everything. It’s literally world changing the way that we can access technology in a totally different way.

    That’s the most exciting to me, which is probably more at a higher level from a user interaction standpoint than actual design.

    So we’re doing tons of exploration work with that, because it does change everything. When you can arbitrarily stick a virtual object in the real world, it opens up so many possibilities.

    VOICES OF VR: VR has had a long history of being inspired by science fiction, and one of the things that I find really interesting is that Magic Leap has actually hired the author of Snow Crash, Neal Stephenson, as your chief futurist. Maybe you could comment on how Magic Leap is using the visionary skills of a sci-fi writer moving into non-fiction, and how he’s been helping to shape these technologies that he’s been dreaming about for over two decades.

    PAUL: Yeah. So I was fortunate enough to be a part of Neal coming into the company. He’s very practical and very involved.

    So one of the first things that you’ll notice about Neal when speaking with him (which is very obvious when I say it out loud) is that he’s very thoughtful with his words. So when you have a conversation with him, he will pause and think, and then he will speak.

    One of the first dinners we had was Rony, our CEO, basically saying, “You are the metaverse guy. So there are all types of ways we could work together. The first of which is you’ve already envisioned this future. You could be our,” — I think Rony used the term — “philosophical spirit guide on a lot of things.”

    And so Neal paused as he does. And he was like, “I think I’m more interested in making things.” I think the term he used was that “I don’t want to be a head on a stick. I actually want to create things in this.”

    He’s definitely the chief futurist, but he’s extremely hands-on. He has a whole list of things he wants to create with this technology. And I think the next day we were showing him stuff, and he actually had one of our engineers take apart a device so he could look at the electronics inside, because he was very interested in how it worked.

    So I think Neal is guiding us by actually making things, as opposed to us just running things by him.

    But without a doubt, his writing has influenced us heavily. And it is pretty surreal at times to be talking to him about these things. But he’s very practical.

    VOICES OF VR: You mentioned the metaverse, and he’s famous for coining that term. But that sort of implies an interconnection of completely, fully-immersive virtual spaces, kind of like the Internet. It seems like in some ways augmented reality would be bound more to physical geography. Do you feel like AR would be able to provide a small window into the metaverse? Maybe you could comment a little on how you see augmented reality fitting into the metaverse? Or do you feel like it’s going to be solely an artifact of virtual reality?

    PAUL: I think it’s up for debate. One thing we concentrate on a lot: to really put virtual content in the real world, the most magic happens when that virtual content respects the constraints of the real world. For example, if I had that dragon flying around the room, the magic comes from the dragon not interpenetrating or flying through anything that’s real.
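
    To make that constraint concrete, here is a minimal sketch; it is not Magic Leap’s actual system. It assumes the headset’s world scan has been reduced to a hypothetical grid of occupied voxels, and it stops a virtual object’s movement before it enters real geometry:

    ```python
    # Minimal sketch, not Magic Leap's actual system: the scanned room is
    # reduced to a set of occupied voxel cells, and a virtual object's
    # motion is stopped before it would enter real-world geometry.

    VOXEL_SIZE = 0.1  # meters per grid cell (illustrative resolution)

    def to_voxel(position):
        # Map a world-space (x, y, z) point to its integer grid cell.
        # (int() truncates; a real system would floor for negative coords.)
        return tuple(int(c / VOXEL_SIZE) for c in position)

    def try_move(start, target, occupied, steps=20):
        # March from start toward target, returning the last sample point
        # that does not fall inside scanned (occupied) geometry.
        last_free = start
        for i in range(1, steps + 1):
            t = i / steps
            point = tuple(a + (b - a) * t for a, b in zip(start, target))
            if to_voxel(point) in occupied:
                return last_free  # would interpenetrate a real surface
            last_free = point
        return target

    # A real wall at x = 1.0, sampled into the occupancy grid.
    wall = {to_voxel((1.0, y * 0.1, z * 0.1))
            for y in range(30) for z in range(30)}

    # A "dragon" flying from x = 0 toward x = 2 stops short of the wall.
    print(try_move((0.0, 1.0, 1.0), (2.0, 1.0, 1.0), wall))
    ```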

    But it’s also not a hard constraint, because we can take advantage of the fact that we’re expanding beyond the physical world. An example I would give: if you and I are in the same room together sharing a cinematic reality experience, like playing a virtual game together on a tabletop, we’re sharing that same room together physically. But then imagine that it’s a four-player game, and there are two other people playing with us, one in Seattle and one in New York.

    So we’re sharing the same physical space, you and I, but the four of us are sharing this kind of virtual thing. And they’re in their respective physical places. So who’s to say who’s in the actual real place or not? Which starts to get pretty meta.

    And to me, that’s how I see it. With the metaverse idea, I think a lot of people think about it like how you articulated it: this alternate dimension that you have to jump into.

    But I think you could just as easily consider it a layer on top of our real world, where whether you’re in the “real” spot or not is all relative to your personal perspective.

    So no, I don’t think it’s limited to a VR experience at all. If anything, I think we could actually bring this kind of metaverse to the physical world.
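
    That four-player scenario can be sketched in code. Assuming each device has already established its own anchor placing the shared game board into its local room (the Anchor class and every number below are illustrative assumptions, not a real Magic Leap API), every client renders the same board-space state in its own physical coordinates:

    ```python
    import math

    # Minimal sketch of the shared-tabletop idea: one game state in shared
    # "board" coordinates, anchored separately into each player's room.

    class Anchor:
        # Places board coordinates into one player's local room
        # (2D position plus a rotation, for brevity).
        def __init__(self, room_x, room_y, rotation_rad):
            self.room_x, self.room_y, self.rot = room_x, room_y, rotation_rad

        def board_to_room(self, bx, by):
            c, s = math.cos(self.rot), math.sin(self.rot)
            return (self.room_x + c * bx - s * by,
                    self.room_y + s * bx + c * by)

    # One shared game-piece position, in board coordinates, seen by everyone.
    piece = (0.2, 0.1)

    # Two co-located players share (roughly) the same anchor; the remote
    # players each anchored the board to a table in their own rooms.
    anchors = {
        "player_a_here": Anchor(1.0, 2.0, 0.0),
        "player_b_here": Anchor(1.0, 2.0, 0.0),
        "player_seattle": Anchor(4.5, 0.5, math.pi / 2),
        "player_new_york": Anchor(0.0, 3.0, math.pi),
    }

    for name, anchor in anchors.items():
        print(name, anchor.board_to_room(*piece))
    ```

    The design point is that the shared game state never has to decide which room is the “real” one; each anchor simply maps shared coordinates into one player’s physical space.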

    VOICES OF VR: The last question I wanted to ask is — What do you see as the ultimate potential for cinematic reality?

    PAUL: It’s like what I mentioned in the UI thought: access to the power of technology has been limited to a select few who have the ability to interact with it in the forms we currently have to interact with it.

    So I think the ultimate potential is taking the power of information and search, and technology in general, which has benefited so many people in my generation’s lives, and making it a much more natural and instinctual interaction that we all have.

    So people who are power users now actually become even more powerful. And people who are really out of touch will actually have access, and it will make sense to them without requiring a big explanation and ramp-up to get them access to it.

    And related to that, we’ve been working through screens for the past hundred years. I think that’s going to weaken greatly as we put virtual content into our world instead of diving into screens.

    And that will continue to evolve. I don’t know about you, but I have these different interaction modes: I might be on my phone for one thing, but if I’m typing a longer e-mail I’ll go to my computer, or if I want to watch a movie I’ll go to the couch in the living room. We all have these modal interactions with technology. I think that’s going to blur much, much more.

    I think you’ll no longer feel burdened by having to use one particular technology for one particular thing you want to do. For me, the world-changing technology has been my access to the information on the Internet and my ability to drill down through it.

    So if we can give me even more access, and give more people access in general, that’s pretty huge. I think that’s the ultimate potential of this.

    VOICES OF VR: Awesome. Well, thank you so much for joining me today.

    PAUL: Yep. Thank you.

    Reddit discussion is here.

    Theme music: “Fatality” by Tigoolio

    Subscribe to the Voices of VR podcast.

    Lesley Klassen is the Chief Innovation Officer for The Campfire Union, an education-focused startup based in Winnipeg.

    The Campfire Union is taking one of the most innovative and sustainable approaches that I’ve seen to developing educational virtual reality experiences. They discovered that VR works really well within the context of career exploration, and they received funding to develop a number of persuasive VR experiences that let students try out a career.

    One experience is called Tower Crane, where kids can explore what it’s like to operate a tower crane 125 feet above a virtual construction site. The other experience they developed is called Tiny Plant, a tour of the engineering aspects of high-tech manufacturing jobs.

    They also implemented a formal survey process at every public showing of their VR demos in order to gather feedback and evidence of the efficacy of using VR for career exploration. Out of the 220 kids who went through the tower crane recruitment experience, the average player-experience rating was 90%, 76% wanted to learn more using virtual reality, and 60% wanted to learn more about being a tower crane operator.

    The other innovative approach that The Campfire Union is taking to VR is that they’re creating an assessment engine to evaluate the demonstration of job skills. The best way to demonstrate competency is to demonstrate your skills, and VR can provide a simulated measure of how well someone knows something by capturing the order and speed in which they do certain tasks. They’re collaborating with some local academics to use a 3D spatial database to derive meaning through VR analytics.

    They’re capturing 3D data from the head-tracking sensors as well as from motion tracking. They want to be able to track whether someone is looking at something, and to isolate the moments when someone is making a judgment or decision. Sometimes the easiest way to determine whether someone is competent is to watch them do the tasks, so they’re also providing a visual recording for reference as they decide which analytic data is or is not useful.
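
    As a rough illustration of that kind of VR analytics, here is a minimal sketch under assumed inputs; it is not The Campfire Union’s actual engine. It takes a log of timestamped head-forward vectors and finds sustained dwells on a target direction, the sort of moments that might be flagged as judgment or decision points. The log format, the angle threshold, and the dwell threshold are all illustrative assumptions:

    ```python
    import math

    def angle_between(v1, v2):
        # Angle in radians between two 3D direction vectors.
        dot = sum(a * b for a, b in zip(v1, v2))
        norm = (math.sqrt(sum(a * a for a in v1)) *
                math.sqrt(sum(b * b for b in v2)))
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    def dwell_intervals(samples, target_dir, max_angle_deg=10.0, min_dwell_s=0.5):
        # Return (start, end) times where the head-forward vector stayed
        # within max_angle_deg of target_dir for at least min_dwell_s.
        # Long dwells are candidate judgment/decision moments.
        intervals, start = [], None
        for t, forward in samples:
            on_target = math.degrees(angle_between(forward, target_dir)) <= max_angle_deg
            if on_target and start is None:
                start = t
            elif not on_target and start is not None:
                if t - start >= min_dwell_s:
                    intervals.append((start, t))
                start = None
        if start is not None and samples[-1][0] - start >= min_dwell_s:
            intervals.append((start, samples[-1][0]))
        return intervals

    # Fake 20 Hz head-tracking log: the trainee looks away for one second,
    # then fixates on the crane's load hook for a second and a half.
    hook_dir = (0.0, -0.5, 1.0)
    log = ([(i / 20.0, (1.0, 0.0, 0.2)) for i in range(20)] +
           [(1.0 + i / 20.0, (0.0, -0.5, 1.0)) for i in range(30)])
    print(dwell_intervals(log, hook_dir))  # -> [(1.0, 2.45)]
    ```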

    They’re also starting to develop multi-player educational experiences for team scenarios as well as having two people learning from each other. They’re also considering using virtual reality spaces for pre-briefs and de-briefs in order to process and integrate the social learning experiences and targeted training experiences.

    Lesley emphasizes that we should not recreate a classroom in VR, because in VR every day could be a field trip. You should adapt the VR medium to what it’s best at.

    Finally, he talks about some of his fears about VR and where it’s going, ranging from the ability to live within a fantasy to the dangers of screen addiction. But at the same time, there are amazing learning and educational experiences that can be created in VR, and he’s decided to change his entire career direction to be a part of it.

    Theme music: “Fatality” by Tigoolio

    Subscribe to the Voices of VR podcast.

    Nick Whiting is a lead engineer at Epic Games and has been a VR evangelist there. Nick had worked with Brendan Iribe and Nate Mitchell at Scaleform, so they sent him an Oculus development kit to integrate into Unreal Engine 4. He worked on it in his spare time, and eventually got it working.

    Then Oculus brought them the HD prototype and they collaborated on creating a UE4 demo for E3 in July 2013, which helped Oculus beat out the Xbox One and PlayStation 4 in the Game Critics E3 awards. This was a turning point that helped to legitimize VR at Epic Games.

    Then founder Tim Sweeney and the director of engineering Dan Vogel saw the Valve VR demo room, and this helped to seal the deal for VR at Epic. Some people could see the potential beyond DK1, but others needed to see something closer to a consumer-ready product. The Valve Room VR demo was a clear turning point for the leadership at Epic.

    Over the past couple of years, Nick has started to get more resources to make VR demos, including the Showdown demo that was the final demo scene in the series of Crescent Bay demos.

    VR started as a side project at Epic, and now Nick says that it’s pretty huge there. Most recently Tim Sweeney said that he believes that virtual reality “is going to change the world.”

    In this interview Nick talks about:

    • How opening up UE4 to a subscription model at $19/month brought it to a wider variety of developers and VR experience creators.
    • The process of integrating open source contributions from the community back into UE4
    • The Public Trello board for the UE4 Roadmap, and how that plays into their release cadence
    • Help from Oculus in integrating UE4 originally came from Andrew Reisse, and now Artem Bolgar has been the dedicated resource doing a lot of engineering work to get the Oculus SDK integrations working
    • Epic’s approach to superior visual fidelity
    • The possibility of SLI GPUs and need for more GPU power for VR
    • How the Showdown demo was being run on the NVIDIA GeForce GTX 980 at 90Hz at the Crescent Bay demo resolution (see the frame-budget note after this list)
    • Experimenting with integrating with different hardware for VR input controls. The more integrations, the better
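
    For context on the 90Hz figure above: at 90 frames per second the renderer has at most 1000 ms / 90 ≈ 11.1 ms to produce each frame, and it has to draw the scene once per eye within that budget, versus roughly 16.7 ms at a conventional 60Hz. That arithmetic is a big part of why the conversation keeps coming back to the need for more GPU power for VR.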

    Nick also talks about some of the lessons learned from doing VR demos. He says that VR makes developers more honest because the tricks that work in 2D don’t work as well in 3D. Couch Knights was about creating a shared social space and it was more impactful than they expected.

    Epic’s visual style has traditionally been more realistic and gritty, but they found that within VR people tend to make stronger emotional connections to abstract characters with a more stylized art style. Hyperreal rendering runs into the uncanny valley, while a low-poly scene tends to let your mind accept it more, because there is room for mental projection and less attention drawn to what’s not 100% correct.

    Finally, one of the most powerful experiences in VR for Nick was a social VR experience where he felt presence with another person who had limb tracking enabled. He saw that humans being present with each other was really powerful and compelling, and that being present in a world that’s not our own has a lot of potential that he finds really exciting.

    Theme music: “Fatality” by Tigoolio

    Subscribe to the Voices of VR podcast.

    This video is what has since been referred to as the Showdown demo; it was the final scene of the Crescent Bay demo.

    Here are Nick Whiting and Nick Donaldson from Epic Games discussing lessons from integrating the Oculus Rift into Unreal Engine 4 at Oculus Connect: