Ethan Schur is the Chief Marketing Officer for ImmersiON-VRelia, which produces three different virtual reality HMDs. “The GO” is a mobile HMD for either iPhone or Android phones, the PRO G1 has higher resolution and some AR features, and the Blue Sky Pro version has some extra ruggedized features for the military or for use in digital out-of-home entertainment applications.
Brandon Jones is a WebGL developer who started implementing WebVR into the Chrome web browser as a part of his 20% project at Google. He’s been working on it for the past year in order to get VR into the browser. You can find the latest WebVR-enabled Chrome build here.
Brandon talks about the momentum and growth of WebVR over the past year, and how he sees VR as an area of computing that people are very interested in. WebVR and WebGL are very interrelated areas, and so one could consider that he’s working all the time on WebVR.
He talks about the Khronos Group standards committee for WebGL, and the fact that WebVR is currently homeless in terms of a standards committee — it’s uncertain whether the W3C or the Khronos Group will be the governing body. You can check out the latest WebVR spec here.
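The early WebVR spec exposed an HMD’s display and its tracking sensor as separate device objects paired by a shared hardware ID. The sketch below illustrates that pairing step with plain mock objects rather than the real browser API (so it runs anywhere); the `VRDeviceInfo` shape and `kind` field are illustrative simplifications, not the spec’s actual types.

```typescript
// Mock of the early WebVR idea that a headset's display ("hmd") and its
// tracking sensor ("sensor") arrive as separate device entries that an
// app must match up by hardwareUnitId. These types are illustrative.
interface VRDeviceInfo {
  hardwareUnitId: string;
  kind: "hmd" | "sensor";
}

function pairDevices(
  devices: VRDeviceInfo[]
): Array<{ hmd: VRDeviceInfo; sensor: VRDeviceInfo }> {
  const pairs: Array<{ hmd: VRDeviceInfo; sensor: VRDeviceInfo }> = [];
  for (const hmd of devices.filter((d) => d.kind === "hmd")) {
    // A usable headset is one whose sensor shares its hardware ID.
    const sensor = devices.find(
      (d) => d.kind === "sensor" && d.hardwareUnitId === hmd.hardwareUnitId
    );
    if (sensor) pairs.push({ hmd, sensor });
  }
  return pairs;
}
```

In the real API the device list came back asynchronously from the browser; the pairing logic itself looked much like the loop above.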
Reducing latency is the number one focus of the WebVR work in Chrome: the latest measurement with a DK1 was 64 ms of motion-to-photon latency, and it’s likely lower with the DK2’s faster framerates. They’ve also integrated timewarp into WebVR in Chrome in order to help reduce perceived latency. He talks about some of the ongoing work to make realtime WebGL rendering in Chrome a lot faster, as well as some of the other optimizations that could make the browser more efficient if it’s known that the output display is VR.
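The core idea behind timewarp can be shown with a deliberately simplified 1D sketch: the frame was rendered for one head yaw, but by the time it reaches the display the head has rotated, so the image is shifted by the angular error instead of being re-rendered. Real timewarp reprojects in 3D per-pixel; the function name and the pixels-per-degree approximation here are illustrative assumptions.

```typescript
// 1D sketch of orientation timewarp: compute how many pixels to shift
// the already-rendered frame to compensate for head rotation that
// happened after rendering. A positive result means the head turned
// right by (latest - rendered) degrees.
function timewarpShiftPx(
  renderedYawDeg: number,
  latestYawDeg: number,
  horizontalFovDeg: number,
  imageWidthPx: number
): number {
  const pxPerDeg = imageWidthPx / horizontalFovDeg;
  return (latestYawDeg - renderedYawDeg) * pxPerDeg;
}
```

The payoff is that this shift is far cheaper than re-rendering the scene, so perceived latency drops even when the renderer misses a frame.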
Google is not targeting WebVR for the Gear VR at first because it’s not meant to be an end-to-end VR experience. In other words, they’re not creating browsers that work in VR, but rather making VR experiences that work in the browser.
Brandon talks about Google Cardboard, and some of the initial reactions to it and the growing interest around it. His own personal reaction to seeing Cardboard for the first time was to laugh in his manager’s face, but after trying it out and seeing its potential, he very quickly went from “This is crazy!” to “This is brilliant!” He talks about some of the more compelling Cardboard experiences he’s had, and how he sees it being able to fill a need for consuming 360-degree videos, 360-degree photos, and other ephemeral content.
He talks about some of the future optimizations that Unity and web browsers could make in order to streamline the viewing of WebGL content on the web. The current downloads are fairly monolithic and could be made more web-friendly by dynamically streaming assets and content.
Finally, Brandon doesn’t see the web as a platform for AAA VR gaming since it’s not optimized to maximize the use of native resources. Instead, he sees that the web will be great for sharing ephemeral content that you don’t want to download. He also sees that a lot of the framework for the metaverse is already here with the Internet, and cites Vlad Vukicevic, who said, “We already have the metaverse, it’s the Internet in 2D!”
For more information on WebVR, be sure to check it out here.
Elyse Bromser-Kloeden is a 3D environment artist and VR developer who is working on a 360-degree video streaming site called Vrideo. Vrideo is focused on making VR streaming work, and they’re doing some special optimizations to provide a better quality VR streaming experience.
Elyse also has her own personal educational VR experiences including Meso VR and Painted Cave. She’s worked in the games industry as an environment artist and art director on games like Assassin’s Creed Brotherhood and ’Til Morning’s Light. She’s also been freelancing on different projects where she’s able to create real-time VR environments.
Elyse is working on the user interface at Vrideo so there’s a consistent cross-platform VR experience from web to mobile to PC. She’s noticed that text-heavy content needs special attention in VR because it can be overwhelming. Other open questions include how real versus how abstract the environment should be, so that the focus stays on the video content.
She talks about her educational experiences with Painted Cave and Meso VR. She used Autodesk’s free 123D Catch photogrammetry program to capture textures and 3D models at Vasquez Rocks to create Painted Cave. In the process, she learned the importance of optimizing from the beginning: create a comfortable VR experience first, and then focus on making things look good from there.
Elyse talks about how she’s using photos from online combined with Google Street View’s photo sphere captures of Mayan ruins in order to create her Meso VR experience. She has some educational features that she’ll be adding to the experience, including a lot of historical insights based upon working directly with archeologists studying Mayan artifacts.
Her favorite educational experience is 6000 Moons, which is a guided tour of all of Earth’s satellites. She makes the great point that a lot of the iOS educational experiences are geared more towards youth, and the VR educational experiences tend to be more targeted towards an older adult audience.
Trevor Claiborne is a product manager for Google Cardboard, and he talks about the evolution of Cardboard from being a 20% side project to now shipping over 500,000 Cardboard units. Google Cardboard was a very secretive project within Google, and a lot of people were working on it in their spare time leading up to the initial announcement at Google I/O in 2014.
He talks about some of the design evolution of Cardboard, including how they first discovered that they could use a magnet as a button via the phone’s magnetometer. There have also been some small tweaks and improvements to the design over time.
He talks about the initial perception of Cardboard being a joke, but once people get a chance to try a Cardboard VR experience, they understand it a lot more. Trevor says that Google is serious about virtual reality; it’s just that they’re going at it in a different way than other companies. They’re trying to produce VR experiences that are accessible to more people.
He talks about how Google Cardboard was deliberately designed without a headstrap because holding it with your hands limits your head motions in a way that helps to prevent nausea. The phone’s sensors are still limited, and they’re just trying to create an optimized VR experience given these constraints.
He talks about some of the future plans with regards to creating a Cardboard experience that fits a “phablet,” and some of the emerging interaction models. A couple of stand-out Google Cardboard experiences that he’s had include VR Cosmic Roller Coaster & Titans of Space for Cardboard.
Trevor says that there are no current plans at this point to add positional tracking, and so they’re using 3DOF head tracking. He’s looking forward to when VR gets to the point of being able to transport us to another world that’s indistinguishable from reality, and he sees a lot of opportunities for fun and education in VR.
Kyle Monti talks about HapTech’s haptic peripherals for VR. They have a patent for electronic recoil for haptic feedback using linear motors, and they’re creating a generalized haptic device to simulate everything from pool cues and baseball bats to tennis racquets.
They were showing off a gun system at GDC in a really fun arcade-style robot shooter game created by Otherworld Interactive. Having haptic feedback in sync with a positionally tracked object does an incredible job of increasing the sense of presence within VR, and they were using STEM controllers for the tracking in their demo.
One of the iterative innovations that came from the GDC experience is that HapTech modified their demo so that the person standing next in line could spawn and control the robots attacking the user in the demo. It was a great way to engage and involve the people watching the person in VR, and a human-controlled enemy was also able to do things that would be a lot more difficult to program for.
Kyle also talks about some of the more military-grade haptic weapons that they’re helping to create through DEKKA Technologies. They license out the patented recoil system so that they can focus on the haptic technology portion, but also so that they can invest more energy into VR gaming rather than virtual military training equipment. So HapTech is an interesting company in that the founders have a foot in each world.
HapTech was one of the more immersive VR experiences and demos that I got to try out at GDC, and so keep an eye out for their upcoming SDK and information for how to get more involved with creating experiences for their generalized haptic technologies.
Paul Bettner is the founder and CEO of Playful Corp, and he’s best known for creating the iPhone game Words with Friends. Dan Hurd is studio design lead at Playful and led an epic 3-4 month jam that produced 38 VR game prototypes exploring various VR game mechanics.
I had a chance to play Lucky’s Tale for the first time at GDC, and I have to say that I was blown away. I had a half hour for the interview, and I had the best intentions to get a brief feel for the game and then do the interview. Once I got into the Lucky’s Tale world, I was so compelled that I couldn’t stop playing. Playful has discovered something really key about what they call the “sweet spot” of VR, which is optimizing all of the gameplay to happen about an arm’s length away by increasing the IPD and using a set of finely tuned third-person camera controls.
Bettner has been an early adopter of new technologies, and he talks about discovering VR through a friendship with John Carmack and then the process that Playful went through in order to discover what works and what doesn’t work in the VR medium. They couldn’t fall back on their old tricks, and they had to find the rules where there are no rules yet.
Paul talks about how developers have decades of expectations and fantasies of what VR should be, and that we have to be willing to let go of some of the things that don’t work as well within VR. At GDC, Paul was starting to see some experiences that go beyond what our fantasies of VR might be. Dan suggests that developers have to be candid with themselves when experiencing something that they expect to be amazing in VR, but isn’t — such as flying because of the lack of depth cues of objects at far distances.
Dan & Paul talk about their iterative process of trying to discover the fun mechanics within VR. Like Job Simulator, they also found that doing mundane tasks with your hands is fun, especially if they’re physics-based.
Paul talks about discovering the “sweet spot” of VR, which ends up being about an arm’s length away. They found research suggesting that we have more neurons in our brains for processing information at higher fidelity when all of the game mechanics are happening within the length of our arms. Lucky’s Tale was created to optimize the gameplay to happen within this VR sweet spot. They ended up increasing the IPD in order to give a giant’s perspective with a third-person camera. They also found that the diorama mode of an entire level looks really amazing in VR.
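The IPD trick can be captured in one line of arithmetic: rendering the stereo cameras with a larger eye separation makes the world read as proportionally smaller, which is what gives the “giant’s perspective” over a diorama. The function name and the specific numbers below are illustrative, not Playful’s actual values.

```typescript
// Sketch of the giant's-perspective trick: if the virtual eye
// separation is K times the human IPD, the world appears 1/K scale.
// Returns the factor by which the world seems scaled down.
function apparentWorldScale(humanIpdMm: number, renderedIpdMm: number): number {
  return humanIpdMm / renderedIpdMm;
}
```

For example, rendering with a 640 mm eye separation against a typical 64 mm human IPD makes the level read as a one-tenth-scale diorama sitting within arm’s reach.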
They broke some of the best practices guidelines from Oculus in order to discover this table mode of placing all of the action within the sweet spot of VR. Paul says that the parameters controlling the third-person camera operate in a very narrow, unforgiving band of comfort.
Finally, Dan talks about how light can bring a sense of reality to these virtual worlds and being in a magical place that causes surprise and delight in a way that’s the most comfortable and impactful. Paul says that virtual reality is the ultimate expression of being able to create your dreams, and that we’ve actually been able to create a technology that can directly interface with your brain in a way that’s indistinguishable from reality.
Josh Carpenter is a researcher and interaction designer at Mozilla figuring out how to use virtual reality on the web with WebVR. Mozilla has been increasing the size of the team working on WebVR because they’re betting that immersive experiences like virtual and augmented reality will be a huge part of the future of computing. Mozilla wants to enable web developers with the tools and infrastructure so they can help build out the metaverse of interconnected virtual worlds.
One of the lessons learned for user experience design in VR is that designing UI elements onto a curved surface works really well, and that text size has to be fairly large, which reduces the information density available to work with. They also found that lean-in-to-zoom with the DK2’s positional tracking is analogous to pinch-to-zoom on a multi-touch device in that it’s effective yet also somewhat annoying, so they try to avoid relying upon it too much.
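The curved-surface idea amounts to placing UI panels on a cylinder around the viewer, so every panel sits at the same distance and faces the user. This sketch computes such a layout in the horizontal plane; the function name and parameters are assumptions for illustration, not Mozilla’s actual code.

```typescript
// Lay out `count` UI panels on a cylinder of radius `radius` centered
// on the viewer, spread evenly across `totalArcDeg` degrees and
// centered on straight ahead. Returns [x, z] positions per panel
// (viewer at origin, -z is forward); y stays constant.
function panelPositions(
  count: number,
  radius: number,
  totalArcDeg: number
): Array<[number, number]> {
  const positions: Array<[number, number]> = [];
  for (let i = 0; i < count; i++) {
    const t = count === 1 ? 0.5 : i / (count - 1); // 0..1 across the arc
    const angleRad = ((t - 0.5) * totalArcDeg * Math.PI) / 180;
    positions.push([radius * Math.sin(angleRad), -radius * Math.cos(angleRad)]);
  }
  return positions;
}
```

Because every panel is equidistant, text stays at a uniform angular size, which is why the curved layout reads better than a flat billboard whose edges recede.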
Josh talks about Leap Motion integration into WebVR, and warns against designing skeuomorphic interactions with your hands; instead, think about extending your reach and optimizing the interaction design for what works best within VR. Josh also talks about the concept of progressive enhancement and how it applies to designing VR experiences that work across a range of setups, from mobile, to sitting at your desktop with positional tracking, all the way to room-scale tracking with two-hand interactions. For the web, an experience should work with the most basic input device, and then progressively enhance if richer input devices are detected.
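The progressive-enhancement idea can be sketched as a small capability check that picks the richest interaction model the detected hardware supports, falling back to gaze-only input on mobile. The tier names and the `Capabilities` shape are made up for illustration.

```typescript
// Sketch of progressive enhancement for VR input: start from the most
// basic capability (gaze only) and upgrade the interaction model as
// richer devices are detected. Tier names are illustrative.
interface Capabilities {
  positionalTracking: boolean;
  roomScale: boolean;
  handControllers: boolean;
}

function interactionTier(caps: Capabilities): string {
  if (caps.roomScale && caps.handControllers) return "room-scale-two-hands";
  if (caps.positionalTracking) return "seated-positional";
  return "gaze-only";
}
```

The key design point is that the baseline tier must be fully usable on its own; the richer tiers only add affordances rather than gate core functionality.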
Josh talks about the range of WebVR demos that were being shown at GDC ranging from games created in Flash, a 360-degree video, as well as Epic Games’ Couch Knights which was exported from Unreal Engine 4 using a WebGL export plugin.
The WebVR Web API specification is being designed so that you can experience the web through any of the virtual reality HMDs, and they’re also figuring out the user interaction paradigms that will allow people to go to any destination in the world wide web-enabled Metaverse.
He talks about how Unity 5 now supports one-click WebGL export. Unity exports WebGL 1, and WebGL 2 is on the horizon with even more next-generation graphics capabilities. For example, Mozilla was showing off the following demo at GDC to illustrate the level of graphics fidelity that’ll be possible with WebGL 2.
Josh also talks about what they’re doing in order to increase performance for real-time, immersive 3D experiences. There are a lot of optimizations that can be made to the browser if it’s known that the output will be a virtual reality display. It will take more development resources, and Mozilla has recently committed to growing the WebVR team, both to enable more VR on the web and to help create the Metaverse.
He also talks about the power of the web for ephemeral experiences, and some of the collaborative learning experiences that he really wants to have within VR powered by a browser. He also talks about how WebVR is pushing innovation on the audio front, and he cites A Way to Go as an experience that pushes the audio API to its performance limits.
Finally, Josh talks about the future plans for making WebVR easier for developers to use, making sure that it’s possible to have mobile VR experiences, and creating responsive web design standards so that websites can flag that they should be experienced as fully immersive VR. He also sees that it’s a safe bet to invest in virtual reality, because immersive experiences are a key part of driving the innovations of the future of computing.
Sarah Amsellem is a lead engineer for Faceshift, which uses a 3D camera to capture facial animations. She talks about the workflow for getting Faceshift up and running, and says that having a good 3D model is key.
Faceshift is being used for High Fidelity, as well as throughout the animation and gaming industry to bring more human emotion and expression into characters through facial animation. At the moment though, it’s difficult to do full facial tracking while wearing a virtual reality HMD. Sarah suggests that eventually they might be able to fuse eye tracking with other facial tracking technologies in order to bring more human expression into realtime social interactions in virtual environments.
For more information on Faceshift, check out this product page.
The Khronos Group is an open standards organization for compute, graphics and media, and they provide interoperability APIs to enable the portability of hardware. Neil Trevett is the President of the Khronos Group, and he talks about the three main announcements that were made at GDC: Vulkan, OpenCL 2.1, and SPIR-V.
Neil talks about how SPIR-V is used by both OpenCL 2.1 and by Vulkan, and he explains more about what SPIR-V enables.
VR requires a lot of different pieces of hardware and software to come together. The Khronos Group brings together 120 companies ranging from Google, Intel, and Oculus VR to Nvidia and AMD. They’re taking feedback from VR display vendors in order to drive more pixels and optimize the graphics pipeline to be performant enough for VR.
He explains the tension between platforms and interoperability among the hardware and software vendors. This tension helps the ecosystem evolve, and he talks about the importance of standards that come from the Khronos Group. Neil sees that VR is still very much in the chaos phase of experimentation for creating the most compelling VR experiences. Once it’s clear which direction the industry is moving, standards start to play a bigger role.
Open standards like Vulkan usually work best when they have proprietary competitors like DirectX 12. He talks about the advantages of having both in the ecosystem, and how that competition can drive innovation and provide choice to developers.
Microsoft recently joined the Khronos Group for the WebGL working group, and the Khronos Group invites a lot of different companies to participate.
The WebVR standard currently lives between the World Wide Web Consortium (W3C) and the Khronos Group. WebGL ended up being governed by the Khronos Group instead of the W3C because it’s more tightly tied to the GPU, and all of the hardware stakeholders were already a part of the Khronos Group.
The Khronos Group has interoperability APIs to enable portability of hardware, and they’ve started to implement conformance tests as a part of the Vulkan standard to ensure that it’s properly implemented by each vendor.
One of the myths about standards bodies is that they move too slowly, but Neil says that it’s more about the time it takes for an ecosystem of solutions to be implemented by multiple vendors. He also reassures members of the Khronos Group that their intellectual property is protected: part of being a member is agreeing not to sue each other over proper implementations of Vulkan.
The Khronos Group is involved with helping to reduce latency in drivers, and they’ll be taking a lot of feedback from the VR community over the next couple of years and integrating those suggestions into the APIs.
Finally, Neil sees that the future of VR will be driven by applications that help companies save or make more money. He sees that it’s an exciting time for the technology, and that VR will continue to push innovation throughout the graphics hardware industry.
Below are some of the presentations that the Khronos Group and various members gave during GDC.