At the IEEE VR conference this year, there was a pre-conference workshop about Immersive Analytics, which talked about how to use VR & multi-sensory interfaces to support analytical reasoning, decision making, and real-world analytics tasks by immersing users in their data. Data visualization expert Chris North gave the keynote for the Immersive Analytics workshop, and I had a chance to catch up with him to talk about the four different ways that intelligence analysts use space for Big Visualization, Big Cognition, Big Interaction, and Big Algorithms. Chris also explains why the principles of embodied cognition are causing intelligence analysts to look at how to use the body, physical movement, and the surrounding environment in order to support and amplify distributed cognition.
And here’s a graphic of the Pirolli-Card Sensemaking Loop that Chris referred to within his talk as a process that intelligence analysts use in order to do intelligence analysis.
Brad Herman & Shiraz Akmal recently left DreamWorks Animation and raised $3 million to start SPACES Inc., a virtual reality and mixed reality company focusing on storytelling for brands including Microsoft, NBCUniversal, and The Hettema Group. Brad founded the DreamLab at DreamWorks to focus on developing content for mobile and cutting-edge immersive technologies like VR, so he’s been intimately involved with exploring storytelling in VR for the past couple of years. I had a chance to catch up with him at SVVR 2016, where he shared his thoughts about storytelling in VR and mixed reality, how to deal with non-compliant users, how he thinks VR differs from film & Broadway, and one of their first mixed reality projects, based on Big Blue Bubble’s My Singing Monsters.
Rob Jagnow is a senior software engineer on Google’s Daydream Labs team, which produced over 60 VR prototypes in 30 weeks. The team was originally revealed in a WIRED magazine profile of VP of VR Clay Bavor, and was officially announced by Bavor in his Google I/O talk on “VR at Google.” The Daydream Labs team gave a really amazing talk titled “Lessons Learned from VR Prototyping,” which had a lot of great VR design insights across the three areas of interactions, immersion, and social VR. I had a chance to catch up with Rob at Google I/O and dig a bit deeper into some of his favorite prototypes, lessons learned, and the design principles driving these VR experiments.
Betty Mohler is a researcher at the Max Planck Institute for Biological Cybernetics, and in my previous interview with her she had some keen insights into how the uncanny valley is connected to expectations. I caught up with her again at the 2016 IEEE VR academic conference, where she was talking about a recent paper, “Appealing female avatars from 3D body scans: Perceptual effects of stylization.” They used an automated process to stylize a 3D scan across a variety of animated art styles, and found that most women preferred to have at least some stylization in their avatar. I talk to Betty about these findings, some of the associated ethical issues, and how this 3D avatar research could be applied to help treat people with eating disorders and body dysmorphic disorder.
During Google’s Daydream Labs presentation at Google I/O, they discussed how to deal with different types of trolling behaviors. Suzanne Leibrick is a VR user interface and user experience designer who has experienced different types of online harassment within virtual spaces. In response, she wrote up a number of different suggestions in a post titled Social VR solutions. I had a chance to catch up with her at Google I/O to talk about some of these technical solutions, as well as how to create more open and welcoming social VR spaces.
TechCrunch reports that Snapchat just raised another $1.8 billion in a Series F round, and Snapchat could end up being a major player within the augmented reality hardware market. Snapchat is probably considered a bit of a dark horse by most people when compared to other major AR players like Microsoft, Meta, Daqri, or Magic Leap. But Snapchat has captured the attention of Generation Z through the use of augmented reality filters for selfies and vlogs.
On my trip to Google I/O, I met Duygu Daniels, an augmented reality UI/UX designer who is an enthusiastic Snapchat user and evangelist to her Millennial friends. Duygu is very interested in studying the behavior of Generation Z and how they’re using Snapchat, Snapchat’s connection to augmented reality, and how it’s cultivating radical authenticity and ushering us into The Experiential Age.
LISTEN TO THE VOICES OF VR PODCAST
Mike Wadhera recently pointed to Snapchat as an example of a new paradigm of Experiential Age applications in his article, “The Information Age is over; welcome to the Experience Age.” He cites a video in which Snapchat co-founder Evan Spiegel talks about how desktop computers created a mindset of accumulating information, whereas mobile computing empowers users with instant expression. With Snapchat, identity is constructed from being in the moment rather than from an accumulation of past actions, as on Facebook or Twitter. Stories in Snapchat are told in chronological order and last only 24 hours, whereas other social media is presented in reverse chronological order and lives on indefinitely. Rather than taking photos of things worth remembering for a long time, Generation Z uses annotated photos as a more immediate and ephemeral form of symbolic communication.
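To make that contrast concrete, here’s a toy sketch in Python of the underlying data-model difference: an ephemeral, chronological story feed versus a permanent, reverse-chronological timeline. This is entirely my own illustration, not Snapchat’s actual implementation.

```python
import time

DAY = 24 * 60 * 60

posts = []  # each post is a (unix_timestamp, content) tuple

def add_post(content, now=None):
    posts.append((now if now is not None else time.time(), content))

def story_feed(now=None):
    """Snapchat-style: only the last 24 hours, oldest first."""
    now = now if now is not None else time.time()
    return [content for ts, content in sorted(posts) if now - ts < DAY]

def timeline():
    """Facebook/Twitter-style: everything ever posted, newest first."""
    return [content for ts, content in sorted(posts, reverse=True)]

# A two-day-old post vanishes from the story but lives on in the timeline.
add_post("ski trip", now=time.time() - 2 * DAY)
add_post("coffee with friends")
print(story_feed())  # ['coffee with friends']
print(timeline())    # ['coffee with friends', 'ski trip']
```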
Artificial Intelligence has the potential to disrupt so many different dimensions of our society that the White House Office of Science & Technology Policy recently announced a series of four public workshops to look at some of the possible impacts of AI. The first of these workshops happened at the University of Washington on Tuesday, May 24th, and I was there to cover how some of these discussions may impact the virtual reality community.
The first AI public workshop was focused on law and policy, and I had a chance to talk to three different people about their perspectives on AI. I interviewed the White House Deputy U.S. Chief Technology Officer Edward Felten about how these workshops came about, and the government’s plan for addressing the issue.
I also talked with workshop attendee Sheila Dean, a privacy advocate, about the implications of AI algorithms making judgments about identified individuals. And I spoke with Ned Finkle, Vice President of External Affairs at NVIDIA, about the role of high-end GPUs in the AI revolution.
LISTEN TO THE VOICES OF VR PODCAST
There are a number of takeaways from this event that are relevant to the VR community.
First of all, there are going to be a number of privacy issues around the biometric data that could be collected from virtual reality technologies, including eye tracking, attention, heart rate, emotional states, body language, and even EMG muscle data or EEG brainwaves. A number of companies at the Experiential Technology and Neurogaming conference were using machine learning techniques to analyze and make sense of these raw data streams. Storing this type of biometric data, and the inferences drawn from it, could have some serious privacy implications. For example, Conor Russomanno warned me that EEG data could have a unique fingerprint, so even storing anonymized brainwave data could have consequences because it could still be traced back to you.
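As a rough illustration of Conor’s point, here’s a small sketch showing how a classifier can re-identify which user produced an “anonymous” sample once it has learned each user’s characteristic signal features. The data is synthetic; a real EEG pipeline is far more involved.

```python
# Toy illustration: "anonymized" biometric streams can still act as a
# fingerprint. Train a classifier on per-user EEG-like feature vectors,
# then re-identify who produced a new "anonymous" sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_users, samples_per_user, n_features = 10, 50, 16

# Each user gets a stable "signature" (their characteristic band powers);
# each recording session is that signature plus noise.
signatures = rng.normal(0, 1, (n_users, n_features))
X = np.vstack([sig + rng.normal(0, 0.3, (samples_per_user, n_features))
               for sig in signatures])
y = np.repeat(np.arange(n_users), samples_per_user)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new "anonymous" recording from user 3 is trivially linked back to them.
anonymous_sample = signatures[3] + rng.normal(0, 0.3, n_features)
print(clf.predict(anonymous_sample.reshape(1, -1)))  # -> [3]
```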
I also discussed tracking user behavior and data with Linden Lab’s Ebbe Altberg, where we talked about the potential dangers of having companies ban users based upon observed behavior. Will there be AI approaches that either grant or deny access to virtual spaces based upon an aggregation of behavioral data or community feedback?
Sheila Dean was concerned that she didn’t hear a lot of voices advocating for the privacy rights of users in the context of some of these AI-driven tracking solutions. She sees that we’re in the middle of a battle where our privacy awareness and rights are eroding, and that users need to be aware of what’s at stake when AI neural nets start to flag us as targets within these databases. She says that consumers need to advocate for data access, privacy notice and consent, and privacy controls, and that people need to be more aware of their privacy rights. We have the right to ask companies and the government to send us a copy of the data that they have about us, because we still own all of our data.
Sheila also had a strong reaction to a presentation by Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, who had a rather optimistic take on AI and its risks. He had a slide that labeled SkyNet as a “Hollywood Myth,” and Sheila pointed out that SKYNET is a very real NSA program. She cites an article by The Intercept reporting that there’s an actual NSA program called SKYNET that uses AI technologies to identify terrorist targets.
At the same time, SkyNet is kind of seen as the “Hitler” of AI discussions, and we could probably adapt Godwin’s Law to say, “As an online discussion [about AI] grows longer, the probability of a comparison involving [SkyNet] approaches 1.”
There have been a lot of overblown fears about AI stoked by dystopian sci-fi coming out of Hollywood. These fears have the potential to prevent AI from contributing to the public good in all sorts of ways, from saving lives to making us smarter.
Microsoft Research’s Kate Crawford sees that going straight to SkyNet can suck the oxygen out of the nuances of the issue. She was advocating for stronger ethics within the computer science community, as well as a more interdisciplinary approach that encompasses as many different perspectives on AI as possible.
Alex says that the advent of print, film, and TV marked a shift where we started to see canonical versions of stories told primarily from a singular perspective. But VR has the potential to show us the vulnerability of the first-person perspective, and as a result put more emphasis on ensuring that our machine learning approaches include a diversity of perspectives across many different domains.
Right now AI is very narrow and focused on specific applications, but moving towards artificial general intelligence means that we’re going to have to discover some of the underlying principles that are transferable to building up a common-sense framework for intelligence. Artificial general intelligence is one of the hard, unsolved problems in AI, and no one knows how to do it yet. But it’s likely to require cross-disciplinary collaboration, holistic thinking, and other ingredients that have yet to be discovered.
Another takeaway from this AI workshop for me is that VR enthusiasts are going to have the hardware required to train AI networks. Anyone with a PC capable of running the Oculus Rift or HTC Vive already has a high-end graphics card, and cards like the GTX 970, 980, or 1080 share the same architectures as NVIDIA’s even higher-end GPUs that are used to train neural networks.
When VR gamers are not playing a VR experience, they could be using their computer’s massively parallel processing capability to train neural networks. Gaming and virtual reality have been a couple of the key drivers of GPU technology, so AI and VR have a very symbiotic relationship in the technology stack that’s enabling both revolutions.
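Here’s a minimal sketch of that idea, using PyTorch as one example framework: the same CUDA cores that render a VR scene can run the training loop for a small neural network. The model and data below are toy placeholders.

```python
# Minimal sketch: a consumer gaming GPU doing neural network training.
# Requires PyTorch with CUDA support; falls back to CPU otherwise.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # e.g. a GTX 970/980/1080

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fake training batch standing in for a real dataset.
inputs = torch.randn(256, 64, device=device)
targets = torch.randint(0, 10, (256,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # gradients computed on the GPU's parallel cores
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```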
Self-driving cars are also going to have very powerful GPUs as part of the parallel-processing brains that enable all of their computer vision sensing and the continuous training of the neural-net methods of driving. There will likely be a lot of unintended consequences of these new driving platforms that we haven’t even thought of yet.
Will we be playing VR experiences driven by the GPU in our car? Will we be using our cars to train AI neural networks? Or will we even own cars in the future, and instead switch over to autonomous transportation services as our primary mode of travel?
Our society is also in the midst of moving from the Information Age to the Experiential Age. In the Information Age, computer algorithms were written in logical, rational code that could be debugged and well understood by humans. In the Experiential Age, machine learning neural networks are guided through a training “experience” by humans. Humans are curating, crafting, and collaborating with these neural networks throughout the entire training process. But once these neural networks start making decisions, humans can have a hard time describing why a neural net made a given decision, especially in cases where machine learning processes start to exceed human intelligence. We are going to need to create AI that is able to understand and explain to us what other AI algorithms are doing.
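One common technique along these lines is a global surrogate model: train a simple, human-readable model to mimic a black box’s decisions. Here’s a minimal sketch using scikit-learn, my own illustration rather than any specific system discussed at the workshop.

```python
# Sketch of one "AI explaining AI" technique: fit a readable surrogate
# model that approximates a black-box model's decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The opaque model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow decision tree to predict the *black box's* outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches black box on {fidelity:.0%} of inputs")
print(export_text(surrogate))  # human-readable if/then rules
```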
Because machine learning programs need to be trained by humans, AI carries the risk that some of our own biases and prejudices could be transferred into computer programs. ProPublica conducted a year-long investigation into machine bias, and found evidence that software used to predict future criminals was “biased against blacks.”
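The kind of check ProPublica ran is easy to sketch: compare false positive rates between groups, that is, how often people who did not reoffend were nevertheless flagged as high risk. Below is a minimal sketch with synthetic data; the numbers are made up to illustrate a biased model and are not drawn from the actual COMPAS data.

```python
# Fairness audit sketch: false positive rates by group, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)
reoffended = rng.random(n) < 0.3            # ground-truth outcome
# A biased model: more likely to flag group B even when they don't reoffend.
flag_prob = 0.3 + 0.25 * (group == "B")
flagged = rng.random(n) < np.where(reoffended, 0.6, flag_prob)

for g in ["A", "B"]:
    mask = (group == g) & ~reoffended       # people who did NOT reoffend...
    fpr = flagged[mask].mean()              # ...but were flagged high-risk
    print(f"group {g}: false positive rate = {fpr:.2f}")
```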
AI presents a lot of interesting legal, economic, and safety issues, which has Deputy U.S. CTO Ed Felten saying, “Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions.”
There is going to be a whole class of jobs replaced by AI, and truck drivers are probably among the most at risk. Pedro Domingos said that AI is pretty narrow right now, so the more diverse the set of skills and common sense required to do a job, the safer that job is for the time being. With a lot of jobs being displaced by AI, virtual reality may have a huge role to play in helping to train displaced workers with new job skills.
AI will have vast implications for our society, and the government is starting to take notice and is taking a proactive approach by soliciting feedback and holding these public workshops about AI. This first workshop, on the Legal and Governance Implications of Artificial Intelligence, happened this past Tuesday in Seattle, WA.
Here are the three other AI workshops that are coming up:
Darius Kazemi is an artist who creates AI bots, and he did some live-tweet coverage with a lot of commentary starting here (click through to see the full discussion):
Google’s Nathan Martz is a product manager for the Daydream SDK, and I had a chance to catch up with him to talk about what developers need to know to get started developing for Daydream, which will be released in the fall. Road to VR has a great article covering what’s needed to build the DIY dev kit, which Nathan refers to as the “Build Your Own Dev Kit” (BYODK), or you can follow the instructions here. I talk to Nathan about the 3DOF Daydream controller, SDK features, Android VR, as well as some other tips for getting started in developing for Daydream.
Google’s mission is “to organize the world’s information and make it universally accessible and useful,” but what happens if there’s a shift from the Information Age to the Experience Age? Google’s Daydream mobile VR headset is part of that answer. It’s Google’s next phase beyond the minimum viable VR of the Cardboard headset, starting to really leverage the Android hardware and software ecosystem to help bring virtual reality to the world at scale.
I had a chance to talk with Andrew Nartker, Product Manager of hardware and platform for Daydream & Google VR, as well as Andrey Doronichev, Group Product Manager of VR Products at Google, who works on the software, apps, and experiences. We talked about designing mobile VR for extended use, the differences between Daydream and Project Tango, the state of positional tracking on a mobile phone, some of the Google apps that are being developed, using voice as a primary input, how Tilt Brush and Vive development fits into Google VR, how they’re using artificial intelligence to do video stitching with the Jump camera, and adding experiences to Google’s collection of the world’s information. We also talk about some of their favorite experiences in VR, and look to the future of what’s next when it comes to mobile VR and bringing VR to the masses.
Candy Crush creator Tommy Palm has moved into making casual virtual reality games with Resolution Games. They’ve already released Bait! on the Gear VR, the first VR app to feature in-app purchases, and they just announced at Google I/O that they’re designing a launch title for Google’s Daydream mobile VR platform called Wonderglade. I had a chance to catch up with Tommy about developing casual VR games that are interruptible, how they’re designing in natural breaks so as not to create games that are too addicting, thoughts on the future of free-to-play VR with in-app purchases, how VR games can be social without being multiplayer, developing a game with Daydream’s 3DOF controller, and how casual games may really start to blur the line between games and VR experiences.
LISTEN TO THE VOICES OF VR PODCAST
Here are some other videos and updates from Google I/O, including a new GoogleVR YouTube channel. The @GoogleCardboard Twitter account has been deprecated, and Google’s main VR account is now @GoogleVR.
Here are some of the relevant GoogleVR talks from Google I/O over the past couple of days (with more coming soon to their GoogleVR YouTube channel):
VR at Google Keynote where Daydream Labs was announced.
Daydream Labs: Lessons Learned from VR Prototyping. This is an absolute must-watch talk for any VR designer, since they condensed the lessons they learned from rapidly prototyping 60 experiences in 30 weeks.
Daydream Labs: Drum Keyboard will revolutionize text input.
Monetization and Distribution on Daydream
Designing & Developing for the Daydream Controller – Google I/O 2016
VR Design Process – Google I/O 2016
Learn more about the Cloud Vision and Speech API – Google I/O 2016