Google announced some new features for ARCore at Google I/O last week, including Sceneform to help Java developers integrate 3D content into their apps, augmented images to trigger immersive AR experiences off of trained images, and cloud anchors to enable multiplayer AR experiences in the same environment. I had a chance to catch up with Nathan Martz, Lead Product Manager of ARCore, at Google I/O to talk about each of these new features, where AR is today and where it is going, and the fundamentals of ARCore, including position tracking, environmental understanding, and lighting estimation. I also share some of my highlights from a number of the experiential marketing demos at Google I/O, my impressions of the Lenovo Mirage Solo, and some of the open questions around privacy and ethics at Google.
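To give a sense of what Sceneform looks like in practice, here is a minimal, hypothetical sketch (not from the episode) of the tap-to-place pattern it enables for Java developers: the class name and model asset are placeholders I've made up, but ArFragment, ModelRenderable, AnchorNode, and TransformableNode are the pieces Sceneform exposes for attaching 3D content to ARCore's tracked anchors.

```java
import android.net.Uri;
import com.google.ar.core.Anchor;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.TransformableNode;

public class TapToPlace {
    // Build a renderable asynchronously, then attach it to an anchor wherever
    // the user taps a plane that ARCore has detected.
    public static void enable(ArFragment arFragment) {
        ModelRenderable.builder()
                .setSource(arFragment.getContext(), Uri.parse("model.sfb")) // hypothetical asset name
                .build()
                .thenAccept(renderable ->
                        arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
                            // Anchor the hit point to the real world so it survives tracking updates.
                            Anchor anchor = hitResult.createAnchor();
                            AnchorNode anchorNode = new AnchorNode(anchor);
                            anchorNode.setParent(arFragment.getArSceneView().getScene());

                            // The 3D model is parented to the anchor and can be moved/scaled by the user.
                            TransformableNode model = new TransformableNode(arFragment.getTransformationSystem());
                            model.setRenderable(renderable);
                            model.setParent(anchorNode);
                        }));
    }
}
```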

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Members of Google’s engineering teams do not have the authority or expertise to comment on larger ethical or privacy questions about Google’s products or policies, and so there were a number of questions about ethics and privacy that Martz wasn’t qualified or authorized to answer. This speaks to the larger issue that Google currently doesn’t have a specific point of contact who is authorized to discuss the larger ethical or philosophical implications of its AI, AR, or VR technologies. Many ethical questions were raised by Google’s Duplex demo of an AI talking to a hair salon receptionist without disclosing itself as an AI, and there will continue to be many ethical questions around how much of our surrounding environment Google will need to scan and monitor in order to determine the context of our computing. There needs to be a better process for technology companies like Google to engage in ethical dialogues with the media and the public about how immersive technologies are designed and deployed.

Ambient computing & spatial computing will have an increasing need to collect more and more information about our surrounding environment in order to fully understand our context and better serve our needs within each of these different contexts. This means that technology will continue to demand more and more intimate and private information about our lives. There will continue to be complicated tradeoffs between the benefits of the amazing functionality that these applications can afford and the nuanced costs of the overall erosion of our Fourth Amendment rights to privacy, along with the risks of surveillance capitalism business models capturing data that could also be shared with and used by abusive governments, or breached and leaked onto the Dark Web.

The ethics around privacy is a huge open topic with Google, and they symbolically swept privacy under the rug by quietly announcing their GDPR updates to their privacy policy and their updated privacy tools on the Friday after Google I/O. Google could have announced these GDPR-related privacy changes either before or during Google I/O and made a very public commitment to the changes they’re making to meet their new privacy obligations. But they didn’t. They waited until the Friday afternoon after a three-day festival, once journalists were finished covering Google’s latest advances in AI and AR. All of these amazing AI innovations would be impossible without the data Google is collecting, and so this type of behavior reinforces the impression that privacy is Google’s unconscious blindspot, one they don’t want to have an honest conversation about.

Both Google and Facebook have been taking a very reactive approach to discussing the implications of biometric data. Neither company has yet deployed technologies with biometric sensors like eye tracking or facial tracking for recording emotional sentiment, but Google was showing off emotional sentiment detection in some Google I/O flower experiments as well as in the TendAR demo that originally premiered at Sundance and was shown again at I/O. Neither Google nor Facebook has made any public comments on the unknown ethical thresholds and implications of biometric data from immersive technologies.

Oculus’ privacy policy allows for the tracking of physical movements, and Oculus’ Max Cohen told me that the data they’re recording is at a very low sample frequency. The problem is that Oculus’ privacy policy doesn’t specify any sampling frequency, and there’s no obligation to disclose if Oculus decides to increase the sample rate of the data it records. GDPR obligates Facebook and Google to disclose what identified data are being recorded, but there are no policy obligations to report or disclose what de-identified data are recorded. They can capture whatever anonymized data they want, and it’s not going to show up in any of their privacy tools. Oculus’ privacy policy has a lot of vague and open-ended permissions for what they can record, while Google still hasn’t disclosed many specifics about what they’re recording with AR or VR.

Google’s general privacy policy doesn’t have any specific sections for data collected through the use of immersive technologies. Perhaps more tools will be deployed by GDPR’s enforcement date of May 25th, but until then, there are many open questions: What data are being collected by AR and VR? What data are tied to our personal identities? What are Google’s obligations to disclose and report what de-identified data are captured and stored? As immersive VR & AR technologies evolve, Google is going to have a lot more access to biometric data like eye-tracking data, emotional expressions, facial movements, and eventually galvanic skin response, EEG, EMG, and ECG. How does Google plan on treating biometric data? Will they record it? Will they connect it to our identities? Is it possible that de-identified biometric data could actually contain biometric keys that could unlock that supposedly anonymous data and transform it into personally identifiable information? What are the risks of having massive amounts of biometric data breached and leaked onto the Dark Web?

Ethics in technology is going to continue to be a huge topic in the evolution of AI and VR/AR, and companies like Google and Facebook need to evolve how they directly engage with the public in an embodied dialogue about these topics. These companies should really have cross-functional ethics teams focused on bridging the gap between the technological potential and the larger ethical & cultural impact on society. These technology companies are becoming larger and arguably more influential than a lot of governments, but there are little to no democratic feedback mechanisms for engaging in debates or dialogues about the trajectory of where technology leads our society. Technology decisions will continue to be made before there’s an opportunity to fully evaluate and discuss their ethical implications.

If there were cross-functional teams focused on ethics, then representatives from these teams could have an embodied dialectic with journalists and the public about the ethical implications of their technologies. Without a clear point of contact, these types of ethical discussions have a one-way asymmetry where Google takes an action and then there are a lot of reactive discussions in the media and on social media without an opportunity to directly engage in a dialogue in real time. How resilient is our society to any number of ethical missteps that could potentially be prevented through interactive conversations?

Google announced an AI capability like Google Duplex in a way that wasn’t sensitive to the ethical implications of how this technology would be used. The AI agent didn’t disclose to the human that it was an AI acting on behalf of a person, and it was like watching a prank unfold, waiting to see whether or not the human on the other end would figure out that they were talking to a bot.

How many other humans did Google use to stress test their technology in these types of field tests? Did they ask for consent for these tests? Were these tests actually scheduling real appointments? Or were they cancelled later?

There is a level of human labor involved in training AI, and humans should be able to opt into whether or not they consent to helping train Google’s AI, which could eventually put them out of a job. There is a lot of general fear about whether AI is going to be developed and cultivated with humans in mind, and Google’s cavalier attitude around ethics and AI isn’t helping alleviate any of that anxiety.

There were also a number of Google employees who quit in protest over Google’s participation in the Defense Department’s Project Maven, which it is supporting with training and open-source AI technologies. Gizmodo’s Kate Conger reports that “One employee explained that Google staffers were promised an update on the ethics policy within a few weeks, but that progress appeared to be locked in a holding pattern. The ethical concerns ‘should have been addressed before we entered this contract,’ the employee said.”

Over 90 academics signed an open letter from the International Committee for Robot Arms Control calling for “Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.”

Conger reports that “Google has emphasized that its AI is not being used to kill,” but the open letter written by academics says that it’s headed down that path. The open letter says:

With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2]

There are a lot of deep ethical questions when it comes to AI, but also issues around privacy, and whether or not we’re on a path towards capturing biometric data from VR or private environmental data for AR. My impression is that most of the technology engineers, software architects, and designers earnestly want to do the right thing in creating useful technology that helps solve real-world problems and helps make the world a better place. The problem is that there isn’t a single individual who can speak to the larger ethical, philosophical, or cultural implications of all these technological capabilities that they’re building.

In the absence of making everyone responsible and enabling every individual to speak about the ethical and moral implications of what’s being built, companies like Google and Facebook should consider creating cross-functional teams that are having these conversations. If this is already happening, then these representatives should be cleared to have these ethical discussions with journalists and the public at large. Otherwise, there are going to be even bigger public backlashes to technology like Google Duplex when it’s exalted as a technological achievement while being completely tone deaf to the moral and ethical implications and unintended consequences of having embodied conversational AI deceptively interact with us without our explicit consent. Google recently removed the “Don’t Be Evil” clause from its code of conduct, so let’s all hope they figure out a way to have larger ethical discussions about the technology they’re creating without being completely blinded by their own technological genius.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality


