Ethical design of technology is a hot topic right now, and Tristan Harris has been catalyzing a lot of broader discussion about the perils of persuasive technologies. He quit his job as a design ethicist at Google after feeling limited in his ability to bring about meaningful change, and he started the Time Well Spent movement, which got a number of features integrated into the iPhone and Android operating systems to help people use technology more mindfully. He recently launched a non-profit called the Center for Humane Technology that is bringing to light the interconnected, mutually self-reinforcing harms of technology, including the reduction of attention spans, distraction, information overload, polarization, the outrage-ification of politics, filter bubbles, the breakdown of trust, narcissism, influencer culture, the quantification of attention from others, impacts on teenage mental health, social isolation, deep fakes, and lower intimacy.
I had a chance to catch up with Harris at the Decentralized Web Camp to talk about his journey into looking at the harms of technology, and his holistic approach of examining potential policy changes, educating the culture, and developing ethical design principles for technologists and designers. He says that we need more sophisticated design frameworks that include a more introspective and phenomenological understanding of human nature, and that address how to design for trust, for participation, for empathy, and for understanding. All technology designers are facing a number of fundamental design challenges within our current cultural context, and before we continue to design the future of immersive systems, it's worth taking a moment to reflect upon some of the broader issues and challenges that we're facing with our current technological infrastructure and the economic business practices of surveillance capitalism.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Here’s a video of the testimony that Harris provided at the Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet on June 25, 2019, in a session titled “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms.” His full written statement can be found here.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So on the Voices of VR Podcast for the last five years, I've been trying to document the evolution of virtual reality technologies, but I always love to find the stories that connect to what's happening in the larger context with technology today. I think we're seeing a lot of discussions about the impacts of technology in our lives, and a lot of reflection on these major corporations that have a huge amount of influence over our lives: what is happening with this whole realm of surveillance capitalism, the way that these persuasive technologies are being used and designed, and whether it's being done with the highest level of ethics, morals, and virtues. It seems like it's working really well for a small handful of companies, but there are all these other unintended consequences of technology, and we're starting to look at these broader ethical issues. So it's a pretty hot topic right now to look at ethics in technology and ethical design. And Tristan Harris is someone who was a design ethicist at Google, and he wasn't finding that his work was having much impact; he didn't have a lot of power or leverage to actually bring about much change at all while working inside of Google. And so he decided to leave and start this Time Well Spent movement, which was trying to bring awareness to all these different persuasive technologies, to find ways that we can bring more inner contemplative reflection into our lives, and also to advocate for these different time well spent features within the operating systems. And since then, he's actually been wildly successful: a lot of the features he's been advocating for, like being able to track your screen time, have shown up at both the Apple and Google keynotes and have started being integrated more and more into the operating systems. I first came across Tristan, and my first tweet about him was back on January 7th of 2017. There was an article in The Atlantic called The Binge Breaker, a profile saying that Tristan Harris believes Silicon Valley is addicting us to our phones and that he's determined to make it stop. He then went on a number of different media appearances in 2017, going on the Recode podcast with Kara Swisher on January 30th. Then in April, he had a big 60 Minutes segment called Brain Hacking with Anderson Cooper. And from there, he went on Sam Harris' podcast, which asked, what is technology doing to us? So I saw the Atlantic profile, then I saw the 60 Minutes video, and then I listened to the whole interview that he did with Sam Harris. I think a lot of his work has been really impactful in my own work of covering the virtual reality industry, because he's bringing forth a lot of these deeper ethical questions about how we're going to create these systems in a way that is ethical and that has a trajectory that is actually cultivating what we want. We have this spiraling-out-of-control type of feeling with all these unintended consequences of technology, and so there's a lot of reflection happening. Tristan actually went in front of Congress on June 25th of 2019 and testified at a congressional hearing.
It was called Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms. There's also been a lot of talk about antitrust legislation, and Facebook just announced in their earnings report that the government and the judiciary are actually looking into antitrust. I think Zuckerberg has actually come forth and said they are inviting regulation in some ways, just because there's been so much negative backlash against these big technology companies that it's a bit of, hey, let's maybe have the government come in and try to regulate them in some ways. The challenge is that these government institutions are so far behind in terms of being up to speed on what technology is even doing right now. So Tristan Harris is trying to help bring government up to speed and inform them of what's actually happening, but also trying to bring about these wider discussions about what it means to design ethical technology today. And I think how this relates to VR is that as we're operating within these existing systems, we're going to recreate a lot of the mistakes that we've already seen from these other social media platforms, along with a lot of the learnings. So how do we prevent those same mistakes, and how do we build a little bit more resilience into the technology? I think it's still a bit of an open question what those design frameworks actually look like. And I think there's a lot of overlap between what Tristan Harris is doing and a lot of the work that I've been doing here on the Voices of VR, trying to talk about privacy and ethics and to think about some of these experiential design frameworks that try to incorporate ethical design perspectives. So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Tristan Harris actually happened at the Decentralized Web Camp. It was a camp of about 400 to 500 of some of the leading architects of the decentralized web and blockchain technologies, trying to look at what the future of decentralized systems is going to look like, a whole weekend camp put on by the Internet Archive. And so I had an opportunity to meet up with Tristan Harris, who was doing a presentation there, and we were in a larger context of thinking about decentralized architectures. So this interview happened on Saturday, July 20th, 2019, at the Decentralized Web Camp in Pescadero, California. So with that, let's go ahead and dive right in.
[00:05:55.137] Tristan Harris: …on urgently trying to catalyze a change from an extractive attention economy that treats humans as resources to mine attention out of, attention farming, into a regenerative attention economy that's humane and doesn't treat humans as resources and doesn't cause a lot of the problems we're seeing today.
[00:06:13.184] Kent Bye: Yeah, since we're here at the Decentralized Web Camp, maybe you could talk a bit about what type of conversations and sessions you've been having here, and where the work you're doing for the Center for Humane Technology is interfacing with this decentralized web.
[00:06:28.965] Tristan Harris: Yeah, well, we hosted a session this morning with my co-founder Aza Raskin on just walking people through the problem. I think a lot of people at the Decentralized Web see the central problem in technology as being that we have centralized technology platforms, and that that's the major problem statement. But I think that's massively insufficient to the actual harms that we're experiencing. First of all, we have to ask, what is the problem? Where are the areas where technology is directly harming the fabric of society? So we look at the reduction of attention spans, information overload, distraction, polarization, the outrage-ification of politics, narcissism, influencer culture, everyone having to get as much attention from other people as possible, teenage mental health, social isolation, deep fakes, the breakdown of trust. Those might sound like a bunch of disconnected issues, but they're part of an interconnected system of harms that are mutually self-reinforcing. So if you zoom out and ask, why does the world feel like it's going crazy all at once? People are believing in more conspiracies. People are more polarized all around the world all at once. They don't trust each other. And the point is that because technology is seeking to find the things that light up your brainstem, those things correlate with the progressive hacking of human weaknesses. And this takes a sort of different lens to look at what's wrong with technology. I mean, I think a lot of people here at D-Web say, well, the major problem is just that all the data is locked up in these big tech platforms, and that's the problem. And it's a component, but that doesn't explain attention spans going down, why people are believing in conspiracy theories, and why we hate each other. So the thing we're trying to do here is to say, what if those things are happening for a reason, and we can stop it by recognizing that we're at a tipping point where we need to protect the limits of the human social animal? So we think of it this way: just like the Anthropocene is a new period of Earth's history in which humans are in control of the fate of what happens to the environment, the Earth is no longer just doing what it's doing. We are now already geoengineering the climate; we're just geoengineering it towards catastrophe. So I think our social fabric has now been placed under the control of technology, and we've ceded control of world history to technology, because every major election, the mental health of a generation, and what you and I pay attention to the moment we wake up in the morning are all determined by the forces that we're talking about. There's a finite amount of human attention. It takes nine months to grow a new human being and jack them into the attention economy. And the algorithms that guide YouTube, Facebook, et cetera, are already runaway rogue AIs maximizing whatever works, which tends to produce the most extreme stuff. So the example we give is, you know, on YouTube, if you start a teen girl watching a dieting video, it recommends anorexia videos. First of all, you have to understand that 70% of YouTube's billion hours a day of watch time is actually determined by the algorithm. People think, oh, it's just a tool, we're just using these products however we want to, so why are we talking about technology being responsible for world history? Well, the point is that technology has already out-competed human social primate instincts.
The easiest way to see that is you're sitting there, you're about to play a YouTube video, you think you're going to watch one video, but instead what happens is you wake up from a trance two hours later and you're like, why did I end up watching all those videos? And we say, I should have had more self-control. But saying that masks what was an asymmetry of power. Because when you watch that one video, you activate a supercomputer on the other side of the screen, the Google supercomputer, that assembles a voodoo doll-like representation of you, where the clothes on the voodoo doll, the hair on the voodoo doll, the eyes on the voodoo doll come from all the click patterns and likes and email that you've ever put into Google, to make that voodoo doll more and more accurate. So they run a simulation: if I prick you with this video, this Jordan Peterson hate-the-social-justice-warriors video, that'll cause you to stay 45 minutes. But if I prick you with this flat earth conspiracy theory, you'll stay here for the next hour. And so YouTube doesn't know what's good or what's bad for you. All it knows is predictions that will tend to keep you here. And if we can't make a distinction between what works on a human being versus what is good for society, or good for us, or what we actually want in a reflective sense, we're toast. So what's really going on is kind of a breakdown of the center of the moral universe. And this is a Yuval Harari idea, from the author of Sapiens: that neoliberalism and the Enlightenment have been about putting human feelings and human choice at the center of the moral universe. In other words, the voter knows best, the feelings that you have as a voter are right, the customer is always right, and even if you told me you wanted to go to the gym and you don't actually want to use Facebook, but you end up spending an hour using Facebook, that was your revealed preference. Your revealed preference is the thing you do with your finger. That being the center of the moral universe is the problem, because we've now already built technology that can so deeply hack and manipulate human weaknesses that I don't think we're fully appreciating that we've already handed control over to the machines. Even your audience might be skeptical at this point in the interview.
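To make the mechanism Harris is describing concrete, here is a minimal, purely hypothetical sketch of an engagement-maximizing recommender: a model scores candidate videos by predicted watch time for a given user profile (the "voodoo doll") and serves whichever scores highest, with no term for whether the content is good for the viewer. This is not YouTube's actual code; every name and field below is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Video:
    video_id: str
    features: Dict[str, float]  # e.g. topic, sensationalism signals, length

@dataclass
class UserProfile:
    # The "voodoo doll": everything the platform has inferred about you.
    watch_history: List[str] = field(default_factory=list)
    click_patterns: Dict[str, float] = field(default_factory=dict)

def recommend_next(
    user: UserProfile,
    candidates: List[Video],
    predicted_watch_minutes: Callable[[UserProfile, Video], float],
) -> Video:
    """Engagement-maximizing selection: serve whatever the model predicts
    will keep this user watching longest. Nothing here asks whether the
    content is true, healthy, or what the user would reflectively choose."""
    return max(candidates, key=lambda v: predicted_watch_minutes(user, v))

# A more humane objective would need an extra, explicitly chosen term, e.g.
#   score = predicted_watch_minutes(u, v) - lam * predicted_regret(u, v)
# but "predicted_regret" is exactly the kind of signal platforms don't measure.
```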
[00:11:26.513] Kent Bye: Well, so I've been covering a lot of issues of both privacy and the threats to privacy from biometric data. Being in the VR community, I tend to take a very phenomenological look at things, because I'm looking at the human experience. And so when I looked at the type of data you were collecting for Time Well Spent on a lot of these top apps, to see what the phenomenological response was when people reported back, whether they felt happy or dissatisfied with their experience, it was striking to see two-thirds of people being really dissatisfied with how they just spent their time; there's a lot of regret. But there's still that one-third who feel like their needs are met. And so you have this quality of a phenomenological experience, but it's very difficult for algorithms and AI to capture it, and even if they could, I don't know if I'd want them to. Because then it's hyper-focused on extracting my emotional reactions, which is where a lot of these technological roadmaps are going: towards looking at our facial expressions, eventually at biometric data through VR, and, you know, already passively turning on cameras to watch what our faces are doing in these reactions. So they're already pushing the bounds of what I would consider private data, but it's this harvesting of our emotions to correlate to that content. They're trying to get to that phenomenological reaction, and it's still going to be noisy and imperfect, and even if they could get there, I don't want them to. But the fact that these algorithms work off a quantification of attention that they can measure, while all these other qualitative aspects are invisible, leads to these unintended consequences: because those aspects are invisible, we have all these systems that hijack our attention. So I don't know what the solution is in terms of ethical design. When you're operating at that scale, is it a matter of, like, we shouldn't have things at this scale, or what are the different vector points where you can start to shift this dynamic?
[00:13:18.603] Tristan Harris: Well, you just opened up a wormhole of conversations that could keep us here for 10 hours. On the first point you made: essentially, you can see a number of things breaking down in modernity in exactly the places where external observation metrics like GDP or activity or time spent or engagement, which were decent representations of goodness up until some point, break down, and they break down specifically in the areas that we can no longer measure. So harms show up in places that are not the obvious places where we measure benefit and harm. Take air pollution. We don't have ubiquitous sensors for air pollution; we have to go create and invent those. Climate change is very hard to measure. It's a complex system. There are feedback loops. And then with technology, all the things that are going wrong, polarization, outrage-ification, filter bubbles, distraction, lower intimacy in relationships, these are all really hard things to measure. Even attention span is hard to measure. And Apple and Google often have the strangleholds on where that data could come from, or YouTube's the only one who knows how much it's recommended certain things, or Facebook's the only one who knows how much people clicked on Russian propaganda. So it's as if Exxon, the company that was creating all the pollution, also owned all the observation satellites and didn't let anybody else have access to that observation network. So the challenge is that we have massive exponential systems generating diffuse, slow-growing, and invisible forms of harm, exactly the kinds that are harder to pick up with external thermometers out in the world, like engagement metrics. Because, you know, the example we give is that YouTube recommended Alex Jones conspiracy theory videos 15 billion times. Right. I think people really have a hard time grasping this. YouTube has more than 2 billion people who use the service. That's about the number of notional followers of Christianity, as a psychological footprint. And it recommends Alex Jones 15 billion times. If you don't know Alex Jones: InfoWars, conspiracy theory videos. That's more than the combined traffic of the New York Times, Washington Post, Fox News, BBC, et cetera, with none of the responsibility. So once you see that Alex Jones is recommended 15 billion times, the common response is, well, we were just giving people what they wanted. So in addition to a deficit of external thermometers to measure this stuff and its relevance to what's happening in the world, the other thing we're missing is good language. We have a massive deficit of language. If a lot of people click on and stay watching Alex Jones videos, and we engineers on the other side say, oh, we were just giving people what they wanted or what they liked, because they wouldn't have spent all that time if they didn't want it or didn't like it, then just zoom into that one question and that one use of language: is that an appropriate description of what actually happened in 15 billion recommendations being made and a lot of people spending a lot of hours watching conspiracy theories? Is that what they wanted? Is that what they liked? Or is that just what worked at getting you to do that?
So I think the same thing: we're sitting here on Highway 1, and if you're driving down a freeway and there's a car crash on the side, and 100% of people look at it, is that a good description to say that's what they wanted or that's what they liked? No. So if you think about all the things that are going wrong with technology, our argument at the Center for Humane Technology is that it has nothing to do with the tech or the data or the privacy. It actually has to do with an insufficient or inadequate knowledge of human nature, inadequate descriptions of what is meaningful, what is lasting, and the difference between what is good versus what happened to work. And so I think if we treat what people do, their revealed behavior, as their revealed preferences, then we're toast. We need a new kind of language that says, look, what are our values, and are the things that we're doing lining up with our values? And we don't tend to be very articulate as a species about what our values are. If we were, then I think we would realize that there's a whole bunch of things happening, or things that we're doing, that actually don't line up with our values. And that's where the revolution has to happen: almost a Cambrian explosion of values literacy, where we get good at understanding, for example, what's different about a weekend away in the woods with your friends, where you come back and you feel like you've hit the reset button on your whole nervous system. And you're using the word phenomenology: what is the phenomenology that is accomplished or shaped by that experience that's different from spending seven hours scrolling through Reddit or Quora? They are different experiences, but we don't have good language to describe that, in the same way that we don't have good thermometers that say how much air pollution is out there, or that recognize that just because GDP is up doesn't mean people's lives are more meaningful or better. And so the breakdowns are occurring in the deficit of language that we have and the deficit of observation, of external-quadrant sensors of reality. I don't know if you're familiar with Ken Wilber's AQAL model: there's an interior subjective and an exterior objective experience, and then there's individual and there's collective. So interior subjective is like meditation. That's like noticing the wisps of air on my little mustache right beneath my nostrils. What that feels like is not measurable by a sensor. It's an internal experience. But there might be some outside correlate to that experience, like an fMRI scanner or putting a little wind sensor right underneath my nostrils. That's trying to find the binding between what's happening on the inside and what's happening on the outside. But if you look at meaning, or you look at social trust in a community, we don't have sensors lined up all around the world measuring, with a thermometer, how much social trust there is. And because we don't measure it, that's where all the harm starts to accrue, because essentially no one's looking. So anyway, these are the kinds of things that we have to fix with technologies. We're missing the subjective quadrant, and we're missing the language, especially the values language, that lets us know if we're meeting those values or not.
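One way to make Tristan's "missing thermometer" point concrete is to compare the metric platforms do optimize, total time spent, with a regret-adjusted metric in the spirit of the Time Well Spent surveys Kent mentioned earlier, where time that people later report regretting counts against the score. This is only an illustrative sketch with invented numbers; it is not an instrument either of them uses.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Session:
    minutes: float
    # Self-reported afterward: +1 for "time well spent", -1 for "I regret this",
    # None when nobody asked (the usual case).
    reflection: Optional[int] = None

def total_time_spent(sessions: List[Session]) -> float:
    """What the engagement dashboards measure: every minute counts as a win."""
    return sum(s.minutes for s in sessions)

def regret_adjusted_time(sessions: List[Session]) -> float:
    """A hypothetical 'interior quadrant' metric: minutes weighted by how people
    said they felt about them. Unmeasured sessions contribute nothing, which is
    exactly where harm hides when no one is looking."""
    return sum(s.minutes * s.reflection for s in sessions if s.reflection is not None)

sessions = [Session(120.0, -1), Session(45.0, +1), Session(90.0, None)]
print(total_time_spent(sessions))      # 255.0 (looks like success)
print(regret_adjusted_time(sessions))  # -75.0 (a very different story)
```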
[00:19:27.895] Kent Bye: What I see as a big challenge here at the Decentralized Web Camp: last year, Wendy Hanamura was referencing Lawrence Lessig's work on the four dials that you can turn to shift society in different ways. You have the culture, the actual behaviors of people, which you can shift through education and by informing people of things, though there are also things that are just emergent from what people are doing. Then there are the laws, where the government has a regulatory impact in trying to shift these collective behaviors. Then you have the economy, where you have economic competition, except that we have these companies that are essentially monopolies in their own ways, so it's hard to use the economic vector. And then you have the actual technology and the code, producing something that technologically creates different effects. The Decentralized Web Camp here is trying to bring together the people to create these decentralized networks. So it feels like each of these vectors is trying to address this issue, and to really think about it holistically, we have to address all of them. I understand there's a sort of moral and ethical imperative in terms of looking at design ethics and the cultural aspect, but where do you see the Center for Humane Technology really trying to focus? On any one of those, or all of those?
[00:20:45.340] Tristan Harris: Yeah, unfortunately, just like you said, the change has to come from all directions. Take climate change: how are we going to solve climate change? It's not individual behavior. It's not just that we all drive our cars less. It's like, oh, that's right, the top 100 companies make up 71% of emissions. It's going to take policy. What kind of policy? Oh, shoot, it's going to take international cooperation and agreements. All that's to say that we think of the problems of technology as the climate change of culture, the social climate change. We call it human downgrading: while we've been upgrading the machines, we've been downgrading the social fabric, our attention spans, our relationships, our democracy, our civility. So to solve a climate-change-scale problem, you need to do it from all angles at once. You have to do it with policy, as you said. That means certain kinds of very carefully crafted regulation that decouples the incentives. The use of policy is not to say what the design should be. It's to ask, how do we make sure that the incentives harness competitive forces so that everyone's competing in a race in the right direction, as opposed to a race to create the most harm? So that's where that comes in. Then there's raising awareness. We do that through media. You know, everyone's doing this now. It's not just us anymore, which is good. It felt lonely out there for a while. But there are people like Roger McNamee, whose book Zucked is out there now. He was Mark Zuckerberg's mentor, and he's raised awareness about essentially what's been happening. There's a film called The Great Hack that's coming out about Cambridge Analytica. People are starting to understand what automated attention-maximizing systems do to society. So that's one piece of the work: awareness raising, policy, and then internal design advocacy, which is how you actually get the designers on the inside of the companies to make different design decisions. And do we even know what those design decisions are? I mean, it's not like there's this obvious alternative thing that we all know how to do that we're just not doing. I think one of the challenges we see at the Center for Humane Technology is that it actually requires a different kind of sophistication about human nature. It requires a kind of introspective, phenomenological understanding of human nature to understand how you design for trust, for participation, for empathy, for understanding. So, for example, there's a study showing that some large percentage of people, I don't remember the number at this moment, massively overestimate the degree to which they understand sarcasm on the other side of a text message, especially in relationships, because text is pretty narrow as a medium, phenomenologically, in the Marshall McLuhan sense. We tend to overestimate how well we understand the other person's emotions. So if my romantic partner always texts me with an exclamation point and an emoji, and then one day it's just a period, or there's just no emoji or period, I suddenly think, oh, we fought this morning, maybe she doesn't feel so great about me. And I start getting into this anxiety loop. And we ask, how are we generating that problem? Well, the way to solve it isn't with more AI and better data and all this other stuff. It's actually just understanding how human minds work, such that we would generate that kind of misinterpretation.
And so a good example of a more humane approach is asking, where do we get effortless leverage off of our paleolithic instincts? So right now, I'm sitting here with you at D-Web, I'm looking in your eyes, and we're making eye contact, we're sitting here in front of each other. If I said something that annoyed you, or that you liked, I could pick it up from your micro-expressions really quickly. My brain doesn't have to do calculus on a chalkboard to try to figure out what that might have meant. My brain does that effortlessly, with System 1 intuitive reasoning. So where are our minds doing System 1 intuitive reasoning that ends up landing on more empathy, more understanding, the values-aligned stuff? And how do we pivot the way that technology works to ergonomically fit that effortless strength that we get from the evolved instincts that we had on the savannah 20,000 years ago? If you think about the counterexample to this, social validation is something that all human beings respond to as social primates, but right now those instincts are maladaptive for teenagers who are dosed with social validation on a variable reward schedule every 15 minutes, based on when they post photos of a life they're not actually living. So we took this instinct and then we abused it, as opposed to helping it. All this is to say, going back to your question since we're getting off track: there's policy, there's awareness raising, and there's design, which is what we think of as aspirational pressure. Can we actually show the world that there are different ways to design these products? And that's why we're here at D-Web: there are people here building the new infrastructure that's about helping people not make the same mistakes that we made in the past. And, you know, people here are doing that. I think Apple is in a great position to do this because, as a unique company, they're not beholden to the same attention-grabbing forces. Android is similar, although they're still owned by Google, which is an attention company. So these are the dimensions that we're working on. And, you know, this climate change of culture, this human downgrading, like climate change, can be catastrophic to how society works. But the good news is that unlike climate change, where you have to have hundreds of countries and thousands of companies start to do different things with globally coordinated agreements, in the case of Silicon Valley only about a thousand people have to change what they're doing. Like a thousand people. It's a couple hundred product managers, designers, and executives at like five companies. Maybe a dozen key regulators and policy makers in the EU, in the US, and maybe New Zealand or something. And you get the picture. There are only about a thousand people who have to change what they're doing, including a couple hundred people who are maybe building the alternative new decentralized infrastructure. And so the hope is that as big as this seems, and as hard as it seems, if we all put our hand on the steering wheel, we can make that change happen. Now, this might seem completely impossible and naive to someone listening, but I've been working on this since 2012, 2013, when I was at Google, where I worked for three years. And I went to work at a desk, at a laptop, and checked my email, and knew this problem existed. And it was my job as a design ethicist to think about how we change this.
And nothing changed while I was at Google. Nothing changed for about a year after I left. And I tried everything. I gave TED Talks. I gave talks inside the company. I tried to ask Android, can we make these five, six, seven changes? I talked to academics. I talked to celebrities like Arianna Huffington, who thought, maybe we can make this happen. Nothing happened for three years. And then look at everything that's changed in the last two years: going from no change happening to suddenly, in this last year, as an example, through this Time Well Spent movement we started, Apple, Google, Facebook, and Instagram have all launched baby steps of time well spent features that are shipping on over a billion phones. And that's just through public pressure. That's the reason why everyone now has the chart and graph of where your time goes. So I don't count that as the kind of success we need, because the change that we're really demanding is much, much deeper and bigger than that. But I will say that if I had tried to anticipate how much change we would make, given all the history of lack of change, I would never have hoped for this kind of success.
[00:27:26.970] Kent Bye: Well, it is quite amazing what you've been able to achieve, to see the impact and to be at the Google keynotes and hear these different features from Time Well Spent being mentioned as features that people wanted and asked for, and that I think are used quite a bit to help people manage their own addictions.
[00:27:43.222] Tristan Harris: And it wouldn't have happened without the cultural awareness to drive Google to do that, right? It wasn't just a matter of having the internal advocates there. We needed both the external and the internal pressure.
[00:27:53.697] Kent Bye: But I think the challenge that I see in all of this is the elephant in the room, which is the economic business model of surveillance capitalism. You know, at the VR Privacy Summit, we all got to the point of, okay, we can architect this perfect system that would protect privacy, but it's in contradiction to the underlying economic models of surveillance capitalism. We're here at the Decentralized Web Camp, where I would love for people to come up and say, here's a completely alternative economic model and system that is scalable and could be swapped in for the existing model. But, you know, in some ways Kevin Kelly argues that once you have access to certain technological features, you can never really take them away; they're always there. And we've got this entire civilization that's built on the use of these technologies, so it's not like we can unplug it and revert back to five years ago. It feels like we have to go from where we're at. But the thing that I'm struggling with is the consolidation of these companies and surveillance capitalism, which is fundamentally, potentially unethical in a Kantian sense of using our data against our own will. But it's done with these adhesion contracts and these terms of service that create a utilitarian exchange, where they're giving us services in exchange for that data. So they're violating our autonomy and our data sovereignty in exchange for the service they're providing, and yet that's the legal framework that is justifying this otherwise rather unethical surveillance capitalism.
[00:29:17.048] Tristan Harris: So you just said the key word there, which is this idea that they've violated a contract. I don't know if you used that word specifically, but that's kind of what you were hinting at. The thing that has to change, and I don't think this happened intentionally, is that Silicon Valley has been masquerading as being in a contractual or equal relationship, a peer-to-peer relationship, with you. You hit the Terms of Service button, you hit OK, you sign on the dotted line, we're in this equal relationship. I say this because a magician also tries to show you that you're in an equal relationship. I'm just a person, you're just a person, this is a deck of cards, which card would you like to pick? They pretend to have an equal relationship while hiding an invisible asymmetry of power. The magician knows a lot more about how to get you to pick the card that you're about to pick than you do. In fact, you don't know the 20 steps that led to you arriving at this moment, where they've actually already talked to the person sitting behind you right now who's going to do something that you don't know about. So, the thing with technology, and to answer your question about the business model, is that we actually got here by accident. The big original sin of the internet was the advertising, or let's be more specific, the engagement-based business model, which directly couples profit with the subversion of human autonomy: gathering more data to build a better and more accurate voodoo doll of you, which is then used to subvert your autonomy to get you to do what I want you to do. We actually have a name for these kinds of asymmetric relationships, and it's the fiduciary relationship. Think about the level of asymmetry between what a psychotherapist knows about psychology and the degree of sensitive information you are sharing with that psychotherapist. We don't let psychotherapists have romantic relationships with their clients for a reason: there's a degree of compromising entanglement, because the client has shared vulnerable information and is also in this new power dynamic with the psychotherapist, where that asymmetry of power has to be regulated, not as contract law. It's actually illegal to say, well, we're just in this equal relationship, because we're not. So do a side-by-side comparison of all the places where we already invoke fiduciary law, doctors and their patients, lawyers and their clients, psychotherapists and their clients, and, taking a protractor, measure the degree of compromising asymmetry that exists in each of those relationships. Now look at Facebook and Google and ask, is the asymmetry greater or smaller than those other ones? It's exponentially greater. So this is basically the mistake we made. And now let's make it worse. Not only did we make the mistake of treating this as contract rather than fiduciary, but imagine a world where all doctors', all lawyers', and all psychotherapists' entire business model was to sell access to the confession booth, to the legal case. They only made money by manipulating you as much as possible, by selling access to someone else. We would never allow that business model. We could say we can have psychotherapists, but we'd always want the client to pay, or some third party like healthcare to pay.
You can't have a world where that level of asymmetry comes with an extractive business model in which the more you use it and the more data is accumulated, the more I can manipulate you to get you to do what I want you to do. So this is really important, because it says that if we actually move to this sort of fiduciary relationship, it kills the business model. You can't have that business model if it's a fiduciary relationship. And that's one of the things that has to happen.
[00:32:34.193] Kent Bye: We've been covering a lot of the systemic issues. I want to focus on the morality and then ask a few more questions to wrap up. But in terms of the pragmatic things that experiential designers can do, it sounds like, as a design ethicist and through the Center for Humane Technology, you're trying to come up with these broader moral principles or virtues to design for. And when I look at things like artificial intelligence and virtual reality, there's this shift of context, where these technologies are so pervasive that they're blurring the lines between all these different contexts. For example, biometric data used to live in a medical context. Now it's going to be available to anybody in a variety of different contexts that, you know, may be used for surveillance capitalism. And so we have these different ethical transgressions that have already been happening, and even more on the horizon. So how do you make sense of the human experience in all these different contexts? And is there a framework for designers to navigate the variety of moral dilemmas that we're facing and all these transgressions that we've had? Does the Center for Humane Technology have an ethical framework to help navigate these moral issues?
[00:33:45.021] Tristan Harris: Yeah, it's tricky. We'd love to provide many more frameworks than we have, but we just haven't put them together into materials. I mean, the other issue here is that ethical technology, or humane technology and ethical design, is not just a matter of a PDF where you can do some check boxes and be like, oh, I hit the checklist, I'm ethical now. Ironically, we do have a PDF on our website, our humane design guide, that you can look at to do an assessment. The point is that ethics shouldn't be an afterthought. It shouldn't be something where you do a checklist at the end. It has to be built in. And I think, like you said, we're confronted with a whole new slew of ethical issues because there's been a breakdown of context. As you said, biometric data used to only be available in certain medical contexts. Now it's available in every context, because technology does that. But we do have a framework to answer the question you asked me, which goes straight to Marc Andreessen, the co-founder of Netscape, when he said in 2011 that software is eating the world. He left out the second part. First of all, why did he say that? Because take any domain of life: medicine, doctors, taxi cabs, media, advertising. The version of that field or that domain of life run by technology will always outcompete the version that's not run by technology. It's going to be faster, more efficient, more profitable. So software will eat every other part of society. Now, what that means is that software is also eating up the protections that we had on each of those parts of society. So imagine the moment when software eats Saturday morning cartoons. That used to be a protected area of the attention economy, governed by certain protections and rules. For example, you're not allowed to push URLs at kids in TV ads. You have to watch out for certain kinds of advertising. There has to be regularly scheduled programming. There were things called stopping cues, where the television had to go dark, just a black screen, for a few seconds to give the kids a chance to wake up and ask, what do I want to do next? Do I want to take a break, get a snack, go talk to mom? And when technology, or YouTube, gobbles up Saturday morning, does it care about or look at the protections that used to exist in that part of society? No. The same thing is true of election advertising. We used to have FEC, Federal Election Commission, rules saying that it should cost the same amount for Hillary Clinton and Donald Trump to reach a voter at 7 p.m. on a Tuesday on this TV channel. Why in the world would you make it unequal? It has to be fair. So we regulated that. That was a protection. And then when Facebook gobbles up that portion of the attention economy, or of advertising, it takes away those protections, because now it's run by an auction, run by a private company with their own private incentives, and it's totally not equal. In fact, there was a 20 to 1 difference in the price of election ads. But the reason I'm invoking this metaphor is that one tool we can use to figure out the right thing to do is to ask, for each area of society that we used to govern, whether it's biometric data or elections or Saturday morning cartoons, what protections did we have in place before software ate that part of the world?
And how do we retrieve, in a Marshall McLuhan sense, those protections, bring them back, and ask which principles we might reapply and need to re-instantiate? So that's one class of regulatory or policy or even just design thinking that has to get retrieved and brought back from the past. And the second class is a whole new set of ethical issues about what happens when we combine these five or six data sets we never had before and can predict something new about a person that the medical literature didn't even have a say on, because we couldn't predict that before. That's a whole new set of issues. So our problem statement at the Center for Humane Technology for the whole issue is that when you zoom out, we just go to E.O. Wilson, the Harvard sociobiologist. He said that the real problem of humanity is that we have Paleolithic emotions, meaning ancient Stone Age emotions, medieval institutions, and godlike technology. The important thing to see about these three different domains is that our Paleolithic emotions and instincts are baked. It's like hardware that's just not changing. Your social validation system and nervous system are just baked. Our medieval institutions update about every four years, and they're certainly not keeping pace with our accelerating, exponentiated, godlike technology. If your steering wheel lags four years behind the accelerator, what happens? It's a self-terminating system. So our mission is to fit and embrace our Paleolithic emotions, since they're not changing, and go from them being maladaptive to our good sense-making and choice-making to being adaptive to good sense-making and choice-making. And then upgrading our medieval institutions to run at a faster clock rate, to keep in touch with modern realities and complexity, which means a whole bunch of things have to happen there. And then slowing down and guiding, with more wisdom, the godlike technology. We have to have the wisdom to wield godlike technology. Another phrase, from Barbara Marx Hubbard, is that we have the power of gods without the wisdom, love, and prudence of gods. So that's what has to happen. This isn't an opinion; this just has to happen, because otherwise we're driving ourselves off a cliff. So hopefully that's a good articulation of at least what has to change, and maybe some more clarity on what we're doing in the process.
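Tristan's earlier example of election ads, going from a flat, equal rate to an engagement-weighted auction with a 20-to-1 price difference, can be made concrete with a toy model. The sketch below is a generic quality-weighted second-price auction, the general mechanism many ad platforms describe publicly, not Facebook's actual pricing system, and every number in it is invented: because ranking multiplies bid by predicted engagement, the more provocative campaign reaches the same audience at a fraction of the price.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Ad:
    advertiser: str
    bid_per_impression: float  # what the campaign is willing to pay
    engagement_rate: float     # the platform's prediction of clicks/reactions

def run_auction(ads: List[Ad]) -> Tuple[str, float]:
    """Quality-weighted second-price auction: rank by bid * predicted engagement;
    the winner pays just enough to beat the runner-up's rank score."""
    ranked = sorted(ads, key=lambda a: a.bid_per_impression * a.engagement_rate,
                    reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price = (runner_up.bid_per_impression * runner_up.engagement_rate
             ) / winner.engagement_rate
    return winner.advertiser, round(price, 4)

# Two campaigns bid the same amount, but one runs far more provocative,
# engagement-bait creative (numbers are entirely made up).
calm = Ad("Campaign A", bid_per_impression=0.10, engagement_rate=0.01)
outrage = Ad("Campaign B", bid_per_impression=0.10, engagement_rate=0.10)
print(run_auction([calm, outrage]))  # ('Campaign B', 0.01)
# Campaign B pays $0.01 per impression; for Campaign A to win the same slot it
# would have to bid over $1.00 per impression, a 100x gap in effective price.
# The old flat "equal price per voter" protection simply has no place to live here.
```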
[00:38:59.968] Kent Bye: So for you, what are some of the biggest open questions you're trying to answer or open problems you're trying to solve?
[00:39:08.165] Tristan Harris: That's a great question. The biggest question I'm trying to answer is how to scale our work. You know, for anyone listening, this is a problem that literally requires pressure and attention and interest and help from all sides. And we're very humble in being a small organization and a small set of people in San Francisco, many of whom didn't build these specific systems, but we come from the tech industry. It's a social movement, so it's going to take energy, interest, pressure, raising awareness, and policy, coming from all sides, from all languages, from all countries. These are global issues. We didn't even talk about China and where that's going. So the thing I struggle with and think about a lot is, how do we scale the ability and capacity for lots of people to plug into this work? Because there actually is a plan. We just don't have good ways of scaling it so that lots of people can participate. And I struggle with that. It's very hard. People come to me at D-Web right here and say, I get it. What can I do to help? And I have to direct them to our head of mobilization, David Jay. And you should check out our Get Involved page on the Humane Technology website. So that's the thing I struggle with. Our closest answer to how to at least get people deeper into this conversation is that we have our own podcast called Your Undivided Attention. We have no interest in getting people's attention; we are just trying to walk through a problem framework and then a solutions framework together with more people, because that's the only way to scale a lot of people thinking about and working on this the way that we would like them to. But that's the big question I have. And when you ask what big question I have, I think the fundamental question at the root of Shoshana Zuboff's Surveillance Capitalism book and the issues that we're seeing is: what is an ethical form of asymmetric influence that uses our data, uses our voodoo doll, and has that level of compromising power over all of us? We've already built the dangerous runaway machine. How do you make sure that it is always acting in a way that is subservient to the social fabric that actually makes that supercomputer possible? And I think this is actually very relevant to the metaphor of capitalism. We have a naturally regenerating complex adaptive system called the environment. And that physical environment, the physics of these trees and this water and this air and the oceans, is underneath the economy. But the economy has started to manage the resources of the physical environment without respecting or trying to protect the underlying environment. And I think we have to develop almost a new kind of computational law, or design principles, that ensure that new systems that sit on top of other systems protect, respect, and never destroy the system underneath them. So think about Facebook virtualizing your real relationships to create virtual relationships that you're managing. If it takes your real relationships, in the software-eating-the-world sense, and puts them into this virtual space, it should also then be a steward of those real relationships. It should be trying to protect the social fabric that it was built on top of, the real friendships from which it's building the virtual relationships.
So I say that in a semi-authoritative way, but really it's a question for me: how do you actually create asymmetrically powerful systems that are subservient to a species that frankly sees less of the world, and can compute less of the world, than the asymmetric tech? And I think Shoshana's answer in Surveillance Capitalism is that we shouldn't do it at all. We shouldn't monetize this data. We shouldn't monetize predictions. Zero. Period. And I totally sympathize with that perspective. And also, I think from a game-theoretic perspective, the countries that don't worry about that and use this to make predictions about cancer or war or autonomous weapons or whatever will outcompete the countries that limit the predictions they make in their tech stack. So how do we have a tech stack that makes big, powerful, asymmetric predictions about people, the future, et cetera, that can predict our pancreatic cancer six months in advance from Google search queries, that can do flu trends and things like that? What do we do when we can predict things, and how do you ethically hold that power? I think that's the question. Because that's a question about a new species of power we have never had before. And no one, not Kant, not Jeremy Bentham, not Marshall McLuhan, has good answers to this new question that we're facing.
[00:43:34.861] Kent Bye: And finally, what do you think the ultimate potential of these technologies is, decentralized technologies or any technologies, and what might they be able to enable?
[00:43:46.209] Tristan Harris: My co-founder, Aza Raskin, has this line that we've been trying to use technology to make us superhuman, but maybe that's the wrong goal. Can we use technology to help us be extra human? Which is to say, how do we help humans do effortlessly what we naturally do best? If you think about communication or empathy, face-to-face interaction is pretty good. We're so good at so many things that, instead of trying to replace it or synthesize it or manufacture a virtual version, can we at least balance the portfolio of virtualized experiences with protecting the real-life human things that we're naturally really good at? And I know that with your audience, and specifically with virtual reality, this is more confounded than in other circumstances, because virtual reality starts to approximate the things that are more uniquely extra-human. So I think this is a question that we have to answer, but I do think that the answer is not to upload ourselves to the cloud or to live our lives inside of virtual environments, because that's sort of like living our lives in an economic logic instead of the natural logic. An economic logic would say that person's life is only worth blah, because it's priced at the market at blah, so they're worth less than this other person. And that's already distorted our way of seeing and valuing things in the world. I think the economy was a kind of technology that we invented to manage resources, but we let it hijack our brains and our values so that we see the world through an economic logic, as opposed to using the economy as a tool to accomplish certain values. It ignores the intrinsic values of nature, our relationships, our people, which have a dignity in themselves that's granted from some other independent source. So I think the question is what makes us extra human, how do we respect the more mysterious aspects of whatever life and consciousness are, and how do we be careful about trying to over-virtualize or synthesize it.
[00:45:44.206] Kent Bye: Do you have anything else that's left unsaid that you'd like to say to the decentralized community?
[00:45:49.232] Tristan Harris: I think this is a great opportunity to talk about a lot of things I don't normally get to talk about. So thank you.
[00:45:53.353] Kent Bye: Awesome. Great. Well, thank you so much. Thank you. So that was Tristan Harris. He's the founder of Time Well Spent, a former Google design ethicist, and the co-founder of the Center for Humane Technology. So I have a number of different takeaways from this interview. First of all, it was really striking to hear Tristan argue that contract law actually isn't robust enough to describe the type of asymmetry of power that exists between us and these huge technology companies who have all this information about us. They're trying to relate to us as if we have a contractual agreement where we're on equal footing, but it's actually more of a fiduciary relationship, where they have an extraordinary amount of information about us. Tristan said it's kind of like if you went to a doctor and the doctor was gathering all this biometric data on you to then resell to other people, giving them more information about you so that they could try to manipulate you into making different decisions. With a fiduciary relationship, they're supposed to really look out for your best interest, but in this adhesion contract situation we do have this huge amount of asymmetry of power. And Tristan gave this quote from E.O. Wilson that says we have these paleolithic emotions and instincts, these medieval institutions, and these godlike technologies. We need to start to realize that our instincts and our neurological responses are baked into the wetware of our bodies: we have a lot of existing social validation systems, a lot of things that take a really, really long time to shift and change. And so a lot of these systems are being hijacked and used against us in different ways. We have all these needs for social validation, and our nervous systems are being gamified to persuade us to do things that we may not be on equal enough footing to realize we're fully consenting to. He mentioned Yuval Harari, the author of Sapiens, who talks about how there's been this breakdown of the moral universe: from the perspective of neoliberalism and the Enlightenment, human feelings and human choice were put at the center of the moral universe, the voter knows best, the customer's always right, and whatever behaviors we end up having within these technological environments are our revealed preferences, as if we always kind of wanted to do that all along. And so we have these paleolithic bodies and emotions and instincts, and then these medieval institutions that lag anywhere from five to ten years behind where the current technology is at. So how do we expect legislators to really understand the complexity of these issues and to create law that is actually going to serve the public interest, rather than bad law that's going to entrench the monopoly power that's already there? Tristan is saying that there really need to be policies that encourage diversity and competition within the marketplace. And then finally, there are the godlike technologies, which we need to slow down while we try to cultivate a little bit more wisdom for how we're using them. He gave a quote from Barbara Marx Hubbard saying that we have the power of gods without the prudence, love, and wisdom of gods.
He also mentioned Shoshana Zuboff's book, The Age of Surveillance Capitalism, and said that her conclusion was that there may not be a truly ethical way of having this asymmetrical power relationship, and so maybe the whole venture of surveillance capitalism is just a business model that has been convenient for a lot of people but comes with a lot of different problems. The big thing I had taken away from Time Well Spent was that they had done surveys on all the major applications that were out there, and they found that with some of the most popular applications, people were dissatisfied with their phenomenological experience of the app about two-thirds of the time. For the other third of the time they got a lot of utility out of it, but they were often left feeling manipulated, controlled, or used by a lot of these applications. So there are a lot of aspects of our human nature that have been invisible to the AI, and I think in some ways there may be a tendency to want to close that gap by gathering even more of our biometric data to get a better sense of what's actually happening in our bodies. But I think there are just so many ethical issues with going down that route of opening up your whole emotional profile, all of your facial expressions, all of what you're looking at. They're trying to reverse engineer your psyche, and I think biometric data is going to be the goldmine that helps them do that even more. So I think right now one of the biggest challenges is for people to just become aware of the risks of biometric data as it's plugged into this existing surveillance capitalism machine.

Tristan also listed a whole range of harms that are being done to society. He said that there's a reduction of attention spans, that people are distracted, that we feel like we have information overload, that there's polarization and filter bubbles with people really hating each other, a breakdown of trust within our society, a lot of narcissism and influencer culture with systems that reinforce grabbing and quantifying attention from others, which is having a huge impact on teenage mental health and social isolation, and then also deep fakes that can mask and mimic our identities, and a lowering of intimacy. So obviously there's an interesting trade-off here between human autonomy, with the choices we make and the inner mindful contemplative practices that let us intervene and stop, and being subjected to these systems that hijack our nervous systems to maximize attention, engagement, and profit, with all of these other unintended consequences. This is what he's saying: there are all these increasing social impacts, and I think a lot of people right now are pointing the finger at technology.
If you're listening to this podcast, you likely play some role in the technology industry, and I think there's a bit of taking a step back and reflecting upon where we're at as a culture and what we can do to help mitigate some of these fires that are happening right now, especially with all these elections coming up: how do we prevent another layer of information warfare from happening? There's also the power that comes with these centralized systems. A lot of what I've been looking at lately is the future architectures of the decentralized web, and what latest innovations out there may lead to a completely different technological alternative to these economies of scale and centralized systems that are bringing about a lot of these problems. So I do think it's going to need a holistic approach. There's the technological solution, so maybe decentralized alternatives are going to become a little more viable. We need some new business models, because no one has really figured out an economic model beyond surveillance capitalism; maybe subscription models will be part of it, and it's an open question how to actually scale those up. Then you have the culture: making people aware of what consumer choices they can make, both from an economic perspective and in terms of understanding what kind of risks are involved in each person's relationship with technology. And then finally, what kind of policy needs to be in place to encourage more competition, do antitrust, or address the privacy implications of a lot of this?

The FTC gave Facebook a $5 billion fine, but there are a lot of other privacy violations, and it still makes sense for them to continue on with their normal business model. And I think there's a part of Facebook that wants the government to come in and legitimize their surveillance capitalism so they can continue to do it and feel like the blame isn't being put squarely on them, so they can offload some of the pressure they're feeling from the public onto the government, because they want the government to come in and kind of fix it. But I think it's a situation that needs all of these different aspects: an economic solution, a technological solution, something from the government in terms of policy, maybe some sort of fiduciary relationship and a little more of a judgment call on the ethics of surveillance capitalism, and then finally bringing about more cultural awareness. That's what Tristan Harris is doing with the Center for Humane Technology. It's also what I'm trying to do here at the Voices of VR podcast, and I'm going to be airing a number of different podcasts that explore different ethical issues. Next week on Monday at SIGGRAPH, I'm going to be talking about the moral dilemmas of mixed reality in a panel discussion with Mozilla, Magic Leap, 6D AI, and Fin Agency, looking at some of the technological architectures that could help architect for privacy within mixed reality. So overall, I think there are probably more open questions than answers at this point for how to best address a lot of these issues.
I think Tristan Harris is kind of stepping into a lot of different fires that are happening all at once all over the world, and he's doing the best that he can to help tell the larger story of these persuasive technologies and to weave it together with some really clear metaphors. His work has had a huge impact on me, and I really appreciated the opportunity to sit down with him directly and talk about some of these things. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.