#931: IEEE’s Ethically-Aligned Design on Autonomous & Intelligent Systems & Extended Reality

On March 25, 2019, the IEEE Standards Association published Ethically-Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (PDF), covering the ethical dimensions of artificial intelligence embedded into different systems. On May 19, 2020, a chapter called “Extended Reality in Autonomous and Intelligent Systems” was added to “Ethically-Aligned Design,” covering the ethical implications of the intersection between immersive technologies, virtual worlds, and artificial intelligence.


I had a chance to talk about Ethically-Aligned Design with John Havens, who is the Director of Emerging Technology & Strategic Development at the IEEE Standards Association, as well as the Executive Director of the IEEE Global Initiative on Ethics of Autonomous & Intelligent Systems and the Council on Extended Intelligence. We talked about the history and background of this initiative, and how virtual reality, augmented reality, and extended reality became a part of this project. We also talked about the standards and working group processes of the IEEE, and some of the standards that have come out of this project, including P7012 – Standard for Machine Readable Personal Privacy Terms, as well as the importance of data sovereignty.


Ubuntu Ethics for AI by Sabelo Mhlambi

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So I'm going to be continuing on in my series of looking at different ethical issues within XR, extended reality. In today's episode, I'm going to be actually looking at this cross-section between artificial intelligence and immersive technologies. Actually, what had happened was that there's this whole initiative for the IEEE, that's the Institute of Electrical and Electronics Engineers. They have all sorts of societies. They have the IEEE VR conference that has all the academics come together, but they also have all these different standards and working groups covering all sorts of different issues. So ethics in artificial intelligence was something where they actually used this whole industry connections committee to be able to do this. It's not necessarily like a standard, but it's more of a white paper that they put forward. And so they started this back in like 2016, where they created this document called Ethically Aligned Design, where they were specifically looking at artificial intelligence and the different ethical implications of that. And then they went through many different iterations, lots of people working on this over many different years. And then they released the first edition in 2019. And then actually on May 19th, 2020, they released an additional chapter on extended reality and autonomous and intelligent systems. So in other words, AI that's embedded within these larger systems, that you can't just look at AI in isolation of the larger systems that they're within. So apparently, extended reality, XR, virtual reality, augmented reality, was brought up to this group as an issue where there are going to be all these different interfaces between these immersive realities, these virtual worlds, and these different ethical issues that had already been covered in this existing book. 
So on today's episode, I'm going to be talking to the Director of Emerging Technology and Strategic Development at the IEEE Standards Association. That's John C. Havens, and he's going to be talking about the history and evolution of this document, and also specifically this overlap between AI, intelligent systems, privacy, and all sorts of issues that cut across these two communities, as well as how virtual reality and augmented reality came to be on their radar to be included within this ethically aligned design book that they put together. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with John happened on Monday, June 29th, 2020. So with that, let's go ahead and dive right in.

[00:02:37.397] John Havens: Sure. Well, first of all, pleasure and honor to speak to you. You don't get the chance to do interviews with people who've recorded, what, a thousand episodes. That's really impressive. So really, thank you for having me on the show. So my name is John Havens. I'm Director of Emerging Technology and Strategic Development at the IEEE Standards Association. I should say everything I say today, however, I'll be speaking as John. So it doesn't necessarily mean everything I say is formally endorsed by IEEE and all that. The work I do is I lead a large program at IEEE, in fact the largest, focused on artificial intelligence and ethics. And then I can also tell you more about my history that is more specific to augmented and virtual reality if you want to talk about it later.

[00:03:23.227] Kent Bye: Great. So the reason why this came up on my radar is that one of the co-authors of one of the chapters of this ethically aligned design document that you've put out reached out to me about version two, which is open for public discussion. So maybe you could give me a little bit more context as to what is this IEEE Ethically Aligned Design and this document that you put out around ethics.

[00:03:44.540] John Havens: Sure, and thank you for asking. And I think that was probably Mathana Stender, a really critically important committee member. Well, they're all really important for the extended reality chapter. But Ethically Aligned Design is a paper; the first version of it came out in 2016 and had a number of AI ethical principles. What we're really excited about is that when we launched it, we launched it as a Creative Commons document, and we also launched it as a request for input, which, probably just through good fortune, was one of the smartest things we did, because we got, for that version and then the version we relaunched in 2017, over 500 pages of feedback. Then the most recent version, which isn't a request for input anymore, so it's more of a final draft, we call it the first edition, came out in 2019. The actual book is now over 250 pages, and with those three editions, over 700 people worked on it over the course of three years. And as I mentioned, we got all that feedback. We also had an editing process that was sort of a crowdsourcing, consensus-building process, where about 900 people saw the final version of the documents before they went live and provided lots of feedback. So I'm really proud of it. I mean, I'm mainly just proud of the volunteers who worked on it. The goal of the book goes beyond the general principles, which are critically important, and they came via consensus of all the people who were involved. And I can tell you more about those, especially the first three general principles of human rights, data agency or data sovereignty, and well-being economic metrics. The thing about the book, which we're really also excited about, is we designed it to be an ongoing, kind of evergreen tool, where each chapter opens up, like the personal data chapter, the law chapter, the sustainability chapter. 
It opens up with a one-page abstract, sort of the point of the chapter, also giving an angle as to how it fits the entirety and the vision of the Ethically Aligned Design book. But then each chapter lists oftentimes a dozen or more specific issues where the experts, usually about 12 to 15 experts in those different areas, said, well, these are top issues that we are dealing with right now. We wanted to make it really relevant. And then they provide recommendations and resources about those issues. So I bring all this up to say that while there are so many amazing sets of AI principles out there, we really designed this to be very different and much more robust than just a sort of one chapter about principles. And then since the second version especially came out, the draft, that version was used by the OECD to create their AI principles, and by the Future of Life Institute for their principles. IBM used it. People keep coming back to us and telling us that it's really kind of a seminal instrument and very unique in the space. And then finally, I'll say it inspired about 14 standards working groups in the IEEE Standards Association, where really the core of our work is to prioritize what we call ethically aligned design, but what that really means is values-driven design methodologies at the front end of the design process. So rather than moving fast and breaking things, the logic is you examine end-user values in alignment with those principles I mentioned before. And then finally, what I'll say is, for the extended reality chapter, there were still some updates being made to it, and so we launched that actually just about a month ago. So it's still very much a part of Ethically Aligned Design. It's just that the actual chapter came out just a few weeks ago.

[00:07:21.032] Kent Bye: Right. So I think that's why it came up on my radar, just because Manantha had reached out to me and pointed out, this sounds like it's been a long standing effort to be able to look at emerging technologies, artificial intelligence, and look at how what I've seen with all emerging technologies is that it starts to blur the line of existing contexts and require a bit of a step back in trying to establish new normative standards in terms of what is the threshold for what's okay and what's not okay. And because technology tends to blur all those contexts, then we need designers to be able to have some reference to be able to navigate the variety of different trade-offs that they have to make. It sounds like with artificial intelligence, it's also been blurring a lot of those different contexts, which I see that a lot of this document is talking about AI. But I'm happy to see that there's more chapters on mixed reality here, because I do think that there's going to be a lot of this that is going to be fusing together and integration between AI and virtual reality. Maybe you could just contextualize, you know, how mixed reality and virtual reality came into this larger context of artificial intelligence and automation and intelligent systems is the way it's referred to here. But maybe you could just set that context for how this chapter on XR got added to this ethically aligned design document.

[00:08:38.215] John Havens: Sure. Well, look, by the way, your video and your work on the ethics of this area is just fantastic. So I haven't watched all of them, but you've done that really robust 10-domains aspect of the ethics of XR and stuff, which is fantastic. And like you pointed out, algorithms are involved, right? And now we also use the term artificial intelligence systems a lot, in the sense that there's really not going to be a lot that is not touched in some way by some form of simple machine learning, or certainly more advanced intelligent learning. And so in one sense, saying just the phrase artificial intelligence in isolation, without context, is kind of like saying the internet now. It's so broad, you have to kind of get specific and say, what does one mean? And I don't mean you, the interviewer, right now; I mean just for the work. And in terms of how data is accessed, and you talk a lot about this, which is fantastic, I'm shocked, frankly, how many AI policy meetings I'll go to where people won't mention augmented or virtual reality, or use the phrase extended reality, or certainly the spatial web. And this is not to be negative. It's more that I think there's still this maybe misunderstanding, and by ignorance I do not mean stupidity by any means, just more of a sense that policymakers and people have a lot on their plates, right? So artificial intelligence and the technologies surrounding it, everything from IoT to 5G and all this stuff, there's a lot to think about. But I keep urging my colleagues, mainly because I've been an augmented reality geek since like 2011, or even earlier, to say, look, people want to say, well, augmented reality, it's the year of AR, which people have been saying for like 10 years in the AR space. And I'm only laughing because I get it, right? You know, at least for me, I can't wait for specifics of AR or VR to become more prominent. 
But if you take into account things like massive multiplayer games, certainly immersive games, certainly if you think about places like Japan or what have you, the idea of these immersive realities is not the future. It's very much the present. In fact, even the past. And their deep relationship to AI, again, goes back to how our data is accessed. We are very big proponents. As I mentioned, it's one of the top three principles. When we say data agency, we really mean data sovereignty and the logic of giving agency to the humans whose data is being tracked to use for any of these systems. For lack of a better metaphor, AI is really the sort of pipes powering everything soon, certainly in the digital and spatial realms, the immersive realms. So we have to get data sovereignty right fast, and by right I mean giving, and it's very aspirational, every person on the planet a personal data store that's somehow attached to a trusted identity, government or what have you, which again is incredibly challenging when you start talking about immigrants and all that, but there's Estonia... Anyway, all of this to say, the deep connection is that the AI is going to be like air, right? It's everywhere. So we have to have personal sovereignty so that when people go into these immersive realms, they have a sense of who they are, their agency, and also certainly that they have all the rights attributed to them and their data.

[00:11:55.325] Kent Bye: So as I'm reading through this document, it's from the IEEE, the Institute of Electrical and Electronics Engineers. Actually, my background's in electrical engineering, and so I've been aware of the IEEE since I got my degree back in 1998. But when I first got into virtual reality, there was this split between academia and industry, where, just from my perspective as a VR enthusiast at the time, I saw that the IEEE VR conference was happening, but yet there was no journalism that was happening there. And there continues to be this split between academia and the industry on a lot of these different issues. And as I read through this document, one thought that comes to mind is that we still have a little bit of this fragmentation between what's happening in academia and what's actually happening day to day in a lot of these big companies and industries. And so I guess a lot of these ethical design principles are great in theory, but what is happening in terms of actually putting these into practice at some of the largest institutions, where a lot of these design decisions would be actually making a difference to billions of people?

[00:13:00.189] John Havens: It's an excellent question, and I can't speak to all of it. IEEE is a really big organization, and I joined in 2015 as a consultant, and then I joined staff in January, and I love my job, frankly. But it's got multiple operating units, and then dozens and dozens of societies, and then also even more publications. I only say that in the sense that it's so broad, you know, it has membership in 160 countries, like half a million members, that what's exciting, but also sometimes just extensive, is how many places AR or VR have been featured in conferences or in publications for however many years with IEEE, just because people will publish from around the world. So all that to say, it's great that, you know, people like Ori Inbar, you know, we've been active at the AWE conference. I have not been yet, which I'm embarrassed to say because I have deep respect for Ori and his work. But anyway, IEEE has been supportive of the Augmented World Expo for a long time. And then there are so many great people, you'd recognize all their names, who've written for IEEE about AR and the industry. But to your point, and this is where it's somewhat guesswork, and this is also, again, why I'm really excited that you're into the ethics side, because of your, I think, closer ties on a regular basis to corporate organizations. First of all, at least I can speak from the stuff that I'm deeply involved with in terms of artificial intelligence systems and ethics. In the past three or four years, even though AI ethics has been, let's call it a trend, a lot of times, in my experience, what happens is the word ethics is often synonymous with compliance. So, understandably, for companies that use AI for their products or services, it's not like it's a fun topic. It's not like, yay, let's talk about ethics and compliance. Or, and here's the thing, I used to work at a PR firm. 
I was EVP of a top-10 PR firm in New York, where ethics oftentimes, understandably, is synonymous with morality, and then companies get worried. And I get it; like I said, I understand this from a PR standpoint. Like, are you talking about ethics because you're saying that we are doing something wrong, right? But when you use methodologies like values-driven design, or value-sensitive design, which was created by this amazing woman named Batya Friedman, and we've worked with a woman named Sarah Spiekermann, who's a global thought leader in the space, or responsible research and innovation, which started in the UK, all these different methodologies are simply ways of saying: if we take more time as designers to really examine, not pander to, but to understand the end-user values of who we're building for in a much more robust way, honoring data, but also things like cross-pollinating, and helping, complementing, meaning bringing into the fold, along with engineers and data scientists, critical multidisciplinary experts like anthropologists, sociologists, therapists. There's just so much that we don't understand, and by this I mean society, about things like tools that speak in our presence, literally speak around us and our kids. And to the thing where people say, well, AI is just the latest technology, technology has been happening for 100 years, my answer is, fair enough, but those technologies have not been taking that data and sending it up to the cloud, and then analyzing it against a lot of other data about an individual and then bringing it back into their realms. And, you know, I hope you aren't hearing, because it's not intended, that there's anything automatically negative about all this. 
It's that if things are, again, move fast and break things, and just sort of shoved out there in this sort of mindset of, well, you know, it's innovation. You know, it means, in one sense, and I know you talked about this in your work, so we can touch on it later if you want, but if innovation is always framed in exponential-growth, financial-only terms, right? Meaning at some point someone's going to say, well, how much money did this make, and how fast, and how much? Where making money is great. We all need to make money, profit's important, et cetera. But the point is, if the priority is not just making profits, but exponential profits and speed and exponential growth, that means the priorities are not necessarily on taking more time before those products and services are released to really ask things like, again, what does it mean when devices that are powered with these amazing algorithms are also outfitted with natural language processing technologies or affective computing, emotion tracking, et cetera, and then these tools are placed around kids or adults? Where you have these applied ethics methodologies, there are two things, and this is also now going back to your question about companies, and this is what we say all the time. Yes, there is the concern about risk and all the things about, let's see what the unintended consequences would be, because you just don't know. If you don't have, say, therapists or psychologists on a staff that's creating a technology like an app designed to provide mental health advice, it literally is dangerous to not have an expert on your team who can speak to that with their expertise. And it takes pressure off of the engineer, the data scientist, because how would they know? That's not their background. 
But then we also remind people that, look, along with the risk, what you're also doing by asking questions about the end user is automatic R&D; it's scenario planning; it's innovation. You're going to find out things that are not just how to avoid harm, but how to improve trust, attributes of what you're building, product design for the end user. And so that's where we're starting to see, in multiple different arenas over the past number of years, this mentality. It's still taking a while, and certainly it's still mixed, as it were, with people who are focusing more on compliance, and PR stuff happens with ethics boards and all that. So it's all kind of happening at the same time. But especially now with COVID, and risk assessment frameworks like with banks, there is this real understanding that the process of pragmatic applied ethics, understanding end-user values, is not just about, let's do the right or good thing. It's actually the only thing if you want to have responsible and innovative design in the algorithmic era.

[00:19:34.437] Kent Bye: Yeah, it reminds me of going to the VR Privacy Summit back in 2018, which I helped co-organize with Jeremy Bailenson of Sanford and Philip Rosedale of High Fidelity and Jessica Outlaw, an independent researcher. And, you know, we had gathered 50 different people from across the industry under the Chatham House rules. And so we had lots of different people talking about this issue of privacy. And, you know, my big takeaway from that meeting was that you know, unless these major companies change their underlying business model away from surveillance capitalism, then there will always be this tension between trying to gather more and more of our data for them to make money versus us trying to protect our privacy. So it feels like a little bit of like an unfair fight when these major companies are not showing up to the table and actually in direct dialogue and just continuing to push forward and gathering more and more of this data. And for me, my big concern is that with the mixed realities technologies, you have all sorts of really intimate biometric data that goes beyond informed consent. A lot of the internet, you're typing something, you're using your agency to share information and you click a checkbox saying that you consent to this. But Once you start to get eye tracking data, galvanic skin response, your body movements, what's happening on your face, emotional facial detection, you know, all this information that we're unconsciously radiating out into the world and all of the immersive technologies of virtual reality, augmented reality, they have all these tools to be able to get more and more sensitive biometric data that is going to be fed into this giant surveillance capitalist machine. that there's no real way unless there's legal recourse or there's economic recourse for people to not support these companies. But yet these big companies are the ones that are basically some of the only major players that are working in this. 
I mean, obviously there are competing interests with Apple and HTC and other independent efforts that are out there, like the Valve Index. But for the most part, Google and Facebook are on this surveillance capitalism train, where a lot of that interest is in direct opposition to a lot of these issues around ethics and privacy. And I've come to this impasse of not knowing quite where to go without having to resort to having the government come in and enforce compliance with something that has ethically aligned design and privacy in mind.

[00:22:02.461] John Havens: Yeah, I mean, we're obviously very aligned. Again, I'm only getting to know the vast amount of work you've done, but it just seems like, especially from that half-hour talk of yours that I watched, we're really aligned. And in my last two books, Heartificial Intelligence and Hacking Happiness, I talk a lot about personal data and personal data sovereignty. Many heroes of mine are people you know, like Doc Searls from Harvard. His book The Intention Economy is one that I often recommend to friends in corporate settings, because what Doc talks about, and I credit this to him, like four or five years ago it really clicked for me, is that for me, for you, for any of us, it's easy to get angry or frustrated with the economic underpinnings of surveillance capitalism. At least in my own experience, and again, I'm speaking for myself here, not IEEE, I don't want to make a sweeping statement about the companies one way or the other, right? But the point is about tracking people's data. One thing I knew 10 years ago, still working in PR, and this was really early sentiment analysis, like when Crimson Hexagon was really young, is that I used to do Twitter sentiment tracking. I worked for... Gillette was a client, and we did it by hand. Our community manager would go in and find what guys were saying and respond to them. It was really exciting. But also what happened, especially with Facebook, when they changed their algorithm about the ads that we buy, and what I realized, because I don't really understand all the intricacies of, like, CPMs and paying X dollars per thousand to track people, but what I realized early on is I kept asking my advertising friends, when Facebook changes their algorithm, I just see us paying more, like our clients paying more money. Does that ever sort of flatten out or go down? And the answer was no. 
And then also the trend was that it became harder to track somebody on Facebook or in an online setting, because things just got so much more crowded, so much more noise, as it were, with all these algorithms. So one thing that a lot of companies understand, and this is especially consumer packaged goods or retail, non-technology companies, is that if you really want to continue to have a relationship with your customer, especially in the immersive web, the only way that you'll do it is data sovereignty, period. Like, I've been 10 years into this, and you've been obviously many years into this in terms of personal data issues. Privacy, things like GDPR, which is incredible, meaning a very positive tool, protecting children's data, these are things where legislation, period, just needs to happen. And they're related to other things, like, you know, the recent Facebook Germany antitrust stuff. So in terms of legislation needing to happen, me personally, the answer is, yeah, of course, like COPPA in the States for kids under 13. It's well known that kids way under 13 are being tracked. And I'm not saying it's the fault of any companies, but the point is that it's happening. So do we just sort of say, oh, let's not legislate too soon, or whatever else it might be? This, again, this is me, John. I'm a dad. My answer is no. Of course, what do you mean? The point is, no, we can't just have these conversations about should we legislate. Yeah, of course. If our kids under 13 are being tracked, where their data can be used for things like human trafficking, then the answer is, how do we protect our kids? So some of those conversations are a given. But as a positive for companies, the point here is, I bring up that story from 10 years ago because the logic is, our data already, meaning both yours and mine, our data right now is a commodity. There are dozens and dozens of places where you can find our data, on whatever second- or third-party data brokers' reports and all that. 
And by the way, a lot of the services are very legit. In PR, we worked with a lot of the best companies, where, knowing that people had released their data in ways that they gave full consent for, you were using analytics that were depersonalized or whatever. So the point is, you had good data about your customers. That was how the relationship works best. However, even in those relationships, outside of really unfortunate nasty stuff with lists and all that from data brokers, it's the simple fact that if our data is a commodity, that means it's harder and harder for companies to track people without paying a ton of money to interact in some way with someone after they track them on the web. And also, because that tracking data is so prevalent, the thing I keep telling my friends in advertising, which they agree with, is, remember that by the time I have more data about a person than they have about themselves... No one tracks things like, well, maybe some do, but like what you ate for the past six months, right? Unless you're on a food app. No one does everything at the same time. We don't track every single thing about ourselves and have insights about us. So the only way, especially in the immersive web, that companies will have the tools to have direct interaction with customers is through sovereign data channels. Because really, at the end of the day, if you want to know who John is, and you want my subjective, immediate truth, like, what I think right now, this Monday while we're recording, you actually just have to ask me. You can track me six ways from Sunday and know that John typically does X, Y, and Z. 
But if you want to know things like what my favorite brand is, what my faith is, who I want to vote for, you still have to give me the tools that empower me, through peer-to-peer trusted encrypted channels, to give my answer to those questions. And without that tool, we lose literally the heart of who we are and our identity. And what are brands going to eventually track? Nothing. We won't have any agency left; we won't really know what we like from a brand. Anyway, so the peer-to-peer side of things is what I see a lot of very smart, forward-thinking companies realizing. Banks are a good example. A lot of verticals are saying, this is a fact: we must give these sovereign data tools to our customers, or we literally won't be able to know how to get their direct feedback in the future.

[00:27:59.016] Kent Bye: Yeah, in this ethically aligned design mixed reality chapter, there are five different sections that go through a number of different topics: social interactions, mental health, education and training, the arts, and the last one is privacy, access, and control. We've been talking a lot about privacy, access, and control, but just to dive into the structure of this document: you go through each of these areas and point out specific issues, then give some context and some background, then give specific recommendations, and then link off to a number of different articles for people to go do more research. And so it's a bit of trying to take these issues of ethics... And for me, what makes ethics so interesting is that you do have to take a step back and make sense and come up with some theoretical frameworks around what is even happening, from the experiential design aspect of the technology to the dynamics of human experience, and then try to figure out what these trade-offs are. Because the thing with ethics is that it's never clear. It's never a hundred percent clear decision. It's always some trade-off. The framework that I put forth, I realized, is a very idealized approach, because anybody who's actually making something cannot live up to all those different principles that I put forth. And so you have to learn how to make this variety of different trade-offs. 
And so I was very curious to see that you were also advocating for people to look beyond just the Western approach to ethics to other traditions like Buddhist ethics and Ubuntu ethics, just to get a variety of different perspectives on this, and that you're trying to map out all of the landscape. And you have a show like Black Mirror, which is, what I see at least, one of the most sophisticated translations of a lot of those ethical and moral dilemmas into a narrative form, so that we can watch a show like Black Mirror and really understand the impact of some of these issues. And so what I've noticed is that some of the most compelling art pieces that I've seen have taken some of these ethical and moral dilemmas and kind of extrapolated them out into a future scenario that you're exploring through narrative. And actually, at the last part of the VR Privacy Summit, we had a whole brainstorming exercise where we were essentially coming up with Black Mirror-type scenarios and then trying to come up with mitigating factors. So there's a little bit of this narrative psychology, trying to project yourself out into the future. And so for anybody who's interested in science fiction, I think looking at these ethical and moral dilemmas is actually a good roadmap to be able to project out, because these are the complicated issues that we're not necessarily going to have clear answers to, but yet we have to find some ways to navigate these trade-offs. And so, after you've put together this whole document, I'm just curious how you make sense of some of those trade-offs and how people should navigate them, since it is so overwhelming and so vast. Yeah, what's the approach to be able to even approach some of these ethical and moral dilemmas?

[00:30:58.124] John Havens: Yeah, it's a great question. And I certainly don't want to make it sound like, hey, here's the silver bullet, one answer, right? That said, there are a couple of things. Ethically Aligned Design inspired a number of standards working groups in IEEE. And the standards, at least the ones that I've helped develop, and by the way, when I say develop, mainly I mean introduce the people who wanted to get them going in IEEE; the volunteers who do all the work are literally our rock stars. But for instance, there's one of the standards, IEEE P7012, where the P just means it's in project form, it's still in development. And I may get some of this wrong, but Doc Searls helped get that going, which I was very excited about. And it's focused on creating machine-readable terms and conditions, or personal privacy terms: terms and conditions for an individual. And the logic and vision there is quite uniform, meaning this would be a global, universally applied thing; how it's done obviously is going to be very different based on the individual. But the logic is to say: whatever a person's preferences are about privacy, and when I say preferences, you know, you and I might have different attitudes about how certain of our data is shared. Cool. That's based on our values and our preferences. But the tools aren't there. I, John, can't represent myself in an algorithmic form. But when my data that has been proven to be attached to my identity is being accessed in an algorithmic way, I should be able to, in the same way that my data is tracked right now, sort of be present when I'm not even there and provide my terms and conditions back to the algorithms and, of course, the companies that are seeking my data.
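To make the idea of machine-readable personal privacy terms a bit more concrete, here is a minimal sketch of how an individual's terms could answer a data request on their behalf. This is illustrative only, not the IEEE P7012 format (which was still in development at the time of this conversation); every field name here is a hypothetical:

```python
# Hypothetical sketch of machine-readable personal privacy terms.
# These field names are illustrative; they are NOT the IEEE P7012 spec.

my_terms = {
    "location":  {"share": "never"},
    "purchases": {"share": "aggregate-only", "retention_days": 30},
    "email":     {"share": "opt-in", "allowed_purposes": ["receipts"]},
}

def respond(request):
    """Answer a data request algorithmically, 'present when I'm not even there'."""
    term = my_terms.get(request["category"])
    if term is None or term["share"] == "never":
        return {"granted": False, "reason": "not covered by my terms"}
    if term["share"] == "opt-in" and request.get("purpose") not in term.get("allowed_purposes", []):
        return {"granted": False, "reason": "purpose not allowed"}
    return {"granted": True, "conditions": term}

print(respond({"category": "location", "purpose": "ads"}))
# -> {'granted': False, 'reason': 'not covered by my terms'}
```

The point of the sketch is simply that once preferences exist in a structured form, the negotiation John describes can happen machine-to-machine, with each person's values encoded differently.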
And from a technology standpoint, I have a lot of friends that work in music, and there are tools, whatever it's called, an audio tool, so that when you share a music file, after five seconds or whatever, it won't let you share that file. And if it's being copied in some way, then the owner of that music file, the musician who created it, will get a notice: someone's trying to steal your stuff, basically. I'm being very simplistic. The same logic applies here; the technology can be done, right? It's not that the technology is hard, it's that a lot of people don't want this to happen, where you and me would both have our algorithmic terms and conditions. But to show it's not really science fiction-y, I was just on a call today about work being done in The Hague in the Netherlands, and it's public knowledge, it's part of the Odyssey program in Europe. And if I have some of this wrong, please go to the site, meaning your listeners, and make sure I get this all right. But the basic idea is they're creating a physical space, one public park area, using tools where all the citizens know how the data is going to be used, and it's full disclosure. And the logic is, in a public setting like that, what can soon happen with you and me is, if we have our personal data terms and conditions available in algorithmic format, it's sort of attached to our phone, right? Meaning as I walk around, my phone is the manifestation of me in the sense of how my data radiates out from me, unless I have a node in my physical person, which, that's a different conversation.
But then, like, when I walk into a public space, and I've written about this for a long time with augmented reality type stuff, there should be something where you walk into, say, a public arena, and maybe you'd get a text that says: hey, John, based on your personal privacy preferences, as a reminder, this is a public space and people can take your photos here. And maybe it's a different law based on where you are physically, geographically, on the planet. And it might say things like: because you're in a Starbucks, and I'm making this up, I'm not saying Starbucks does this, data can be shared in this way; because of your personal privacy terms and conditions, you may want to not go in here, right? Now, that's one option. The other option is that you might get that message and then you can walk in and maybe automatically post an augmented reality marker that states your disagreement, right? I disagree with this, but I really want your coffee. And again, I'm just making this up with Starbucks; let's make it a different coffee place, a general coffee place. But I don't want my data to be tracked this way, and I'm letting you know. And so it's essentially kind of like writing a customer complaint that you post in such a way that it could be seen by other people using augmented reality, or maybe it automatically also posts to Twitter. And this is not intended to be combative. It's intended to give parity, right? Parity is what's key. This logic that trust can only be built where we are tracked by brands, and this is not a negative to brands, it's about awareness, is one of the biggest dangers to brands, as I tell all my friends working in brands. Like, look, giving this voice back to your customer actually means you lower your cost of acquisition, right? Like, I have all these brands that I love and use. Starbucks is one of them. I love a good Starbucks coffee.
but I can give them my hyper-specific preferences through whatever, a blockchain or smart contract or sovereign data exchange, where then they also know: oh great, John likes this coffee, or he's changed from a latte, he likes this type of mocha drink. And if they want to give me an offer, great, right? And I can pay whatever. But they don't have to track me anymore, right? This is what's so scary to the current system: it could change, where direct interaction with the brand means you don't have to track me six ways from Sunday to know that I like Starbucks. So just give me these tools. Anyway, I bring that up to say that there are some universal things we're working on, in the sense that they are universal. Everyone actually does need to be given these personal data stores, which is the term that's often used, so that they can know: oh, in the same way that my data is tracked by hundreds of thousands of different actors and entities, I also need to be taught how to segment my data, in the sense of really think about it, and then I need to be allowed ways to share it that are honored and respected. Otherwise, it's not just about privacy and law and human rights, which is critical and of course seminal; it's just utter chaos, you know. Another thing, as an example, and we mentioned Mathana earlier, that's Mathana Stender, who's in the committee, which was chaired by Monique Morrow and Jay Iorio. I just have to mention their names because they're brilliant. But Mathana has this idea of what they call a universal escape key, which is brilliant, and another thing that we will all need. Say you're going into some new immersive game or immersive environment like Second Life, and you just have no idea about the game. You're trying it out. Right now, that's happening all the time with young people and adults, and you sign whatever terms and conditions about privacy. So let's just say all that's on the up and up, right?
You feel good about the brand, the brand's not trying to steal your data or whatever else, but you go into this immersive reality, and maybe you're in there for like 20, 30 hours. So what happens if all of a sudden something happens in the game? And you can read about it in the extended reality chapter, and again, this is Mathana's idea, so they explain it better than I do. All of a sudden something happens in the game which is just an element of the game. Let's say it's something violent, right? And maybe there were already some kind of violence notices, so again, I'm not trying to blame the game here. The example is that, say, between the two of us, one of us then feels uncomfortable in the game. Well, there has to be this sort of logic of a universal escape key, which is kind of like hitting a metaphorical big red button that says: you know, I'm scared, I'm nervous, I want out, whatever it is. And that has to be honored. And by honored, it's not because the game or the manufacturers are doing anything wrong or evil. It's just a sense that our agency is different for every person. And that type of tool, we feel, is quite universal, or at least we'd want it to be offered to people. And when I say we, by the way, IEEE creates these consensus-driven standards. Anyone can join the working groups, and they're free, whereas for a lot of the other work I do, you have to be a member. So it's not like IEEE is dictating these things. These are the standards that are being supported, and the logic is that the people who join the working groups help, by consensus, to create these types of tools. And I can talk more about the cultural aspects of things if you want. But those are two examples, and I'll pause here.
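One way to picture the universal escape key Mathana describes is as a reserved input that the experience itself is never allowed to intercept. A minimal sketch, assuming a simple event loop; this is a hypothetical design, not a mechanism specified in the Ethically Aligned Design chapter:

```python
# Sketch of a "universal escape key": a reserved input checked BEFORE any
# game code sees the event, so no experience can swallow or rebind it.
# Hypothetical design for illustration, not a spec from the EAD chapter.

class Session:
    def __init__(self):
        self.in_experience = True

    def handle_input(self, event, game_handler):
        # The escape event is handled by the platform layer first;
        # the game's own handler never gets a chance to override it.
        if event == "UNIVERSAL_ESCAPE":
            self.exit_to_safe_space()
            return
        if self.in_experience:
            game_handler(event)

    def exit_to_safe_space(self):
        # A real headset would fade to a neutral, calm environment here;
        # this sketch just records the state change.
        self.in_experience = False
        print("Exited to safe space")

session = Session()
session.handle_input("jump", game_handler=lambda e: None)   # game keeps running
session.handle_input("UNIVERSAL_ESCAPE", game_handler=lambda e: None)
```

The design choice the sketch illustrates is the layering: honoring the escape is a platform guarantee, not a courtesy the game opts into.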

[00:39:09.007] Kent Bye: Yeah. What do you mean by the cultural examples?

[00:39:11.735] John Havens: Well, there's a chapter, and I'm proud of all the chapters, and again, I was chair of two of the committees, well-being and personal data, but the classical ethics in autonomous and intelligent systems chapter is really enlightening. It was to me, in the sense that it's the first time I just sort of grokked or understood that people outside of the West, not all people, but a lot of them, are keenly aware that Western ethics, meaning like Greek ethics, is the foundation for a lot of both Western technology and policy. It's kind of like you mentioned surveillance capitalism: there's a lot of paradigms that are so deeply entrenched in the modern psyche of the past, say, 50, 60, 70, 80 years, that stem from things like GDP and ideologies of what makes a, quote, good society. And, by the way, I'm a big fan of Greek and Western ethics, especially eudaimonia from Aristotle in terms of well-being. However, and again, the chapter says it better than me, and I'm quoting from the volunteers here: where Western thought centers a sort of idea of man, and I use man specifically, meaning male, versus a recognition of the equality of gender and all that, there's this idea of rationality. Fundamentally, in some interpretations of Greek or Western ethics, rationality is the primary aspect of what makes a human human. And sometimes it can be the foundation for what people will say democracy is about. And I want to be crystal clear here: it is not my intention to say anything negative about Greek or Western ethical traditions. However, people may interpret from them that rationality is the only part of what makes us human, and rationality often creates a sort of dualism, right? Like right, wrong. It's very binary.
It quite literally has been embedded in code, where code is sort of zeros and ones, and this logic of everything being either-or, yes or no, really doesn't allow for the fact that, as humans, there's a lot of shades of gray. And the classical ethics chapter beautifully, and to me not in a negative way, points out that non-Western traditions offer something different, whether it's Taoism or Confucianism or, in Japan, Shinto traditions, and certainly many indigenous First Nations ethical traditions. And First Nations is very broad; there's obviously hundreds or thousands of different First Nations traditions, because there's many First Nations around the world. But the point there is that, for me, it's one of the first times I understood this sort of symbiosis with nature. As a Westerner, I was kind of raised to say: look how beautiful nature is out there, I want to protect that tree because I like looking at it. But from a consumer mindset, like you were talking about before, there's this idea that nature's kind of a resource for me to use. Whereas my interpretation from a lot of First Nations work, which I'm still trying to learn about, is that talking about nature as a resource is sort of like saying, human-rights-wise, that kids are just there for us to use for work or something. In many indigenous traditions that I've read about, nature is like a family member; it's your family. And so I think that's quite lovely and beautiful on a personal level. And I also think it means we prioritize the environment in a much different way with policy and technology: if it's not just a resource, it's really something that we understand is necessary for actual human flourishing. It's in combination with the environment, not just being sustained, but helped.
And then there's the Ubuntu ethics tradition. There's a guy, I'll send you his name because I always get his last name wrong, but his first name is Sabelo, S-A-B-E-L-O. He's currently working at Harvard, and he's going to be a rock star, I think, because I just read a paper of his on Ubuntu ethics, which I'll get some of this wrong, so please trust him on this. But Ubuntu ethics is a South African tradition. And the paper and the work that Sabelo has is explaining how, where this idea of rationality in some interpretations of Western ethical traditions can embody this dualism, this sort of right-wrong, it has also sometimes been used in history, and this has nothing to do with the people creating the Western ethics, for things like colonialism or imperialism, to sort of justify: we, and this is the royal we, whoever's saying it, have the right to subjugate them, whoever them is, because we're rational and they're not. And from Sabelo's perspective, and especially this paper that's coming out soon, he points out why it's so critical to recognize, well-intentioned or not, that as Westerners, if we create technology, people may just inherently ask: if you're speaking as a Westerner talking about ethics, do you know about Ubuntu? Do you know about these other traditions? And if not, I kind of liken it to me talking about women's rights. I have a daughter. I'm very passionate about women's rights. I have many friends who are experts and who are women in the area, but I can't speak from a lived experience about being a woman. So, you know, if there's a panel of all guys talking about women's rights, me personally, I'm going to say: look, they might be saying some smart things, but why aren't there women on the panel? You know? And in that sense, the Ubuntu tradition expands, and it's a yes-and, right?
It's not ignoring rationality but, as Sabelo says so beautifully in his work, expanding rationality to include relationality. And I know you and I care a lot about positive psychology, in this aspect of understanding, as humans, how we have a need for community. And Ubuntu is this beautiful sort of extension of human rights. He explains these three layers of human rights. The first layer is, for lack of a better term, protection: we must protect women, kids, you know, things like, in war, how to honor the rights of people. Then the second layer under that is this reason-and-rationality side of things: how do we have democracy, genuine holistic democracy, available to all for real, where people can have their agency and express what they need to express? Very simplistic, obviously. But the third layer, which Sabelo talks about from Africa, is a sort of third layer of human rights developed, I think, in the 80s, which included this idea of solidarity and community. And solidarity, as he explains it, is very pragmatic Ubuntu ethics, because it's about restorative justice. Why I love it, and why it's also, I think, so hard, is that the Ubuntu idea is that you not only are caring for the oppressed, right, which we have to do, of course, but you're also asking, in terms of human dignity: how can you bring the oppressor back into the fold of the community? So it's not just this sort of punitive justice. And by the way, I'm not implying that that's what Western ethical or legal traditions are. I just mean, this is what they're saying.
And so after apartheid, for instance, I can't imagine on a personal level what it would be like to be in that type of situation and then have a cultural tradition, and let's just call it what I feel it is, this deep love for other humans, where, in that horrible situation, toward the people that have done something that horrible to you, and by the way, it's not about just forgiving them and saying, go crazy, do it again or something, it's about bringing those people back into justice. And after apartheid, this is why Nelson Mandela, for me, is so revered: look at this deep opportunity for recognizing relationality in concert with rationality. And, you know, these other traditions complement the Western traditions beautifully. But this is where we also have to, I think, honor that if we only utilize Western ethics, then, for me at least, it makes more sense to do what they do in the EU, where they talk about their European values and so on. That makes sense; I'm not trying to say this is what all of Europe is saying, but if you're speaking from a European policymaker's perspective, what else are you going to draw on but Western ethical traditions? But if we, and by we I don't mean IEEE, I mean if one is trying to say these are globally inspired ethical traditions, then you have to ask what those other traditions are, and especially see how they can work together, complementarily where possible.

[00:47:43.372] Kent Bye: I have so many thoughts about that. One is that I just recently published an interview that I did at the American Philosophical Association back in 2019. I did this interview with philosopher Lewis Gordon, who talks about relational metaphysics, where you see things not in terms of concrete objects or substance, substance metaphysics. It's more of seeing the underlying basis of reality as relationships. and how there's these processes that are unfolding in this relational way. And so I think with some of these ethical issues, it really requires to look at things in terms of these relationships. And so I'm really happy to see these non-Western ethical traditions, because I think it is trying to pull in a lot of those more relational approaches. and try to fuse that into decisions that designers are making day-to-day when they're making their technology, to see how they can be in relationship to the world around them, which I think is at the heart of what a lot of these ethical questions are about, is to be in right relationship with all the entities involved. The other thing that it brings up as I hear you talk about this is that in the privacy, access, and control section of the mixed reality chapter of the Ethically Aligned Design book that came out, there's a number of different issues around privacy, access, and control, around data collection issues within mixed reality. It presents multiple ethical and legal challenges that ought to be addressed before these realities pervade all of society. So we can see on the horizon that there's all these potential dangers to adding all sorts of biometric data to the surveillance capitalism machine. And that you're essentially saying, okay, well, there's lots of ethical and legal challenges that need to be brought up. 
And the second issue is that, like other emerging technologies, AR and VR will force society to rethink notions of privacy and public, and may require new laws or regulations regarding data ownership in these environments. So as a journalist, I can go to Facebook and ask them questions around how they're going to treat biometric data. And their answer has often been: well, we haven't released anything that is doing eye tracking, so we're not going to talk about the ethics of eye tracking data until we have an actual product that has it; then we'll address it, then we'll talk about it. So I realized that there's this very pragmatist approach where, until it exists, it doesn't exist. But I still think there's a lot of these deeper ethical issues that need to be talked about. And so I started to look at what's happening in these larger discussions around privacy. At the American Philosophical Association, the president last year, when I went in 2019, was Dr. Anita Allen, who is a founder of the philosophy of privacy. And she gave a rousing speech to the entire APA community saying: we need a comprehensive framework for privacy, like a comprehensive philosophical framework. And it was at that moment that I realized: oh, wow, there doesn't exist a comprehensive philosophical framework around privacy. Which then made me realize: okay, well, how can we expect someone like Facebook or Google to come up with a good philosophy of privacy when there's not a good fiduciary relationship between those entities and the business interests that they have? With surveillance capitalism, they have no interest in coming up with a comprehensive framework for privacy. And so I came across a discussion between three different philosophers. It was at a privacy conference that happened back on April 24th of 2015 at the University of Pennsylvania Carey Law School.
And there's three philosophers that were debating different approaches to privacy. It was Dr. Anita Allen, who takes a very paternalistic approach. She essentially is like, well, we don't think that you are responsible enough to take care of your own privacy. So we need to just declare it as a human right and all the companies just have to deal with it. which would essentially mean that the government would dictate as to decisions that you may have over your privacy. And Dr. Helen Nissenbaum takes a little bit different approach. She has contextual integrity, which is her approach is like, well, we need privacy, but it really depends on the context. Like you're able to give information over to a doctor, but we wouldn't necessarily want that same information into a company who's going to use it against you. So the contextual integrity seems like to be a very robust approach to privacy. And then Dr. Adam Moore has this more libertarian approach or approach that is giving you the rights to your privacy. And if you want to sell it like copyright, then you could license out your privacy and you could actually get paid for the data that you're giving out. and it's sort of in alignment with signing in terms of service and you're consenting to give over your privacy, which has all sorts of other issues as well. But the more that I looked at this issue, the more that I realized that maybe it's worth taking a step back and looking at some of these larger philosophical debates around these different approaches. And before we kind of rush to technological implementation of some of these things, we need to step back and see, okay, how do we even conceive of this concept of privacy before we start to come up around more lower level standards around it?

[00:52:22.865] John Havens: Yeah, I mean, you said so much really great stuff there. And look, you know, I am not worthy next to Helen Nissenbaum, so let me add my paltry insights alongside these titans, right? But there's a couple of things. Before I was at IEEE, I did consulting work for a fantastic company called Datacoup, D-A-T-A-C-O-U-P. And Matthew, the CEO, great guy, really smart. The logic, and this was 2013, I forget exactly, but it was pre-2015, was to create a way for people to sell their data and make money. You gave your credit card, and it was encrypted and safe, and then Datacoup would sort of do what any other kind of data broker would do and, at the end of each month, pay you for the aspects of your data that you allowed it to give out. And it was a range of data, mainly advertising-oriented data, not like medical data. What was hard then was that, by the end of the month, the checks they could write to people, and to his credit, he was also taking money from the company to pay people to sort of teach them about their data, were like two bucks a month. It was tough, because people are like: wait, who is Datacoup? I'm going to give you my credit card? There's all this confusion, understandably, about things like data sovereignty, where people are like: well, I'm freaked out, I don't want to give my data to some data store, and my PII data, that's terrifying. Whereas this is both obfuscation combined with confusion, because it's the opposite, right? Once you have a trusted identity source, your personal data vault or locker, whatever term you want to use, is simply the place where you get to say: well, this is data I want to share. And where people say things like, well, the horse has left the barn, the data is out there: that's not what's relevant here, right? Like, okay, yeah, there's a lot of data out there about me.
But right now, today, if I start to use a personal data store, and you say, hey, John, send me the file, you know, if I recorded a WAV file as a backup for this interview, and I said, yeah, but I'm worried about this file getting shared with someone else, then you'd say: well, let's use whatever, some app that you trusted that was peer-to-peer. I'm only sharing it with you because I trust you, because you're the host and a friend. And if it was blockchain-oriented, then there'd be a ledger mentality. And the logic is, I know that even if this gets hacked or whatever, I trust you because I'm the one sending it to you. And the point of all that is that I have more control over my data. But to go back to your question, there's so many levels to the word privacy. And one thing I want to talk about, just because we haven't touched on it as much, is, you know, you mentioned surveillance capitalism. And of course, Shoshana Zuboff's book, The Age of Surveillance Capitalism, is just, I use the word seminal maybe too much, but that's what it is. The first chapter alone, I've highlighted so much that the highlighting is sort of irrelevant; the page is just a different color now, all yellow. But the paradigm underlying even our whole conversation is that the economics, not just of the Internet, but of the world, aren't working, right? Which is how privacy a lot of times gets framed; even if that's not the actual conversation people are having, it's the biggest elephant in the room. The underlying thing about privacy is not just that, you know, governments want to protect people's privacy; that's a given. It's things like how intellectual property is handled, how copyright is handled. What's interesting is privacy for a company: if I want to protect my intellectual property, I have so many different ways to do that. And I'm not necessarily averse to that. I'm not saying that's wrong.
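The "ledger mentality" John mentions, where each peer-to-peer share is recorded so the history can be trusted, can be sketched as a simple hash-chained log. This is only an illustration of the general idea, not any specific blockchain product; all field names here are invented:

```python
# Minimal sketch of a "ledger mentality" for peer-to-peer file sharing:
# each share event is appended to a hash-chained log, so later tampering
# with the history is detectable. Illustrative only; real systems differ.

import hashlib
import json

def append_entry(ledger, entry):
    """Append a share event, chaining it to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"entry": entry, "prev": prev_hash}, sort_keys=True)
    ledger.append({"entry": entry, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(ledger):
    """Recompute every link; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for item in ledger:
        payload = json.dumps({"entry": item["entry"], "prev": prev_hash}, sort_keys=True)
        if item["prev"] != prev_hash or item["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = item["hash"]
    return True

ledger = []
append_entry(ledger, {"file": "interview.wav", "from": "John", "to": "Kent"})
print(verify(ledger))   # True: the share history is intact
```

The design point is simply that each entry commits to everything before it, so a shared file's provenance can be checked without trusting any single intermediary.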
But where my own data is concerned, and this is where people talk about, I should be able to share my data and get paid for it. My answer is yes, but what I learned with Datacoup specifically is that it won't work until data sovereignty happens. And again, the reason is, me telling the world, hey, here's John's data and I'm sharing it, when you can get the same data from a dozen other places at pennies on the dollar, means I'm just one of dozens and dozens of salespeople of John's data, even though, ironically, I'm the only John. This is, again, why it's so absolutely critical that we have these structures, so that when I start sharing my data, and I'm the only John on the planet, in the immersive realm as well, that's where my insights start to become precious. And that's where people realize: whoa, it's not that other people can't get data about my actions, they can. But if they want the real subjective insights about why I do what I do and who I am, I am the only one that can provide that. And the other thing about sharing data is that it's very pragmatic a lot of times. Like, I did consulting for a friend named Anil Sethi, who had a company called Gliimpse, G-L-I-I-M-P-S-E, which was acquired by Apple. So apparently he was doing something right. But in America, up until a couple of years ago, I couldn't access my data at all, until HIPAA laws let me actually get my data. And by that, I mean not just pieces of paper at my doctor's office. What Gliimpse did so brilliantly was to solve the problem that one hospital might give my data in XML and a different hospital give my data in a different format. Gliimpse combined all the different data sources into something that was written in English so I could read it, and then segmented it and said, like: here's your heart data, here's your cancer data. So first of all, I was just able to access my data in a way that wasn't harming the brands or the insurance companies or the hospitals.
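The kind of aggregation John describes Gliimpse doing, merging differently formatted records from different providers into one readable, segmented view, can be sketched roughly like this. All formats, field names, and helper functions here are invented for illustration; Gliimpse's actual pipeline is not public:

```python
# Rough sketch of merging medical records from sources with different
# formats into one segmented summary, in the spirit of what John describes
# Gliimpse doing. All formats and field names are invented for illustration.

import json
import xml.etree.ElementTree as ET

def from_hospital_a(xml_text):
    """Hypothetical hospital A exports XML: <record><type>...</type><note>...</note></record>."""
    root = ET.fromstring(xml_text)
    return {"category": root.findtext("type"), "note": root.findtext("note")}

def from_hospital_b(json_text):
    """Hypothetical hospital B exports JSON: {"dept": "...", "summary": "..."}."""
    data = json.loads(json_text)
    return {"category": data["dept"], "note": data["summary"]}

def combine(records):
    """Group normalized records by category: 'here's your heart data, here's your...'"""
    segmented = {}
    for rec in records:
        segmented.setdefault(rec["category"], []).append(rec["note"])
    return segmented

records = [
    from_hospital_a("<record><type>cardiology</type><note>ECG normal</note></record>"),
    from_hospital_b('{"dept": "cardiology", "summary": "BP 120/80"}'),
]
print(combine(records))
# -> {'cardiology': ['ECG normal', 'BP 120/80']}
```

The hard part in practice is the per-source adapters; once everything is normalized into one shape, the portable, human-readable summary falls out almost for free.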
It'd be the same as if I went to all those physical locations and got hard-copy paper versions of my data; it's just in a portable digital fashion. But then what was instantly valuable in terms of time savings was something like, you know, I have two kids, right? So, summer camp data for your kids. I know for a fact you spend hours: my kid's allergic to peanuts, my kid's frightened of, you know, whatever, Monchichi dolls, whatever it is. That data, by the time you enter it, takes hours and hours when you do the same thing over and over again. Sharing your data in the ways that Gliimpse taught me means you get to port the data to people you trust. And then in terms of medical data, this is also where it's life-saving. You're not carrying big massive piles of documents around; if you're struggling with a loved one that has, like, cancer, the last thing you want to be thinking about is, do I have the physical paper copies of everything? It's harming people's lives to not have portable digital data. And again, notice how I haven't said anything like stop paying your health insurance, or these companies are evil, because that's irrelevant. The point is, it's a complement to the system. But where anyone might say, yeah, but, you know, the back end of the CMS systems, how those work, or, well, I'm part of this insurance company and we'd prefer not to, my answer is: I don't care. I don't care. Because those companies are saying they're going to hinder me and my loved ones and people in general from having access to their medical data in ways they can share, while people are still paying them and giving them more information that will help, like, they're going to be your customer for even longer in more specific ways. But my answer is, seriously, I don't care. And why do we talk about innovation in ways where it's not about helping the patient?
Anyway, so the final point here, just because I always try to say this whenever I talk these days: GDP, which I mentioned before, and which people often ask about, like, why do you care so much about GDP? I bring it up because every conversation, this one included, is in one sense mired in the zeitgeist that, since the Second World War, has assumed that single bottom line exponential growth, and you mentioned surveillance capitalism, is the number one priority of how things get measured and valued, growth that's based on GDP. It doesn't mean that GDP is evil. That's not the point. And by the way, as a metric that everyone agrees on, great, right? That's why the term a lot of people use is GDP and beyond. It simply means that financial metrics are critical, but we also have to have metrics that measure the environment and things like mental health. And where all three of those things are valued in the same way, then conversations about privacy become very different, I think, or they will. Because the logic is we don't have to rush to market to get something out fast to people. A lot of times, like you mentioned before, companies will say, well, first we're going to build something, and then we want to figure it out. My answer is no, no, no, no. Hold on. While you're building it and designing it, invite, and it could be under NDA, all the people I mentioned before that are critical to understanding how this is going to affect people when it comes out. So that when it comes out, it already has factored in not just the ethics and the risk, but all the possible innovations, right? There's this mindset of, like, you know, we have to release it first. And the answer is, you actually don't. You really don't. You have to test? Sure, but you can test in closed models and all that. And more importantly, participatory design is critical. You actually bring people in and help test. 
The real problem at the core of it is that the rush to get out is all about this exponential growth mentality that's based on the single bottom line. Where, again, it's not that profits are bad. Profits are needed. They're great, right? But when, every quarter, the main reason you make decisions is how are we going to make our fiscal numbers, and that's how every decision is made, then that's actually what is driving the pressure, as it were, on privacy. There's not more time taken saying, OK, everyone, let's take a step back and really examine this beautiful new technology that's going to come out that can do all these great things. And like you pointed out, your areas, meaning your ten amazing areas of ethics, Kent: let's take the time and run through scenario planning and go through all ten of these things for this new product X. And privacy is a big part of it, so let's think about privacy in all ten of those domains. And here's the problem, but here's the opportunities. But that extra time isn't taken on the front end in a society that says the main thing we have to value is financial metrics in isolation as the real key performance indicators for everything society does. Until we change that status quo single bottom line, privacy, and by that I mean privacy legislation, is going to be really hard. It's going to be always the same, right? It's going to be, well, let's not legislate too soon and don't hinder innovation. And at the end of the day, again, I'm speaking here as John of the IEEE, but, like, is it working? And the answer is no, because the environment is at this place where it may very quickly be even more dangerous. I mean, we're harming it more and more. And the same with people, mental health and all that stuff. The economics of so many aspects of GDP are not working for the majority of people on the planet. 
And so that status quo is what's hindering us. That's my real underlying thing to say here, and a long answer to your question about privacy. Privacy and the status quo: changing it is going to be either about GDPR-type legislation, maybe blockchain stuff, sovereign data, that's our hope, but the real shift is going to come when there's pressure taken off of companies, frankly, to say, look, don't worry about your fiscal numbers as much this quarter. Focus more on the environment and people, and especially people involved, you know, the privacy and dignity of their data. Teach them how precious their data is. Stop saying things like the horse has left the barn. Stop obfuscating whether they should get to their data and access it, because it won't hurt your business. It's only going to help your business. And more importantly, what is the priority? What is a good society about? For me, and this reflects our work in the three top principles of ethically aligned design, it's about prioritizing long-term holistic human flourishing. And flourishing is not just happiness, it means all the different aspects of a person's life. You have the ability to be happy based on your context, in symbiotic relationship with environmental flourishing. And environmental flourishing is also not just, hey, let's keep emissions from killing us. Flourishing is prioritizing planet above exponential profits so that the next seven generations and beyond have a beautiful, gorgeous planet that we are caring for the way we need to, not just for survival, but for the joy that we should give to our kids and the generations to come.

[01:04:57.005] Kent Bye: Beautiful. Well, just two very quick questions to wrap up here because I know that we could talk for days about all the different issues here. But if people wanted to get involved, where would you suggest that they go to if there's any specific working groups or if there's any future initiatives that you're working on in terms of these issues of ethics and where people could go to either get more information or get more involved?

[01:05:20.327] John Havens: Sure, thank you for asking, and such a pleasure to do this interview with you. So excited to get to collaborate with you, by the way, in terms of your work in ethics and XR. But there's a website called ethicsinaction.ieee.org. It talks about ethically aligned design and all the standards I mentioned. And then there are emails there so you can get in touch with people. I'm on Twitter at John C. Havens. I'm always thrilled if people want to DM me on Twitter or follow me or whatever. And then I'll just introduce you to the chairs of any of the work that we're doing. And the work that I drive, it's all free, meaning you don't have to be an IEEE member. You can join, of course, if you want, but it's volunteer driven, so there's no money involved. And we'd be thrilled certainly to have any of your listeners involved. You know, we're really excited about all the work in augmented and extended reality that we're hoping to do with, you know, thought leaders like yourself.

[01:06:11.313] Kent Bye: Yeah. And I recommend people download Ethically Aligned Design on autonomous and intelligent systems. It's an epic document, 266 pages, with lots of stuff that overlaps with XR as well. And so each chapter is worth digesting and unpacking on its own. But because you've also had a background in augmented reality and you've been looking into artificial intelligence so much, I'm just curious what you think the ultimate potential of all of these immersive technologies, whether it be virtual reality, augmented reality, or artificial intelligence, what the ultimate potential of all these things coming together might be and what they might be able to enable?

[01:06:48.380] John Havens: Oh, great question. I think in my last book, Heartificial Intelligence, or maybe it was the one before, I talked about the kind of simplest version of the answer to your question, which is like a dating app for purpose. You know, dating apps where they use algorithms based on your preferences and tracking, whatever, you know, cool. But there's this idea that both in your physical community and then eventually soon in our immersive communities, we could identify the aspects of who we are that bring us flow, and this is positive psychology. So something as simple as, like, I play guitar, so I could teach someone guitar. And then if they play guitar and they have another skill, kind of like the commons, we can kind of swap skills. But the logic here is that positive psychology talks about compassion and altruism, and it's kind of common sense. Like, if you and I help each other in that way, then we're actually literally increasing physiologically our mental and physical long-term flourishing or well-being. And if there's sort of this connection to each other where I can kind of know in my physical community that the person across the street needs a ride to the hospital, whatever. Systems thinking means that in aggregate, something as simple as that action of me driving them, I'm doing it because I care about the woman across the street and she's older, whatever. But then I can actually save on, like, medical costs in that one action, and through algorithms we can trace this. We can actually see how being purpose-driven, living for others but connecting our skills with others, is a way that we can really transform the world in a glorious way. And so many ethical traditions I'm learning about globally. In Judeo-Christian traditions, it's called the Golden Rule, you know, do unto others as you would have them do unto you. Adam Smith kind of talked about this in his other book, The Theory of Moral Sentiments. 
Ubuntu ethics, I just mentioned how beautiful this idea is of a restorative, community-oriented sense that, in relation, we complete each other. This is where the paradigm of, right now, this idea of competitive capitalism, right, that competition is what humans were built to do. My answer is I disagree, and I have much, much empirical economic and physiological evidence to say no. There's a lot of people doing great work. His name will come to me, a guy at Yale doing some great work on this, that in our lizard brains from 10,000 years ago, we have just as much evidence showing that we are built literally for friendship and education as we are for war and whatever else. So, long answer, except to say this idea of a dating app for purpose means we have such an opportunity to use these amazing technologies to help each other, as simple as that might sound, as long as it's also done in concert with really helping the environment as well. So, and again, thanks for the interview.

[01:09:34.998] Kent Bye: Awesome. And is there anything else that's left unsaid that you'd like to say to the immersive community?

[01:09:39.459] John Havens: No, just, I mean, I would say it's great to be back, but I don't want to sound arrogant or whatever, because it was eight or nine years ago when I first was at Porter Novelli, and I really was in the augmented reality community. I got to speak, for instance, at South by Southwest with Lynne d Johnson, who at the time was with Fast Company. And I was really into augmented reality, and people like Ori Inbar and stuff, who I respect very much. Granted, much great work had already been done for years in augmented and virtual reality with tons of people. But I was really hardcore, at least in the community at the time. And then I left Porter Novelli, got into positive psychology, now AI. And I guess my main message here is it's just really exciting to be back in this world. And I hope, working with people like yourself, of course, and anyone else with IEEE, there's so much great work that I'm not a part of, people doing great work on XR and AR. But now my last message would be, you know, I'm just humbled and honored to be part of the community in whatever way, and really, really want to take the message of this community, especially to policymakers in AI, where I think the urgency here is to say to so many amazing policymakers doing such great work: we can't just focus on AI as if it's this sort of digital-only or algorithmic-only tool in isolation, or tools or systems in isolation, which a lot of people know, of course. But the ethics work, especially that you've done such a beautiful job driving, frankly, is critical. And what a great opportunity to just really urgently come to all these, you know, like the high-level expert groups working on AI. There's so much policy being done for AI that literally is not even mentioning augmented and virtual reality and immersive reality. And so the opportunity is not just to sort of introduce it in a way that people think, like, well, it's only, you know, 40 gamers somewhere 
in Korea that are playing these games. It's like, no, no, no, this is huge. This is quite literally the future, and it's the immersive web. And so the opportunity now, as much as I can help bring thought leaders like yourself and others in the community, who of course are already doing the work yourselves, but where I might be able to help further educate people by introducing them to you, frankly, and the community, would be to really accelerate the work, especially with policymakers, so they understand how imminent and important the extended reality world is.

[01:12:04.011] Kent Bye: Yeah, there's certainly a lot of fragmented silos that are out there. And I think a big part of this ethics work is to bring this variety of different perspectives and communities together. Like you said, from all these different perspectives, everybody has something to say. And so, yeah, for me, I just want to keep the conversation going. And I just wanted to thank you for all the work that you're doing. It's so amazing to come across this vast work and to see so many different levels and points of resonance. We're kind of on a similar track, and I want to put the intention out there to continue the conversation and to get more people involved and many more perspectives. And the thing I've learned about ethics is this is not something you can do in a vacuum. No one individual can come up with all the answers if they're not in relationship or in communication with a variety of different stakeholders and a lot of different perspectives. It really takes that plurality of perspectives and different approaches to be able to come up with the right frameworks for the designers as they're making these technologies, to be able to help evaluate all these different trade-offs. So again, thank you for all the work that you're doing and for joining me today on the podcast.

[01:13:05.657] John Havens: My pleasure. Thank you again.

[01:13:07.677] Kent Bye: So that was John C. Havens. He's the Director of Emerging Technology and Strategic Development at the IEEE Standards Association. So I have a number of different takeaways from this interview. First of all, what's striking to me is that a lot of the ethical issues that are covered within this document of ethically aligned design have a lot of overlap with issues that are not unique to just AI, but apply to virtual reality as well, and in fact to technology in general. And so one of the things that John had said is that they don't necessarily just refer to it as AI in isolation, because AI is embedded into pretty much every system that we have. And so they tend to refer to it as autonomous and intelligent systems. So all the systems that we're in now have some level of AI embedded into them. Even the computer vision that's within augmented reality has embedded within it different aspects of AI. And so this book, Ethically Aligned Design, I think does a great job of breaking down into these different chapters, and then these different sections, and then the different issues and the challenges for each of those issues, and then references that you can go look at. And for me, what was really striking was just the whole collaborative process this involved. It was different volunteers and different working groups, and they put out an initial release and then got over 500 pages of comments. That's what I find with ethics: as much as you think that you know all the different aspects of what is involved with ethics, there's always somebody who has a specific use case or experience, which brings up the unboundedness of this topic. It's just that it's never ending, and there are always going to be new ethical issues. Also, the IEEE is a standards organization that is international. 
And so for me, it was very interesting to see how there's a very specific Western philosophy that's embedded within both the business sense, but also the ethical systems that come down from the Greeks: the real focus on the individual and autonomy and agency and rationality. I think when you only look at these issues through that lens of individuality and individual decisions, you miss the collective unintended consequences on society and our relationship not only to the earth but to each other. That's where these different ethical traditions come in: the Shinto tradition, Buddhism, Taoism, Confucianism, indigenous traditions from First Nations around the world, and, not mentioned in this document, also feminist perspectives and critical theory around race and gender and design justice. I mean, there are so many different lenses that you can look at. And for me, I go back to Gödel and the incompleteness theorems, where he basically said any closed system of logic is going to have one of two things: it's either going to be consistent or complete. You can't have both. And so if you have a system that's contained, then there are going to be things that that system can't see. 
And I think that's the value of going back and forth between these different ethical systems, because they offer different lenses. Okay, what's the relationship to the earth and the ecological perspective, something that may be coming from indigenous ethics? What about the process of individual liberation from the Buddhist perspective, or looking at things in terms of relationships from Shinto ethics, or Ubuntu ethics, which is really looking at the community and truth and reconciliation and solidarity, and trying to see, like, okay, we're not just going to block people and exile them forever, but are there going to be mechanisms and processes by which we can have people own the harm done, and have some sort of peer-to-peer direct justice for healing the harms that have been done by the actions of individuals? And how do you start to do that within an environment where people are totally anonymous? So these different ethical traditions from non-Western perspectives, I think, are critical to be able to see, okay, what are the other aspects that we're not seeing? And again, I add in there also, just in terms of gender, race, class, and ability, so many different aspects of design justice and values-driven design, where you're looking at more of the underlying values of the users, but also their interests and what they're concerned with, and making sure that you're in right relationship with all the different dimensions of our humanity. Then there are a couple of other frameworks that John had mentioned that I think are worth calling out: value-sensitive design by Batya Friedman and David Hendry, as well as responsible research and innovation. That's actually a term that's used by the EU's framework programs to describe scientific research and technological development processes that take into account the effects and potential impacts on the environment and society. 
So again, trying to look at the unintended consequences of all these technologies beyond just your individual agency and the individual bottom line. That was a topic that I think came up over and over again: if GDP and the exponential growth of our financials is the only lens through which we're evaluating and making decisions on all these different topics, then we're kind of leading the pathway towards dystopic surveillance states that are not really taking into account all the potential harm that can be done. If you are only doing move fast and break things, and prioritizing innovation above and beyond everything else, then you're not going to be taking into account all the vast harms at a societal level. And I think about the network effects of this centralization of power and data: it becomes like a national security issue when these different websites get hacked, and you start to undermine different aspects of democracy and our free society. That was one of the things from Sabelo Mhlambi, who did the chapter on Ubuntu ethics and has a paper that John was talking about that's going to be coming out here soon, which is really looking at three different levels of protection: reason and rationality, but also solidarity and community, and how you can use these different aspects of restorative justice and integrate them into all the different aspects of this technology. So looking at all those different layers. He also gave an Ignite talk at Harvard that's online; I can link to it in the show notes here. He did this whole talk on Ubuntu ethics, talking about safe AI and just how it's a threat to have the consolidation of all this different data in the hands of just a small pool of people. And that safer AI is when we truly see each other and we're able to recognize the humanity of each other. And there is something that creates these asymmetries of power when these data storehouses have all this data on us. 
So that's a topic that John came back to over and over again: this concept of data sovereignty. If we don't have data sovereignty, then there are all these people gathering this data on us. There are dozens and dozens of these second- and third-party data warehousing places that have all sorts of different data on us, that have been tracking us, and they're selling it. And so there are all these markets for all of our actions and behaviors. So if we decide to become the seller of our own data, then we may be undermined by other people who have been surveilling us and are able to sell that as well. So we're actually competing with other people on our own data, which I think was just a surreal insight that John was sharing. And so I think one of the things that I really took away from this conversation as well is that there are potentially specific open standards that we could start to implement to be able to do things like data sovereignty. So just as an example, John had cited P7012, the standard for machine-readable personal privacy terms. So just like when we go to a website and we sign the terms of service, we have to follow all those terms and conditions in order to use that website. Well, what would it look like if we had our own terms and services about what we were willing to do and not do? Maybe there'd be a little bit of negotiation there, where if a website wants to use our cookies, we're able to blanket say, no, I don't want anybody to track my cookies. Or maybe there's a site where I want to be able to give someone specific access, and so I could do whitelists. So maybe there's something like that, where the terms and conditions start to get embedded within machine-readable code, and we have a little bit more control over that. 
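To make that idea concrete, here's a minimal sketch in Python of what machine-readable personal privacy terms with per-site whitelists could look like. This is purely illustrative: the field names, the deny-by-default rule, and the override format are assumptions invented for this example, not anything defined by the actual P7012 standard.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyTerms:
    """A hypothetical machine-readable set of personal privacy terms.

    `baseline` maps a kind of data request to "allow" or "deny" for all
    sites; `overrides` holds per-site exceptions (the whitelist idea).
    """
    baseline: dict
    overrides: dict = field(default_factory=dict)

    def decide(self, site: str, request: str) -> str:
        """Return "allow" or "deny" for one site's data request.

        Site-specific overrides beat the baseline, and anything the
        terms don't mention at all is denied by default.
        """
        site_terms = self.overrides.get(site, {})
        if request in site_terms:
            return site_terms[request]
        return self.baseline.get(request, "deny")

# Example terms: refuse tracking cookies everywhere, but whitelist one
# trusted clinic (a hypothetical domain) for access to health records.
my_terms = PrivacyTerms(
    baseline={"tracking_cookies": "deny", "session_cookies": "allow"},
    overrides={"clinic.example": {"health_records": "allow"}},
)

print(my_terms.decide("news.example", "tracking_cookies"))   # deny
print(my_terms.decide("clinic.example", "health_records"))   # allow
print(my_terms.decide("news.example", "health_records"))     # deny (default)
```

The deny-by-default choice mirrors the negotiation framing above: a site gets only what the person's terms explicitly grant, rather than the person accepting whatever the site's adhesion contract demands.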
Someone asked about this at XR for Change, and my initial reaction was that there are different contextual dimensions of privacy, and so I'm hesitant about a blanket set of terms and conditions for who can use my data under what conditions. I think there are going to be certain edge cases and contexts where maybe I want to give people access to something that I wouldn't normally give them access to, just because either I trust them or maybe I'm going to the doctor's office. And so I think it's maybe difficult to think that you're going to just set it and forget it, that you're going to be able to set terms and conditions that would be universally applicable for every situation and context. I don't think that's necessarily feasible. But if we do have a system to be able to say, okay, this is a baseline for what the terms and conditions of my machine-readable personal privacy terms are, then from there, that's a little bit more leverage against what is essentially an adhesion contract, where you just have to sign and follow all the terms, otherwise you can't use the website at all. Could you have somewhere in the middle, where you're able to basically have a place at the table to say, okay, I want to use this website, but without these things? Maybe then, at that point, if you're not willing to give that over, you pay for it and you're able to maintain different aspects of your privacy. These are still the very early days, but the point that I'm taking away from what John said is that in this whole ethically aligned design initiative that started back in 2016, there are at least a dozen different working groups that are looking at specific things to try to implement, either standards, or, he said, things like the Industry Connections Committee. For the past three weeks or so, I've been participating in some calls 
talking about XR ethics in general: is this something that requires its own standard or working group? Or maybe it's going to be an industry connections committee; that's still up in the air in terms of what exactly is going to happen. But just like there was an effort and initiative that happened over the last four years specifically around AI and ethically aligned design, I think that immersive technologies and extended reality are going to be the next frontier in terms of a similar kind of effort: looking at all these different dimensions, people putting up different recommendations and things to consider, linking off to the different academic research articles around this. And the XR Ethics Manifesto, I think, was, for me at least, a first iteration of just taking these conversations and trying to map out the landscape of the different issues. It's not to the point of academic rigor where it has footnotes and citations. I mean, this is such a huge topic that I'm just trying to get a sense of the landscape and how to make sense of it. That's the work that I have done so far with the XR Ethics Manifesto and all these different conversations. And to me, it seems like just continuing the conversation with folks like John and these other organizations that are starting to think about something that's very similar. And the thing that I think also comes back again and again is, what is the public policy perspective on this? And, you know, like Lawrence Lessig says in his pathetic dot theory, where he lists out the four knobs that you can turn at the collective level of society: the economics of the market, the culture and its norms, the architecture of the technology and the code, as well as the law. Well, the law is one dimension of how to push back against these companies and the culture that the companies have cultivated, this culture of mortgaging your privacy away and not having any sort of mechanism for data sovereignty. 
Then, is that something that can be solved through the technological architecture, with self-sovereign identity and different ways to do decentralized data stores? But there are still open-ended questions around the Fourth Amendment and the third-party doctrine, which says that any data that you give over to a third party has no reasonable expectation of remaining private. Dylan Yellad, a lawyer writing about the Fourth Amendment, has a paper called Virtual Reality Surveillance, where he says, well, maybe we need to adopt some spatial metaphors. Just like a phone booth or a hotel room or the bathroom is an extension of private spaces within public spaces, in enclosed areas you tend to have a reasonable expectation of privacy. And virtual reality is bringing those spatialized metaphors and intuitions back into our computer technologies. And maybe we need to have a similar kind of metaphoric enclosed space with a reasonable expectation of privacy when we're interacting with people in these enclosed environments. And to take it to the next level, with peer-to-peer encrypted communications, that's a whole other level of privacy where I think we could have a reasonable expectation of privacy. But as it stands right now, just in terms of the law in the United States, which is where a lot of these big major companies are hosted, essentially anything that's on the internet has no reasonable expectation of remaining private, which means that anything that happens on the net is free for governments to pay attention to and surveil and put into a big database and extrapolate all sorts of information on us. You know, the whole surveillance capitalism thing is a whole separate issue that we can talk about, but there's a whole other aspect of the way that governments are taking in this data as well, in that anything that a private company is doing is essentially fair game for the government to request. 
And as the Edward Snowden documents showed, there's actually been a lot of cooperation happening behind the scenes for a lot of these major companies to hand over a lot of this bulk data. But, you know, the other thing that I think was interesting about this conversation with John is that he's got a background in marketing, and so he's dealt with a lot of these things. And his perspective is that to really build trust, you want to build a relationship with your customers. You want to actually talk to them and listen to them. And if you're surveilling them to get all this data, then you're kind of taking away that conversation, and so you're creating an asymmetry in your relationship. You're not actually connected to them. And so obviously, when you have this at scale, then you can have network effects that are efficient at connecting people without having a conversation. But with the smaller companies, what John was saying is that the trend is that we're going to be moving away from this surveillance capitalism type of model. It's going to be more about actually building relationships. So again, I think we're getting back to the relationship-building dynamic that seems to be a common theme, which goes beyond just thinking of ourselves as individuals to thinking in terms of these relationships. And he says that the process of pragmatic applied ethics and understanding user values is not just good to do; it's the only thing to do if you want responsible and innovative design in the algorithmic era. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. 
So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
