#831 XR Ethics: HTC Creative Labs’ Daniel Robbins on Values-Driven Design for AR Prototypes

Ethics in Mixed Reality is a hot topic, and I moderated a panel discussion on it at SIGGRAPH. The panel was attended by a number of people in the immersive industry, including HTC's Daniel Robbins, who works as a principal UX designer at HTC Creative Labs incubating immersive XR technology possibilities that haven't already been productized. As a designer of next-generation technology prototypes, Robbins is very much interested in an ethical framework that allows him to evaluate the various tradeoffs for what type of culture these emergent technologies could produce. He takes a values-driven approach of trying to identify the underlying ethical principles or moral virtues that he wants to cultivate, then traces down a "values ladder" to see how the technology could start to shift culture.

Robbins and I talk about the open questions and challenging dynamics of the ethical and moral dilemmas of mixed reality. I mentioned that it's difficult to get big companies like Google or Facebook on the record to talk about the ethical implications of emerging technologies before they've actually shipped a product with some of those specific features. Robbins advocates that the time to be having these types of open-ended and difficult conversations is now, before we get to the point of producing the technologies, since by then it may already be too late if the proper ethical frameworks aren't already in place.

We also talk about a lot of the other design challenges for mixed reality related to progressive trust versus binary trust, the risks of biometric data that may turn out to always be personally identifiable given enough samples over time, the risk-mitigating behaviors of some of the major XR players, the special considerations in figuring out what SDK features should be made available to third-party developers, designing glanceable notifications in AR to preserve your situational awareness and safety, and the challenges of moving from explicit input to implicit input with eye tracking and biometric data.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So ethics and technology is a hot topic. You see all these different transgressions happening with the way that we use technology driving specific behaviors, the way that things are being designed to maybe hijack our attention, or surveillance capitalism violating some of our rights to privacy. So there's been all these transgressions that have been happening over the last number of years. And I think across the technological sector, ethics and design ethics has just been a huge topic. And part of the challenge is that, because the technology is getting into all these different aspects of our lives, we have to come up with some sort of framework and the higher-order principles that are going to be able to guide us as we're making these different decisions, based upon the different trade-offs, in order to cultivate a world that's a better place rather than a worse place. And that turns out to be a really, really hard problem. So over the last year, I've just been having lots of different conversations and interviews. We had a think tank, a panel discussion at South by Southwest, panel discussions at SIGGRAPH, and a keynote that I gave at Augmented World Expo. So I've been gathering up all this, and I'm about to go to the XR Strategy Conference by Greenlight and give a talk about ethics this Friday. So I'm going to be going through and editing a lot of this stuff, digesting it, and trying to synthesize it into this talk that I'm giving this Friday. But I wanted to give a bit of a sneak peek of this series that I'm going to be diving into, which is on ethics and privacy in mixed reality and XR. 
And I'm going to start with this conversation I had with HTC at SIGGRAPH, which was in response to a panel discussion that I had there featuring panelists from Mozilla, Magic Leap, 6D.ai, and Venn Agency. And Daniel Robbins went to this panel discussion. He had a lot of different feedback and information that he wanted to share. He's somebody who's a principal UX designer at HTC, at the HTC Creative Labs. He's creating technologies that aren't necessarily tied to the existing product roadmap. So he's looking at all these future-looking augmented reality, social VR types of experiences, and then trying to come up with some of the different design parameters and ethical frameworks that he's using personally in order to think about how to design the technology of the future. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Daniel happened on Tuesday, July 30th, 2019 at the SIGGRAPH conference in Los Angeles, California. So with that, let's go ahead and dive right in.

[00:02:46.757] Daniel Robbins: Good morning, Kent. This is Daniel Robbins. Right now I'm the principal UX designer at HTC Creative Labs, which is an incubation arm of HTC that's located in Seattle. I'm going to call this the third wave of VR, and I've been in it since the second wave, which was in the early 90s. I came up as one of Andy van Dam's mentees at Brown University in the computer graphics group, back when we were defining the beginnings of 3D computer interaction, before human-centered interface design was even a discipline in some sense. And I come to it from the fine arts world. I was an art major. Sculpture and making physical things has always been my passion. I've had the fantasy that at some point digital tools would be good enough that I could stop being a toolmaker and just use the tools. Here I am 25 years later and the tools still suck, so I'm still making tools. But again, still holding out that hope. My recent interest in ethics in AR is fairly new, and it's built upon the shoulders of people who've been thinking about this for as long as there's been society in some sense. My concern is that we are now on the cusp of developing new technologies that we haven't been able to think deeply about, and I have this radical idea that maybe we could come up with some ethical frameworks before we actually create the technologies, as opposed to what's happened, say, with smartphones and how we may have gotten ourselves into some corners.

[00:04:14.378] Kent Bye: Yeah, and so maybe you could just give a bit more context as your journey into these immersive technologies. You said you were a part of the second wave. So maybe you could just take us back to being able to experience virtual worlds for the first time.

[00:04:27.516] Daniel Robbins: Sure, sometimes it gets a little tiresome when the old fogies roll out their stories of punch cards and things like that, but I'll give a slight sprinkling just to date myself. The first computer at home was a Timex Sinclair that my dad built, and I was running strong because I had the 16K extension. Fast forward to undergrad, and the graphics workstation was a vector workstation with beautiful lines and lots of blinkies. This was in the era when people would always write their first ray tracer and end up with a black screen because they forgot to turn on the lights kind of thing. From there I transitioned to working at Microsoft Research for the bulk of my sort of industrial research career, working with people much smarter than me who were really inspiring, who were inventing the beginnings of 3D graphics as we know it, mostly toward the real-time side. My first sort of academic VR experience was at Brown. We had a Fakespace unit, which is one of the units from Mark Bolas's company. This was a large pedestal with a grayscale, very high-resolution display that was mechanically coupled, which had amazing tracking because it was mechanically coupled. And we were doing work with NASA Ames. Steve Bryson was the name of the researcher. We were doing fluid flow visualization in VR. Those were pretty exciting days. We didn't have any notion of this being a consumer technology at that point. This was really for practitioners who were already deeply acquainted with their data and wanted both to see new insights and also to communicate them to people who didn't have the same kind of familiarity with the data. I took a break from VR, and now we're in some sense in the third wave. I first got reacquainted with VR working at a Seattle-area startup called Visual Vocal, which does mobile VR, sort of think Cardboard-style, for the AEC industry, which is architecture, engineering, construction. 
One of the things I loved about being at the startup is that we were embedded within an architecture company, which meant that I could actually turn to my right and ask an architect, does this make sense? Anybody who's in the high-tech product world knows that it's very hard to get the voice of the customer and to really understand what people are doing day to day. I did that for about a year, and then an opportunity opened up at HTC, which, taking people back, you know, allied with Valve to create Lighthouse-based and SteamVR-based VR stuff. So I've been at HTC Creative Labs for about two years. The main focus of that group is twofold. One is incubation, which is working on new technology projects that are explicitly not on the product roadmap. I have the freedom to work on those. And then the other part of Creative Labs is creating the first version of any software experience you might see on an HTC device. On the incubation side, my focus is, as you can imagine, future technologies, which would be things like augmented reality. I've also done a lot around social VR and trying to figure out some positive incentives to create less toxic behavior in social VR. So that takes me from early computer graphics to modern things, where you can buy a Quest for $300 or $400 and be doing VR very quickly, or you can spend your money on very high-end enterprise gear, whether it be a Varjo or a Vive Pro Eye or something like that. So I've seen the gamut, and it's very exciting.

[00:07:41.224] Kent Bye: Yeah, and so I think anybody that's working in technology today is, in some ways, at this interesting intersection between advocating and pushing forward technological innovation, while at the same time seeing the unintended consequences of technology in our society today. And so I feel like there's this collective reckoning that's happening, and a need to, I guess, draw some bounds as to whether or not there's been some ethical lines that have been transgressed. And then once they're transgressed, you start to try to define what those boundaries are. So it feels like there's a collective process that's trying to reflect on that right now. But at the same time, it's always going to be an ambiguous line. It's never a clear line. If you talk to any philosophers about ethics, there's never a clear threshold that makes this an easy decision. So it feels like there's frameworks and principles that we can look at to be able to start to understand it. And my approach has been to try to talk to as many different people as I can to see if I can see what they're seeing. It's like a cultivation of these moral intuitions: the more diverse the collection of people giving those perspectives, the more we might be able to see what those higher-order patterns are. So I'm just curious to hear about your own journey into looking at these deeper ethical issues and where you're coming from on these issues.

[00:08:54.635] Daniel Robbins: Great. Thanks for the opening. And I want to pick out one word that you said in passing, which was the word decision. I think this is the key point for anybody, any practitioner, and ideally someone who's actually using these technologies: I want us to be very intentional about this. And that's kind of the perspective I bring to this. I don't think we can a priori guess all unintended consequences, by definition, and we also can't make a huge long list of if-then-else rules about how to approach these technologies. So I use the word decision. My own perspective is I'm a practitioner, I'm in the trenches, I'm at a company making these technologies. I'm not a philosopher, I'm not particularly academic anymore, and I'm not a lawyer or someone on the policy side either. So I'm explicitly someone who is working with a technology who needs to make decisions on a daily, sometimes weekly basis. Sometimes they're local to the company I'm at, sometimes they're with industry alliances, sometimes they're working with partners. When we talk about a decision, we really mean someone is faced with trying to decide, do I do A, or do I do B, or do I do C, or some other choice. My hope is that designers can start to think about larger questions when they're making a particular design decision. So say I put on a future device and the first thing I see is the real world, and then I can remove parts of the real world: distracting co-workers, papers on my desk, or coffee cups I haven't cleaned up. Some of that is innocuous, but some of it, as you said, may have unintended consequences. My hope is that we step back and ask the bigger question of, is the thing I'm making better for the world or worse for the world? And that may sound overly simplistic, but it at least leads a designer or other practitioners to start creating what we call a values ladder. So, if you have a values ladder, you start with very lofty things about making the world better. 
And then you would go to things such as, in order for the world to be better, we need to have people be more connected. And then you might say, well, for people to be more connected, they need to actually care about each other, and they need to care about people who are different than them. So this comes back to what you said about diversity of representation, diversity of lived experience, and also perspectives. So, continuing down the values ladder, if I need to care about people who are different than me, that means I need to be able to interact with people who are different than me, which means that I need to be in uncomfortable situations. I need to be able to sit with the unknown. I need to be in situations with people who are different than me, where I can't control the situation and I can't necessarily predict the future. So we've got that values ladder. And then on the other hand, you've got new technologies that allow me to create zones of comfort whenever and wherever I want them. If I'm on the bus and I'm in an uncomfortable situation, I can tune out, these days with audio, slightly with video, and then eventually, with head-worn things, completely. That ability to instantly check out, the ability to instantly comfort myself, is a headwind that works against being able to connect with people and sit with the discomfort I might have in situations that, if I took a positive approach, would lead to deeper connection, and then hopefully create a better world where I cared about people and did things for other people. So that's a walk down the values ladder. And if you keep walking down it, you actually get to decisions that I might have to make on a day-to-day basis at a particular company. So what you didn't see there are any rules. I'm not saying what to do about privacy. I'm not saying what to do about personal property. I'm not saying what to do about aggregation and sharing data across services. 
What I'm saying is that if a designer or another practitioner is very intentional about the values that they're trying to achieve in the world, and if they start pulling those apart to figure out what the impact of those values on decisions is, it starts to become a framework. They can at least talk to the right people, give a pause to what they're doing, and take the time to actually think about, to the best of the knowledge they have at that moment in time, how that decision will affect the world.

[00:12:50.577] Kent Bye: It's interesting. I have a number of thoughts. One is that there seems to be this tension between the individual and the collective whenever we talk about these ethical issues, where you're trying to either give autonomy to an individual, but also think about the collective impact. And so there is a collective social dimension to human nature, but there's also this autonomous, introverted, I-want-to-just-go-into-my-own-world dimension. And so if you're starting with the principle that the most highly exalted value is to be connected, then maybe I actually just want to be introverted and not be connected to anybody, and I want to be able to escape into my technology. And so I think there's actually a range of different temperaments that should be incorporated in this kind of more phenomenological way. Because if you look at a company like Facebook, they have exalted being connected as the highest value, but at the same time, if you're the most connected, then you're erasing all dimension of privacy. So you're taking away our own autonomy over the data that we own and mortgaging our privacy. And so I feel like there's this need for diversity of the different temperaments and approaches, but also to see that there's these dialectics between the individual and the collective.

[00:13:57.662] Daniel Robbins: So I may say some things that are a little unpopular here. Some of it is what you bring up about individualism and autonomy. We need to be very careful about how we stage those and how we frame those. And it's helpful to think about the history of technology, and of who actually goes into technology and who is making these things. I'm going to paint with broad brush strokes and I'm going to say some stereotypes, but I would submit, based on my anecdotal, definitely anecdotal, not data-based experience, that many of the people who go into technology are people who may not have felt empowered in their formative years. They now are ascendant. Technologists are the new landed gentry, and we now, because of that ascendancy, have the ability to create things that give us comfort and perceived convenience whenever and wherever we want them. That sometimes gets wrapped up in various labels about autonomy, but deep down I think that it is trying to satisfy maybe hurts that technologists had when they were coming up in the world. That's totally human. I feel a lot of compassion toward that. And if we take the yes-and approach, I think it is also super valuable to find a balance between that individual empowerment and ways in which we can be steadfast in the face of discomfort, unfamiliar situations, and things we ultimately can't control. So I'm not a Buddhist, but I do read a lot of Buddhist writings on this, whether it be Pema Chödrön, and I love a lot of the writing around vulnerability from Brené Brown. I try to apply that idea of intentionality and the idea of doing things that are not necessarily about controlling my situation whenever and however I want to. So, that's a very long answer. You also said in passing, you know, that Facebook leads to more connection. 
I can push back and say it leads to a particular kind of connection, which is with things that give me comfort, or things that are already familiar, or things that I feel like I have some purview over. Again, if I'm on the bus and I'm encountering people who aren't like me, who aren't in my friend circle or some curated group, and if I'm able to form connection with those people, it's more likely that I'm going to learn about parts of the world that aren't like my part of the world. It is a different kind of connection, and I'm not saying everybody needs to be able to bounce out of that. I'm certainly an overcompensating introvert, like many people in these kinds of situations. But I really do want to invite a worldview on the part of designers and technologists to think about how having connections with the unfamiliar will ideally lead to a more positive world. That's a long answer to a seemingly short question.

[00:16:33.433] Kent Bye: I tend to look to different philosophies to get different metaphoric insights. So Chinese philosophy is one that comes up a lot for me in terms of looking to the yang and the yin, where there is this yang individuation of the ego, and then the yin, which is a lot more about the dissolution of the ego and the more interconnected, seeing how you as an individual are connected to the larger whole. And so I think there's a very yang bias in our culture, and that we're shifting into trying to find ways that we're connected to community, connected to the earth, and not destroying the earth. And there is a bit of this humility that does come with the yin, which is that ego dissolution, but to see that we are fundamentally interdependent and interconnected in a fundamental way. And so I do push against aspects of autonomy and individualism, in that you can only be an individual connected to a larger cultural context and to other people and to the earth. But at the same time, I feel like there's this balance that's needed between the yang and the yin, the individual and the autonomy and the collective. And so I feel like that's part of the tension that I'm seeing playing out right now: we have so much consolidation of power with all these different companies that they are exerting undue influence over the mass collective. And so there's this pendulum swing towards more decentralized architectures, and also the philosophy of decolonization, of trying to look at how, if you only have a small set of authority voices, then you're going to eliminate a lot of those diverse perspectives. So, as I take a step back and look at the larger context of technology, I see that there's architectural decisions that are maybe falling on one side or the other of these polarity points. 
And that there's this Hegelian dialectic that's going back and forth, but that there's a lot of trying to figure out where you're at and trying to make a bold decision if you're going to be a polarity point, to say, hey, we're in this real mass-centralization context; maybe we need more decentralized architectures. So for me, that's how I think about it: in the end, we want to try to achieve balance, but, if anything, we're trying to counteract the larger biases of these systems with alternatives, either technological architectures or cultivation of culture. Maybe there needs to be some policy, and maybe there's economic factors that can kind of swing this imbalance that we see in our culture.

[00:18:54.047] Daniel Robbins: I will agree. I mean, you mentioned a bunch of things: policy, regulation. I think it's yes-and to all of those things. We're figuring this out. If you talk to people in the field of VR technology who say they've figured this all out, they're either lying or they're delusional. We are still trying to figure these things out. I wouldn't necessarily call it a race, because those of us close to these technologies know it's going to be quite a while until we have AR on your face in the outside world with all-day battery life; it's going to be a ways off. So we have some breathing space. That said, I also want to make sure that we understand that having a dystopian worldview, sort of a Black Mirror worldview, won't necessarily lead us toward coming up with the best designs. So, I want to go back a little bit. You said autonomy. I think there's another element of autonomy, and you talked about decentralization as a possible way to give autonomy while still having the collective. I'm not a blockchain person or a network architecture person, so I can't weigh in from a technical point of view. What I can offer instead, from the designer point of view, is this question: are we creating these systems in order to give you a sense of empowerment or a sense of control? And I think those are very different things. A sense of control is the belief that one can stop time, essentially, that one can decide how things proceed from the now. A sense of empowerment, though, is a different philosophy that instead asks, what are the qualities that I, or someone who I am making something for, already bring to the world? What are their gifts? And how can I make a technology that in some way amplifies those gifts? This leads into another lens that I use as a designer. This is new thinking on my part. 
It may not be new thinking for other people, but it's a way that I would love designers who are grappling with these very hard decisions to start thinking about, which is this: in design school, the classical way that we talk about interface design is identifying pain points, figuring out the seams and the gaps in the world, and then filling some of those in, in order to address real substantial problems, whether they be displacement, hunger, or lack of education, and some of them centered around convenience. The thread that runs through all of those, which I think we should interrogate, is this idea that the world is full of things that need to be fixed. And if a designer walks through the world looking for pain points all the time, we can sometimes develop the tendency to be overly negative about the world and to be overly critical. And being critical also creates this worldview where we are trying to fix things in a way that we think lets us control them. If instead a designer starts with the assumption that everybody is doing the best they can, then we try to arrive at decisions that, again, amplify the things that people already can do. Let me bring this to practice. So, say I'm on the bus, and I keep coming back to public transportation because I think it's an amazing way to be connected to people who are different than oneself. And I'm blessed that I am able to take public transportation to work, so that's great. And I'm an armchair anthropologist in that setting. So say I'm on the bus and I see two teens using one smartphone with one set of headphones, where one has an earbud in one ear and the other person has the other earbud in their ear. As a designer, I can look at that and I can say there's a pain point, there's a seam, and there's something that needs to be fixed. 
I can say, you know, maybe their Bluetooth technology isn't robust enough, maybe the headphones cost too much, maybe it's too hard to figure out, you know, maybe only one person has a Spotify account. You know, I can look at a whole raft of problems with this, or I can step back and I say, Oh my gosh, these people are having essentially a real-time synchronous social sharing situation. They have figured out how to hack the system and share one stream. They can still have situational awareness because they have one ear available. They can still hear each other back and forth. How can I as a technologist make what they're doing even better? How can I figure out ways that that can be more of a magical moment? So that's just one example. If I take that lens about amplifying the ways people are repurposing technology and apply it to augmented and mixed reality, I start to find some new emergent decisions that don't necessarily compromise autonomy and don't necessarily compromise, you know, personal data integrity or other kinds of things that become important to me as I move through the world, both as an individual and as part of the larger group.

[00:23:19.600] Kent Bye: Yeah, and as I look at the different philosophical approaches to ethics, there's utilitarianism, trying to look at what is going to be in the best interest of the most people, and then there's Kant, you know, talking about how there are ways in which you shouldn't violate someone's autonomy. So if you're going to be using data against somebody, that would be against a Kantian imperative. And then there's going back to the ancients with Aristotle, looking at the moral virtues, looking at truth and beauty, goodness and justice. And there's certain higher-order moral virtues that we're trying to embody. And so it sounds like in some ways you might be advocating for trying to figure out what those virtues are that you're trying to embody within the technology, and from there start to look down at the different design decisions that would try to embody that specific moral virtue. Is that correct?

[00:24:08.881] Daniel Robbins: I would agree with that. And again, I would say I'm just at the beginning of this journey. I love the phrase that Brené Brown uses as a challenge she puts out, which is really hard for me to apply in my normal world, which is: make the assumption that everybody's doing the best they can. So whether that be two teens sharing headphones, or somebody I see on a plane who has three phones and a tablet and a laptop, you know, make the assumption that we are all doing the best we can. We can even apply that sometimes to certain companies who might be making decisions that we don't agree with. At least if we step back and start with, you know, they're trying, how can we amplify the good parts of the decisions they are making? But again, these are just beginning thoughts on my part. I also want to push back a little bit on when you talked about sort of the canon of philosophy. You mentioned sort of Aristotle and Plato. That's great. That's sort of a Western point of view. I also want to open the room a little bit more and invite other philosophies, some of them newer and some older, but from a broader part of the world, into it as well.

[00:25:06.394] Kent Bye: Yeah, I identify as a pluralist, so I do try to look at Chinese philosophy and other philosophies. But is there, say, Buddhist ethics that you look at, or what other types of non-Western philosophies are you looking at?

[00:25:16.151] Daniel Robbins: Yeah, I mean, as you've noticed, there's a thread running through a lot of this, which is the Buddhist idea about concentrating on the now and mindfulness, and that we transition from pain into suffering when we're trying to control too many things. And you could argue that that is a thread that runs explicitly through technology, that desire to control, whether it be the sophomoric ideas of IoT, you know, having my light switch turn on exactly when I want it to, or whether it be something that we might have to grapple with, like when technologies should nudge us and game us so that we do behaviors that help other people. These are all about control in some sense.

[00:25:54.275] Kent Bye: And so when you're looking at the broader field of immersive technologies with virtual and augmented reality, like what are some of the big moral dilemmas that you see that the industry has to face and the different design decisions around those?

[00:26:07.219] Daniel Robbins: So many of the things I'll state are not original by any stretch. I mean, again, you can watch Black Mirror and worry about these things, or you can just move in the world. The kinds of things that I'm most concerned about are, as I alluded to, the ability to stay in the moment and stay connected to things that are important to us. But there's also a kind of dependence that can happen. If I have the inability to move through the world without using these technologies, and I'm a typically abled person, that can become problematic. What am I losing when I can capture any moment and remember it, re-remember it, you know, experience it later on? There are some wonderful things to that, but the inability, or the diminished ability, to actually remember things is also problematic. I also worry about slow creeps and desensitization to certain kinds of changes that these devices will bring to us. If I can turn on the rose-colored-glasses mode with my future AR device and remove homeless people from my view and remove distractions and remove advertisements I don't want, assuming I can pay for all this, then I'm not encountering the world as it is. And that is something that is also very important to Buddhist philosophy: the idea of what is called truth-seeing, or seeing things as they actually are. So I believe that we need the ability to still see the world as it is while empowering the things that I'm better at.

[00:27:26.173] Kent Bye: And yesterday at SIGGRAPH, I had a chance to moderate a session on the ethical and privacy implications of mixed reality. And I'm just curious if you had a chance to sit in on any of that presentation, and if you have any takeaways from the talk that we had with people from Magic Leap, Mozilla, 6D AI, and Venn Agency.

[00:27:42.982] Daniel Robbins: Thanks for organizing that. That was really great. It's nice to get those kinds of dialogues going. I also have to ask who wasn't at the table. There are certain companies, you know, HTC wasn't there, Oculus wasn't there, Google, etc. You can't have everybody there, and there's sort of a nice size that leads to a good discussion. But we need to start this. You know, as I learned at a race forum I went to once, if you're not at the table, you're what's for dinner. So we need to bring lots of different voices into these things as well. I appreciated the panel for the time I was able to be there. One thing that was very interesting to me is that there was one panelist who got quite emotional during the panel. And I could sense some frustration and impatience on her part where the other panelists weren't quite getting what she was saying. And I think it's wonderful that there's emotion playing in this, because we are ultimately human beings, and we have emotions, and emotions are wonderful guides and teachers. That said, I think it would have also been helpful for that panelist to understand that technologists and legal scholars and other kinds of practitioners have very particular language they use to refer to either particular technologies or particular frameworks for applying these things. So sometimes that language gets in the way of communicating across these different groups. I don't think we need to begin every session by having a glossary thrown up on the board, but we do need to understand what has been said before, the language that is used to talk about it, and the different frameworks. So one of the things I'm doing in my own ethical journey through these is just doing a lot of reading and a lot of talking to other people who've thought about this much more deeply than I have. So back to the panel. I thought it was great. I really appreciated the representative from Mozilla.
Of course, they're in a good position that they have fundamentally a manifesto in some sense about privacy and control over how you belong online. I also really loved how that panelist brought up the trade-offs between having persistence and having ephemerality. And what does it mean to have some piece of information or a link or a door that is passed from one person and then can pass to another, and then allows for uncomfortable situations to occur in a context for which the creator first felt that they had control over the situation. I love that she didn't have an answer, but that she just posed that idea that there's trade-offs in all of these things that you make.

[00:30:10.168] Kent Bye: Yeah, I feel like there's a couple points there. One is that Samantha Chase, who I think you're referring to, is really advocating for a lot of the self-sovereign identity and data autonomy. And I think that in a lot of ways, it's a polarity point to the centralization and the control from other people. I think there's a general trade-off in technology: if you're going to have freedom, you have responsibility. So if you want the freedom of not having people manage all your data and basically being a model of surveillance capitalism to get all of this information about your life, then there's a certain amount of responsibility that comes with the freedom from that. So I feel like there's this tension and dialectic between the freedom of that autonomy and the collective behaviors and the trade-offs that you have. So I think she was advocating for something that was a little bit different from the other perspectives. And also, this is SIGGRAPH, and this is a panel that I pulled together kind of at the last minute. I was a part of the VR Privacy Summit, and I was really relying on people who were at that summit and who had, I thought, interesting architectural insights to be able to talk in an authentic and real way to a larger community. I would love to be able to facilitate discussions with Google and Facebook and HTC all on the same panel with these other perspectives. The problem I find as an independent journalist is that publicly traded companies come with so many layers of corporate bureaucracy, where everything that is going to be said has to be extremely and precisely vetted before it happens, that it becomes almost untenable.
So I'm hoping that this is maybe the start of a larger conversation, and that I can maybe pull in more people like you from HTC, like I'm talking to you right now, speaking on what you're looking at (and I'm not sure if you're officially speaking on behalf of the company, if you can). But I find that the larger the company, the more layers of concern around being publicly traded, and the less free they become to authentically talk about some of these issues and topics. And so I would love to have an authentic conversation with everybody in the industry, but I find that a lot of times I can only have those levels of authentic interaction if I'm off the record. So I'm happy that I'm talking to you on the record, but that's a larger dynamic that I have. Yes, I would love to live in a world where that is just an easy snap of the fingers and I could do that, and hopefully I'm building up to that point, but I think it takes a certain amount of trust in the process. And when it comes to being able to talk about some of the real privacy implications, I've found when I'm talking to Facebook, I'm like, can you talk about this? And they're like, well, we haven't released this as a feature yet. So because we haven't released this as a technology, we cannot talk about it, even from a philosophical sense. So there's a certain amount of, we have to wait until the technology exists, and eye tracking and these biometric sensors are actually being shipped, before we can actually talk about the philosophical implications. So I kind of run into this pragmatism that's like, unless we have shipped it, we're not going to talk about potential futures.
And that's another dynamic that I've found: it's a little bit difficult to have these conversations about the future of these ethics when there's this tie to a very strict pragmatism of not being able to talk about something until it's actually shipped.

[00:33:26.110] Daniel Robbins: So I appreciate that you're being very polite, but I might put a more candid take on it, which is that it's extremely frustrating and limiting when, as you said, public companies, and some still-private companies, need to create walls and boundaries around their liability. And we're talking about risk management here. There's no other way to put it. That said, I also want to have compassion for the individuals that you talk to, whether they be at a privacy summit, or one-on-one like we have the privilege to do right now. All these individuals, you know, have mortgages and rents and families and people they're caring for, and they need to have job security as well. So there's a tension when you talk to individuals between their own sphere, we talked about autonomy before, and then also trying, as much as they can, to push the entities that they work for to do more positive things. I'm very lucky right now, and I feel a deep sense of gratitude that I have a lot of freedom to talk about these things. You talked about a certain company's reticence to talk about technologies before they're baked. I'd rather flip that on its head and say, because these technologies aren't mature, now is exactly the time when I have the freedom to make statements about them. Compare that with something like automatic flight control and compensation systems: talking about them only after a tragedy happens may be too late to ask what kinds of skills are being compromised when this automation takes place. In any case, going back to companies talking about things and people being open, I would encourage all these companies as much as possible to take a point of view. Not necessarily rules, we will do this or we won't do that. But what I would love, and it would be so gratifying, is for companies to have transparency about their motivation for the decisions they're making.
If you are a company and you are making decisions in order to increase ad revenue, that's totally fine. I just want the company to be honest about why they made a decision about where your data is sent or who they're sending it to, and that it is in service to an advertising engine. Or if you're doing it in order to maximize the number of users on your platform, that's fine. I just want the honesty so that I, as a human moving through the world, can understand how these decisions are affecting me. It may be a little pessimistic, but I do not think we will somehow magically get to a place where I'll have control over my data. But I would at least like to have transparency about knowing where it's going, so that I can then make decisions about what I do as an individual and as someone who can influence my kids or people around me. Another notion that I would love to have seen come up in the panel, but I'll introduce right now, is the idea of progressive trust. So, as we're talking about a lot of these technology entities and new technologies, whether they be on your face or on your eyeballs, you know, because contact lenses will come along, we tend to talk about binary trust, such as: this company violated some moral imperative that I had, they made me feel bad, so now I'm going to revoke that trust completely. We think of it as an on-or-off thing. We think about that with banks. You know, my bank is hacked, even this morning there was a big hack with one of the large entities, and so then what do I do? Do I close my account? My problem with this idea of binary trust is that it's not the way we actually operate with real other people, with humans. If I have a friend and I go out to a bar with that person, and every time we go to the bar they start embarrassing me, I can make a decision not to go to the bar with that friend but still be their friend. Right now, I don't have ways of having progressive trust with different corporate or technological entities.
It's either on or off. Yes, there are sometimes granular controls about which kind of data we're sharing, but they're not tied to an actual interaction that I'm having with that entity. I would like, when there's a breach, for me to be able to say, OK, from this point on, I'm only going to do certain kinds of activities with this entity. Or the first time I put on some pair of new glasses and I see advertisements where I didn't think they would be, or I see that certain kinds of annotations aren't the kind I want, I don't want to have to be forced to then take off the glasses. I want to be able to say, this is the way I want to interact with you. This is how I'm going to dial that up or dial it down and keep changing that in a progressive way.
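The progressive-trust idea Robbins describes here could be sketched in code: rather than one revoke-everything switch, trust is tracked per activity and dialed up or down independently. This is a minimal hypothetical illustration, not any real platform API; all class and method names are invented for the sketch.

```python
# Hypothetical sketch of progressive trust: per-activity trust levels
# that can be dialed independently, instead of one binary on/off switch.

class ProgressiveTrust:
    LEVELS = ["none", "minimal", "normal", "full"]

    def __init__(self):
        # Each activity with an entity carries its own trust level.
        self.activities = {}

    def set_level(self, activity, level):
        if level not in self.LEVELS:
            raise ValueError(f"unknown trust level: {level}")
        self.activities[activity] = level

    def dial_down(self, activity):
        # After a breach or unwanted behavior, reduce trust for just
        # this activity -- the relationship itself survives.
        current = self.activities.get(activity, "normal")
        idx = self.LEVELS.index(current)
        self.activities[activity] = self.LEVELS[max(0, idx - 1)]

    def allows(self, activity, required="minimal"):
        level = self.activities.get(activity, "none")
        return self.LEVELS.index(level) >= self.LEVELS.index(required)


# The bar-friend analogy: dial down one activity, keep the others.
trust = ProgressiveTrust()
trust.set_level("messaging", "full")
trust.set_level("location_sharing", "normal")
trust.dial_down("location_sharing")                 # now "minimal"
print(trust.allows("messaging"))                    # True
print(trust.allows("location_sharing", "normal"))   # False
```

The point of the sketch is that a breach demotes a single activity rather than forcing the user to "take off the glasses" entirely.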

[00:37:37.439] Kent Bye: I feel like, with podcasting as my primary medium, I find that there's certain topics that people won't be able to talk to me about in real time, like Google or Facebook about privacy, as an example. You know, I have had an opportunity to talk to people at Facebook, and the privacy architects, and actually talk to people about that, but I still find that there's a certain layer of legal risk mitigation, of not being able to really talk about the deeper values. Maybe there's principles where they're trying to open up enough possibilities for their engineers to have enough latitude to do what they need to do to solve the engineering problems they have, but to actually sit down with somebody who's in charge and have an authentic conversation about some of these things becomes difficult, because there are these many layers of risk mitigation. It's a high-risk thing, first of all, for them to even have that conversation. But I think there's not spoken, embodied, real-time conversations happening around these topics. And I think it's hard to build trust with people if they're not able to sit down and have an authentic conversation about it.

[00:38:37.815] Daniel Robbins: I completely agree. But I also want us to be compassionate with the people at these companies, whether they're individuals, you know, who are in the trenches making design decisions, or whether they're PR people who are essentially trying to do their job. I also think we need to have compassion in that most people aren't really trained, and it's a skill, most people aren't trained in how to talk about these kinds of things. We're not given ethical frameworks in an explicit way. We sort of acquire these through osmosis. You know, maybe you have a formal religion you grew up with, maybe you subscribe to some philosophy you took on in high school. But again, we aren't yet fluent in talking about these things in the technology world. So maybe this kind of conversation you and I are having, a one-on-one conversation, is a way of giving people a little understanding that you can do it without losing your job. You can do it without putting your company in an uncomfortable position or making promises that one can't keep. Instead, I think we can make very definitive statements about what's important to us and why we're making certain decisions.

[00:39:40.181] Kent Bye: Are there any specific ethical frameworks that you look to as being robust enough to be able to start to at least use a first take or if there's a range of different ethical frameworks that you have been looking at?

[00:39:50.796] Daniel Robbins: I'm just at the very beginning of this journey. And so right now I'm really trying to learn from people who've thought about this much more deeply than I have. You know, even though my thoughts are still half baked, I still have a job and I do have to make decisions in that job about ethics. So that does come up for me and I'm trying to do the best I can. The best thing I can do is to talk to as many people as I can. And when I'm doing that talking to do as much listening as I can.

[00:40:15.694] Kent Bye: Yeah, it was interesting for me to hear from Taylor Beck from Magic Leap, where they're really taking a privacy-first design approach, where he's really embedded in, as almost like an interdisciplinary, cross-functional privacy ethicist, looking at the design decisions at every level. Do you know if that is happening at HTC? Is there somebody who's the equivalent of that privacy-first role, or someone who's a philosopher, an ethicist, who's looking at these different layers of the stack of these architectures? Is that you, or are there other people thinking about the ethics of design embedded into these engineering companies?

[00:40:51.053] Daniel Robbins: So HTC, like any other company, is made up of human beings, and I'm hoping, and I'll take, again, that perspective, that everybody's doing the best they can. Everybody making decisions there, whether it be what food to put out in the snack room, or which NDAs to sign, or which kind of audio connector to put on a device, I'm going to start with the assumption that everybody's trying to do the best they can in making those decisions. So that's the highest-level thing. A lot of those decisions aren't always the ones that we in the outside world might like. There are often constraints around suppliers and time to market and price points and, you know, competitive positioning and things like that. So that's some of the context. In terms of actual privacy work, yes, we do have people on the legal team who, again, mostly from a risk management point of view, deal with privacy issues like GDPR compliance and geocompliance around the language that's used, and how we either collect or preserve or disseminate or communicate the storage of personal information. This is becoming really important when we do very specific things like eye tracking, which, as came up in the panel, means that if you want to do a certain class of functionality, then you need to enable the data to be processed. So yes, there are people dealing with that and with how to communicate that to the end user. As to whether there are formal ethicists at the company, I don't know of any right now. I'm making a stab at it, but it's not my training. I'm a designer, and I'm also doing the best I can, hopefully, by learning about these things. And again, talking to people who have much less time than I do at my company, who are trying to make big decisions, those dialogues are ongoing.

[00:42:24.632] Kent Bye: Well, one of the tricky things that I find right now at this moment in history is that with all of these new biometric data sensors that are going to be coming online, whether it's eye tracking or galvanic skin response or being able to detect your emotions while you're in a virtual reality experience, more and more we're going to be able to get into these real affective, emotional, intentional aspects of what we're looking at, what we're paying attention to, what we found valuable, by just even head pose combined with eye tracking data. So in privacy law there's personally identifiable information. And a lot of this information, like just eye-tracking data on its own, may not be personally identifiable. But if you have that person's identity, then you are able to potentially correlate all of this information about someone and do this huge amount of data aggregation. Now, for someone like Facebook or Google, whose business model is surveillance capitalism, to me there's a huge ethical question of whether or not all this biometric data should be private, whether it shouldn't be recorded, whether it should be treated like medical information, and whether we shouldn't be warehousing it and doing AI training on it, trying to eventually reverse-engineer our psyche. And so, now that we are starting to see some headsets with biometric data, and this may not be a question that you can answer, but how is HTC looking at biometric data and eye tracking data, and trying to look at the different range of potential risks that are involved with now having access to an incredible amount of information about someone's psyche?

[00:43:55.983] Daniel Robbins: I'll give a multi-layered answer to that, and the order of the cake layers might be random here. On some level, they're doing the standard things that all large companies do, in that they're, you know, going to have terms of service, and they're going to have buttons that you have to click yes on to accept the collection of certain data like eye-tracking data. As to whether that data stays on the device, whether it's tokenized before it's used by other parts of the system, whether it goes to a third party or not, I can't speak to that part right now, mostly because I don't actually know that information, so I'd be speaking out of turn. But yes, there's a lot of work just happening on getting the very basics of the legal stuff right. A tension that comes up, and this came up in the panel also, and I thought this was kind of a cop-out from one of the panelists: I think it was from 6D AI, where Matt basically said, you know, we're creating a tool. What people do with the tool is their own issue. We'll have the third parties deal with all of the data and whether they store it or not. Again, I think that's a little bit of a cop-out. I would hope that whoever is creating the data in the first place would have a very intentional thought about what happens with that data. Okay, now I'm going to pop out a little bit higher than that, which is: no matter what we do legally, no matter what we do with sandboxed data collection and tokenization and only sending what we think of as anonymized streams to higher-level processing, whether it be through edge computing or actually sending back to some server-based processing, I believe some of the latest research is showing that anything can be de-anonymized. So I think we should just start with that assumption: that the smallest number of markers, and that's going to be an ever-shrinking number of markers, can be used over time to de-anonymize.
So I think it's a little bit tilting at windmills to think that some amount of sandboxing or blockchain lockdown on a device is going to somehow protect us in that manner. For me, more interesting is how do we train ourselves as human beings, as people who are growing up, or people who are working with these new technologies, to understand the consequences of using these kinds of devices. So an example, and again this is anecdotal: I have a 13-year-old daughter who, on the positive side, is able to move through the world of technology and society and consumerism in a very nuanced way. She can sit in a dentist's office and read a People magazine and totally enjoy that and have fun and comment on the dresses and the fashion and what the stars are doing, but she can also deconstruct the advertising that's on the following page. And she can ask, and she does ask out loud, who is advertising to me, what message are they trying to send me. So if we start with the assumption that everything is going to be de-anonymized, that we have limited control over our data, then we need to come to a point of how do we educate people so that they can be like my daughter, who has that ability to move between different worlds, the worlds of pleasure and the worlds of consideration and intentionality. So I want to move consumers to that place. That's a lofty goal. It's very vague at this point. I don't know how to get there. As we know, nobody reads anymore, to a first approximation. So I don't know what the answer is, but I'm at least setting it as a goal.
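Robbins' claim that a small number of markers can de-anonymize someone is the intuition behind k-anonymity-style analysis: each additional "anonymous" attribute shrinks the set of people consistent with the observation. A toy sketch, with records invented purely for illustration (interpupillary distance in millimeters stands in for a headset-observable biometric marker):

```python
# Toy illustration of how a few "anonymous" markers single a person out:
# each extra attribute shrinks the anonymity set. All records invented.

records = [
    {"zip": "98101", "age_band": "30s", "handedness": "right", "ipd_mm": 63},
    {"zip": "98101", "age_band": "30s", "handedness": "right", "ipd_mm": 58},
    {"zip": "98101", "age_band": "30s", "handedness": "left",  "ipd_mm": 63},
    {"zip": "98101", "age_band": "40s", "handedness": "right", "ipd_mm": 63},
    {"zip": "98102", "age_band": "30s", "handedness": "right", "ipd_mm": 63},
]

def anonymity_set(records, **markers):
    """Return the records still consistent with the observed markers."""
    return [r for r in records
            if all(r.get(k) == v for k, v in markers.items())]

# One marker leaves 4 candidates; four markers identify exactly one.
print(len(anonymity_set(records, zip="98101")))                    # 4
print(len(anonymity_set(records, zip="98101", age_band="30s",
                        handedness="right")))                      # 2
print(len(anonymity_set(records, zip="98101", age_band="30s",
                        handedness="right", ipd_mm=63)))           # 1
```

Real de-anonymization research works against far richer streams (gaze traces, gait, head motion), but the shrinking-set dynamic is the same.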

[00:46:59.687] Kent Bye: And just talking to different people from the web community, or just looking at the different levels of the stack: there's the operating system layer, so either like Microsoft Windows, or if you're running Android or whatever operating system is on these standalone devices. Then you have an actual platform and hardware layer that has a certain level of control. Then you have the software that is from these manufacturers and the platform creators. And then you have the app developers that are actually creating the applications. And so at each layer of the stack, the operating system layer, the hardware layer, the software layer from the platform, as well as the application developer, there are different layers of access to different amounts of data. And so there seems to be this dependency chain where, for one, it's hard to know who's responsible for, let's say, security breaches, or to see if there are different attack vectors to gather information from these different disparate sources. But there's also the policy layer, that is, what is and is not okay, and how do you enforce that? Even if you say something is against the terms of service, how do you detect it, and then how do you enforce it? So there seems to be a tricky issue in that there are a lot of layers of trust there, not only with these headset manufacturers, but at the operating system layer, as well as at the layer of the application developer and what they're doing with the data.

[00:48:21.518] Daniel Robbins: And, maybe I missed it, but there's also another layer you left off, which is what end users, or as we refer to them, people, are making with these things. So I have a nine-year-old son. If he's in Minecraft and he's making a creation, and then that creation goes out into the world, other people can bang on it and do wonderfully positive things, or can do negative things and repurpose it. There's responsibility that also lies with what you might think of as end users. And I think that's a great thing. I think it's an awesome thing that we have a remix world, but we do need to understand it. It goes all the way up to people at the end. I want to take it back to the practitioner level, because that's ultimately what I am. I'm not a policymaker. I'm not a privacy or security expert. I'm not a developer. But I'm a designer, and we definitely have to think about these kinds of things. The kind of question I'm wrestling with right now in regards to future devices is SDKs. You know, which of these very basic capabilities beyond, say, SLAM, like recognizing surfaces or recognizing people, should be in the underlying services that are only provided to a first party, and which should be in an SDK so that third parties can use them to enable their apps to have different and new functionality? We don't have the answer on these things, but for every one of these capabilities that we make a decision to put into an SDK, there is an implication. And I think it's really important as a designer to write down, and I do love to use this very non-traditional tool, which is called writing, to write down why I made a certain decision. So if I and my colleagues decide to expose the ability to recognize your boss when they're coming near you while you're wearing a pass-through AR device, I need to write down why I made that decision.
You know, essentially have a decision matrix and understand what the pros and cons are, and then I can come back to that later. We never make perfect decisions. And again, we can't control the world around us, but I at least want to know the trail, the paper trail of why I made that decision and come back to it later on when I have these conversations over and over again as unintended things happen.
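Robbins' "paper trail" practice could be as lightweight as a structured decision record per SDK capability. The sketch below is hypothetical (the field names, capabilities, and rationales are invented to mirror his examples), but it shows the shape of a decision matrix you can query later when unintended consequences surface:

```python
# Hypothetical sketch of a capability decision log: one structured
# record per SDK-exposure decision, so the rationale can be revisited.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CapabilityDecision:
    capability: str            # e.g. "person recognition"
    exposure: str              # "first_party_only" or "public_sdk"
    rationale: str
    pros: list = field(default_factory=list)
    cons: list = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)

decision_log = [
    CapabilityDecision(
        capability="surface recognition",
        exposure="public_sdk",
        rationale="Low re-identification risk; broadly useful for placement.",
        pros=["enables third-party content anchoring"],
        cons=["could be used to map private interiors"],
    ),
    CapabilityDecision(
        capability="person recognition",
        exposure="first_party_only",
        rationale="High privacy stakes; revisit once a consent model exists.",
        pros=["'boss approaching' style alerts"],
        cons=["bystander tracking without consent"],
    ),
]

# Later, when something unintended happens, the trail can be queried:
restricted = [d.capability for d in decision_log
              if d.exposure == "first_party_only"]
print(restricted)  # ['person recognition']
```

The value is less in the data structure than in the discipline: every exposure decision carries its pros, cons, and rationale forward to the next review.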

[00:50:20.008] Kent Bye: And so for you, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?

[00:50:27.430] Daniel Robbins: So these aren't going to be on ethics necessarily. These are the kinds of design questions that I'm trying to answer in regards to future technologies. And again, I'm doing incubation. So these are not product roadmap things. These are not an indication of where we're going, but any company in this space needs to be investigating these kinds of things. And some of these we thought were kind of solved problems, but now we have new form factors and people using things in new ways. So some of the basics are notifications. You know, when, how, where do I see a notification? If we think about it, most smartphone apps take the point of view that they have your full attention, or that they are at least the only digital thing that has your attention in that moment in time. In my early work at Microsoft Research, we always had this goal of making glanceable interfaces, which are things that you can give your attention to very briefly and then come back out of and be back in the real world. Fast-forward to 2019, and I would argue that very few of our interfaces on our phones are designed to be glanceable. Most of them are designed to be the opposite, which is that they take you in and they try to keep you there as long as they can. So if we come back to notifications, I have to ask myself, what is the least distracting but most salient way that I can show you something while still giving you situational awareness of the general things that are happening around you, and maintaining safety and understanding of how your body is moving in the world? So notifications is one. Another is how do I actually grab and actuate digital items in my physical world? Many of the interfaces we see today on the show floor, at SIGGRAPH or other places, rely either on voice or controllers, or on fine-grained finger interactions like grabbing and touching. I think those are interesting things to explore.
Me, personally, and in some of my efforts at HTC, I'm much more consumed with how I can maintain my ability to use my hands to do the physical things in the world that I want to do, while also doing very basic things like indicating interest or selecting something. So far I haven't seen a technology that allows this, and I'd be excited if someone has it: how can I detect things like a finger snap or one finger touching another finger? This is not fine-grained gestural control, but doing a very small number of actions to indicate, say, selection. What are the ways that eye tracking is going to work with other kinds of body motions? If you talk to experts in eye tracking, which I'm not, they will say that you can't really use dwell time as a way to select things. So we can use the eyes to indicate degrees of interest, but then we need to engage more intentional selection techniques, using maybe our fingers or our voice, to closely couple a mixed-modality interaction. The other kinds of things I'm very interested in are to what degree information should persist at particular locations. When you look at a lot of first-round AR interfaces, they tend to take information and commands and put them in a fixed location in the world, and then you go to that location to interact with them. That's fine, but it may violate convenience. Why can't I have things come to me? You know, that was kind of the promise of the digital world. So there's a tension between spatial memory and a kind of empowerment. So those are a sampling of the kinds of fundamental interaction questions that I grapple with on a daily basis as I create the standard sort of wireframes and mockups and vision videos for internal use as we explore these new technologies.
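The mixed-modality interaction Robbins describes, where gaze indicates interest but a small discrete gesture confirms (since dwell time alone is unreliable for selection), can be sketched as a simple event loop. Everything here is a hypothetical illustration; the event names and gesture labels are invented:

```python
# Hypothetical sketch of mixed-modality selection: gaze indicates
# *interest* (it highlights, never selects), and a small discrete
# gesture like a finger snap *confirms* the selection.

def run_selection(events):
    """events: sequence of ('gaze', target) or ('gesture', name) tuples."""
    gaze_target = None
    selections = []
    for kind, value in events:
        if kind == "gaze":
            gaze_target = value          # highlight only; no dwell-select
        elif kind == "gesture" and value == "finger_snap":
            if gaze_target is not None:
                selections.append(gaze_target)   # commit on explicit act
    return selections

stream = [
    ("gaze", "thermostat"),      # looked at it; nothing selected yet
    ("gaze", "door_lock"),       # interest moves on; still no selection
    ("gesture", "finger_snap"),  # explicit confirm selects door_lock
    ("gaze", "thermostat"),
]
print(run_selection(stream))  # ['door_lock']
```

The design point is the close coupling: the implicit channel (gaze) never commits an action on its own; only the explicit micro-gesture does.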

[00:53:46.318] Kent Bye: Yeah, one of the big takeaways I had from the panel yesterday about ethics and privacy was that, I think it was Taylor Beck and Diane, they were both saying that privacy is very context-dependent, and that it changes from culture to culture but also depending on what context you're in. And so part of what I've been trying to do, at the talk that I gave at the Augmented World Expo, was to give a cartography of the human context. And it's bounded, and so it's going to be incomplete, or it's going to be complete but inconsistent, one of the two. But it's going to be at least our approach to try to take something that feels as unbounded as the entirety of all human experience and try to understand the different contexts within that. And so one of the things that I think about the difference between VR and AR is that in VR you can completely shift your context. Like, you can be at home, and then all of a sudden you're at work, or you're hanging out with friends, or you're going on a date. Those are all very specific contexts. But in augmented reality, it feels like you're still going to be in the center of gravity of whatever context you're in at that moment, but you're able to subtly shift context, or add context, or amplify context by doing a layer of augmentation. But more or less, you're still going to be in the world and in that primary context. Maybe it'll be possible to do a full context switch with AR, but maybe that's not what we want, with people dipping out of any context at any moment. But I'm just curious if you think about that, a map of human contexts, and try to think about these different applications of augmented reality through the lens of how you either augment or subtly shift context within any given context.

[00:55:13.579] Daniel Robbins: So I want to go back and make sure that I give you some credit for some of the exercises you're engaging in, and I think that's totally wonderful. I think it's also great that you're being honest that no consistent system could be complete, etc. I wish all designers and people who are stakeholders in product design could engage in these exercises with the understanding that they're imperfect, and that there is no answer. But at least engaging in that intentional kind of exercise, drawing the bounds and understanding the different contexts and spheres of influence and proxemics, all these kinds of frameworks, is really wonderful. Again, none of them are perfect, but they all work until they don't work. So I want to, again, give you credit on that. That's a great exercise to do. Perhaps we'll create a workshop together, and we'll go around to different companies and get people to go through this, and then there'll be some crying, too, and that'll be wonderful. I want to put another lens on the way you're thinking about some of these AR issues. So you had asked about the degree of context, and to what degree someone is actually consciously embedded in the place they're at. I do believe there'll be a dial on these devices, and people will be able to completely check out. I think that's just going to be there. Whether you think it's a good thing or a bad thing may not matter as much, but it is going to be there. I will try my best as a dad to raise my kids with good decision-making ability about how they use these things. Like many technologists, I probably have more reservations about these things, and more rules about the use of technology in my home, than people who are not in the technology field. The irony is apparent to me. I also want to talk about some of the frameworks I use around privacy, and the ways in which I get to cheat. Most of my efforts around future technologies are focused on enterprise.
So I don't have to have the same consumer lens on a lot of these decisions around privacy and data integrity. Because when you are in an enterprise setting, again, hoping it's a place that respects you as a human being and treats you well and isn't abusing or oppressing you, the degree to which you control these things is gated by the value transfer that you're doing with that organization. I just want to lay my cards on the table that I don't have to think about a lot of the consumer things, other than when I think about myself as a human being moving through the world, and a father and a husband, things like that. Another principle I want to bring up, and this is again a very new thing I'm dealing with, is what I think of as a distinguishing characteristic of AR interfaces: it is a shift in how we actually give signals to these devices. To a first approximation, most interfaces on desktops and smartphones and tablets and other kinds of devices are all explicit interaction. I press a button, I scroll something, I tap something, I open something. Those are all explicit interactions. As we move to more and more biometric and affective sensing that's either on my face or body-worn in some way, many, many more of the commands that are given to these systems are going to be implicit. They're going to be based on actions that I am doing that I may not even be aware that I'm doing. That is a shift in type and kind that designers need to really grapple with, and that we need to communicate to the end users. Again, I don't think everybody's going to have control over all these things, and I think it's a little specious to try to give people control, but we need to communicate how the decisions are being made on these devices, and we need to communicate how those interactions are then being propagated through a system to other entities.
So whether it's an eye-tracking system that notices I'm not giving attention to a presenter, or whether it's an engagement sensor that understands that I'm just scrolling past a bunch of text and not really paying attention to it, those are all implicit signals. You can think of them as an eventing system that a system could decide to actually pay attention to, to enact commands and decisions that are made upstream. So again, I don't have the answer here, but think about it as a framework. What happens when we move to implicit interaction versus explicit interaction?
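Daniel's framing of implicit signals as an eventing system can be sketched in code. This is only an illustrative sketch, not any actual HTC API: the event names (`gaze_dwell`, `button_press`), fields, and logging policy below are all hypothetical. The point it demonstrates is the one he raises about transparency: implicit, sensed signals flow through the same pipeline as explicit commands, so a system that wants to communicate how its decisions are being made could tag and audit-log the implicit ones.

```python
from dataclasses import dataclass, field
import time

@dataclass
class InputEvent:
    name: str          # hypothetical event name, e.g. "button_press" or "gaze_dwell"
    implicit: bool     # True if inferred from sensing rather than a deliberate command
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class EventBus:
    """Routes explicit and implicit input events alike, but keeps an audit
    log of every implicit signal so the system can later disclose which
    inferred behaviors drove its upstream decisions."""
    def __init__(self):
        self.handlers = {}
        self.implicit_log = []  # transparency: record of inferred signals

    def subscribe(self, name, handler):
        self.handlers.setdefault(name, []).append(handler)

    def publish(self, event):
        if event.implicit:
            # implicit signals are recorded for later disclosure to the user
            self.implicit_log.append(event)
        for handler in self.handlers.get(event.name, []):
            handler(event)

# Usage: an explicit tap and an implicit gaze signal go through the same bus,
# but only the implicit one lands in the audit log.
bus = EventBus()
bus.subscribe("gaze_dwell", lambda e: print(f"attention drifted: {e.payload}"))
bus.publish(InputEvent("button_press", implicit=False))
bus.publish(InputEvent("gaze_dwell", implicit=True,
                       payload={"target": "presenter", "seconds": 0.2}))
```

The design choice worth noticing is that nothing distinguishes the two event types at dispatch time; the ethical burden sits entirely in what the platform does with the `implicit_log`, which is exactly the "propagated through a system to other entities" question Daniel leaves open.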

[00:59:17.599] Kent Bye: Great. And finally, what do you think the ultimate potential of immersive technologies might be and what they might be able to enable?

[00:59:26.805] Daniel Robbins: So the Zeno's Paradox part that I'm excited about is how can we get closer and closer to connection and empathy with the acknowledgement that I can never actually truly walk in someone else's shoes, but how close can I get? Whether it be something that lets me experience what it's like to be at the border and to not have a home, or whether it be something that helps me see how this couch fits in my living room. All of those are about empowering me through different kinds of experiences. A particular one, and my bias or preference should be pretty clear in this, is I hope that these technologies can be marshaled to get me as close to being able to walk in someone else's shoes as possible.

[01:00:06.600] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the Immersive community?

[01:00:12.141] Daniel Robbins: I feel ultimately very lucky and a huge sense of gratitude that I get to work on something that I love, that I'm paid to be creative. That's an amazing opportunity. And one of the activities I often engage in is doing a lot of mentoring, typically of people who are from underrepresented groups. So if there are folks out there who have a lot of initiative who are wondering how they can either break into the design field or up their design game while still being compassionate with themselves and being creative and have attained some clarity around this process, you're free to reach out to me and I'd love to be able to help other people as they grapple with themselves in the world we're designing around us.

[01:00:52.250] Kent Bye: Awesome. Great. I just wanted to thank you so much for joining me today on the podcast and sharing all your thoughts of your design process and some of the problems you're trying to solve. And yeah, just kind of open up the dialogue about these broader ethical issues that we're all trying to figure out. So thank you.

[01:01:05.737] Daniel Robbins: Thank you very much, Kent. I really appreciate this. The world would be a great and even better place if we could have more of these kinds of conversations.

[01:01:12.900] Kent Bye: So that was Daniel Robbins. He's a principal UX designer at HTC Creative Labs. So I have a number of different takeaways about this interview. First of all, one of my big takeaways from the conversations that I had at SIGGRAPH, at the panel discussion at least, was that there is this tension and dialectic between the philosophical unboundedness of all the different possibilities and being an engineer or a designer who has a very specific context and has to make a decision. And so there's this tension between the unboundedness of all the possibilities and trying to lock down the definition of something that is really difficult to pin down and define precisely. But once you focus it within a very specific context, then it becomes a little bit easier. And so the thing that I heard over and over again is that, as a designer, Daniel's coming from that perspective of having to make a lot of these pragmatic decisions, and to try, the best he can, to be compassionate and understanding as we talk about and critique all these different companies. We'd like to assume that they're trying to do the best that they can. And as a designer, he's advocating for writing down the decisions of why you're doing specific things, and for abstracting that out to have what he calls a values ladder. So as an example, at the highest level, maybe you're trying to make the world a better place. You're trying to make people more connected. And in order to be more connected, then you have to have empathy and care for other people who are different. And that means that you have to interact with other people who are different, which means you need to be comfortable with being uncomfortable and sitting with the unknown, and to be able to be put into situations where you don't have total control.
And so if you are putting total control at the highest level of your design decisions, then that may produce decisions that are the antithesis of trying to create empathy and connection to other people. And so for me, I found that there is this kind of inherent dialectic between the individual and the collective, and the different degrees to which you have your own autonomy and your own sense of control, versus the other aspects of being connected and interdependent within a collective context. And so there seems to be a fundamental tension even within the different ethical frameworks, between something like utilitarianism, which is trying to look at the collective interests over and above the individual interests, and then the Kantian imperative of trying to not use data against individuals, so that you're preserving that sense of autonomy. And then you have the higher-level values and moral virtues that you're trying to design for. And I really got that Daniel's trying to focus on those values. And these companies can start to perhaps articulate what their design values are, like why are they making specific decisions. And one of the challenges is that if they have an underlying business model of something like surveillance capitalism, then it's very difficult for those companies to fully articulate the full reason for why they're making some of the decisions that they're making, like why they're doing so much data aggregation and capturing all this information that could be used to control and manipulate people. Now, Daniel was talking about this dynamic of how we are in this shift from explicitly entering information into these systems into more of these passive, implicit things that are radiating from our bodies, whether that's our eye tracking, our movement, our biometric data, our galvanic skin response, our heart rate variability.
All these passive indicators are going to be fed into these systems, and then what's going to be happening with that information? And how are you going to be able to integrate that passive information in ways that are going to be making decisions on your behalf, in ways that are going to be in your deepest interest for what you want? So I think the big takeaway that I got from listening to Daniel is that he's really focusing on those values that you want to bring into the world, and that it's very difficult to break those down into rules. And we don't necessarily have the language or the ability to talk about a lot of this deeper context of how all these things are interrelated. Because in some ways, some of these values can be things that we haven't even really articulated clearly for ourselves, especially if there are certain levels of occlusion of why specific decisions are actually being made. So the other thing is that, just by the nature of being a designer, there are all these trade-offs, and there's no perfect design, because there are things that you have to trade off. Even looking at things like the difference between empowerment and control, or the individual and the collective, or the yang and the yin: are you designing for more empathy and interconnection and connecting people together? Or do you want to really empower people to have that full autonomy and be able to be in full control of their lives?
Which is why I think it was interesting to hear Daniel's perspective on aspects of control, because he's highly inspired by a lot of the Buddhist ethics that say we should really be trying to see the world as it is, this kind of truth-seeing, which means not trying to be in complete control of your environment. If you don't want to see people who are homeless, and you're wearing AR glasses that are somehow erasing those people from your world, there are deeper ethical implications there: what's it mean to empower people to remix their level of reality, to be able to consciously ignore these different issues that are still there? It's not like that makes them go away or makes them better, but what are the ways in which we're going to design these technologies that are going to allow people to remix or filter out things? And he said that when we design these AR glasses, he fully expects that there's going to be a knob that you can dial up to 11 and just completely dissociate and disconnect from the world around you. And to a certain degree, that's already true with what we have with cell phones and with audio, being able to remix all of your sensory input. What's it mean to then add that final layer of visual input, where you're just completely in a whole other world, but you're still needing to operate in the context of people within this consensual reality, in these shared contexts? So certainly he's taking the approach that there are likely going to be people who want to do that, and trying to design these systems and figure out how to deal with the ethics of all of that. And he did advocate that he was going to try to have these conversations with his children and really advocate for moderation when it comes to how they use their technology.
And so he fully recognizes the irony of being a technologist producing these technologies, yet being fully aware of some of their negative side effects, and trying to instill those values within his children. And so he's talking about this element of media literacy, of how people are able to discern whether there are these marketing messages, and what's the deeper intention and message behind them. And the challenging thing is that, as we move forward, all of this data aggregation that's being collected on us is going to be able to start to introduce messages into our awareness that are going to be influencing us in ways that we're perhaps not even able to be aware of or to fully deconstruct, just because we're not even aware of all the data that's being aggregated on us. And there's this huge power asymmetry that someone like Tristan Harris talks about with the Center for Humane Technology, which is that these companies have these huge amounts of data that are able to create these very specific psychographic profiles. And once you have that, then what's it mean when you're starting to have information that's subtly influencing what you believe, but where the full chain of the intention behind it may not be very easily discernible? So I think we're entering into this world, and we're already there, quite honestly, and we're just trying to figure out how to navigate that. Maybe the answer is that we all try to increase our levels of media literacy and amount of critical thinking when we're taking in information: seeing what's the intention behind it, what's the sourcing, what's the agreement around what truth is, and what's the epistemology of how we discern what is justified true belief.
All these fundamental questions, I think that as we move forward, there's a certain amount of needing to up our game when it comes to trying to discern the signal from the noise as we're taking in information. And the last couple of things I'd say is that, as a designer, they're trying to create these software development kits, these SDKs. And so there are these different design decisions. There are going to be certain capabilities that the technology can enable. Is that something that only the first parties can do, say the companies that are officially working directly with HTC, or HTC themselves? Or what should be made available as an SDK to be handed over to third-party developers? And what are the risks that are involved with the capabilities of all this information? When I went to the View Source conference, it was interesting because they were talking about the web browser as a platform that is able to try to mitigate the amount of data aggregation, and to have very consistent policies around security. And it really got me thinking about the future of immersive technologies, and what's it mean to have these native applications that have access to all this raw information. What are the ways in which either these platforms or the software have some sort of oversight, so that for these third-party developers that have some of this raw access to this biometric information, we're architecting systems that mitigate the capturing or the misuse of that information? And so he's just talking about the process of creating these SDKs. That's a decision they have to make. It's something that carries all these different risks, as well as power where you can do amazing things.
There are these different tools, and they can start to open up the hardware and the technology to do amazing things, but there are all these risks and safety concerns that are opened up in that context. And so I expect that something like the web browser may be a way to serve as an automatic risk-mitigating factor, and to start to build up trust. He's talking about this concept of binary trust versus progressive trust, where a lot of times we have this idea that if a company has violated our trust, or done some sort of action that we don't morally agree with, then we're just going to flip a switch, it's going to be binary, and we're going to no longer be interfacing with them. But it's a little bit difficult to do that with technology, because there are no sets of rules that we can communicate to be able to set the degrees to which we want to engage with different technologies. It's a little bit of a binary: either you're directly engaging with the technology and having to deal with all the levels of that, or not. Of course, there are different levels of privacy controls, but the default settings go a long way toward driving the behavior of what most people will normally be engaged with, in terms of the different power dynamics that happen with these technology companies. And then as we move forward, there are all these different concerns around glanceable information. A lot of the design for mobile phones has been trying to capture all of our attention and information, to keep us locked in, and to have us spend as much time as we can.
But as we move forward into augmented reality, there's going to be a need for more of these glanceable user interfaces, where we're able to get the information that we need, but it's not trying to hijack all of our attention, because we still need a certain amount of situational awareness of our environment, where our body is, and whether or not we're actually safe within that specific context. And so it's going to be about coming up with those more glanceable interfaces, but also figuring out how we do mixed-modality interactions, what happens with persistence, and, given the whole degree of implicit information that we have, how you're going to know what you're paying attention to and to what degree. What's the boundary going to be between the more passive ways that we're going to engage the technology, and knowing how much to rely upon the technology's ability to detect that deeper context or to take these more implicit actions? And so finding that blend between the implicit and the explicit seems to be one of the huge challenges from a design perspective as we move into these immersive technologies. So I guess the final thought is that these are very difficult conversations, and it is very unbounded. I don't think that anybody's going to come up with a comprehensive framework that's going to be useful for every specific context. And from my perspective, there's been great value for me in talking to as many different people as I can to see
the different perspectives. And part of what I'm going to be doing this week is just trying to distill down a lot of what I see as the common themes, and pulling from other ethical frameworks, whether from artificial intelligence, operating systems, or the W3C and the web browser community, and starting to see how virtual reality and augmented reality are pulling in all these different aspects of these different mediums and modalities, and what's a way that we're going to be able to actually navigate this from a design framework. The keynote that I gave at Augmented World Expo was about the ethical and moral dilemmas of mixed reality, and it was looking at the 60 to 70 different Black Mirror scenarios where you could explore all the dystopic potentials of what could possibly go wrong with the technology, and trying to come up with at least some framework to navigate all those different contexts, and to ideate and brainstorm about some of the things that we need to possibly consider. But I think the thing that Daniel is pointing to is these higher-order values, where those values can be applied to these specific contexts in ways that allow you to interrogate some of these different decisions and these different trade-offs, and to try to articulate, in essence, what kind of values you want to put into the world.
And so it's that more moral-virtue approach, looking at the ethics and the values at that higher level, and trying to really articulate what those values are. And I feel like a big part of what I want to do at this upcoming conference, in talking to different companies, is to try to see if there are other companies that have really tried to articulate the values that they're using to drive their different design decisions. And like Daniel said, he would love to see designers start to articulate and write down the different trade-offs and justifications for the decisions that they're making, so that there is a bit of a paper trail, so that you can go back and interrogate it, especially if things go in a specific direction. I think that's just a bit of a process that everybody's going through. It's still so early in the industry that this is a bit of an open question, and there's a bit of a vacuum out there: without having available something that's a very functional or operational privacy-first framework, or an ethical design framework, you're in some ways in the dark, having to figure it out on your own, and not really having any way to interrogate some of these different decisions that you're making. So that's a big part of what I'm going to be doing in the next series. This is a little bit of a sneak preview. Daniel's at the XRDC conference right now and was asking when this was going to come out. And I just figured it would be good to get this initial conversation out there, and to say that there are ways that I can have these different types of conversations with these different companies.
And I know there are a lot of risk-mitigating behaviors that they're engaged in, but I'd love to just continue this distributed conversation where we're hearing from these different companies and unpacking a little bit some of these design considerations and the design process, and, together as a community, trying to figure this out. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.