#1177: How the EU’s AI Act Could Impact Biometric Data Definitions & XR Privacy

The European Union’s AI Act is pending legislation that classifies different AI applications into different tiers of risk, and there are a number of ways this legislation could shape how the Metaverse unfolds, especially since there could be XR privacy implications depending upon how biometric data is redefined and what types of inferences can be made from that data. I talk with Daniel Leufer of Access Now about the human rights approach to regulating AI applications, and how some of the pending drafts of the AI Act (as of January 17, 2023) may have rippling effects for the GDPR, and ultimately for what types of biometric privacy inferences could be made from XR data.

We cover the hierarchy of different AI risks, and how the AI Act is taking a tiered approach to legislating product safety of AI platforms with different requirements at different levels. He’s unpacked some of the common AI Myths as a Mozilla Fellow, and is on the frontlines of tracking the latest AI Act trilogue deliberative processes between the European Parliament, the Council of the European Union and the European Commission (which he breaks down in further detail in this interview).

The EU is anywhere from 5 to 20 years ahead of the United States when it comes to technology regulation, and so we reflect on the impacts and gaps of the GDPR when it comes to XR privacy, how the AI Act may or may not fill those gaps, and get an in-depth sneak peek into how the AI Act may “become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you may be.” The AI Act has many far-reaching implications that will likely shape the future development of XR technologies, how AI is deployed in virtual worlds, and how the Metaverse continues to develop and unfold.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. It's a podcast that's looking at the frontiers of spatial computing and some of the ethical and moral dilemmas surrounding it. You can support the podcast at patreon.com slash Voices of VR. So continuing on my series looking at XR privacy, in today's episode I have Daniel Leufer, who is a philosopher working at Access Now. It's a digital rights non-governmental organization that is based in Brussels and so works very closely with the European Union. They're a global human rights organization, and Daniel talks a lot about the AI Act. He's been looking at this for a number of years, and the AI Act is going to create a tiered system of risk for different applications of AI. Applications that pose an unacceptable risk are going to be completely banned in the context of the EU. There are high-risk applications that are going to be regulated and have to register and be monitored, and then there are limited-risk applications that have different transparency obligations. And so I was particularly looking at: in what ways is the AI Act going to have specific definitions for biometric data that may actually change the way that the GDPR interprets biometric data? Right now, there's very much a tight coupling between biometric data and identity. And some of the data that's going to be coming off of VR, while right now it may not be personally identifiable information, it may be in the future, and as of right now, there are certain inferences that you can make from that data. And because there's so much of an emphasis on identity, there isn't a high-risk or even unacceptable-risk classification for what companies may be able to do with some of this data. So the AI Act seems to be on the frontiers of trying to define some of those things, and it's also trying to rein in some of the potential harms that can come from the deployment of these AI systems. So they're trying to wrap their heads around something that's an extremely difficult issue. Daniel talks a little bit about the human rights frameworks that are trying to rein in AI technologies and how that relates back to XR technologies and XR privacy as well. So, that's what we're covering on today's episode of the Voices of VR podcast. This interview with Daniel happened on Tuesday, January 17th, 2023. So, with that, let's go ahead and dive right in.

[00:02:22.230] Daniel Leufer: Hi, my name is Daniel Leufer. I'm Irish, but I'm based in Brussels and I work for a digital rights NGO called Access Now. We're a global human rights organization that works to protect and defend the digital rights and the human rights of users at risk around the world, so journalists, human rights activists, and we're based really all over the place. We offer a digital security helpline that provides security help 24/7, all year round, to users at risk, to our community who need help. And in the EU, we work on ensuring that European legislation, EU legislation, embeds and protects human rights, and we also bring the insights from the work that we do through our helpline into the policymaking process. My own background is in philosophy, actually, weirdly, so I'm mostly working with lawyers and technologists, and I am neither. But the topic I work on most is stuff that falls under the vague and problematic umbrella of artificial intelligence. It's a topic, I think, that intersects with so many different things that everyone's expertise is really valuable, and it's not a solely technical or legal domain. So very interesting.

[00:03:38.937] Kent Bye: Yeah, well, I'm really excited to talk to you to get a little bit of an update as to some of the major things that are happening from the EU perspective with regards to things like the AI Act, which is imminent sometime in the first part of 2023, as I understand it. But there's also an upcoming Virtual Worlds Initiative, aka the Metaverse Initiative. There's also the GDPR, which has obviously had a huge impact in bringing about more sophisticated privacy architectures and privacy engineering around the world. Even if here in the United States we don't get all the benefits of something like the GDPR, it at least is getting the big technology companies to change their architecture. I'm particularly interested in a lot of things around biometric data and the inferences from it that are coming out of XR. And I think there's a complex of things from both the GDPR and the AI Act and some potential Metaverse Initiative things that may be coming down the road. But before we dive into all of those things, I wanted to give you an opportunity to give a little bit more context as to your background, say in the philosophy of technology or political philosophy, and your own journey of looking at some of these regulatory issues and working with this human rights organization to bring about a human rights approach to technology with the EU. So I'd love to hear a little bit more about your journey, where you're at now, and what you're doing.

[00:04:55.198] Daniel Leufer: Great. Yeah. So I studied philosophy and literature. That was my initial background. And then I took a year out after my bachelor, which turned into five years. And then I decided to, you know, give academia another shot, and came to Belgium, which is where I've been living for almost 10 years now, to do a master's in philosophy, and got straight into a PhD program. And, you know, I was generally interested in political philosophy, philosophy of technology. But funnily enough, I did my PhD on an EU-funded program that was focused on the First World War and philosophy. So I was working on issues overlapping with those that I work on now, except 100 years ago, almost 110 years at this point. And that was an interesting perspective, looking at a really cataclysmic event that happened 100 years previously, how it supercharged the adoption of certain technologies and took down political regimes. It was a time of, you know, immense political change, technological change, all of these things. And I was seeing the germs of that happening at the same time. So, as I was looking at that period, things were massively happening here as well in terms of technology; it coincided with the real boom in machine learning as a transformative technology. And funnily enough, I remember when I started working with Access Now after my PhD, on many occasions I said something like, you know, it'll just take some kind of transformative global event to massively change the uptake of technologies in the way that the First World War did, and then COVID hit. And that's precisely what we saw in a lot of domains, you know, a huge uptick in digitalization, all of these different things spurred on by that event. So the parallels were pretty interesting. But I generally found that, I mean, when people say they leave academia, it's kind of an easy thing to leave because there are no jobs. But I definitely felt more drawn to activism, to really engaging with things that are happening right now. And, you know, there are obviously many fantastic activist academics, but in terms of the environment, I was really attracted to working on something that was a bit more hands-on. And I happened to read a review of Virginia Eubanks' book, Automating Inequality, probably in the second-to-last year of my PhD. And that was like an explosion going off in my head, where I realized that a lot of the political philosophy and philosophy of politics type questions that I was very interested in were being posed in really urgent ways by these technologies that fall under that umbrella of AI. And it just seemed like a really urgent, important, and juicy topic to get into. And yeah, I don't know, I guess the stars aligned pretty well, in that once I finished my PhD, I had actually started learning to program. I was like, okay, I really need a job. So I had been, you know, learning some JavaScript and stuff, and actually met, and I want to give a shout out to Fabien Benetou, who's an XR developer, met him at the place where I was learning to program, and went for lunch and had a chat about AI and VR. And I had this idea to run a reading group on philosophy of technology, which ended up being on Zuboff's Surveillance Capitalism. And I simultaneously had this idea to bring technologists to the university I was at, KU Leuven, to talk to philosophers. And he proposed the idea, why don't we do something in VR? And we actually did.
I can maybe dig this up if he's got a link to it. We did a VR trolley problem. And it was quite a cool talk, and Fabien came and gave a talk about VR. And we actually bought a lot of really rudimentary Cardboards to get everyone to try out this trolley problem that he put together. So that was my first taste of VR. And then around the same time, I saw that Access Now was actually looking for an intern to work on the EU's High-Level Expert Group on AI at the time. And Fanny Hidvégi, my colleague, was on that expert group at the time. And Access Now was quite skeptical about this ethics guidelines approach to regulating AI. And they were looking for someone to kind of help out with that. And as someone with more of a political philosophy background, I was also quite skeptical about this idea of ethics guidelines and self-regulation and stuff like that. Obviously, I think there were a lot of philosophers and ethicists who were quite flattered to be suddenly very relevant. I think, yeah, there was just an instant click between me and Access Now in having this more critical perspective on the kind of AI ethics boom that was happening at the time. Yeah, that really worked well. And while I was at Access Now, I applied to be a Mozilla Fellow, and put forward a project idea based on the fact that a lot of what I was seeing working with Access Now was that you spend so much time, and to be honest, it's less today, in policy discussions around AI just myth busting, pushing back against these misguided ideas about what AI is and what it can do. And the basic aim of the project was to crowdsource what are the worst myths and misconceptions about AI, and then work collaboratively with people to develop resources for civil society organizations to help them crack through that. Because it was a time, so this is like 2018, 2019, when a lot of civil society organizations who maybe had been working on data protection and other issues for a long time were suddenly faced with AI. And it's a joke I always make that if you see an academic or organization that has a project that acquired funding in 2016, it's all big data; if it was 2018, it's all AI. And there was just this shift, and often you're talking about the same thing, but the discourse shifted. And it's interesting as well, because you had just had the adoption of the GDPR. And it was a time when, if you said data protection, you said regulation; you didn't say data protection and voluntary guidelines. And then suddenly everyone was doing AI, and it was all ethics guidelines. And, you know, you had this mushrooming of hundreds of sets of ethics guidelines. And so we were there at that time as Access Now and other partner organizations, Article 19, different people, kind of pushing back and saying, no, no, no, no, no, we need regulation, we need human rights protections. This is not something that self-regulation can handle. So that's the transition, briefly, in the way that it happened for me: yeah, right place at the right time, I think.

[00:12:11.313] Kent Bye: Yeah, that's a really great context for your own journey, but it also, I think, connects a lot of things from my own journey in terms of trying to figure out this issue of privacy and XR, and how there's no federal privacy law in the United States, but there is the General Data Protection Regulation, the GDPR, that came from the EU. And, you know, I went to the American Philosophical Association Eastern meeting in 2019 and saw Dr. Anita Allen give the presidential address, which was on the philosophy of privacy and digital life. And in the writeup that she did for that talk, she pointed to this paper that the EU did in 2015 called Towards a New Digital Ethics, which takes this human rights approach, letting fundamental human rights inform the regulations being developed. And so I feel like that's something that's unique and different about what the EU is doing. And you're at Access Now, which is the host of RightsCon, gathering human rights organizations from around the world and facilitating this community of technologists, policymakers, philosophers, and academics trying to look at this intersection between technology and society through policy, but with a specific lens of human rights. So I'd love to hear a little bit of your take on this connection between human rights and regulation, and why taking a human rights approach seems to be a good approach contrasted with maybe some other approaches that are taken.

[00:13:33.278] Daniel Leufer: Yeah, no, it's a really interesting topic to dig into. And to be honest, it was one of the main things that occupied, I would say, the first couple of years that I was in the space. And it's really interesting to look back now, because the idea of regulating AI, of human rights and AI, is so taken for granted at this point, but it wasn't at the time. It's the result, I think, of a lot of hard work by a lot of people to push back against some of the other frameworks. And not to say that the human rights framework is the only one to view things through, you know, it has its limitations, but there are obviously some bad frameworks to approach things through, and that's what we were trying to fight against. And very concretely, what we were seeing a lot of at the time in these companies' self-generated AI ethics guidelines was more of a utilitarian approach. And human rights frameworks are more deontological. So rather than a utilitarian approach, which is going to be looking at how do we maximize the benefit, maximize the good, minimize the bad, ensure that there's a net benefit to applications, with the human rights approach you tend to be looking more at absolute principles, or at least principles that need specific criteria in order for them to be infringed upon or to allow exceptions. So, just to take a concrete example, I like to think about the concept of automated gender recognition. This is a topic that we've campaigned on, and our work in this area is built on, essentially trying to operationalize, the work of scholars like Os Keyes, big shout out to Os, Morgan Scheuerman, different scholars like this who have looked at automated gender recognition systems, which are essentially AI systems that infer your gender from some kind of observable data. And they vary in complexity. You have really stupid ones, like there was one a few years ago that claimed to be able to infer gender from people's names, just a total disaster, and was really bad, down to much creepier ones that are inferring gender from facial structure or from your gait, different things. And you'll often hear companies who push these things, or governments who deploy them, citing things like accuracy rates. And they'll say something like, this is 90% accurate, it's 95% accurate. And if you're coming at that from a utilitarian point of view, you're maybe going to look at, okay, say this is a system that controls... and actually, I'm struggling to come up with a good example of what you could use an automated gender recognition system for, because they don't have a good application. But a really horrible application is to control bathroom access, so that outside of men's and women's toilets, you would have a facial recognition system that infers someone's gender and then opens the door if it matches. If you're coming at it from a utilitarian perspective, you might say, right, if it's 95% accurate, that's great. Maybe it doesn't work for a few people. But, you know, overall, this is a net benefit. This makes things seamless, whatever, for people who are trying to access it. But I think if you're coming at it from a human rights perspective, or just a genuinely humane perspective, you go, who are the 5%? Is it a random selection of people? No. Is it someone like me, a cishet white guy? No. That 5% is the same group of people who face discrimination daily. And here, I'm really pirating Os Keyes's work.
It's a subset of people who face discrimination on a daily basis in all sorts of other places: the job market, housing market, education, workplace, everything. And you're discriminating against them. And actually, if you think about these systems, they tend to classify people according to a gender binary, male or female. What about if you're non-binary? You're just not classified. Your existence is invalidated by these systems. They also tend to misgender trans people and women with darker skin. And when you think about it like that, you go, this thing is actually designed to discriminate against those 5% of people. And I think if you have a framework that incentivizes you to look at things from that perspective, it's a much more powerful perspective and a more just perspective than these kind of often perverse utilitarian calculations. And, you know, someone might say that's a bit of a caricature of utilitarianism. There are obviously more sensible versions with more safeguards built in. But there are some quite caricature utilitarians out there making some calls. Our analysis, and again, this is building on the work of Os and others, is that from a human rights perspective, if you look at a technology like automated gender recognition, the conclusion that you'll come to is not, oh, this is fine because it generally works for most people. It's: this shouldn't be allowed. It violates this subset of people's rights to such an egregious extent that it is not permissible to deploy something like that. And that's something that we'll come back to if we talk about the AI Act a bit later, this idea that certain systems should be prohibited. But that was a bit the battleground. And there are obviously very important critiques of the human rights framework from a decolonial perspective, from a racial justice perspective, all of these other really valuable additions to expand the framework of the way that we see things. But I think the type of simplistic utilitarianism that dominated a lot of those AI ethics guidelines back in 2016, 2017 was definitely a much inferior way to look at things. And I think it's really nice to see that you don't get a lot of that these days. The proviso I will add, though, is that, as I'm sure you're well aware, we're entering another peak of AI hype with ChatGPT and everything. And the utilitarianism is very strong in a lot of quarters, with defective altruism, longtermism, all of this stuff. So I'm definitely bracing myself for a few more years of going around, you know, bringing a human rights perspective to this new generation of generative AI systems. But yeah, that's more or less how that battle went. And, you know, it's interesting what you point out as well about the Anita Allen reference, and huge props to Anita Allen there, she's absolutely fantastic: the fact that the EU was pushing this idea of an ethical approach to technology that was grounded in human rights. That is something we saw as well in the Ethics Guidelines for Trustworthy AI, which were the first step in the process that got us to where we are now with this AI Act regulation. It was ethics based in human rights. And that was interesting, because it was specifically taking this deontological approach, not a utilitarian approach. But as we and many other people pointed out at the time, that was fine for what it was as a step towards regulation, but it needed to only be a step. It couldn't be the final product.
And thankfully it wasn't. So our worries were assuaged on that point.
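
To make the arithmetic behind that accuracy argument concrete, here is a minimal sketch in Python with entirely hypothetical numbers. It shows how a headline "95% accurate" figure can coexist with near-total failure for one group when the errors are concentrated there rather than spread at random; the group sizes and error counts below are illustrative assumptions, not data from any real system.

```python
# Hypothetical illustration: a classifier that is "95% accurate" overall,
# but whose errors fall almost entirely on one small group.
population = {
    # group name: (number of people, number the system gets wrong)
    "majority group": (9_500, 100),    # ~98.9% accurate for this group
    "marginalized group": (500, 400),  # only 20% accurate for this group
}

total_people = sum(n for n, _ in population.values())
total_errors = sum(errors for _, errors in population.values())

print(f"overall accuracy: {1 - total_errors / total_people:.1%}")  # 95.0%
for group, (n, errors) in population.items():
    print(f"{group}: accuracy {1 - errors / n:.1%}")
```

The utilitarian framing stops at the first number; the rights-based framing Daniel describes forces you to look at the per-group breakdown and ask who the errors actually land on.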

[00:21:14.603] Kent Bye: Yeah, I want to tie this back into XR to some degree, because my primary concern is around the types of data that are made available by XR devices, virtual reality, augmented reality, and the type of physiological and biometric data that's being radiated. Brittan Heller has written a paper arguing that biometric data is typically thought of in terms of whether it's able to identify someone. So the biometric privacy law in Illinois in the United States is also very much defining biometric data as data that is able to tie you back to your personal identity. And as I was reading through the draft of the AI Act, there were a number of different things in there, like, say, the prevention of using real-time biometric information to do facial recognition and identify people in the context of law enforcement. As an example of the utilitarian aspects you're talking about: if, for those 5% of people, the effect of the system failing means that you're putting people in prison who are actually innocent, then that's a violation of human rights at such an egregious level that you're saying all of this biometrically determined facial recognition should be banned, with the Ban Biometric Surveillance movement trying to eliminate these different types of use cases. But I guess what I'm worried about in some ways with XR technologies is the type of stuff that Brittan Heller points out, which is more the biometric psychographic information: information where you're able to make inferences from this data, more real-time, contextual information about your likes, your dislikes, your preferences, that isn't always necessarily tied to your identity. But it still connects to what I think of as the human rights approach of neurorights, which is trying to protect a number of different rights, five of them that they list. The ones that I really focus on in terms of neurorights and neuroprivacy are mental privacy, identity, and agency: once you map out someone's mental privacy and what they're thinking, then you move into their identity, the mapping of all these different aspects of who they are, and then into agency, because once you have a complete map of somebody, you're able to nudge their behaviors and undermine their right to free will or right to agency. So you have this complex of mental privacy, identity, and agency that are all tied into this. And I was hoping, to some degree, that the AI Act would start to address some of these gaps, because I don't know if this is necessarily even covered in the GDPR. It may be. So I'd love to hear some of your take on how you see this future of privacy and these biometric inferences that may be coming, and whether they're already covered by the GDPR or whether there will be a need for something additional, whether that's the AI Act or this upcoming Virtual Worlds Initiative or Metaverse Initiative that's coming from the EU.

[00:23:54.649] Daniel Leufer: Yeah, great. I can say as well that I was so happy, listening to your podcast, that you focus on this issue, because it's one of the issues I've been shouting about for years as well. And I really love Brittan's work as well. I think it's been so important to highlight what Andrew McStay, who's a fantastic scholar who does amazing work on emotion recognition in particular, has called this dichotomy of identity-based harms versus inference-based harms. And I think XR is really a domain where you see much more scope for these inference-based harms, where the issue isn't just that you're identified, it's the inferences that are made about you. Which is also a broader conception of identity: my identity is not just my legal identity, my name, but all of these aspects about me. But right, loads to unpack here, and one of my favorite topics. So let's get into it. Maybe I'll start with biometric data. So in the GDPR, biometric data is defined in a weird way. And the protections for biometric data under Article 9 are defined in an even weirder way. So the definition of biometric data has two parts, and this is precisely the issue that Brittan points to in her paper. It's physical, physiological, or behavioral data resulting from a specific technical processing that allows or confirms the unique identification of a person. Now, that was from memory, so the precise wording may be slightly different, but that's the gist of it. And you can say that there are three things that you really need to pay attention to there. There's physical, physiological, and behavioral, which is quite broad. So the physical is something obvious, like my face; the physiological, I guess, is more things like heart rate, eye movement, different things like this; and then the behavioral also includes things like keystrokes, which can again be used to identify someone. And then there's the part that is also very important: resulting from a specific technical processing. So there are all sorts of complex questions that we should not get into now about whether a photograph is biometric data. Typically, you're talking about a process where, if you take a facial recognition system, it's going to capture an image of your face somehow. Either it's uploaded, or it's from a camera, or something like that. And then it turns that into a biometric template, which is some numerical representation of the main features of the face. And then some form of processing is going to be done on it. If it's a biometric identification system, like a one-to-many matching system, then the template that's been extracted from that raw data is going to be compared against a watch list of suspects that they're looking for, for example. Or, as we talked about earlier with automated gender recognition, an inference is going to be made about your gender, something like that. Then there's this issue of allows or confirms unique identification. For me, the definition of biometric data should have stopped just before that. This allows or confirms unique identification is very, very tricky, because the type of data that allows or confirms unique identification changes in line with technological developments. In 2008, for example, you needed a pretty straight, looking-directly-at-the-camera, high-resolution photo.
And, you know, the legacy of that is the type of passport photos we have, these quite strict requirements, the lighting needs to be good, all of that. Because actually, to do this type of automated matching, you needed the photo to fulfill very specific conditions. Today's facial recognition systems can do identification from incredibly blurry images, oblique angles, all sorts of stuff. And so that oblique, blurry photo may not have been biometric data in 2008; it is now. And that also poses huge problems, because data which was collected back then would not have been biometric data. If they still have it, which they shouldn't, because they should have deleted it, it is now. And that's also very difficult. We were just giving comments on a proposed French law about using smart video surveillance during the Olympics. And this is one of the issues that I pointed out in our comments: they basically claimed, we will not process any biometric data, but they want to do intelligent video analysis of people in public spaces. And the comment I left was, that will allow unique identification at some point, if it doesn't already, and you cannot guarantee that. So if you're processing physical, physiological, or behavioral data about people, that should be biometric data, and it should be protected as such. The second point on the GDPR is Article 9, which is the famous prohibition, with many exceptions, on the processing of certain types of data. Under that are things like data revealing sensitive characteristics like sexual orientation, and then also biometric data for the purposes of identification. And that's quite weird: there's a list in Article 9 of types of data, and that one's a bit of an outlier, because the other ones are just types of data, but the prohibition on the biometric processing is a type of data used for a specific purpose, which is kind of nonsense or circular. You know, why don't you just prohibit the identification of people using personal data? It's very circular and a bit strange. And in a way, these definitions are often consequences of the not-ideal process of lawmaking, where different legislative bodies are proposing amendments and compromises and stuff like that. So all in all, the definition of biometric data in the GDPR could be better. Then with the AI Act, we've been trying to highlight this issue that you raised since the beginning. As you noted, and maybe for listeners, just a quick explanation of what the AI Act does: it regulates putting AI systems on the market and deploying them within the EU. It's not the type of legislation we dreamt of, because it's product safety legislation. And using product safety legislation to protect people's human rights is a challenge that I have been deeply immersed in for the last two years. And one of the big things we wanted was prohibitions on certain uses of AI. And you pointed to the one on what's referred to in the text as remote biometric identification. So essentially, a one-to-many matching system could be used in a public space, a stadium, a concert venue, to match everyone who passes through. So everyone who enters the public space has that kind of process done to them: their biometric template is extracted and matched against a watch list. This would be prohibited, except there are some exceptions. But one of the things that caught our eye immediately, and we sent a lot of comments before the legislation came out, was asking for emotion recognition to also be prohibited.
And also something along the lines of what Brittan Heller talks about with biometric psychography: certainly restrictions to be placed on it, but bans for the particularly egregious forms of it. We've seen papers out there where people claim to be able to infer sexual orientation from people's faces, to infer criminality. And often when I say that, people assume what I just said is, check if someone matches a criminal database. No, I really mean the worst physiognomy, 19th-century evil racist pseudoscience stuff, like maybe there's a particular criminal face and we can infer who's likely to be a criminal using AI. And I mean, the state of machine learning research is so bad that I've seen papers that kind of implicitly recreate the theories of 19th-century physiognomists. I actually saw a paper whose literal aim was to use Cesare Lombroso's work, and I always get his name wrong, I'm always unsure if I got it right, but he was, you know, the godfather of racist pseudoscience, saying criminals have this type of nose, and the paper was literally trying to take his typology of criminal and degenerate faces and train machine learning to recognize it. Unironically, this is the type of stuff we see in machine learning research. So we wanted that stuff covered as well, which is not about identity-based harms; it's about the type of inferences that AI systems could be used to make about us. We also thought that should be in the prohibitions. It didn't happen. I heard that at certain points during the negotiation process, it was there. And here's a first note on EU lawmaking: the Commission, the European Commission, is the body that proposes a law. It's the executive body. But it has many different DGs, so different departments, essentially. You know, one devoted to borders and security, DG Home; they often have really kind of 1984-type euphemistic names, like DG Home is the one that keeps refugees out of the EU. DG Justice is the one that's actually devoted to protecting people's rights. DG Connect will be more focused on technology, innovation and stuff. And so even within the Commission, you have different bodies with different interests arguing amongst each other, trying to come up with a law that works for all of them. So there were debates about this, but in the final text, it didn't appear. What did happen, though, is that emotion recognition and biometric categorization are in the law. So they're both defined in the law. As well as the prohibitions, which cover very few things, and we're working actively to try to make sure that list is expanded, you have a list of high-risk AI. Basically, there's Annex 3 to the regulation that lists high-risk use cases. If you do one of those things, which could be something like a lot of police uses, migration use cases, things controlling access, AI hiring software, AI proctoring, stuff like that falling under high risk, then you're subject to certain transparency regulations, data quality requirements, all of these obligations that are supposed to ensure that risks are mitigated. That's the second level of risk. And then there's a final level of risk under Article 52, which is systems that pose a risk of manipulation. And that's where biometric categorization and emotion recognition ended up, along with deepfakes and chatbots.
I think it's been a while since I looked at Article 52. But essentially, the only obligation that was applied to emotion recognition and biometric categorization in the original draft is that users of an emotion recognition system or a biometric categorization system had to inform the people who were subjected to them of the operation of the system. And "user" here is the technical term in the Act for an entity that's deploying a system, so the AI Act doesn't apply to everyday people. You know, if I have some AI software on my phone and I use emotion recognition on you, Kent, I am not obligated under the AI Act to inform you of that. It's more like if a bank is using it or something like that, they have to inform people that they're interacting with it. This, for us, is just so far below what needs to happen. I mean, if someone is using an AI system to infer my sexuality, I don't want to be told about it. I want them not to do that. I want that not to be a thing that anyone can do in the EU. Arguably, it already falls foul of the Charter of Fundamental Rights in the EU, of anti-discrimination regulation, all of these sorts of things. But we want an explicit acknowledgement in the AI Act that some of these things are just so at odds with the essence of certain fundamental rights that they need to be banned. The other thing, and this is directly to your concern about the definition of biometric data, is how emotion recognition and biometric categorization were defined. Here I'm paraphrasing the exact wording, but say biometric categorization was an AI system that is used to classify people according to certain categories (and they gave a list) on the basis of their biometric data. Alarm bell. Because you can categorize someone using data about their physical, physiological, or behavioral aspects which may not meet that bar for identification. So there would be types of biometric categorization that would actually not fit the definition, because it's tied to that high bar of allows or confirms unique identification. Same with emotion recognition. So we raised the alarm immediately when we saw this. The obvious thing for us was: you need to change the definition of emotion recognition and biometric categorization. Things have gone a different route, which is interesting. And here, I unfortunately have to explain more about the EU legislative process. So the Commission puts out their first draft, and then they usually do an open public consultation. You can respond as a citizen, whoever wants to respond. Companies will respond, law enforcement agencies. Then you have the European Parliament, which is the democratically elected body. And they tend to be, though not always, more amenable to human rights concerns than the other bodies. The other body then is the Council of the EU, which is essentially the council of member states, the 27 member states of the EU. It's the most opaque of the three bodies; it tends to be harder to communicate with, whereas the Parliament, for the most part, is more of an open process. You can watch debates and have access to documents to a much greater extent than you can at the Council. The Council also tends to be, and these are absolutely not absolutes, pushing more of a law enforcement type agenda. So we were, for example, very worried that the already underwhelming ban on remote biometric identification in the first draft would be made a lot worse by the Council, and better by the Parliament.
And then at the end of this co-legislative process, the three bodies have to get together in a process called trilogues and hammer out a compromise version of the piece of legislation. And obviously all sorts of lobby groups, interest groups, and academics are involved in this process. So it's quite lively and sometimes even fun to be involved in. But what happened with the definition of biometric categorization and emotion recognition is that the Council actually changed the definition of biometric data, which for us was really a surprise. So we certainly submitted comments to various member states, and by we here, I mean Access Now, of course, but we've also been working in an amazing coalition of civil society organizations, so we, in all of these cases, refers to that coalition. Access Now, within Europe, is a member of European Digital Rights, which is a network organization of digital rights organizations in Europe; we also share an office in Brussels. And there are some other great organizations like AlgorithmWatch, a German-based organization that does really great work on the use of automated systems in Europe; the European Disability Forum, which is an umbrella organization of organizations representing people with disabilities; Amnesty International; and loads of other organizations. And yeah, interestingly, the Council changed the definition of biometric data, and they did exactly what I said should be done: they deleted the part about allows or confirms unique identification. I still am trying to find out what it would mean if that ended up in the final draft of the AI Act. I said at the beginning, I'm not a lawyer, so any listener, don't crucify me for legal imprecision. But the AI Act is a lex specialis on the GDPR, so it has interaction in a formalized manner with the GDPR. And it's not clear to us, and we want to find out more information, whether, if the definition were changed in the AI Act, that would mean that the definition is actually then broader as a whole for everything that falls under the GDPR, which would be a fantastic outcome. So the Council text is finalised for the moment, and it has this better definition of biometric data in it. In the Parliament, we took a different approach. We actually... yeah, I won't get into the details. We had a version of a thing that we thought worked. And then another professor, Christiane Wendehorst, proposed adding a new definition of something called biometrics-based data, which is essentially, you know, if you have the definition of biometric data, that's a bit restricted, and then you have all the other physical, physiological, and behavioral stuff that doesn't fit that; it's like a catch-all for all of that. So it would encompass everything that fits in biometric data and all the stuff that doesn't allow unique identification. So we decided to support that eventually, because it got more traction than the solution we were proposing. And that is still under discussion, but it's in the Parliament approach, and it looks like it's going to be there. The definitions of emotion recognition and biometric categorization were also changed so that they're on the basis of biometric data or biometrics-based data, with the idea that that then catches everything. That's as far as it goes, I think, with the definitions. And then to round that off, what we've been proposing be added to the list of prohibitions is a ban on emotion recognition. We can get into discussing this a bit more.
But that was a really interesting discussion, because we started from that. And when we started talking to people about the idea of proposing a ban on emotion recognition, the big pushback that we got immediately was, well, you know, it's actually really useful for people with autism. And we thought, okay, we don't want to be pushing a ban on something that's potentially really beneficial as an assistive technology. So a huge shout out to Sarah Chander from EDRi, who has done this incredible coalition building. Often digital rights organizations maybe only work together, and she's been doing amazing work talking to organizations that represent sex workers, that represent migrants, people with disabilities, and has actually built up all these incredible contacts with people outside of the digital rights bubble. And that's how we ended up working with Mher Hakobyan, who's now at Amnesty and used to be at the European Disability Forum, who put us in touch with their members, organizations representing people with autism. And it was very interesting, because there were two organizations there. We looked at some research; Os Keyes has a fantastic paper on AI and autism which points out that it's actually typically extremely ableist. It tends to take this perspective of trying to teach autistic people to express emotion in a neurotypical way, which assumes that their expression of emotion is somehow flawed. And so we went and talked to this one organization who actually tended to buy the hype a bit and did say, no, we do think it's actually useful. And then we talked to another organization, the European Council of Autistic People, and they were incredibly informative to talk to and really were of the opinion that this is a totally ableist approach. This is not something that we want. Autistic people do not need AI to teach them how to express emotions; we need neurotypical people to open their minds a bit and learn how to communicate with us. So that was super interesting. And they actually gave critical comments on our paper. But very funnily, what happened was they said, so we don't need it, but actually blind people do. And then we talked to a blind people's organization, and they were like, no, not something we need either, but you should talk to blind and deaf people. We emailed them as well, and they were like, no, not something we need either. So there's a general assumption that emotion recognition has value as an assistive technology. But when you start to look at the actual things being proposed, and talk to the people it's supposed to help, then again, it's not 100%, there are of course autistic people out there who do think it's of benefit, but the whole narrative starts to collapse a bit. So what we've ended up proposing there is a ban on emotion recognition, with the idea that if, as an assistive technology, it could fulfill certain conditions, including clinical validity, all of these different things, then there should be scope to allow an exception. But we haven't found or heard of anything that would match up with that. Then on biometric categorization, what we've proposed is a ban on all forms of biometric categorization in publicly accessible spaces. And publicly accessible spaces is used quite broadly here in the AI Act: it's not just a place like a park or public square, it also includes ticketed venues.
So you know, concerts, sports games, different things like that, anywhere that you can get into, even if you have to pay. Because we believe that it's not only being identified in public that's a problem. And that's obviously a problem if you're falsely identified. But even if you're correctly identified, there's the chilling effect of knowing, say, that there's a protest against the government's new plan: am I going to go? Well, I'm going to be identified. They're going to know I'm there. This is a chilling effect that kind of undermines our right to freedom of assembly and association, freedom of expression. But aside from identification, there are literally companies in the EU proposing systems that do ethnic origin detection, which is complete rubbish. You cannot do that in a reliable way with a system. But even if you could, it's horrific. I mean, that's ethnic profiling. How are you going to use that? And I don't have to do too much imaginative work to imagine how police forces might use that. Or the owner of a shopping centre or a mall, if it sees people who look... you know, in Brussels, you typically have a huge amount of discrimination against people from the Maghreb, from North Africa. If, say, more than 10 North African males under the age of 30 enter the space, a security guard is called. That's so dangerous. And no one's identified there; you're categorized. So we want this ban. And again, you can keep moving to proxies. Even if it's not explicitly identifying ethnic origin, if it's some kind of hairstyle, or shabby clothes, there are all of these proxies for protected categories, and they can always shift to that. And we've seen them do it. The typical thing that we've seen is, if we call out a company for offering something like ethnic origin detection, it disappears from the website. So they're slippery, and they'll change to dodge whatever criticism is coming at them. And then the final one, and I'll stop after this, was hard to figure out. I think everyone, or all the people we work with, is pretty sure that there are things you can do with AI that are horrific and are an absolute violation of the essence of our rights. Determining someone's political orientation from their face. Really the worst type of stuff, the stuff that the human rights framework was designed to stop, the type of profiling based on biometrics that the Nazis did, measuring people's noses; that's what these systems do. And then making inferences about your life chances. All of these inferences are extremely dangerous. But defining it in legal language is really tricky, as is not catching legitimate forms. There are definitely maybe medical applications or something like this, where you can use AI to analyze biometric data to detect some kind of a heart problem. How do you define what you want to ban so that you don't include that, but you do include the problematic stuff? We came up with wording that went through many, many iterations. It might not be perfect, but it's basically a ban on using AI systems to infer and categorize people according to a defined list of protected categories: gender, sex, sexual orientation, political orientation, other stuff like that. What happens then is you can basically propose amendments to MEPs, so members of the European Parliament, and they had a deadline to table amendments by. And we got very wide support for this stuff.
So it's all in the amendments in the Parliament process, and the rapporteurs on the file, so the MEPs who are in charge of the file, are currently working on compromises, basically developing the Parliament's position. And we're hearing positive things about these amendments making it into the Parliament text. Then it goes to trilogues. And, you know, we have to have faith in the negotiating power of those MEPs. But yeah, things can go many different ways at trilogues.
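
As a rough illustration of the remote biometric identification pipeline Daniel described earlier (raw image, then a biometric template, then one-to-many matching against a watchlist), here is a hedged sketch in Python. The feature extraction is a stand-in stub, and every name, dimension, and threshold is a hypothetical assumption rather than how any real deployed system works; the point is only to show where the "specific technical processing" and the identification step sit in the flow.

```python
import numpy as np

def extract_template(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: turns raw pixels into a fixed-length
    numerical template. Real systems use a trained network; this stub just derives
    a deterministic vector from the image bytes so the example runs."""
    seed = abs(hash(image.tobytes())) % (2**32)
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

def one_to_many_match(probe_image: np.ndarray,
                      watchlist: dict[str, np.ndarray],
                      threshold: float = 0.8) -> str | None:
    """Compare one probe template against every enrolled template and return the
    best match above the threshold, if any. This is the one-to-many pattern:
    everyone passing the camera gets processed, matched or not."""
    probe = extract_template(probe_image)
    best_id, best_score = None, threshold
    for person_id, enrolled in watchlist.items():
        score = float(np.dot(probe, enrolled))  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical usage: enroll one watchlist entry, then check a camera frame.
frame = np.zeros((64, 64), dtype=np.uint8)
watchlist = {"watchlist_entry_001": extract_template(frame)}
print(one_to_many_match(frame, watchlist))  # matches, since the stub is deterministic here
```

The definitional fight Daniel describes is roughly about how much of this flow counts as processing biometric data: whether only data that actually feeds the identification step qualifies, or any physical, physiological, or behavioral data that goes through this kind of technical processing, including categorization and emotion inference done on the template.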

[00:50:12.322] Kent Bye: Well, I've got another call in a couple of minutes that I have to hop off for, but just as a final thought: it sounds like there are still some potential loopholes for the types of concerns that I have, in terms of technology companies using adhesion contracts that people sign to consent to the type of profiling that happens under the broader surveillance capitalism use case. It sounds like while there may be some protections for certain use cases under the GDPR and the AI Act, there are still some loopholes for the types of surveillance capitalism business models that may be moving forward, especially with XR technology data. Is that true?

[00:50:48.259] Daniel Leufer: Yeah. And I mean, the AI Act is not going to solve all the problems. It's kind of a shot we've got to, in an ideal case, get some transparency measures and some responsible development practices set as a baseline, and some things prohibited. But absolutely, it does not solve our problems. There, we really need to look at GDPR enforcement. And, you know, we've seen the recent rulings against Meta that are very interesting, because it looks like the thing that we all knew for a long time was true, that what they do is not GDPR compliant, is maybe being enforced. So the GDPR is far from at its full power. I think there are challenges with the enforcement. And again, there have been some interesting rulings on the types of data you're worried about, these things that don't meet the bar for unique identification, that actually say it should be covered by the prohibition on processing of biometric data. So it's still up in the air. We've seen some positive movement towards it. But I do think that the GDPR has a lot of untapped muscle to protect us there. And in the XR context as well, I think... outside of the XR context, there's no need for anyone to be collecting all of these crazy types of data about my eye movements, my brainwaves, all of this stuff, full body tracking. Someone might want to do it, but they shouldn't be doing it, and they will not have a legal basis for it. The issue in XR is that there's actually a use case. And, you know, it is going to improve my experience. It is something I want: I want full body tracking, I want eye tracking, because it has a defined purpose that it can be used for. And there, I think, it comes back to basic data protection principles, purpose limitation: there has to be a clear purpose that is consented to, the data is used for that purpose, and then it's deleted; there are no further processing activities. If that basic principle were enforced, imagine what it would do to the landscape that we're in. And in XR, it makes perfect sense. I could have all the data about me and just use it for that defined thing to make my experience better. But as we know, of course, that isn't what's happening. And you look at Meta: I've heard them make all of these claims in public about, you know, we want privacy by design, we want these protections, and say things very much like the things we've just been saying. And I've pointed out to them in public panels and stuff: you're subsidizing Reality Labs at an insane loss, using your surveillance capitalist business model. At some point, shareholders are going to be like, we want the money. Are hardware sales going to be enough? Or is the easy thing, when the screws tighten, like we saw back in 2008 with Google and others, that the move will be to what's familiar? And so I seriously worry about the ability to hold up those promises to do privacy by design, to do things right. You know, they understand what the issues are, and there's definitely work being done to do things right. But whether it'll last in the long term is something I definitely worry about.
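
As a hedged sketch of the purpose-limitation principle Daniel lands on here, the following Python example shows one way an XR runtime could hold eye-tracking data only for a single declared purpose (foveated rendering in this made-up case) and delete it every frame. The class and method names are purely illustrative assumptions, not any real headset API.

```python
from dataclasses import dataclass, field

@dataclass
class GazeSample:
    timestamp_ms: int
    x: float  # normalized gaze coordinates on the display
    y: float

@dataclass
class PurposeLimitedGazeBuffer:
    """Holds gaze samples only long enough to serve one declared purpose
    (here: foveated rendering), then deletes them. No persistence and no
    secondary processing such as profiling or inference."""
    declared_purpose: str = "foveated_rendering"
    _samples: list[GazeSample] = field(default_factory=list)

    def ingest(self, sample: GazeSample) -> None:
        self._samples.append(sample)

    def current_focus(self) -> tuple[float, float] | None:
        """The only permitted use: where should rendering be sharpest right now?"""
        if not self._samples:
            return None
        latest = self._samples[-1]
        return (latest.x, latest.y)

    def end_of_frame(self) -> None:
        """Purpose served; delete the raw samples immediately."""
        self._samples.clear()

# Hypothetical usage inside a render loop:
buffer = PurposeLimitedGazeBuffer()
buffer.ingest(GazeSample(timestamp_ms=16, x=0.42, y=0.57))
focus = buffer.current_focus()   # feed this to the renderer
buffer.end_of_frame()            # nothing is retained for later inference
```

The contrast with the surveillance capitalism pattern Daniel worries about comes down to whether something like end_of_frame ever gets called, or whether the samples are instead retained and shipped off for profiling.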

[00:54:14.501] Kent Bye: Awesome. Well, Danny, thank you so much for giving a whirlwind tour through all the different cutting edge aspects of the AI Act, GDPR, and the EU regulation of all these issues. And yeah, appreciate all your insights today. So thank you.

[00:54:27.038] Daniel Leufer: Great. Always a pleasure to be on a podcast that has taught me so much about XR. Yeah, really. Thank you.

[00:54:34.624] Kent Bye: So that was Daniel Leufer. He's a philosopher who's based in Brussels and working for a digital rights non-governmental organization called Access Now. So he's been focusing on the AI Act for the last couple of years, and yeah, had a lot of really interesting insights in terms of the challenges of trying to put guardrails on this technology. He was saying that most of the ethical approaches in the initial phases, which were the kind of more self-regulating ethics guidelines, tend to take more of a utilitarian approach, which is, you know, what is going to be the most benefit to the most people? So there's an accuracy rate of 95%, and then the question is, okay, what about the 5% that it's inaccurate for? And he's making the point that that 5% is often the same group that's actually discriminated against in many other contexts. And it was Virginia Eubanks' book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, which came out in January of 2018, that really got Daniel to go down this path of looking at how he could be involved in these different issues of artificial intelligence and ways to kind of rein it in. So he's saying that the human rights approach is taking more of a deontological approach, which is setting these different rules and trying to come up with these deeper human rights principles and seeing how we can live into them. And yeah, we're kind of moving from these self-generated ethics guidelines into more of a regulation regime based in human rights, breaking up AI applications into these different tiers. What are the applications of AI that are just completely unacceptable? What are the high-risk applications or the limited-risk applications? Each of these has different obligations: if it's not banned outright in the European Union, then it's regulated so that it has to register and have different reporting mechanisms, showing different dimensions of its data, that it doesn't have different aspects of bias, or even just how it's coming to those different decisions. And then the other, limited-risk tier has more of a transparency obligation. Well, I should clarify that everything I'm saying about this, the AI Act, seems to be still in the deliberative process and the trilogue processes within EU regulation, which Daniel did a very great job of describing: how it's these three different bodies of the European Parliament, the Council of the European Union, as well as the European Commission. Each of them has different drafts of the AI Act that they've been putting forth, there are different degrees of transparency with each of them, and at the end of it, they have to come to an agreement. And so it's through this trilogue process that they're trying to come up with some decent regulation. For the GDPR, you know, it's not like it's perfect legislation, but as far as trying to apply principles of human rights and then actually deploy something out there and change the architecture of these companies, I mean, it's, like I've said on other podcasts, at least 5, 10, 15, or 20 years ahead of where regulation here is in the United States.
So as imperfect as it is, it's still on the bleeding edge and way ahead of anything that the United States is even thinking about on some of these different things. So as it goes back and forth through all these different deliberation processes, there do seem to be some drafts that are redefining how the European Union may be looking at biometric data in the future, with some drafts, at least at this phase, decoupling it from being tied back to identity. So, once the final draft language actually lands, then we'll have a little bit more information. And there's the upcoming Virtual Worlds Initiative, the Metaverse Initiative, that also has the potential to create some new regulation, or, as I've heard from talking to other people, what may be more likely is that some of the existing legislation may need to be amended in order to fully encompass some of the different concerns around the new types of data that are going to be made available. So I guess the final thought is that this process is still happening, is still being deliberated, and that there's some potential hope that some of these concerns around neurorights and what happens with inferences from biometric data may start to be fleshed out with some of the language. And if not, then there's going to be a need for other initiatives that come after this, with the Metaverse Initiative and the Virtual Worlds Initiative within the context of the European Union, that may start to highlight some of these different gaps within existing regulations and the different modifications that need to be made. So I'll be digging more into some of these other EU moves in terms of regulation, with both the Digital Services Act and Digital Markets Act, the AI Act, and other ways in which the AI Act may be influencing the future of the metaverse. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue bringing you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
