#1178: How the EU’s Metaverse Initiative May Bring XR Privacy Amendments for the AI Act, GDPR, or Digital Markets Act

There’s an upcoming Virtual World Initiative (aka the Metaverse Initiative) at the European Commission on May 31st, and I had a chance to get Florence G’sell’s thoughts on it. She’s a law professor in France teaching at the University of Lorraine, and she leads the Digital, Governance and Sovereignty Chair at Sciences Po. If anything, she believes that this initiative might highlight some gaps in the many relatively new regulations spanning the AI Act, Digital Services Act, and Digital Markets Act, and that it may reveal some needed amendments for the General Data Protection Regulation (GDPR).

We do a deep dive into some of the XR-relevant provisions of the AI Act, and a broad overview of the Fundamental Rights approach to technology policy development, which suggests there may already be some foundational principles in place. She points out that the right to respect for mental integrity already exists within the EU’s Charter of Fundamental Rights: “Everyone has the right to respect for his or her physical and mental integrity.” But XR and neurotechnologies may highlight new threats to these rights. The EU uses these human rights to help address technologies that pose systemic risks to fundamental rights.

We also talk about a failed effort by the EU Parliament in 2017 to provide “electronic personhood” status for autonomous robots / AI by

"creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently."

G’sell believes that it’s just a matter of time before this issue comes up again, especially as we see the deployment of more consumer-grade AI systems that will inevitably gain more and more autonomy over time. Before we have autonomous robots driven by AI, we’ll surely see a striving towards autonomous virtual beings within virtual spaces, and so this is another area to watch as AI and XR technologies continue to merge over time.

Again, EU technology regulations are a good 5, 10, 15, or 20 years ahead of where the United States currently is, and so this conversation is a great overview of the underlying human rights foundations of tech policy development, as well as of some of the frontiers of EU tech policy around antitrust, AI, privacy, and decentralized blockchain technologies. There’s a whole new set of regulations that have recently passed, and some, like the GDPR, are entering into the enforcement phase, as with the recent €390 million (roughly $400M) decision against Meta by Ireland’s Data Protection Commission. Time will tell how effectively these EU regulations can be enforced, but they’re certainly starting to change the landscape of technology architectures in Silicon Valley, and we do a broad overview of the most recent rounds of legislation and how they might start to impact XR and the future of the Metaverse.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that's looking at the future of spatial computing and some of the ethical and moral dilemmas of XR technologies. You can support the podcast at patreon.com slash voicesofvr. So, continuing on my series of looking at XR privacy, today's episode I'm featuring Florence G'sell, who's a law professor in France and a bit of an expert on what's happening in the context of the European Union. She's tracking a lot of the different developments there, and was actually at the Stanford University Cyber Policy Center's Existing Law and Extended Reality symposium, where she was on a panel discussion with Susan Aaronson, who was arguing that the way to address some of these issues is through governance. Basically, I take that to be a self-governing, self-regulatory approach where companies are keeping themselves in check. And Florence G'sell is like, actually, in the European Union, we've figured out a really good way to regulate these companies, because we cannot rely upon these companies to regulate themselves, whether that's around how they're deploying these AI technologies, or, since she's also looking at blockchain technologies, all the different fraud that results from that. We're also going to be digging into how the Digital Services Act, the Digital Markets Act, the AI Act, and the GDPR are going to be influencing the future of the metaverse and these virtual platforms, with a keen look at how some of the different aspects of privacy may or may not be covered within this complex of different EU regulations, as well as some of the positive aspects of regulation in the EU and some of the downsides. There's also the Virtual World Initiative, aka the Metaverse Initiative, and we'll get some of Florence's reaction to that and some of her pessimism for what that's going to bring in its own right; if anything, it may highlight that there need to be amendments to this whole complex of other regulations that are being passed. There's been a whole slew of new regulations, and the AI Act is still being deliberated and discussed, and we talk about all these different things and what they may mean for the future of privacy and the metaverse. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Florence happened on Wednesday, January 18th, 2023. So with that, let's go ahead and dive right in.

[00:02:24.781] Florence G'sell: My name is Florence G'sell. I am a law professor in France. I teach at the University of Lorraine. I also teach at Sciences Po. And at Sciences Po, I lead the Digital, Governance and Sovereignty Chair. And over the past three, four years, the chair has been focusing mostly on platform regulation, but not exclusively platform regulation, because we look at any kind of technical evolution. And of course, we are very much interested in Web3, in internet governance, but also in extended reality and in the legal challenges of extended reality.

[00:03:04.940] Kent Bye: Okay, and so maybe give a bit more context as to your background and your journey into looking at these philosophy of law issues.

[00:03:12.312] Florence G'sell: Yes. Well, as a researcher, I've been extensively writing about the fundamental rights approach to technology. So I started with a report about AI and fundamental rights, because a few years ago, in 2020, the Fundamental Rights Agency, which is an institution of the European Union, published a report entitled AI and Fundamental Rights, and I contributed to this report for France. So I had the occasion to interview many people in France that were developing AI projects. We looked at the various projects that you can find in private companies, but also within the French administration, because we saw many people within the French administration developing AI projects for the government. So after looking at those various initiatives, we had to assess them from the perspective of fundamental rights. We had to look at the various challenges those initiatives raised. And so this was an interesting journey, because it was for me the occasion to speak with people that are not jurists, that are not lawyers, that are basically engineers, computer scientists, and startup founders that are developing technological tools, but are also very concerned about ethics and about legal issues. So this was my first experience with this question of technology and fundamental rights. And then more recently, last year, I co-wrote a new report, this time for the Council of Europe, on blockchains and fundamental rights. And we tried to look at the opportunities and the risks of blockchain technology for fundamental rights. And as always, we concluded that there were many opportunities, but at the same time, it is necessary to be very careful with this kind of technology, because there is always a risk of mass surveillance. Even if you use encryption tools, there is always a risk for freedoms. So basically, this is what I've been doing over the past few years as a researcher. And more broadly speaking, the chair that I lead has also published various studies from other authors; I'm not the only one that has published for the chair. We have so far published various studies regarding the EU regulations, the Digital Markets Act, the Digital Services Act. And of course, we continue following the implementation of those regulations and the new AI Act that is currently being negotiated at the EU level, which is of very strong interest when you're interested in extended reality, because the implications of the European AI Act for extended reality will be very important.

[00:06:24.933] Kent Bye: I had a chance to talk with Daniel Leufer of Access Now, who was talking about the AI Act and some of its different provisions. And I had a chance to hear you speak on a panel at the Stanford Cyber Policy Center; it was the Existing Law and Extended Reality symposium. There was a little bit of a debate, in that Susan Aaronson was saying, let's not focus on regulation, let's focus on data governance. But yet you were stepping in saying, actually, in the EU, we've kind of figured out how to do regulation with this human rights approach. And so maybe you could explain what that means in terms of the human rights approach, and why you think that's an effective way of starting to develop some of these tech policies to be able to start to rein in the power of big tech.

[00:07:08.069] Florence G'sell: Well, so my first remark about this would be to say that I suppose that in Europe, we do not really believe in self-regulation, and we have maybe a traditional way to look at regulation and at the way we can rein in big tech. But we really think that there is a necessity to adopt new rules and to have European regulators and national regulators carefully monitor what private companies do. So this is the idea that is behind the GDPR, the Digital Services Act, the Digital Markets Act. Of course, this creates a huge implementation issue and enforcement issue. But this is, I think, what is considered necessary in Europe. And what is interesting with all those European regulations is that they are mostly based on a fundamental rights approach. If you look carefully at the DSA today, you will see that there is a new concept, a new notion, which is the notion of systemic risks. It is required that the very large online platforms assess the systemic risks they create and try to reduce them. So what I could say is that this notion of systemic risks can be applied to other fields; it can be applied to extended reality. And the notion of systemic risk, as it is defined today in the DSA, refers to fundamental rights. Basically, a systemic risk for the DSA is a negative effect on the exercise of fundamental rights. So you have a list of the fundamental rights that are taken into consideration in the DSA: human dignity, private life, family life, personal data protection, non-discrimination, consumer protection, rights of the child. You have a very long list of fundamental rights that are specifically protected by the DSA through this notion of systemic risks. And I think that this notion is here to stay; okay, today it is in the DSA, but we are not finished with it. We will keep it, and I think that it will be used again. If we consider virtual reality and immersive environments, I think this notion of systemic risks can clearly be used. And of course, we will need to assess those risks. And today I already have colleagues in France that are promoting a new fundamental right that would be specifically related to extended reality. What they basically say is that we should probably establish the right to respect for mental integrity. Those are the words that we are currently using, but the idea is to say, okay, we already have regulations. We have very effective regulations. The GDPR is very effective, but the GDPR, for example, and it's the same with the DSA, those texts do not address the issues of mental integrity, mental self-determination, cognitive freedom. And so this, of course, goes much further than the protection of personal data. And so what's at stake here is really about the defense of basic fundamental rights. But we clearly need an assessment to determine what the risks could be. And today, I'm not sure that we have a sufficiently precise idea of what extended reality will be in the coming years. As for me, I wouldn't be able to say what's at risk at the moment, and of course we have, yes, important work before us.

[00:11:27.607] Kent Bye: Yeah, that's really helpful, and in alignment with a lot of my own assessment of looking at Rafael Yuste and the NeuroRights Initiative, which is trying to establish a set of neuro rights. The first three are really connected to mental privacy. And then you have identity, which is sort of the aggregation of inferences in modeling your identity, and then the right to agency or free will, trying to prevent nudging: once you have a really complete model of someone, based upon violations of mental privacy, then you're able to perhaps nudge people in ways that they may not even be consciously aware of. But at the same time, I don't currently see those protections existing within the GDPR or even the AI Act. There are certain definitions of biometric data that have been tied to identity, and in the trilogue deliberative process there are some versions that are eliminating that identity contingency. So I guess the question is, where would that mental integrity right be defined? Would it be an amendment to the Digital Services Act? Or would it be something new with the Virtual World Initiative or the Metaverse Initiative, to start to define some of the things that have been ignored in other contexts?

[00:12:34.685] Florence G'sell: So what I could say at this stage is that we already have a right to mental integrity that is protected by the Charter of Fundamental Rights. Today, Article 3 of the Charter of Fundamental Rights states that everyone has the right to respect for his or her physical and mental integrity. So this is something that is already provided by the Charter. But of course, we could try to draw new consequences from this right to mental integrity. For example, I have two colleagues that defend the idea of, how could I explain this, a principle of cognitive self-determination. Of course, this would focus on the risk of manipulation, the risk of emotional manipulation. That is a very specific risk. But this is at least something that could be done to protect people and to limit the manipulation of behaviors, which is clearly a risk, but this is not the only type of risk that you might have. Because when you look at data protection, for example, and again, we probably need to adapt the GDPR in this respect, we will have very specific data in immersive environments that will be collected. And I think that you said so at the conference, but it will be very easy to detect our emotions. And so this is a very specific category of personal data that is not listed within sensitive data. So maybe, again, we should probably create new categories to specifically protect the types of data that we will get in those immersive environments.

[00:14:21.264] Kent Bye: And when you're talking about needing to define and specify that, in what context does that happen? Is that going to happen in the Virtual World Initiative, with new specific carve-outs that define this, or is this going to be an amendment to existing things?

[00:14:35.269] Florence G'sell: I think in the case of sensitive data, it would be necessary to amend the GDPR, because today you have a list of sensitive data. The interest of including new types of data, new kinds of data, in this list of sensitive data is that there is more protection; it's not that easy to collect and process sensitive data. So we already know that we will have applications that will collect and process huge amounts of data about what we look at, our emotions, or our facial muscles, and so the expressions of our faces, even the inflections of our voices. We already know that those will be collected and processed, and I think it is of interest to include all this in the category of sensitive data. And so this would require an amendment to the GDPR. But we could go further, because again, you have people that are very much concerned about the processing of mental data and the processing of people's states of mind, of emotions. And so you have some researchers that suggest going further. So it would not just be creating a new category of sensitive data; it would also be trying to work out a new regime, a new scheme. Let me explain. It has been suggested, for example, to create a new category of sensitive processing rather than sensitive data. Why? Well, because through this notion, through this concept of sensitive processing, it could be possible to include intrusive processing as a type of processing that presents particular dangers. And so this goes very much further than just the notion of sensitive data. So here, this is something that is currently suggested by scholars, by researchers in France, but of course, this could lead to new amendments. I think that if we want to create a new type, a new kind of sensitive data, it would require amending the GDPR. But today we have the AI Act that is currently being drafted, being negotiated, and this opens a new possibility. And so we'll see in the course of negotiations if there are new amendments, new initiatives, and new provisions included in the AI Act, because over the past weeks and months, extended reality has been taken into consideration by the people that are currently drafting the AI Act. I think that there was an amendment that was voted in the European Parliament that said that the provisions of the AI Act apply to virtual reality. But this amendment is not necessary, because it applies anyway. But we can see that the drafters are very much preoccupied by the latest developments regarding extended reality. So this is really, really important. And of course, at the end of the day, the question is, should we prohibit certain processing? Should we prohibit certain applications? And I'm not really inclined to prohibit. But of course, when you have certain applications that are clearly designed to manipulate people, for example, this can be understood. But all this is related to what you presented, those ideas about neuro rights, because of course, the idea of neuro rights deals with technologies that are one step further, as far as I understand. But all this is related. So we have a lot of work to do, of course.
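
To make the sensitive-data point concrete, here's a minimal sketch in Python of the special categories that GDPR Article 9 actually enumerates today, next to the kind of new XR categories being discussed. The PROPOSED_XR_CATEGORIES set and the is_sensitive helper are illustrative assumptions for this discussion, not anything in the regulation itself:

```python
# GDPR Article 9(1) special categories of personal data, as enumerated today.
# Processing these is prohibited by default, subject to Article 9(2) exceptions.
GDPR_ARTICLE_9_CATEGORIES = {
    "racial or ethnic origin",
    "political opinions",
    "religious or philosophical beliefs",
    "trade union membership",
    "genetic data",
    "biometric data (for the purpose of uniquely identifying a natural person)",
    "health data",
    "sex life or sexual orientation",
}

# Hypothetical categories along the lines G'sell describes -- NOT in the GDPR.
# Adding them would require the kind of amendment discussed above.
PROPOSED_XR_CATEGORIES = {
    "emotion data (inferred from gaze, facial muscles, voice inflection)",
    "mental states / cognitive data",
}

def is_sensitive(category: str, include_proposed: bool = False) -> bool:
    """Return True if a category would get Article 9's heightened protection."""
    if category in GDPR_ARTICLE_9_CATEGORIES:
        return True
    return include_proposed and category in PROPOSED_XR_CATEGORIES

# Today, emotion data inferred in an immersive environment falls outside
# Article 9 unless it doubles as identifying biometric data or health data:
print(is_sensitive("emotion data (inferred from gaze, facial muscles, voice inflection)"))  # False
```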

[00:18:42.692] Kent Bye: Well, I guess to clarify, I think that XR technologies actually have just as many implications as other things like brain-computer interfaces that do more of literally reading your mind. As XR technologies add together all those different senses, you have the same types of threats to mental privacy and to identity and to degrees of agency. And so I wouldn't make a strict differentiation between virtual and augmented reality and brain-computer interfaces, because there is a spectrum where, over time, as you take eye tracking data and connect it with galvanic skin response or other emotional data or facial tracking or all sorts of other information, the point of XR is that all of these data are going to be bundled together and fused together. And with that fusion, you're going to be able to get new insights that we can't even imagine right now. And the thing that I've noticed, at least as I talked to Daniel from Access Now, is that there is this kind of tiered system where there are applications of AI that are completely banned. And then there are the high-risk ones, where at this point it seems like there's a little bit of debate as to whether or not it's going to be like a self-reporting system, where people have to report their high risks, with a bit of a self-regulation aspect to it, which is not great. But then there's a tier lower than high risk, sort of a mild or normal risk, which is where existing biometric inferences live now, according to what Daniel was saying. And so in terms of the hierarchy of sensitivity, that's actually pretty lowly ranked in terms of the types of risks, meaning that I assume that as long as users consent to this, then the companies are free to do whatever they want. And I think this is...

[00:20:19.475] Florence G'sell: Yes, but the thing is, in the current version of the AI Act, if a given tool is classified as high risk, there are still many due diligence obligations. And this is quite something. If you look carefully at those due diligence obligations, of course there is a system of self-certification, of reporting, of risk assessment, but still you need to be registered. There is some monitoring. And the thing is, if you look carefully at the current version of the AI Act, most AI tools are high risk. When you look at the list, you have the main provisions and then you have a clarification given at the very end of the regulation. And when you look at those provisions, you can see that you have so many, many categories that are included in high-risk tools that it looks like it's almost everything that we use on a daily basis, as far as I understand. There is one point that I could highlight about the AI Act, which is the fact that you have certainly seen that the applications, the tools that are designed to distort human behavior, are prohibited. Basically, the provision says that those tools that can distort human behavior in a manner that causes physical or psychological harm are prohibited, which opens the possibility of litigation about what could cause physical or psychological harm, but at least the tools that can manipulate people are carefully regulated. The second remark that I could make about the AI Act is that the original version of the AI Act prohibited social scoring by public authorities. And the example is, of course, the Chinese social scoring system. But a few weeks ago, I think it was right before Christmas, a new provision was added that extended the prohibition to private actors. So basically, the provision today, you know, it's in the process of being negotiated, so we don't know what the final version will be, but as of today, the regulation prohibits the use of AI tools for social rating purposes. And this, again, has been extended to private companies. Which is very interesting, because basically we are scored everywhere. I am scored on the Uber application. So this is interesting, because what I think is that in extended reality environments, of course, the private companies that develop those environments might be tempted to score users depending on their behavior, depending on their compliance with the terms of use. And so this is, again, I think, an indication that the AI Act is really designed, or is really meant, to be applied in those environments that will, of course, be full of AI tools and AI devices.
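
As a rough mental model of the tiered structure described here, the sketch below encodes the AI Act's risk pyramid as it stood in the early 2023 drafts. The tier names are simplified and the example mapping is an editorial reading of the conversation above, not an official legal classification:

```python
from enum import Enum

class AIActRiskTier(Enum):
    """Simplified view of the draft AI Act's tiered structure."""
    PROHIBITED = "prohibited"      # e.g. harm-causing behavioral distortion, social scoring
    HIGH_RISK = "high risk"        # Annex III categories: registration, risk assessment,
                                   # self-certification, monitoring
    LIMITED_RISK = "limited risk"  # transparency duties: disclose chatbots, emotion
                                   # recognition, deepfakes to the user
    MINIMAL_RISK = "minimal risk"  # everything else, essentially unregulated

# Illustrative mapping of systems mentioned in this conversation:
EXAMPLES = {
    "tool distorting behavior to cause physical/psychological harm": AIActRiskTier.PROHIBITED,
    "social scoring (extended to private actors in a late draft)": AIActRiskTier.PROHIBITED,
    "emotion recognition in an XR headset": AIActRiskTier.LIMITED_RISK,
    "the many everyday tools swept into Annex III": AIActRiskTier.HIGH_RISK,
}

for system, tier in EXAMPLES.items():
    print(f"{tier.value:>12}: {system}")
```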

[00:23:46.965] Kent Bye: Yeah, just a quick comment on that: I know, just in talking to a number of different social VR companies on background, they have a lot of those types of scoring systems in place, but in a shadow way, just for trust and safety. Because as they're filtering through this new paradigm of conduct moderation, it means that this is real-time conduct that they have to monitor, and so someone's behavior over time has to be tracked in some fashion. And so if they eliminate that or make it illegal, then it would actually have negative consequences for trust and safety within the context of their environments. And so there are various trade-offs like that, where it depends on the context under which these things are being applied. I mean, even as we talk about the data, Helen Nissenbaum's contextual integrity is trying to differentiate how there are specific contexts and appropriate uses given those contexts. And the thing that I keep coming back to is the thing that you mentioned about the processing: it's not just the data that's sensitive, because we talk about the sensitive data, but really, the heart of it is the processing, what happens to those real-time inferences from that data. And as far as I could tell from talking to Daniel from Access Now, a lot of those are kind of like the lowest rank of risk, and so there aren't even high-risk protections from the AI Act for some of these different biometric inferences that are happening. And so as we move into XR, either there has to be a collective effort within the context of the European Union to recognize the threats of some of these biometric inferences and to raise the bar to get some of those protections, or, like you said, some sort of other regime that's going to establish either those processing rights or figure out how to really protect those aspects of mental integrity. Because as far as I could tell, they're not covered in the GDPR, which would need to be amended, they're not covered in the AI Act, and they're not covered anywhere else. Which means, if there's this new Virtual World Initiative or Metaverse Initiative, I don't know if that's the context under which this starts to be adopted, or if it makes sense to use that as an initiative to recognize the threats of the metaverse and the new types of data that are made available by these sensing technologies, and if that then gives them a lens to start to make all these other amendments. But as of right now, there are no protections for this type of biometric inference, or the biometric psychography that Brittan Heller spelled out.

[00:25:59.470] Florence G'sell: Yes. Well, what I would add is that in the AI Act today, for all AI systems, or almost all AI systems, you have these transparency obligations, which are at least a way to inform users that they are interacting with an AI system, that their data is being processed by an emotion recognition system. And again, recently there was a new provision added that the users of an emotion recognition system should be informed when they are exposed to such a system. So these are the types of provisions that are currently being added to the AI Act, even though they are not that clear. Because when you have a provision that says, okay, individuals should be informed that they are exposed to an emotion recognition system, I'm not sure that I understand what it really means, what it precisely means. So yes, we have safeguards at the moment in the text, but I don't know if we will have sufficient safeguards to deal with extended reality. This I don't know, because it's due to the way the regulation is negotiated.

[00:27:17.345] Kent Bye: Yeah, because at the symposium, you had mentioned that you were a bit skeptical that we needed any type of Virtual World Initiative or Metaverse Initiative, and that the GDPR with the Digital Services Act and Digital Markets Act and the AI Act should theoretically be sufficient to cover all those different dimensions. But with the things that I'm calling out, like this processing dimension of the biometric data, I don't know if that would be the context. I don't know if you have any other thoughts on whether you still think that it's not really going to lead to any radical new regulations, and that what is already there with that complex of existing regulations should be sufficient?

[00:27:53.533] Florence G'sell: Well, I would say that currently, you know, we've had so many regulations adopted in a few years. So, of course, when you have people saying, well, we need a Metaverse Act, basically people laugh. This is not taken seriously, because we already have the GDPR. So, of course, the GDPR might be amended. We have the AI Act, and as I said, my impression, and it's not that easy to follow the negotiations because you do not always have the information in real time, so you read the press and try to work out what's currently going on, but currently the AI Act is being drafted and negotiated to take into consideration most, not all, but most of the challenges raised by extended reality. Of course, it might not be sufficient. I am a little bit concerned about the fact that the DSA, for example, is really designed for the current state of social media, you know, with a statement of reasons that must be given to users in case of a decision about moderation. So it's very, very specific to content that is in writing on social media. And we will have other types of behaviors, of course. I think that you highlighted this at the conference, and others did as well, but you have attitudes, you have behaviors. And of course, this is really problematic, because it will be more and more difficult to police behaviors in those environments. And we still need to imagine how we will be able to report someone, or report someone's behavior. So I know that some people at Meta told me that they were trying to develop the possibility to report someone. They are developing what they call the shield, so that people can be protected, more specifically on the Horizon platform. But of course, it's probably not enough. And maybe I'm thinking of the worst, but are we today ready to prevent people from organizing a terrorist attack in an extended reality environment? I'm not that sure. Do we have the tools to detect those behaviors? So, of course, this raises new issues, and we will probably at some point have to regulate. My impression is that for the moment, it's not the main preoccupation of the authorities, or even of experts, because we have so many regulations that we have to deal with at the moment. And again, I think we still have to think about the real risks raised by extended reality environments. To me, this is not done already. We do not have a clear idea of what's at stake, but maybe I'm just talking about myself. So my position would be that we will definitely have to adapt our existing regulations, maybe to amend the GDPR, maybe to amend or adapt the DSA. And hopefully, yes, we will be able to deal with the major questions and the major issues with what we have. But after this, you know, we will have new questions. And the last remark that I could make is about the question of legal personhood. I don't know if you're familiar with this. Well, basically, it's interesting, because I think this question will come back. A little bit more than five years ago, in 2017, the European Parliament voted a resolution urging the European Commission to adopt a new text, a new regulation, creating a new legal status for the most sophisticated robots. The idea would be to grant legal personhood to the most sophisticated AI, those AI tools that are really autonomous and that would therefore be able to conclude a contract, to be liable, to take out insurance. Why? Because, well, the European Parliament said, okay, those tools are going to be able to make decisions and to act autonomously. So if no one controls them, it will be very, very difficult to find someone that will be liable for them without controlling them. So the best solution is to grant them a form of legal personhood, so that the question of liability, the question of insurance, the question of basically the costs could be solved. So it's an interesting question. At the time, there was an enormous pushback against this proposal. But it was a resolution voted by the European Parliament; it was official. There was an enormous pushback from the other European institutions, and many legal scholars opposed that suggestion, mostly for ethical reasons, saying, okay, today you have humans that have legal personhood, and you have some organizations that are legal entities, but those organizations gather humans. So you always have a human behind all this. And here you want to grant legal personhood to a machine, which is not acceptable. So there was a real pushback, but I think this subject will come back, maybe in another form. What I think is that at some point we will have AI tools that are so advanced that, to solve basic legal issues, when, for example, I am concluding a contract with an AI-driven avatar, well, for the contract to be valid, this AI-driven avatar must have a kind of legal personhood. Otherwise, it's too complicated. So for my part, I'm totally sure that this question will at some point be discussed again. And this is also related to the various Web3 applications that you can find, for example, the decentralized autonomous organizations. And so again, this is something that is very specific. It's a new development. It's not just about protecting users or guaranteeing that users' fundamental rights are protected. It's about taking into consideration the legal issues and practical issues raised by those evolutions. So I don't know what your opinion would be about this. It raises huge ethical questions.

[00:35:04.499] Kent Bye: In the context of virtual reality, I hear people talk about how, in the future, if we do have these types of autonomous robots and they feel threatened, then maybe we would meet them in a virtual space where there would be fewer threats to their physical integrity. So I feel like these virtual bots are probably going to exist in a virtual space before they exist in a physical space. But another thing that comes up as you talk about that: are you going to sue a robot? Would that mean that this AI entity has their own bank account, because they're able to open one, so they're able to manage their own resources in a way?

[00:35:37.106] Florence G'sell: This was the idea behind the proposal voted by the European Parliament, that they would have their own bank account and they would be able to manage the funds themselves. This is also what's at stake with DAOs, of course: the fact that you would have machines managing funds, which is dangerous.

[00:36:00.370] Kent Bye: And to some extent that might already be happening, in the way that algorithmically driven decisions are made in the stock market. I wanted to ask one question around the Digital Markets Act, because we talked about the Digital Services Act, and of the four things, that's the one we haven't really talked about that much. But my understanding of it, at least, is that with the Digital Markets Act, you would take something like the iPhone and iOS, which at this point has no way for you to sideload something or to have alternative app stores. Is something like the Digital Markets Act intended to try to open up the hold that companies like Apple or Google have on the new markets that are enabled by the technology platforms that they're creating? They're able to take like a 30% cut, and they have different ways that they could prevent other people from being on their app store, so there's incentivized anti-competitive behavior there. So is part of the Digital Markets Act to try to break that lock and start to have the future of these technologies have more open marketplaces?

[00:36:58.209] Florence G'sell: Well, you have various provisions in the Digital Markets Act. Basically, there is one aspect that is very important in the Digital Markets Act, which is the fact that interoperability must be guaranteed. And you also have various provisions about the App Store. So you have an obligation, for example, for the Apple App Store, not to accept everything, but to guarantee the same conditions to everyone and to every type of professional. And the main idea is that users shouldn't be prisoners of a given environment, which is what Apple tries to do. So of course, interoperability and portability are major aspects of the DMA. And you have so many obligations in the DMA, many things that I could say, but the idea is that all users must be guaranteed equitable conditions, especially professional users and professional vendors. It remains to be seen how this regulation will be implemented. You already have a lot of litigation regarding Apple and Amazon, regarding discriminatory practices, et cetera. So we'll see. But the idea is that we'll have the possibility to leave a given environment and go to others. I don't know if my answer is totally satisfying for you, but...

[00:38:24.057] Kent Bye: Well, I guess one of the things that comes up when I hear some of that rough description, and I don't know if this would apply, but it makes me think of things like Fortnite, which has a whole economy where you can only buy avatars within that context, and you might not be able to export them, but that's because they've got licensing agreements that they've signed to be able to get those avatars. Or there's something like VRChat or Rec Room or Horizon Worlds, or even Second Life, where most of these platforms have been built in a closed, walled-garden context, meaning that even for them to work, they have to create a closed system in that way. But we have the open web, which is trying to create more interoperability, like the Metaverse Standards Forum from the Khronos Group and this whole consortium of different companies, ranging from Meta to IKEA and other folks, that are trying to create those interoperability standards for the metaverse. But I don't know if the Digital Markets Act would be applied to these existing virtual worlds or game worlds that have created a closed system.

[00:39:21.463] Florence G'sell: Well, the thing is, the DMA applies to what we call gatekeepers. So basically, the notion of gatekeepers concerns the biggest platforms. Let me look at the criteria. To be a gatekeeper, yes, you have at least 45 million users in the EU. So, of course, this specific regulation only applies to the biggest platforms or tech companies, so Apple, for example. So for those small companies that are currently developing new virtual reality environments, the DMA will not be applicable. And one last remark, because I think it's really important to highlight this. Of course, everyone wants portability. Everyone wants to be able to go to another environment and to bring the data, but it can be complicated. Let's assume that you have data on your self-driving vehicle, and then you decide to change and to buy a new vehicle from another brand, and you want to bring the data with you. It will be very difficult for you, not to mention the apps, and then there might be a legal issue, even in terms of IP rights. So of course we want portability, we want interoperability, but in practice this will raise new types of problems and new types of issues. I'm not saying that it's not desirable; I'm just saying that, of course, nothing is easy. Nothing is easy. But my answer about the DMA is that yes, it focuses on the biggest companies, and the idea is to avoid too many abuses from those companies. Because of course, what they are currently doing, and it's the case with Amazon, it's the case with Apple, is they really try to prevent their users, especially their vendors, from interacting outside the platforms. They want to impose their own authentication services, their own payment services. And of course, it's not tolerable. So basically, this is the idea of the DMA. And there is another aspect that is about data, because the DMA provides that those big companies will have to share data with their users, especially their professional users, for the data that those professional users create. So I would say that the DMA is designed to create a more equitable world between those big tech companies and all those other companies that are trying to do business on the internet, but it doesn't solve everything.
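
For reference, the quantitative presumptions for gatekeeper designation come from Article 3 of the DMA. The interview only cites the 45-million-user figure; the turnover and market-cap thresholds below are added from the regulation's text, and the function is a deliberate simplification of a designation process that also involves qualitative assessment:

```python
def is_presumed_gatekeeper(
    eu_turnover_eur: float,           # annual EU turnover, each of the last 3 financial years
    market_cap_eur: float,            # average market capitalization, last financial year
    monthly_active_end_users: int,    # EU end users of the core platform service
    yearly_active_business_users: int,
) -> bool:
    """Simplified DMA Article 3 quantitative presumption (illustration only)."""
    significant_impact = eu_turnover_eur >= 7.5e9 or market_cap_eur >= 75e9
    important_gateway = (
        monthly_active_end_users >= 45_000_000
        and yearly_active_business_users >= 10_000
    )
    return significant_impact and important_gateway

# A small social VR startup falls far below these thresholds, so, as G'sell
# notes, the DMA would not apply to it:
print(is_presumed_gatekeeper(2e8, 1e9, 500_000, 200))  # False
```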

[00:42:06.914] Kent Bye: Yeah, just a few more questions to wrap up here. We talked about all these different regulations. I feel like, in some ways, the GDPR is at the phase of starting to be enforced, of seeing how strong the law is going to be in terms of actual practices, and seeing how regulators can enforce some of these different provisions on companies that have had their own interpretations. I know there was recently a decision in Ireland that was looking at Meta and the way they were maybe in violation of the GDPR. So I'd love to hear any reactions you have in terms of the enforcement phase and what we can look forward to in the future, as the appeals process starts to work out, whether we're going to kind of wait and see how strong it is, or any other thoughts on what's next: we've created the regulations, and now it's the process of actually enforcing them.

[00:42:55.835] Florence G'sell: Well, I think that it's, of course, a very good thing that those national data protection authorities take action and sanction the companies that do not comply. But to me, and it's my own personal opinion, I find it a little bit problematic that some of the data protection authorities are, to me, a little dogmatic. I clearly understand that, of course, you want to protect users, you want the GDPR to be complied with, and you want users to clearly consent to the data processing. But if you are too strict, you might take decisions that are not economically sustainable. Let me give you an example that I think I gave at the conference, which is a decision from the French Data Protection Authority that provides that the application Google Analytics is illegal, because this application is not compliant with the GDPR, mainly because Google transfers the data to the US, and you have very specific requirements in the GDPR regarding transfers, and the European Court of Justice is very strict. So basically, at this moment in France, Google Analytics is illegal. It was said in the press, and the major French companies were notified that using Google Analytics as a company is an illegal practice. But all the SMEs use Google Analytics. It's a reality. Of course, they are not even aware of the decision of the French Data Protection Authority. And if they had to stop using Google Analytics, I suppose that they wouldn't know what to do. So basically, it's not enforced. We have a very dogmatic decision stating, well, using Google Analytics is illegal in France, but as of today, businesses and companies continue to use Google Analytics. And maybe one day you will have a user that will complain because he or she went to a hair salon and the hair salon uses Google Analytics, but I don't think this will ever happen. And so this is something that is true about the GDPR, which is the fact that in certain aspects, it goes too far, and then it's not complied with; it's not applied. And so this is basically my regret. And my other regret about the GDPR, and this is something that we should probably ask about when we deal with extended reality, is that we keep consenting. I consent every day, all the time. I consent, and I barely read what I see on my screen. I just consent because I need to move on. And so this is what users do. And so this is what I think was the main flaw, the main problem with the GDPR, which, again, was an enormous progress that makes us probably more protected, less targeted, et cetera. But still, you have flaws, and we should ask whether, you know, in virtual environments, we would like to have those kinds of pop-ups coming up all day long asking if we consent. So yes, this is one of the questions that we might have.

[00:46:28.541] Kent Bye: Hmm. Great. I have one question about Web3, and then a question just to wrap up. You said you were working on different Web3 and cryptocurrency issues. Are there any pending regulations around cryptocurrency? Are there any sort of human rights approaches that they're taking around cryptocurrency? Because there's obviously a lot of fraud or abuse that's happening. So is treating it like existing financial securities one approach, to kind of use existing laws, or do there need to be new laws or human rights approaches that you're looking at in terms of Web3 and cryptocurrency?

[00:47:02.070] Florence G'sell: Yes. So there was a regulation. The various member states have been regulating the crypto world over the past few years. It is the case in France, because in France in 2016, 2017, we had this ICO hype, as everywhere else. And so the French regulators quickly reacted, and they wanted to have a specific regulation. So we already have laws in France that clearly regulate cryptos, ICOs, exchanges. So all this is already adopted in France, but at the EU level, in 2022, a new regulation was adopted, which is called the Markets in Crypto-Assets Regulation. We call it MiCA. And it is expected to enter into force in 2023, well, this year. And it's a regulation that deals with crypto assets and all related activities and services. And of course, you have provisions dealing, for example, with stablecoins. So it's a very particular regulation, but one that deals with cryptocurrencies, with the crypto economy. At the moment, we do not have anything, for example, regarding smart contracts, or regarding blockchain technology broadly speaking. So this will probably come. For example, at the EU level, and it's true at the member states level too, you have various initiatives about digital identities. And so even at the EU level, there is a very strong intention to take into account, for example, what is called self-sovereign identity, those types of decentralized identities. So they are currently being taken into consideration. The EU is currently developing a specific wallet, an EU identity wallet, so every user would have an EU identity wallet. And so the Commission services are clearly following the technical evolutions in the field carefully. So yes, as for regulations, we have regulations of the crypto economy. And for the other aspects of blockchain technology, and more broadly speaking Web3, we have a few initiatives, for example, regarding digital identity. You have various states, and the EU Commission is also working on this, about evidence: can you provide evidence from a blockchain? Which is not clear at the moment, well, at least before the French courts; I don't know what they would do. So this is the kind of initiative that you can find. You do not have anything, broadly speaking, regarding blockchain technology, regarding decentralized autonomous organizations or their legal status. At one point it will come. At one point we will have to discuss this. We do not have any initiative about a huge issue on decentralized systems, which is the issue of the diversity of jurisdictions and applicable laws. And at some point, again, we will have to discuss this, because we need to know which law is applicable on a given decentralized system, or if there is no applicable law, which is another option. But this is not the case for now in Europe. And I think the EU authorities have so much to do that they are not considering these types of issues at the moment.

[00:50:55.070] Kent Bye: And then the final question, to wrap everything up here: I usually like to ask people to imagine what they think the ultimate potential of these immersive technologies is. And for you, I'd like to add a little twist in terms of the role of regulation, because we have this imbalance with these big tech companies, which, as you've talked about, can sometimes be stronger than nation states. So what is your vision of the future, with the right balance between the innovation and possibilities that are made possible with AI and virtual and augmented reality and decentralized web technologies and cryptocurrency, everything all added together? What's the ultimate potential of that, but with human rights being preserved and human flourishing being emphasized with the right kind of regulations? What does that world look like to you?

[00:51:40.531] Florence G'sell: Well, it depends if you're asking me what I anticipate or what I would like. I can clearly speak about what I would like. And what I would like is clearly a world where you would have an approach that would involve users, that would involve civil society, that would involve NGOs, when it is necessary to decide the design, the architecture of a given environment, and which guarantees the users will have. For the moment, and this is my regret about the European approach, it's very binary. We have the regulators on the one hand, and on the other hand, we have the private companies. And of course, the most recent regulations, and it's true with the DSA, are trying to create new processes, new methods, since those regulations involve civil society, some NGOs, et cetera, but it's not enough. So what I would like is a world where those technological evolutions can be developed, but not only by computer scientists, not only by private companies that look for profits, but at the same time by stakeholders, various types of stakeholders, and not only regulators. So what I would like to see is everyone around the table making the main decisions about the design of a platform, or the main decisions of a given company, a given platform, because this is the only way to have more balance. And I'm not sure that this will ever happen. I know that you have many people that are currently very enthusiastic about Web3, about decentralized systems. I am not that sure, and I really regret this, that it will work that well. Myself, I'm still on Twitter, but I'm on Mastodon at the same time. And I still spend more time on Twitter than on Mastodon. And sometimes you feel like, okay, centralized environments are not that bad when you have just one company to talk to and to complain to. And I'm afraid that people at some point might get used to those very big platforms that have an overwhelming power, but that we can talk to, that we can complain to, and that can do everything for us. And so this is what I'm afraid of, because what I would like is, of course, a more decentralized environment, but something that would be really organized so that everyone can have their say. And maybe it's idealistic.

[00:54:38.137] Kent Bye: Yeah. I'm in a similar boat, in the sense of Metcalfe's law, which says that the value of a network grows with the number of people that are in that network and the connections between them. So the platforms where you have more connections are going to be more valuable to you. So I'm also on both, but I also find myself in a similar situation, where I have an existing large network that I've cultivated over 16 years on Twitter, but I'm still new and just getting nested on Mastodon. I don't know if it's that there's one company to complain to, more than the network effects of the relational dynamics that I have on these different platforms; as I cultivate new relationships and new connections, then I'm sure those patterns will change over time. But yeah, I think that's the sort of dynamic. But I also really appreciate all your insights and your time here. And is there anything else that's left unsaid that you'd like to say to the broader immersive community who are wrestling with these different issues?
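
As an editorial aside, Metcalfe's law is usually stated more precisely as the claim that a network's value scales with the square of its user count, since the number of possible pairwise connections grows quadratically:

```latex
% With n users there are \binom{n}{2} possible pairwise links, so the
% network's value V is taken to scale as
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}
```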

[00:55:29.302] Florence G'sell: Okay. Okay. No, but I think I've said everything. Hopefully, hopefully.

[00:55:35.346] Kent Bye: Awesome. Well, thanks again for your time and for giving a bit of a tour of what's happening in the EU and your perspectives on all this stuff. So thank you.

[00:55:43.070] Florence G'sell: Thank you. Bye bye.

[00:55:44.731] Kent Bye: So that was Florence G'sell. She's a law professor in France; she teaches at the University of Lorraine and leads the Digital, Governance and Sovereignty Chair at Sciences Po. So I have a number of takeaways about this interview. First of all, just reflecting upon how far ahead the European Union is in terms of taking this human rights approach and applying it to regulating all these different technology companies, it seems to be a winning formula. There's a paper that was put out back in 2015 by the European Data Protection Supervisor, an EU institution, called Towards a New Digital Ethics: Data, Dignity, and Technology. So they have these different human rights principles at a high level, and then they think about, okay, how the technology is going to be potentially impinging on some of these rights. She was making the argument that there's already a right to mental integrity and mental self-determination that's in the Charter of Fundamental Rights, but there may be new consequences of this. And so part of her perspective is that there's this whole new slew of regulations that have passed, and some of them are going to take a few years to even become operative and enforceable. And we're already in the phase right now of seeing how the GDPR is or is not going to be enforced by some of the individual member states of the EU. So her take was that, within the context of the EU, there's a little bit of skepticism that there needs to be yet another initiative in the Virtual World Initiative or the Metaverse Initiative. But what may actually come out of it is that, with the whole complex of new biometric and physiological data that is going to be associated with these XR technologies, they may need to actually amend the GDPR to accommodate that. And talking to Daniel Leufer, he was saying that even the AI Act may start to redefine how biometric data is being considered, and that there's a relationship between all these different acts: they have this flexibility for future legislation to change the existing legislation. And so Florence's take was that maybe there'll be something like that, that we'll see some of the different gaps that will need to be covered either with the AI Act or the GDPR, or with these other two new things, the Digital Services Act and the Digital Markets Act, which are trying to address these big platforms as well, trying to ensure that there aren't anti-competitive monopolies created within the context of any one platform, and that there are certain principles of interoperability to try to ensure there's competition in the context of these markets. So it sounds like some of the different things that are being passed are going to start to address not only what big platform companies like Facebook slash Meta or Google are already doing, but also what Apple is doing with their App Store markets as well. Also, a really fascinating discussion there about whether or not it would just be easier to grant some of these autonomous artificial intelligence agents personhood.
I personally think that's probably gonna start within the context of these virtual worlds, where you don't have to think about all the different constraints of the physicality of these robots. We'll probably have self-generating autonomous virtual beings in the context of virtual worlds way before we're gonna have robots. But it does beg the question as to whether you're gonna start to have a whole new tier of regulation that is regulating these, or whether you just grant these machines personhood to make it easier to apply the same rules to these robots that would be applied to humans. It's kind of a scary future as you think about these autonomous artificial agents, these robots. I mean, the way that we think about it now is that there's usually an owner in the economic context who paid for it. But if they're owning themselves and out there being completely autonomous, do they have their own bank accounts? Do they have their own companies? How do you ensure that they're living up to the same ethical standards as humans? I mean, it's kind of like this weird science fiction future that we're stepping into. The EU is kind of at the frontier of thinking about some of these things, and given that it was even introduced as a possibility and that it was shot down at the time, Florence believes that it's going to come back, because she just sees it as an inevitable evolution of where we're going in terms of the future of technology. So, yeah, we'll see if it comes up again, and if it's passed within a larger changing context. And I suspect that these autonomous agents are going to be living in the context of these virtual spaces way before we're going to have autonomous robots. Yeah, as time goes on, I guess they'll be looking at this personhood for AI as we move forward. And yeah, there is the AI Act, and we talked about that; I obviously went into a deep, deep dive with Daniel Leufer, where we covered in great depth some of the different aspects of how biometric data is handled and the different tiered systems. And Florence was emphasizing that there are different transparency obligations for each of these, at least in terms of disclosing to the user. But based upon my conversation with Daniel Leufer, some of the biggest concerns, around emotional detection as well as biometric data, fall into one of the lower tiers, which doesn't even have the stricter high-risk reporting or government monitoring obligations. Yeah, I guess as we move forward into deploying these different types of systems, I just feel like the complexity is increasing at such an exponential rate that I don't even know, as entities like the EU try to wrap their heads around enforcement of these different systems, whether they're going to be able to keep up. There's already a technology pacing gap.
And here in the United States, we're just so woefully behind; like I've said, it feels like 5, 10, 15, or 20 years behind what the EU is doing. So it's always good for me to hear the different issues that they're thinking about. But yet, even with Florence talking about how Google Analytics is illegal within France, there's not necessarily the same type of enforcement on all the, say, mom-and-pop marketing shops that may be deploying it on all these different websites. So there's kind of an ideal version of what they're trying to live into, but then there's the pragmatic reality of what the culture is already doing to be able to do their jobs, and, yeah, trying to take these human rights approaches and deploy them across all these different emerging technologies. I guess we'll see over the next couple of years to what degree they'll be able to successfully rein in some of the potential abuses or harms that are coming from these emerging technologies. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue bringing you this coverage, so you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
