The VR Privacy Summit happened on November 8th and brought together representatives from the top XR companies, start-ups, VR experts, medical academics, and legal experts to talk about the potential risks and benefits of having access to biometric data from VR technologies. I talked with co-organizers Philip Rosedale of High Fidelity, Jeremy Bailenson of Stanford, and independent researcher Jessica Outlaw at the end of the VR Privacy Summit to capture some of the highlights, takeaways, and next steps for facilitating a broader conversation about the future of privacy in immersive technologies.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
Stay tuned for more information from the organizers of the VR Privacy Summit, but in the meantime be sure to check out my other three interviews from the VR Privacy Summit:
- #714: VR Privacy Summit: Electronic Frontier Foundation on Privacy on the Web
- #715: VR Privacy Summit: Open Web Privacy Lessons for the Immersive Web
- #716: VR Privacy Summit: Medical Insights into VR Privacy + Health Benefits of Biometric Data
Also, if you’d like to get ramped up on some of the complicated and nuanced issues around privacy with immersive technologies, then check out some of my previous interviews, which cover a broad range of issues featuring legal, medical, and technical experts. There are also a number of conversations with some of the biggest VR companies about privacy in VR:
- #493: Is Virtual Reality the Most Powerful Surveillance Technology or Last Bastion of Privacy?
- #516: Privacy in VR is Complicated & It’ll Take the Entire VR Community to Figure it Out
- #517: Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior
- #676: ACLU’s Jennifer Granick on Surveillance, Privacy, [the Third-Party Doctrine,] & Free Speech
- #641: Oculus’ Privacy Architects on their Open-Ended Privacy Policy & Biometric Data
- #645: Oculus Go + Open Questions Around Facebook, Privacy, Free Speech, & Virtual Governance
- #520: Oculus’ VR Privacy Policy Serves the Needs of Facebook, Not Users
- #533: High Fidelity is Architecting for VR Privacy with Self-Sovereign Identity
- #625: Decentralizing Identity in VR with [Simbol] & Self-Sovereign Identity
- #670: A Primer on Self-Sovereign Identity Standards with Kaliya Young
- #684: Insight Engineering: Data Portability, Identity, VR, & The IoT Edge
- #669: Internet Co-Inventor Vint Cerf on Decentralized Internet Challenges
- #475: Designing Google Earth VR [with questions about Google’s Privacy Policy]
- #507: Unanswered Questions about VR Privacy & Google
- #492: HTC’s Dan O’Brien on Vive Tracker & Privacy in VR
- #360: Open vs Closed Metaverse: Project Sansar & The New Experiential Age
- #365: Democratizing Neuroscience with OpenBCI & [Unique Biometric Signatures Could Prevent Anonymous Data Collection]
- #514: Tobii Recommends Explicit Consent for Recording Eye Tracking Data
- #518: Advanced Brain Monitoring EEG Metrics [& Biometric Data Privacy]
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So on November 8th, there was a VR Privacy Summit that happened at Stanford University, and it was a collaboration between Philip Rosedale of High Fidelity, Jeremy Bailenson of Stanford, and Jessica Outlaw, an independent researcher. I was involved a little bit with the logistics, but mostly my contribution was doing the coverage. Starting back in March of 2016 at the Silicon Valley Virtual Reality Conference, I started to hear that people were talking about the concerns around privacy within virtual reality. And since that time, I've been talking to people about how there's this complete paradigm shift moving from a 2D realm into an immersive web, where there's going to be so much information coming from these sensors about our unconscious biometric data that it's going to be able to tell information about ourselves that we don't even know we're communicating. There's a part of our unconscious psyche that could be made available if this biometric data is being captured and recorded. But there are also some amazing healing possibilities for what is going to be possible with this type of data, whether it's through medical applications or quantified-self applications. Or there could also be really malicious uses for this type of data if it's collected and turned into psychographic data that could be used against us in various different ways. And so I think there are a lot of risks and potentials that the entire industry is still trying to figure out. And so what happened was that Philip Rosedale and Jeremy Bailenson wanted to bring together all of these different people from different companies and have a meeting that was closed and private, but using the Chatham House rules. And so we're able to talk about the content of what was discussed, but not who specifically said it. But after the VR Privacy Summit ended, I was able to talk to eight different people on the record about their takeaways and impressions, what they had to contribute or what they learned, and what's next for them. And so I talked to someone from the EFF, someone from Mozilla, three medical professionals, and then the organizers of the VR Privacy Summit, Jeremy Bailenson, Jessica Outlaw, and Philip Rosedale, which is what we're going to be covering in this interview: what their big takeaways were for the day and what's going to be happening next, since this seems to be the first gathering, but it's certainly not the last, and it's opening up the dialogue to broaden the coalition to bring in more technology companies, but also more researchers, more privacy organizations, and certainly a lot more medical professionals as well. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Jeremy, Jessica, and Philip happened on Thursday, November 8th, 2018 at the VR Privacy Summit at Stanford University in Palo Alto, California. So with that, let's go ahead and dive right in.
[00:02:46.710] Jeremy Bailenson: My name is Jeremy Bailenson, and along with Jessica, Kent, and Philip, we hosted a privacy summit to try to figure out how to protect one's nonverbal and other data in augmented and virtual reality.
[00:02:58.227] Jessica Outlaw: My name is Jessica Outlaw. I'm a culture and behavior researcher in virtual reality. And one of the things I did today was lead a workshop where people had to identify what were the risks of not doing anything around privacy and just imagining adverse outcomes. And then afterwards, imagining solutions together that would prevent those adverse outcomes.
[00:03:21.218] Philip Rosedale: And I'm Philip Rosedale. I'm the CEO of High Fidelity, and before that the founder of Second Life. Along with Jeremy (I think it was the two of us who initially started talking about this idea some time ago), I have been passionate about bringing people together to contemplate what we need to do about privacy if, in fact, virtual worlds and VR devices really start to take off in the next couple of years, which I think they're going to.
[00:03:46.660] Kent Bye: Yeah, and I just wanted to give a shout out to also some support staff from High Fidelity that helped coordinate today. I don't know if you want to name who else was helping put this together. Absolutely.
[00:03:54.335] Philip Rosedale: We had Ashley and Christine here helping us, and it's been a great experience.
[00:03:59.325] Kent Bye: And so I went through today, and I'm just wondering if you could sort of summarize, like, what was the big takeaway, or what was, like, who was here, or what was the importance of what just happened today? What kind of ideation, and what was maybe figured out?
[00:04:12.994] Jeremy Bailenson: So from my perspective, we had an amazing group of both attendees and speakers. We had a California Supreme Court Justice. We had people in charge of the privacy policies for the major tech companies. You know, after seven hours of intense deliberation, I personally came away with at least three new ideas that I hadn't had about how to think about win-wins where companies can still thrive and consumers can still feel safe about their data.
[00:04:39.035] Jessica Outlaw: I think one of the things that I got from today is identifying where my blind spots are about privacy. And so there are some people here who are experts from the medical field and from many other aspects of tech who are very, very good at foresight and imagining these adverse outcomes. And I think they went to places, using their expertise, that highlighted how dangerous the dissemination of data can be, and keeping data in a centralized database that links your virtual identity and your real-life identity.
[00:05:10.711] Philip Rosedale: I thought this was inspiring because there was a general consensus on really two things. One, the possibility that VR and the general use of VR outside of specific applications is going to become something very soon. I was delighted by how much everybody in the room happily, I think, agreed this was the case. And then the second thing was just a general concern and thoughtful treatment of privacy, and I think a general agreement that its implications are very, very important for us as technologists to be contemplating right now.
[00:05:45.098] Kent Bye: Yeah, I think one of the things that I also took away was this idea of the institutional review board that we have from academic studies, but having something equivalent for privacy. I felt like that was one of the ideas that was like, hey, what would it look like to actually have some sort of independent review along those lines? But also, just how much data can be gathered to be able to identify people, whether they're moving in different ways, how they're moving their bodies. And so this idea of incognito within a 2D world kind of goes away once you're in a virtual reality world, because you're broadcasting your gait information, how you're moving, and soon more about your facial movements, all sorts of data that's going to be able to be extrapolated and potentially personally identifiable. And what happens to that biometric data I think is probably one of the biggest open questions for myself in terms of what are the ethics around recording it, what are the implications, especially if that data is in the hands of bad actors, what could happen, what are the worst-case scenarios, going through different sci-fi scenarios. But maybe you could talk a bit about what's the state of the art in terms of what is personally identifiable information within a virtual reality context?
[00:06:47.925] Jeremy Bailenson: First, I'm really torn by this. We did Chatham House Rules today, which is we were allowed to talk about the ideas, but not who said them. But I want to make clear that the brilliant idea of the Institutional Review Board for privacy agreements wasn't ours. And I'm so torn because I want to give credit to the genius that came up with it, because I hadn't heard it before today, but we'll let that person decide to take credit for it. It's a really inspiring idea to apply from the medical field and the psychological field. You know, it's a lot to ask for a board to regulate all companies' privacy and their activities. It's not a lot to ask to have these agreements, where there's a small number of them, be vetted by an organization that's going to make it transparent to consumers, that's going to make it understandable, that's going to make it kind of fair and win-win. So I love that idea.
[00:07:31.570] Jessica Outlaw: And I think we should explain a little bit more about the idea behind the Institutional Review Board. So one of the main ideas is that it's prospective. And companies would have to state in advance not only that they're collecting the data, but how they're going to use it. And then if they're going to keep it, and they decide they want to use it for something else later on, they would have to go back to the Institutional Review Board and identify, like, is this an OK thing to do? Or do we actually need to contact these people and re-consent them for this additional usage? So this whole idea of having all the work done prospectively about using data would actually safeguard users and make sure that if a company is sold and the data goes somewhere else, the new company can't just take the data and run with it.
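To make the prospective-review process Jessica describes a bit more concrete, here is a minimal sketch, assuming a hypothetical application-side registry that refuses any use of stored data whose purpose was not approved up front; the class, method names, and dataset/purpose labels are all illustrative assumptions, not anything specified at the summit.

```python
# Hypothetical sketch of prospective review: every purpose for a dataset must be
# approved before the data can be used, and any new purpose is blocked until it
# goes back for review (and, if needed, users are re-consented). Names are
# illustrative assumptions, not a real API.

class ReviewBoardError(Exception):
    """Raised when data is about to be used for a purpose that was never approved."""

class DataUseRegistry:
    def __init__(self):
        # dataset name -> set of purposes approved in advance
        self.approved_uses = {}

    def approve(self, dataset, purpose):
        """Record a purpose only after an (external) review-board decision."""
        self.approved_uses.setdefault(dataset, set()).add(purpose)

    def check(self, dataset, purpose):
        """Block any use whose purpose was not prospectively approved."""
        if purpose not in self.approved_uses.get(dataset, set()):
            raise ReviewBoardError(
                f"Use of '{dataset}' for '{purpose}' was not prospectively approved; "
                "submit it for review and re-consent affected users first."
            )

# Example usage
registry = DataUseRegistry()
registry.approve("gaze_traces", "foveated_rendering")
registry.check("gaze_traces", "foveated_rendering")   # allowed
# registry.check("gaze_traces", "ad_targeting")       # would raise ReviewBoardError
```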
[00:08:17.460] Philip Rosedale: I would add that I think a general recognition and concern about the richness of biometric data overall was a key point in our discussions and again there was a remarkable degree of both knowledge and thoughtfulness and then general agreement that perhaps in a manner akin to medical data, this information which we're going to be swimming in in virtual environments is an important thing to be thinking about how we protect.
[00:08:41.193] Kent Bye: Yeah, one of the things that was fun in the afternoon was doing this almost science-fiction writing in terms of different disaster scenarios of how all this could go wrong, and then to do the antidote of how to actually put in some practices and implementations to prevent that. But it kind of felt like the writers' room for Black Mirror, with a bunch of people projecting out into the near future to come up with some scenarios that are painting out some of the risks. And so I'm just curious, what are some of the takeaways from that? Because it generated a lot of good discussion, almost a narrative frame to be able to look at some of these issues, but also a grounding that allowed us to start to see what the potential future is holding with some of these implications for the future of privacy in virtual reality and augmented reality. I'm just curious to hear some of your takeaways from that.
[00:09:23.868] Jessica Outlaw: So one thing that was a theme across many of the anecdotes was people chose to write disaster scenarios about children. So it was imagining a future where people were growing up and had no real ability to consent to systems where their data was being used. And then in the future, those young people had reputational consequences, loss of opportunities, loss of access, loss of freedom. And I think that was a really strong theme across all of the different groups.
[00:09:52.563] Philip Rosedale: You know, another idea we talked about was the idea that some of the powers granted to us by VR and AR devices in the future will be so great as to be a matter of human right or, you know, insurance or a medical requirement, which I think was fascinating that we are entering into a time where we will so amplify ourselves with these devices that we've got to all think about how we equitably distribute that. And we don't cause addiction and we don't remove something that's been granted because it is almost a part of your body.
[00:10:23.276] Jeremy Bailenson: You know, one of the ways I prepared for this conference was reading every paper that's ever been done where we look at the nonverbal data tracked inside VR and associate it with an outcome. The outcome could be, did you learn? What's your identity? It could be, what's your mental state at the time? What's your emotion? So we reviewed many, many studies that show that you can predict very accurately who someone is, what they're thinking, based on how they move in VR. One of the solutions to this quandary, which is, what are the implications of having all this nonverbal data that tells companies and governments so much about me that I hadn't thought of until today, is, you know, it turns out taking your name off data doesn't help. We can infer who you are from all sorts of things, whether it's your zip code or your IP address. Removing your name doesn't help. And the neat thing about today is that we had companies here where you can store data for a short amount of time, maybe that's 10 seconds, maybe it's only a tenth of a second, enough for the advertisers to glean, for example, did you show an interest in this part of the scene, where they can still win selling advertising, but your data are recorded only for a nanosecond or for some chunk of milliseconds. And hence your privacy is protected, but the companies can still make money.
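As a rough illustration of the ephemeral-retention approach Jeremy describes, here is a minimal sketch, assuming a hypothetical on-device aggregator that keeps raw gaze samples only inside a short rolling window and exposes nothing but coarse interest counts per scene region; the class name, window length, and region labels are illustrative assumptions, not anything a company at the summit described.

```python
# Hypothetical sketch of ephemeral retention: raw gaze samples survive only for a
# fraction of a second, and only coarse per-region interest counts are kept.
# All names and thresholds here are illustrative assumptions.

import time
from collections import defaultdict, deque

class EphemeralGazeAggregator:
    def __init__(self, retention_seconds=0.1):
        self.retention = retention_seconds
        self.window = deque()                  # (timestamp, region) raw samples
        self.dwell_counts = defaultdict(int)   # region -> aggregate sample count

    def add_sample(self, region, timestamp=None):
        """Record one gaze sample, then immediately purge anything outside the window."""
        now = time.time() if timestamp is None else timestamp
        self.window.append((now, region))
        self.dwell_counts[region] += 1
        while self.window and now - self.window[0][0] > self.retention:
            self.window.popleft()              # raw, potentially identifying data is dropped

    def interest_report(self):
        """Only these coarse, non-time-resolved counts would ever leave the device."""
        return dict(self.dwell_counts)

# Example usage
agg = EphemeralGazeAggregator()
agg.add_sample("storefront_ad")
agg.add_sample("storefront_ad")
agg.add_sample("scene_background")
print(agg.interest_report())  # e.g. {'storefront_ad': 2, 'scene_background': 1}
```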
[00:11:31.184] Kent Bye: There are different dimensions of the underlying economic models of how everything is going to be run here. And there are financial incentives that I think go against user privacy, at least how things are done now. But there's not necessarily something that exists that can replace it. So we're kind of in a situation where, despite all the best efforts to come up with the highest levels of ethics and privacy, there's a bit of what happens pragmatically in terms of how this actually shifts from here going forward. So I don't know if you have any thoughts on where this could go given those economic incentives.
[00:12:11.328] Philip Rosedale: Faced with a moment where VR perhaps is becoming real, we're at the same moment that we were with the internet, where we have to make really big structural decisions around the mechanisms by which we will all monetize or, even more broadly, keep the whole thing running. And again, there was just a broad sensibility and agreement that we need to contemplate, as we did or perhaps did not with the internet, the difference between business models that are driven by collecting data and business models that are driven by paying for things. And so I think a very powerful discussion, which was again, you know, just kind of across the board thoughtful and had a lot of people engaged in it, was this difference between, you know, should VR be something where you pay for things, or something where you kind of give up your data for free and then it's kind of used against you. And I think, again, there was a pretty great conversation and a lot of belief that you should be able to pay for things.
[00:13:04.942] Jessica Outlaw: I would also add that it's not either-or. I think there are solutions that are going to come out of this that still allow companies to use data to develop great products, that can create opportunities for them to monetize, and that protect user privacy. I don't think this is an either-or situation. If you're thoughtful, if you're prospective about it, then you can set up a win-win situation.
[00:13:30.636] Kent Bye: Some other things that were mentioned around monetization were doing some sort of basic attention token, or things that are happening in real time rather than collecting and aggregating data. It's this idea of real-time interaction rather than warehousing data that you may or may not use, because there are lots of risks that happen once you start to get into the business of storing data.
[00:13:47.591] Jessica Outlaw: Right. And then another aspect is that companies are very concerned about creating great products, but companies also have a reputation to protect. And so if companies create a reputation where they protect user privacy, that would probably draw better employees to them. I mean, I think we had a really recent example in tech where employees were walking off the job because they were really unhappy when they learned about how the company was handling sexual harassment charges. And so there are other ways to incentivize companies to have good behavior besides just free market data collection and usage.
[00:14:25.202] Kent Bye: All right, so what's next with all of this happening with the VR Privacy Summit? We had a gathering here. There was a bunch of brainstorming. What are the next steps, and where do you see this going from here?
[00:14:35.474] Jeremy Bailenson: We are going to write a series of pieces. Philip, I believe, is going to write a blog post. But we are going to produce a set of guidelines that will help companies figure out perhaps what they should do in regards to privacy. The long-term goal, which is not going to happen in the next day or two, is a template agreement where companies can choose to model portions of it.
[00:14:57.156] Philip Rosedale: I think it's just exciting, we're so early, to have met so many people at such an early moment in an emerging industry and have everybody recognize that these are important issues to discuss. So the next steps, I think, for me will be, we will certainly write about this, but I suspect that this meeting will be remembered as one where it set us up for subsequent meetings where we all did get together and much more thoughtfully consider the future of VR, perhaps, than we did the future of the internet the first time around.
[00:15:29.504] Jessica Outlaw: So there was the workshop that I ran in the afternoon where people had to identify risks and identify solutions, and already some people have come up to me afterwards saying that they're going to repeat the exercise when they go back to their companies and run it internally. And so I'm hoping that, you know, we can disseminate that process, and more and more companies can think for themselves, this is the right solution for my product. And it will actually make the industry as a whole better.
[00:15:57.394] Kent Bye: Great. Is there anything else that's left unsaid that anyone would like to say?
[00:16:02.295] Jeremy Bailenson: Kent, thanks for doing your podcast. I always love listening to it.
[00:16:05.295] Philip Rosedale: It was great to have you helping us through the day as well. Fantastic.
[00:16:09.336] Jessica Outlaw: Everybody should give money to Kent's Patreon.
[00:16:12.928] Kent Bye: Awesome. Thank you so much. Overall, I do think that this was a huge turning point within the conversation of privacy within immersive technologies, and Philip's going to be writing up some of the big takeaways and seeing what might be next, whether there's actually going to be something like an institutional review board that starts to get formed, and to just build more momentum and be able to have the next conversations, which I think are going to be a little bit more focused. I think this conversation was to get a lot of information as to what is even known and possible. The first half of the day was people giving a number of different talks. I specifically talked about some of the different highlights from my coverage. And so the big point that I personally was making was that, talking to a behavioral neuroscientist, John Burkhart, the way he put it to me was that there's going to be an unknown ethical threshold between predicting behavior and then controlling behavior. And I think we saw that with Cambridge Analytica with the information that got out. We have this concept that whatever data we're giving over to these companies is just going to be in a secure, safe vault forever, and that it's only them that's going to have access to it. But the way that technology and security works is that sometimes there are going to be security vulnerabilities. And so in some ways, you have to think about, well, what are the worst-case scenarios if all of this information gets out onto the dark web? And what could you possibly start to do with that? And I think that when it comes to biometric data, to me, it represents this possibility of starting to create these really sophisticated and robust psychographic profiles on people so that they could be targeted or manipulated in specific ways. So in talking to Sarah Downey, one of her recommendations was to just not record data that you don't need. I think in today's tech world, there's this sense of not being sure what the business model is actually going to be, and so there's this mindset of capturing as much data as you possibly can. However, from a legal perspective, under the third-party doctrine, anything that you give over to a third party has no reasonable expectation to remain private, which means that if, collectively as a society, we're starting to give more and more of this data over, that weakens the legal definition of what is reasonably expected to be private from a Fourth Amendment perspective. That has implications when it comes to being able to deal with authoritarian governments, but also what kind of laws are passed to allow other companies to have access to some of that data. So if there's a weakening of the Fourth Amendment, then there's a connection to the First Amendment, which is that you may not feel as willing to speak openly and freely if you know that there are going to be things that are going to be recorded. And it's even going to get to the point where you start to read pupil dilation, or galvanic skin response, or emotional facial expressions. There are going to be all sorts of unconscious tells, where we have control over what we say, but we don't have complete control over what happens to our body when we're given different stimuli. What that means is that we could start to be receiving the stimuli, and then our body is going to actually tell more information than we're actually willing to disclose ourselves.
And so there's this whole dimension of consent that gets really blurry when you're talking about unconscious data, where you may not actually be completely consciously aware of what you're saying with your body. And providing companies or authoritarian governments with access to this information could paint a lot of really crazy Black Mirror scenarios. But there may actually be some use cases for being able to capture this data for your own self-growth, or if you're trying to monitor different medical applications. And so there's this concept of self-sovereign identity, where the data is actually being stored by yourself on your own device, and then what does that start to look like? But if the third-party doctrine is changed, or if there are different legal ways of addressing that, I think it's an important thing to think about, because without addressing that, all of this conversation then turns into the point that any data you're giving over to these third parties doesn't have any privacy protections at all. And finally, the major point that I was making was just that all of this is being talked about in the context of surveillance capitalism, in that, in a lot of ways, until other alternative business models come up, there's always going to be this inherent tension between the economic incentives of a company and the different privacy policies that are actually implemented. Despite all of the best efforts in the spirit of something like the GDPR, if there's going to be a fine that is a small fraction of the amount of profits they're going to make for violating the spirit of the law, then there's very little economic incentive to actually follow the spirit of what the GDPR is setting out. And so, in a lot of ways, it comes down both to the ethics and to the underlying economic incentives of that company. And so I actually don't have an alternative for being able to provide universal access to all the world's knowledge to everybody in the world at the scale that Google is currently doing. And so I think that's one of the challenges: to figure out some of these new innovative business models that may be able to allow some people to pay and have more access, but to still be able to sustain these business models without resorting to something like surveillance capitalism. So for the second half of the day, I just want to give a huge shout-out to Jessica Outlaw, because she led everybody through this process in which we started to walk through some of the worst-case scenarios that could possibly happen with all of this data. And it honestly felt like the writers' room of Black Mirror, both creating these different scenarios, but also listening to what other people were talking about. But then after that, we swapped the different stories and scenarios and had to try to come up with different solutions: looking at this possibility, then what does that mean in terms of what we actually need to implement? And so, from that, we were able to brainstorm different ideas for what would be some of the best practices that people would like to have and see within the context of privacy within immersive technologies. And so I'm just going to read some of the top suggestions, just because I think there were a lot of good ideas. And this is in a rough ranked order of what people thought were some of the most important concepts and ideas.
First of all, there was a lot of support for the institutional review board for privacy policies, to make sure that there's informed consent. Having limited storage of biometric data: I was personally advocating for no storage, but whether it's reducing what type of biometric data is recorded or having a very small window of real-time processing on it, people recognized that having some sort of limitations or constraints around the biometric data was important, even if what that actually looks like was sort of ambiguous and not well-defined in the moment. Being more explicit about what data are collected and why: I think there's a lot of data that are collected, and there's just a data hoarding that's happening, without really necessarily knowing what it's going to be used for in the future. And so I was advocating for, well, just don't hoard the data if you don't need it. It's a popular idea; I don't think it's actually very pragmatic. I think a lot of people want to record all the data and figure it out later. That's just how a lot of startups and companies have to figure out how they're actually going to survive; it's easier for them to have the data than not have it. Having an anti-oppressive lens that considers real-world status and is conscious of who the vulnerable populations are: just noticing that sometimes there may be different minorities, or being aware of gender identity and expression or different religious beliefs. Sometimes there could be culturally sensitive aspects of someone's identity, and just being aware of how you can actually protect those different dimensions of someone's identity, because it could be putting them into life-threatening situations if information gets out. Being able to opt in and share data, not opt out: so you have to explicitly consent that you want to share this information rather than, you know, by default going out of your way to have to turn settings off. The default privacy settings were a huge thing. Having some sort of industry association to create standard policy templates: is there going to be some sort of best-practice terms of service, so that if you're a small company and you want to be on the good side of privacy in VR, you know what a templated standard privacy policy might look like? That was a popular idea. Being able to follow where the data goes: there's actually very little accountability for what data are collected and what's happening to it. Having some sort of accountability mechanisms when the rules are violated: again, very unspecific as to what that actually looks like, and I think it's a huge part, because if some of these accountability institutions don't actually have any teeth, then what is the utility of having them? So thinking about that up front I think is important. People wanted the companies to follow the spirit of the GDPR, not just the letter. I think the reality is that there are a lot of loopholes to get around the GDPR, and as long as there are economic incentives, there are going to be companies that are trying to get around the deepest intention of what the GDPR is advocating for. Having the right to destroy data: so if there are companies that do have biometric data or any other data, then having the option for you to delete it and to have them actually delete it from their servers. Investigate alternative business models: I think this is a key thing.
I mean, in a lot of ways, until some of the economic and financial issues are really fully fleshed out and figured out, there are always going to be economic incentives that are diametrically opposed to the interest of privacy. Having some sort of privacy certification: if we're going to continue to come up with some of these best practices, then is there going to be some sort of program that people can go and get certified in and be able to actually implement within their companies? The disallowing of the future use of data: I think it's a good idea; it's going to be actually very difficult to implement. It's basically the concept of, don't hoard the data thinking that you're going to figure out something to do with it later, and then are people really consenting to that if you don't know what you're going to do with it later? It's sort of an ambiguous issue, but the bottom line is we'd love for companies to be really intentional about what they're recording and why. Having end-to-end encryption on all the data, just to make sure that if the data are sensitive, there are additional ways to protect them. Having industry awards for good privacy practices: just trying to create a positive association for companies that are going above and beyond the best practices for privacy and recognizing that, so that if you're trying to find good models for who to follow, that could be a way to start to surface that. Just being able to learn from the medical field and how data ownership is handled there, because oftentimes you supposedly own all of your data, but even in the medical context, sometimes there's a shared ownership where you have access to the data, but the actual medical institution owns the container to be able to manage that data. And so yeah, data ownership, even within the medical context, hasn't been fully figured out, and I think there may be some lessons to learn from how that exists in the medical field, and maybe we can do it a little bit differently so that we can have clearer guidelines as to, if it's your data, then do you own it, and what can you do with it? Having some sort of disclosure of what is paid advertising, having the companies declare what kind of de-identified data are collected, and having some sort of updated ledger. I found that in the case of Facebook, any information that's connected to you as an individual has to be reported under the GDPR regulations, and they are providing that, but if there's de-identified data, then there's no obligation for them to share what is being collected and at what frequency. And I think in the future, if we start to look at the threats, whether it's AI bots in virtual reality being able to capture this biometric information that's already being broadcast out, just being able to do that would start to unlock some of the biometric signatures and potentially connect the behavior from de-identified data back to that individual person. But it's also an open question as to how de-identified it actually is: are there going to be biometric signatures within this de-identified data that could potentially be unlocked by some sort of AI algorithm, or correlated with other data, such that this information that is claiming to be de-identified could actually end up being personally identifiable?
There's a request to have addressable data. We talked about defining the optimal default privacy settings because only about 4% people end up changing their default privacy settings. And so like 96% of the people are going to be working with those default privacy settings. And so just trying to figure out what some of those optimal default privacy settings should be for most companies so that if you do want to use some of that data, then having people opt into that with their explicit consent, rather than automatically capturing all that data. So if a company has been gathering all sorts of information and data and they get sold, then what happens to that data? So should there be a clause in the terms of service that is asking for each of those users to reconsent that this new company and entity has the right to be able to have ownership of that data? That there should be some explicit permissions to be able to read, write, and share data. And then finally, just having AR headsets signaling if they're recording data. So those were some of the big suggestions that came out of the brainstorming session at the VR Privacy Summit. I think the big takeaway for me of the gathering was that it's just an opportunity to get a cross-disciplinary set of people from across the industry, from some of the biggest companies in the industry, and to start to build trust in having these dialogues and discussions about some of these issues and to lay out what's known about them but also some of the risks and some strategies for being able to figure out what we can take from the medical field, what we can take from the open web and security and what we know about the risks and possibilities for this biometric data and taking into consideration some of the methods of accountability like this institutional review board or to help build trust with the audience as well. Like Philip said, this is the first gathering, but it's likely not going to be the last. There's going to be more gatherings and more people coming together. And in terms of what's next, it seems like this gathering was certainly a success in terms of bringing people together and building the momentum to meet again, to potentially be a little bit more refined and intentional as to what the next steps may be. I think it's a little bit unknown as to where to go from here, but I think that it's clear that there's enough people that are becoming more aware of some of these sensitive issues and that there's enough amazing potentials of what can possibly happen with this data. It's going to help drive and motivate different members that participated in the summit to be able to try to figure out how to come up with the best practices for how to best navigate this new realm of data from these immersive technologies. So that's all that I have for today and I just wanted to take a moment to thank my Patreons. I wouldn't be able to do the Voices of VR podcast without my support from Patreon and one of the things that was made clear to me at the VR Privacy Summit was just how much the work that I've been doing has been building awareness within the larger community to be able to take it to the next step to have meetings like this at the VR Privacy Summit and The coverage that I did here at the VR Privacy Summit, I hope, is going to help to build even more awareness, to broaden the coalition, to bring in more people, more experts. 
There's a lot of education that has to happen about these issues, both within the companies within the VR industry, but also to some of these non-profit privacy organizations that are still not up to speed with some of these issues within the VR industry. because VR is still small it hasn't been necessarily on their radar but hopefully by having this gathering and both do this coverage for my podcast as well as the reports that the organizers are going to be putting together will help to build a broader awareness to help go to the next step which I think is to really think about some of these issues and to see like what is needed in terms of coming up with the default privacy policy. So the work that I'm doing here on the voices of VR podcast is making a difference in the VR industry And I need your support to be able to both continue it and to grow it So you can donate today at patreon.com slash voices of VR. Thanks for listening