#716: VR Privacy Summit: Medical Insights into VR Privacy + Health Benefits of Biometric Data

The medical professionals at the VR Privacy Summit have a lot of interesting insights into how they navigate issues around informed consent in medical research through Institutional Review Boards, and perhaps privacy policies for companies need a similar independent agency that advocates on behalf of consumer privacy. I had a chance to talk to UCSF's Adam Gazzaley, Stanford's Walter Greenleaf, and the National Institutes of Health's Susan Persky about what the medical profession can teach consumer VR technology companies about navigating privacy issues.

The medical profession in the United States operates in a specific context where HIPAA (the Health Insurance Portability and Accountability Act of 1996) protects medical information as private. What does it mean for consumer technology to gather biometric data that would normally be protected as medical information, but that is now shifting into a consumer context where it may lose legal privacy protections through the third-party doctrine?

We talk about all of the amazing health and healing applications for virtual reality technologies, but also some of the potential risks of making this data available in different contexts. The overwhelming takeaway that I have from these conversations is that the potential benefits could far outweigh the potential risks, and that it's worth exploring how to create private and safe contexts to use virtual reality technologies to their full potential, but there are many open questions about how to find a balance between these risks and benefits.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Photo: System Lock – Yuri Samoilov Creative Commons Attribution 3.0

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So on November 8th, 2018, the VR Privacy Summit happened at Stanford University, and it was a cross-disciplinary gathering of people from the different VR technology industries, from academics to medical professionals and researchers. And the topics spanned different dimensions of law, different dimensions of the economics of what it even means to run a technology company, as well as some of the specific details around biometric data and what it can actually do. When it comes to biometric data, this is something that medical professionals deal with all the time, and it has specific HIPAA regulations that try to maintain the integrity of the privacy of that data. But they also have these processes of informed consent within the medical community, and these institutional review boards that are coming in and actually reviewing these different studies, trying to eliminate bias and to make sure that the people participating in these experiments are actually giving their full informed consent for what's actually happening. And so I think one of the big takeaways from the VR Privacy Summit for me and everybody there was just to see how much we can learn from the medical community in terms of looking at some of these best practices for privacy and how we could start to apply those insights to virtual reality technologies and biometric data, and that there are just so many possibilities for how this type of biometric data could have some amazing healing effects. And I would say that there are also quite a number of different risks, especially when you switch context from a medical context into something that is using this data for business models of surveillance capitalism.
So I had a chance to talk to three medical professionals who were there at the VR Privacy Summit: Adam Gazzaley from UCSF, Walter Greenleaf from Stanford, and Susan Persky from the National Institutes of Health. So that's what we'll be covering on today's episode of the Voices of VR podcast. So this interview with Adam, Walter, and Susan happened on Thursday, November 8th, 2018 at the VR Privacy Summit at Stanford University in Palo Alto, California. So with that, let's go ahead and

[00:02:15.408] Adam Gazzaley: Dive right in. My name is Adam Gazzaley. I'm a professor at UCSF. I direct a research and technology development center called Neuroscape, and we develop non-invasive technologies to better assess and help improve people's cognitive abilities.

[00:02:33.762] Walter Greenleaf: I'm Walter Greenleaf. I'm a research neuroscientist here at Stanford, but I also work with a lot of emerging technology companies to help translate their basic research and ideas out into the product stream. I might work with a group that's doing fundamental research and help them figure out how to turn it into a product and then bring it out for validation.

[00:02:54.228] Susan Persky: So I'm Susan Persky. I'm a behavioral scientist at the National Human Genome Research Institute at the National Institutes of Health. And I direct a VR-based research lab called the Immersive Virtual Environment Testing Area, where we use virtual reality as a platform to conduct research in health and medicine.

[00:03:10.402] Kent Bye: Great. So we just were at the VR Privacy Summit here at Stanford. And I think that one of the themes that came up was how similar the medical field has been, I guess, on the bleeding edge of trying to figure out a lot of these privacy issues with HIPAA and other best practices. And so what would you summarize as sort of the insights that, from the medical perspective, could be applied to different aspects of either biometric data or privacy within virtual reality?

[00:03:38.143] Adam Gazzaley: Well, one thing we discussed today: as a medical researcher for many years, when we want to conduct an experiment, a study, we write it up and propose it to our Institutional Review Board, IRB, which is there to protect the research participants, so that they're being consented in an unbiased way, so that the research is conducted without influence, so that everything is transparent, so that they understand their rights, their ownership, what is going to be done with the data. It's our bread and butter. It's how everything is done. And it was interesting today to think about how an IRB of a different sort might apply that same skill set of very systematic oversight of the consenting and disclosure procedures to a technology platform like VR.

[00:04:25.565] Walter Greenleaf: I think Adam's point is really well taken. I think the medical and clinical and research community has strived very hard over the last few decades to get it right, to balance the need to do research and take data and understand its value and how to personalize medicine, but also how to balance privacy concerns. So the lessons learned from those hard-won battles I think should be transferred, and probably will be transferred, out to the whole issue of how interactive media and virtual environments are going to be part of our world. I think the other reason that it's really good that there's a dialogue going on with members of the research and medical community here is that there are tremendous implications of VR and AR technology for affecting our cognitive processes, both for assessments and interventions, and many other areas of health care. So a big part of VR/AR technology is going to be going into the medical enterprise. So we have to have the dialogue to protect patient privacy and yet leverage all the rich data that we're going to get to help people with some very challenging problems such as autism, such as Asperger's, such as addiction, such as anxiety disorders, such as depression. So it's such a healthy thing that we're all here together talking with each other as a group.

[00:05:39.894] Susan Persky: Yeah, I mean, I think that's also true. And I think something else we also think about in the medical arena is, you know, what kinds of data should we be thinking about as sort of special, you know, as having properties that we need to give sort of special considerations to. And so we kind of think about that in terms of genetics and genomics data. You know, we certainly think about that in terms of medical data in general. And now with all of the sort of tracking points and the behavioral data you can get out of VR, that is really unique. I think it's kind of cool that we're using medicine as a framework for that because those are two kinds of data that are really unique and special for a lot of reasons.

[00:06:12.145] Kent Bye: Well, going to what used to be called the Neurogaming Conference, and is now called the Experiential Technology Conference, for the last two or three years, there are a lot of medical people who are looking at this biometric data and using it within a medical context. But what I see happening is that a lot of these same technologies are now changing context into more of a consumer context, with what are essentially private companies that could be collecting information that would normally be considered medical information, this biometric data that normally would be under the HIPAA regulations. And so how do we navigate this? Should this data be treated as sensitive private medical information that would be regulated by something like HIPAA?

[00:06:55.352] Adam Gazzaley: I mean, I think it is a complicated time for the reasons that you stated: technologies are allowing us to access data, not just physiological and functional data like we can with consumer-facing technologies, but genetic data as well, data that were at one point in history, not very long ago, accessible only by a visit to a physician's office or in some research study, and now they are consumerized. And it's created a tension in academics and in industry that I see all the time between not just the privacy of the data, which is an important one, but the degree of validation that's required for something to be delivered to the public. When it is clear that there is a clinical indication that's being addressed, it's not ambiguous. If you want to treat someone with Alzheimer's disease and you want to make that claim, you need FDA approval. It's not very blurry there. But if you want to help someone's memory, then there is a lot more leeway. And without the FDA regulatory approval required to make a claim that's general, like helping memory or attention or well-being, there's often a complete lack of validation, which is not great. So now I feel companies are trying to understand, with all these amazing, accessible, and sometimes sensitive (sometimes it's unclear how sensitive they are) tools to collect data, how they become either successful as a healthcare company or as a consumer company, and it is just very complicated.

[00:08:31.866] Walter Greenleaf: Adam's absolutely right. The continents are colliding. The large tech companies are jumping into health care, and they're going to be major players in a variety of ways. And as they team with medical device companies, the pharma companies, and the startups that are coming up with the cool technology, the world's going to change. And that line between consumer health and wellness and clinical care is changing, because the point of care is becoming where the person is right now, instead of where the clinic is or where the hospital is or where the expert is. And because of that, things are just going to be so dramatically different. So this issue of sensitive information, yes, we have to grapple with that. But the good news is that we have new tools to leverage that data to help improve health and wellness for people. I think we'll get on top of it. The FDA is adapting its way of regulating. The research groups are embracing technology to help improve research. And I hope that the consumer technology companies are listening and trying to learn and meet us halfway. I think this meeting that we're at now is a good example of how that is happening.

[00:09:32.623] Susan Persky: Yeah, I mean, I think a lot of the issues that we're facing in VR are similar issues to what we're facing in a lot of other technologies in this arena in particular. I mean, HIPAA is fairly narrow. A lot of the medical and health-facing VR and apps and so on, you know, it's really outside of the health care system. And I agree. I think we're going to have to be encompassing of everything as we start to think about privacy and security and of all the data that we're going to be able to collect on individuals. And individuals are collecting on themselves as well.

[00:10:03.110] Kent Bye: Well, I'm curious to pose the question around the implications of biometric data. The way it was told to me is that in some ways having access to eye tracking data, eventually EEG, galvanic skin response, emotional expression on your face, your heart rate variability, all of these things added together in some ways gives you a map of your emotional experiences, your unconscious reactions, especially if you're correlating it to what you're actually seeing in a virtual reality experience. You basically have this Rosetta Stone to the unconscious psyche, is what I see. To me, that's very dangerous to just start to record and put out there, where it could potentially get into the hands of a bad actor who could use it for manipulating people in some sort of experiential warfare. But I'm curious to hear from your perspective the risks of that, but also what you see as possible with adding together all of these different aspects of biometric data.

[00:10:54.957] Adam Gazzaley: Well, I'd say what you described is, to me, the most exciting thing in all of technology right now. And I just want to create a sort of dichotomy. We have data that's more static, like your genomics, your genetics, and other data that are not changing quickly, like your structural images. I mean, they change, but much more slowly. And then you have data that's incredibly dynamic and functional, like your emotional responses, like your attention and your memory, which is constantly changing. The way that you sample those is not in isolation. You sample those in the context of an experience. And so what makes VR so exciting in this particular domain is that you could stimulate all the senses in a single synchronized moment in a way that almost could completely, you know, in the future, replicate a real world experience, and then record, as you described, this rich data set across all these different signals, integrate that data together, and really understand someone in the moment in a way that we could never have imagined. Now that is still a future event. So I don't think we're in danger of that happening today. We're working on this diligently in our center at UCSF, and we can't do what you described yet. So we're not there, but to me, this is the opportunity to finally move healthcare, which I like to think of right now as sick care, right? We take care of people at our medical institutions when they are already in need of care. With this type of data, I would hope that we could be much more preventative, that we can understand people in a more nuanced way, in the real world, in real time, not just when they're in a clinic, and then help them help themselves, essentially. Use technology not to replace people. Use AI not as a way of offloading, but, as I like to think of it, AI for HI, for promoting human intelligence.
And I think that is the real promise here, that this closed-loop data, how an experience drives metrics about you that feed back and shape an experience that can be used to improve you, that's the real win that we'll see in our near future.

[00:13:02.572] Walter Greenleaf: And if you take what Adam just very elegantly described and try to look at the question you posed, what's the cost-benefit sort of equation, I think the power to improve health and wellness, not just in a trivial way but in a major way, is so significant that, yes, there are concerns, but with any emerging technology, the automobile, the airplane, the printing press, there have always been people saying, what are the risks, what are the worries about it? And I think we just have to solve it. It's a challenge. But from my viewpoint, we finally have some tools to address some very difficult problems that we haven't been able to address before. And that's so exciting. And I think that will propel us to come up with the solutions to address the potential risks.

[00:13:47.912] Susan Persky: Yeah, I'm thinking too about the Precision Medicine Initiative, now called the All of Us Research Program, where there's this big effort to collect a cohort of a million people and collect ongoing data about people with trackers, with ecological momentary assessment where you ask people questions every so often or in concert with events that happen in people's lives. I think this big data effort is really what we need to try to figure out how we might get this Rosetta Stone. We don't have this Rosetta Stone, right? When we think about biometrics, we have very few actually validated physiological measures that tell us about what somebody's thinking. It's not like we can index every emotion with physiology at this point, or maybe we won't be able to, I don't know. But there are probably some things that these kinds of data are gonna be better for, and some things that they're not really gonna work very well for. It kind of depends on what we do with this information. Are we just going to bank it all? And maybe that would be a little bit more dangerous. But I think the people who, at least I know, who are out there trying to collect all the data to make sense of it, are good actors and are very transparent about risks and benefits. And I think that's really the important thing.

[00:14:55.171] Walter Greenleaf: And something that I think is very important to keep in mind is that we have a crisis going on right now, and that's the fact that we have an aging population. We don't have to do much more than just do the math. The U.S. population has doubled since the 1960s. There's the problem of not having enough caregivers because family sizes are getting smaller. It's not just a U.S. problem, it's a world problem. And if you look at the percentage of people who have a neurodegenerative disease once you get up into your 60s and 70s and 80s, we're going to have an economic crisis, a healthcare crisis, that we don't have many tools to solve. So yes, there are concerns. We have to figure out how to do things in a way that protects people's privacy. But we also have to rush in and embrace this technology to deal with this tsunami that's coming at us. We just have to do the math. We know it's going to be hitting us very soon.

[00:15:44.430] Kent Bye: Well, the big takeaway I take from that is that there's so much positive use in the medical context that it's worth pushing forward. And I guess my concern is the public and private contexts of people capturing and collecting this data, and then having some sort of data breach where it gets into the wrong hands and is put to some malicious use. And so I guess it brings up the question of where the data should live. Is it self-determined identity? Should the patients own the data? There's the data security around that issue, but there are also the differences between the medical context you're talking about and bad actors who want to control people. A behavioral neuroscientist said to me that the line between predicting behavior and controlling behavior starts to have an unknown ethical threshold: once you start to be able to predict behavior, there's a very small jump over to what it means to be able to control behavior. And I think that is the risk I worry about with gathering and recording all this biometric data in the context of a private company, and then either them using it explicitly or it getting into the wrong hands to do some sort of information warfare in the context of the nation.

[00:16:51.008] Adam Gazzaley: I mean, this is why we met here today, right? Like, that sums it all up. We obviously see great potential, or all of the brilliant people that assembled today wouldn't be wasting their time, you know, meeting about this topic. So, you know, we see a future that's valuable. We want it to be a healthy future. I think everyone here appreciates the fact that a lot of that is prospective decision-making: you don't want to just do it because it's cool and exciting and interesting and then apologize in 10 years about all the mistakes you made. That has happened. We are aware of those situations. We're not proud of them. But the only way to potentially correct that is to get out in front of it with a multidisciplinary group of people like we have here, and to challenge each other. What is the black mirror here? Where are we going? I actually found it very hard today (I knew it was important) to talk about things that I do and love and to think about the most horrible outcomes of those. It was emotionally exhausting, but I knew it was critically important to do, and I don't think that's been done enough in the past. So, you know, your points are all well taken. This is the challenge of not just VR, but of all technologies, and the medical world still struggles with these issues. We still constantly debate the ownership of your medical record and your images. It's not like it's solved. I feel like we're a new field that has some uniqueness to it, but in a lot of ways mirrors the same challenges that the technology and the healthcare and the education worlds are also facing.

[00:18:29.184] Walter Greenleaf: You know, I should push back a little bit on some of the concerns that you have. I do think that AR and VR technologies are a powerful tool for shifting behavior, but it's not done just like that. It takes a lot of work. It takes time. I don't think we're going to be easily able to change people's political views or basic personalities or how they view the world using VR and AR technology. Maybe if we combine it with some things that are going to be generated in the future in terms of medications, etc. But right now, I think it takes a lot of work. And the good news is, you know, for people who are dealing with anxiety or with depression or addictions, we have some tools that can help. But it's not like, oh, we have a tool now that's going to be used for propaganda in a major way. I think we have to be vigilant, you know, look at all the issues that are going on out there about, you know, hijacking the news stream and putting false information out there. But I don't think VR and AR technologies are just gonna be used in a negative manner with ease. I think it's gonna take a lot of work to really do bad things with VR and AR technology.

[00:19:32.392] Kent Bye: Yeah, certainly for now, but maybe in the future. But yeah, that's a good point, well taken.

[00:19:36.681] Susan Persky: Right. And I think that's something that, you know, I know from being in the behavioral medicine world: behavior change is really difficult. You know, we always try to get people to change their behavior in terms of, you know, their dietary behavior, their exercise behavior. And it's very, very difficult. And so, yeah, we will be able to maybe get at some of that with VR in ways that we haven't before. But I'm also, like Walter, fairly optimistic that some of the negative behavior changes that we might really worry about, you know, will be a little bit more difficult.

[00:20:04.744] Kent Bye: So I'm curious to hear each of your major highlights or takeaways from today's VR Privacy Summit.

[00:20:11.506] Adam Gazzaley: Well, I feel encouraged by the gathering at this early stage in the field. As I just said, I think that we should have been doing this from the very beginning of the internet and social media. And I don't think that these conversations happen. I don't think that every entrepreneur, researcher, or investor in the space has been sitting in a room and saying, think about the worst possible catastrophe that your exciting company or invention is going to cause. And so I'm encouraged. I mean, it's hard to turn that mirror on yourself, and it's important to. So I don't think it was solved today, but I liked that the questions were being asked. I liked that the group was diverse. What I've observed in multiple stages of my career is that some of the most interesting things happen at the intersections of different fields. I think that, as the medical group here that you're interviewing, we have some fresh perspectives from our own field that could be valuable to be considered by a group that might not know about the IRB and these other parts of our world that are just bread and butter to us. So I'm excited.

[00:21:25.288] Walter Greenleaf: I think there's a lot of work to be done, but I think we're all in agreement here that this has been a fantastic step forward. And I'm optimistic. I think that the dialogue will continue, and I think we can solve these problems. And I think the will is there to solve it. It's not going to be easy, but let's do it. Let's get on with addressing these problems now that we've looked at the worst case and talked about the worst case and also talked about some potential solutions. The next step is implementing that.

[00:21:53.324] Susan Persky: Right. And I'm also very encouraged by what's been going on today. I mean, I came in to the meeting with a charge to talk about medical privacy and HIPAA and things that are admittedly really pretty conservative privacy policies and so on. And actually, I was expecting to get more pushback than I did. I think, you know, those kinds of models were embraced fairly readily. And I think people are really recognizing what kind of data we're dealing with here. and that we really need to start now, probably should have started already, to think about how to protect that data in a really meaningful way.

[00:22:24.711] Kent Bye: Great. And finally, what do you each think is the ultimate potential of virtual and augmented reality, and what it might be able to enable?

[00:22:34.138] Adam Gazzaley: I mean, I think that this is the new medicine, the new education. From my perspective, you know, as a cognitive neuroscientist, I think that this technology has the potential to allow us to impact, in a positive way or a negative way (which is why these discussions are important), every aspect of what makes us human. I really think that that is all on the table right now in a way that we have tried but not succeeded at using molecules as pharmaceuticals or even our education system. I think that this opens up entirely new pathways of positive growth across both the healthy population and those that have impairments.

[00:23:14.600] Walter Greenleaf: I think we're in a new era now that the technology is finally becoming affordable and out there. I think medicine is going to be transformed. As VR and AR technology moves into the enterprise, into how we work and how we interact with each other socially, I think medicine will be one of the biggest and most profound use cases. I think it will change how we do all aspects of medicine: how we do training, how we do preventative medicine, how we do assessments, how we do interventions, and how we follow up and shift our behavior to adhere to what we need to do to be healthy. So it's just incredible, the power of what we're going to be able to do. So I remain optimistic and excited about it, and I think medicine can lead the way on some of the other issues that we have to address here, too, because we're leveraging the fact that medical communities have had to address privacy issues over the last decades.

[00:24:06.442] Susan Persky: And I'll answer that question sort of from a research perspective, because I primarily use VR as a research platform. And, you know, I think now that it has become increasingly affordable, more and more researchers are able to use it to understand social behavioral factors in ways that we never could before, to do really compelling experiments that are very controlled but also have a lot of elements of realism in them and are much more like real life than the sterile lab environment. And I think as VR rolls out more completely into communities, we'll have the ability, rather than bringing people to us, hopefully to push research out into communities and reach underserved research populations that we haven't been able to reach before in centralized community clinics and so on. And so I think there are really, really exciting things down the road for research.

[00:24:57.586] Kent Bye: Awesome. Well, thank you so much for joining today.

[00:24:59.407] Walter Greenleaf: Thank you. Thanks, Kent. Thank you for the good questions.

[00:25:02.330] Susan Persky: Thanks a lot.

[00:25:03.603] Kent Bye: So that was Adam Gazzaley of UCSF, Walter Greenleaf of Stanford, and Susan Persky of the National Institutes of Health. So I have a number of different takeaways from this interview. First of all, it's immediately clear that there are so many just amazing possibilities that are going to be coming from biometric data and what it can do within the medical context. Walter said that it's going to be everything from training to preventative medicine to assessments and interventions and follow-up and shifting behavior to be able to actually adhere to some of these rehabilitation protocols. There's no question that you're going to be able to do some amazing stuff with the medical applications. Adam, in his research at UCSF Neuroscape, is in the process of actually trying to integrate and do this sensor fusion of all of this biometric data and to try to create immersive experiences that are trying to really push the limits of what's even possible with these types of reactive experiences. So they're on this pathway and road of trying to really unlock what I describe as this Rosetta Stone of the psyche. And I think that it's possible. I think they also believe it's possible. We're just not at that point of having that yet. And I think there's this interesting tension when it comes to the balance between the theory and the practice, and what is possible philosophically and what's actually pragmatically real today. And I think part of what was happening at the VR Privacy Summit was trying to actually balance some of those things through this process of trying to project out into the future and imagine these dystopic science fiction Black Mirror scenarios where everything goes absolutely horribly wrong. It was interesting for me to hear that it was actually really difficult for Adam to do that. I have no problem projecting out and coming up with the worst-case scenarios. For me, I just connect the dots and see these pictures.
I appreciated Walter Greenleaf's pushback against me during the context of this interview, because this projecting out into the worst-case scenarios has to be weighed against the potential benefits. There are a lot of potential benefits within these medical contexts, but I'd say the difference is that this is also within the context of these major technology companies, and they don't have the same HIPAA regulations, they don't follow the same protocols, they don't have the same privacy protections. And not only that, but there's a completely different context with the third-party doctrine, which says that any information that you give to a third party has no reasonable expectation to remain private. And cultural behaviors are part of what defines what is reasonable. And if the collective culture is saying, you know what, I'm okay with you recording my eye tracking data and storing it forever, as well as my galvanic skin response, my heart rate variability, all this stuff that is coming from my body, that's then saying to the government and to everybody else and to the lawyers, OK, well, this is fair game for anybody to do whatever they want with, and there are no legal ramifications to be able to stop that from happening. And I think that is the risk that I find the most terrifying. Because, yes, this is in the conversation of a medical context, but as we switch these contexts, it's actually completely eroding the Fourth Amendment to the point where it's going to have a negative impact on the First Amendment. These are difficult legal issues, and I think it actually takes a really nuanced, cross-disciplinary team to be able to, in the moment, address all these various issues and the different risks that are there. But I think that's the loophole in the third-party doctrine: essentially anything that you give to a third party has no Fourth Amendment protections.
There's no reasonable expectation for it to remain private. That's part of the context. The other thing that I would say is that it's not necessarily going to be these propaganda campaigns coming from VR or AR. It could actually be that this data is being recorded and put into some sort of deeper psychographic profile and fed over to serve ads on other properties where the business models are already very robust. These major corporations can start to gather this very intimate biometric data and share it with the other parts of their business; that's already within their privacy policies. The risks aren't necessarily that it's going to immediately come into experiential warfare campaigns that are delivered on VR or AR. I think that's certainly a possibility in the future, but in the short term, it's more about what types of insights they can gain from these types of data and pass over to other dimensions of their business. And so I think that the institutional review board is a really good idea, that maybe there is going to be some sort of independent review board to actually look at some of these policies, and ideally, maybe even start to see what the actual behaviors and practices are, to get that type of accountability. The big question that I would have is, what type of teeth would this institutional review board actually have? Would it mean that they're not going to give their stamp of approval to the privacy policy? In the case of research, the consequence is that you can't do the research. But in the case of a business, they're already doing their business. A lot of times, in the context of a company, if they find that it's cheaper to pay the fine for violating some of these things, then they'll be more than willing to pay that money.
I think the deeper context to all of this is that whatever institutional review board they have, if there are economic and financial interests that go against the interests of privacy, then the interests of privacy are just going to go by the wayside. To me, it comes down to this deeper ethical and moral center of gravity: what are the values that are driving these companies, and are they in alignment with how they're actually making money? Because if they're not, then these companies are always going to find loopholes, whether it's around the GDPR, these different regulations, the fines they're having to pay, or whether or not we create this new institutional review board that's looking at these different privacy policies. Given the deeper economic business models, it's very difficult for me, at least, to look at this overall issue and to see that any of these approaches is going to be viable until it's looked at holistically. This was the first gathering, and overall I'm optimistic that things are moving in the right direction. But the challenging thing is that it's actually very difficult to have conversations with each of these different individual experts and for them to be fully informed about the full holistic picture, in terms of all the legal implications, all the security implications, all of these Black Mirror embodied metaphors of what is actually possible. So the overwhelming thing that I took away from this conversation is that the medical professionals are completely convinced that the benefits far outweigh any of the potential risks, and that they want to move forward and use all this biometric data to gain deeper insights into what it's going to mean for the healing and health benefits and effects. It doesn't make sense to never record any of this data, ever.
I guess what I would say is to really differentiate between the medical context and the consumer technology context, where there are completely different incentive structures. Those incentive structures are all about gathering as much data as you can to come up with these psychographic profiles, to be able to persuade and advertise to you in different ways. I do think that there's a very unknown ethical threshold when it comes to being able to predict and control behavior. Even if it's not happening within the context of VR or AR, the data are going to be able to flow into other applications and potentially start to target ads in a much more directed way. But if the intention of gathering that data is toward your own benefit in health and healing and this quantified self, then I think there are a lot of reasons to push forward. In fact, a lot of those potential benefits are catalyzing people to really want to figure this out and to do it the right way. This VR Privacy Summit that happened at Stanford was really the first gathering to actually get everybody in the room, to start the conversation, to see what else needs to be done, to talk about some of these potential risks, and to build the political will that's necessary to collaborate: to figure out what we can do to actually solve some of these issues, and to find either the best practices or some organizations, like this potential institutional review board for privacy, or other recommendations that can be handed to these different companies. So that's all that I have for today. And I just wanted to, first of all, send a huge shout-out to my Patreon supporters, because, you know, I wouldn't be able to do this type of coverage without support from Patreon.
It allows me to be an independent voice within the VR community, to cover these issues like privacy, to help educate people, and to create the context to have these larger discussions in these larger group gatherings, to get everybody gathered in the same room and to honestly explore some of the implications of where this technology is going, some of the potential amazing benefits of what's possible, as well as the potential risks. So if you're enjoying this coverage and you'd like to see more, then become a member of my Patreon at patreon.com/voicesofvr. Thanks for listening.
