#517: Biometric Data Streams & the Unknown Ethical Threshold of Predicting & Controlling Behavior

I recently attended the Experiential Technology Conference, where there were a lot of companies looking at how to use biometric data to get insights into health & wellness, education for cognitive enhancement, and market research. Over the next couple of years, virtual reality platforms will be integrating more and more of these biometric data streams, and I wanted to learn about what kinds of insights can be extrapolated from them. I talked with behavioral neuroscientist John Burkhardt from iMotions, one of the leading biometric platform-as-a-service companies, about the metrics they can capture from eye tracking, facial tracking, galvanic skin response, EEG, EMG, and ECG.

LISTEN TO THE VOICES OF VR PODCAST

I also talked to Burkhardt about some of the ethical implications of integrating these biometric data streams within an entertainment and consumer VR context. He says that the fields of advertising and brainwashing often borrow from each other's research, and he is specifically concerned about whether or not it'll be possible to hack our fixed action patterns, which are essentially stimulus-response behaviors that could be operating below our conscious awareness. Most of the work that iMotions does happens within the context of controlled research with the explicit consent of participants, but what happens when entire virtual environments can be controlled and manipulated by companies who know more about your unconscious behaviors than you do?

Burkhardt says that there is a threshold beyond which he would consider the capture and use of this biometric data to be unethical, but the problem is that no one really knows where that threshold lies. We might be able to recognize it after it's already been crossed, but it's hard to predict what that looks like or when it might happen. We're not there yet, but the potential is clearly there. An open question is whether or not the VR community is going to take a reactive or proactive approach to it.

Burkhardt also says that these types of issues tend to be resolved by implicit collective consensus, in the sense that we're already tolerating a lot of the cost/benefit tradeoffs of using modern technology. He says that it's just a matter of time before someone creates a way to formulate a unique biometric fingerprint by aggregating these different data streams, and it's an open question as to who should own and control that key. The insights from biometric data streams could also evolve to the point where the big data companies capturing them could not only predict our behavior, but potentially even be able to directly manipulate and control it. He also says that it raises deeper philosophical questions: if someone can take away our free will with the right stimuli, then do we even have it to begin with?

As I covered in my previous podcast with Jim Preston, privacy in VR has many potential utopian or dystopian outcomes, but it's likely to fall somewhere in between, being both complicated and complex. There's a lot of potential for new forms of self-awareness, for being able to observe our autonomic and unconscious internal states of being, as well as for changing the depth and fidelity of social interactions. But there are also risks of this type of data being used to shape and control our behaviors in ways that cross an ethical threshold. It's something that no individual person or company can figure out alone; it's going to require the entire VR community.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So I recently attended the Experiential Technology Conference, and it used to be called the Neurogaming Conference. So they're looking at all sorts of different EEGs and biometric data, and how can you feed that into different types of immersive experiences. Now, I think they're moving away from these real-time interactions and they're looking at biometric data in the context of what can it tell you about your overall health and wellness, but also cognitive enhancement and education. But it's also being used for market research, to look at the emotional reactions or what people are paying attention to. So on today's episode, I talked to a biometrics expert from iMotions. Now, iMotions is a platform as a service to be able to do different types of research where you're wanting to gather all sorts of different biometric data streams and start to integrate them. Now, they're not explicitly a virtual reality company. They're more generally one of the leading biometrics data experts that are looking at this field. So over the next couple of years, we're going to start to see more and more of this type of biometric data being tied into virtual reality experiences. And so I wanted to talk to John Burkhardt, who is a behavioral neuroscientist coming from academia, moving into this business realm. I wanted to just talk to him about all the different biometric data streams and what you can tell from them. And also, looking into the future, what are some of the ethical and privacy implications of starting to tie in some of these biometric data streams that may allow people to not only predict your behavior, but potentially also be able to shape and control it. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by VRLA.
VRLA is the world's largest immersive technology expo with over 100 VR and AR experiences. They'll have tons of panels and workshops where you can learn from industry leaders about the future of entertainment and storytelling. I personally love seeing the latest motion platforms and experiences that I can't see anywhere else. Passes start at $30, but I actually recommend getting the Pro Pass so you can see a lot more demos. VRLA is taking place on April 14th to 15th, so go to virtualrealityla.com and get 15% off by using the promo code VRLA underscore Voices of VR. So this interview with John happened at the Experiential Technology Conference that was happening in San Francisco, California on March 14th and 15th. So with that, let's go ahead and dive right in.

[00:02:50.768] John Burkhardt: My name is John Burkhardt. I like to say I'm a recovering professor of neuroscience. I spent 15 years university side working on Parkinson's and astrocyte networks and joined iMotions about six months ago to take the rigorous application of behavioral methodologies to the business world, where I really think that it needs to be applied far more than it currently is. And iMotions, it turned out, was a serendipitous finding, because that's kind of the same mission that iMotions has. And so what we're doing now is looking at how we can implement rigorous biometric investigations into really any setting that somebody might have a use for, and how we can help facilitate collection of good data with good methodologies. Biometrics, of course, is the heart and soul of it. The company was grounded in eye tracking: looking at where visual attention is, either static or over time, what draws attention, what holds attention. And we also now overlay facial expression analysis to look at emotional response to things; galvanic skin response to look at levels of physiological arousal. We have a lot of use for electroencephalogram, looking at focused attention, approach-avoidance response, cognitive workload. And more and more we're seeing people have interest in EMG and ECG to look at stress response, you know, levels of muscle contraction or cardiac variability, looking at how people respond to stressful events in a physiological capacity, say for an athletic context or some sort of physical performance. And then where the real gold comes in is looking at how all of these different data streams interact with each other, and what we can extract from the integration of all these different data streams as opposed to just the individual ones discretely.

[00:04:31.081] Kent Bye: Yeah, so it sounds like you're able to gather quite a wide range of different data from the body, and I think that in the long trajectory of virtual reality, we'll eventually be taking this as input into virtual experiences and potentially having live feedback, to be able to look at what is happening in your body and actually perhaps change the environment or change the branch of the narrative that you're on. But let's take a step back and look at the types of problems that iMotions is able to solve right now, in terms of people coming to you and needing to, let's say, do a research study based upon seeing how people respond. So getting the objective data of what's happening inside of somebody's body, but being able to extrapolate that into more rigorous scientific research to measure different things. What are the types of questions that companies are asking, and what is iMotions able to do to answer them?

[00:05:23.947] John Burkhardt: Sure. We've heard every conceivable question you can imagine, from, you know, just how can we go about running this study to get published data, to crazy things of how can we figure out the emotion of architecture. It's really limited entirely by the imagination of the person asking it and their ability to apply a rigorous setup and a structural methodology to that question. In terms of what sorts of questions, I mean, something very popular for the business side clients is UX: how are people interacting with a website? Let's just say, for example, that's a very, very popular one. What's drawing the visual attention, and how long does it take them to accomplish a task? Are they looking in the right areas on the screen? How long does it take them to find that? And then how can they use that information to optimize the user experience for their websites? We've also seen a lot with market research: what has emotional resonance when advertising or selling a product? Do certain things create a more aversive response if you include fear elements in them? Does that drive them away, or can you use fear conditioning to drive them towards the solution to that fear? On the academic side, we see everything. We have a lot of people working, call it preclinical, in psychology fields and also in some of the therapeutics. Autism research is very big right now, in terms of looking at how people who fall on the autism spectrum process facial affect and are able to mimic that facial affect themselves or properly choose the correct response to it. Yeah, there's not really any limit to the question. It's just a matter of, is there a data stream that will answer it for you? And then, can you come up with a way that that data stream will answer the question for you?

[00:07:02.217] Kent Bye: And so at the very beginning of this interview, you listed quite a wide range of different data streams. And so I'm wondering if you could step through each of the different inputs that you're able to gather, and then what you can extrapolate from them in terms of emotional state, going through the different things that the iMotions platform specifically does, which seems to be a pretty wide range.

[00:07:22.166] John Burkhardt: Sure. So, as you might guess from the name of our company, where we got our start was with eye tracking. And that's one of the oldest biometric data streams; it goes back decades. The evolution of eye trackers has been fascinating. When I was at the University of Oslo 10 years ago, we had an eye tracker that took up an entire desktop. You had to fix your face in a frame. It cost $50,000, and it would tell you the quadrant of the screen in which your gaze was located. That was it. It was a four-point output. Whereas now you have a $4,000 system from Tobii that attaches magnetically to the front of your monitor and tells you within half a degree of angle where you are on the screen. So the technology has progressed enormously. What does eye tracking tell you? It tells you about visual attention. It tells you what you are looking at, what sequence you are looking at things in, and whether you are lingering on them. What path do you take to get from one thing to another? What you can get from this is exactly that: what's the visual salience? And then, is that visual salience being suppressed or facilitated by anything else? There are two major modalities for doing that right now. We have what's called fixed eye tracking, wherein you sit at a computer terminal, or you can set up on a tripod while you're watching television in a living room setup, something like that. It's for a flat screen or a stationary setting for the person making the observation. There's also mobile eye tracking, which uses a combination of glasses with a webcam: the webcam records what you are seeing, what your view of the world is, and then it overlays eye tracking on top of that. So it essentially comes up with a dynamic representation of your point of attention in space. And that's really powerful for looking at what's your response to things around you.
If, for example, as we're sitting here out on this balcony, all of a sudden there's a noise behind me, and say I'm wearing these mobile eye tracking glasses: see, okay, what's my search pattern? How quickly do I turn around? What do I focus on? Does my search pattern change if it's, say, you know, a gunshot noise versus a car crash noise? So, yeah, you get the visual attention, and then with the mobile eye tracking glasses, you also get behavioral response to different visual stimuli. The next one, facial expression analysis, is becoming very, very popular, largely for two reasons. One, it tells you a hell of a lot. It tells you emotional affect: how do people respond to things you present them with, what has emotional salience? It's something that, I mean, it's always been relevant, but more and more, especially in the research fields, it's becoming something people have realized we haven't paid enough attention to in decades past, and we need to look more at how people respond to things emotionally and not just rationally and logically. It's also very popular because it's very easy to implement. It's the most straightforward of all of our data streams by far. It's a Logitech plug-and-play webcam, and you can use any camera in the world you want, as long as you can get a video recording. So, it's very simple to implement, and what it does is it calculates what are called facial action units. A guy named Paul Ekman created the Facial Action Coding System a while back, which is a system of describing the specific movements of the face. For example, the raising of the eyebrows, or the wrinkling of the nose, or the curling of the lip corners. The facial expression analysis measures all these different action units and reports them, but then it can also extract seven basic human emotions from that, and let me see if I can remember all of them. Anger, surprise, fear, joy, sadness, contempt, and... Disgust. Disgust. Wow, thank you.
Yeah, so that's the seven. And then the individual facial action units are not exclusive to any particular emotion. So it's the combination and the interaction of the different facial action units that give you these emotions. And so these emotions are then calculated from the recorded action units. What does this tell you? This actually can tell you a number of different things, depending on what you're asking and what the context of what you're asking is. Most straightforward, it tells you what somebody's emotional response is to something you present them with. And if everything were that simple, it would be wonderful. The problem is facial expressions are not emotion. They are a proxy for emotion. And there is an enormous social component to facial expressions. So, for example, one of the things you'll see very often, and we've heard this a great deal from some of our business clients who do video ad testing: if you put somebody in front of a monitor and play them a stimulus, they have very, very flat affect. And you can see the same thing. Imagine yourself sitting alone at home watching a comedy on your computer or television. Even if you find it terribly entertaining, you're not going to necessarily emote a great deal of expressions. Whereas, conversely, you can look at the opposite thing, where very often, if you're in a social setting, you find yourself being very expressive, even if you're not feeling a whole lot internally, because even if it's not obvious social pressure, there's social expectation and conformity that happens to extract this facial data from you. And so, one of the things we often overlay on top of that is galvanic skin response. That's the next sensor we get into. Galvanic skin response is the electrical resistance of your skin. Or its variations, really, more precisely: variations in the electrical resistance of your skin.
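The Facial Action Coding System idea Burkhardt describes, where individual action units combine into emotions rather than mapping one-to-one, can be illustrated with a toy lookup. This is a purely hypothetical sketch: the AU patterns below are simplified from Ekman's conventions, and real engines score continuous AU intensities with trained classifiers rather than matching sets.

```python
# Hypothetical illustration of Ekman-style facial action units (AUs)
# combining into basic emotions. Emotions are combinations of AUs,
# never single units, which is the point Burkhardt makes above.
EMOTION_AU_PATTERNS = {
    "joy":      {6, 12},            # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},      # brow raisers + upper lid raiser + jaw drop
    "anger":    {4, 5, 7, 23},      # brow lowerer + lid/lip tighteners
    "sadness":  {1, 4, 15},         # inner brow raiser + lip corner depressor
    "fear":     {1, 2, 4, 5, 20, 26},
    "disgust":  {9, 15},            # nose wrinkler + lip corner depressor
    "contempt": {12, 14},           # unilateral in practice
}

def classify_emotion(active_aus):
    """Return the emotion whose AU pattern best overlaps the active
    units, or None if no pattern is at least half matched."""
    best, best_score = None, 0.0
    for emotion, pattern in EMOTION_AU_PATTERNS.items():
        score = len(active_aus & pattern) / len(pattern)
        if score > best_score:
            best, best_score = emotion, score
    return best if best_score >= 0.5 else None
```

So active units {6, 12}, the cheek raiser plus the lip corner puller, would classify as joy, while an empty or ambiguous set of units returns no emotion at all.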
As your physiological arousal changes, increases, decreases, you have changes in the modulation of sweat across your skin, and sweat contains ions, and therefore the conductivity, the electrical resistance of your skin, changes depending on your physiological arousal. And so this can be overlaid on top of facial expression analysis to look at, one, are people actually feeling something? If you see a really, really big smile but a completely flat GSR response, you say, OK, that's probably affected; that's somebody acting. Or conversely, going back to that watching-a-comedy-solo example, if you see no facial expressions but a big spike in GSR response, you say, OK, they just don't have all the appropriate context around them to emote this. And then in a more ideal situation where you actually do have genuine affect matching up, then you see, okay, how strongly is somebody actually feeling this? And we say, okay, we see a smile and we see a spike in GSR. How big is the smile? How big is the GSR spike? And it gives us a level of the intensity of emotion. Facial expressions can only tell us what the emotion is. GSR can only tell us what the intensity of the emotion is. And that's a very interesting point and a very relevant point for GSR: it can't tell us what the emotion is. If you're only recording GSR and that's your only data stream, you don't know what they're feeling. You only know if they're feeling something strongly, weakly, what have you. That's where the convergence of these data streams becomes very, very important. Moving on from that, electroencephalogram, EEG. As a technique, it's very old. It goes back to, I want to say, the 50s, 60s.
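The convergence logic Burkhardt walks through, with facial expression supplying the kind of emotion and GSR supplying its intensity, can be sketched as a toy decision rule. The threshold values here are hypothetical placeholders, not anything from the iMotions platform.

```python
# Toy fusion of two data streams, per the logic above: the facial
# channel says *what* the emotion is, GSR says *how strongly* it is
# felt, and disagreement between them is itself informative.
def interpret_affect(smile_intensity, gsr_spike,
                     smile_thresh=0.5, gsr_thresh=0.3):
    smiling = smile_intensity >= smile_thresh
    aroused = gsr_spike >= gsr_thresh
    if smiling and not aroused:
        return "likely acted expression (no physiological arousal)"
    if aroused and not smiling:
        return "felt arousal without overt expression (e.g. watching alone)"
    if smiling and aroused:
        return "genuine positive affect"
    return "no measurable response"
```

A big smile with a flat GSR trace falls into the "acting" branch, while a flat face with a GSR spike falls into the "watching a comedy alone" branch, matching the two examples in the interview.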
It's kind of fallen out of vogue in medical research of late in favor of fMRI, but it's becoming more and more popular on the business side and in psychology experiments and the like, because it's fairly straightforward and simple to implement and does not require anywhere near the processing power that fMRI does, or the expense that fMRI does. And EEG is one of those metrics where people who advocate it say it is powerful and flexible, and people who don't like it say it's a frustrating piece of crap. It's wholly dependent on your patience with it and your willingness to learn and develop the analytic toolbox. If all you see are the voltage outputs of each individual channel, that doesn't tell you much at all. There's a lot of back-end signal processing, but from that, what you can extract are things such as, say, how aversive or appetitive a particular stimulus is. A metric that's very popular right now is called frontal alpha asymmetry: looking at the difference in the alpha frequency band power between the right and left frontal lobes of the brain. And depending on the direction of asymmetry, it's an indication of how attractive or aversive something is. Another thing that's very popular to look at is cognitive workload. For a task, a message, a block of information, how challenging is it to interact with, to interpret, to process? We see this a lot with marketing departments who want to craft a message that has high salience but doesn't require a great deal of cognitive workload. Beyond that, you can also look at levels of attention. Is somebody showing a lot of focused attention on a task, or are they showing a great deal of distraction? And this, again, really comes into its greatest power overlaid with other data streams. For example, eye tracking can tell you if somebody's eyes are moving across something. It tells you nothing about whether they are actually processing it.
Overlay on top of that a measure of cognitive workload or engagement, and you see, okay, you know, their eyes are going over this block of text and you're seeing an increase in workload. So then it gives you a really good assessment of information processing. Moving on: ECG, EMG. These are two more electrophysiology streams, looking at voltage outputs from the heart and then from muscle tone. These can tell you different things. Facial EMG is a very specific application of EMG that's becoming very popular with VR. And actually, this is where it may become very relevant to some of your audience here. Obviously, because of the realities of a VR headset, you can't do facial expression analysis while you're wearing one. It occludes so many of the anchor points and dynamic points on the face that you can't calculate the action units. What you can do is calculate the EMG, the electromyographic activity, of different muscles on the face. The corrugator and the zygomaticus are the two that are most popular. The corrugator and zygomaticus muscles are the muscles associated with frowning and smiling, and so you can put electrodes on the face to look at whether people are smiling or frowning while in a VR environment. What's great about this is that EMG is a very sensitive signal, so even if people aren't necessarily giving much response, there's always going to be micro activation, there's going to be basic muscle tone in those muscles, and you can see pretty good fluctuation even without a huge overt response. You can also do EMG at any muscle in the body. And we see this used as a stress response measure, you know, static muscle tone tends to increase as people become more stressed, more tense. That's what it means, tension. And then ECG, electrocardiogram, is also used in a similar capacity, looking at what's the absolute heartbeat rate. Does pulse increase in response to a stimulus?
Also, heart rate variability is often used as a measure of, actually, different things, to answer your question. Overall health: it seems a little bit counterintuitive, but variability in basal heart rate is actually a mark of good health, and if it's fixed and very, very regular, that's indicative of a pathological condition. And also, what's the variance over time? Do you see an increase or decrease in that in response to a stimulus, in response to an intervention? And that tells you how people are responding to it. And then, conversely, flipping it around, can you use a stimulus to push people in one direction or another? Yeah, that's our boilerplate toolbox. That's probably way more than you were hoping to get. And then, beyond that, for iMotions itself, we have an API platform, so for any data stream you want, if you can get a TCP or a UDP output, we can stream it in real time. And we've seen all sorts of different applications of that. For example, the Stanford driving simulator: they have this very immersive car simulator, and they import the steering wheel, brake, and accelerator data streams into iMotions, concurrent with the biometric data streams. I've also seen it used to look at voice volume, or just sound volume, over the course of a stimulus. So if there's a data stream that you want to use, that can also be brought in, if it facilitates the answering of whatever question you have.
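The heart rate variability measures Burkhardt mentions are conventionally computed from the inter-beat (RR) intervals of the ECG. Here is a minimal sketch using the standard time-domain metrics SDNN (overall variability) and RMSSD (beat-to-beat variability); this is generic textbook math, not iMotions' implementation.

```python
import math

def hrv_metrics(rr_ms):
    """Standard time-domain HRV metrics from a list of RR (inter-beat)
    intervals in milliseconds. SDNN captures overall variability;
    RMSSD captures short-term, beat-to-beat variability."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {
        "mean_hr_bpm": 60000.0 / mean_rr,  # beats per minute
        "sdnn_ms": sdnn,
        "rmssd_ms": rmssd,
    }
```

A perfectly regular sequence of RR intervals yields an SDNN of zero, which, per the point above about fixed heart rates, would be the pathological case rather than the healthy one.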

[00:18:32.590] Kent Bye: Yeah, actually, that was exactly what I wanted to hear in terms of all the different types of input data and what you can tell from it. And I know that in the context of VR, you kind of alluded to a couple of things with the problems of using, let's say, a webcam to do face tracking. If you're wearing a virtual reality headset, you have an occluded face. And so I know that there are some researchers like Hao Li who have been trying to do this database regression analysis of just looking at the lower part of the face. In a VR experience, there is likely going to be some combination of eye tracking combined with perhaps some sort of external camera sensor to get the face. But the thing that I think VR has a long trajectory to get to is microexpressions, and the importance of microexpressions when it comes to subtle signals, when it comes to being able to detect cues to emotion. So I'm curious if microexpressions are something that you've looked at, and, first of all, how you actually detect them.

[00:19:41.850] John Burkhardt: Sure, and you're absolutely correct: especially when it comes to social communication, micro-expressions are critical. It's a huge part of how we communicate with each other, how we indicate emotional states. It's challenging, as you point out, even without VR, even without a headset on. Microexpressions are very challenging to observe and to rigorously quantify because, well, they're micro, as you've named. And there are a few different ways you can do this. Absent a VR context, the ideal is always going to be to have some kind of camera-based assessment, because nobody likes to have sensors on their bodies. Cameras are nice. We're comfortable with cameras. We have them pointed at us all the time. And so if you can do some sort of camera-based assessment, that's great. It is very difficult, though, and the algorithms, the best engines out there, still are not fantastic at detecting micro-expressions. Muscle tone is the gold standard for that sort of thing. It's absolutely the best way to go. There is a challenge right now in just electrode size. Electrodes take up space. There's only so much real estate on the face, and to get a comprehensive recording of the different muscles responsible for micro-expressions, you'd have to have dozens of electrodes on the face and either wires coming off of them or large transmitters off of each of them. So there's just a practical limitation in terms of, at that point, are you even getting genuine facial expressions anymore? And to what extent is just having all of these electrodes on your face, and all these transmitters or wires hanging off, changing up what you're emoting? It's the butterfly effect. You're changing the context, and therefore you can't really be sure of your data anymore. I think that will eventually be surmounted. I don't know if you were at the talk this morning with Ramez, I believe was his name, and he talked about how miniaturization is inevitable.
Everything is going to get miniaturized and mass-produced. My own personal feeling is that we're probably five to ten years out from having an electrode solution that is practical enough to deploy to get genuine micro-expressions. And in that case, yeah, that opens up a different, I don't want to say challenge, but a different task before us. Because then we have to start writing the primer of how we decode these micro-expressions. Because even though they're the same facial action units, when you talk about them in the micro context, they don't code for emotion the same way. They cue various things. They don't necessarily indicate gross joy or gross anger. And so then writing that primer for the micro-expressions, I think, is going to be a very interesting task for whatever crop of grad students and postdocs happen to be working in that era, whenever it becomes relevant. I pity them and I envy them at the same time. We're going to see some really good papers come out of it, and it's going to change the game, especially for VR communication, because then you can get realistic integration of whole-face affect into a VR context, and you can start getting legitimate VR communication between two participants. Call it a real-world social experience in virtual space.

[00:22:41.077] Kent Bye: Yeah, I think that's the kind of thing that I hear: being able to capture and transmit micro-expressions. I think once you're able to do that, then you're able to maybe have a full range of facial expression and mimic what it feels like to talk to somebody. But right now there's just a bit of a simulacrum that happens, without being able to get that full fidelity. I think that actually brings up other kinds of sociological impacts. Let's say we take all of this biometric data that is maybe operating at an unconscious level, where we're not even really fully in control of it. If we have an emotional state, we can kind of have a bad poker face, so that we have a tell into different emotional feelings that we're communicating that we may not even be aware of. So there's one dimension, which is just the self-awareness of being able to perhaps be in social situations and record yourself and watch the biometric data, to get a little bit of a reflection of what was actually happening internally and what was being communicated externally. But then I think there's also a social impact: what does it mean when we start to integrate all these kind of hidden biometric signifiers within our social situations? Is it aiming us towards a larger trajectory of more radical authenticity, of being more in alignment with what we're saying and what we're believing? Or is it going to perhaps have other sorts of effects, where people try to deliberately hide and mimic their facial expressions, to go in the opposite direction, away from that radical authenticity? So I'm just curious to hear some of your thoughts about the future implications of making these more available to ourselves, but also to other people in these contexts.

[00:24:17.895] John Burkhardt: Sure, yeah, great question. The ethical implications, it's not even just one conversation, it's probably a dozen different conversations on a wide array of topics that need to be had. It does push things into a different space than we've been interacting with before. I've said for many years now, and people think it's a joke, but it's not at all, that the fields of marketing and brainwashing have a long history of liberally borrowing from each other's research. And this sort of technology, this biometric research, slots very, very neatly into both. And you do have the question of, when you reach the point where you can accumulate so much biometric data with such high fidelity that you can genuinely predict or push somebody's behavior in a very specific direction, at what point along that line does it become unethical? At what point do we have to say, that's enough, we can't push this any further? We're not there yet. We're probably quite a ways from that. But I mean, that's the goal of marketing, right? That's the goal of advertising. People are trying to do that. And when do we say stop? In terms of availability, yeah, I think anytime you start talking about hidden sensors, that becomes a problem. We've just sort of had a tacit agreement as a population that we're okay with cameras in public spaces, but that game gets flipped entirely once you go indoors or into a private space. And even though cameras are okay in public, we don't have that same tacit acceptance of recording conversations in public. And so a lot of that is driven by social norms, for better or for worse, but it also has been predicated upon this being the current level of what we're able to do. We don't really have a playbook for what happens when, say, I can read your emotions, or rather, I have a camera that can read your emotions, from 200 meters away, and then I can prepare myself for how to interact with you once you walk in my direction.
And yeah, assuming that even becomes possible, then what's the social consequence of that? Will people start wearing masks? Just like literally, physically wearing masks so that their biometric data is less interpretable to those around them? Will people not care? It's hard to say. A lot of it, I think, is going to be dictated by whatever the collective consensus is. So much of how technology in general manifests and progresses is dictated by convention and custom. We're seeing that already now, just taking this in a very different direction. But social media, the accepted modalities and ethics of social media communication have evolved because we've just kind of decided we're okay with things being like this. Not because it's necessarily practical or ethical, but we're willing to accept it. And my own suspicion is that you're going to see biometric implementation in the real world, in the day-to-day space, evolve in much the same way.

[00:27:04.948] Kent Bye: Yeah, in talking to different people about the VR space in particular, it involves these huge companies like Google and Facebook, which some classify as performance-based marketing companies: aggregating all this data about us and then tying that back to our personal identity. The technological roadmap that I see, at least, is that we're on this trajectory of bringing more and more biometric data into virtual reality, but also in the context of some of these companies who are going to be tempted to tie that biometric data back to our personal identity. So it brings up a question: right now, if you take just a data stream from your body, it may not necessarily be personally identifiable. However, I suspect that there may be unique biometric identifiers, such that you may be able to take either a single data stream or multiple data streams put together, and while to the common person it may look like a bunch of numbers that are not personally identifiable, with the right algorithm I'm imagining you're going to be able to essentially unlock a unique biometric fingerprint for each individual. I'm curious to hear what kind of work has been done in terms of taking this biometric data that's coming from our bodies and tying it uniquely to an individual.

[00:28:24.145] John Burkhardt: So, about 15 years ago, the UCI, the International Cycling Union, realized it had a very big problem on its hands: performance-enhancing drugs in cycling had outstripped their ability to test for them. The pharmacological interventions were way too sophisticated for them to proactively test for. So they came up with this idea called the biometric passport. The UCI was ahead of you on this concept 15 years ago. They realized: we can't necessarily test for things specifically, but we can create a profile for each of our riders. Obviously there are very different biometric data streams; they were looking at blood values, protein values, and resting heart rate values. They could say, okay, we know that this is the normal range for this person in a rested state, and if we monitor this over the course of a season and see it fluctuate outside of this accepted range, then that qualifies as somebody doing something illicit. So, you're absolutely correct. We've all heard the case of when Target predicted a girl was pregnant before she even knew she was. If Target can calculate that sort of thing based on your shopping habits, then it is unquestionable that in pretty short order we're going to be able to calculate a unique biometric fingerprint. It's going to happen. I would say to some extent it's unavoidable. The question is going to be who holds those keys and what is done with it. And again, in some ways, we're also kind of okay with that already. If we're okay with blood being drawn, then we're okay with giving up our genetic fingerprint. Because, hey, guess what? You give up a unique genetic signature every time you give blood. And we don't think about that, because we're kind of okay with that. Biometrics feels more personal.
I think, in general, people are much more reluctant to give up that sort of thing because so much of biometric data is overt, for lack of a better term. It involves active engagement, facial expressions, conscious thought, eye tracking. Even though a lot of these things really are autonomic to a much greater extent than we believe them to be, they feel like they're volitional. And so I think people are going to have more of a reluctance to have that calculated for them. I do think it's inevitable. It's going to make for a wonderful computational paper from somebody at some point. And at that point, the genie is going to be out of the bottle. And I think the question that dictates how that plays out is going to be, again, who holds those keys and to what end?
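The biometric passport approach described above, establishing each rider's normal resting range and then flagging values that fluctuate outside it, amounts to a simple per-individual anomaly check. Here is a minimal sketch in Python; the function names, the z-score threshold, and the sample values are all hypothetical, not taken from any actual UCI system:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Establish an individual's normal range from resting measurements."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag a reading that falls outside the individual's accepted range."""
    mu, sigma = baseline
    return abs(value - mu) > z_threshold * sigma

# Resting heart rate readings collected over a season (hypothetical data)
resting_hr = [52, 54, 51, 53, 55, 52, 54]
baseline = build_baseline(resting_hr)

print(is_anomalous(53, baseline))  # within the normal range: False
print(is_anomalous(70, baseline))  # far outside the baseline: True
```

The point of the design is that nothing here tests for a specific drug; it only asks whether this individual's numbers have drifted away from their own established profile.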

[00:30:53.767] Kent Bye: There was a moment where you were talking about the connection between marketing and brainwashing, and this fine line between whether you're marketing to people or actually directly, deliberately controlling them. You had mentioned that you have a background in behavioral neuroscience, so I'm just curious to hear some of your thoughts on where you see some of the larger ethical implications of that headed.

[00:31:17.469] John Burkhardt: Yeah. My big concern is when people start to figure out how to specifically trigger fixed action patterns in people. Fixed action patterns are pretty much exactly what they sound like: patterns and sequences of behavior that are more or less stimulus response. And these are well documented in the literature across a number of species. A classic example is the turkey. Think of the wild turkey. Delicious animal, dumb animal. Turkeys have remarkably small brain capacity, remarkably low brain volume, yet somehow they manage to raise young chicks up to full health. The turkey mother is really, really good at knowing where to sit on her nest of chicks. And so the question was always: with a brain that useless, how can the turkey figure this out? Through a series of investigations, what they figured out was that the mother cues in very, very specifically on the sound the chicks make. It cues in on the cheeping sound of the baby turkey, and the mother sits on that sound. So what they figured out was that you can basically take a little tape recorder, this was in the era of tape recorders, that plays back the cheeping of a turkey, and the mother would sit on that. It was a fixed action pattern. It was purely stimulus response. She had no capacity to control whether or not she did it. Birds are frequently used for this sort of research just because birds have a lot of physiological and behavioral advantages for handling. I forget the exact species, but a mother regurgitates food to feed the chick. How does the chick know where to respond to get this food? It turns out the chick responds to a red-striped band around the mother's bill; it targets this band and pecks at it. And you can create a super response by making an artificial bill with three or four red bands around it, and the chick will go nuts going after this. It's beyond even the natural capacity. Again, fixed action patterns.
These are things that are programmed into us. It's not a volitional action. It's not a choice. It is done. And we like to think as a species, as a human species, that, okay, this happens in lower order animals, this doesn't really affect us. But that's not really the case. We are far more stimulus-response and fixed-action-pattern driven than we like to even contemplate. And if you don't think that's the case, ask yourself why Hare Krishnas are no longer allowed to give out flowers at airports. There are laws. There are laws on the books, because the Hare Krishna sect has always insisted on begging, and they found that when they gave a flower to somebody, it increased the rate of donation by about 70 to 80 percent. It was such a massively efficacious behavioral trigger that it was made illegal, because we couldn't stop ourselves from giving to them, even though we didn't want to. If you ask people after the fact why they gave, they say, I don't know, I just did. The principle of reciprocity is a massive behavioral trigger for us. The concern is that there are a lot of behavioral sequences, behavioral chains that we learn, context-dependent, sure, but a lot of them are common across populations, across individuals. When people get enough biometric data, enough behavioral data, enough neuronal data, and can read the codes from all of these, what happens when they're able to trigger a series of behavioral responses in us? I would argue that's clearly unethical, but at what point does it become unethical? We already do that with advertising, with marketing, with interactions, even just dating. You could argue that dating is largely an interaction where both people are trying to get something out of the other person. Now, hopefully it's a bit more symbiotic than that, but there is absolutely an element of what you can get from that other person involved. And so we're okay with a level of that, but...
If that goes beyond a certain level, it becomes a very big ethical problem. What is that level? No one's really addressed that question yet, I think, because few people have adequately accepted how much of our behavior really is stimulus response. When somebody is able to understand that, interpret it, and manipulate it, much of our free will, if you will, goes away. Now, that creates a very separate philosophical question: if somebody can take away our free will with the right stimuli, then did we even have it to begin with? That could go on for hours and span several rounds of drinks. But my big concern, as I said, is what happens when people do use this biometric data in conjunction with other data streams to figure out how to read and how to activate our behavioral patterns and behavioral sequences. And I don't have a good answer for you on that. I wish I could tell you what the great hope for us all is after raising this terrible specter, but I don't. I hope that one of your listeners, or a group of your listeners, can figure out how to best proceed from here.

[00:36:09.062] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and the integration with biometric data and what it might be able to enable?

[00:36:19.659] John Burkhardt: Oh, wow. Now you're taking me very much outside my comfort zone. I'm not a VR person by any stretch, but I do think that feedback is going to be the big thing. VR right now is primarily visual-based, and then there are auditory components as well. I think that the goal, of course, is to create genuine immersion, where you can get complete sensory replacement or sensory simulation across all modalities at some point. And for that to happen, you're going to need very, very accurate physiological data. One, to figure out how to provide those data streams, and two, how to keep them within safe tolerances. My hope would be that VR is not going to evolve to a place where it can be used maliciously. I think that the notion of biometric-guided VR experiences, adaptive experiences, is going to be very important and hopefully direct a great deal of that: the ability to calculate emotional responses to direct an experience. Obviously, the most straightforward application would be to create an appealing experience, something that makes you relaxed or that you enjoy, but you could also take it into a therapeutic context. This is not really my field, so I might be talking out of my backside here, but I could imagine a clinical psychologist or psychiatrist using feedback-guided VR to guide somebody through some sort of traumatic event or to move them past a particular mental block that they may have. This feedback, this back and forth between the actual experience and your physiological response, is going to ultimately be where both fields converge and evolve.
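The feedback-guided, adaptive-experience idea Burkhardt describes, reading a physiological signal and steering the stimulus to keep it within safe tolerances, can be illustrated as a minimal control loop. Everything here is hypothetical: the names, the normalized 0-1 arousal scale, and the band thresholds are illustrative, not drawn from any real VR or iMotions API:

```python
def adjust_intensity(arousal, intensity, low=0.3, high=0.7, step=0.1):
    """Nudge scene intensity to keep measured arousal inside a safe band.

    `arousal` is a normalized 0-1 physiological reading (e.g. from GSR);
    `intensity` is the stimulus level the experience currently presents.
    """
    if arousal > high:    # user too stressed: ease off
        intensity -= step
    elif arousal < low:   # user disengaged: ramp up
        intensity += step
    return min(1.0, max(0.0, intensity))  # clamp to valid range

# Simulated session: one arousal reading arrives per update (hypothetical)
intensity = 0.5
for arousal in [0.2, 0.25, 0.8, 0.9, 0.5]:
    intensity = adjust_intensity(arousal, intensity)
print(round(intensity, 2))  # → 0.5
```

A real therapeutic system would be far more careful (smoothing noisy signals, hard safety cutoffs, clinician oversight), but the core loop is the same: sense, compare against tolerances, adapt the experience.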

[00:37:56.105] Kent Bye: Awesome. Well, thank you so much.

[00:37:57.726] John Burkhardt: Thank you very much. I enjoyed it.

[00:37:59.550] Kent Bye: So that was John Burkhardt. He's a behavioral neuroscientist who is now working at iMotions. I have a number of different takeaways from this interview. First of all, I think it was just really fascinating to hear about all the different biometric data streams that are out there, and that over the next two to five years, more and more of these biometric data streams are going to start to be integrated into a virtual reality context. I think there are a lot of really compelling use cases for this biometric feedback and integrating it into these experiences: not only to have more self-awareness about what's happening in your body, but to bring these unconscious autonomic processes into your consciousness; maybe that feedback loop will make them a little bit more volitional. There are also going to be interesting social implications of what it means to start to have these more explicit indicators of somebody's internal emotional states within a public context. So there are obviously a lot of ethical issues that John was bringing up, and that's part of the reason why I wanted to air this episode now: this is where the technological roadmap is headed, and some of the privacy policies and discussions around this need to happen now so that we can see where this is going. The big takeaway that I got from going to the Experiential Technology Conference was that there's a specific medical context, market research context, or cognitive enhancement learning context under which a lot of these biometric data streams and EEG signals are currently being used. Once you put them into more of an entertainment or consumer VR context, then there's the question of who is able to capture, store, and record this data, and also tie it back to your personal identity. And do you want it to be tied back to your identity?
I think that's the biggest question: the capturing and storing of it, and being able to essentially data mine it and start to predict these different behaviors. So just quickly to go over this technological roadmap: we have eye tracking, which is tracking what is drawing your attention, what is holding your attention, what the visual salience of it is, what your search patterns are in looking at a scene, and what your area of focus is. Facial tracking is going to start to track your emotional response to stimuli: what is your emotional affect and emotional salience? It's going to be able to use facial action units to determine one of the seven emotions of anger, fear, joy, sadness, disgust, contempt, and surprise. Then there's the galvanic skin response, which on its own is not able to tell you much, but it's basically the intensity of the degree to which you have any type of physiological arousal. Then you have the electroencephalograph, the EEG, which is measuring many different things. The metrics that John mentioned are focused attention, approach-avoidance response, and your cognitive workload. And in talking to other companies, there are all sorts of other AI algorithms that you can use to determine biometric markers for different types of diseases. Then there are the EMG and ECG, which are looking at stress responses, your muscle contraction, and your cardiac variability. Looking at your heart rate variability over time tells you about your overall health, but it's also able to look at the actual heart rate to see if there's an instantaneous response to a stimulus, or, over time, to look at the heart rate variability to see if there's a long-term impact in terms of the physiological arousal that you might be having as well.
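One common way the heart rate variability mentioned above is quantified is RMSSD, the root mean square of successive differences between beat-to-beat (RR) intervals; higher values generally indicate a more relaxed, parasympathetic-dominant state. A minimal sketch, with hypothetical sample values:

```python
from math import sqrt

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals,
    a standard time-domain measure of heart rate variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# RR intervals in milliseconds from a resting recording (hypothetical)
rr = [800, 810, 790, 805, 795]
print(round(rmssd(rr), 1))  # → 14.4
```

This is only the time-domain view; production-grade HRV analysis also handles artifact rejection and frequency-domain measures, which are out of scope for a sketch like this.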
So it sounds like it's not necessarily just an individual data stream, but the fusion and integration of all of these that lets you start to extrapolate all sorts of internal information that you may not even necessarily be completely aware of. I do see that VR, in combination with these different metrics, is going to give you this additional sense of self-awareness, but there is this question of all these other ethical implications. The thing that really jumped out and is going to stick with me for a long time is that John said the fields of advertising and brainwashing often borrow from each other's research. So there's this line between what is marketing and advertising and what is explicit control. We've already started to see some of that impact in the different social science experiments that Facebook has conducted, changing various things and seeing how that would change the emotional affective state of small selections of users. Then there's this idea of a fixed action pattern, which is essentially these chains of stimulus responses that may not be completely in our conscious awareness, that are below our conscious perception. I think that's sort of what's already been happening with marketing and advertising. And as we start to move into using these same techniques, where you're not only getting an ad within the context of an environment, but potentially even controlling every dimension of that environment, then what are the ethical implications of that? Where is the line that should be drawn? I think that's essentially what John is saying: you can see that the high extreme of that would be unethical, but where is the threshold where it crosses the line? So there are still lots of big open questions about privacy in VR. And like Jim Preston said in the previous episode, privacy in VR is a complicated issue.
And it's going to take the entire VR community to start to talk about these issues and figure out as a community what we want to have in terms of what is going to be public and what is going to be private. How do we navigate this realm where there's a line that we know is out there, but we haven't crossed it yet? Are we going to just move forward and wait until we cross it? Or is there a way that we can predict where it is? I don't know. That's an open question that I can't even answer. But I think these are important questions that John is asking, and that I also wanted to put out to the VR community. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do tell your friends, spread the word, and become a donor to the Patreon. Just a few dollars a month makes a huge difference. So go to patreon.com slash Voices of VR. Thanks for listening.
