#819: Neuroscience & VR: STRIVR is Leading the VR Training Revolution from Elite NFL Athletes to Walmart Employees

michael-casale
There’s been a virtual reality training revolution slowly brewing over the last five years, and STRIVR has been at the forefront of innovation, working with everyone from elite athletes in the NFL to thousands of Walmart employees. STRIVR has been implementing a number of proofs of concept and initial deployments, and it sounds like the results are positive enough for many of their top clients to continue to invest in and expand their virtual reality training. STRIVR has been keeping a pretty tight lip around the details of their clients and the extent of the VR training, but they were able to announce in September 2018 that Walmart was purchasing 17,000 Oculus Go VR HMDs for training purposes, after an initial report in 2017 announcing that Walmart would be bringing VR training to all 200 Walmart Academy training centers.

I had a chance to have an in-depth discussion with STRIVR’s Chief Science Officer Michael Casale at the Games for Change Conference in New York City on June 19, 2019. I’ve had two previous conversations with Casale in episodes #429 and #595, as well as with Stanford’s Virtual Human Interaction Lab founder Jeremy Bailenson in episode #616. Casale told me about how the training for elite quarterbacks and Walmart employees shares a surprising number of similarities from a learning perspective.

Here’s a quick overview of all of the ground I was able to cover with Casale: more high-level details on the positive response to VR training, what they’re finding after training many thousands of people within VR, some of the underlying open questions of neuroscience and the nature of learning, how VR is allowing people to upskill and have more agency over their career paths, implementing best practices for spaced repetition, how eye tracking may be able to help determine expertise, the frontiers of biophysical data and what EEG might be able to contribute to learning in VR, ensuring that there’s enough variation in learning, how coaching is evolving with real-time feedback from specific contexts and experiences in VR, what the mobile and tetherless Oculus Quest will mean for the future of training, and finally how VR is a behavioral scientist’s dream come true in being able to simulate many aspects of the deeper context of an experience.

There are still a lot of questions about how to assess and quantify expertise, and how to determine when someone is truly ready to move from virtual training into actual deployment. But the technological roadmap for VR promises a lot more biometric data, and eye tracking looks like it will have some of the most profound impacts, especially once they’re able to compare the eye gaze patterns of experts with those of novices. There are a lot of indications that the immersive industry is headed towards a larger revolution in virtual reality training, especially with the early successes that STRIVR is reporting. There’s a lot of technological innovation still left to be done and integrated with the best practices in learning, but it looks like STRIVR is benefiting from its early-mover status in the industry, and they’re currently focusing on scaling out their trainings to larger and larger deployments. We’ll be hearing more about Oculus’ enterprise offerings at Oculus Connect 6, and we’ll get more data points as to how VR is being adopted within the enterprise.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So this is the final episode of my series looking at the future of neuroscience and VR. I'm going to be talking today to Dr. Michael Casale. He's the chief science officer at STRIVR. So STRIVR has been on the bleeding edge of looking at how to train everything from elite quarterbacks in the NFL to Walmart managers, and lots of really big companies are starting to use VR training. So Michael started off in computational cognitive neuroscience. He's a behavioral scientist looking at behavior and learning, and he was doing it in an academic environment where he felt very constrained to use these abstract constructs and to run these different tests on a dozen people, and it just wasn't necessarily convincing to him that this was going to translate into the real world. So when VR came along, it was like a dream come true to be able to actually start to simulate these real-world environments, put people into these different situations, and to actually see if these learning theories actually applied. So STRIVR is working with some of the best football teams in the world, from college teams like the Clemson Tigers to lots of different teams within the NFL, with lots of quarterbacks who've been doing these trainings within VR, and then also working with huge companies like Walmart, who made a big announcement a number of years ago that they had bought 17,000 Oculus Gos. That was a huge turning point, I think, and it signaled not only to the wider VR industry, but also to Oculus itself, to get their own enterprise offerings in order and to actually start up a proper enterprise department there at Oculus, because this is a huge market that's out there. 
And so I had a chance to catch up with Michael Casale, and I always love talking to him because he's really on the bleeding edge of showing the future of what's possible with this training, working at a scale that not a lot of other people in the VR industry are working at, with as many people as they have there at STRIVR. So I think they're at a point where they're able to take a lot of the lessons that are being proved out in both the research and the academia. Jeremy Bailenson of Stanford is one of the advisors involved in the company as well, and he's also been on the bleeding edge of all this research, trying to integrate it and take it to scale. So they're able to take these latest technologies and start to deploy them out. And so this is kind of a state of the union in terms of where VR training is at, what they know, and what they're learning about the nature of the mind and the nature of learning as well. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Michael happened on Wednesday, June 19th, 2019 at the Games for Change conference in New York City, New York. So with that, let's go ahead and dive right in.

[00:02:54.318] Michael Casale: So I'm Michael Casale, I'm the Chief Science Officer at STRIVR. We're generally using immersive technologies to train in the workplace. So this can be anything from folks on the front lines in the retail sales industry, or even in managerial positions, to be able to communicate better with co-workers, et cetera. So if you can imagine anything that happens in the workplace that's training related, that's something that we believe we can make an impact on and improve.

[00:03:22.572] Kent Bye: Right, and so this is, I think, our third conversation we've had on the record. And we've talked before about a variety of different issues. But I'm just wondering if you could catch me up in terms of your background and a little bit more context there, but also your journey into immersive technologies.

[00:03:36.763] Michael Casale: Sure, yeah. So by training, I am a computational cognitive neuroscientist, which is a mouthful. So I did a PhD in that domain in the psychology department at UC Santa Barbara for a while, and during that time I really focused my research on a lot of just basic cognitive phenomena: learning and memory, how people acquire information, how they make decisions with that information, and specifically thinking about the brain areas that subserve that. What we really were after was building these models of the brain, obviously not completely accurate models, so that we could better understand how that happens, that decision making, that categorization, that memory, that learning. And so that was really fun. I had a great time with it. But I was much more interested in applying that research outside of the domain of academia to see if anybody could find it useful. And then during my PhD, I actually met a guy who was working in a lab right across the hall from me, Jeremy Bailenson, who's now a professor at Stanford and runs the Virtual Human Interaction Lab. So he and I became friends then, and I took a passing interest in the work just because VR was pretty nascent. It was cool just to, you know, get an experience that was unlike any other, but the technology was obviously a lot more infantile than it is now. But we always kept in touch even after he went to Stanford, and then I had been working in the actual medical research space of all places, doing behavioral research, finding out whether technologies could help improve various mental health behaviors, patient behaviors, et cetera. And he basically calls me up one day about four years ago and says, hey, I have a master's thesis student who's really interested in training football players with VR. And he's like, I don't know anything about learning, I don't know much about sports, but you know about both. And I know about the VR, so maybe we can tag team this. 
So we both co-advised on his master's thesis. That gentleman was Derek Belch, who became our CEO at STRIVR. And so Derek did his master's thesis. Everything was good. Pretty interesting stuff. But then like a lot of academic projects, it just kind of stopped, right? And that was my issue with a lot of academic research. It was like, well, where does this actually have a place in culture and society? So he founded a company shortly thereafter and asked if I wanted to be a part of it, and at that point it was just so uncertain and unknown, especially the VR space in general, that I said I'd help consult on anything he wanted on that front. But I was very dubious about whether this could be a real value proposition for the world, especially given how little we knew. But a lot of the NFL teams that we worked with took it on its face that this was going to help. They just saw the power right away of being able to transport someone onto the field and get that real-world learning scenario that they couldn't get sitting in front of a video, listening to a coach talk. So again, for that year, it was kind of interesting. I helped however much I could. And then not too long after the company started, Walmart, of all companies, came along and said, hey, that sounds a lot like what we want to do skill-wise. And I started to think about it. It's like, well, one's a football player, one's a Walmart employee. But from my perspective, from the learning perspective, there are a lot of similarities in terms of what you're doing skill-wise, what you're learning, and where you're having to apply that decision-making in the real world. So I got really excited, everybody at the company got really excited, and we started to pursue this enterprise route, and that's when I came over full-time with STRIVR and became their chief science officer. 
So now I, because of my computational background and because of my learning background, I help basically set the methodology and the foundation for how we develop immersive training. So basically, you know, we want a good experience, we want an engaging experience, but we want one that's true to the learning science. And so developing it with those best practices in mind. And then on the other end of that, the analytics, the insights that we can get from these learners is unprecedented. And it's going to allow us to be able to really understand what's working and what's not in the space of VR. And then, like I said, get in the mind of the learner. Are they prepared, really? Because even if they sat in front of a training in front of a computer and took a couple of multiple choice questions, it doesn't mean they're actually ready. We can actually follow their behaviors in a much more rich and relevant way to really understand how prepared they are. So it's been exciting. We're working now with dozens of major companies, tens of thousands of employees, and getting a really good understanding, unprecedented understanding, of how VR is able to actually shape behavior. So, couldn't be more excited for the work that we do.

[00:07:45.754] Kent Bye: Yeah, I remember talking to you about training elite quarterbacks and some of the skills they have, like being able to look at a scene and categorize it. And I've thought about our conversations a lot as I've been playing Beat Saber, because Beat Saber is very similar in the sense that you have all these blocks coming at you and you have to be able to discern the pattern. And your body unconsciously learns that process; you just have to keep doing it, and eventually you're able to see it. So for anybody who's just getting started with Beat Saber, I encourage them to jump into Expert Plus to see how impossible it feels, but then to go through the game design mechanics and slowly build up to that point where you can actually perceive what's happening. So I've thought about that in terms of, wow, that's very similar to what quarterbacks have to do: reading the defense, being able to identify the patterns, listening to the intuition, and then actually taking the action. So the thing that I don't understand is, how is being an elite quarterback like being a Walmart employee, through the lens of a computational neuroscientist?

[00:08:43.605] Michael Casale: Yeah, that's a great question. You just have to trust me. No, I'm kidding. So I think the high-level answer to that, and I'll get into a little more detail, is that the learning systems that we know govern a lot of our day-to-day decision-making are really kind of unconscious learning. And so the thing that I studied in particular during my academic work was dissociating different types of learning systems and how we categorize information visually in the world. And there are a lot of instances where we are able to use these really simple heuristics that you memorize, right? So think about when you're stopped at a light and it turns from red to green, then you know to go, right? And it's the same thing here, the same thing in Australia, in many countries around the world. It may look different, that stoplight, but you're looking for basically just one of the dimensions, which is color, red or green. So you can take that rule, memorize it, apply it anywhere. But I think that's a lot less common than a lot of the other types of learning that we do. So think about just what I mentioned, navigating a store, right? If you're a manager at Walmart, you have many dozens, if not hundreds, of decisions you're making in an hour about people, about products, about the safety of the store. So there are a lot of things that you have to kind of keep in mind. And it's impossible to do that in a conscious way, where you're systematically running through all these things. What you're really doing is you're taking in all the sensory information, a lot of visual information, but also a lot of auditory information. What are people saying to you? What do you hear in the store? Do things look out of place? Maybe smells are part of that too. And you're basically combining all of this information at an unconscious level. And we know this to be true because this is a lot of what I studied. 
And you're able to make decisions. And you only know how to do that through repeated exposure, through a lot of trial and error. And if you're a manager at Walmart, you don't really get that, right? You get a training that gives you the rules to follow, but not how to apply those rules and not really how to learn them. So it's kind of like a primer, but you still have to go through that experience. And sometimes you're just not prepared when you're on your floor. OK, so now take a quarterback. The parallel there is, again, you're taking in a lot of sensory information, right? On every single play, you're looking at positions of players, you're looking at the situation, the context. So, for anybody who's familiar with football, whether you're close to your goal line or close to the other team's goal line, what down it is, the distance, the weather, how your players are playing, all these things. And you're not going to sit there in a matter of 20 seconds and go down the list and individually consider them. They kind of form this agglomerated percept, right? And so this is something that we know to be true, again, from a lot of the work that I studied on this kind of unconscious learning, this implicit learning system. And so, again, the rules that govern learning in both those situations are the same. We know that, for example, real-time, immediate feedback and realistic feedback is really critical for learning. If you don't give that in learning situations, it's unlikely that people are going to be able to learn. So in other words, have them make a decision and tell them if they're right or wrong, as opposed to, hey, look at all these instances, this is what you should be doing, kind of follow this model. That doesn't necessarily work for that kind of learning. So it's really that exposure where you're able to combine all those real-world percepts that's really critical. 
You know, the commonplace training for both those situations is a very observational, rule-based one. And that's not nothing. Like, it definitely helps learning to some extent, but it's not sufficient, right? It's not able to get you to that place where you're able to make those decisions, you know, most of the time accurately in the real world. So that's the parallel that we draw between a lot of types of learning that we do, but certainly a Walmart manager and a quarterback and how similar they actually are and the skills that they have.

[00:12:14.128] Kent Bye: So I was having these debates with some of my friends about this dialectic between the internal and the external, and the collective and the individual. So the objective individual thing would be a fact, an objective fact that you can observe, prove, and falsify in some way, and then the aggregation of that, the collective of that objective, is knowledge. Well, this is sort of up for debate, I'd say. But that's how we kind of think about it: all these aggregations of facts lead to knowledge. But then there are intuitions that are non-falsifiable, they're just like a feeling, and those maybe aggregate into beliefs. But I don't know if the brain actually makes a differentiation between facts and knowledge and beliefs and intuition, or if it's all just kind of mashed together. And so I'm just curious: how is this information being stored and referred to in our brain?

[00:13:01.468] Michael Casale: That's a great question. I'm not actually sure if anybody knows the answer to that, but I'll take a stab at it. So the supposition is that you have intuitive beliefs that are kind of maybe inherent or that you formed unconsciously. And then on the other side of that, you're going to have explicit facts that we've discovered throughout the world, and the question is whether there are basically different brain mechanisms for storing that information, and then how we come to have knowledge versus beliefs. That's a great question. Somebody has probably spent some time studying this, but my guess is that the difference is probably in the representation of the information, in the sense that those beliefs that are intuitive are probably going to be much more indelible. In other words, we probably have them represented as, like I think you were alluding to, kind of hard-wired beliefs that are hard to change. And a lot of this is probably evolutionary, right? The way we perceive the world, the stories that we tell ourselves, these narratives that we live with every day. And, just to get really philosophical with this, I think that's probably a great example of an intuition that's really indelible, that we're never really taught, but it's this mechanism that we use where, you know, we wake up every day and we have this narrative. We don't really consciously access it, but it's the narrative that lets us determine what we're going to do that day. We don't wake up every day and say, oh shit, something completely new, everything's on the table. No, this is who I am. I wake up, and this is who I am in this job. These are things that you don't actually consciously tell yourself, but you live by these principles. 
And I think maybe that's what you're referring to: these intuitive things where there's no knowledge or facts that have been collected that say, Kent is this person, and here are the things about him. Everybody who would talk about you might have similar things to say about you, but would represent you differently. But you represent yourself in a certain way, and that's really what you live by. You don't live by some kind of collective body of knowledge. You live by the intuition that you've developed over the years about who you are in the world. And that intuition, like I said, is really indelible, really hard to alter. And it's necessary, right? I think from an evolutionary perspective, we often tie together these facts and narratives. So maybe it's more of a meta-knowledge, but I think the way we string together information, and the story that we form from that, whether that's true or not, objective or not, is the thing that we live by. And I think that's a really indelible construct that everybody has. And I would not say that it's based on an objective, systematic collection of knowledge. It's just based on something that your brain is constantly doing. But that's a really powerful thing that drives your behavior. Does that make sense?

[00:15:31.375] Kent Bye: Yeah. What comes to mind is the predictive coding theory from neuroscience. Are you familiar with that in terms of neuroscience theory?

[00:15:38.320] Michael Casale: So the way I know predictive coding is, you see something and you kind of anticipate. In maybe one case, it's like you're assigning value to a particular object, and then you're seeing. So when we think about it from the learning perspective, a lot of that is based on the neuroscience of the reward structures involved. So you see an object that elicits a certain representation in your brain. And if you're using it to take a test and you say, OK, this thing belongs in group A, this other thing belongs in group B, and you say, OK, I'm pretty sure it belongs in group A, and these neurons are predicting that behavior. Well, if you're wrong, that's a really big signal that your brain pays attention to. It causes some dopaminergic release, and then your structures are altered accordingly. And then likewise, if you're making correct decisions and you're getting them right, it's expected, right? Your neurons are predicting that this is going to be rewarding because you were right, and actually that dopamine release is much smaller. So it's really the errors that are causing a lot of the alterations in the neural structure. So that's how I've come to know predictive coding.
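The reward-prediction-error dynamic Casale describes can be sketched with the classic Rescorla-Wagner / temporal-difference delta rule. This is a generic illustration of the idea, not anything from STRIVR's actual systems; the function name and learning rate are made up for the example:

```python
# Minimal sketch of reward prediction error: the learner updates its
# value estimate in proportion to how surprising the outcome was.
# Big surprise (wrong prediction) -> big update; expected outcome -> tiny update.

def update_value(value: float, reward: float, learning_rate: float = 0.1) -> tuple[float, float]:
    """Return (prediction_error, new_value) after observing `reward`."""
    prediction_error = reward - value               # the "surprise" signal
    new_value = value + learning_rate * prediction_error
    return prediction_error, new_value

# An unexpected correct answer (value near 0, reward 1) produces a large
# error and a large update; once the reward is fully expected, the error
# shrinks and the estimate barely moves.
v = 0.0
for trial in range(20):
    delta, v = update_value(v, reward=1.0)
print(round(v, 3))
```

As the value estimate converges toward the actual reward, the per-trial error shrinks toward zero, mirroring the smaller dopaminergic response to expected rewards that Casale mentions.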

[00:16:39.349] Kent Bye: I don't know if that... Yeah, I was at the Canadian Institute for Advanced Research. They had a workshop, and they had a neuroscientist talk about the predictive coding theory of the mind, and also did an interview with a creator who's got this experience called Miu, but... What my understanding was that when we're perceiving the world we're not just like perceiving it as it is, we're actually fusing together with all of our prior experiences of that world and of the context and we're kind of doing this comparison between our models of the mind with what we're experiencing and then there's this error code that then if it does have that error then there's sort of a change and a plasticity and so with Miu they were doing like this moving your body around but then kind of slightly changing it in different ways so then you were kind of like having your body representations be messed with to a certain degree but it was giving you this sense of novelty because it's not something that you expect and so you have this play and I feel like in some ways the way that we experience the world is that we want that novelty. We want something to kind of blow our mind in some ways. And so we do chase after things that are new, things that are different, and that because it does give us that dopamine hit. But I guess the thing that I was thinking about with that predictive coding theory is like, well, these representations, I think of something like artificial intelligence, where there is traditional coding that is very symbolic, and you can have a human read it and write it, and it's linear. You see the code. And then there's the non-linear parallel of a neural network architecture, which gets provided an experience of data, and then it gets all these weights in a more relativistic way. And then you have this gestalt that it's able to make an inference, but it's only able to do that after all this data, but it's sub-symbolic. 
You can't point to any one neuron and say, oh, this is why this neural network said that this is a cat and not a dog. It's very difficult to pin it down reductively. And so that's the question I have in terms of this unconscious versus conscious distinction: I don't know if it mirrors the way that we see artificial intelligence as symbolic and sub-symbolic, whether there are certain things that we can pin down and have a little bit of an audit trail into the black box, versus our beliefs and our opinions, where maybe it's all just kind of mashed together and there's no audit trail at all.

[00:18:47.236] Michael Casale: Yeah, there are a lot of thoughts there. So your more macro point, about artificial intelligence and really understanding the parallels between that and human intelligence, is pretty interesting. I've obviously followed the space pretty closely, even working and dabbling more or less as a researcher back in the day, helping build these artificial brain networks to help make decisions. There's a really fundamental problem in neuroscience, which isn't prohibitive for us to keep learning things that are important and useful about the brain structures that underlie behavior, but at some point you run into issues, and I think this is one of them in artificial intelligence, which is: how do you represent something abstract and immaterial, like a thought or an emotion or a memory, in something that's so material, which is basically electricity in the brain, chemical and electrical firing in the neurons? That, to me, and I think every neuroscientist would probably acknowledge this, remains a problem until you really solve where that direct cause and mechanism is. So I'll give a counterexample to that, which is with other parts of our physiology, heartbeats for example, right? You can actually follow, very particularly and in a very physical way, the biochemical interactions that happen in the nervous system that cause a heart to beat, which is a very physical manifestation of all of that other biochemistry, right? So you have a bunch of cellular interactions, chemicals are being swapped, electricity is being propagated down a chain of cells, and that results in a physical compression of the heart. And that heartbeat is critical, obviously, for our lives. So you can follow the audit trail. And that's the problem with AI: because we're not touching anything, you can't see it. It's really hard to measure a thought and a memory and an emotion. 
So even building any kind of machinery that would hopefully replicate that, even if you, and I think this is your point, even if you could sufficiently represent that in an artificial network, you could say, I know exactly how to create an AI that thinks like a human. Your point is exactly right: well, where's the audit trail? How do we actually go into that brain, that artificial brain, and start to mess around with it in a meaningful way and make predictions about, well, I know if I mess around with this area of the brain and these cells, I'm going to get this? We don't know that. And even more fundamentally, we're not even close to that in AI. Even if you could build a network now, and you can more or less predict people's movie preferences, you don't necessarily know why. And you don't know the alterations to the brain, or even to that artificial network, that would allow you to understand that network and how people come to make these decisions, or what parts matter. To make it concrete: in that movie prediction model, you could have features like all the past movies that you've watched, maybe weighting the more recent ones, looking at your musical taste, looking at your social connections. Let's say you had access to all this information, and it became features in a network. To your point, even if you predicted movie preferences with really good accuracy, 90, 100 percent accuracy, you don't know which features are being weighted more heavily. We can't even do that in a simple context in AI, let alone in that fake-brain context that I described as well. So that is a very hard problem. But I also think that's never going to go away unless you solve the more fundamental problem of how you go from something so material, like these biological cellular interactions, to something so immaterial, like a thought. Does that make sense?
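Casale's movie-prediction point can be made concrete with a toy linear model, where the "audit trail" does exist: each feature's contribution to the prediction can be read off directly, which is exactly what a deep network (or a brain) doesn't offer. The features and weights below are invented purely for illustration:

```python
# Hypothetical linear movie-preference model. Unlike a deep network,
# every feature's contribution to the final score is directly inspectable.

features = {                       # illustrative inputs for one viewer, not real data
    "recent_watch_overlap": 0.8,
    "genre_match":          0.6,
    "music_taste_match":    0.2,
    "friends_liked_it":     0.9,
}
weights = {                        # "learned" weights, made up for the example
    "recent_watch_overlap": 0.5,
    "genre_match":          0.3,
    "music_taste_match":    0.05,
    "friends_liked_it":     0.15,
}

# Prediction is just a weighted sum of the features.
score = sum(weights[k] * features[k] for k in features)

# Per-feature attribution -- the audit trail a sub-symbolic network lacks.
attribution = {k: weights[k] * features[k] for k in features}
top_feature = max(attribution, key=attribution.get)
print(top_feature, round(score, 3))
```

With a model like this you can answer "which feature drove the prediction?" exactly; Casale's argument is that neither current deep networks nor the brain give you that kind of decomposition.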

[00:22:09.245] Kent Bye: Well, yeah, it makes sense that we don't know what consciousness is. It's like the hard problem of consciousness to some degree, which is that the neurological correlates to be able to jump from the aggregation of all these weights into those level of the abstraction that is the gestalt. And when I go and cover artificial intelligence at the International Joint Conference for AI, a number of years ago, I was talking to a researcher who said that they were actually combining looking at Wikipedia pages for bird descriptions, like this is what this bird looks like. So a human can read that. OK, this is what this bird looks like. And they can go out and see a bird. And it's like, OK, that's that bird. That, from an AI perspective, is GeoShark learning. You have had one instance of that bird, and you can identify it based upon the features. That's something that AI can't do right now, to be able to blend this symbolic representation and then to get the translation of what that means in terms of the perceptual input that you're getting. And humans can do that. But at this point, we haven't figured out what the interface between those ideal forms of those archetypal representations, those category schemas that we know and we can name. So all of the advances in AI in terms of AlphaGo and the AlphaZero, that it's combining this sort of top-down hierarchical knowledge base with the sub-symbolic neural network architecture. So you have the top-down and the bottom-up working together. So a lot of, like, there's ways to start to do that, but I guess from a computational neuroscience perspective, it's like we're trying to advance the AI, and maybe we'll be able to replicate it, but it's still a bit of an open question of how to interface between those high-level abstractions with the low-level perceptual input.

[00:23:39.836] Michael Casale: Yeah, so I think one key thing here, and obviously people like Steven Pinker talk about this a lot: in an AI modeling sense, to use your example, if you just give a model one instance of an object, of a bird, that's wildly insufficient for it to go out and classify successfully. Part of the reason, I would argue, and I think other people would argue, is because we're not starting from zero. The model is starting from zero, but we're starting with these pretty robust structures, right? We're not blank slates. And it's really critical to understand those templates that we're born with. You know, they're going to look a little bit different from human to human, but there are some very fundamental things across the species that we probably have as templates for the world, and a way to scaffold on top of them, right? So that we can integrate information easily and robustly, so that we don't need to see 10,000 instances. We can extrapolate the key information right away. We wouldn't be able to do that if we had none of that structure, none of those templates. But that's basically what you're doing when you're starting with these AI structures: if you don't give the model what you're calling these top-down types of schemas to work with, it has nothing to integrate into. So of course it's going to take a lot of exposure. I think that's one fundamental difference. And I don't know if that will necessarily solve the problem sufficiently, but I would imagine it would help a lot if you could somehow understand the base representations and the base ways that we have to learn as humans, and then build that into AI. But because we don't know, people don't want to make assumptions with their model.
They want to kind of leave it relatively unbiased, which is probably smart, because if you start to bias it in the wrong way because of the uncertainty, you could actually have worse learning, right? And it takes longer for that network to eventually learn. But I think that would move the field forward, understanding, like I said, the kind of human-based representation, how we integrate knowledge, et cetera.
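One way to see the sample-efficiency point numerically is to compare a learner that estimates a category prototype from data alone against one that shrinks toward an innate template. This is a toy illustration of the "not starting from zero" argument, not a model from the conversation; the prototype, prior, noise level, and prior strength are all invented:

```python
import numpy as np

# Toy demo: estimate a category prototype (say, a bird template) from
# only a few noisy examples, with and without an inborn prior.
TRUE_PROTO = np.array([1.0, 2.0, 3.0])   # the real category structure
PRIOR = np.array([0.9, 2.1, 2.8])        # innate template: roughly right
PRIOR_STRENGTH = 5.0                     # how many "pseudo-examples" it's worth

def estimate(samples, use_prior):
    """Mean estimate of the prototype, optionally shrunk toward the prior."""
    if not use_prior:
        return samples.mean(axis=0)      # blank-slate learner
    n = len(samples)
    return (PRIOR_STRENGTH * PRIOR + samples.sum(axis=0)) / (PRIOR_STRENGTH + n)

def avg_error(use_prior, n_samples=3, trials=300, noise=2.0, seed=0):
    """Average distance from the true prototype across many tiny datasets."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        samples = TRUE_PROTO + rng.normal(0.0, noise, size=(n_samples, 3))
        errs.append(np.linalg.norm(estimate(samples, use_prior) - TRUE_PROTO))
    return float(np.mean(errs))

blank_slate = avg_error(use_prior=False)
scaffolded = avg_error(use_prior=True)
print(scaffolded < blank_slate)  # with few examples, the template wins
```

With only three examples the scaffolded learner lands much closer to the true prototype on average; as the number of examples grows, the data term dominates and the two estimates converge, which mirrors why templates matter most in the low-data regime Casale describes.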

[00:25:26.072] Kent Bye: Well, what I think is very interesting is that, for one, VR is this interdisciplinary melting pot that is bringing together all these different disciplines, and there's this interesting blend with neuroscientists. More and more of them are starting to look at VR. The Canadian Institute for Advanced Research brought together a number of different neuroscience researchers who are starting to use VR to do very specific research that wouldn't be possible otherwise, with the level of tracking that they're able to do, the eye tracking, the EEGs. It's like a whole new world of learning about the brain with virtual reality. And then on the other hand, coming from the industry side, with what STRIVR is doing at the scale you're operating at, you actually have sets of data that are unprecedented in terms of understanding the nature of the brain and the nature of learning. So what are you learning about the nature of learning and the nature of memory?

[00:26:16.233] Michael Casale: Yeah, so I think what you said, you nailed it. And you're obviously a smart guy, and you can kind of see the future here. And I think that is an opportunity that STRIVR has that's pretty unprecedented right now. You mentioned the ability to capture data in a way that's not just about the volume of data, but the way that we capture it. So just to take a step back, one thing I left out of my preamble: when I was actually doing my dissertation study, one of the things that was, you know, somewhat satisfying but also somewhat frustrating was this idea that we were working in a lab. We were working with very abstract representations of objects, of learning, because we had to, if you really want to do it cleanly. But then you don't really know if what you have found out about human behavior in your studies actually applies in the real-world environment, because that environment is multi-dimensional. It's not as clean as an experiment in a lab. That's what VR offers. And so I sat there one night thinking, man, what if I had a real-world environment, one that I could control like I can control an experiment? And that's exactly what VR is for us. It's basically a real-world environmental experiment where we can control the variables meaningfully, but give you a realistic sense of what the world is like and understand how you react to it, with all the measurements that come with it. And you mentioned the great advancements being made in a lot of the biophysiological measurements, which, you know, are really key and can provide a lot of insight. And that's only advancing. So that's exciting on its own. And then couple that with the other thing you said, which is the volume of data, right?
That's the other thing about laboratory studies, especially in the world of behavioral science: you have studies that are, best-case scenario, on the order of many dozens of participants, but certainly not hundreds and not thousands. And then we have tens of thousands. So we're just at the inception of learning from that. We are a business, which I'm learning doesn't always lend itself well to research, right? You have to take your opportunities. We're also finding a lot of these companies have become willing and progressive partners with us because they want to understand. They're truly interested in wanting to understand their employees better, because they know they can create a friendlier work environment, one where people can actually upskill themselves, right? You don't have to just rely on the whims of, you know, the working world that we live in now in a lot of places for upskilling to happen, or promotions, et cetera. You can actually have people take that into their own hands by offering them meaningful training and meaningful learning experiences. So they're really interested in that. But like I said, we're just at the beginning. One thing we have done, obviously, is to be able to show, minimally, that VR has become a really effective training tool, in at least being able to replicate a lot of the on-the-job training that we see, and then hopefully to go further by offering them even better training than on-the-job, because of all the variety of instances that they can get exposed to. But as far as memory and retention and all that stuff, we're obviously going to learn about that as well: how much training does it take, how long does it last, what are the individual differences between people? And I think for me, what's really interesting is: what are the conditions under which people are going to learn optimally? And so we talk a lot about things like flow states.
Something that's really interesting to me is that you could potentially have a person who's going to be better at learning at different times of the day, or who, if you induce them into certain physiological states, is going to be much more receptive to getting that information and having that information stick. And so I think that's one thing that we're after, too: we can see, just from the data, who's learning well, who's progressing well, and then what are the conditions that are really facilitating or fostering that, both from the content experience, but also from their basic physiology. That would be a really interesting question to answer. But like I said, deploying at scale has been great. I think we'll be able to uncover these learnings over time. But we're just at the very beginning of making sure that people have training that they think is useful, and is interesting, and is actually helping them. And I think we'll start to learn all the other stuff, the research stuff, over time. So that's a pretty fascinating proposition for me.

[00:29:56.837] Kent Bye: Well, I've been to a number of different quantified self meetups, and Steve Jonas from Portland was talking about the spaced repetition programs that he was using to learn and remember information. He told me that in order to learn vocabulary, you would learn it, do one iteration, then be tested and report back how strongly you remembered it. Based on that, the algorithm would tell you the next time you were going to need to see that information, which is just at the moment you're about to forget it, according to this sort of system. So you have these spaced repetition ways of spacing things out. And if you just think of something like Beat Saber: you can play Beat Saber for like eight hours the first time you play it, but you're not going to see much progress in one day, versus if you start to do it each and every day, because there's this unconscious integration process that I think is happening with things like that. So that's the question: with these trainings that you're doing, do you find that people have to do spaced repetition and space it out?

[00:30:57.988] Michael Casale: Yeah, so we definitely advocate for that. It's interesting you brought that up. That is a pretty well-established phenomenon now in the learning and cognitive science literature: this idea that if you space out material, it's typically better learned. So you have the same total amount of learning time, but in one condition you space it out, and the comparison condition is typically massing the learning. What's really happening from a neuroscience perspective is you're repetition-fatiguing the neurons that are representing that information. You're not letting it settle, and there's not this refractory period where, on the second presentation, if you space it out well enough, you actually have a stronger neural response to that information. If you don't space it out sufficiently, you might actually have a weaker response, and so the plasticity is not going to happen, certainly not at the same pace, if you mass the information versus spacing it. So we're very sensitive to that, and we often advocate for it and build our training accordingly, so we do leave time to space. But we're definitely thinking about next-level things, like being able to build intelligent algorithms that can adaptively say, for each individual, how we should present this information next within the training session, or even map out training sessions over time. One of the things that we're running up against right now, of course, with companies is that it's a slow start, right? It's change management. They are coming to this VR thing, embracing it, knowing that it can change things, but it's one step at a time. So even incorporating VR, the headsets, et cetera, into the organizations is a big step forward. Then making sure that you have enough time to train is a big step forward. So we're getting there with a lot of these companies.
But we haven't been able to really take advantage of all these, like you mentioned, cool learning principles that we know work. Slowly but surely, though, we're starting to make headway with these organizations, especially the more progressive ones, to actually do things in a way that's consistent with the best learning practices. So that's exciting to see, but it's certainly not happening overnight. I have a priority list: OK, maybe now it's spacing. OK, now it's making sure that they get immediate feedback. Now it's making sure that they get enough variation in the concept, right? They're not just learning one repetition of one kind of event, they're learning a series of them. That's going to be a more robust way for them to learn and make decisions in the real world. So there are all these learning principles that we know are great, some of which we're able to incorporate now, and some of which are a slow and steady, one-at-a-time thing as we start to grow within an organization and they start to dedicate more and more resources to the training, because they know it's going to be more effective for their employees at the end of the day.

[00:33:23.047] Kent Bye: Well, being here at Games for Change, a big topic that comes up in the education space is assessment: being able to quantify in some way the before and after state of going through something, to say that it's effective. And you look at elite quarterbacks, where they're playing a game and, to a certain extent, they're performing. But how do you tell what's happening when, like you said, there are dozens of different factors being integrated all at the same time that are sub-symbolic and unconscious, and you can't even put a number to them? How are you going to establish that in the first place? And for Walmart employees, they're on the job and they have to deal with different situations. How do you start to quantify and assess something that is fundamentally qualitative and unquantifiable?

[00:34:06.118] Michael Casale: Yeah, that's a good question. Some of these things, I think, will always remain, like you said, kind of qualitative. But I do think that's the great thing about VR: we are going to be able to start to quantify some of these things. So measuring learning behavior, typically, because of a lot of the current resource constraints that I mentioned, is very coarse. We think about paper-and-pencil, multiple-choice tests as the main avenue, because it's the easiest thing to do, and it's close enough. But it's really not, because, you know, you send people out into the world who got 90% correct on their test, but they're wildly unprepared when they get into real-world situations, and then they're having to learn on the job anyway. That takes time, and there's a cost to that. What we can do in VR is start to look at more nuanced measures of expertise. So instead of just looking at, hey, what did they answer on that multiple-choice question, we can use the research literature that's out there on experts. Thinking about sport in particular, there's a really interesting researcher in Montreal, I don't know if you've ever encountered him, Jocelyn Faubert. I think he's an ophthalmologist by training, but he works a lot in the decision sciences and behavioral sciences, and what he's really interested in with athletes is this idea that you can look at their pattern of gaze, and where their gaze goes in the course of sport, to really understand their level of expertise. He's not doing this top-down; he's looking bottom-up at how experts see the world, and he's able to extract best practices from that. Then you can use that as a way to say, hey, this person really knows what they're doing, or they don't. And even logically, it makes sense. It's consistent: experts are watching the play develop in a certain way that a novice doesn't.
In other words, they're anticipating where things are going to happen, sometimes microseconds before they actually happen. So their eyes are in places where the novices' aren't. And they're more efficient with their gaze: they're looking at fewer, more meaningful places. And so we can do the same thing. We can put someone in a highly multi-dimensional environment, whereas before you tested them individually: OK, where should this employee go tomorrow? How much inventory? They do really well on that test when you test each item individually. But now combine all that information and decision-making like you would in the real world, and then see how they perform. See where their eyes go first. See where they choose to explore first in that environment. That can give you a sense of how expert they really are. See how quickly they do it, right? Timing is a big component of this too. So testing people under cognitive duress and cognitive load is also a good way to test how expert they are. So we're getting better, more nuanced ways to understand: hey, both of you maybe got 90% on this paper-and-pencil test, but you're really pretty different in terms of how expert you are. So I actually think within VR, because of the data that we have, we have so much richer a data set behaviorally that we can start to make better quantitative assessments of who's really expert, who's really ready to go, and who's not. Ultimately, where we'd like to go with this is seeing, hey, what does people's performance in the real world look like, and then what does their VR performance look like, right? Then we can actually get to real predictive models and say, we know, because you did this in VR, what's going to happen to you six months on the floor if we put you out now versus train you a little bit more.
And that's data that we're hoping to capture as well, so that we can put people on the floor, and put people in quarterback positions, who are actually going to be prepared, instead of putting them in an environment where they're kind of set up to fail, right? Where they don't have enough training. And I think that's going to be a huge thing for companies going forward: to be able to feel confident, and have the employee feel confident, that they can do their job. And if they can't, we have access to training that they can't get otherwise, right? In this ad hoc system, they can just pop on an Oculus Quest, or whatever the headset du jour is, and actually get that training in an efficient way that they currently can't get in the centralized, human-led training that they get now, which is actually not that effective. So I think predictive modeling, combining the VR data with the real-world data and understanding whether people are prepared to go onto the job or not, is going to be a really powerful thing going forward.
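The "fewer, more meaningful places" idea can be made concrete with a toy gaze-efficiency score: the share of fixations that land on task-relevant areas of interest, weighted by how early they happen. Everything here, the AOI names, the weighting scheme, the example fixation streams, is hypothetical, not a STRIVR or Faubert metric:

```python
# Toy gaze-efficiency score: fixations are (timestamp_s, area_of_interest).
# Fixations on task-relevant AOIs count more when they happen early,
# echoing the observation that experts' eyes reach meaningful places sooner.

RELEVANT = {"receiver_route", "safety_rotation", "blitz_edge"}  # hypothetical AOIs

def gaze_efficiency(fixations, horizon_s=3.0):
    """Score in [0, 1]: early fixations on relevant areas score highest."""
    if not fixations:
        return 0.0
    total = 0.0
    for t, aoi in fixations:
        if aoi in RELEVANT:
            # Linear early-bonus: weight 1.0 at t=0, decaying to 0 at the horizon.
            total += max(0.0, 1.0 - t / horizon_s)
    return total / len(fixations)

expert = [(0.2, "safety_rotation"), (0.6, "blitz_edge"), (1.0, "receiver_route")]
novice = [(0.3, "crowd"), (1.2, "scoreboard"), (2.5, "receiver_route")]

print(gaze_efficiency(expert) > gaze_efficiency(novice))  # True
```

A real readiness model of the kind Casale sketches would combine a feature like this with response time, decision accuracy, and biometric signals, then threshold on observed real-world outcomes; this snippet only shows how one such behavioral feature could be computed from eye-tracking data.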

[00:37:51.042] Kent Bye: Yeah, and seeing some of STRIVR's videos of the coaching of elite quarterbacks: the quarterbacks are going through the repetitions, but I found it really interesting to see how the quarterback is in VR with the coach right there, watching it on a screen and able to give real-time feedback. Not just feedback automated through the technology, but the technology enabling this real-time coaching feedback, which I also thought was kind of new: being able to simulate that context, have the coach standing right there, play it again to go back to the exact same experience, see how they do, and give that feedback. That seems like a huge innovation in real-time feedback. But what are the other aspects of real-time feedback that you're trying to do for something that may not have a coach there, like Walmart? I'm not sure if they have people coaching them in real time or if it's all automated and self-contained, but just generally, where are you going with real-time feedback?

[00:38:50.402] Michael Casale: Yeah, so that's a really great point. One thing I should mention about what you're describing: we're almost never eliminating the human. We're just augmenting their capacity to facilitate training. So what happens now is, you know, if you have a human-led training, it's like a classroom. Someone gets up, they talk, you're listening. It sounds abstract. It's hard to see how it actually applies in the real world. So maybe you're getting the information, maybe you're not. It's all text-based, so you do well on a text-based test, but you can't really do it in the real world. Well, now what happens is: OK, forget all that. We'll just put you in the environment and see how you do. And as you're making decisions, we can coach you along the way. That real-time feedback, as I mentioned just a few minutes ago, we know is critical for this type of learning, this highly multidimensional learning that takes place in most of these learning and decision-making situations. So we're not only able to get people in the real-world environment and have them make decisions as they would in the real world, we can give them feedback right away in a meaningful way, which we know is critical for learning. And right now, like you mentioned, in a lot of our football situations and even some of our enterprise situations, it is human-led. And we think that's fine, and that's powerful, too. And actually, one of the really interesting things is that it facilitates a discussion between the facilitator or instructor and the employee, the quarterback, or the coach that hadn't happened before. Now they're engaged in a new way. Now they can understand: oh, I actually needed to do that when I saw this guy coming off the line. And they can have a more meaningful conversation about what they didn't know before. It's almost like you don't know what you don't know.
So until you're in that situation, you don't realize you have a weakness that you want to explore further. That's creating conversations between instructors and coaches and employees and quarterbacks that hadn't happened before. But as far as next steps for the technology, I think another really powerful thing is to be able to see the consequences of your decisions as feedback. As I mentioned a little while ago, a lot of the way that we learn, just out in the world, forget about classrooms: going out and figuring out how to get to Brooklyn from downtown Manhattan, you're going to have to figure that out. You're going to have to figure out the train lines, how they run, when they run, how much it's going to cost you, and maybe you're wrong. And that trial-and-error learning over time is going to be critical for you to actually learn the right route to take, what time of the day you might want to go, whether it's going to be crowded, you know, if it's a humid day out, maybe it's better to walk, et cetera. That's all stuff that you're just going to have to learn by experience. We do this every day, right? We don't only learn sitting in a classroom. So replicating that kind of learning, and some people would say we're evolved to learn best that way, is also going to be really powerful. If we can actually show in the virtual environments the consequences of what happens when you make a decision, not just someone telling you you're right or wrong, but, if you're a quarterback making the wrong decision, you see the ball intercepted and you feel the deflation and defeat of throwing that interception and having the other team win, having that emotion-laden type of feedback is a really powerful teacher. We know that, and we want to be able to leverage it as well. So I see us and the technology heading in that direction, because we know that's best for learning.
So that's one obvious way we can start to augment even the feedback that we give now. But it's pretty cool to see, like I said, these conversations happening that would never have happened otherwise if people weren't exposed to that real environment. So I think that feedback alone is different from and better than what happens in current training.
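The trial-and-error, consequence-driven learning Casale invokes with the Brooklyn example is exactly what reinforcement learning formalizes: the only teacher is the outcome of each choice. A minimal tabular Q-learning sketch over a made-up transit graph (stops, costs, and rewards are all invented for illustration):

```python
import random

# Toy trial-and-error learning, in the spirit of "figuring out how to get
# to Brooklyn": states are stops, actions are outgoing connections, and the
# only feedback is the consequence of each choice.
GRAPH = {
    "downtown": {"express": "bridge", "local": "detour"},
    "detour":   {"transfer": "bridge"},
    "bridge":   {"walk": "brooklyn"},
}
REWARD = {"brooklyn": 10.0}   # arriving is the good consequence
STEP_COST = -1.0              # every leg costs time

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in GRAPH.items() for a in acts}
    for _ in range(episodes):
        s = "downtown"
        while s in GRAPH:                      # "brooklyn" is terminal
            acts = list(GRAPH[s])
            a = rng.choice(acts) if rng.random() < eps else max(
                acts, key=lambda act: q[(s, act)])
            s2 = GRAPH[s][a]
            r = STEP_COST + REWARD.get(s2, 0.0)
            future = max((q[(s2, b)] for b in GRAPH.get(s2, {})), default=0.0)
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q

q = q_learn()
# After enough trial and error, the direct route is valued above the detour.
print(q[("downtown", "express")] > q[("downtown", "local")])  # True
```

Nothing here is STRIVR's training logic; the point is only that "seeing the consequences" (the reward signal) is sufficient for the learner to converge on the better route, with no one ever labeling a choice right or wrong directly.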

[00:41:56.553] Kent Bye: Yeah, as you're talking about this, it makes me think about how coaching is going to completely evolve as well, because you have the players learning how to do these repetitions and iterations within the technology. But you also have the coach there, able to observe the player in this virtual environment, and eventually, at some point, getting eye tracking data and galvanic skin response, all these other biometric data markers. Coaches are going to have all this insight into what's happening internally with their players, and that's never been available to them before. So how is this going to change coaching?

[00:42:27.721] Michael Casale: Yeah, absolutely. I think it changes in the way that you're alluding to, which is having better insight into the mind of a learner. Again, if two quarterbacks are making the right decisions, maybe you would think that they're comparable, right? OK, they knew to throw to the out receiver, they made their progressions correctly. But maybe they didn't make them in the same amount of time. Maybe they didn't use their attention the way they should. And then we can get a level deeper, which is what you're saying, into the biophysiology. Maybe one of them was actually a lot more nervous, a lot more uncertain about where to go, right? We can measure uncertainty early, right? This is a thing that a lot of researchers do now. We can see how confident they are in their responses, and again, how quickly they got there, and that can give you a better indication. At some point, you know, linking that with real-world performance data, we can say: unless they pass this threshold in the VR environment, the threshold being that they made the right decision in the right amount of time, efficiently with their gaze, and their biometrics look like this, until they fit that pattern, they're actually not ready. You just don't have any of that nuance right now. You just have a yes or no response from them, and then, of course, it doesn't always play out like that in the game, and you wonder why. So it's really giving guys access to a level of preparedness that they wouldn't get before. Now, there's another question: if they're not prepared, what do you do? I think VR can definitely help right now, but it's not sufficient. It's really about knowing that they're not prepared with certain plays, and then finding the time and the resources to actually get them better exposure. Or take a real-world use case that we have, where what's actually happening is, I found, really interesting.
One of our best users in the football realm is a young quarterback, Mitch Trubisky, who plays for the Chicago Bears, and he and his quarterbacks coach, Dave Ragone, who's also a pretty young guy, love this technology. Dave loves it because he can get an insight into Mitch's decision-making that he can't get currently, certainly not on the field. On the field, in practice, you're only getting a certain number of repetitions. You can't see everything. So what Dave and Mitch are able to do, I think every Friday, is spend an hour, hour and a half at Halas Hall in VR, and Dave is asking Mitch: hey, repeat these plays back to me. What do you see? What do you see? And that seems to be really helping Mitch in terms of, you know, being confident. But it also helps Dave: well, I know Mitch is maybe not comfortable with these plays, and maybe he is comfortable with those plays, and then he can progress from there, right? And say, OK, now when I get to the game in two days, on Sunday, I know which plays to dial up for Mitch better than I would if I'd just seen him practice, right? So it's actually providing Dave more intelligence for his decision-making and how to help coach Mitch in the game. That's just a wonderful thing to see, again, that layer of insight. And while we believe we bring a level of expertise to the table with learning and analytics, we're also learning a lot from our users, those who are really invested in the technology. And you're certainly seeing it happen with the younger generations, both in the enterprise training space and in the athletic training space. So it's a cool thing to see. We're able to learn from each other. And just what a cool thing to see someone actually improving by virtue of the science that we're applying to the technology. For me, it's otherworldly that I'm able to do this.
So it's pretty cool.

[00:45:30.030] Kent Bye: Well, didn't the NCAA football champion use STRIVR as well?

[00:45:35.107] Michael Casale: Yeah, so the Clemson Tigers were our heaviest users when they won the national championship not that long ago. And, you know, even with Mitch, like I mentioned, you saw the progress that he made from year one to year two. Pretty amazing as well. So you're seeing a lot of the best and brightest using this and then telling us: this is absolutely a tool that I can't live without. And again, I think we're just scratching the surface. So it's almost like I'm seeing this and I'm like, ah, is this really going to work? And they've embraced it, they're using it, they really feel like it's working. But there's even so much more we can do with this technology. We're really just giving them the first window into that real-world environment. But now you start to think about the technological advancements being made with things like 6DoF and a tether-free environment: being able to really freely walk around, at some point maybe incorporating other users into that environment so that you can interact with them in a more meaningful way, getting the real-time feedback, seeing the consequences. And thinking about having some sort of better intelligence to trigger events based on your decision-making, so that the environment itself changes and we're not just stuck with whatever film we captured. Whatever we're providing now is certainly useful, but you start to see where this technology can go. You've even seen it in some of the more cutting-edge companies doing things in the volumetric space, in the haptic feedback space, and analytically in the biometric recording space, eye tracking, all of that. I just can't wait for that world to come, because I know how much more we're going to get out of this technology when it comes to training.

[00:47:00.715] Kent Bye: Well, I think in large part, STRIVR and the work that you're doing at Walmart probably got Facebook and Oculus to actually get their enterprise offerings together, after 17,000 Oculus Go headsets were sold and the big press release about so many people being trained in VR. Clearly, there are a lot of compelling use cases, but that was the Go, and now we have the Quest that just came out. So what is STRIVR going to be able to do with the Quest now?

[00:47:26.877] Michael Casale: Yeah, so like I mentioned, having this untetheredness was the key. A lot of what we are dealing with is probably what a lot of the world is dealing with when it comes to adoption. Here, we're talking to other people who are heavily steeped in the technology and complete believers. But these aren't the people you need to convince. It's everybody else in the world, who sees: yeah, maybe I can play some games on this, but it's kind of a pain. I've got to hook it up to a computer, now I have to have this laptop. We're done with that, right? We've evolved. Even having that standalone untetheredness was a huge step forward for us in being able to implement and have that adoption happen. And now they're able to see the value of it, because they're actually putting the headsets on. Before, there were things that seemed pretty trivial but were prohibitive. Trivial maybe to you and me: well, that's OK, just go plug in the headset. That becomes a real pain for a lot of these guys, and figuring out ways to build that infrastructure for so many employees became a bit prohibitive for reaching the number of people they wanted to reach. But once there was an untethered device in the Go, that was a big step forward for us to access so many more people than we would have otherwise, and again, to learn what's useful and what's not from that mass of data. And now with 6DoF and the Quest, being able to have them walk around meaningfully, you start to think about things like creating a volumetric space where you can interact a little more meaningfully with the environment. I think it's already so incredibly useful for them to have an environment with even just that one point of view and be able to look around and see what's happening.
But now you can actually go over, lean over, and see. That level of presence and engagement you get from being able to really feel like you have some agency in that environment, it's huge, not only for general engagement and people being more motivated to interact with the learning environment, but for actually having that learning be more robust. You're now starting to elicit more emotional responses, which we know are going to be really good for learning as well. So it's really bringing on this whole host of new experiences for us that we think are going to be much more powerful for learning. And then, of course, like I mentioned, you start to bring in things like volumetric capture of the environment, being able to swap out objects, being able to manipulate objects in some, you know, even a little bit of a meaningful way. I think it's going to be really relevant for a lot of the training that's going to happen going forward.

[00:49:34.744] Kent Bye: You know, I just did an interview with Rori DuBoff from Accenture, and Accenture just released a report talking about all the big wins of virtual reality, especially in the realm of training, and they have their own way of quantifying that. But at STRIVR you have access to all sorts of additional information and analytics. So how do you tell the story of the ways in which VR training is effective, and how do you quantify that?

[00:50:00.197] Michael Casale: Yeah, it's a great question. So that falls squarely within my domain of, you know, when we say VR training works, what do we mean? And I think you're alluding to this: it's highly multidimensional, right? Minimally, the employees themselves, the quarterbacks, do they feel like they're getting a better experience? That's really big, right? Even if they were getting a similar experience, the fact that they think they're getting a better experience is good just for employee engagement. So right off the bat, we have a lot of wins. But of course, we're after something more meaningful, which is actual real-world impact. And even the other efficiencies that we get, like reducing training time, et cetera, those are really meaningful to organizations: creating more efficient training, creating more on-demand training so that people can get enough, right? But then we start to look at real learning effects, and we're starting to see, we've already seen a couple instances, I obviously can't talk specifically about some of the organizations, but we've been able to do direct comparisons with a couple big companies and ask them, you know, from their traditional training, what do you know, versus the VR training? And we've shown differences, for certain concepts being acquired better than they would in the real world. And actually, even more interesting to me is this dissociation between what people know versus what people know how to do, right? So there's this difference in the cognitive science world of know-what versus know-how. And what we're really after is know-how. Oftentimes when these people get trained, they're getting the know-what, right? They're getting the semantic facts, the abstract text-based facts, where it's really difficult to know if they can actually apply them in the real world. What we're seeing is the know-how.
And we've actually done some studies to directly compare. Of course we do this across hundreds of people, but take one person: they're learning the know-what. So tell us, you know, what should you do in this situation? In a multiple-choice test format, they nailed it, right? Nine out of ten times they're getting it right. Put them in VR, now ask them to do that thing without kind of prompting them, and see what they actually do. Way less than 50% of them know what to do. So you see these huge dissociations in how much people can actually apply the information. And that's really what we're after, the application of that information. And again, it tells us: do you really know what to do? Are you going to be in that real-world environment and be able to act accordingly? And the answer largely is no, even though the paper-and-pencil test tells us differently. And I think that's just a huge dissociation for us. So anyway, like I said, we're on a path where we always want to know what the real-world impact is. More and more, these companies are becoming invested in understanding: that's great, these things happen in VR, you're able to show better learning, but how is this actually playing out in the real world? And so that's the next step for us, to be able to make that connection to those real-world KPIs that mean something for businesses, and for quarterbacks. Obviously, it's the same thing for athletes: what's your on-the-floor performance and how is that being affected? Which is not trivial, right? Because you want to parcel out the effect of training versus all the other things that affect that number. But we're getting there. And I think we've seen, like I said, a lot of companies that work with us being really progressive about also wanting to understand that, because that's a direct line to value, right?
Then we know that we can actually invest even more in training than we do now because we're showing those real-world impacts.

[00:52:58.562] Kent Bye: Yeah, and I was talking to Liana from Trip, and she was talking about this next iteration of some of these headsets from MindMaze or Neurable, where you have dry sensors to be able to get EEG data, or these different sensors on the forehead and galvanic skin response, being able to do respiratory, breathing, all sorts of new biophysical, biometric data that you're going to be able to integrate. Eye tracking data is another huge one. So I imagine that there's various trade-offs: do you want to use the Oculus Go to get the scale of Walmart? Do you want to have something spatial like the Oculus Quest? But if you have these elite quarterbacks with a PC, the best of the best, then I could imagine they're going to want as much information as you can get, as long as it's not too hard to get on all these sensors and whatnot. It still has to be a good user experience. But I feel like the next iteration of headsets is going to have that. And so as a computational neuroscientist, what are you excited to get your hands on, in terms of what type of biophysical and biometric data to start integrating to get real-time feedback on the learning process?

[00:54:00.075] Michael Casale: Well, certainly the eye tracking, and that's the thing that we know is coming quickly. Obviously a lot of companies have now since been bought, companies like SMI and others who figured out ways to incorporate meaningful eye tracking in the headsets. And, you know, Tobii and others are still out there just making the equipment. We're already seeing product announcements from Samsung and others that they're going to have these native in the headset, which is not surprising. That's gonna open things up, not just for foveated rendering, which is what everybody thinks about, but for us it's an amazing data point, right? To know exactly the pattern of gaze, because a lot of the research literature has gotten us to a point where we know a lot about what gaze can tell us about behavior, right? It's kind of the window to the mind, right? And so knowing patterns of gaze from the eyes and not just the head is a lot more meaningful, and I think we're gonna be able to do a lot more in terms of inferring people's mental states, to get things like vigilance and preparedness, engagement, et cetera, that we can't really get right now. We do a pretty good job with the head, and for a lot of things the head is a good proxy for gaze, but I'm really excited to see what gaze can bring us. And I think this mostly applies to the social domain. So we have a lot of training that doesn't involve social interactions, but more and more we're seeing our training evolve towards social interactivity. And I think that is key, right? So being able to have a meaningful conversation. If you're in sales, for example, or you're doing some sort of managerial training where you're having to deliver a difficult conversation to an employee, or you're a doctor at a children's hospital and you gotta tell the parents, you know, it's not gonna go so well.
How you're able to do that, how comfortable you are with doing that, how effective that social interaction's gonna be, is largely dictated by gaze. And again, there's a wealth of research literature in this space that tells us certain gaze patterns can really help us understand what's happening inside somebody's mind. So I'm really excited for that data. And again, even just preparedness: knowing that they're experts and know where to look, I think that's going to be really big. You mentioned Neurable. We've talked to those guys a lot. I really like those guys. Ramses and those guys have done something wonderful, which is to be able to build EEG into a headset. And we know we can do a lot with those EEG signals, especially the kind of gross ones that tell us about novelty, you know, when things are actually eliciting a response: that we see something that's a meaningful object in this environment, we are familiar with it, this is something that we're expert with versus not expert with. Is it new? Is it new information? And again, those signals, and the signal processing algorithms that have come a long way in processing that EEG information, have been really insightful in being able to tell us something meaningful about how people are taking in information. So for two people going through an experience, if you're just looking at how they respond to something, that doesn't tell you how comfortable they are with the environment necessarily. And I think through those signals, we're able to get that extra layer of insight. Biophysical signals like heart rate, heart rate variability, galvanic skin response, same things. And you mentioned things about usability and the ergonomics of that. I think that's come a long way. I've tried the Neurable device and it's really good. It's really easy to use. It's not very invasive at all.
And, you know, obviously all the signal processing software they have to go along with it, I've seen it, it's really easy to use. We obviously have yet to test it in our training environment, so that's the moment of truth for us, to see how well it does in predicting behavior in those environments. But I have a lot of faith that that technology will get out there, because it's so useful and there are so many really smart people working on it. So I'm really looking forward to a lot of that. Now the question is, well, how do we actually employ it with our folks? And I think it's selectively, right? We're always going to have some more progressive individuals who are willing to try and experiment. And that can give us a good head start on understanding when to use that data, because it's not always going to be useful, in what context, and then how it's going to be used with a bunch of the other data that we can get, again, to make these predictions about behavior. Having access to so many people, we feel like we're in a really good spot to be able to make those assessments and inferences. And it's a matter of finding the right use case with the right customer, and we've already found that to be the case with some of the customers so far: folks who are willing to try and experiment with some of these technologies because they always want to be on the cutting edge, if it's helping them, right? Not just for the sake of using the technology, but because it's actually providing something useful and insightful as far as human behavior is concerned.

[00:58:05.549] Kent Bye: Well, you mentioned that there's a number of companies you're working with that you can't talk about, and, you know, obviously you're working with a number of different football teams, NCAA teams, that have talked about it. I think that when STRIVR released that press release about how many Oculus Gos were being used by Walmart, in collaboration with that announcement, I think that did a lot for the industry. It made a lot of people wake up in terms of the potential. And so, in that spirit, are there any other customers that have either announced or disclosed that they're working with you?

[00:58:37.165] Michael Casale: Yeah, so we have working relationships with a few of them. The big ones, I'm unable to talk about. So I think in due time, they're going to have their own press releases. Fidelity Financial is one company that we've made a lot of headway with, and they've gone on the record for us saying that we've actually helped with a lot of their call center training. So this is the financial services area, where there's a lot of transactional things happening with customers, but where they are finding a lot of benefit is in kind of re-humanizing that transaction, right? Oftentimes their customers are making not-great decisions for themselves. And so they want their employees to be able to have a better connection, right? A better understanding of who those people are, their context and their situations, to be able to help their customers make better decisions for their financial health. And that's something that's obviously really critical in our society now, people being able to have good decision making when it comes to their financial health, and being able to recommend things. So a lot of this is just about, like I said, I think the general term I would use is re-humanizing, right? Just putting the human face to that transaction to understand, is this the right thing for that person? Instead of just mindlessly going through the machinations of, like, okay, the if-then statements that they're going through with these transactions, maybe they need to take a step back and understand, hey, it doesn't seem like this person is in a good situation to be making this kind of withdrawal, or whatever it is. Consider their context, ask questions meaningfully. That's a critical part of it as well, so you can have a more effective conversation with a customer. And they feel like they've had a lot better customer interactions because of that.
They have some data that they've collected to back that up, and we continue to do work with them. So that's one company that has really glommed onto this technology and that we're happy to be working with. And a really progressive group as well. I mean, they're really forward thinking. They have a really cool innovations group that we're able to discuss a lot of these new ideas with and figure out how we can incorporate that into their training. So it's been really great to work with those guys. Walmart, obviously, and then even the different areas of Walmart that we've expanded into, you know, where training basically happens all over the place, where we can actually fit in and provide a better training experience. That's been really exciting. And then there's three or four other big ones that I'm sure will get announced in due time, but that I really can't talk about. And then there's a host of others, right? So, again, right now we're being pretty conservative about who we're able to talk about.

[01:01:06.176] Kent Bye: Yeah, that's why I asked, just because I know that Walmart being the one that was announced, I think, is a big deal. And I look forward to the press releases once they come out, because I think it's important to know how this is getting out there. It's just like when Facebook bought Oculus for $2 billion; that was a signal to the rest of the industry. And I feel like with the early evidence that's coming back from Accenture and the feedback that you're getting in terms of the efficacy of this, it feels like it's on the trajectory of really exploding over the next couple of years.

[01:01:33.597] Michael Casale: Yeah, definitely. And I think once companies see that companies like Walmart are actually adopting this, it really makes them take notice. And then to your point, all the other good press we've gotten, kind of signaling that there's something here in the enterprise space, is good for everyone, you know, a rising tide. So we're all going to benefit. We welcome everybody who's engaged and interested in this space. Obviously, Oculus is now out with their own enterprise platform, which is great. Again, it's signaling that there's a real opportunity here. And we know, just from working with these companies, that we can get, from a scientific perspective, at these best training practices, but also all the other efficiencies that come with the headsets and being able to train through the headset. So there's a host of benefits, getting back to the question of how we measure effectiveness, or whether VR training is working. It's all those ways. And different companies are going to find different benefits from it. But the fact that we're able to provide many benefits, and that companies are experiencing all of them or even just a subset of them, has been really powerful. And, you know, if anybody's interested in what companies we've worked with and what kind of data we've been able to collect and proof points, there's a bunch of case studies that we've hosted on our website. So feel free to peruse those as well.

[01:02:42.493] Kent Bye: So for you, what are some of either the biggest open questions you're trying to answer or the open problems you're trying to solve?

[01:02:49.437] Michael Casale: I think technology. I mean, that's something that we don't do, right? We don't make the headsets. We don't really work on the technological development of, not just the headsets, but a lot of the peripheral technologies, content capture, content generation. We have some great developers and engineers at our company, and we can't expand that team fast enough, so it's cool to see the technology developing, but we're, to some extent, beholden to where that technology goes, right? And, as you kind of mentioned just a second ago, it is this weird, bi-directional feedback. It's like, okay, great, STRIVR found a training use case, let's go invest more money into it, but how much should we invest? And it's like, well, how much are they gonna be able to use it, and is there a real training case? I think we're slowly helping each other figure out how to make that technology grow. The fact that Oculus has an enterprise platform, obviously we had something to do with that to some extent, in signaling that there's a real use case here. You're seeing technology develop based on the market opportunities that we're able to uncover and others are able to uncover, but we have to wait for them to really make the big investments: the companies with all that R&D money, and the geniuses who create these technologies as well. So I think things like volumetric capture, and more robust networking of people in an environment, so that it really feels like you're there and you're real and you can have really robust social interactions. I think all of these things are really going to be critical: how they develop, how we're able to use them, how quickly they can scale, how cheap they are.
Those are all open questions that we don't totally control, but we're obviously influencing in other ways by opening up opportunities in those markets. So it's exciting for us to think about where all the possible technological developments can go, meeting the people who are kind of on the bleeding edge of all that stuff, but hoping that this stuff can scale so that we can actually have that native in the headset, we can develop for it easily, et cetera. I think those are some just really big open questions. And they're not prohibitive for us right now, but they're things we're always kind of keeping on top of and trying to devise strategies around. So that's stimulating to really kind of think about where the possibilities are. But obviously, where they go is largely going to be determined by what we can show as kind of market value, but also will determine what we can do as a company as far as giving out really meaningful training experiences.

[01:04:58.949] Kent Bye: Great. And finally, what do you think the ultimate potential of immersive technologies is, and what might they be able to enable?

[01:05:09.053] Michael Casale: So, just from my perspective, and I think we've talked about this before, it's being able to understand people in a way that you can't understand them now. As a behavioral scientist, it's just always fascinating to me to think about, not just the measurements. We've been talking a lot about measurements and the insights that those can bring, and that's great. I think those technologies will develop independent of VR. But then it's really critical that you measure things in a meaningful context, people behaving in meaningful ways. And I think that's where these virtual environments are really impactful on the behavioral research side. Again, a dozen years ago I was sitting there thinking, man, what if I had the ability to take what I'm doing now, which was this abstract experiment with abstract stimuli on a 2D screen, and put people in a real environment, but one that I can control, so I can really understand how they're going to react to different social interactions, how they're going to react to different emotional stressors in their environment, and really get a better understanding of how people work, right? And that's what I've always been after: really understanding those fundamental things about human behavior, especially learning behavior, so that you can optimize for that, right? You can actually help facilitate learning. We do this in the classroom all the time. We give people the same learning, we just throw it out there to the masses in a classroom of 30 kids all sitting at desks, and we assume that they're all supposed to take it in the same way. Well, we know that's not happening, but we don't know what to do about it, because we don't have the power to really understand what's going on in their minds moment to moment. But slowly but surely, we are.
We're able to get that information through these kind of sensors that can capture a lot of meaningful information easily. But then it's not enough just to do that. You also have to have the right environment for them to behave and to really understand how it is that they're learning, how it is that they're taking information and be able to optimize for that. And so I think for, just for me as a behavioral researcher, the more realistic these environments get, the more you can interact like you would in the real world, but also control that environment, right? Now we're talking like all the science fiction movies that we've seen in terms of creating these kind of matrix-like environments, not for dystopic reasons, but for really reasons of understanding, you know, how people work and be able to kind of understand how they react to different stimuli. For me, just to kind of be able to facilitate and improve and augment learning behaviors, I think that's a pretty cool proposition. So I'm looking forward to that. And of course, we're already seeing that kind of marching forward every year with all these new technological developments. So really excited for that.

[01:07:28.960] Kent Bye: Cool. Is there anything else that's left unsaid that you'd like to say to the immersive community?

[01:07:34.695] Michael Casale: Yeah, I mean, obviously keep building. Like I mentioned, this is kind of a great time. We're seeing, I think, the acceleration of the technological curve, of course, with the headsets. And it's really the people who are developing these amazing one-off experiences that are setting the North Star, right? So you see someone in a basement creating these amazing experiences with this technology. It's not meaningless, right, that it's just a one-off experience, because it really shows you the world of the possible. And it sets the benchmark for where everybody should go. So it's cool to see this community get together and really be able to exchange ideas, because this is the vanguard, right? This is where we're going to see all the new opportunities and new ideas in this space come to life. And then for us to be able to meaningfully take advantage of that and provide it at scale, it's an extremely complementary system, right? To have people on the bleeding edge of this stuff really showing us what's possible, and then for people like me to understand how to make that practical and useful for the world, and companies like us. And then of course, companies that are doing both, I think, is a really cool thing. So, you know, keep meeting, keep building, and know that there's other people out there watching and looking at this stuff. And it's just a cool time to be in this space.

[01:08:45.509] Kent Bye: Awesome. Great. Well, thank you so much for sitting down with me to go into all the details of what's happening with STRIVR. And yeah, just thank you for joining me today on the podcast. So thank you.

[01:08:53.842] Michael Casale: Yeah, always great to talk to you. Really fun, smart, intelligent, provocative conversation. So appreciate the time.

[01:08:59.588] Kent Bye: So that was Michael Casale. He's the chief science officer at STRIVR. So I have a number of different takeaways about this interview. First of all, I always love catching up with Michael, just because, like I said at the top of the podcast, he's really on the bleeding edge of working with a whole wide range of different companies. He talks about Fidelity Financial, as well as Walmart, and all these NFL and NCAA football teams like the Clemson Tigers, lots of different elite sports athletes who are finding a lot of value in this. Essentially, they're capturing 360 videos and playing them back in VR to be able to have live interactive coaching sessions, but they're also working with a whole range of Walmart employees. And to me, it was a bit of a surprise to see the connection between elite quarterbacks and Walmart employees. But apparently it's all about being able to take in all of this unconscious information. If you're a football player, you're looking at what the context is, what the down is, where you're at on the field, what's happening with your players, what the weather conditions are. You have to fuse together all these different variables and have a certain amount of situational awareness of what the defense is doing, to know where to look. And all of that is also happening on the level of training people to be managers at Walmart, where you have to be aware of all these different conditions and all this information that's coming in, to be able to make hundreds of different decisions per hour. And that level of situational awareness is really hard to put into explicit rules or abstract out into pen-and-paper tests; it's a lot easier to actually learn on the job.
And so what Michael Casale is saying is that a lot of what VR is able to do is actually put you into a context, come a lot closer to simulating the real situation, and have you go through these social interactions with the emotional intensity of being embedded and having this sense of embodiment in these different contexts. And there's a lot of difference between knowing what and knowing how. You may know what to do theoretically and abstractly, taking a multiple-choice test, and be able to pick the right answer. But it's a lot different from actually knowing how, which is that you're embedded within that context, you know what to do without having to be prompted, and you're able to identify the context and make the right decision. At the end of this conversation, Michael was saying how there's this kind of dialectic between the pioneers who are pushing the edge of what's possible technologically and a company like STRIVR. There's a lot of research that they're drawing upon to fuse together, but they're operating at scale. So I think in some ways they have to take a little bit more of a conservative approach, where there may be some of their customers that are willing to do the duct-tape prototype and really be on the bleeding edge of integrating all the new technology: 
what's coming with eye tracking data and EEG and all this biometric information, so that you're able to get a sense of what's happening with people's comfort, whether or not they're stressed out, whether they're looking at the right places, and actually get all this deeper insight into people's levels of cognition with EEG. This is all in the near future, but right now they're working with the Oculus Go, the Oculus Rift, and the HTC Vive, and at some point they'll start to integrate the Oculus Quest to be able to go tetherless. They're operating at such a big scale that it takes them a little bit longer to integrate some of these different aspects. But at the same time, they're really on the bleeding edge of integrating something like the Oculus Go into the enterprise training that was happening at Walmart. Those 17,000 Oculus Gos sent a pretty strong signal to Oculus to get their act together, to start to actually take the enterprise market seriously, and to spin up their entire enterprise offerings. They made their first announcement at F8 2019, and I expect to have a lot more information about some of their first enterprise offerings and what they're going to be providing to customers at Oculus Connect 6, which is coming up in just a couple of weeks at this point. But there's a larger signal that there are actually companies that are willing to embrace and take on virtual reality, even if it's something like a three-degrees-of-freedom Oculus Go, which is essentially a media delivery device, because of the way that they're able to capture 360 video and actually put people into these different contexts and training scenarios.
I saw an amazing demo called Avenue S by Courtney Harding, who worked in collaboration with Accenture, also showing the potential and power of using something like the Oculus Go to do these different training scenarios. She was using a lot more artificial intelligence and working with Kevin Cornish, who has been on the forefront of experimenting with AI and how it integrates into the story, and it was a really amazing, moving, and powerful demo that I saw at South by Southwest this year. And, you know, Accenture is also on the bleeding edge of trying to see what's happening in this training space, and I have an interview with Rori DuBoff. I'll probably end up at some point doing a whole exploration of what's happening with training within virtual reality, just because I have a number of different conversations and interviews on it. That's definitely something that I see becoming a strong way in which virtual reality starts to have huge wins and provide some pretty significant value: reducing training time and just making training that's a lot more effective. It sounds like, as Michael was saying, they're getting a lot more data back from the first rounds of these proof of concepts and prototypes at these different companies, and they're able to show specific ways that VR allows people to understand and grok specific concepts a lot better than previous methods, and to actually compare and contrast that. It sounds like there's a lot of really positive results from that, that they're going to be doubling down on and expanding a lot of what is already there, and I'm sure also expanding out into lots of other companies as well.
I think a theme that came up a number of times throughout this conversation was Michael's previous frustrations with working in academia, where research was sandboxed into small use cases at different levels of abstraction. With VR, it's now possible to close that gap: something that would have been studied in an artificial context within behavioral research can now translate people's actual behavior in VR into how they behave in real life. That was one of the things they were looking at, finding experts in different domains, putting them into a virtual environment, and then having a whole range of new ways to quantify different things, like gaze and the time it takes to make these different decisions. There are a lot of ways in which spatial computing technologies can quantify aspects of behavior that couldn't be quantified before, and potentially draw correlations from that. For example, with eye gaze, you can see what someone is looking at and how fast they're able to assess a certain situation. So there was a lot of talk about where your eyes are looking, and about eventually fusing that eye tracking data into their training process to assess whether you know what to look at. Experts have different patterns of gaze. If you're put into a certain situation with different levels of stressors or high cognitive load, are you still able to make the right decision? So they're able to put people into these contrived situations within VR, but situations that more closely mimic what might happen in the real world.
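To make the gaze idea concrete, here's a minimal sketch of one metric that kind of analysis might use: time-to-first-fixation on a region of interest. The sample data, region names, and the quarterback scenario are all invented for illustration; this is not STRIVR's actual pipeline.

```python
# Hypothetical sketch: comparing gaze behavior between an expert and a novice.
# Gaze samples are (timestamp_seconds, region) pairs; the region names are invented.

def time_to_first_fixation(samples, target):
    """Seconds from trial start until the target region is first fixated."""
    start = samples[0][0]
    for t, region in samples:
        if region == target:
            return t - start
    return None  # the target region was never fixated

# An expert finds the relevant region quickly; a novice scans irrelevant regions first.
expert = [(0.0, "scoreboard"), (0.2, "defense"), (0.5, "open_receiver")]
novice = [(0.0, "scoreboard"), (0.4, "crowd"), (1.3, "defense"), (2.1, "open_receiver")]

print(time_to_first_fixation(expert, "open_receiver"))  # 0.5
print(time_to_first_fixation(novice, "open_receiver"))  # 2.1
```

A real system would aggregate metrics like this over many trials and regions, but even this single number captures the "how fast can you assess the situation" idea.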
They're also able to start creating more of this on-demand type of training, and there seems to be demand from people who want to advance their careers, to get the training they need in order to have some sort of trajectory. It's interesting to see how companies like Walmart are starting to lay this out with these different training programs and modules: you get the VR training, and then rather than doing a bunch of mental abstractions where you pass a multiple choice test, get put onto the job, and still have to do a lot of on-the-job training, they're finding that they may be able to deliver that same level of on-the-job training through virtual reality training. The thing I also found interesting was seeing how humans are not being completely eliminated from these situations. In the case of coaches and elite athletes, coaches get deeper insight into the decision-making process of these quarterbacks and can have conversations they couldn't have before. And it's the same within the context of corporate training at a place like Walmart: you can put people into an actual context, give them a direct experience, and then start to unpack it, having different conversations about the decision-making process they went through. So it's a bit more of an interactive process, but something that's a lot more visceral and engaging than just showing a video or talking about things in the abstract: actually give them a direct experience, put them in a position where they have to make a decision, see what they do, and then unpack it from there.
So humans are still in the loop, but virtual reality experiences provide a context and a direct experience that can then be the start of a conversation and an opportunity to understand the importance of the deeper high-level concepts they're talking about. It sounds like the trajectory is going to be integrating more and more biophysical factors. In some situations, like at Walmart, they may just be using the Oculus Go, and then eventually maybe the Oculus Quest, so they're going to have to figure out different levels of volumetric capture, maybe more CGI creation, going beyond just 360 videos. They'll also be integrating different aspects of the eye tracking data and heart rate variability, fusing all these biophysical sensors together to figure out what factors determine whether someone feels prepared, whether they're familiar with a scene, or whether there's a lot of novelty in it. There are lots of different ways you could start to integrate these biophysical markers and feed them back into the training, at some point making the training even more specific than it is now and doing highly adaptive training. There's this whole concept of spaced repetition: spacing things out and having a lot of variability within the training. Maybe the biophysical markers can show whether or not you're responding to that novelty, providing immediate feedback as well as enough variation of a concept. They could also look at the eye tracking to determine whether people are comfortable within a conversation.
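The spaced repetition idea mentioned above has a simple core: reviews you pass get pushed further out, misses come back sooner. Here's a minimal Leitner-style sketch under a doubling/reset rule I've assumed for illustration; it's not STRIVR's actual training algorithm.

```python
# Minimal sketch of spaced repetition scheduling (Leitner-style).
# Assumed rule: a passed review doubles the interval; a miss resets it
# to one day. Illustrative only.

def next_interval(current_days, passed):
    """Return the number of days until the next review of a training module."""
    if passed:
        return max(1, current_days * 2)  # correct: space the next review out further
    return 1                             # missed: review again tomorrow

interval = 1
for result in [True, True, True, False, True]:
    interval = next_interval(interval, result)
    print(interval)  # 2, 4, 8, 1, 2
```

Adaptive VR training could go further by letting signals like heart rate variability or gaze comfort, not just pass/fail, drive the interval, but the scheduling skeleton would look much the same.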
It sounds like there's a lot of information you can get there; it's almost like the eyes are a window into the mind. You can discern all sorts of aspects of people's cognitive abilities and cognitive processing just by tracking their eyes and correlating that with their behavior. So that's going to be an interesting aspect as they move forward. And there are still quite a lot of open questions about the nature of consciousness. Neuroscience has mechanistic ways of describing what's happening in the brain, but there's a certain point where that stops. Talking to Joel Zylberberg, he described these other psychological or cognitive aspects, like having abstract concepts, and how those abstract concepts provide the foundation of the predictive coding model of neuroscience, where you have an expectation of all of your private experience somehow encapsulated in these deeper structures of how you understand the world. Michael was talking about how zero-shot learning within AI means you see one instance and you're able to understand it, which is extremely, extremely difficult for machines, but humans are able to do this so-called zero-shot learning. That zero-shot learning, though, is based upon an entire lifetime of exposure to data. We have pre-existing structures for how we make sense of reality, whether for common-sense reasoning or the other ways we understand the world: all of our mental models. That underlying structure is something that doesn't exist within AI yet.
Then there's understanding how to have that feedback mechanism within AI, which is like backpropagation or the credit assignment problem: if there's an error and you're trying to correct for it, how do you redistribute the weights in a new way that leads you to the right answer? There's a kind of mysterious aspect to how human consciousness comes up with these different mental constructs and how that feeds back into the neural network level; that's the mystery of tracing down thoughts and unpacking them. It seems like linguistics, language, and words are a key part of this: any time we're able to reduce things down into language, into words, that provides some sort of interface with the rest of the structure of how we make sense of and encode information and knowledge in our brains. So language could be part of how those primary category schemas get formed, but the interface between language, all those concepts, and how we're perceiving reality is still a pretty significant open question, even at the computational neuroscience level. So is how to feed that back into the process of learning: how we update these different category schemas when we make errors. We do seem to have this interface between our pre-existing concepts and new experience, where anytime we're experiencing something in the world, we're comparing it with all of our prior experience and concepts, but how that actually plays out at the neural network level is still a bit of a mystery. From Michael's perspective, if that is figured out at some point, then they're going to be able to really super-optimize the process of learning.
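The credit assignment idea mentioned above can be shown in miniature: backpropagation distributes an output error back onto the weights in proportion to how much each contributed. Here's a textbook-style toy with a single linear neuron learning y = 2x; the setup and numbers are invented for illustration.

```python
# Toy illustration of credit assignment via gradient descent:
# a single linear neuron learns y = 2x from (input, target) pairs.

def train(samples, lr=0.1, epochs=50):
    w = 0.0  # the one weight that gets "assigned credit" for the error
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x
            error = pred - target
            # Gradient of squared error with respect to w is error * x:
            # inputs that contributed more to the error get a larger update.
            w -= lr * error * x
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))  # converges to 2.0
```

With one weight, credit assignment is trivial; the open question the conversation gestures at is how anything like this works across billions of neurons, and how high-level concepts and language feed back into that process.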
It could be that by working at the scale that they are, getting so much biometric data from so many different people, they'll be able to find some of these deeper patterns. Maybe they'll find interesting insights into the nature of learning and consciousness, discern different aspects of what's happening in the brain from the EEG, and contribute back to the wider scientific community in some way. I think that's the challenge: a company like STRIVR ends up working with a lot of these companies and then has access to huge troves of data, a goldmine of information. How do you integrate that back into academia? There's always this tension: once you get to that point as a company, you have your own bottom line of what you need to do to advance as a business, and then there's the obligation to contribute all this learning back to science, or maybe to collaborate with researchers so they can have some access to that data and get deeper insights into the nature of learning and the nature of consciousness. That's just a balance they have to strike. But from Michael's perspective, he was getting a little tired of not having concrete ways within academia to have more impact and to directly measure some of these learning behaviors. With virtual reality, there's a sort of democratization of behavioral science into many different domains. You can start to replicate so many aspects of these different contexts and experiences, and then maybe make it easier to take things that would be contrived or abstract within an academic research context and actually make them feel real.
That's what I think is so fascinating about what STRIVR is doing: they're taking all this insight from the research, but they're actually applying it, seeing what works and what doesn't, and trying to find the right combination of how to best optimize this process of learning. And they're doing it while collaborating with everyone from some of the most elite athletes in the world to some of the biggest companies in the world, who are working at such a huge scale to train all their different employees, and who are finding huge impacts from what virtual reality can do when it comes to training. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast, and for joining me on this series of the last 13 interviews, a deep dive into the neuroscience of VR. I've been traveling to VR conferences for the last five years, and I've got hundreds of unpublished interviews, so I can dive into these different series. This was a particularly interesting series for me because I went to this future of neuroscience and VR workshop with lots of neuroscientists, but I was also able to pull together lots of other conversations that I've had over the last three years, conversations that may have gotten lost in the context of how I'd been publishing things: I'd go to a conference, publish a number of interviews, and then go to another conference. Moving to this batch approach of releasing these conversations by theme allows me to go back through the backlog and put all these conversations together, and you start to see the different connections and themes among all these different conversations over time.
And I wouldn't be able to do that, or any of this, without the support of you, my listeners, through Patreon. So if you've enjoyed these different types of series and you want to see more of this documenting of the evolution of spatial computing, this real-time oral history, not only for you right now to understand what's happening, but also for future generations to look back and learn about the evolution of these spatial computing mediums, then please do consider becoming a supporting member of the Patreon. Just five dollars a month is a great amount to give, and if you can give more, that's great. It allows me to continue to bring you and the rest of the VR community this podcast for free. So become a member and help to sustain and grow this podcast. You can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
