HP’s Reverb G2 Omnicept Edition is a special version of their VR headset targeted at enterprises. It includes Tobii eye tracking that can capture real-time pupil dilation, eye saccades, and visual attention via eye gaze vectors, a camera for facial tracking, as well as a photoplethysmography (PPG) sensor that can measure heart rate and pulse rate variability. Combining these physiological measurements enables certain human inferences, including things like cognitive load, which can be correlated to the situational and contextual dimensions of a training scenario or used to measure real-time responses to cognitive effort.
HP lent me a Reverb G2 Omnicept Edition to try out, along with access to OvationVR’s public speaking application and a demo of Mimbus’ Virtual Indus electrical training application, both of which integrate the Omnicept’s cognitive load features. In the absence of any calibration or real-time graphing features, I found it hard to correlate the calculated cognitive load numbers from my VR sessions with my personal experience. Virtual Indus only gave a minimum, maximum, and average cognitive load, and I was able to get my average cognitive load down on the second try of the demo. I wasn’t able to figure out how to get a more granular graph of cognitive load over the course of the exercise within the VR app (although it looked theoretically possible within Mimbus’ Vulcan website). I was able to look at a graph of my cognitive load after giving an impromptu speech in OvationVR, but it showed only slight fluctuations over the course of the talk, with a peak value coming at what seemed like a fairly arbitrary moment.
The challenge with capturing and using this type of physiological data is that it can be really hard for users to see deeper patterns or draw immediate conclusions from these new streams of data, especially in the absence of any real-time biofeedback to help calibrate and orient to changes in physiology that may or may not have corresponding changes in your direct experience. I have found this to be a recurring issue whenever I test VR experiences that have biofeedback integrated into them. Verifying that the system is accurately calibrated and provides data that has utility relative to a specific task remains the biggest challenge and open question.
It would be nice if HP developed some apps to help users do their own QA testing on each of the sensors, with real-time graphs to help with calibration and orientation. Having some canonical reference implementations could also help more developers adopt these technologies, since the success of enterprise platforms like this has a lot to do with how many Independent Software Vendors (ISVs) implement these sensors into their applications.
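To give a flavor of the kind of QA tooling I have in mind, here is a minimal sketch in Python. It assumes a hypothetical read_sample() source standing in for whatever a sensor actually reports (this is not the HP SDK), and just prints a rolling min/max/average with a crude text sparkline so a user could eyeball whether a signal is responding at all:

```python
# Minimal sketch of a sensor QA tool: read_sample() is a placeholder for a
# real sensor read (not the actual HP Omnicept SDK). It prints a rolling
# summary plus a crude text sparkline once per second.
import random
import time
from collections import deque

BARS = " .:-=+*#%@"

def read_sample():
    # Placeholder for a real sensor read; returns a value in [0, 1].
    return random.random()

def sparkline(values):
    return "".join(BARS[min(int(v * (len(BARS) - 1)), len(BARS) - 1)] for v in values)

def run(seconds=30, window=20):
    history = deque(maxlen=window)
    for _ in range(seconds):
        v = read_sample()
        history.append(v)
        print(f"now={v:.2f} min={min(history):.2f} max={max(history):.2f} "
              f"avg={sum(history)/len(history):.2f} |{sparkline(history)}|")
        time.sleep(1.0)

if __name__ == "__main__":
    run()
```

Even something this simple, wired to the real sensor streams, would make it much easier to sanity-check fit and calibration before trusting the derived numbers inside an application.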
I also had a chance to talk with Scott Rawlings, manager of the HP Omnicept platform; Henry Wang, product manager for the Omnicept Edition; and Erika Siegel, an experimental psychologist, research scientist, and subject matter expert on human inferences. We talk about the current physiological sensors and the types of human inferences they enable, and how these could be used in different industry verticals including training, education, simulation, wellness, and architecture, engineering, and construction.
Overall, I get the sense that the Omnicept is still in an early, nascent phase of development where ISV developers are still building up the larger XR design and business infrastructure around training use cases within specific industry verticals. In addition to OvationVR and Mimbus’ Virtual Indus, Theia Interactive’s Claria product design tool was mentioned as another application shipping support for the Omnicept.
The Reverb G2 is a Windows Mixed Reality headset that still has some quirky workflows. The inside-out tracking requires less hardware to set up, but there’s still some added complexity with its reliance on the Windows Mixed Reality Portal and how that integrates with Steam. I personally found it easier to get the Omnicept working if I launched from Steam first rather than from the Mixed Reality Portal, but this is more a reflection of Windows Mixed Reality devices in general having technical quirks that may depend on your computer. There were times when the G2’s room tracking wasn’t as solid as the external lighthouses with my Index, but for the enterprise use cases I was testing, it was definitely sufficient overall.
Overall, the HP Reverb G2 Omnicept Edition provides access to a lot of physiological data that will eventually also be coming to the consumer market. There are still many design challenges in translating the potential of these biometric sensors into pragmatic value within the context of an enterprise VR application, but with those challenges come new market opportunities for developers and companies to tap into the ultimate potential of the medium of VR. The Omnicept Edition starts at $1,249 and has been available since May 2021. You can hear more context about the development and potential applications of the Omnicept in my conversation with Rawlings, Wang, and Siegel.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So one of the trends with virtual and augmented reality is that it's going to take all sorts of new information from our body. Physiological data, our heart rate, our heart rate variability, our eye tracking, our eye saccades, visual attention, our face and what's happening with our mouth. There's all sorts of information that's going to be fed into these immersive experiences. One of the first virtual reality headsets that's really trying to take in all this information and apply it to different enterprise training contexts is the HP Reverb G2 Omnicept Edition. So this is a special edition of HP's Reverb G2, and it's got all these different sensors on top of it. And it's really aimed to have specific independent software vendors, ISVs, develop special applications so that they can really leverage and use a lot of these sensors. So I had a chance to try out the HP Reverb G2 Omnicept Edition, and there's only a couple of different software applications I was able to really try out. And there's ways in which my behavior and actions are translated into these numbers. For example, a cognitive load score for how cognitively loaded I was in any given moment within one of these experiences. The challenge that I found, at least initially, is that it's sometimes difficult to make correlations between what these numbers are saying and what my own phenomenological experience is saying. Some of it is just in terms of not having real-time feedback, being able to do this close analysis. The other part is that there's this translation of all this specific information that's happening, and it's digested into HP's own way of calculating these numbers around cognitive load. I think in the long run, there's a lot of potential for where this is going to go. I had a chance to talk to a number of different people that are helping to manage and plan out and create this platform, as well as doing the experimental psychology and research science on specific applications of this, using lots of different artificial intelligence and whatnot. So I'll have a chance to have a chat with some of the folks from HP to unpack their intention for this. And then I'll break down a little bit more about my own direct experiences of this, and also the challenges of how to evaluate this type of hardware that is capturing all this biometric information. Yeah, just some general thoughts on the HP Reverb G2 Omnicept Edition in general at the end. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Scott, Henry, and Erika happened on Friday, September 3rd, 2021. So with that, let's go ahead and dive right in.
[00:02:36.813] Scott Rawlings: I'm Scott Rawlings. Nice to see you again, Kent. I manage the Omnicept platform and I'm also working on some other future platforms, mostly in the software arena, in the function of product management.
[00:02:50.945] Henry Wang: Hey Kent, I'm Henry Wang. I'm a product manager here in the virtual reality group at HP. I focus more on the hardware and devices part of our portfolio. And so specifically here, I'm the product manager for the HP Reverb G2 Omnicept Edition.
[00:03:07.410] Erika Siegel: I'm Erika Siegel. I'm an experimental psychologist by training. I am a research scientist on this project, and I've actually worked very closely with Jeremy Bailenson, who I think you know very well. And I am the subject matter expert behind the human inferences that are part of this product.
[00:03:26.512] Kent Bye: So why don't you each give a little bit more context as to your background and each of your journey into VR and what you're now doing relative to working at HP and interfacing with the broader XR community.
[00:03:38.685] Erika Siegel: I'll start. So my background is as an academic scientist. Before coming to HP, I specialized in taking information from sensors and inferencing human psychology, so using physiology to predict things about emotion or stress or resilience. I did a PhD in Boston, and then I was a research scientist at UC San Francisco, working on stress physiology and developing, with Samsung Research America, a product that would allow you to essentially inference, you know, predict stress from things like blood pressure. I was actually recruited to come work on this team based upon the work that I was doing as a scientist, as an academic scientist. And I was really excited to come work here because there were a number of other really exciting, cutting-edge machine learning scientists. We had some HCI, we still have some HCI researchers, and then other academic researchers. And I've gotten a lot out of my relationship with Jeremy Bailenson, who I was just talking to you a little bit about before we got started, who is an expert in VR. And so, you know, that's what I do at HP, is just continue to think about the science and keep the inferences close to the science, and making sure that that's what we can do reliably and what we can do on a regular basis. And so that's what we're working on. I'm working really closely on a number of different exciting VR projects. And yeah, it's really fun work. We're having a really good time, I gotta say.
[00:05:09.537] Henry Wang: Yeah. So for my part, I started at HP in our commercial PC business before we had a VR org or anything. And then, you know, I was looking into VR as sort of part of my tasks at the time, when VR was still sort of, you know, nascent as an industry. And I was just absolutely struck by how much potential there was in VR as a technology to really influence the way we work, the way we play. And so when I heard that HP was going to create a whole business unit around VR and immersive technologies, I absolutely jumped at the chance. And what I find interesting about VR is really the different ways people use it, both on the consumer side and on the commercial side. And so that led me into the product management role that I am in today. And it's been an absolute joy to have really worked on bringing a device to market in the Reverb G2 Omnicept Edition that not just delivers, you know, an incredible immersive experience, but then, you know, integrates sensors in a way that enables some of the use cases that Scott and Erika were talking about earlier. And so, you know, looking to the future, how do we make headsets and devices even more immersive, but then also continue building on those capabilities for adapting experiences within the context of those different commercial uses.
[00:06:39.185] Scott Rawlings: Yeah, I actually started in the first virtualization, which is flight simulation. So it was quite a long time ago, but that was my first job out of engineering school: I was a design engineer on a $20 million flight simulator, where you have platforms and you put the cockpit on the platform, and we had 12 channels that we piped out on a dome. You know, it was for very high-end customers, all the airline industry and the government applications. But yeah, that's where I started. And then I moved into pioneering multimedia and non-linear video editing solutions. And then I moved to human-computer input for creators before coming back to HP. And it's been really fun, because VR really is about creating experiences. So it's all the multimedia and content development experiences, human-computer interaction, and this virtualization and simulation. So in a way, for me personally, it feels like my career is coming full circle, and it's really been a blast to be part of this team.
[00:07:48.198] Kent Bye: Great. So maybe you could give a bit more context as to this product, the HP Omnicept, and how it came about and how it's different from, say, the HP Reverb just as a consumer device.
[00:08:00.350] Scott Rawlings: Sure. So I think it's probably clear, but the HP Reverb G2 Omnicept Edition is solely focused on commercial applications. And the larger idea of that device, in conjunction with a developer platform that we call Omnicept, HP Omnicept, is that we can marry biometric sensors that are integrated into a VR headset with machine learning to understand the context of what the person in the VR experience is actually going through as they experience the application, and that can inform the application to dynamically adapt and adjust to the person in the VR experience. And so it can do that through biometrics like heart rate, pulse rate variability, real-time pupil dilation, and even what we call eye saccade, which is how we move our eyes, how our eyes dart around, which we don't realize, but that eye saccade naturally changes depending on how we're responding to an experience. And so it's all those types of insights that we provide as inputs into a machine learning model. And this is where Erika comes into the equation in a huge way, because there's the human response to stimuli and what we can feed into machine learning models, real-time models that have been trained up appropriately to give deeper insights, and we call that human inference. And so, Erika, maybe you could just dive in real quick on our first human inference, which is cognitive load, and speak to the relevance of that, and maybe a little bit behind the development of cognitive load.
[00:09:51.105] Erika Siegel: Sure. Yeah, absolutely. So the high-level skinny on cognitive load in this context is it's a real-time measure of mental effort. So in general, right, in any given, I mean, we certainly have a subjective sense of what cognitive load is. Like if you're, for example, trying to answer an email and you're getting 15 other emails and you get a text message and your wife is calling you from the other room, asking you for something, you get the sense, the subjective sense, of feeling sort of overloaded, and you kind of don't know what to process in any given moment. So that in general is cognitive load. And it turns out that cognitive load can be estimated pretty reliably through a host of physiological sensors. So, using data from physiological sensors, we can predict in any given moment how cognitively loaded an individual feels or is experiencing a task in the moment. Now, Scott just referenced a number of indicators that we use that are really important. One of them is saccades, which is any time your eyes just sort of flicker from one direction to another. It turns out saccades are pretty important for visual attention, trying to understand processing of information in a task, on a screen, and in an environment. Some of the other features that are particularly important are things like heart rate variability or some other indices of heart rate, other information from the eye tracker. So pupil size is a nice indicator of physiological arousal, which is like sympathetic nervous system activation. So that sort of sense of fight or flight, or decreased rest and digest. And so we take this information, and from that, we are able to predict an individual's cognitive load. Now, obviously that doesn't happen in a vacuum. The way that we get to that is that we trained machine learning models. This data was collected over the course of two and a half years. I think it would have been much shorter if we hadn't had COVID smack dab in the middle. We collected data on industry standard, or actually scientific industry standard, cognitive load tasks, in which participants tried to do a number of different tasks of different levels of difficulty with different types of demands. And during that we measured both the physiology from the various sensors I just described and also their subjective experience of how loaded they felt. We collected that data across four continents. So we collected data in Brazil, we collected data in Africa, we collected data in the US, and we collected data in, I think I gave you some countries and some continents, South America, Africa, North America, and Asia, four individual countries, over 2000 individual research participants. And from that, we were able to train a machine learning model to predict in real time how loaded an individual was feeling. And so that is what we have implemented in cognitive load. And that's part of our software developer kit, our SDK. That's the first insight. And so in general, any of our ISVs or customers or anyone that we're working with essentially gets that information and then implements that into their already existing environment, whatever their virtual environment is, with some estimate of the confidence in the given moment of the estimate. So the prediction is, you know, how cognitively loaded are you from zero to one? And then there's some confidence around that estimate. We also provide just straight heart rate.
So that also comes as part of the cognitive load estimate and heart rate variability. Am I missing any, Scott? I think we have some looking behavior indicators, but those are the ones that are baked in.
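To make the shape of that output concrete, here's a rough sketch in Python. The structure and names are my own illustration of a zero-to-one load value paired with a confidence estimate (not HP's actual SDK types), along with one way an application might smooth it before reacting:

```python
# Illustrative only: a hypothetical shape for the inference an ISV receives
# (a 0-1 cognitive load value plus a confidence), and a confidence-weighted
# running average an application might keep before acting on it.
from dataclasses import dataclass

@dataclass
class CognitiveLoadSample:
    load: float        # predicted load, 0 (unloaded) to 1 (fully loaded)
    confidence: float  # how much to trust this prediction, 0 to 1

class SmoothedLoad:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # base smoothing factor
        self.value = 0.0

    def update(self, sample: CognitiveLoadSample) -> float:
        # Low-confidence samples move the estimate less than confident ones.
        weight = self.alpha * sample.confidence
        self.value += weight * (sample.load - self.value)
        return self.value

if __name__ == "__main__":
    tracker = SmoothedLoad()
    for s in [CognitiveLoadSample(0.8, 0.9), CognitiveLoadSample(0.2, 0.3)]:
        print(f"smoothed load: {tracker.update(s):.2f}")
```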
[00:13:24.795] Scott Rawlings: We pass through gaze vector and pupillometry. So if people want to develop their own machine learning models, they can have at it. It is a very complex activity to do, where you're trying to remove biases, get adequate data samples, and develop the benchmark testing. And we have provided, and Erika could speak to this, our open test data set, so that we can be as transparent as possible about our stated level of accuracy and performance for the human inference cognitive load. I do just want to take a moment to speak to, okay, why does all this matter? You know, who cares about cognitive load? Who cares about these human inferences? So I'll just give a couple of examples where this could matter. So one quick example could be that you have children going through experiences that try to distract them from discomfort. It may be something that they're experiencing when they go to the doctor, or they're going through certain kinds of wellness experiences. So to distract the child, you develop a game, and that game empowers the child based on their cognitive load, based on their calmness. And the more calm they are, the more empowered they are in the game. And so it's a reinforcing cycle that gets their focus and then can actually authentically reduce their discomfort and keep them, in an extreme way, distracted from what's happening outside of that experience. So that's one example where this could play a role, where it's a feedback loop. And it's adapting to the person in the experience to increase their focus. Another good example is training, where we often talk about the Goldilocks zone, which is: you're measuring. And this is where Erika alluded to this, but there's always context to cognitive load. We don't know the context, but the application knows the context. The application knows exactly where the person is in the experience and what's going on around them. So when you marry that context with a human inference, for example, in training, the first time I'm going through taking apart a specific aspect of a jet engine, it's difficult. I've never done it before. My cognitive load is high, my performance and my speed are pretty low. Then the fourth time I've done it, my performance comes up. And hopefully, we start seeing the cognitive load come down. And as you see that cognitive load come down, now the trainer may go, well, that person is ready to move from VR training to hands-on training, which tends to be a lot more expensive than VR training. And it's a way to cycle people through training processes in a streamlined way. Right. That cognitive load insight is important. Go ahead, Erika.
[00:16:28.313] Erika Siegel: Yeah. So one of the pieces of that that I think is really important is cognitive load provides a little bit of extra information over and above performance. And that's because, essentially, cognitive load is how loaded you feel. So I think we've all had experiences. Certainly, I have. I think a lot about when I was first learning to drive a car. I was successful at driving a car, but I would get out of the car and I felt like a wet dishrag. There's a way in which you can perform well but still be really loaded. It's taking a lot of cognitive and emotional and mental resources to complete the task. So even if you're doing it successfully, it can be very exhausting for you. And the science is very consistent on this. People who have jobs where they're really highly cognitively loaded over long periods of time experience more burnout. They tend to get sick more frequently. They experience a lot of exhaustion. They have more stress, and so on and so forth. And so I think that's the sort of next part of Scott's point, which is that that's what you want. You want someone to not only be able to perform well, but to be able to perform well in a way that's comfortable for them on a regular basis. And so part of this Goldilocks zone is that both your performance is good, which we've always been able to measure, but are you able to perform with finesse? Can you perform in a way that feels easy and comfortable, that flow state that we all talk about? So that's part of what we're really excited about for this. And sorry to cut you off, Scott. I just wanted to add that point.
[00:17:51.716] Scott Rawlings: No, that's great. And then I would just add a third example, and that's simulation. And hopefully I won't go wrong on this, Erika. But let's say you've got pilot training, or you've got some high-risk training scenario, which is almost always done by a simulation. There's a cognitive load aspect where my right engine went out, I'm having hydraulic problems with my gears coming out, and I've got high wind conditions. And it's not just one thing, it's three things that are just driving me nuts. And what am I going to do? How am I going to handle this situation? And by the way, I'm running low on gas because I just came across the ocean. So it also translates in some way to stress, because of the context of that situation. And then we come to what we call after-action review, where you're actually measuring not only performance, but decision making. And as the pilot gets through that experience, they go into after-action review. And when they can see their cognitive load spiking and they can relate it to decisions they were making in the moment, it's not only part of the real-time simulation experience, but it's part of the overall process of learning and doing better the next time.
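Here's a toy sketch of that Goldilocks-zone idea: combining a performance score with the cognitive load trend across attempts to suggest when a trainee might be ready to graduate to hands-on training. The thresholds and data shapes are invented for illustration and aren't taken from HP or any ISV:

```python
# Sketch of a "Goldilocks zone" check: look at task performance together with
# the cognitive load trend across attempts and suggest a next step.
# All thresholds here are illustrative, not real product values.
from dataclasses import dataclass
from typing import List

@dataclass
class Attempt:
    performance: float  # task score, 0 to 1
    avg_load: float     # average cognitive load for the attempt, 0 to 1

def readiness(attempts: List[Attempt],
              perf_floor: float = 0.8,
              load_ceiling: float = 0.5) -> str:
    if len(attempts) < 2:
        return "keep practicing: not enough attempts to see a trend"
    latest = attempts[-1]
    load_trend = latest.avg_load - attempts[0].avg_load
    if latest.performance >= perf_floor and latest.avg_load <= load_ceiling:
        return "candidate for hands-on training: good score, low load"
    if latest.performance >= perf_floor and load_trend < 0:
        return "keep practicing: performing well, load still coming down"
    return "keep practicing: performance or load not in the target zone yet"

if __name__ == "__main__":
    history = [Attempt(0.55, 0.85), Attempt(0.75, 0.70), Attempt(0.90, 0.45)]
    print(readiness(history))
```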
[00:19:09.806] Kent Bye: That's great. That's a great overview. As I think about a product like the HP Reverb G2 Omnicept Edition, it reminds me of technology diffusion. You know, Simon Wardley describes it as these four distinct phases. The very first one is like the duct-tape prototype genesis of an idea. Then it moves into custom bespoke handcrafted enterprise applications. And then eventually it's a consumer product that is available for consumers to buy. And eventually, the fourth phase is it reaches a state of mass ubiquity where it's a commodity like electricity or cloud computing, where it's less about the differences of the product features, but it's just everywhere. But a lot of the technologies that we're talking about here so far are still in that custom bespoke handcrafted enterprise space rather than the mass consumer space. I expect eventually that a lot of these technologies will get into that consumer space. And there's obviously a lot of ethical issues that are going to be introduced at that consumer scale. But I think that there's more of a bounded context when it comes to the enterprise use case. So maybe you could describe that a little bit in terms of the specific industry verticals that you're really targeting with this, and the B2B type of structure that you have set up here that may be different than just, say, buying a piece of consumer hardware. There may be other software services or packages that enterprises are buying that maybe are beyond the price range of something that a consumer would buy. So maybe you could get into both the industry verticals and how you're structuring how this piece of hardware is dispersing out into the XR industry.
[00:20:43.781] Scott Rawlings: I think just in terms of what we're targeting, like many of us have mentioned, we're focused on the commercial applications of VR. We see them really starting to scale out now. And probably the largest one is training and simulation. There's a lot of proof points for why and how VR is relevant for training and simulation. You know, the typical classroom experience, if you walk away with 20% retention, you've done pretty well. If we say that you learn by doing, and the beautiful thing about VR is that you do learn by doing, and that's why the retention rates are above 75% for most learning that's done through VR. And I don't think any of us would say that VR is the answer to all kinds of training, but it's definitely one of the important new tools that we have at our disposal in the education and training process. And because it's immersive, there's things that you can experience in ways that would be impossible to do any other way. And I'm sure you know that, Kent. So training and simulation education, that's a huge area of opportunity for commercial VR. After that, we see use cases in wellness. We've mentioned some of that as examples, and in particular, with advances in solutions like we present with Omnicept. where it provides new avenues for research and methodologies and approaches that really are pioneering. And we feel like it's blue sky right now, like you mentioned, it's pretty early stage. So we're excited to see where it can go. In some ways, we're presenting this as greenfield technology and we don't know exactly where it's going to go, honestly. But we know that it's exciting and we're seeing strong early indicators that it can make a real difference.
[00:22:39.520] Henry Wang: Yeah, and I think speaking a little more about the dispersal of this technology within the commercial space, within those use cases, those verticals that Scott was talking about, you know, HP obviously ships a ton of hardware products and a ton of devices, but this product was very much unlike so many of the others that we've done, in that working closely with commercial developers, enterprise developers, and ISVs is such an integral part of this effort. Because at the end of the day, there's no Omnicept solution without your ISVs, like your OvationVRs, like your Mimbuses and PIXOs, creating the content within the expertise of their particular industry verticals that really brings the benefits of Omnicept to life. And so, you know, one of the big challenges in bringing this, you know, you talked about that phase of commercialization and bringing it from kind of a concept to an actual product, and closing that gap was that, you know, we had to work closely with those ISVs, with prototype devices, with early access, and, you know, make sure that we're creating the right device for them to use there as well. And in addition, of course, the right tools, software tools and SDK for them to light things up as well.
[00:24:04.589] Scott Rawlings: Yeah, I don't think that ecosystem development can be overstated. And that's where Joanna and other people on our team have really played a critical role. I'll just add one other use case that we haven't mentioned yet, but that's really important to us. We're part of the advanced computing and solutions group. And part of the segments that we focus on are product development and architecture, engineering and construction. And I've spent a good portion of my career with design thinking and human-centered, user-centered design approaches. And when you begin to think that you can do what I call hyperreality, where you can put somebody in the cockpit of a car and you match the virtual with the real, and you can start measuring things like cognitive load or emotional response, both aesthetically and as they go through, you know, turn on the windshield wipers. How do they go about that? Being able to understand where they look and what order, what emotions they're feeling, how frustrated they are. These are really important insights to designing better dialysis machines, to design better cars, better cockpits for airplanes, and for making these experiences not only more functional, but more enjoyable. So in product design and architecture, there's huge ramifications for being able to understand what people are looking at and what they're experiencing with what they're looking at and how they're interacting with their environment.
[00:25:39.001] Kent Bye: Yeah, I'm wondering if we could do a brief overview of sort of the input and output, in terms of there's some sensors that are on there like eye tracking, and I don't know if there's other stress sensors that are on this headset, and then from an SDK level, like how that is interfaced by these ISVs who are integrating this into their experiences. Is it something they just drop into a Unity project, and then all of a sudden they have all of the affordances of eye tracking? Or how, where is the data flow going? And if it has to go into like cloud processing to kind of understand the statistics of what's happening, and what the interface is between the software that's being developed, the headset, and then any other cloud services or other things that have to kind of do some additional providing of context or analysis of some of the raw data that's coming through.
[00:26:28.414] Scott Rawlings: Yeah, I think there's a couple of layers. So something that you've mentioned before is how did things progress over time to become more ubiquitous or more prevalent in their use? Right now, we are in the mode where we have a developing ecosystem. We're in the early stages of that. And you're right, a lot of the development is much more bespoke. And Omnicept is, to be fair, an indication of that. It plugs into Unity or Unreal. And just to keep it simple, it's a development platform that you would manage via your Unity or Unreal development with that augmented capability that's plugged in that we call Omnicept. But once you've developed your application with that plug-in architecture, as you go to deploy it, you're taking a model that's already been trained. It doesn't need the cloud. And that model needs to run on the same host as your device is connected to. So in the future, if that's an all-in-one device, you would need a client app that's running in the all-in-one headset, and it's all processing locally. Now, it may be that in the future, we do have split rendering or augmented compute. So it's hard to predict exactly how that can carry forward. But in the PC attached model that we have right now with what we call our carbon headset or the HP Reverb G2 Omnicept Edition headset, it's a PC attached headset and the Omnicept runtime is on the PC that the headset's attached to. And it's got all the human inference training embedded into that runtime. So you don't need to call the internet. And that's good for a lot of our customers because many of our customers don't want information going out into the ether. And this gets to security and privacy and other concerns.
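A rough sketch of that local-only data flow, with a stand-in thread playing the role of the inference runtime and the application consuming samples on the same machine, nothing sent over the network. The names and transport here are my own illustration, not HP's actual runtime API:

```python
# Illustration of the local-only flow described above: a "runtime" thread
# stands in for the inference service on the same host, the application
# consumes its samples, and no data leaves the machine.
import queue
import random
import threading
import time

def runtime_stub(out_q: queue.Queue, stop: threading.Event):
    # Stand-in for the locally running inference runtime.
    while not stop.is_set():
        out_q.put({"cognitive_load": random.random(),
                   "heart_rate": 60 + random.random() * 40})
        time.sleep(0.5)

def application(in_q: queue.Queue, samples: int = 5):
    # The VR application polling the runtime for new inferences.
    for _ in range(samples):
        sample = in_q.get()
        print(f"app received: load={sample['cognitive_load']:.2f} "
              f"hr={sample['heart_rate']:.0f} bpm")

if __name__ == "__main__":
    q, stop = queue.Queue(), threading.Event()
    threading.Thread(target=runtime_stub, args=(q, stop), daemon=True).start()
    application(q)
    stop.set()
```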
[00:28:33.458] Kent Bye: Yeah. And Henry, what are some of the, as you're managing the product, what are some of the either upcoming features or features that you're working on, or stuff that you have already launched, that you're really excited about and seeing a lot of traction?
[00:28:45.692] Henry Wang: Yeah. So, you know, I think a lot of that is sort of already out there in the open in the headset itself. You know, the kind of nature of hardware is that it's for the most part got to be in the device for it to function. And so while today, you know, we've launched Omnicept focused around cognitive load and kind of creating that awesome feedback loop for training and simulation use cases that Erika and Scott talked about, that primarily makes use of the eye tracking sensors and pupillometry in large part, as well as the PPG sensor located in the face gasket of the headset. But if you look on the device, there is, of course, a face camera as well. And so it's a camera mounted on the bottom of the headset pointed at the mouth and lower face area. And so our hope for the future, as we continue building the Omnicept platform, is that we start to make use of that in conjunction with the other existing sensors on the device as well to deliver expressivity. So you look at a lot of VR experiences today, many of them are multi-user. Some of the best ones really are about having multiple people sharing the same experience, whether that's a training experience, whether that's a simple collaboration, meeting room, maybe on the consumer side, you're thinking of something like VRChat. And so the ability to make these avatars much more expressive and much more natural really will take VR collaboration to sort of that next level in terms of engagement and, you know, really comfort as well. You know, when you're talking to someone, is it more akin to face-to-face interaction, and how can we continue getting closer and closer to that type of experience? So for me personally, that's one of the things that excites me most about the future of the headset as well as the Omnicept platform itself.
[00:30:46.177] Kent Bye: Yeah. As you bring in the aspects of social VR, as I look at the consumer space, the power users of VRChat are on these very expensive PC rigs with full body tracking, and even Neos VR having facial tracking built in from the HTC Vive Facial Tracker. And so, if people who are more consumers wanted to get access to the HP Omnicept with this eye tracking, to be able to sort of show your eyes within social VR, is that something that'd be even possible, to be able to have access in, say, Neos VR or VRChat to be able to make use of some of these high-end sensors? Or is it only targeted towards commercial use cases, and is it part of your terms of service to prohibit people from using this within a social VR context?
[00:31:32.183] Henry Wang: We definitely don't want to place restrictions on individual consumers that want to go out and start using this for that. The challenge currently, I would say though, is the integration of these additional sensors into a VR headset is still a relatively new thing in the context of VR. And so there are certain challenges around bringing that down to sort of a level, a price point in particular, that's much more accessible to the consumer market and really democratizing the ability to have sensor integrations for more expressive avatars, to have that at a more accessible level, which is why we're focused on enterprise and commercial, at least for this initial iteration. But I think in terms of those folks with the really elaborate setups, multiple body trackers and such that you just described, we absolutely welcome that sort of tinkering and experimentation. Because ultimately, you know, it's commercial developers that make Omnicept great for the commercial use cases, but it'll be those folks on the consumer side developing and integrating those sensors to their own uses that'll really help drive adoption and make the case for this to become accessible for that part of the market.
[00:32:52.891] Kent Bye: Yeah. And Erika, I wanted to ask you a question in terms of the science and the research. One of the things that I've noticed within the realm of virtual reality technologies is that sometimes you have a technological innovation that actually opens up entirely new fields of research. Like what comes to mind is, say, CTRL-labs, that was bought by Facebook, which is an EMG neural interface, but the science was there for many, many years, as is often the case, with academia far out ahead, kind of figuring out the foundational concepts that are even possible. But yet sometimes there's a confluence of all these things coming together that actually open up entirely new fields of research, and talking to Michael Casale from Strivr, he's a behavioral neuroscientist there, talking about a lot of what he sees as where this could be going in terms of just being able to do entirely new realms of behavioral research on folks. And so I'm curious, since you're coming from that science background, if there's any part of your job that is, you know, seeing those new realms of, say, hey, I have a sense that there may be an efficacy of what is possible here, let's do some foundational research and publish it, and then establish that this is now possible given this hardware.
[00:34:04.154] Erika Siegel: You know, that's a really good question. I think, you know, often what happens with, so let me make sure I understand your question. Are you asking if there's foundational research out there or if we're conducting that foundational research?
[00:34:14.341] Kent Bye: Well, if you feel like there's stuff that is coming together, and with the Omnicept bringing a lot of these technologies together at a commercial scale, whether that's maybe opening up new opportunities to do the type of research that wasn't really that feasible before.
[00:34:28.272] Erika Siegel: Yeah, so one of the things is that I, with one of my colleagues, Bart Massey, who's a designer at HP, gave a presentation at GDC this year. And as a result of that presentation, we talked a lot about Omnicept, and he's just sort of like a big, sweeping thinker. And so we had a sort of dynamic conversation where we talked about just this exact topic. And what came out of it was some really exciting conversations. And so let me give you a little preview. I mean, this is one of the things that's really great about Omnicept, is that it enables developers to dream big. And one of the things that we're trying really hard to do is make sure that our technology lets them fly. And so one of the really exciting ones that has sort of started to seed, that I've seen in the developer community, is biofeedback. It's essentially taking these sensors and having that data essentially streamed in real time and then fed back into the program, so that you can start to see any number of things, really. I mean, we have a host of sensors that are part of Omnicept, but there are other sensors that individuals in the XR community are using for measuring things like, you know, like you were talking about, I think you were talking about EMG, but you know, facial EMG measures the musculature of the face, but there are a lot of EMG sensors that you can put on the body. So I've seen some really exciting pilot programs where people are doing rehabilitation for, like, stroke, but I was actually meaning recovering from injuries in general, right? So you could imagine having information from sensors of the musculature that are essentially being fed back into an individual VR environment, and that VR environment then takes that information to change dynamically, to give you new and exciting things to do with your body. So you can imagine a game changing in real time. This is what I mean by biofeedback, so that the information from the sensors is essentially represented in the environment and then changes it as a result of the information from the sensors. That I think is incredibly exciting. I'm really looking forward to that technology. I think, you know, when you were mentioning this idea of, you know, taking the science and turning it into a real product, there's always scalability issues, unfortunately, or just availability of sensors or just commercialization of sensors, but that potential is one of the things I'm really excited about. One of the things I worked on early in my scientific career is trying to understand the extent to which people can perceive their own body signals, you know, how much do you know about how fast you're breathing, or how much sort of conscious insight do you have into changes in your heart rate? And so being able to instantiate the information from these sensors into an environment that's happening in real time offers some really exciting possibilities for healthcare, for gaming. I think once we started really talking to the gaming community, we realized, you know, they're just always very excited. I feel like the developers we've been talking to are so creative. So I'm really excited about that as a place where the science has been there for a while, and taking the technology and putting the sort of wonder of VR into it, I can see some just really exciting possibilities there.
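As a toy illustration of that biofeedback loop, here's a sketch where a simulated heart-rate signal drives an environment parameter, in this case the pace of a guided-breathing cue. All of the values and the mapping are invented for illustration and aren't tied to any shipping SDK:

```python
# Toy biofeedback loop: a simulated heart-rate stream drives an environment
# parameter (the pace of a guided-breathing cue), which is the basic
# sensor-data-fed-back-into-the-scene pattern being described above.
import math
import random

def simulated_heart_rate(t: float) -> float:
    # Pretend signal: slow drift plus noise, in beats per minute.
    return 75 + 10 * math.sin(t / 20.0) + random.uniform(-2, 2)

def breathing_pace(heart_rate: float) -> float:
    # Map a higher heart rate to a slower breathing cue (seconds per breath),
    # clamped to a comfortable range. The mapping is illustrative only.
    pace = 3.0 + (heart_rate - 60.0) / 10.0
    return max(3.0, min(8.0, pace))

if __name__ == "__main__":
    for t in range(0, 60, 10):
        hr = simulated_heart_rate(t)
        print(f"t={t:02d}s  hr={hr:5.1f} bpm  -> breathing cue every {breathing_pace(hr):.1f}s")
```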
[00:37:35.775] Scott Rawlings: I'll just give a real quick case example. So we work with a company called Theia Interactive. I don't know if you know them, Kent. But they created a new product called Claria that supports Omnicept. And what it allows for is A/B testing. And we're talking to other companies who are doing point-of-sale type of solutions, where they simulate point of sale or retail. To your point, being able to get insights where you can actually simulate a customer, have a customer put on a headset, walk into a store virtually with it. It used to be that it was always an exit interview. If you went to a movie screening, they didn't know what you were looking at. They didn't know if you were asleep. You know, they didn't know anything. All they know is that when you walked out of the theater, you filled out a survey. That's what they knew. Now, imagine that I'm watching the movie through a headset, and you can understand if you get scared or you get excited, or, you know, there's lots of insights that can be gotten through neural insights now that are much more scientific and informative for making products better, making solutions better, making medical devices better, et cetera. So I think it is opening up a lot of opportunities, to Erika's point.
[00:39:06.703] Kent Bye: Yeah, just a few more questions here to wrap up, but one technical question is all the different sensors that are available and what is integrated. Because I don't know if there's ECG, electrocardiogram, to be able to get heart rate and heart rate variability. And then you're looking at the mouth. I don't know if you're able to do any sort of emotional sentiment analysis on that, if there's any strain gauge sensors or EMG sensors that are built into this yet, or if that's something that's on the roadmap. And then you have the eye tracking and different aspects of the pupil and pupillometry. So maybe you could give a quick run-through of all the physiological inputs that are being made available with Omnicept.
[00:39:43.257] Erika Siegel: Yeah, sure. I'm happy to take that one. So heart rate and heart rate variability are calculated through an optical sensor, which measures photoplethysmography, but generally people call it PPG. So ECG in general are sensors that need to be on the body, but PPG you can get in almost any place. And in fact, most of our wearable devices that can measure our heart rate have what are called PPG sensors. So heart rate and heart rate variability, well, technically it's actually pulse rate variability, right, because it's photoplethysmography, are estimated through that sensor that is actually embedded in the mask. In addition to that, we have a number of sensors that we've already sort of, you know, high level mentioned, which is eye tracking. So that's powered by Tobii, who we've been working with very closely over the last several years, that gives information about, you know, the movement of the eyes, we were talking about saccades, the size of the pupil, whether the eyes are open or closed. In addition to that, we have information about gaze. So we have gaze vector in the X, Y, and Z, and I guess W technically as well, which is the quaternion in VR. So we ended up with another vector. And in addition to that, there is a motion, an IMU sensor. That's correct. I know there's an IMU sensor. I think that it is. Yeah. I think the data from that is actually made available. In addition to that, there's the mouth camera, and then the eye cameras feed into Tobii. So that information from the eye cameras is actually what we use to produce the eye tracking, but the raw feeds are not part of the sensor, the sensor array.
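Summarizing that list as a single data structure, here's one illustrative way an application might group the channels per frame. These field names are my own and don't mirror HP's actual SDK messages:

```python
# Illustrative grouping of the sensor channels listed above into one record;
# field names are hypothetical and do not reflect HP's actual SDK types.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EyeData:
    gaze_direction: Tuple[float, float, float]        # unit gaze vector (x, y, z)
    orientation: Tuple[float, float, float, float]    # quaternion (x, y, z, w), as mentioned above
    pupil_diameter_mm: Optional[float] = None
    eye_openness: Optional[float] = None               # 0 closed, 1 open

@dataclass
class SensorFrame:
    timestamp: float
    heart_rate_bpm: Optional[float] = None             # derived from the PPG sensor
    pulse_rate_variability_ms: Optional[float] = None
    eyes: Optional[EyeData] = None
    imu_accel: Optional[Tuple[float, float, float]] = None
    face_camera_frame: Optional[bytes] = None           # lower-face image, if exposed

if __name__ == "__main__":
    frame = SensorFrame(timestamp=0.0, heart_rate_bpm=72.0,
                        eyes=EyeData((0.0, 0.0, -1.0), (0.0, 0.0, 0.0, 1.0), 3.1, 1.0))
    print(frame)
```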
[00:41:10.690] Kent Bye: Okay. So no EMG sensors within the headset yet.
[00:41:14.071] Erika Siegel: That's correct. Yeah.
[00:41:15.632] Kent Bye: Okay. Great. And, and finally, what do you each think is the ultimate potential of virtual reality, and what it might be able to enable?
[00:41:25.317] Henry Wang: Yeah, so on my end, when I look at what VR does, it really is about sharing experiences and spreading those out. Now, those might be unique from one person to another, but to be able to simulate an entire experience and let someone else go through it, to really put someone into a different set of shoes, when you distill it down, that's what it's all about, right? And maybe you're telling a story and trying to develop empathy with that, or maybe you're imparting knowledge, right? Even if you think of a basic procedural training experience in VR, like a maintenance technician would go through, it's about putting them into that set of shoes and running through that scenario over and over again so that they build that knowledge. I think a big part of what we're aiming to do is making VR continuously more immersive and then constantly improving the interactions within it as well, so that these experiences that people walk through, when they put on a headset, just become more and more compelling until VR reaches that final stage that you mentioned, Kent, where it's just ubiquitous. And I think when we hit that point of ubiquity, you've got an entire world, the entire world's population, that can really benefit from just this direct sharing of experiences, whether that's for playing a game or experiencing entertainment, or for, you know, developing empathy for various scenarios in like diversity and inclusion, or for even just fundamental kind of training, hard procedural skills, that VR becomes that super effective medium to do so. And so I think that's really what it's aiming at, is making this type of experience sharing more accessible to everyone.
[00:43:19.682] Erika Siegel: You know, the reason I do this work, and what really excites me, is the potential for honest-to-God equity. So one of the things that I think is really exciting, and I see the seeds of it, and it makes me just, like, feel my heart get big and excited with, uh, with the potential, is, you know, true, honest-to-God equity. So, you know, you live in a rural part of a place where there's no doctors, you have a really hard time getting access to medical care. The idea that you can talk to a specialist or have a, you know, an appointment with a specialist in New York or something, you know, someplace where there are more, you know, you can't throw something out of the window without hitting a doctor. That to me is amazing. Just the idea of, in education, in ability, disability, like rights and access, giving people access to things they don't have access to right now, making the world a place where there's a reliable technology for getting access to all of the things that the world has to offer, to me makes me feel like that is what the true, from my perspective, the true value of VR is: being able to just have access to, no matter who you are or where you are, have access to all the things the world has to offer.
[00:44:32.973] Scott Rawlings: I would just add that our group is really mostly focused on commercial. We've talked about a lot of the use cases that improve how we learn, how we develop our skills, that can improve our well-being. This could even go to things like physical therapy, where the experience in VR gives me range of motion and helps me forget how painful the therapy is much quicker than other things. So there's lots of examples of what it can mean to product development workflows, architecture workflows, training and learning, education, wellness. But I think entertainment is still early stage as well. I mean, what VR can mean for adaptive entertainment experiences. And, you know, if I like horror, what it can do to scare me versus, you know, if I like some other genre, and how it adapts to me and kind of flexes, and it's a new experience every time. That's a pretty exciting area as well. So it just feels like we're at the early stage, but starting to hit a tipping point where we really start to understand the potential of this technology.
[00:45:50.720] Kent Bye: Great. And if people want more information, where should they go?
[00:45:53.963] Scott Rawlings: For our HP Omnicept, if you go to hp.com forward slash Omnicept, we have our whole solution laid out there.
[00:46:03.120] Kent Bye: Okay. Well, Henry, Scott, and Erika, I just wanted to thank you for joining me here on the podcast and giving me a little bit more context and information about the HP Reverb G2 Omnicept Edition.
[00:46:13.430] Erika Siegel: Yeah. Thanks. Thanks for a really great time.
[00:46:15.012] Kent Bye: Thank you so much, Kent. So that was Scott Rawlings. He manages the HP Omnicept platform. Also, Henry Wang, who's the product manager for the Omnicept Edition. As well as Erika Siegel. She's an experimental psychologist, research scientist, and subject matter expert on human inferences. So I have a number of different takeaways about this interview. First of all, I think there's certainly a lot of potential with using all these different sensors and being able to translate all this information into useful information that's either being fed directly into the experience to modulate the experience in real-time, or to be able to measure different aspects of things like cognitive load over the course of an experience. My own direct experience of this, I think, is difficult to draw any firm conclusions from. I basically had access to two specific applications, which is Ovation, as well as Virtual Indus, which was a training application. Both of these are using different aspects of biometric information. They were both using cognitive load, but there wasn't necessarily real-time feedback to say, OK, you're peaking on your cognitive load right now. You could record yourself giving a speech within Ovation and then play it back, and then it would actually show you the different parts of, like, here's your peak heart rate, here's the peak of your cognitive load. But these talks that were given were kind of just off the cuff and didn't necessarily correlate to anything of me really struggling with anything that I could remember from giving the speech. It's a bit arbitrary in terms of the speaking. Now, doing the different training exercises, I definitely had a wide range of levels of cognitive load as I was learning how to swap out this electrical circuit. But the problem that I had was that there also wasn't a nice graph and analysis within the VR application that I could use to correlate things that I had trouble with or that I was struggling with, that I was trying to figure out. So, because of that, again, it gave me sort of an average cognitive load, but also the minimum and the maximum, but it didn't give me a sense of like, okay, at this moment, it was really peaking. The graphing apparently was available within the website, but again, the user interface wasn't necessarily good enough for me to quickly or easily figure out how to record or document something and be able to correlate that to that cognitive load. Now, that's not to say that this is not going to be useful for some people. It's just that in the existing applications, it was kind of arbitrary for me, at least. It was hard for me to be able to tell, OK, this is definitely a moment when I'm using all of my brainpower to try to figure this out. And if I had some sort of real-time biometric feedback, then I think it would just help to be able to calibrate this. And I think this is something that would be useful for the HP team: to have some applications that would allow you to have some sort of real-time calibration with some of these different sensors. Because as you go through these different applications, sometimes it's difficult to see how they interpret or record it. So, even just the eye tracking as an example, I found that in Ovation, it was supposed to be tracking my eyes, but it wasn't necessarily tracking my eyes.
I still had to move my head so it shows me who I should be looking at and puts a big red dot for who I should be looking at at any moment. And rather than tracking when my eyes were looking at that person, it was somehow not calibrated well enough for me, so I kind of had to end up moving my head around until it turned green anyway. So it was functionally more of a head tracking rather than eye tracking, and that could have been some sort of calibration issue. There is a calibration application that is included, and I ran through that, and it seemed to be pretty sensitive to how the headset was fitting on my face. But I think this paints a broader challenge, which is that whenever you have these different applications, how do you correlate this data that's coming from your body, and how do you match it to your own phenomenological experience to make sure that it's matching with what you would expect to happen? And that's generally a challenge with all these different biometric signifiers that are coming from these headsets. That's not to say that it's not going to be possible or that they're not useful. It's just hard for me to know for sure, to kind of validate or to be sure, that the numbers that are being generated are actually matching what my own experience was, or how do you even test something like this? So I feel like broader tests are needed to be able to actually prove this out, because this is in some ways HP's own formula for how they're taking all these different aspects of eye saccades and visual attention and what's happening with your pulse rate variability. There's lots of different stuff that they're pulling in to be able to digest it down into a single number over time. And it's sometimes hard to know what is actually going into all that, and if it's miscalibrated, or if it's measuring all the different information properly. So again, it's difficult to do any sort of internal calibration, or to match experiences over time to what you would expect. So if there were some sort of calibration applications that would be able to actually test some of that out, I think that would be helpful, just as people are trying to get a sense of some of these sensors and what they can do, but also integrate it into these different applications. And I think that would actually help some of the different independent software vendors to be able to get some ideas for what they could do with the data that is coming out. Because right now, they're really leaning upon these ISVs to be able to do these integrations to even really leverage a lot of these sensors that are being made available. And in the absence of having a robust software ecosystem, then it's kind of like you're left to your own devices to be able to integrate and build your own applications and roll your own. So, it does look like there's some other enterprise applications that are out there. It's just we only had access to a couple of them for this kind of review unit that I had. So, just some other comments about the different types of biometric and physiological markers that you have. You have the heart rate, as well as the pulse rate variability. You have the real-time pupil dilation and the eye saccades and the visual attention, and it's also measuring your mouth. There weren't any applications that were really leveraging the mouth movements or anything like that. I think in some sense it would be really helpful to have integrations into social VR applications like either VRChat or Rec Room or Neos VR.
It is an enterprise application, but for people who are doing theatrical performances and really want to have a high-end VR experience, then I think having these things that are tracked on your body be translated into a virtual experience, where maybe you could have a mirror and track yourself, or just even record to make sure that the mouth movements and the eye movements are working properly and correctly. So just something like that I think would go a long way to having some other applications that would have more consumer market appeal rather than just the enterprise applications. So I think that would be helpful, to have other people from within the VR industry pick it up and start to do those different types of integrations. These sensors are going to be coming. One of the big concerns that I have, at least, is the different privacy implications of all this data. Certainly, you can get a lot of really interesting information in terms of what you're looking at, what you're paying attention to. That starts to extrapolate into this real-time biometric psychography in terms of what you're paying attention to, what you find valuable. That in itself is super scary in terms of where that's going to go. So in some ways, I'm glad that all these different sensors are being held within this kind of sandbox of enterprise applications, because doing different tracking analysis, or education, training, simulation, medical simulations, you know, these are all the different applications and industry verticals that they're targeting. And so it seems to be very specific applications and people who already are maybe buying these high-end headsets and computers to be able to do these different training situations, where people are coming in and maybe they're walking through a grocery store and they want to be able to track all this different stuff of what people are paying attention to and whatnot. So yeah, I think there's a lot of potential for where all these sensors are going to go, and it's still at the very beginning. So if you do want to check some of this out, like I said, there's just a handful of different applications that are out there, but there is a number of different toolkits to be able to integrate all this stuff into. I didn't have a chance to test any of the foveated rendering type of stuff. Apparently there's ways to actually do higher-end foveated rendering with some of the eye tracking, and there's some potential there, but I didn't have a chance to test that out specifically. And I will say, just generally, the Windows Mixed Reality system does have the Windows Mixed Reality Portal, and sometimes there can be a little bit of confusion in terms of it talking back and forth between SteamVR and firing this up into the Mixed Reality Portal, and it's always kind of running, and then you want to launch something, but then it's not exactly launching, and you know, I had to end up launching SteamVR first, and then if I would launch the Mixed Reality Portal first, sometimes it wouldn't always work. There is a little bit of streamlining that I think can still happen with the Windows Mixed Reality headsets, and having a whole separate Windows Mixed Reality Portal that's run by Windows interfacing with these other applications. On the other hand, there is this overarching Windows interface so that you can just be completely in VR and go back and forth between different applications.
It does have something unique there, but in terms of the overall VR ecosystem, it's probably one of the workflows that could still use a lot of optimization, at least from my own setup, in terms of trying to figure out how to get all of the things working together. But the visual clarity of the headset was super clear. It was high resolution. And again, this is not necessarily a consumer device, but if you do want to check out the G2, the resolution is one of the best that's out there right now. And depending on what kind of things you're doing, if you're doing a lot of movement, you know, there were some situations where I was doing the training that, you know, it has inside-out tracking, and so it's not as solid as the outside-in tracking. But you know, at the same time, it could be easier to set up, more mobile, more portable. So there's just different trade-offs when it comes to the outside-in versus inside-out. And yeah, for most of these different applications, they're not going to need first-person shooter or Beat Saber levels of body tracking. You're just trying to get a sense of somebody's avatar embodiment and being able to kind of move around a little bit, but nothing like high-end PC VR gaming applications, at least in some of the different training scenarios or the public speaking application that I was able to try out. So I do want to thank HP for sending me over an HP Reverb G2 Omnicept Edition to try out temporarily and to send back, and yeah, just to be able to test out some of these different applications and sensors and everything, and just kind of see where this is all going. Again, this is all in the enterprise space, and hopefully there'll be more independent software vendors able to actually use this and kind of experiment. I do think that in the future, this is where things are going. So if you are an independent developer and want to start to try it out, then I highly recommend checking it out. And all the different information can be found at hp.com slash Omnicept. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.