#466: VR Analytics for Playtesting & Optimization with Cognitive VR

Cognitive VR is a VR analytics platform with an impressive system for visualizing a player's movements and gaze within a 3D representation of a VR experience. Their SceneExplorer tool translates the 3D geometry from a Unity or Unreal experience into a WebGL mesh that can be shared within a web browser. It allows VR developers to quickly and intuitively visualize how players are moving around and what they're looking at, and it also helps identify performance bottlenecks across a wide range of different hardware configurations.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Cognitive VR founder Tony Bevilacqua & Product Manager Robert Merki at TechCrunch Disrupt, where we talked about their VR analytics platform and where it's going in the future. They're looking forward to eventually adding more qualitative feedback and more detailed eye tracking analytics that will expand their user base beyond VR developers, and they're also looking at how to handle augmented reality analytics, where changing environments introduce many more variables.

There are a number of different VR Analytics platforms out there, but the approach that Cognitive VR is taking in correlating their visualizations within a 3D model of an experience is one of the more compelling and interesting implementations that I’ve seen so far.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So virtual reality analytics, I think, is one of the areas where there's a number of different companies and players out there. But probably one of the most impressive systems that I've seen so far is Cognitive VR. And so I'm going to be talking today to the founder, Tony Bevilacqua, as well as the product manager, Robert Merki, about Cognitive VR and what they're doing with VR analytics, which at the beginning is really focusing in as a developer tool to be able to help virtual reality developers optimize their experience, but also to get some feedback as to what their users are actually doing and looking at and experiencing within their VR experience. And so it allows them to be able to take this quantifiable data and map it onto this three-dimensional mesh that is created of a VR scene. So we'll be able to see what the different paths are of people going through the experience and kind of what they're looking at. So we talk about some of the tools that are available in Cognitive VR, as well as where it might be going in the future to be able to do more sophisticated eye tracking and other marketing research applications. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Fishbowl VR. Fishbowl VR provides on-demand user testing for your VR experience. They have hundreds of VR playtesters who record their candid reactions with a turnaround time as fast as 24 hours. You can solve arguments, discover weaknesses, and get new gameplay ideas. User testing is a vital part of the development cycle, and Fishbowl VR takes care of all the logistics so you can just focus on the creative process. So start getting feedback today at fishbowlvr.com. So this interview with Tony and Robert happened at TechCrunch Disrupt that was happening in San Francisco from September 12th to 14th. So with that, let's go ahead and dive right in.

[00:02:11.536] Robert Merki: I'm Robert Merki. I'm director of product here at Cognitive VR.

[00:02:14.978] Tony Bevilacqua: Tony Bevilacqua. I'm the founder and CEO of Cognitive VR. And we are an analytics-backed developer tool platform that gives developers good insights into what's going on inside of their experiences.

[00:02:25.737] Kent Bye: Great. So I think one of the biggest challenges for VR development is hitting the 90 frames per second. So being able to match the frame rate, but also be able to detect when someone's experience is dropping frames. So maybe you could talk a bit about the approach that you've taken to be able to identify that.

[00:02:42.004] Tony Bevilacqua: Measuring at scale I think is the biggest issue for developers in understanding all the different configurations and hardware configurations they have to deal with. The biggest thing that our platform does is it gives us good insight on what hardware configurations all the users are using and it also gives the ability to automatically detect when the frame rate drops below 90 and then visually represent that in a way that developers can quickly find the issues and resolve them.
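
To make the frame-rate detection Tony describes a bit more concrete, here's a minimal sketch of the idea in Python: watch per-frame timings, and record an event with the player's position whenever the frame rate stays below the 90 FPS target for a stretch of frames. This is an illustration only, not the Cognitive VR plugin (which lives inside Unity or Unreal), and every name and threshold here is an assumption.

```python
# Sketch: detect sustained drops below the 90 FPS target and record where in
# the scene they happened, so they can later be overlaid on the exported
# geometry. Hypothetical names and thresholds; not the actual SDK.

TARGET_FPS = 90.0


class FrameDropDetector:
    def __init__(self, target_fps: float = TARGET_FPS, window: int = 10):
        self.min_delta = 1.0 / target_fps   # longest acceptable frame time
        self.window = window                # consecutive slow frames before reporting
        self.slow_streak = 0
        self.events = []                    # (timestamp, position, measured_fps)

    def on_frame(self, timestamp: float, delta_seconds: float, position: tuple):
        """Call once per rendered frame with the frame time and head position."""
        if delta_seconds <= self.min_delta:
            self.slow_streak = 0
            return
        self.slow_streak += 1
        if self.slow_streak == self.window:
            measured_fps = 1.0 / delta_seconds
            # Record where in the scene the drop happened.
            self.events.append((timestamp, position, measured_fps))


if __name__ == "__main__":
    detector = FrameDropDetector()
    # Simulate 60 frames running at ~72 FPS in one corner of the scene.
    for i in range(60):
        detector.on_frame(i / 72.0, 1 / 72.0, (3.2, 1.6, -4.1))
    print(detector.events)  # one event marking the sustained drop below 90 FPS
```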

[00:03:03.323] Kent Bye: Great. And so maybe talk about the pipeline in terms of what you're able to do in order to give this layer of visualization of the experience to be able to do these heat maps and other things. So maybe talk a bit about what actually happens.

[00:03:15.251] Robert Merki: Sure, yeah. So we're a plugin that sits in Unity or Unreal Engine. And what we allow you to do is export the geometry of your scene. And that gets decimated down to really a nice, small, very quick format for the web. And then our plugin also collects data from users and then overlays that onto that exported geometry. And that's all done on our dashboard. So you can really see, like, overlay data, events, telemetry, all that fun stuff right onto your scene geometry on the web.
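
As a rough illustration of the pipeline Rob describes, here's what the exported bundle might look like in shape: the scene geometry becomes a small web-friendly mesh file, and the collected session data is a separate, lightweight record that the dashboard overlays on top of it. File names, paths, the mesh format, and the fields are all hypothetical.

```python
# Sketch of the data shape only: an exported scene manifest plus a text-based
# telemetry session that a web dashboard could overlay on the mesh.
import json

scene_manifest = {
    "scene": "warehouse_level",
    "mesh": "warehouse_level.decimated.gltf",   # decimated copy of the engine geometry (format assumed)
    "sessions": ["session_001.json"],           # telemetry to be overlaid in the browser
}

session = {
    "session_id": "session_001",
    "hmd": "HTC Vive",
    "samples": [                                # sparse, text-only samples, not video
        {"t": 0.0, "pos": [0.0, 1.6, 0.0], "gaze": [0.0, 0.0, 1.0]},
        {"t": 0.1, "pos": [0.1, 1.6, 0.2], "gaze": [0.1, -0.1, 0.98]},
    ],
    "events": [{"name": "level_start", "t": 0.0}],
}

with open("session_001.json", "w") as f:
    json.dump(session, f)
print(json.dumps(scene_manifest, indent=2))
```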

[00:03:40.126] Kent Bye: So within Unity or Unreal Engine, basically, the character becomes the camera, or the movement, so you're moving the camera through the scene. So what I see you showing here on the floor of TechCrunch Disrupt is that you're taking this model of your whole environment that's kind of created into this geometry or mesh that's on the web, but you're overlaying onto it the path that people are walking through, and then not only the path of their VR locomotion, but also what they were looking at.

[00:04:07.260] Tony Bevilacqua: Yeah, absolutely. So we track the user's gaze as they move through the environment. We focus on what's available in the consumer market today, so it's the center point of the headset. Eventually, that'll move into retinal tracking as well. But the idea behind it is to kind of correlate different layers of information. I think gaze alone is interesting for kind of the marketing use cases and things like that, but I think that gaze in conjunction with other types of event-based telemetry and path tracking gives really good insights into what's actually going on within the scene. And looking at that in a 3D environment gives our customers the ability to kind of see what the context of the situation is.
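
Since there's no eye tracking yet, the gaze described here is just the headset's forward direction. Below is a minimal Python sketch of that approximation, with hypothetical names, an assumed fixed gaze distance standing in for a real raycast against the scene geometry, and a coarse spatial binning step of the kind a heat map could be built from.

```python
# Sketch: derive a gaze point from the headset center (head pose + forward
# vector) and accumulate it into coarse spatial bins. Illustrative only.
from collections import Counter

GAZE_DISTANCE = 2.0   # assumed fixed distance when no geometry raycast is available
CELL_SIZE = 0.5       # heat-map bin size in meters


def forward_from_quaternion(w, x, y, z):
    """Rotate the local +Z (forward) axis by a unit quaternion (w, x, y, z)."""
    return (2 * (x * z + w * y),
            2 * (y * z - w * x),
            1 - 2 * (x * x + y * y))


def gaze_point(head_pos, head_quat, distance=GAZE_DISTANCE):
    fx, fy, fz = forward_from_quaternion(*head_quat)
    return (head_pos[0] + fx * distance,
            head_pos[1] + fy * distance,
            head_pos[2] + fz * distance)


def bin_cell(point, cell=CELL_SIZE):
    """Snap a 3D point to a coarse grid cell for heat-map accumulation."""
    return tuple(int(c // cell) for c in point)


heatmap = Counter()
# One fake sample: head at (0, 1.6, 0), looking straight ahead (identity quaternion).
p = gaze_point((0.0, 1.6, 0.0), (1.0, 0.0, 0.0, 0.0))
heatmap[bin_cell(p)] += 1
print(p, dict(heatmap))  # (0.0, 1.6, 2.0) lands in cell (0, 3, 4)
```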

[00:04:37.740] Kent Bye: So maybe you could talk through a bit of a use case in terms of a developer, they've included the SDK within a build, and then where does it go from there for how they're using it?

[00:04:46.936] Robert Merki: Yeah, so one of our customers spends about 50 or 60% of their time just making sure that they're compliant with the 90 frames per second for the Vive. And not only do they not know when people crash or have low frames per second, they don't actually know where in their games either. So we've helped them, you know, literally visually find on the geometry of their level where their low-performance areas are. You know, it might be a texture bug on a certain graphics card or some sort of odd geometry that's causing issues. And then they can really quickly find and eliminate that bug or optimize their textures. That's a really common use case we've seen, just because performance issues have been so hard for developers. Another great one is sort of around the architectural use cases. So we have some real estate customers who want to sell condos. So instead of building a whole condo showroom, they just build it in VR, throw a headset on, and their customers can run around their pre-built apartments. And, you know, if they don't buy them, then instead of running around with a checklist figuring that out, they just kind of go through and see what people were interested in, what they looked at, and what turned them off.

[00:05:48.333] Kent Bye: So it sounds like that as people are going through these experiences, they may be going into a specific part of the experience. And if it has like badly optimized objects that are in there that have too many polygons or, you know, essentially you're able to locate that with a visualization that's on the web that has kind of like this warning sign. And then you can kind of click on that. And then what do you see from there? You're kind of seeing what they see.

[00:06:12.962] Tony Bevilacqua: Yeah, absolutely. I think that what we're seeing is that developers don't mean to make the mistakes, but sometimes they do. And it might be that they upload an oversized texture, or they have something that has more polygons than it should. So our system gives the ability to find that really quickly. Ultimately, our roadmap is to become part of the day-to-day vernacular of a developer. So our next steps will be creating build tools that are part of their daily builds to understand and detect when they've created changes that cause drops in performance. Ideally, that 30% to 50% that Rob was talking about, we want to eliminate that from the amount of time it takes to create VR experiences.

[00:06:45.682] Kent Bye: Yeah, I know that Unity has a lot of profiling tools to be able to actually see different performance as you're doing it on your own computer, but maybe you could talk about what you're able to do with those profiling tools, and then what you're not able to do, which is the gap that you're trying to fill at Cognitive VR.

[00:07:02.692] Robert Merki: So with regards to profiling tools, I think Unity's profiling capabilities are actually a huge asset, right? So being able to track performance, hardware data, things like that is just another input for us to then visualize on the web. And we can really show you, you know, GTX 1080s ran awesome on this section, but some other graphics cards run really poorly. So maybe there's some bug you need to look for. And that's all supplemented by Unity and even Unreal Engine's really, really good profiling tools.
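
As a rough illustration of that kind of hardware breakdown, here's a tiny Python sketch that groups frame-rate samples by GPU and scene area so one card can be compared against another per section. The data and field names are invented; the real platform would pull this from its own telemetry rather than a hard-coded list.

```python
# Sketch: group frame-rate samples by (GPU, scene area) and report averages.
from collections import defaultdict
from statistics import mean

samples = [
    {"gpu": "GTX 1080", "area": "lobby", "fps": 90},
    {"gpu": "GTX 1080", "area": "cave",  "fps": 89},
    {"gpu": "R9 380",   "area": "lobby", "fps": 88},
    {"gpu": "R9 380",   "area": "cave",  "fps": 61},   # the problem spot
]

by_gpu_area = defaultdict(list)
for s in samples:
    by_gpu_area[(s["gpu"], s["area"])].append(s["fps"])

for (gpu, area), fps_list in sorted(by_gpu_area.items()):
    print(f"{gpu:9s} {area:6s} avg {mean(fps_list):5.1f} FPS")
```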

[00:07:30.060] Tony Bevilacqua: The tools that we have are built in WebGL, and it's really about giving accessibility to people above and beyond just engineers. For the Unity heat mapping tools, for example, you have to have Unity on your computer with the source code, which we don't really think is a relevant use case for bigger teams, especially the product managers, producers, and so on. So being able to externalize the geometry into WebGL and make it available for everyone I think is really important. I think on the build process side it's really important to create potentially scores or some sort of baseline telemetry score that can be measured across multiple different experiences. So as part of a build pipeline, being able to have a score between 0 and 1 that represents a level of comfort or level of performance for a particular scene I think is really important. I think FPS is a very important element of that, and, you know, turning of the cameras and other types of things that might happen might also contribute to that telemetry as well. So we're thinking about things like the cognitive score, which might measure things like immersion, and comfort scores that look at things like performance metrics and things like that.
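
Tony's idea of a 0-to-1 build score can be illustrated with a small Python sketch. The weighting and the camera-turn threshold below are invented for the example; this is not Cognitive VR's actual cognitive or comfort score, just one plausible way to compose such a number from the telemetry already discussed.

```python
# Sketch: one plausible composition of a 0..1 score from frame-rate compliance
# and camera-turn speed. Weights and thresholds are assumptions.

def performance_score(frame_fps, target=90.0):
    """Fraction of sampled frames that met the target frame rate."""
    if not frame_fps:
        return 0.0
    return sum(1 for f in frame_fps if f >= target) / len(frame_fps)


def comfort_score(angular_speeds_deg_s, limit=120.0):
    """Fraction of samples where the camera turned slower than `limit` deg/s."""
    if not angular_speeds_deg_s:
        return 1.0
    return sum(1 for a in angular_speeds_deg_s if a <= limit) / len(angular_speeds_deg_s)


def scene_score(frame_fps, angular_speeds_deg_s, w_perf=0.7, w_comfort=0.3):
    """Weighted 0..1 score a nightly build could compare against the last build."""
    return (w_perf * performance_score(frame_fps)
            + w_comfort * comfort_score(angular_speeds_deg_s))


print(scene_score([90, 90, 72, 90], [30, 200, 45]))  # 0.725 for this toy data
```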

[00:08:26.032] Kent Bye: Yeah, it sounds like Unity and Unreal Engine are keeping track of some things that you're able to also keep track of, but you're in some ways using your visualization to take some of that data that's available and make it presentable in a way that goes beyond just a stream of numbers, so it has a spatial relationship relative to the actual experience.

[00:08:47.350] Tony Bevilacqua: We built the original dashboard as a dashboard. It was a normal analytics platform with charts and graphs on it. We took that to customers, and they were like, oh yeah, this is really cool, we'll install this in 18 months when we're done. And it was like, ooh, yeah, that's not going to work for us. So what we found is that they actually want to be able to represent the data in a very visual way within the experience to understand what's going on with their customers. With charts and graphs alone, it's very difficult to understand and correlate multiple sets of data in 3D space. And I think that's what Scene Explorer does really, really well. It gives you a really good visual representation of what's going on in each area of your experience.

[00:09:20.878] Kent Bye: Yeah, another thing seems to be the heat maps. I can imagine a time where someone's creating an experience, but they really don't have a lot of quantifiable data as to what people are actually seeing or what they're doing. And so it seems like one way would be to put this SDK within an experience, and then it's gathering all this data to get some of those metrics as to what people are actually paying attention to. And so I'm curious what people are doing with that.

[00:09:43.437] Robert Merki: Yeah, so for right now, a lot of it is based on, like we said, performance data. So that's been the really big use case so far. We've had some people approach us about advertising, marketing aspects. I think the scale is just not there yet for VR. When that hill comes, we'll climb it. There's some branding implications right now. I think the most interesting stuff right now is around games, narrative, story, optimization, performance, obviously. That's kind of our main use case for now.

[00:10:10.379] Kent Bye: So within a VR experience, it's at 90 frames per second, and so there's a lot of data that's going in real time. And so I'd imagine that if you're recording someone going through this experience, I don't imagine that you want to do it quite that frequently. But what is the frequency update rate of some of those different data streams?

[00:10:28.295] Tony Bevilacqua: Yeah, I mean, you can configure it yourself. But we're capturing effectively 10 frames right now per second in terms of the amount of data points we collect. You can configure that to be more aggressive or less aggressive. But the challenge is really how do you get the file off the device and up to the cloud, right? So I think that's the thing you have to be cognizant of. On mobile devices, you want to set that lower to make sure that you can get the payload off the device before it's shut down, right? So those are the things that we kind of focus on on that side. You know, we don't collect 3D data, per se. We collect data points, like XYZ coordinates of where things are happening. So the payload is all text-based data, right? So it's much smaller than what you'd expect in trying to cut movies or video or kind of 3D telemetry off the device.
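
Here's a minimal Python sketch of the capture behavior Tony describes: sample at a configurable rate (10 per second by default), keep the records as small text data (timestamps and coordinates), and flush them as one payload when the batch is full or the app is about to shut down. The class, fields, and upload step are placeholders, not the actual SDK.

```python
# Sketch: configurable-rate sampling with a text-only payload flushed in batches.
import json
import time


class TelemetryBuffer:
    def __init__(self, samples_per_second=10, batch_size=200):
        self.interval = 1.0 / samples_per_second   # lower the rate on mobile devices
        self.batch_size = batch_size
        self.records = []
        self._last_sample = 0.0

    def sample(self, position, gaze):
        """Record one snapshot if enough time has passed since the last one."""
        now = time.monotonic()
        if now - self._last_sample < self.interval:
            return
        self._last_sample = now
        self.records.append({"t": now, "pos": position, "gaze": gaze})
        if len(self.records) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.records:
            return
        payload = json.dumps(self.records)   # plain text, far smaller than video or 3D captures
        # A real client would POST this to the analytics endpoint, ideally
        # before the app shuts down on mobile.
        print(f"uploading {len(payload)} bytes for {len(self.records)} samples")
        self.records.clear()


buf = TelemetryBuffer(samples_per_second=10)
buf.sample((1.0, 1.6, -2.0), (0.0, 0.0, 1.0))
buf.flush()
```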

[00:11:07.381] Kent Bye: I think one of the challenges within interactive dynamic environments is that there may be a static part of the environment, but there may be things kind of moving through the scene and kind of dynamically moving. And so how do you account for time-based or things that are triggered based upon the reaction of what people are doing?

[00:11:24.089] Tony Bevilacqua: Yeah, it's a real challenge. We started off by representing everything statically right now. We use event-based telemetry to understand where dynamic things have happened. So when you use Scene Explorer today, it will export your scene completely static. And then once you have that scene available, then your data is rendered on top of it. If you have dynamic elements inside of it, they're just not represented. But if you set up events for those dynamic elements, the events are represented. So you can kind of see where things are happening. Ultimately, there's some other work that we could potentially do there, but it's an uphill battle.
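
A tiny Python sketch of that event-based approach to dynamic objects: since the exported scene is static, the game code marks moments like a door opening or an enemy spawning with a name, a timestamp, and a position, and those markers are what get drawn over the static mesh. Event names and fields are made up for illustration.

```python
# Sketch: positioned, timestamped events standing in for dynamic scene elements.
import time

event_log = []


def record_event(name, position, properties=None):
    """Append a timestamped, positioned event to the session log."""
    event_log.append({
        "name": name,
        "t": time.time(),
        "pos": position,
        "props": properties or {},
    })


record_event("enemy_spawned", (4.0, 0.0, -7.5), {"enemy_type": "drone"})
record_event("door_opened", (1.2, 0.0, -3.0))
print(event_log)
```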

[00:11:51.646] Kent Bye: Yeah, because you're basically trying to look at something that's kind of taking a snapshot of a time and being able to represent almost like in a movie of a VR experience, but having kind of things that end up being a little bit like a sculpture in some ways.

[00:12:03.995] Tony Bevilacqua: We're also trying to understand the relevance of causality as well. Did this cause that? And that type of thing. So we're starting to think about how do controllers factor into this? Can we represent those on Scene Explorer? And then 3D audio is the other thing we're thinking about as well. Trying to understand, are there cues that we can give developers around audio that give insights on, did the audio cause somebody to do something?

[00:12:26.803] Robert Merki: I think one of the other interesting things we're looking at is AR as well. So, you know, there's some complexity around sort of static scenes when you're trying to do a dynamic VR game. But what about AR when actually your entire world is dynamic? There's almost nothing static in like a HoloLens scene. So there's been some interesting things on our planned roadmap that I think will both solve some of the problems regarding dynamic VR content as well as AR content. And I see Tony's kind of nodding, but I don't know if I've told him yet what I think.

[00:12:56.383] Kent Bye: Well, I think one of the things that's a challenge for VR experiences is storyboarding. You're talking about a 3D immersive environment, but yet you're only able to express things in 2D a lot of times. But this feels like a way to start to do rapid iterations, to start to, in some ways, maybe do a dynamic capture of an environment. And if you are able to capture some of those other elements, then you can, in some ways, use that to iterate in future changes, kind of like a sculpture 3D version of a storyboard.

[00:13:25.512] Tony Bevilacqua: So there's two pieces of that. We want to be involved as early on in the development process as possible, right? So as soon as you start developing the experience, install the plugin, kind of see how your scenes change over time and how that content kind of changes over time. The other side of it is this building a feedback loop, right? So I think that telemetry is very good, you know, in terms of discovering insights as a human, but I think there's a feedback loop to be built here where Scene Explorer, as well as our tooling, can influence the experiences moving forward, potentially in a personalized or dynamic way.

[00:13:53.585] Kent Bye: Yeah, so what's next for you guys? What's sort of the next thing that you're moving forward to now?

[00:13:58.307] Tony Bevilacqua: Yeah, our biggest stuff right now, I think we have some pretty good features and capabilities. We're keeping a really good ear to the ground with our customers in terms of what's going on. We've got about 125 or so customers, about 25 installations that are using the platform. So we've got a good basis to kind of collect information. Our biggest thing is we want to be everywhere that people are building things, right? So right now we support Unity. We've got an Unreal plug-in coming out as well. And then we're going to be supporting other platforms moving forward. It's important that we are available on the platforms that our customers are building content on.

[00:14:26.927] Kent Bye: And I know a lot of applications, when they crash, they have an option that says, there's a crash that happened. Would you be willing to send data back to the developer to look at it? Is that something that you've kind of integrated within the workflow here of Cognitive VR?

[00:14:41.537] Robert Merki: We haven't released that feature just yet, but that's a huge, huge, huge thing for developers. One of our really good customers, one of their biggest problems, they get these really angry Steam messages from users who say, you know, I crashed, some expletives, and then, you know, please fix your game, refund, blah, blah, blah. And they want to know what happened, and they have no idea. So a feature like that, and we've been looking at ways to do that, would be absolutely invaluable to some of those customers. And along with that, you know, not just that, but like, you know, when you finish your level, maybe like a happy face, sad face, do you want to send feedback to your developer? Was this a happy face for you or a sad face? Okay. And then, you know, you as a developer can see, okay, show me the sad faces. Okay, these people did not like the experience. Oh, it's because they missed X, Y, Z part of our narrative, or they had a performance issue.
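
To illustrate the qualitative loop Rob and Tony describe, here's a small Python sketch: a happy-face or sad-face answer is stored on each session record, and the developer filters down to the sad sessions and looks at what those players hit. The session structure is invented for the example.

```python
# Sketch: attach end-of-session qualitative feedback and filter by it.
sessions = [
    {"user": "a", "feedback": "happy", "crashed": False, "min_fps": 90},
    {"user": "b", "feedback": "sad",   "crashed": True,  "min_fps": 88},
    {"user": "c", "feedback": "sad",   "crashed": False, "min_fps": 54},
]

sad_sessions = [s for s in sessions if s["feedback"] == "sad"]
for s in sad_sessions:
    reason = "crash" if s["crashed"] else f"frame rate fell to {s['min_fps']} FPS"
    print(f"user {s['user']}: sad face, likely cause: {reason}")
```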

[00:15:30.870] Kent Bye: I see. So they're able to send the data back of whatever happened in their experience, even though it crashed, it may have still been captured and then be able to be sent back.

[00:15:39.294] Tony Bevilacqua: I think the biggest thing that Rob's kind of mentioning is having qualitative data available along with the quantitative is really important, right? So being able to ask a qualitative question at the end of the experience and then, you know, marking the user profile with that particular qualitative data allows filtering and segmentation, right?

[00:15:54.078] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:16:01.086] Robert Merki: Wow, there's a lot there. I think VR has that magic feeling that you don't get from any other platform. That first time you put on a high-quality headset, I remember actually the first time I met Tony, he kind of strapped a headset on me. The first five seconds, I just knew that there was so much potential there. It was like the first time I tried a 3D game when I was eight years old or whatever it was. There was just that magic there that I knew I couldn't go outside and get that feeling. I couldn't go anywhere else. I think one of the big issues that VR content has right now is a lot of people are trying to mimic existing 3D experiences in VR. Like the Minecraft experience that just came out for Oculus is an amazingly polished, great experience, but it's not a VR-first experience. It really does feel, even though it's very polished, like it wasn't built for VR. I'm really, really excited to see what the Minecraft that's built VR-first is going to be, and I think as soon as that hits, like, PSVR, Vive are gonna be sold out, you know, full nightmare for parents, you know, can't find one. Yeah, and I'm really excited for that day. I think nobody knows what that is. I think that experience is on someone's laptop right now in a basement somewhere. I think it's gonna be some indie developer who has some crazy idea that just becomes a huge hit.

[00:17:18.154] Tony Bevilacqua: Yeah, so just some final thoughts. I think I've just talked a little bit about VR, AR. A lot of people think that these two things are going to kind of converge together. I think they're two very independently separate spaces, right? And I think VR has very powerful capabilities on the gaming entertainment side and immersing people in content experiences and giving them that level of presence there. I think that that has a place alongside augmented reality as well, where you're trying to augment the existing world. And I think VR has a kind of a great opportunity ahead, and I'm excited to see not just what they do on the entertainment consumer side, but also what we see on enterprise around training simulation and creating really great experiences that require immersion.

[00:17:55.454] Kent Bye: Awesome, well thank you so much.

[00:17:56.876] Tony Bevilacqua: Absolutely, thanks a lot. Thanks a lot.

[00:17:59.197] Kent Bye: So that was Tony Bevilacqua, the founder of Cognitive VR, as well as the product manager, Robert Merki. So I have a number of different takeaways from this interview. First of all, their whole process of taking the assets from Unity and Unreal and translating them into a 3D mesh that can then be viewed with WebGL in a web browser is super elegant, because it lets you take all of this data and metrics and map it onto that geometry in 3D space. And talking to Oliver Kreylos, one of the things he told me a couple of years ago is that, you know, in the realm of data visualization, there are really two areas: one in which the data is actually connected to 3D geometry, and then more abstract data visualization that doesn't have any spatial connections. I think right now data visualization that does have spatial relationships is the area where there's so much benefit and a clear, easy win, in terms of how much more visceral and engaging it is to actually go in and see what people are doing within these different experiences. From the perspective of a developer, you're kind of sending out these experiences to people, and there isn't really any good way to get feedback as to what's happening, except if you're actually taking videos of people and you're able to watch and observe. That's probably the best, to be able to actually see and hear what people are thinking and doing. But in terms of getting the hard data and being able to see different patterns of what people are going through and what they're looking at, I think a system like Cognitive VR is probably one of the most elegant that I've seen so far. There was just an Immerse conference that happened up in Seattle, and they had this Shark Tank type of competition with all these different startups. And in that competition, Cognitive VR did come out as the winner, and I think it's partly because of the clear application and win for developers in what they're doing. Looking at the future, it's interesting to hear how they're thinking about incorporating more of this qualitative feedback and perhaps other types of biometric information and data. I could see a wide range of different applications, from market research to advertising, where you could put people into a virtualized experience and get all this additional information and insight as to what's actually happening as they go through it. Adding eye tracking in the future, I think, is going to be a huge win, especially if you're able to go beyond where the center of the viewport is. I think that's what they're doing right now: when you're looking around, it's a close approximation, but it's not as precise as doing eye tracking. Doing something like eye tracking would probably have to sample at something a little bit more than 10 frames per second, though I'm not sure; you sort of have to do this trade-off between the fidelity of the information and the overload of the information. So I guess it kind of depends on the experience and what you're doing. But for now, it seems like just using the center of the headset is a pretty close approximation for what people are looking at. It sounds like Cloudhead Games was using it for some of their optimization of The Gallery.
And I think also in the future, being able to look at bug reports when an experience crashes, to actually see where people were at within the experience and what they were looking at, just gives more information and data to go directly into a location and start to optimize the experience. Overall, for developers, they're trying to optimize and streamline and make their experience hit the frame rate of 90 frames per second, and it sounds like quite a lot of effort is put into that process of just optimizing an experience for performance. You know, one question that I have in listening to this again is whether there are different types of experiences where this works particularly well. I think with an adventure game like Cloudhead Games' The Gallery: Call of the Starseed, where you're literally exploring around this huge space, perhaps the developers want to get a little bit of insight into how people are actually looking around, whether they're able to solve the different puzzles, or whether there are different places where people are getting stuck. That's just helpful information for the developers to be able to continually tune and refine their game. But for some experiences, it may not work as well. For example, if you have a lot of procedurally generated content, I can imagine that could be an issue. Or if it's a highly dynamic and interactive type of experience, something like Job Simulator, where you're moving the existing state of the experience around so much, then having all those dynamic interactions and objects within a scene may make it a little bit more difficult to get clear insight as to what is happening. You know, it really comes down to framing what you're trying to figure out. If you're trying to figure out how people are exploring a space and what they're looking at, this sounds like a very useful tool for figuring that out. And in the future, I think this type of VR analytics platform is going to be much more applicable to other types of research, being able to capture and reflect on human interactions within different physical spaces. So that's all that I have for today. I wanted to just thank you for listening to the Voices of VR podcast. And if you'd like to support the podcast, then please do tell your friends, spread the word, and become a contributor to my Patreon at patreon.com slash Voices of VR.
