At the IEEE VR conference this year, there was a pre-conference workshop on Immersive Analytics, which looked at how to use VR and multi-sensory interfaces to support analytical reasoning, decision making, and real-world analytics tasks by immersing users in their data. Data visualization expert Chris North gave the keynote for the Immersive Analytics workshop, and I had a chance to catch up with him to talk about the four different ways that intelligence analysts use space: Big Visualization, Big Interaction, Big Cognition, and Big Algorithms. Chris also explains how the principles of embodied cognition are leading researchers to look at using the body, physical movement, and the surrounding environment to support and amplify distributed cognition.
Here are more details about some of the other talks at the Immersive Analytics workshop at IEEE VR 2016, and a link to Margaret Wilson's paper "Six Views of Embodied Cognition," which Chris recommended everyone check out.
Homework for Immersive Analytics: Read Margaret Wilson's "Six Views of Embodied Cognition." https://t.co/1kQffI3Y2Z pic.twitter.com/kbXzS7QLKK
— Kent Bye (Voices of VR) (@kentbye) March 20, 2016
And here’s a graphic of the Pirolli-Card Sensemaking Loop that Chris referred to in his talk as a process that intelligence analysts use to do their analysis.
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to the Voices of VR podcast. On today's episode, I have Dr. Chris North of Virginia Tech. At the IEEE VR academic conference this year, before they get started with the regular conference, they have a number of different workshops, and one of those workshops this year was Immersive Analytics. They're really looking at how to use virtual reality technology in order to do data visualization, analytics, and sensemaking, being able to use the affordances of VR in order to help with data mining or information visualization. As part of this Immersive Analytics workshop, they had Chris North come in and give a keynote. His expertise isn't necessarily in VR per se, but he's working with intelligence analysts and helping create these huge screens in order to help different people within the intelligence community better do their job in data mining and sensemaking. So we'll be talking about immersive analytics and the four different ways to use space to help with cognition, as well as this really interesting concept of embodied cognition, which I think is going to be really important for the virtual reality community. So that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by Unity. Unity is the lingua franca of immersive technologies. You can write it once in Unity and ensure that you have the best performance on all the different augmented and virtual reality platforms. Over 90% of the virtual reality applications that have been released so far use Unity. And I just wanted to send a personal thank you to Unity for helping to enable this virtual reality revolution. To learn more, be sure to check out Unity3D.com. This interview with Dr. Chris North happened on Sunday, March 20, 2016, right after the end of the Immersive Analytics workshop and right before Chris was about to head home. So with that, let's go ahead and dive right in.
[00:02:22.345] Chris North: Well, my name is Chris North. I'm a professor at Virginia Tech. I do a lot of work in the space that's kind of between visualization and data mining and how to use big displays to support big data analytics.
[00:02:35.859] Kent Bye: Great. And so at the presentation today, you were presenting to the wider virtual reality community about immersive analytics and talking about space and the four different dimensions of space. So maybe you could talk a little bit about how you see those four different dimensions of space kind of playing out in immersive analytics.
[00:02:54.396] Chris North: Sure, yeah. So today we talked about the four roles of space in big data analytics. The first role having to do with supporting big visualization, the second being big interaction, the third being big cognition, and the fourth being big algorithms. And so we talked a little bit about how each of those can support big data analytics. Some of the key concepts really come down to the notion of being able to support visualization of multi-scale spaces. And so that has to do with being able to see the fine details, like textual information that you would see right up close, inside of the big scale of maybe many documents and how they all relate to each other in, say, a text analytics scenario. And so with big displays, the opportunity you get is to be able to see all those details all at once. You're not just seeing one detail at a time, but you're seeing all the details all the time. And that leads to a way to navigate that information that we call physical navigation. So if you've got all this information up on a big display, you can kind of step back to get the big picture. Or you can step up close to see the fine details and read the text, right? And you can kind of walk across the front of the display as something analogous to panning across a data set. And so that kind of physical navigation enables you to make use of your innate physical capabilities to navigate the world, to navigate information. It relieves the need to do a lot of virtual navigation, where you're doing some kind of panning and zooming in the space using some sort of mouse-based or other kind of interaction technique. And so what we find in all our studies is that the physical navigation is much more efficient than the virtual navigation. And that's for all kinds of tasks: for finding information in a big pile of data, for finding relationships, for having a better understanding of spatial relationships in there, for example. And those kinds of effects are all beneficial in a big visualization, big physical navigation setting. Some of the other important characteristics of these big display spaces have to do with supporting cognitive abilities. What we find in working with analysts, when they're analyzing big data, is they kind of externalize their thoughts into the space. Imagine an intelligence analyst looking at a bunch of documents, trying to figure out what the bad guys are up to. Maybe a bunch of news articles or a bunch of intercepted emails. When you see them do this in their offices, they'll lay out all this information on paper, on the tabletop, and circle things and draw lines and all that kind of stuff. Well, we can do that same kind of thing on a big display space. We can get all those documents up there at the same time. And then they start manipulating. They start grouping the documents actually on the display. They start drawing and annotating on the documents. And so that gives the analyst a kind of space in which to externalize all these thoughts they have and start creating structure out of the information. And what's really cool about that is now we can bring data mining into the picture to actually observe the analyst doing that structuring of the information and exploit that to do big data analytics.
If the algorithm sees the analyst starting to cluster some information, the algorithm can actually learn from that using machine learning techniques and say, oh, OK, if you're clustering that information, then probably you should also read this other information, which would get clustered with that information. So then it becomes a kind of collaborative process between the human and the algorithms as we do these complicated big data analyses. So there it is in a nutshell.
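To make that concrete, here's a minimal sketch of the kind of suggestion loop Chris is describing. This is not his group's actual system; the TF-IDF features, the `suggest_for_cluster` helper, and the cutoff are all illustrative assumptions.

```python
# Hypothetical sketch: treat an analyst-made cluster as a training signal and
# suggest unread documents that would likely belong to the same cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_for_cluster(all_docs, clustered_ids, top_k=5):
    """Rank documents outside the analyst's cluster by similarity to its centroid."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(all_docs)
    centroid = np.asarray(vectors[clustered_ids].mean(axis=0))  # the cluster's "topic"
    scores = cosine_similarity(vectors, centroid).ravel()       # one score per document
    outside = [i for i in range(len(all_docs)) if i not in set(clustered_ids)]
    return sorted(outside, key=lambda i: scores[i], reverse=True)[:top_k]

# e.g. suggest_for_cluster(docs, clustered_ids=[0, 4, 7]) -> indices worth reading next
```

Each time the analyst drags another document into the pile, the cluster centroid shifts and the suggestions update, which is the collaborative human-algorithm loop he describes.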
[00:06:11.969] Kent Bye: So it sounds like you're using these huge displays, and since you could do a lot of this virtually, I'm just wondering about the implications of doing this in, like, a room-scale virtual reality experience, or maybe even an Oculus Rift where you're sitting down and looking at it. It seems like having these expanded virtual spaces would allow you to do that without having to have like 8 or 16 monitors in front of you.
[00:06:34.072] Chris North: Yeah, yeah, so in the past there's been a big trade-off there, where the big tiled LCD display spaces have enabled this kind of multi-scale visualization where you can get up close and actually read the fine details of the text, really small, because of the high density of the pixels. But with the recent trends in HMDs and things like that, we're starting to get that kind of high-density visualization capability in some of these virtual display systems. And so I think there's an opportunity forthcoming to simulate what we have been doing in the past in these large, high-resolution display spaces with more virtual-oriented kinds of displays, so that we can get all those advantages of the multi-scale, along with the inherent physical navigation abilities of head-mounted displays and things like that. So I think there's great room for opportunity there, and I'd love to see some experiments along those lines to see if the HMD displays are really at that point yet, or if we could somehow modify them to make them such. I think of, like, inverse fisheye views, where you could put a lens on the display to somehow create more pixels in the foveal area of the user's vision. But yeah, those are experiments I'd like to see happen.
[00:07:42.972] Kent Bye: The one really interesting thing that you said is that one analyst said that he hates information visualizations because they hide the data, which was really interesting, because he wanted to see all the raw data to be able to look at it and find the patterns in aggregate. And so I'm wondering, what is it about human perception that makes looking at the raw data and seeing those patterns unique, and how are you looking at that dimension of it?
[00:08:06.004] Chris North: Yeah, that was a funny example. We usually think in terms of visualization as being for the purpose of seeing the data, but here was somebody saying that, no, it's hiding the data. And through talking with this analyst, the realization was that he was referring to the data being highly aggregated in the visualization, where this big data is being crunched down into small data so that it can be displayed on a regular screen. And that's hiding all that detailed information in there that he's interested in. So the opportunity is how to use these big display spaces to de-aggregate the information, to actually display the original source information, but to augment it in some way so that it does have a visual form. So just showing the text is probably not good enough, but can we put some encodings around the text, some visual encodings around the text in order to capture some of the quantitative aspects of the text. So that when you do this kind of physical navigation, you can step back and see those big patterns, but easily step forward to see the detailed information itself, enabling that sort of clean transition, very rapid transition between details and big picture.
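As a toy illustration of what "encodings around the text" might look like, here's a sketch that derives a few quantitative signals from each raw document and maps them onto border styling for its tile on the display. The specific signals and the color ramp are my assumptions, not the design Chris's group uses.

```python
# Hypothetical sketch: keep the raw text visible, but wrap each document tile
# in a border whose width and color encode quantitative aspects of the text,
# so patterns read from a distance while details stay readable up close.
import re

def border_encoding(text, query_terms):
    """Map simple per-document statistics onto border styling for a tile."""
    words = re.findall(r"\w+", text.lower())
    hits = sum(words.count(t.lower()) for t in query_terms)
    relevance = min(hits / 5.0, 1.0)                   # saturate after 5 query hits
    return {
        "border_width_px": 1 + round(relevance * 9),   # thicker = more query hits
        "border_rgb": (round(255 * relevance), 0, 0),  # brighter red = more relevant
        "tile_height_px": 40 + min(len(words), 2000) // 10,  # longer doc = taller tile
    }
```

Rendered across a wall of documents, the bright, thick borders would pop out when you step back, while stepping forward still gives you the original text rather than an aggregate.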
[00:09:10.277] Kent Bye: So in your presentation, you recommended for homework for immersive analytics to read Margaret Wilson's "Six Views of Embodied Cognition." And so I'm wondering, you know, what does embodied cognition have to do with immersive analytics?
[00:09:23.568] Chris North: Yeah, so it's interesting. So, yeah, I'd encourage all of you to go read Margaret Wilson's paper. There are many views, right, about how embodied cognition works, and, you know, this is not a well-understood thing. But the basic idea is that it builds on the theory of distributed cognition: that when people think, they think not just in their heads, but they think using their body, they think using interaction by playing with information, and they think using the environments around them to do that. So that's the point of distributed cognition, that our cognition actually takes advantage of our bodily manipulation abilities within the environment around us. So, how can we exploit those kinds of principles in data visualization? Well, that's the research question. How do we do a better job of that? These big, rich, interactive display spaces, our goal is to support that. An example is what I was describing earlier about how analysts manipulate documents on their desktop in order to create structure. That's them thinking with these documents in the space. When they reorganize the documents to make groups and clusters, that's them embedding their thoughts into the space. That not only helps them create structure out of chaos, but it also supports their memory of it. So later, when they come back to it, they say, yeah, okay, I remember what I was doing here. Let me finish now. So I think it's a research agenda for us in visualization to think about how we can create sensemaking tools that better exploit the environment, the space around us, to support analysts in that kind of work.
[00:11:01.136] Kent Bye: And when you're talking about algorithms, this is kind of interesting: how could you use space to better understand algorithms?
[00:11:09.161] Chris North: Yeah. Well, the thought there is that there's a serious usability problem in analytics, and that is that analytical algorithms are complex, and those who want to use them aren't necessarily experts in them. So algorithms have parameters, and the parameters have to be tuned in order to get findings that are useful. What is the meaning of those parameters? How do you tune them? What values do you pick for those parameters? It's a hard problem. It's a serious usability problem. So one possible way to solve that is by making use of the space and these rich interaction spaces. And that's kind of what I was describing earlier, where the algorithms can actually observe. So the idea is to treat the space as the means of communication between the algorithm and the human. Humans think in space, as we've been talking about, like the analysts organizing papers on their desk. Algorithms can also compute on space. So what that means is, as the analyst is doing the work they're doing anyway, which is organizing things on their desk and circling important words and highlighting and drawing lines and stuff like that, that's all stuff they've got to do anyway just to make sense out of the information. Well, why don't the algorithms actually make use of all that? Why don't the algorithms observe those interactions occurring and do machine learning on them in order to help the analyst do the work? A simple example is if the analyst starts clustering some documents, we can use machine learning algorithms to learn those clusters and suggest to the analyst, oh, I see what you're clustering on here. Well, here's a bunch of other stuff that would also fit into that cluster. You might want to read that as well. So this creates a very smooth analytical sensemaking process between human and algorithm, where as the human is going through this incremental formalism process of beginning to create hypotheses, the algorithms can be doing machine learning on that, figuring out what hypotheses the user is beginning to come up with, and exploiting that to forage for additional relevant information, and to forage for additional structure that would be related to the structures the users have created. So it creates a kind of analogy between human incremental formalism in the cognitive process and algorithmic incremental learning in the machine learning process.
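One simple way to imagine "algorithms computing on space" is a weight-update rule triggered by spatial moves: when the analyst drags two documents near each other, the terms they share get upweighted, so later foraging and layout steps reflect the emerging hypothesis. This sketch is in the spirit of what Chris describes, but the update rule and helper names are entirely my own assumptions.

```python
# Hypothetical sketch: learn term importance from the analyst's spatial moves.
from collections import Counter, defaultdict

term_weights = defaultdict(lambda: 1.0)  # model state: importance of each term

def tokens(doc):
    return Counter(doc.lower().split())

def observe_drag_together(doc_a, doc_b, rate=0.2):
    """Analyst placed doc_a near doc_b: shared terms likely explain the move."""
    for term in tokens(doc_a).keys() & tokens(doc_b).keys():
        term_weights[term] *= 1.0 + rate  # incremental multiplicative upweight

def weighted_similarity(doc_a, doc_b):
    """Similarity under the learned weights, for foraging and relayout steps."""
    ta, tb = tokens(doc_a), tokens(doc_b)
    return sum(term_weights[t] * min(ta[t], tb[t]) for t in ta.keys() & tb.keys())
```

After each observed interaction, the system could rescore unread documents with `weighted_similarity` and surface the ones that now rank highly, mirroring the analogy between incremental formalism and incremental learning.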
[00:13:20.781] Kent Bye: So what are some of the biggest open questions you see in these four different domains of how to use space to be able to do immersive analytics?
[00:13:29.504] Chris North: Lots of hard questions yet to solve. By no means are we even close to understanding how best to make use of space in the analytic process. You know, I think in terms of space for big visualization, one of the big questions there is how do we design physical and virtual navigations that work well together? The displays are never going to be big enough to display all the information. There's always more data than pixels. And so at some point, virtual navigation is going to be required, and the question is how to fit that in well with the physical navigation that is already in place. So I think there are all kinds of design questions related to that. I think with respect to the space for cognition, we've seen how analysts can use kind of a 2D space to externalize their thoughts and create clusters of documents and so on. I kind of wonder how that will extend to a 3D space, a virtual environment. So I think there are lots of experiments that one could do there to see what kinds of new structures analysts might create in a 3D space, where they have a lot more freedom to create different kinds of spatial organizations. And I think, when it comes to the algorithmic aspect, the examples I've described are ones where there's an individual algorithm that's interacting with the user. But boy, how do we integrate many algorithms, right? How do we have many different kinds of algorithms that can recognize many different kinds of structures that the users are creating, and have them all working together in this rich space that understands the idea of mixed metaphors and things like that? So there's a lot of work to do.
[00:15:03.613] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:15:11.497] Chris North: Well, you know, for me, coming from a visualization perspective, where a lot of my work has been in 2D in the past, I think what you're asking is, what is the value of 3D in that? And, you know, that's always been a hotly debated question. In terms of what I've been talking about today with using space to support analytics, I think the question really is about whether these concepts that we've been exploring in 2D will extend to 3D. Will 3D indeed give us that additional dimension in which to externalize cognition, that additional dimension in which to physically navigate multi-scale data? Those are open questions. I'm looking forward to seeing how that turns out.
[00:15:51.927] Kent Bye: OK, great. Anything else left unsaid you'd like to say?
[00:15:54.854] Chris North: VR is awesome. OK.
[00:15:57.601] Kent Bye: Great. Well, thank you so much. See you around. So that was Dr. Chris North. He's a professor at Virginia Tech as well as the Associate Director of the Discovery Analytics Center. There are a number of different takeaways from this talk. First of all, I think that the concept of embodied cognition is something that's going to be really super important for many different dimensions of virtual reality, including immersive analytics, but also a lot to do with education. The basic concept of embodied cognition is that our cognition happens throughout our entire body, not just in our brains, and not just in our body, but also in our entire environment. There are a number of different interviews that I did about embodied cognition at IEEE VR. Another one that I'll be publishing here soon was about using dance in order to teach computational thinking: being able to move your body in specific ways in order to remember different sequences that could then be broken down into for loops or conditionals, and this feedback loop that can happen when you're trying to take abstract concepts and put them into your body, and then you start to move around and you can remember them better. Well, the idea extends to more abstract thinking, such as cognition. A big point that I think Chris was making is that a lot of these intelligence analysts who are working for the government have this very specific sensemaking process, with different rigorous steps to go through this whole process of gathering all the data, analyzing it, coming up with hypotheses, and foraging for information that may falsify a theory. It's a whole process that's really difficult to keep completely within our working memory. And so he's creating these huge screens where people are able to start to move information around and start to tack the information into different parts of the algorithms and processes. It's interesting to hear the connection between space and embodied cognition and how we think, and so I think there are going to be a lot of applications for education, as well as for sensemaking tools like this for intelligence analysts, who right now are using just 2D screens. Perhaps 3D will add something that's new or different. And at the end, Chris did say that the value you get from 3D is a little bit controversial within the visualization community. Coming in from the outside, my bias would be, of course, 3D is just going to be better. I didn't have time to ask Chris, but what I've heard from other information visualization experts is that adding 3D to different visualizations can actually make the data more hidden or obscured: from occlusion, first of all, but also from different perspective issues that start to come in. 3D isn't always better. And from what I understand, there are kind of two major types of information visualization. One is visualization that is connected to some sort of inherent 3D space, like the geography of the earth or the globe, or perhaps medical imaging that's relative to your physical 3D body. But there's this whole other class of abstract 2D visualization and data mining that I think Chris is a bit more expert in. And so he's taking a more cautious approach in terms of what VR is going to actually add to the process.
But I think one of the things that I really take away is that there's this sense of embodied cognition that can happen, and that actually moving around in physical space is much better than panning and zooming within a computer screen. Physical navigation seems to be a lot better than virtual navigation. So you can start to extrapolate what that may mean for being able to analyze all sorts of information within a VR environment. Now, is the resolution within a virtual reality HMD good enough to really be able to walk up to a document and read all the text? I think that it's getting there, but it may be another generation or two before it gets to the pixel density that someone who's doing a lot of text reading would really want. I think that it's already pretty good, but perhaps being surrounded by the high-density screens that are just 2D right now, those kinds of interactive panels, could actually be a little bit better for the specific use cases that Chris is looking at, which is the intelligence community. Another interesting takeaway is that these different sensemaking processes are pretty well established in terms of being able to go through a specific process, and it sounds like they're already starting to use machine learning and artificial intelligence to augment that process. I can imagine a situation where some sort of AI machine learning neural net has read through all the source documents and knows some sort of representation of that knowledge graph, but still needs a human being to help analyze and make meaning out of it. As an analyst starts to go down a certain analytical path, it sounds like there are these different machine learning AI algorithms that are jumping in and saying, hey, why don't you take a look at this or this? It's related and clustered to this other information that you've flagged as being interesting or relevant. That starts to bring in all sorts of really interesting questions: are AI algorithms going to be biased towards any specific information? Are these intelligence analysts going to be relying upon something that's going to have its own new set of prejudices or biases that may be inherent to the limitations of the technology? And it sounds like moving to multiple different algorithms at the same time is a big open question for the future. So it'll be interesting to see how these four different areas of big visualization, big interaction, big cognition, and big algorithms start to develop over time. Again, if you're interested in more information about immersive analytics, the website for the workshop at IEEE VR this year has links to a lot of the different academic research institutions that are really starting to look into this issue in depth. I think I have at least one or two other interviews from people who were presenting there at the conference about immersive analytics. And again, the concept of embodied cognition came up a lot, and I've flagged it as something that's going to be really important and vital to the VR community as we move forward, because think about the capability of VR to actually track your body as you're thinking about different things.
There are all sorts of different ways that VR, by tracking your hands as well as your feet, could start to track your body and correlate that in different ways to support this concept of distributed cognition. So again, this concept of immersive analytics to me is super fascinating. It seems to be a whole domain and field where they're trying to figure out how to support analytical reasoning and decision making with all these different multi-sensory interfaces that support collaboration and allow humans to immerse themselves in the data in a way that supports real-world analytics tasks. So that's immersive analytics in a nutshell, and I'm looking forward to hearing how this specific field of VR develops. With that, I just want to thank you for listening, and please do consider spreading the word and telling your friends about the podcast. IEEE VR is an academic conference where I was literally the only journalist covering it; no other news outlets were there covering these types of insights and discussions. So if you're enjoying these insights and want to support spreading this information and knowledge to both yourself and the wider virtual reality community, then please do consider becoming a contributor to my Patreon at patreon.com slash Voices of VR.