Doug North Cook is an assistant professor at Chatham University, and he holds an immersive design residency at the Fallingwater architectural landmark near Pittsburgh. North Cook is trying to synthesize many different design frameworks and principles, drawn from industrial design, user-centered design, universal design, inclusive design, and architecture, in order to come up with new frameworks that are uniquely suited to virtual reality. He's also deeply interested in accessibility, trying to break down some of the fundamental affordances of different types of VR input, and trying to figure out how to design VR for the broadest range of people.
North Cook and I also deconstruct different aspects of the keynote at Oculus Connect 6, especially what wasn't being talked about: the deeper ethical and privacy implications of where the technology is headed. He's concerned by the quasi-religious zealotry of language about being "true believers" in the technology, absent a broader conversation about the underlying business models that will sustain it, or the ethical design principles that, if neglected, could let the technology drift towards dystopic futures of surveillance, compromised safety, or manipulation. We also talk about his efforts to be as inclusive as he can in empowering underrepresented minority artists and creators, and about his experiences with the recently released Half + Half by Normal VR.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So continuing on in my coverage from Oculus Connect 6, today's interview is with Doug North Cook. He's an assistant professor of immersive media at Chatham University. He's got a new program there that just started this year, and he's also got an immersive design residency at Fallingwater, which is an architectural landmark there in Pittsburgh. So Doug is somebody who's a designer. He's very influenced by industrial design and architecture, trying to bring in all these different disciplines, and trying to reckon with this issue where there's a lot of proxies for these different design frameworks: trying to fuse them all together, take what's still valid from each of those frameworks, but still look at what's unique within the virtual reality medium. It's this fusion of trying to bring together all these different design practices, and Doug is really at the frontier of doing that while actively teaching a lot of students. This was the first interview that I did after the keynote at Oculus Connect 6, so we're just kind of jamming on the things that were on top of our minds, especially around ethics and privacy relative to some of the announcements that were made. So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Doug happened on Wednesday, September 25th, 2019 at the Oculus Connect 6 conference in San Jose, California. So with that, let's go ahead and dive right in.
[00:01:35.164] Doug North Cook: I am Doug North Cook. I'm an assistant professor of immersive media at Chatham University, where I run a Bachelor of Arts in immersive media, which is a four-year undergraduate program focused on immersive technology and design. And I also run the immersive design residency program at the Fallingwater Institute, where we bring immersive design professionals out into the woods for a week, take all of their technology away, and make them design things with pen and paper and cardboard.
[00:02:03.090] Kent Bye: Nice. So you are teaching people how to do VR. And there's this whole branch of project-based learning and inquiry-based learning where you make stuff. A lot of influences from architecture, so trying to do interdisciplinary. So as someone who's trying to teach people how to do VR, what sort of influences are you drawing from?
[00:02:22.761] Doug North Cook: Oh, yeah. I mean, you mentioned a couple right there. One, for me, is definitely architecture. Some of that comes out of the work that I do with Fallingwater, which is trying to see and adapt the long history of one of humanity's longest-standing design practices, one that existed before design was even a way that we thought about these things: just this primal urge to put ourselves inside of spaces and to create spaces, to make spaces, to find comfort, to find expression through comfort. So trying to embrace the way that architecture approaches issues of designing for multiple senses, designing for human scale, adapting accessibility principles from architecture like universal design, where we're trying to design for a variety of human bodies, abilities, and types, designing for aging, trying to embrace practices that embrace the fullness of what it means to be a human person, and that that is not a static thing, that what it means to be human is an adapting thing, and our technology has to be adaptive and responsive. And that can be adapted from product and industrial design too. There's this really great recent article by Don Norman reflecting on his career as a designer and his failures to help the design community design for the man that he is now, which is someone who has aged significantly. So we're trying to approach this through a design perspective and not just from a technology perspective. Computer science has a lot of really great thinking and methodology, but it is very disconnected from the humanities, generally speaking, especially at the university level. So we're really trying to make sure that we are looking at what it means to be human, and our university has a really interesting approach and perspective.
We were a women-only institution at the undergraduate level until five years ago and we have now gone fully gender inclusive and that is a really interesting place to start a program focused on immersive technology and new technology, a place that is radically feminist and progressive and embraces people from a variety of backgrounds. That is the core ethos and perspective of the institution, which has been around for 150 years. And from my perspective, that's maybe the most exciting place you could birth innovation, is a place that is already at an institutional level looking at how we can connect across boundaries and use the technology to do that and approach the technology from a very critical perspective that way as well.
[00:04:57.632] Kent Bye: Well, I keep saying that VR is like this interdisciplinary melting pot that is bringing together all these different disciplines, whether it's neuroscience, or architecture, or education, or filmmaking, storytelling, phenomenology, all these different things that are being mashed together. And I feel like with that combination of all these different disciplines, there's a bit of a search for a design framework or principle that's able to accommodate all those many different perspectives. And when I see people coming from these established research institutions trying to quantify the assessment of things and trying to turn things into a number, it gives them a certain lens where they're trying to turn presence into a number and quantify presence, when you can't quantify something that's fundamentally a qualitative experience. They clearly see that VR is engaging. But yet, still, it feels stuck in this trap of paradigms built around proving to the larger institutions of education that they should adopt immersive technology, where I feel like there are some deeper principles of reconsidering the process of learning and how it actually happens, which seems to be through curiosity, through exploration, through all these things that actually can't be turned into numbers.
[00:06:10.599] Doug North Cook: Yeah, no, I think that's right on. This connects to something very recent for me. I've been approached by several organizations and publishers and people who are like, we want you to help us write or work on best practices for VR. And anytime someone utters that term, I know that they're coming from that perspective. They're coming from the perspective of, we can quantify these things, we can lock them in, and we can then monetize them, monitor them, and analyze them. And anytime someone says that, I just immediately want to exit that conversation as quickly as possible, and go find someone who's much more interested in better practices than best practices. I think there's a big push now: everybody just wants to understand the best way of doing everything in VR. And the real answer is there is no best way. There is no single way. There is no optimal way. There's still an infinite number of possibilities and ways to explore. But I think people are having a hard time finding the room, the funding, and the space to do that exploration right now, which is one thing that I feel very lucky to be able to do at our university: to have the funding and the space to be really experimental and to explore. Because the technology has become a little more mature now, some things at a lot of companies are starting to get a little more locked in. And I think it's up to institutions that are not as economically tied up in the success of very specific parts of the technology to push forward experimentation and push forward new types of content, new types of engaging with content, and new ways of engaging with each other in this technology. And I think that's only going to come from people who don't have a vested economic interest in making money from a platform.
[00:08:18.346] Kent Bye: There's design theory, and then there's actual implementation, and it is useful to come up with a theory of design for how to make immersive experiences. And I feel like it's a little bit of an open question what that framework actually is. One of the things that Mel Slater told me that Stephen Ellis from NASA had said is that any good theory of presence will have equivalence classes, so that you have these different trade-offs: if you have more of this, then you have less of that. And I feel like there's a multitude of different types of equivalence classes when it comes to experiential design, and many different frameworks and approaches for how to handle that. But at the end of the day, as you're designing something, you have to make decisions. And you have to know what decisions are impacting other decisions, and how you measure those equivalence classes and those trade-offs. So how are you approaching that problem?
[00:09:06.486] Doug North Cook: Oh, interesting. I mean, thinking about the trade-offs and thinking about developing a new framework. I think that's the difficulty of the space we're in right now: there's so little documented, good scientific research related to the current state of VR. There's so little, I would say, informed, good design theory written about the current state of VR. So everybody's looking to other disciplines like architecture, product design, and psychology to adapt those ways of thinking. And in that, the initial trade-off is that all of those things are not direct proxies. They do not have a one-to-one translation from their discipline to VR. So I feel like a lot of my role right now is to play the translator role between these things. But I feel like we're just now getting to a point where we have to stop just doing translation, and we have to start thinking about the unique affordances of this technology and what we have to unlearn from these other industries, these other ways of thinking, especially when we're coming from 2D and flat design, when we're coming from software design, when we're coming from game design. Because virtual reality is so much different. It is so much other than that. And it is really, yeah, more about presence and space. But yeah, with those trade-offs, I mean, the thing that I've been in a lot of conversations with people about recently is even just the way that we're engaging with sensory input, designing for individual senses, and the limitations of just designing for audio and visual: we have good ways of understanding that, but not a good framework for ergonomics and a good framework for human movement. And like we saw today, we're going to see hand tracking in consumer headsets in 2020, and that's going to completely change the way that we're doing input. And that's going to totally change the way that people interact in space.
It's going to change the way that we're designing for proprioception and the presence of our body in space. When we start to remove physical input and we get to just using our muscular system to do computing, that is going to be completely different, right? And that's where we're going into completely new territory of adapting our body, and more parts of our body, as a control interface in a way that isn't even physically connected. And you know, you saw CTRL-labs too: people are now working on neural inputs and other ways of starting to adapt not just our physical body, like our muscles, but also the way that we process information, using that and being able to train our bodies in these new ways. So, yeah, there's so much new work, but I think increasingly we are coming up against a wall where there's so little clear thinking about these topics, and that thinking is often siloed inside of companies with internal research groups. And academic institutions are notoriously slow to publish because it takes a really long time to publish peer-reviewed work. So I think we're in this weird place right now where there's so much great information, but a lot of it is locked away in research review in academia or inside of companies and is related to IP. And I think over the next couple of years, we'll start to see a lot of that come out, and we'll start to get some clearer perspective that gives us a more cohesive picture of what is possible and what is informed and what works.
[00:12:25.660] Kent Bye: Yeah, when you start to have neural interfaces with EMG and brain-computer interfaces, reading your thoughts, reading your mind... I like this concept from information theory, which is that for every signal that's going out, there's always loss. You always have loss of the signal, no matter what it is. And you have error-correcting codes, you have ways of trying to make up for that loss, but there's a loss of signal between the intention of the information that's going out and what's being captured. And so to what degree does that loss happen? There has been a lot of research into the difference between fatiguing and non-fatiguing interfaces. So this image that we have in our minds from Minority Report, of waving your hands in the air: that's okay for small bursts, but to really think about non-fatiguing interfaces, what about tablets, and what are the ways that we can actually use mobile phones? I feel like there's going to be a synthesis where we're going to start to see how all these wearables and these other devices are going to be fed into these immersive experiences, but at the same time, be able to understand the ways in which you could use a tablet interface, but also the limitations, and what you can get with finger tracking, especially with embodied cognition, but also the fatiguing side effects. And so you could start to weigh the trade-offs, where you are able to express certain things but you're losing stuff at the same time, and then come up with a comprehensive map of all that and figure out how that drives different design decisions.
[00:13:51.053] Doug North Cook: Yeah, yeah, I think getting into that, and this has been an interesting thing that I've been thinking about, especially when we're talking about hand tracking and precision input: what is our ability as humans to gain new intuitions, to gain new ways of abstracting input? The computer mouse is this perfect example of using a three-dimensional object on a two-dimensional surface to control a different two-dimensional surface on a different plane, and we've gotten very good at that. But if you show someone that for the first time, they're terrible at it, right? But using a hand to control something: if I try to hold my hand totally straight, and I've had two cups of coffee and I haven't had breakfast or lunch, like today, my hand does not stay perfectly steady. It starts to shake a little bit. It starts to become tired and fatigued. The hand is not a precision instrument on its own. We augment it by using a tool, having something like a pen, having something like a lever, having something that allows us to use our muscles to constrain our hand to give it greater precision. And I think that's where hand tracking is really interesting. At one of the events that we hosted at Fallingwater a couple of years ago with a lot of industry stakeholders, one of the conversations that came out of that was: what do we do about precision? Are we going to enter basically a Rock Band controller era, where everybody has all of these peripheral accessories made out of plastic that mimic physical objects, to be able to give you that precision, that tactile precision?
So I've got a plastic guitar and I have a plastic spatula and I have a plastic pen. I have all these things that then my headset can pick up, and I have this tactile object that can be tracked using the same kinds of things we're doing, but I have greater precision, greater control, because hands alone are not good enough for precision tasks. Which is why we're now starting to see the discussion of mixed reality being able to bring physical objects like a keyboard and mouse into VR: because you want that precision, you want the ability to do precision input, to use the intuitions that we already have, to use the things we've already adapted to in order to be really efficient and proficient, instead of trying to abstract these new things down to just a voice interface, because a voice interface alone is also not a precision tool, right? So I think that's one of the things we're going to be coming up against: how do we balance ease of use and comfort with the need for precision? Especially when we're talking about precision not just for games and entertainment, but really for work, when people are going to be trying to do work, which does require precision: when you're trying to adapt serious design tools, when you're trying to adapt surgical tools to enable remote surgical operating, where you need an unbelievable amount of precision. How do you create those tools, but also, how do you optimize for ease of use?
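To make the hand-jitter problem concrete: a standard technique for smoothing noisy tracked input without adding much lag is the one-euro filter (Casiez, Roussel, and Vogel). It isn't mentioned by name in this conversation, and the sketch below is a minimal single-axis Python version rather than any shipping implementation, but it illustrates the trade-off Doug describes: heavy smoothing for a trembling, nearly still hand, and more responsiveness for fast, deliberate motion.

```python
import math

class OneEuroFilter:
    """Minimal single-axis one-euro filter for jittery tracked input."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.freq = freq            # sample rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta            # how fast the cutoff rises with speed
        self.d_cutoff = d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # exponential smoothing factor for a given cutoff frequency
        tau = 1.0 / (2 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # estimate the signal's speed, itself low-pass filtered
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        # adaptive cutoff: slow movement -> heavy smoothing,
        # fast movement -> follow the raw signal more closely
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

The `beta` parameter controls how quickly smoothing gives way to responsiveness as the hand speeds up; tuning it per task is exactly the kind of "better practices" experimentation discussed above.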
[00:16:32.608] Kent Bye: Yeah, my sense is that there is going to be a fusion, and that there are also going to be temperamental preferences. Maybe someone really likes to speak and talk and use language, and that's going to be their preferred method, and maybe other people are musicians or have very fine motor control and are able to have this level of embodied precision. It makes me think of Jaron Lanier, who plays all these really exotic instruments. He told me that musical instruments are the ultimate haptic device, because you're able to get this real-time feedback and to see how you're using your body to do this emotional expression, so you have this qualitative emotional vibe from someone playing a musical instrument that's hard to replicate with just synthetic technology. And then you go into more abstractions like EMG or EEG, or other ways of reading your biometric data, where it's able to isolate down to singular motor neurons in your body. Then how can you do this translation in your body? If you're tracking your hand, you're doing computer vision, and you can see your fingers moving. But with VR and something like CTRL-labs, you can start to move these muscles, but then see how that gets translated into a different embodiment, where you may be an octopus, or a robot with eight fingers instead of five. So we'll be able to use our muscles to do this weird translation that then gets transposed into ways that are completely different. I feel like the future is going to be really weird, but it's going to be a fusion of lots of different input devices, whether it's brain control interfaces, or eye tracking, or EMG, or biometric data, or being able to use the fine muscle control of our fingers the way we control a mouse or a tablet. And then everything from voice on down, it's all going to be a part of it.
It's just trying to figure out how to blend it all together, and also maybe to do an assessment to see how we adapt this for people who have different preferences.
[00:18:22.555] Doug North Cook: Yeah, so some of the work that I do is around accessibility and accessible design, and one of the things that we talk about a lot is that when you're designing something to be accessible, you have to design for a fair amount of failure in input: not everyone is able to give input to a device with the same accuracy, or in a way that aligns with their intention. So a lot of the work, and we have the tools now to do this, is to solve for user intent, and to create user profiles that are augmented by machine learning: profiles for individual input devices, and input types, and input styles, which are then combined into a user profile that is adapted for greater accuracy. We see that really clearly with voice, where every good natural language processing system has an initial training onboarding for every user, where it wants to know, how do you say this phrase? How do you say this specific word? It stores that profile. It keeps it stored away. But we can do that exact same thing for our hands. We can do that for neural input.
We can start to not just gather that, but then augment it, make it more precise for individual users by doing profiling. And those are the kinds of things that I know we are already seeing, and will continue to see more of, at the platform level, where platforms will be optimizing that on an individual user basis, which I think is really exciting. But it's also a huge amount of data, personal data, that is being used to optimize a subset of tools for you. That's storing so much biometric data, so much personal and psychological data, and data about your personal preferences and movements and style and cadence and body language, and, with some of the things we're seeing with very lifelike avatars, your facial expressions and how you emote. All of these will inevitably be stored away and profiled. And what does that mean for identity? What does that mean for representation? What does that mean for our ability to remain individuals and not be overly influenced? So solving for all these problems also creates new social problems, new problems of agency, new problems of identity, and just new ways of thinking about what it even means to be a human person when we are solving for our own intent and using machines and systems and other people's intent to do that.
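The per-user calibration Doug describes, recording how an individual performs each input during onboarding and then matching sloppy runtime input against that stored profile, can be sketched as a toy nearest-centroid matcher. Every name here (`UserIntentProfile`, `calibrate`, `infer`) is a hypothetical illustration, not any platform's actual API, and real systems would use learned models rather than simple centroids:

```python
import math
from collections import defaultdict

class UserIntentProfile:
    """Toy per-user calibration: learn where *this* user's gestures land
    in feature space, then snap noisy runtime input to the nearest
    learned intent centroid."""

    def __init__(self):
        self._samples = defaultdict(list)  # intent -> list of feature vectors
        self._centroids = {}               # intent -> mean feature vector

    def calibrate(self, intent, features):
        # onboarding step, e.g. "perform the 'grab' gesture three times"
        self._samples[intent].append(features)
        vecs = self._samples[intent]
        self._centroids[intent] = [
            sum(v[i] for v in vecs) / len(vecs)
            for i in range(len(features))
        ]

    def infer(self, features, max_dist=None):
        # runtime step: tolerate imprecise input by matching the closest
        # centroid this user produced during calibration
        best, best_d = None, float("inf")
        for intent, c in self._centroids.items():
            d = math.dist(features, c)  # Euclidean distance (Python 3.8+)
            if d < best_d:
                best, best_d = intent, d
        if max_dist is not None and best_d > max_dist:
            return None  # too far from anything the user taught us
        return best
```

Rejecting input that's far from every centroid (`max_dist`) is one way to "design for a fair amount of failure in input," as he puts it; it also makes concrete why such profiles accumulate sensitive per-user movement data.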
[00:20:36.705] Kent Bye: And I think the challenge with solving for that intent is that intent may be context dependent. Whether you're at work, with a romantic partner, or at home, whether you're just thinking about your own body, your own self, there are going to be many different contexts where that profile might change, because you may have different preferred ways of communicating or engaging with people based upon what that context is. And so I feel like that's another part that I haven't heard talked about too much at conferences like this: the future of contextual computing, or being able to determine that context. And that may actually be a problem that goes back to something like artificial general intelligence, in that it's just a hard problem that AI hasn't been able to figure out: how to do common-sense reasoning, or how to determine what your context is. Because context can be so fluid, where we can be at a conference and then suddenly switch context. But I feel like part of what's happening with VR is that it's allowing us to do a hundred percent context switch from wherever we're at. We may be at home and now all of a sudden we're at work. But with AR, there's a blending and a blurring of those contexts, where you're kind of mashing them together in weird ways. And once you start to have this mixture of contexts together, then I feel like that changes what words may mean in one context versus another. So there's a whole layer of AI that I think is going to have to be a part of that. But thinking about context and how to model context relates to privacy, and how to have a comprehensive framework for what should be public and what should be private goes back to context as well. And that, within itself, is an open problem. But context and contextual computing, I think, is going to be a big thing that's going to be thought about as well.
[00:22:06.109] Doug North Cook: Yeah. Yeah. Timoni West from Unity gave a really good talk about this at GDC this year, in 2019, walking through her morning: what does an ideal spatial computing morning look like? Not just different contexts moving from home to work, but different parts of your home, what every individual function and place and object in your home fulfills, the context of that, and the systems needed to be smart about that and to minimize as much friction in those interfaces as possible. And yes, it's an unbelievably complex set of problems, but so many companies are pushing towards trying to solve this now. And I think what everybody's starting to see is that it's not necessarily a single device. It's a set of devices. Is it a pair of glasses? Is it speakers? Is it earbuds? Is it a wristband? And I think the answer is probably yes, yes, yes, yes, and yes. It's a combination of interconnected objects and devices, some of which may be connected to the internet, some of which might just be actual physical objects that have no components at all but have their own spatial awareness within the system. Can I have just a wooden cube in my house that has nothing inside of it, but I've created a context within my home such that wherever this cube is placed, it changes the context of the spatial computing network? Those are the kinds of things that I think about: how can I remove technology, and how can I bring in physical objects and non-technology things that then affect the technology? How can I interface with my home computer by a series of jokes or object relationships? How can I have a softer relationship with computers and interfaces that doesn't require me to speak a very specific set of commands to a robot butler, which is essentially what my Google Home speakers are now? How can I have casual interactions with technology?
And how can casual interactions with other non-technology objects be fed back into that system? But the real question for me, and I think for a lot of people right now, is: what do I give up by doing that? What do I give up by letting companies have full 3D scans of every room in my house and every person in my family and me, and access to all audio in my home at all times? What do I lose by doing that? And what do I gain by doing that? There are a lot of questions. And the reality is that you as an individual now have very little choice in how you get to exist in that way, because if a single person in your network opts into a system like that, then you are in that system, and you can't opt out of it without opting out of your friends, which I think a lot of people will be very reluctant to do. If you have a friend who has a pair of AR glasses that are mapping the whole world and mapping you and mapping your home, you might not opt out of that friendship. And I think people will just start to become more comfortable with it. It'll be like the way it is with smartphones now, where everybody has one, it's gathering all this data all the time, nobody cares, nobody knows, it's just part of what it means to be a modern human person, and there are trade-offs there. But I think an awareness of those trade-offs is becoming increasingly important so that we can work together to legislate that and have conversations about it. And for me, sometimes it's about having conversations as simple as: if you're coming to my house for a dinner party, I would prefer that you leave your smartphone, like, in a box. Not because I care about recording, but because this is a place for us to just meet people, just actual physical people, and let's try to just engage with physical things. And I think that a lot of this technology is moving in that direction.
And that's, to me, still what's so exciting and gives me chills about VR and AR is that it's an invitation to interact with technology as a full human person. And that no other technology has offered that to us. Smartphones are terrible at that. They invite us to be hunched over, to be small. VR and AR invite us to be big and to be big in big spaces. And I want to be as big in the biggest spaces as I possibly can be.
[00:26:21.239] Kent Bye: Yeah, we're here just after lunch, after the keynote this morning, and I've fallen into a number of different conversations since then. People want to know, what do you think? And for me, a lot of what wasn't said is actually what's most interesting. In the morning, someone said, what's your wild prediction about what's going to happen? I said, well, the most wild thing would be Facebook coming out saying, we're going to architect for privacy. We're doing self-sovereign identity. We're going to do privacy-first architecture. We're going to talk about data sovereignty. That would be wild. That would be very surprising. And coming here to the San Jose Convention Center for F8, I saw Mark Zuckerberg up on stage saying that privacy is the future. And then I was like, okay, well, what about biometric data privacy? What about the privacy of our homes? And so there are a lot of questions, like: okay, this technology is amazing, but what are the implications for the boundary between what's private and what's public, and what data is able to be owned by a private company like Facebook? And where does the third-party doctrine come in, where any data that they're collecting all of a sudden allows the government to have access to that data? We're creating Big Brother situations here, where they're blindly saying, like, oh, we're going to make all these tools, without thinking about the deeper cultural and legal consequences, and without even mentioning that at all. No privacy mentioned, no biometric data privacy, not even sort of mentioning that within the context. I think when Mark Zuckerberg was saying, okay, the future of the interface is being able to read your mind, there was a bit of a hum in the room, a whoa, do we want you to read our thoughts kind of moment. Is this the future we want to create? And is this what we're doing now?
With no sort of discussion around what the ethics around that are. If I put this brain control interface on my body and give access to my thoughts to Facebook within the next, let's say, two to five years, if that's possible, then does that create a transcript somewhere? Does that make me eligible for thought crimes? There are lots of implications with this stuff, and I feel like there's not a lot of really sophisticated conversation about it. And for me, it's just upsetting, because it's all happening without this deeper context of how to actually architect for a world that is going to work for everybody and not create these power asymmetries that could potentially lead to some really dystopic futures.
[00:28:33.703] Doug North Cook: Yeah, it's interesting. So there was one single mention of privacy, security, high level encryption, and that is only for enterprise. That if you're a business and you opt into the enterprise platform, that then your data is secure. Your data is not housed by Facebook. You own your data. It's highly encrypted. And that is an interesting perspective, right? And that's the thing that we're seeing globally in the last 10, 15 years, is the movement away from individual identity, individual rights, individual right to privacy, individual right to data, individual rights to protection, and the movement towards collective rights. And mostly that's for corporations. So corporations and countries are the only groups that have that now. Individuals do not. Individuals no longer have those core abilities and rights because they are only granted now on an organizational level. which is very strange. And we see that with the way that money exists in politics now, where companies are enabled and encouraged to exercise their right to free speech as persons, right? And that now we're seeing that it's no longer an individual right to privacy, it's corporate right to privacy. And like, you know, how will that shake out with the new things that are happening with this technology now? Like, we don't have a clear idea of what that looks like for the individual user. And that is very disconcerting. But it's maybe even more disconcerting that they've already solved for that at the enterprise level. And that it's businesses, because businesses demand it and they can demand it with dollars. The only way consumers can demand it with dollars is to opt out of the ecosystem. But by opting out of the ecosystem, you are exiling yourself to an offline island where you are incapable of participating in what is happening in the world with technology, which for some people, that's really beautiful and they can live very beautiful off the grid lifestyles. 
But increasingly, a lot of people don't have that option, because the only way that they can make a living is by being connected in these ways. So yeah, I don't know where that's going to shift. You know, I mean, I think you see some really interesting movement in parts of Europe, really advocating for privacy and personal identity protection and biometric data protection. But with the majority of these companies not being in Europe, and being in places where that is not advocated for, whether that's in parts of Asia or in the U.S., a lot of the conversation gets driven there. It gets driven at the point of origin, and it then gets dictated and re-regulated across, you know, the EU and some other places. But yeah, it is a concern. I think especially with the rise of populism, and governments that I don't think represent the majority of the global population, as more and more sophisticated surveillance tools fall into the hands of increasingly conservative governments that have embedded racist policies and segregationist policies and anti-gay policies, they will have more data on more people. Even just talking about that, I want to go in a room and cry, because it makes me very scared. Because we're building these tools that have such an incredible potential to unlock so much connectedness, and so much genuine fun and engagement and love, and the ability to share powerful experiences and design brand new experiences that have never existed on the planet. But at the same time, we're developing the most dangerous set of tools that could be gifted to a really dangerous person or set of individuals, or to a really dangerous corporation with ill intent, or a very dangerous government, of which there are dozens now. There are dozens of dangerous governments in the world that are not looking towards a future that is more connected. They're looking towards a future that is more homogenous, more isolated, more insular, and built around fear and hate.
And the kinds of things we're talking about, given to a group of people that are emboldened by fear and hate, is very scary. And I think it's really important to see that, and to see the danger of getting really caught up in the romanticism of these technologies. Even some of the language used here today about being true believers in VR, that you're on your way to the promised land, all of this religious language, I find very disconcerting. To me, it ignores these problems. It ignores that there are two lands that we are on our way to. One of them is some utopia, and the other is some awful dystopia. And ideally, we will strike a good balance somewhere in between, because utopia is not possible. But dystopia is very possible. And we are on a rocket straight to dystopia in the world right now, all over the place. That kind of language in a corporate structure I find very unnerving, especially the use of religious language and religious zealotry language to talk about blindly accepting a technology as built for the good, which it is just not. No technology is inherently good. Only uses can be good. Only people can be good. Until we have some sort of sentient artificial intelligence, technology cannot be either good or bad.
[00:34:14.755] Kent Bye: Well, it's a relief to have this conversation after the big keynote, because I feel like these types of discussions get left out, and there are very few outlets to have in-depth, honest conversations about these nuances. In the absence of conversations with the people who are in charge of this, it's kind of left for us to have those conversations, and to figure out how to advocate, as consumers, for what we want this future to be, and to have these other perspectives. But for you, what are some of the biggest open questions you're trying to answer, or the biggest problems you're trying to solve?
[00:34:53.623] Doug North Cook: Oh, I mean, for me personally, as a single individual, I can only do so many things at once, but I'm really lucky to be in a role at a university where I have a lot of support and a lot of freedom. We have some really generous foundation support from some really amazing foundations in Pittsburgh, where we are. One of the things that I'm working on over the next year is trying to pull together a group of developers, platform stakeholders, and studios that are working on VR projects right now, and building an open set of accessibility tools and example projects. So that when a developer releases a project that is really incredible and has incredibly high production values, but they don't put subtitling into it, and they come back and say, well, we just didn't have time, that is a thing that can be solved. It can be solved by getting a really good spatialized subtitling framework embedded at the engine level, or just into really good, open source, freely available projects that are being used by studios. So that's a big initiative for us at the university: acting kind of as the neutral arbiter to bring together funding, resources, and developers, to say, hey, let's put together 15 different examples of different accessibility interfaces, control schemas, subtitling, colorblind support. Let's tackle a suite of issues. Let's release a couple of example projects. And let's also embed them in a couple of commercially available experiences, so that people can see them in use, have access to them for free, and know that they've been vetted and designed to have direct Unity and Unreal integration across multiple platforms. So that's a big thing that I'm really passionate about: how can we make this technology something that's for everyone? Because in doing that, we make it better for everybody. Because subtitles are not just good for people who have hearing difficulty.
They're really great for people in spaces like we're in right now, where there are tons of people walking around and talking. If you have a really good experience and you're trying to demo it in this space and you don't have subtitling, nobody's going to be able to hear your dialogue. It also makes it easier to localize your experience into other languages. It solves a lot of problems all at once. But it's really difficult to do that as a single studio. It's actually difficult to do it as a platform, too, because if it's not available on every platform, you're going to have a hard time getting developers to implement an individual SDK-level feature and do it in a way that goes cross-platform. So that's a big thing that I'm working on. The other thing is using the role and the space that I have to empower a wide range of creators. We've been pulling together funding at the university to work with artists, specifically artists from backgrounds that are not well represented in the immersive technology space, and let them come to the university for a year, have full funding, have access to equipment, and access to our faculty and staff, to produce immersive art that isn't necessarily meant to be released on a platform. Maybe it's just meant to be part of a gallery show, or be installed in a museum. So, you know, we have a unique opportunity at the university to just do good without having to make money. I mean, we have to balance our budget, but we're able to do truly philanthropic work without getting caught up in the necessities that everybody else in a studio or on a platform has to deal with. So it's a unique opportunity, and I think we've realized that there's a great responsibility there, but there's also a lot of opportunity to invite a lot of different people to the table for a series of discussions.
And those are the things that are really exciting to me, and I feel like a responsibility as an academic, as an educator, as a designer, as a white man working in this space that I've been given a lot of access and a lot of privilege and a lot of power and that if I'm not using that to do as much good as I can, then I have failed the gift that has been given to me. And I'm really trying to live into that. And I'm making tons of mistakes, of course, like we all are. And yeah, if people are interested in doing that kind of work, they can always just come and find me.
[00:38:53.559] Kent Bye: So yeah. And finally, what do you think the ultimate potential of immersive and spatial computing might be, and what it might be able to enable?
[00:39:05.548] Doug North Cook: Wow, the ultimate potential. I mean, I think for me, you know, the things that still get me the most excited are the few experiences that are just kind of burned into my mind, and a lot of those are time that I've spent with other people in VR. I mean, Half + Half just came out like two weeks ago on the Quest and on the Rift, and I spent an hour in there on release day with two of my friends who live in Boston, and we just laughed for an hour. I laughed so hard that I was crying. And having these moments of intimate connection with friends of mine who I only get to see a couple of times a year, in a way that, like, maybe we'd get on a video chat and laugh, but I don't think we'd laugh that hard. I don't think I'd be giving a wobbly-armed hug. And there was something so precious about that moment for me that I'm like, I want more of that. You know, I want to be traveling in Europe for a month, lecturing at a university, and be able to call my niece, who's 12, in VR, and go and play a game, and have her show me a 360 capture of the science fair that she has a project in, right?
I want to be able to put on a pair of glasses and be telepresenced into my grandparents' retirement home, and have them have a casual interaction with my avatar, or with a projection of me, in a way that's frictionless, and be able to share my work with them in a way that's also frictionless. You know, I think for me it's a lot about that connection, but it's still so much about the things that we have no idea about: being able to create experiences that are impossible, being able to create connections to other people's dreams, other people's ways of imagining the world, getting to step inside of someone else's sense of humor in an embodied way and experience a joke, maybe a joke that's an hour long, some sort of virtual reality joke. Like, I think about Accounting, which is still one of my favorite VR experiences, and it's just a weird virtual reality joke. I still laugh every time I play it. And, you know, there's just so much opportunity for us to invent new ways of being people and new ways of relating with each other. So really, what's the potential? The potential is something that I can't even imagine. It's something that I won't create. And I think that's why I'm so passionate about the work I'm doing in trying to educate students: I want to give other people the opportunity to create things that I know I never can or will, because I'm really tired, and I have so much work to do. The most amazing things in VR are going to be made by people that have never seen a headset yet. You know, that's what's exciting to me.
I'm excited to see what somebody who's born right now, like Michael Abrash was talking about his grandson who was just born last week, like, the world that that person is going to live in, the experiences that will be there, which I hope that I will still be here to see some of that and that the technology will have adapted with me to be able to experience that in a way that's meaningful as my body changes and ages. But really it's like I just don't know and that is so exciting to me and terrifying because I think, you know, like we already talked about, there's a lot of ways that these things inevitably will go. But I just can't wait to see what's possible. Oh, can't wait.
[00:42:56.266] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:43:02.275] Doug North Cook: Anything left unsaid? You know, I think we have so much potential right now to work together and to break down boundaries. Because one of virtual reality's greatest potentials, at its core, is to bridge distance. And right now we have a lot of problems in the world, like climate change and anti-globalism, which are things that can be solved by technology, or at least in part solved by having better technology that helps us bridge the distance between each other and reduces our need to travel. As people working on a technology that has so much of this potential, I think if you are ignoring that, if you are ignoring the potential of this technology to change what it means to be a person on this planet, you should really take a look at that, because there's a lot of potential to cause a lot of harm, and there's a lot of potential to cause a lot of good. And if the developer community and the platform stakeholders, if we can all work together towards something that is really good, then maybe we can save the planet from burning. Maybe we can work against the rise of fascism. I can't believe that these are things that I have to say, but I feel like they need to be said. The world is on fire, and some of the people that are in charge of this flaming planet are actively trying to destroy the planet, destroy us, destroy people. We are working on a technology that has the potential to change the way that people are in the world, and we should be taking that really seriously. And if you want to have a conversation about what that looks like, you can come and find us and collaborate with some of the research initiatives and projects that we're doing at the university. We are here to try to help people figure out how to do this in a good way, understanding that we're all making horrible mistakes all the time, because we're all wrong. Everything everyone is doing in VR right now is wrong.
Some people are doing things better than others, but everybody's still wrong. Beat Saber is amazing. I love Beat Saber. Beat Saber is not the best VR rhythm game that will exist. It is not the best experience of music in VR that will exist. Is it the best right now? I don't know. A lot of people seem to think so. And it's so fun and so good, but I cannot wait to see what's the next thing that gets us moving, that gets us active, that gets us connected to amazing new kinds of music that haven't been made yet. Oh, there's so much good. Can we just be good and do good?
[00:45:47.662] Kent Bye: Awesome. Well, it's a good message to end on, and to kick off my official coverage of Oculus Connect as it's begun now. I just wanted to thank you for taking the time to talk to me and to kind of digest and unpack some of this stuff, and for joining me on the podcast today. So thank you. Yeah. Thank you, Kent. So that was Doug North Cook. He's an assistant professor of immersive media at Chatham University, and he also has an immersive design residency at Fallingwater. So I have a number of different takeaways from this interview. First of all, accessibility was a big theme of this conversation: trying to do a form of universal design, and looking at input and the precision of input, and how, from an information-theoretic perspective, there's always loss in communication. So with an input device, you have to be able to handle a certain amount of loss, and figure out ways to use models and training to add another level of precision. And then thinking about where that training data lives: is it on somebody else's server, or is it within your device itself? So we need to figure out, as we move forward, how to think about all the different ways that people have different abilities, and different bodies that are capable of different things, and really look at how to design for everybody. By doing that, you start to expand the affordances of VR for everybody, especially as the technologies expand. So I think it's fascinating to start to look at all the different affordances of things like speech, EMG and neural inputs, eye tracking, and body movements, and then even tablets and mouse and keyboard, especially with the mouse being on a plane where you're able to have non-fading interfaces and do highly precise actions. So what is the equivalent of doing those types of things when you're in VR?
Are you going to have some sort of tablet interface, maybe attached to your body? Or are you going to have EMG sensors so that you're able to make subtle movements to mimic things without having haptic feedback? And then Doug is asking, well, are we going to enter a whole new era of Rock Band-style devices that are going to allow us to have the haptic feedback that we would need in order to have this precision input? So, yeah, just thinking about where precise input is moving in the future, especially as we move into spatial computing that is getting away from the mouse and keyboard, but still recognizing that there are tasks that are always going to be better with a mouse and keyboard than with these new immersive spatial computing interfaces. So, like I said at the top of the podcast, Doug is teaching these different students and trying to bring together all these different disciplines and design practices, and there isn't an existing design framework that has all the solutions for how to design for virtual reality. In terms of evidence-based research, some of that is maybe caught up in peer review within academia, but he's also saying that a lot of this is tied up in the intellectual property that individual companies have developed on their own. They're using that to design their products, but there's not really a great outlet for them to share it back. I mean, they could theoretically publish it, but it would be really beneficial for everybody if there were a little more of an open conversation around some of these findings.
But also, just talking to Doug, he was talking about the education summit at Oculus, and how there was this sense of trying to quantify things into numbers. For him, it's a bit of a red flag whenever anyone talks about "the best practices" for design in VR: there's a certain mindset of trying to quantify things down, and from his perspective, things are still so early that it's difficult to lock things down so tightly. So he has the leeway of being at a university, where he's able to do this open-ended exploration and research and not be tied down, or try to prematurely limit what this medium is actually going to be. Especially if you think about all these neural interfaces, the brain-computer interfaces, the eye tracking, there are so many different iterations of technology, and with each wave of technology there are going to be whole new ways of approaching all these things. So over the next five or ten years, we're going to see a huge exponential growth in the amount of input that we're going to have within these spatial computing devices, and it's going to continue to evolve the user experience that we have. So it's a bit of a moving target right now to even try to say that this is locked down, that this is the best way. There are going to be so many new ways of doing the exact same thing in five, ten, or twenty years, so he's trying to take the approach of doing the best that he can with what's available: designing a set of open source tools, or tools that are available, to capture some of the better practices when it comes to accessibility, whether that's thinking about cross-platform compatibility for subtitles, or other ways of doing assistive technologies, and generally making the input options that are available more accessible.
And he said he really feels like it's a powerful thing that he, within a university context, can create something that's cross-platform, so it works in Unity and Unreal Engine, and potentially eventually even WebXR, and then put it within a commercial product so that people can have a direct experience of it and then start to implement it on their own. I know Owlchemy Labs, of Job Simulator and Vacation Simulator, has actually been doing a lot of accessibility work with subtitles, and I look forward to hearing a little bit more about some of the work they've been doing on that. And there's actually an upcoming W3C workshop on inclusive design for immersive web standards. Mozilla actually sent me out to Amsterdam to go to the View Source conference that they were putting on, and I had a chance to talk to a lot of the people from the web standards community about the evolution of the WebXR standard. Some of the feedback they got was to give a little more consideration to accessibility within the future of WebXR, and to see how maybe some of that could be baked into the standard itself. So I think that's part of the reaction to that: to have the web development community look at accessibility standards and principles of inclusive design. It sounds like it's going to be free and open to the public for folks to attend, so I'm hoping to go up there and get up to speed with what's happening within accessibility and the future of design. It looks like a lot of great people are going to be there, and that's going to be in Seattle on November 5th and 6th, 2019.
So yeah, just to kind of wrap up other feedback from this conversation: there were similar concerns around ethics and privacy and the deeper business models of surveillance capitalism, especially since this was right after the opening keynote, where there were discussions about these devices being able to capture all the information around you, and Mark Zuckerberg talking about CTRL-labs and neural interfaces, neural input, but also eventually future brain-control interfaces, BCIs that are able to actually read your thoughts. And, you know, the technology is super cool, and being able to read your thoughts is actually going to make huge accessibility leaps for people who have no other ways of being able to communicate. I think generally brain-computer interfaces may actually be kind of like the mouse of the future, but there are all sorts of ethical and privacy implications: where are my thoughts going? Is there going to be a transcript of my thoughts out there? As well as, as we start to create these devices that are able to capture the world around us, what does that mean? And Doug had this great point about the concept of opting in and opting out. Project out a number of years, to when everyone is wearing AR glasses. If all of your friends and your whole network are suddenly wearing these AR glasses, and those glasses are scanning everything in your environment and then mapping that information, then you may not want that within your environment. But if that's something that becomes culturally acceptable, then there are these weird tensions between participating within your friend network and having these different issues of what is and is not being recorded by these devices. This is an issue we saw with cell phone cameras; there were a lot of similar discussions about the fears around people taking photos of people within bathrooms.
Eventually, new normative standards get established, but at the very onset of this, I think there are a lot of discussions about the degree to which surveillance capitalism has gone, and what it means if other people within your social network are using these different devices that are potentially encroaching on your private spaces. So how do we handle that if people are wearing these devices all the time? That's projecting out, but it's still important to start to think about as a design principle. You know, there was not a lot of robust discussion happening in an official capacity, especially during the keynote. I understand that the keynote is very marketing-driven, meant to announce the potentials and get people's brains thinking about the possibilities. But as they were doing that, there was, I think, a lack of tact in discussing a lot of these open questions that have yet to be solved. And that creates this whole back-channel conversation, which I think was really driving a lot of my conversations: just kind of reacting to different people as they had this sense of unsettledness in their guts while listening to where this is all going, especially people who want to see open platforms, the open metaverse, cross-compatibility. If a singular company is saying, we're going to create the future of this next computing platform, and if it's completely locked down, without a lot of options for other ways of getting content onto these platforms, and without thinking about some of the ethical and legal implications, then that move fast and break things type of ethos may have some unintended consequences. So we really need reflective, contemplative ways of taking a step back, and, I guess, inviting more critical dialogue about some of this, and just recognizing and acknowledging that these are open questions.
There are no answers yet, and there's a need for a larger community discussion about a lot of these different things. So I'm just glad that there were people I was talking to who were able to articulate that, and to reflect, just as a sounding board, some of the initial reactions that we were both having. And, you know, it is the best of times and the worst of times, where we have these amazing, incredible potentials for immersive technologies, but also a lot of dystopic potentials as well. So we have to think about how we are moving away from individual rights, individual identity, individual rights to data, data sovereignty, and the protection around all of that, and more toward collective rights held by these companies. You know, there is a need to let companies innovate without so much regulatory burden that it stops innovation from happening in the first place, but at the same time, we need regulation that is going to bring in the ethics that we need. So it needs to come from either the market or the culture, or otherwise it's going to come from the law. And Doug was talking specifically about some of the romanticized language he was hearing during the keynote, saying people in this room are true believers, which is to a certain extent true, because they're believing in a future that doesn't exist yet. They're buying into these dreams of a science fiction future that is going to have all these amazing possibilities, whether it's the Star Trek holodeck, or Ready Player One educational realms, or the metaverse, or whatever dreams we're chasing. We are believing in something that doesn't exist yet.
So in that sense, we are believers. But to cast out this “true believer” talk without talking about some of the more pragmatic things that need to be considered, some of these ethics and privacy issues, just creates this unsettledness when you have this religious zealotry type of language being put forth by a company. And it makes you question the deeper intentions, especially when the fundamental business model is still based upon this surveillance capitalism and move fast and break things type of ethos. So I think the big takeaway for me is just that there needs to be a balance between both, and it would be nice to have transparency and openness and honest conversations with Facebook. We need to have those open lines of communication, not just for me, but for the entire press corps and for everybody who's critically looking at this and trying to interrogate these things. So it was a little frustrating to me to not see any opportunities to have official conversations and to bring up some of these questions with Facebook and/or Oculus, which is functionally all Facebook now. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon your donations in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.