#639: AR as the Democratization of Architecture, Hands-On Spatial Computing, & Leap Motion’s North Star AR HMD

Keiichi Matsuda went from being a dystopian filmmaker to becoming the vice president of design at hand-tracking company Leap Motion. Matsuda is probably most famous for HYPER-REALITY, a dystopian film that imagined a commodified & gamified AR future where companies vie for your attention regardless of your physical context. He was pushing the current philosophical orientation to its logical extreme, not because he wanted to live in that future himself, but as a cautionary tale about how this is a plausible near future if we don’t do anything differently. One of the co-founders of Leap Motion reached out to Matsuda to invite him to help create & influence the future of spatial computing, since he has been depicting functional and pragmatic spatial computing interfaces in his films since 2009.

Leap Motion just announced their open source AR HMD reference design called Project North Star, which has a 95° wide by 70° high field of view with 65% stereo overlap & 1600 x 1440 per eye. By default it will be a tethered AR HMD with a 180° x 180° hemisphere for tracking the hands. The full open source design will be released within the next week, and Leap Motion won’t be manufacturing their own version, but will instead charge a licensing fee for their hand-tracking software.
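As a rough back-of-envelope reading of those specs, here is what they imply for per-eye field of view and angular resolution. This is a sketch only: it assumes the 65% stereo overlap is measured as a fraction of the total horizontal field of view, which may not match Leap Motion's exact definition, and it ignores lens distortion and off-axis optics.

```python
# Back-of-envelope numbers implied by the North Star specs above.
# Assumption: "65% stereo overlap" is read as a fraction of the total
# horizontal field of view; Leap Motion may define it differently.

TOTAL_H_FOV_DEG = 95.0             # total horizontal field of view
V_FOV_DEG = 70.0                   # vertical field of view
OVERLAP_FRACTION = 0.65            # stereo overlap (assumed convention)
EYE_RES_H, EYE_RES_V = 1600, 1440  # per-eye panel resolution

binocular_deg = TOTAL_H_FOV_DEG * OVERLAP_FRACTION     # ~61.8° seen by both eyes
monocular_deg = (TOTAL_H_FOV_DEG - binocular_deg) / 2  # ~16.6° seen by one eye only
per_eye_h_fov = binocular_deg + monocular_deg          # ~78.4° per eye

# Crude angular resolution estimate.
print(f"per-eye horizontal FOV ≈ {per_eye_h_fov:.1f}°")
print(f"≈ {EYE_RES_H / per_eye_h_fov:.1f} px/deg horizontal, "
      f"{EYE_RES_V / V_FOV_DEG:.1f} px/deg vertical")
```

Under those assumptions the display works out to roughly 20 pixels per degree in each direction, which is in the same ballpark as contemporary VR headsets but spread across a much wider AR field of view.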

I stopped by Leap Motion’s offices during GDC to try out some of their latest hand-gesture user interfaces (in VR, not in their AR prototype), and I had a chance to talk to Matsuda about his journey into spatial computing through architecture and making speculative sci-fi films, how spatial design can influence someone’s emotions, the iterative process of designing for fun and satisfying feelings when creating hand gesture interfaces, as well as the destruction of identity, the blurring of lines between digital and physical realities, the collaborative building of worlds, the democratization of architecture, how spatial computing is more natural and intuitive, and building interfaces that are so immersive that we feel as though we’re inside of them.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here are a number of Matsuda’s films that are worth checking out to see his evolution of spatial computing design ideas over the past eight years:

HYPER-REALITY
May 16, 2016

Augmented (hyper)Reality: Domestic Robocop
January 6, 2010

Augmented City 3D
August 20, 2010

CELL
October 4, 2011
An exploration of the quantification of digital identity

Alchemy
August 27, 2012
Explores immersive interfaces to tell a spatial story of Veuve Clicquot wine

The Technocrat Retrofit of London
May 31, 2009

Bossarica – Neon Sign
February 21, 2011
Music video blending projection mapping with 2D & 3D compositing

Essay on Cities for Cyborgs: 10 Rules

Matsuda has an upcoming immersive VR film called Merger

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So on Monday, April 9th, Leap Motion, which is a hand tracking company, they basically create a sensor that you can put on top of a virtual reality headset, and it brings your hands into virtual reality experiences. So they've been working on this hand tracking technology, and they've decided to expand it out into augmented reality. One of the co-founders of Leap Motion was really frustrated by the limitations of how small a field of view you had, and they really wanted to bring the hands into augmented reality experiences. And so they decided to say, hey, what if we try to build a reference design that maximizes the field of view that's possible and feasible, and that can really put your hands into an experience, an entire half hemisphere? So Leap Motion has just announced this North Star augmented reality HMD. It's an open source design. It's going to be available for OEMs to manufacture, and there's going to be a licensing fee that people would have to pay in order to license the hand tracking technology. So I was at GDC a couple of weeks ago, and I had a chance to drop by the Leap Motion offices. Now, I didn't get a chance to try out the North Star AR HMD yet, but I did get a chance to try out some of their latest interaction designs that they have been working on. This was within virtual reality, and they basically had to do a lot of work to create an SDK that you can add on to Unity in order to create specific hand-tracked gestures. This is something that I think has taken a while for them to get to: a baseline where you can get some really satisfying hand interactions out of the box using their SDK. That hasn't been there up to this point, and so I think they're kind of crossing that chasm in terms of being able to do some really productive things with hand tracking. And it felt really amazing. I mean, usually when you're grabbing stuff within VR, you kind of touch your other fingers, but they had this open-handed interaction so that you weren't relying on that haptic feedback, which allowed you to start to grab virtual objects. And what happened was my mind started to get tricked into believing that I was actually interacting with and manipulating these objects. So I think that adding hand tracking to both virtual reality as well as augmented reality is going to take the level of presence to an entirely new level. And so I'm super excited to see where this North Star HMD goes. It sounds like they're going to be releasing it to OEMs, who are going to be free to produce it at cost, around $100 per headset. And this is going to be a tethered augmented reality HMD. This is not something that you're going to be initially walking around with a mobile processor, although I was told that it would be possible to potentially add a mobile processor inside of it. And I think it'd just be a question as to the battery life and the tradeoffs for being able to drive the resolution that they have. So I think it's just a matter of time before more and more of this gets miniaturized. But they really wanted to push the edge as to what's possible with an augmented reality head mounted display. Now, like I said, I haven't had a chance to try this out yet, but I did get a chance to talk to one of their lead creative designers.
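As an aside, the open-handed grab interaction described above typically reduces to a simple heuristic over tracked finger positions. Here is a minimal sketch of that kind of pinch/grab detection in Python; the data structure, thresholds, and function names are illustrative assumptions, not Leap Motion's actual SDK.

```python
from dataclasses import dataclass
import math

# Minimal sketch of a grab/pinch heuristic of the kind hand-tracking
# interaction modules expose. Names, units, and thresholds are illustrative.

@dataclass
class Hand:
    thumb_tip: tuple     # (x, y, z) position in millimeters
    index_tip: tuple     # (x, y, z) position in millimeters
    finger_curl: float   # 0.0 = fully open hand, 1.0 = fist (assumed convention)

def pinch_strength(hand: Hand, engage_mm: float = 20.0, release_mm: float = 60.0) -> float:
    """Map thumb-index tip distance to a 0..1 pinch value with hysteresis room."""
    d = math.dist(hand.thumb_tip, hand.index_tip)
    t = (release_mm - d) / (release_mm - engage_mm)
    return max(0.0, min(1.0, t))

def is_grabbing(hand: Hand, pinch_threshold: float = 0.8, curl_threshold: float = 0.7) -> bool:
    """Treat either a strong pinch or curled fingers as a grab."""
    return pinch_strength(hand) > pinch_threshold or hand.finger_curl > curl_threshold

# Example frame: thumb and index tips ~16 mm apart, hand mostly open.
hand = Hand(thumb_tip=(0, 0, 0), index_tip=(15, 5, 0), finger_curl=0.2)
print(pinch_strength(hand), is_grabbing(hand))  # -> 1.0 True
```

The dead zone between the engage and release distances is what keeps a grab from flickering on and off at the threshold, which is part of why these interactions can feel satisfying rather than twitchy.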
Keiichi Matsuda is actually a really interesting character. He's somebody who came into augmented reality through a background in both architecture and film. With that combination of architecture and film, he was making spatial designs meant to relate to humans. Some people think about architecture as just designing buildings for other people, but it's more about designing spaces that people interact with. And so he was trying to fuse together his design principles to imagine this spatial immersive computing way back in 2010, when he started to release some of these initial prototypes and visions. Now remember, the Oculus Rift didn't get Kickstarted until August of 2012 and didn't come out until 2013, so he was a few years ahead of when this whole immersive computing thing really exploded, and he's been thinking about these types of designs for a long time, at least for over eight years now. So Keiichi was actually making these dystopian films because he was worried about the commodification of these spaces and wanted to show, almost like a design parody: this is where things are at with our current culture and our philosophical foundations for how we run our society, and now we're going to project that out into the future and see what the logical extreme of that is. And it's this ad-infused world that is constantly trying to hijack your attention, essentially taking the worst aspects of browsing on the 2D web, extrapolating that out, and putting it into every dimension of your life. So he went from creating these dystopian films to eventually going to work for Leap Motion to be able to design the future rather than just complain about the potentials. So I had a chance to sit down with Keiichi Matsuda to talk about his journey into immersive and experiential design, as well as some more details about this North Star AR HMD. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Keiichi happened on Friday, March 23rd, 2018 at the offices of Leap Motion in San Francisco, California. So with that, let's go ahead and dive right in.

[00:05:25.772] Keiichi Matsuda: My name is Keiichi Matsuda. I'm a VP Design and Creative Director at Leap Motion. But before doing this, I was making dystopian movies about the future, trying to imagine what everyday life is going to look like in the context of augmented reality and many other different emerging technologies. So I guess I'm trying to apply my design skills to try to understand where we should be going with this technology.

[00:05:49.603] Kent Bye: Yeah, so Hyper-Reality, when it came out, it was sort of like this hell of sensory overload. If you think that the web is bad with advertising, then imagine what the world would look like if they were always trying to hijack your attention and gamify everything for the benefit of nobody other than these singular corporations. So it was kind of like this, like, oh my God, that looks terrible, is this what we're creating, kind of moment. So maybe you could talk a bit about, you know, the backstory to that, like why you did this and what type of things you were doing. It's actually probably the best speculative AR design that I've seen. It's like, yeah, this seems totally plausible, and actually this is, like, super good design. I don't know if you were trying to show your design skills of the immersive future while at the same time trying to do a Black Mirror-style dystopian cautionary tale of where this all could go.

[00:06:41.733] Keiichi Matsuda: Yeah, I mean, well firstly, thank you very much. In terms of the design, I think I pushed hard on that because really I'm a nerd for design and I really enjoy thinking through those things, and I didn't have a particular timeline that I had to stick to, so I was able to indulge my own kind of interest there. But I actually started making these movies in 2009, when I put up the first one, during my Master's in Architecture. And during that time I was really interested in how emerging technologies could inform and affect the spaces that we live in. And I saw in AR a really amazing opportunity to finally kind of merge the physical and digital design of spaces together. I think at that time architects had started to think about digital technology, the effect of the mobile phone on space, but really nobody had been able to engage. And for me, I felt that all of the really interesting things that were driving our world forward were in technology and communications and the internet. And as a designer, I obviously wanted to be close and connected to those things and working in those media. So I really initially saw augmented reality as a way to be able to express spatial design skills within a kind of virtual-physical setting. Later on, though, I think I started to understand a bit more about the sort of native qualities of what augmented reality is. You know, one of the key things is that you're taking media out of these kind of surfaces, out of the glass boxes that they're trapped in, or out of the page of the newspaper or whatever, and you're bringing that into space. So in fact, really, you're making a new type of media, which is a spatial one, which you're always inside. And that means then that, of course, everything can be commodified. You know, the relationship between me and somebody else, every conversation, any time I look at a product or I exist within a space, that's a location, it's a site of potential commercial activity. And just the fact as well that it can have such a large impact on how we perceive the world, how it can shape reality, I saw it as both an extremely powerful and also an extremely dangerous technology. So really, I suppose, the concept work that I was doing with those early films and with Hyper-Reality later is trying to explore the possibility space, trying to understand what could happen if we do nothing.

[00:08:50.010] Kent Bye: Yeah, if we don't change anything, this is where we're headed. In fact, you know, we're well on our way there, because I think we've already kind of reached that state to some extent in the 2D world. And if you take that to the logical extreme into spatial design and augmented reality, Hyper-Reality is where that's going. So what needs to change? To me, I find it fascinating that you were studying architecture, because I was talking to Matt Miesnieks from 6D.ai, and what he said is that when he's trying to find augmented reality designers, he wants to get people who have an industrial design background, people who are used to doing spatial design of real, tangible, physical objects, because they understand the affordances of how to design objects so that people can interact with them. But architecture is about creating much larger spaces and being able to evoke different feelings within people by designing space. And usually architecture as a process would take, you know, years or maybe decades of planning and building. Now the barrier to entry has basically been lowered for people who've studied architecture to go into augmented reality and create whatever architecture they want, within the constraints of any other, you know, 3D pipeline. So what have you learned from architecture and started to apply to designing spaces for augmented reality?

[00:10:07.457] Keiichi Matsuda: That's a great question. I feel like people think of architecture as the design of buildings, and that's fair enough. But depending on what school you go to or what that kind of background is, architecture can be much more than that as well. I see it really as a discipline which is involved with trying to design for people and environments. So really it takes into account lots of other things as well, like history, like sociology, elements of philosophy as well, and trying to understand the context of the space that you're working in. So the design processes in architecture can actually be applied to lots of different things, and I think that's why you have people who are trained in architecture now working in so many different fields, because really it's a skill set that allows you to get into those things. I think when you compare it to something like industrial design, which is focused on the object and making that a usable thing, that's a very interesting part. But what I think is important, and maybe slightly lacking from the conversation around AR, and to some extent VR as well, is more of this kind of emotional connection with the environment as a whole and how we engage and interact with that. And I think that's what architecture can really bring. It talks about the connection between the human being and the environment around it.

[00:11:15.719] Kent Bye: Yeah, and I think that you said there's some philosophical implications there. When I think of the philosophy of reductive materialism, or dualism, you have this separate realm of your thoughts and beliefs, but also your experience. And so you have this objective and subjective split. But in something like Chinese philosophy, where there's the yang and the yin, maybe for every external manifestation there's a corresponding internal dimension of it, so that there's less of an explicit dualistic split and they're actually interconnected in some ways. So I'm curious to hear, from architecture, is there a range of different philosophies of how those get applied to the connection between physical space and our internal feelings?

[00:11:57.445] Keiichi Matsuda: Yeah, I mean, I think one of the concepts that I was really interested in is this idea of subjective space. What augmented reality gives you, as well as being able to overlay your world, is also the ability to customize your world. The second film that I made in 2010, called Augmented City 3D, showed a city where the protagonist was able to walk through and turn on and off layers of the city, and therefore kind of mix and match their interests. And I was really interested in this idea that your environment becomes kind of a projection of your own personality, of your tastes and interests. That obviously throws up a lot of questions then about what it means to live in a shared environment. Are we going to just retreat further into our filter bubbles? But it kind of came back again when I came to make Hyper-Reality, because I wanted to shoot from the first-person perspective to make it feel very real and immersive. But in film, often seeing the reaction of a character, the micro-expressions on that person's face, tells you so much about what's going on and really drives the story. So if you can't see the face of the character, then how can you tell a story? In this case, the idea then came that you could tell the story of the character by how the environment reacts, and you could understand who the character is because the character is projected on the world around them. So that was the idea, and my plan eventually is to make a couple more films in that series, shot on the same day in the same city, but from the perspective of different characters. So you'd see a very, very different kind of representation of reality there. One of the nice things, I suppose, about AR as a kind of visual conceit, as a kind of medium to tell stories through, is that it's able to make a lot of things that were previously either philosophical or perhaps abstract in some other ways suddenly become very visible and tangible, and you can do a lot through visual storytelling there. So I like the ability to be able to make that all feel very real.

[00:13:48.330] Kent Bye: Yeah, and maybe you could tell the story of how you went leading up to hyperreality and then how from at that point you ended up at Leap Motion after that.

[00:13:57.534] Keiichi Matsuda: Yeah, sure. So I spent a little bit of time after I put the first movies out working in different areas. I was doing kind of installation art and various different things. But I kept coming back to these ideas of augmented reality, IoT, smart cities, automation, all these different things that kind of interested me. So I decided to do another film, which ended up being Hyper-Reality. Once I put that out, though, I got lots of different requests from movie people and technology people and found myself at an interesting crossroads of what should I do. My aim has always been to try and think of ways that we can improve the world and make society something that's fairer and more accessible. And one of the things there is that, I suppose, in a way, making dystopian movies about all the things that could go wrong is able to achieve some of those aims. But when David Holz, the CTO at Leap Motion, got in touch with me and gave me an opportunity to actually design some of this stuff and try and set a precedent and shape the way that the technology actually comes about, I felt that it would be kind of churlish to turn him down and say, I'm just going to keep making dystopian movies. As a designer, I feel that although there are many challenges ahead, and AR is an incredibly powerful and potentially dangerous technology, you have to believe that there's a path through that. My first child, my son, was born on the 17th of October last year. And I think part of that as well is wanting to actually commit, rather than sort of standing on the sidelines, to actually get involved. I don't want anyone else to do it, basically.

[00:15:32.822] Kent Bye: Yeah, well, I think the vision that you had put into Hyper-Reality was, in some ways, that there's an underlying philosophical assumption that your visual space is going to be taken over by these centralized entities that are centered in their own profit or their own interests. They're not centered in your interest necessarily. They may be trying to hack your attention, or trying to in some ways gamify these things and create these other point systems and currencies and have a social score for yourself, and all these things are kind of designed to drive certain behaviors, to hack your fixed action responses to be able to drive actual tangible behaviors in the economy, which benefits them economically. That's one potential for using augmented reality, this kind of more competitive, yang expression of reality. And then there's the yin expression of reality, which is more cooperative and collaborative and trying to be more receptive, trying to center you in your own experience, trying to get you in your body, trying to get you present in your own experience. And I think that's what I find interesting with both virtual reality and augmented reality. It's like, with Joseph Campbell, he has his hero's journey, the monomyth, which is very yang. And I think that there's a competing yin archetypal journey, let's just call it that. And with that, it's allowing us to find new ways to get connected to both our bodies and our environment, but also to find ways to cooperate with each other. And how can we use augmented reality to tell stories by layering on top of reality so that we're more connected to our space or environment or the other people that are around us, having that flipped to the opposite, like maybe a utopian version of some of those things? It sounds like it'll probably be a while before we get to even knowing what that might look like, but there's just that sense of, you know, maybe this is more about centering us in our own experience and just getting us more present.

[00:17:24.208] Keiichi Matsuda: Yeah, I mean, I've got some ideas about that. Obviously, I didn't set out to make dystopian movies. It was more that once I started to unpack all the possibilities, I found myself going away from the kind of glossy Microsoft productivity vision kind of thing and towards something that I felt is more believable and kind of an extrapolation of how we already do things. The visual noise within it is something which a lot of people ask me about. And of course, it would be possible to do the whole thing without it being visually noisy. And you could have, quote unquote, nice, minimal design that doesn't get in your face, but is still able to track you in the same way and serve you up content in the same way, and it still has those quite insidious effects. So I kind of wanted to put that out there and just sort of show that as a possibility. But absolutely, I think that there are lots of also really incredible things that AR can do. It's interesting that you talk about centering on the self. I've been thinking a little bit about the possibilities for things like roleplay and the weaving of fiction into reality, and how that's such a powerful and, again, potentially dangerous force. On the one side, the bad side, you see the possibility for these filter bubbles and the fake news thing. But I think on the other side, which is maybe something that hasn't been explored so much, there's a potential for being able to experience life from many different kinds of perspectives. This idea here around making an environment that reflects you, it's all about the individual identity and who I am as a person compared to other people. But I find that to be maybe one of the less exciting recent trends, this kind of move towards this extreme individualism where everybody has to have an opinion about everything and, like, you have to, you know, fight your corner. I understand why that's emerged in some areas as well, but I also find that I have the most fun when I try to kind of destroy my identity and have less of it, you know, and experience things from other people's perspectives. I think that's where VR is quite powerful, but I also think that if you look at something like Pokemon Go, which is, as you say, a layering of a different reality over your physical world, I can now be attuned to more than one version of reality at the same time. So as I'm walking around a city, I'm actually, you know, also thinking about where all these different Pokemon are. And if you imagine that you could apply that to some more expanded concept: maybe we're sitting here doing this interview, but maybe I'm also a secret agent and I've got a document in my bag that I'm going to go and give to somebody when I see the marker come up, and that person I meet is a total stranger, but they're also playing the game. And you could have 10 games, as many games as you want, all simultaneously happening within a space. I think that could be kind of interesting, the idea that you might be more fluid with your identity. And maybe kids growing up with that wouldn't have any reason to actually have a fixed identity in the first place. Why bother? It's much more fun to experience things as a kind of flâneur.

[00:20:14.565] Kent Bye: Yeah, it reminds me of the different trends that I see in immersive storytelling, like immersive theater, where at this point you usually go to a physical location where you kind of enter into the magic circle, and then you get the rules of the experience. And then there's a combination of audience members and cast who are moving around the physical space, and you're interacting over a period of time, and sometimes that bleeds out into real reality, with cast members interacting with you so that you don't know what's real and what's part of the fictional reality of the experience that you're having. And so I do see this live-action roleplay of immersive theater adding these different stories to games, but also doing it in cooperation with other people, and in cooperation with the physical location, the Earth, being able to add these different layers of meaning to these different places. So, yeah, let's talk a bit about being here at Leap Motion. There's going to be some announcements next week. Right now, I'm at GDC, and I have no idea what is being announced, but I imagine that it involves a lot of the technologies that you've been working on, the user interfaces. It started in virtual reality, but it hasn't really taken off for a number of different reasons. One is the Unity game engine aspect of things, where you need to get bootstrapped to be able to deal with the colliders. But also, there have been other more compelling motion-tracked controllers that had a larger tracking range, even though you lose the ability to start to do things with your hands and use the affordances of embodied cognition that are possible with your hands. So I imagine that there's more of these overlaps with what's possible with Leap Motion, where you can prototype stuff in VR now, but when it comes to AR, there's going to be even more opportunities to do amazing stuff. So with that, why don't you tell me what is being announced and what's happening with augmented reality and Leap Motion?

[00:22:09.147] Keiichi Matsuda: Yeah, sure. So before I go into that final bit, I just want to be the company man for a second and pull you up on the thing about Unity, because we have excellent award-winning modules that make it very easy to integrate Leap Motion into projects.

[00:22:23.085] Kent Bye: Well, I guess what I mean by that is that it's taken a number of years for Leap Motion to build those, because there was a gap in the technology stack: Unity was built as a game engine for abstractions, like being able to exert your agency at a distance. Unity wasn't built to do interaction design with being able to grab objects and let go of them. And so there's been a bit of engineering labor that has had to happen in order to fix those gaps in the baseline of Unity's engine, and that is the modules that you've created in order to enable that. But it wasn't there from the beginning, which I think is part of the problem, why it was perhaps a little bit more difficult for people to just seamlessly integrate it. Had that been there two or three years ago, I think we would see a completely different sort of ecosystem with hand gestures, but that robustness of adoption hasn't necessarily happened, because I think it's been about finding that sweet spot of use cases. Anyway, I just wanted to respond to that.

[00:23:19.193] Keiichi Matsuda: It's a sort of incremental thing, right? We're a startup still, we're not a huge company with unlimited resources, so we are continually investing in improving our products. We'll have new tracking coming soon as well, so even with the original devices that people may have sitting on their shelves collecting dust, if you download the SDK now and plug it in, you'll find that you'll be able to integrate it much more easily, and the tracking should be much more robust than it was before. But anyway, yeah, I think all of our work in VR has also been trying to imagine how it could be used in AR and any of the other new flavors of reality which will be hitting the shelves in the next year or so. We have been recently doing some tests in augmented reality to try and understand how some of the design principles that we've been developing for VR can carry over and start to exist within physical environments. So I've been putting up on my Twitter account some little teasers, little tests that we've been doing, that have been blowing up. We hit the front page of Reddit this morning, which was pretty exciting. So there's obviously kind of an appetite for these kinds of interactions. And for us, the closer we get to AR, the more sense it makes to use hands. I think within VR environments, if you're playing a game, it's fine. If you have to tether yourself to a computer anyway, it's fine to have a controller and, you know, it's not such a big deal. But as soon as you get into a situation where you want to be moving around, maybe you have multiple people within the same space and you can't remember whose controller is whose, or you move into a situation like a classroom where you have 30 kids, each one with different controllers that need to be paired, the case for controllers drops and drops. And then we get to a situation like AR, where I can pick up a physical object with my hand, but if I want to pick up that virtual object, I have to go and use my controller to pick it up. It doesn't make any sense at all. So we feel that the more that we can progress the field in terms of how it's thinking about where the future of this is going, the better the case for the direct and natural interactions that you get through Leap Motion. So I guess by the time this goes out, we will have announced what we're working on. Essentially, everybody is concerned about the device. This is what's happening at the moment. Everyone is speculating. Is it a Meta 2? Is it a Magic Leap? But in fact, it's a device that we've made ourselves. This isn't something that we're intending at the moment to productize. It's more that we felt that one day all of these devices will be kind of commodified. And really what matters at that point is not the specifics of the field of view or anything like that, it's really to do with the experience. What experience does this provide? What's it for? How do you position it? How do you interact with those things? What are the mechanics? So I think if you're building something like a mobile app, you have a series of very set conventions, like I've got my burger menu up in the corner, I can have a list here, I have drop-down menus. Of course, all of that needs to be thought of again for VR, for AR, and for these different controller possibilities as well, and with hands as well.
So really like we're kind of focused really on trying to understand what the experience is and provide the best possible one and collaborate with as many different OEMs and manufacturers as possible.

[00:26:25.134] Kent Bye: So it's like sort of a reference design that other OEMs can start to implement? Are you building your own prototypes of it?

[00:26:31.947] Keiichi Matsuda: So we built the prototype as a way of really testing our own designs. We wanted to prototype using it, and we just built it pretty quickly. David Holz, our CTO, was kind of frustrated that all of the things that are on the market at the moment, because they had to go through that stage to become a product, are all heavily compromised in one way or another, you know, either because they had to fit a certain form factor or a certain price range. So you have problems, you know, with field of view, resolution, many different things. And he was thinking, well, if we just don't care about the size and don't care about the cost, we could make a really, really amazing AR experience. And sure enough, within three or four months, he did. So it's incredible. It's an incredible device to use. I've never seen anything like it. And there were certain very visceral and powerfully emotional things I got from using that for the first time. Everything was shot through the headset. So I was wearing this kind of pair of glasses with a webcam attached to the front of it, shooting through the headset itself. I was messing around with some screen recording software on my laptop, and during that time I was holding a virtual cube in my hand. And as I was looking on the screen at all this virtual stuff, my brain really felt that that virtual cube existed. And since then I've been trying to work out why that is. Obviously there's parts of it which are just due to the technology itself, but I also feel like the aesthetics of AR, like being able to design things which don't feel like holograms but feel like they actually exist within the space, is something that's going to propel AR from being something that you use to look at a 3D model to something which actually becomes enmeshed within the fabric of our reality.

[00:28:09.160] Kent Bye: Oh, well, yeah. So I've talked to Paul Reynolds, who used to be at Magic Leap working on their digital light field device, and he reports that the first time he saw a dragon, a virtual dragon obviously, not a real one, it came over and landed on his speaker, and he said it was almost like he felt it land on his finger. Obviously there was nothing on his finger, but I think there's this thing about the visual field dominating, and the mind is able to fill in the gaps. So there's some sense that we trust our vision so much, and maybe this is a short-term thing, maybe we'll become more skeptical about what we see and our brains will evolve and adapt so that we don't get tricked or fooled like this. That's in some sense what Jaron Lanier feels, that it's sort of a cat and mouse game that we're in right now with the technology and how we grow and evolve with it. So I like to think of both virtual reality and augmented reality as archetypal realities or symbolic realities. In some ways you're getting the ideal forms, like a platonic ideal form that is in your reality. And so maybe there's some part of our imaginal self that's connecting to that. And it gets into these deeper philosophical questions of whether or not consciousness is emergent or fundamental. And maybe, the more we go down this AR route, people will have enough direct experiences where they report things like feeling that these virtual objects are real and not being able to explain it, and maybe there's some sort of imaginal component to that. That's more metaphysical and philosophical, and it's yet to be seen, but the phenomenological experience is that you see these objects that aren't real, yet there's enough of an architecture to that object that you just believe it's real and you treat it as if it's real, especially if you express agency and it responds. And I think that's the thing: if it's responsive to your interactions, then I just feel it. And I felt it in this demo here, even if it was in VR, even though I wasn't actually grasping anything with my hands. I was just like, OK, this is real. I can feel like I'm able to do things with it and just forget the technology is even there.

[00:30:31.532] Keiichi Matsuda: Yeah, that's really wonderful. I joined the company in September, and I spent a long time just trying to absorb all of the knowledge that people have here. Martin and Barrett, who you spoke to earlier, as well as our front end team, have got a huge amount of experience in trying to understand what feels good. And I've been really surprised, because a lot of my design process from architecture has been around, you know, a kind of logic or a kind of artistic flow which gets applied to something. But in fact, when we're designing for hands in virtual and augmented reality, so much of it is about feeling. I was really surprised by that. We have ideas for interactions that work great on paper, but if you put them up against other ones which focus on the actual texture of that interaction, the sort of magic there, and, to use a phrase from the game industry, finding the fun of that interaction, you can create much more satisfying things that are much quicker and more intuitive to use. So I'm definitely learning about this stuff as well.

[00:31:28.128] Kent Bye: Well, I haven't had a chance to try this new AR headset that you've been working on. But one question that I have is that the existing sensors that you put on the front of a virtual reality headset really require you to have your hands in a position that's in front of the camera, which is a pose that is OK for short interactions, but for eight hours a day, that's not tenable. So how have you been able to design for non-fatiguing interfaces, interfaces you'd be able to use for six, eight hours with these types of gestures? Because everybody that I've talked to that looks at Minority Report-style augmented reality design is like, yeah, that looks great, it's very cinematic, but it's actually terrible. You'll be exhausted if you do that. And so in order to do that non-fatiguing interface, I would imagine that you would have to have some sort of field of view that would allow you to have your hands further down, without having to raise them up right in front of your face.

[00:32:27.518] Keiichi Matsuda: Right, yeah, so I mean that's a problem that we've already kind of solved with our new devices, which have a 180 by 180 field of view, which I don't think was actually possible before.

[00:32:36.563] Kent Bye: That's a 360, what does that mean, like all around? 180 vertical and 180 horizontal.

[00:32:40.905] Keiichi Matsuda: Okay, so that's like a half of a sphere. It's a hemisphere, exactly, yeah. So I think it must be a little bit shorter than that once you embed it within a device, but we're able to get a very wide field of view, which is much wider than you'd be able to see through any VR headset that's around at the moment. That gives us quite a lot. But there's also this other idea that interacting with a computer is about giving it messages and commands and pressing buttons and pulling levers, and that that's the way in which we will be interacting with technology in the future. I kind of take issue with that, and I don't think that in the future we will be sitting at a desk for eight hours a day punching commands and interacting with computers in that way. I feel much more that as these technologies start to get integrated with our reality, the kind of communication that we have with machines becomes much less device-focused or interface-focused and much more about existing in your environment. At the moment we're sitting here in a room and we're not punching any buttons or pressing any things, we're just here talking, and I feel like that kind of experience is absolutely possible in AR and VR. We don't need to be thinking all the time about those different things. So I've been talking for a little while about this kind of post-controller world where, rather than thinking about our connection with computers as one based around controllers and devices, we move to this kind of human-environment interaction where we just exist in this world, and software in that world is kind of always on, it's persistent, it's multi-user, it takes multi-modal input from hands, location, device sensors, but also things like our social media profile, who we are, who's around us, contextual information like location, time of day. All these things will be understood together in terms of how to make an experience.
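For context, a 180° by 180° field of view means that any hand position in front of the sensor plane falls inside the tracked volume, which is why low, resting hand poses stop being a problem. Here is a small geometric sketch of that kind of check; the coordinate convention, range limit, and the narrower comparison FOV are assumptions for illustration, not published sensor specs.

```python
import math

# Sketch: does a hand position fall inside a sensor's tracked field of view?
# Assumed headset-local coordinates: +z forward, +y up, +x right, units in mm.

def in_fov(point, h_fov_deg, v_fov_deg, max_range_mm=800.0):
    x, y, z = point
    if z <= 0:                          # behind the sensor plane
        return False
    if math.hypot(x, y, z) > max_range_mm:
        return False                    # beyond assumed tracking range
    h_angle = math.degrees(math.atan2(abs(x), z))
    v_angle = math.degrees(math.atan2(abs(y), z))
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

hand_at_rest = (0.0, -350.0, 150.0)     # hand low and barely forward, ~67° below center
print(in_fov(hand_at_rest, 140, 120))   # narrower illustrative FOV: False
print(in_fov(hand_at_rest, 180, 180))   # hemispherical coverage: True
```

With a true hemisphere, the angular test reduces to "is the hand in front of the sensor at all", which is exactly the property that makes non-fatiguing, hands-at-your-sides interaction possible.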

[00:34:19.513] Kent Bye: Well, I've gone to the Microsoft Build Conference and seen a lot of the early HoloLens applications that are out there. They tend to be designed for the enterprise: for architecture, engineering, design, anybody that's doing spatial design. Also sales and marketing, especially for things that are spatial, like medical equipment. You can go into a room, scan it, and then start to position what that would actually look like. Obviously, with a lot of phone-based AR, we have things like the IKEA app, anything where you're starting to see what does this refrigerator, what does this set of furniture look like in my room, in my space, in context. There's a lower cognitive load for you to see it in context than there is for you to imagine it on the showroom floor of IKEA. So things are going to be moving more towards putting objects into our space. But most of the stuff that I've seen so far hasn't had that level of hand control, that level of agency for you to actually interact in compelling ways with these experiences. And I think that with this Leap Motion headset, whatever it ends up being called, I would imagine that you're able to do more interaction with stuff. But I don't know if that's going to be gaming, or if you're going to be more towards the enterprise and working with physical spaces, or doing things with productivity, or embodied cognition, or social applications in different ways.

[00:35:38.849] Keiichi Matsuda: Well, as I mentioned before, we currently have no plans to productize the headset itself, but we are in the business of licensing our technology to OEMs or people who are building headsets. We feel that, I personally, one of the things that was interesting to me about joining Leap Motion is because I just think that hands are such an obvious part of the future. I don't really feel like I need to work very hard to say everything needs to be using hands, it's like we'll need hands, so we kind of free ourselves up in a way to think about other things that we can do and how that future is going to unfold. So I don't know, there may be situations where you want a controller for some things, there may be situations where voice is a better interface, but you're always going to need all of those things together. We kind of have a luxury in a way of being able to talk about the future in a more general sense without trying to push our product really hard. But at the same time, we have to paint a compelling vision of what could be possible. Otherwise, people will just kind of rest where we are and say, well, it's OK, because we're just going to disappear off into VR for like a 30-minute session playing a game, and then we're going to come out again. And it will just be like watching a movie or something like that, where in fact, the potential is so, so much bigger.

[00:36:47.355] Kent Bye: Yeah, I saw a documentary with Steve Mann, and he was walking around with augmented reality headsets, and he had a chorded keyboard, so he was able to do different combinations of key presses, kind of akin to a court stenographer, to type things out. And so I think, you know, typing in these augmented reality interfaces is going to be a challenge. Typing with your fingers is so efficient. I don't know if you can do virtual keyboards where you type things with your fingers to do the text input. I think that's been a huge challenge within virtual reality. Or if there's also going to be an evolution of gesture control such that, you know, in five to ten years we all start to learn things like American Sign Language or different types of sign language, such that you're doing symbolic translations of language with your hands to do different levels of embodied cognition. So maybe that's further out, where people make the connection between thinking and embodiment in different ways. But what are some of the other interfaces that are now possible? Are there keyboards, or what kind of input can you do now with this?

[00:37:50.223] Keiichi Matsuda: Well, I was thinking about, you know, text input. Obviously, that's a big problem for VR anyway. Having virtual keyboards that hang in the air was a feature of my first film in 2009. And as soon as I put it out, I realized what a ridiculous idea that was: how, in the face of all this mind-blowing technology, we're still basically using typewriters, which just seems insane. And I started to think about not how do we solve text input, but what problem is text input trying to solve, and then how can we solve that? So if what you're trying to solve is communication, and chat is the answer there, then why don't we look at how else we could solve communication and avoid having to sit down and enter information into a computer again? Often, focusing on problems that exist, or friction that exists in the world, and then trying to find other ways to address them is a better approach than trying to look at the components of how we currently do things and replicate them in AR or VR. So, case in point: although I love what online banking can do for me, I really, really hate the online banking interfaces that we have to use. I don't know if you have this in the States, but in the UK we have these stupid little devices that you have to enter a password into, and then you press the thing and it gives you a code and you enter it in there. And then even when I'm in there, I have to find the right button, and they always change the interface so everything's in a different place, and I have to read lots of text to find it, and what I want to do is actually something quite simple. I was thinking about how else that could work, and I was thinking maybe a nice interface for that would be this little penguin who just stands on every street corner or follows me around, and anytime I need to access information about that, I can just sort of click my fingers and he comes out in front of me and I can ask him questions. It's like a Clippy 2.0. Exactly, we're bringing Clippy back. You'll be happy to hear. But I think what's interesting about that as well, I'm going to talk about Clippy for a bit. We have virtual assistants now. We've got Siri, we've got Alexa, and I think of these as kind of like gods within monotheistic religions. If you subscribe to the Amazon ecosystem, then all you should need in your life is Alexa, and she will solve all of your problems. You'll be able to ask her any question you like, and she'll be able to come up with the answer to it. And of course, that means that when she fails to do that, then you kind of feel like an idiot for asking in the first place, and that kind of creates a bad relationship there. But us as human beings, we're very good at understanding who can do what, right? Like, I don't know you so well yet, but I feel like next time we talk, I'll have a better understanding about the kinds of things that you've seen, maybe experiences we had together, things that you might know about, things you might not know about, and I can modify my interaction with you so that it meshes well with that. I think we could do the same thing quite easily with virtual assistants, especially if we are able to personify them in AR or VR. So I like the idea that when I look at my penguin, I know my penguin isn't going to be able to get me a taxi. I know I'm not going to be able to order food through my penguin.
And I know that, in fact, my penguin is so stupid that it can only really understand a few simple commands. But I can use my social intelligence to remember those things. And for me, having that kind of interaction with that character is far more satisfying than going through that process of lists and buttons and passwords and logins. So at Leap Motion we've been tending away from these abstracted interactions that involve memorizing a set of commands and performing them in sequence in order to affect some action, which is in a chain to then affect something else. That's requiring so much from the user in terms of thinking in the computer's language. What we want to do, and what I think technology has been more successful at recently, is actually coming towards the user and being able to communicate with the user in a human language. Leap Motion is often seen as this very futuristic technology which lives in this kind of tech future like Minority Report or something like that. But in fact, it's the most natural and intuitive interface you can possibly imagine. The human hand is something that we use to operate every other interface. So really, we think that Leap Motion paired with AR or VR should potentially be more easy and intuitive and natural and obvious than a smartphone or any other kind of device that we use. It should be your grandmother's technology of choice. Otherwise, we're failing.

[00:42:11.524] Kent Bye: Yeah, I think there is this trend of moving into spatial computing where it becomes much more embodied, and that level of embodiment means removing all of those abstractions, so it just becomes like natural skeuomorphism: if there's a doorknob, you just turn the doorknob to open the door, and we get it because we have an embodied experience of what that means in the world. And I think that computing in the future is just moving more and more towards that. I guess one question, though, is that up to this point, Leap Motion has been primarily focused on the hands. But if you talk about this hemisphere, I don't know where the cardinal line of going up and down is going to be, because if you're able to track your hands and your feet, then you start to get into a situation where you can actually get more and more of your embodiment into computing, and start to either express yourself in social situations with gesturing, or be able to use not just your hands, but your entire arms. I realize there are occlusion issues and all that stuff, but I'm just curious if you have the ability to track anything beyond just your hands.

[00:43:14.248] Keiichi Matsuda: I mean, you know, our engineers here are the best computer vision guys in the world. Hands are the most complex thing. Tracking body, legs, feet, whatever, we could add that. It wouldn't be a very difficult thing to do. It's more about, you know, what's the use case for it. I think what you're talking about in terms of embodiment is a major use case, and we've seen in location-based entertainment people integrating Leap Motion because it feels strange to not be in that space, especially in multi-user environments as well, where you want to be able to see what other people are doing and wave back to somebody. You can also inhabit the role of a character in a much more meaningful way by seeing, you know, what your hands look like. So I think at some point, adding body tracking, limb tracking, is not out of the question, but we're a startup, as I said, and we need to be focusing on the direct things ahead of us. So yeah, I think it's definitely in the future somewhere.

[00:44:10.330] Kent Bye: And for you personally, what are some of the biggest open questions or open challenges and problems that you're solving or questions that you're trying to answer that's really driving your work forward?

[00:44:22.542] Keiichi Matsuda: Hmm, that's a great question. It's a difficult one as well. I think since I joined, a lot of the things have been about trying to understand the capabilities of the technology. So, you know, I think of multiplayer as one thing, but in fact, there's many, many different types of multiplayer and all those different things have possibilities and drawbacks. And as we kind of move through the tech roadmap of not just us, but with other companies that we work with as well, we start to see, you know, new things becoming possible. I think the question that we're talking about is... the big one that has been centered around this conversation is really about what does the future of interaction look like. We've gone through the evolution of punch cards and command line input, and then the graphical user interface, and then mobile interfaces, touch interfaces, but obviously we're not at the end of that. We're just really beginning, and this whole new field of immersive media is opening us up to a whole new paradigm of computing. and defining what that looks like and how it feels like and how it works is really what our goal is here at Leap Motion. So I think, obviously, the hundreds and thousands of questions that come within that are probably best expressed as that kind of overarching question of, yeah, what is the next UI?

[00:45:39.200] Kent Bye: Awesome, great. And finally, what do you think is kind of the ultimate potential of augmented reality, and what it might be able to enable?

[00:45:48.645] Keiichi Matsuda: Wow, what have we talked about today? We've talked about the destruction of identity. We've talked about the collaborative building of worlds and the democratization of architecture. We've talked about the potential of being able to access things more naturally and in more accessible ways. I suppose this is leading to a situation where the interface becomes so close to us that we're kind of inside it, right? We've been tapping on keyboards, then we've been tapping on screens, and now we're kind of tapping in the air. But I suppose people with their heads really in the future have been talking for a while about brain-computer interfaces, and ways in which the line between virtuality and physicality can be demolished once and for all. I think a lot of people are asking about when that's going to happen, how it's going to happen. My focus has always been on why. What are these experiences going to enable? I think in a way we're a little bit behind in that regard. We've been making the hardware and getting it to a certain state where it's usable. But really, a lot of people are struggling with use cases. And if you start to imagine how all of these things come together, like within my films, like Hyper-Reality, you can start to see how it makes everything charged. It makes everything smart. It enables so much more to happen. But to be able to get to a point where that's possible is, I guess, another thing entirely. Yeah.

[00:47:11.703] Kent Bye: Awesome. And is there anything else that's left unsaid that you'd like to say?

[00:47:15.609] Keiichi Matsuda: Well, I'm just, you know, thank you very much for having me on. I've been a massive fan of the podcast. It's kind of one of the things that made me really engage with the industry. So, yeah, thank you for your work.

[00:47:26.348] Kent Bye: That's amazing to hear. Yeah, thank you for saying that. Were you listening to it before you made HYPER-REALITY, or after?

[00:47:32.771] Keiichi Matsuda: It was actually around the time I put it out that I started listening. A guy called Pendleton Ward, who's the creator of Adventure Time, was a big fan of the podcast, and we'd been chatting for a little while, and he put me on to you.

[00:47:45.382] Kent Bye: Okay, yeah, I know Pen. He's hung out with me in VRChat, and I did an interview with him as well. But yeah, it's great to hear that; it's an honor to hear that. I think it's about that yin, archetypal journey of information. Information is a yin currency, such that the more information I give out, the more information I get. So it's this cooperative thing, and I think that's what I see in companies like Valve, who have taken a decentralized approach: the Vive wouldn't exist had they not had the listening, and the structure of an organization that would allow them to really push the limits of what the technology would enable. And I feel like they're having a lot of success with the trackers by really listening to the community. There are other things happening within the VR space with miniaturization and mobile VR, but having just come from GDC, I'm really struck by my experience of seeing the latest that's out there. The most cutting-edge stuff was at the Valve demo booths, and there's miniaturized stuff that's going to be broadening the ecosystem, but in terms of what's possible, Valve is pushing the edge, the HTC trackers are pushing the edge, and the stuff that I'm honestly seeing here with the hand interactions is something that has probably been disregarded in a lot of ways within the VR industry, just because we don't know what to do with the affordances of the hand gestures and the movements, and how that affects our cognition and our minds. It seems like, with Leap Motion going into this AR headset, we're not going to want to be holding controllers when we have the freedom of our hands, and I think it's going to mean much more natural, intuitive interfaces. So anyway, it was just...

[00:49:29.197] Keiichi Matsuda: Not just in AR, but in mobile VR as well, I think it becomes much more of a consideration: the ability to interact with things more directly makes it more accessible and opens it up to much larger audiences. But it's really amazing to be working in the industry at a time when we're just setting the rules. It's this kind of era where it feels like we've just got the mouse and we're trying to work out what to do with it; until now we've been kind of prodding around in the dark, and now we start to have the technology to do really, really rich experiences which can start to replace and take over the kinds of things that we do today. Setting up those rules and trying to understand how we're going to interact in this next 10 to 20 years of computing is really our mission in the design part of Leap Motion. But we don't think it's a space that's going to be defined by any one person or any one company. We think it involves the community, people getting together. And in fact, even at an individual level, to take it back to the HYPER-REALITY stuff as well: for people to think about what kind of world they really want, and then to act in a way that enables that to happen, is, I guess, the message that I'd like to leave all the listeners with. We are responsible now. We are in this mode of defining the next era of computing, and if we do a great job then we're going to move into this incredible world where we'll all be superhuman, all these different new forms of culture are going to emerge, we'll have this amazing type of communication, and we'll be able to banish all those frustrating interactions to the past. But if we don't get it right, then it could lead us into this really dystopian world where everything is tracked and commodified, and we're not aware of who's doing that or who owns what. I think there are many dangers to overcome, but the rewards are too amazing to ignore. So having this podcast at the center of this community is, I think, a really, really important thing.

[00:51:19.995] Kent Bye: Awesome. Well, that feels great. And thank you so much for joining me today on the podcast. Thank you. Cheers. So that was Keiichi Matsuda. He's the vice president of design and creative director at Leap Motion. So I have a number of takeaways from this interview. First of all, I found Keiichi's background and how he got into this immersive computing industry super fascinating. He's coming from an architecture background, so he's been thinking about how designing spaces can impact someone's feelings and emotions, basically how the external world changes your internal subjective experience. What he sees happening in the future is that more and more we're going to be projecting our inner subjectivity out onto the world through these augmented reality glasses. And so more and more we're going to potentially go down this path of isolating within our filter bubbles and having our own experiences of reality, versus finding the ways that we're going to be able to actually connect to each other. So there are these questions of what the multi-user interfaces in these shared realities look like, and what the mechanisms are that will prevent us from going to the extreme of these filter bubbles. Or is that just a representation, using technology, that is a mere reflection of what is already happening? Is it going to amplify it and make it worse? I think there are a lot of deeper implications for how this plays out. What I find fascinating is that Keiichi's work for the last eight years has been using film and his background in architecture to do these immersive spatial designs, looking at how the boundaries between our digital life and our public life in these spaces are blending and merging together. I think that he wants to see this merging of these two worlds, but also a deconstruction of identity, more of a fluidity of identity: not that we're completely stuck in our individual identities, but that we could be in one physical space, and within that physical space there are a number of different contexts under which identity comes out. So you could be playing an augmented reality game where you're a character in a role that you're playing, while also walking around a physical space. One of his films, Augmented City 3D, started to show how people were annotating physical reality with these little notes, so you'd be walking down the street and you would see: oh, this is where this happened in my life, or I used to work here, or I used to live here, or different ways that people's stories are connected to physical locations. And I think as we start to move down this path of augmented reality, we're going to see this seamless blending of your personal story, being able to potentially annotate physical reality with it and share that with your friends. Now, there was an article that Keiichi wrote called Cities for Cyborgs, written around the 50-year anniversary of the term "cyborg," where he was thinking about the 10 rules that you need in order to design different immersive spaces. And one of the things that he said in that document, and briefly alluded to in this interview, was that the differences between the home and the workplace are going to be blurring.
And I think that one of the visions he sees is contexts where you're at home but able to do work, or at work but able to do things from home. I think that's actually the trend of where things are going, but I don't necessarily expect that there's going to be a continual erasing of what's private and what's public. If anything, what may actually need to happen is more of a clear line between what is public and what is private, and what you share amongst the context of your different peer groups. The potential of augmented reality to annotate a layer of reality, and to potentially connect you to people as you're walking around in physical space, may give you a little bit more serendipitous connection, and you may want that. You may want to have that serendipitous connection with somebody: you may be at the grocery store, and there's somebody right nearby whom you'd want to go interact with, and you may want to know about that. But I think there are also other implications: with that level of detail of tracking, there are all these different privacy issues.
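To make that annotation-plus-privacy idea a bit more concrete, here is a small, purely hypothetical sketch of the kind of data model it implies: notes anchored to coordinates, each tagged with the peer groups allowed to see them, so the public/private line is an explicit property of every note rather than an afterthought. All names and fields here are illustrative assumptions, not any real AR platform's API.

```python
# A hypothetical data model for location-anchored AR annotations with
# explicit per-note visibility. All names and fields are illustrative.

import math
from dataclasses import dataclass

@dataclass
class Annotation:
    lat: float
    lon: float
    text: str
    author: str
    visible_to: frozenset  # peer groups allowed to see this note

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine)."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_notes(notes, lat, lon, radius_m, viewer_groups):
    """Return only the notes this viewer is allowed to see within range."""
    return [
        n for n in notes
        if distance_m(n.lat, n.lon, lat, lon) <= radius_m
        and (("public" in n.visible_to) or (viewer_groups & n.visible_to))
    ]

notes = [
    Annotation(51.5007, -0.1246, "I used to work here", "keiichi",
               frozenset({"friends"})),
    Annotation(51.5033, -0.1195, "This is where we met", "kent",
               frozenset({"public"})),
]
# A stranger nearby sees only the public note, never the friends-only one.
print(nearby_notes(notes, 51.5010, -0.1240, 500, viewer_groups=set()))
```

The design choice worth noticing is that visibility is attached to the data itself rather than decided at render time, which is one way to keep that clear line between public and private enforceable.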
I think one of the other things that I saw within Keiichi's work was this exploration of the commodification of our future and the quantification of our digital identities: what does it mean to attach these different aspects of our identity and broadcast them out? Part of what he explored in his dystopian piece HYPER-REALITY was this over-commodification and over-quantification of everything, and the question of what we actually want to create in order to be more connected to ourselves or to other people. So I think there are a lot of really interesting design decisions that we're going to have to make. But I was fascinated by the fact that he was saying that augmented reality is an opportunity to make what was previously invisible visible, and that there are new ways of exploring philosophical ideas or abstractions using the spatial affordances of augmented reality, which allow you to make them more visible and tangible. And that, to me, is super fascinating: what does it mean to overlay these new layers on top of reality, and how is that maybe going to allow us to understand and grok these larger abstract or higher-level philosophical ideas? Now, one of the things that I was also really struck by was what Keiichi said about how he was surprised: he originally thought he was going to apply a series of logic to design these types of interactions within augmented reality, but what he actually found was that you really have to go through an iterative process to see what actually feels good with your hands. So there's a bit of embodied cognition that happens with these satisfying interactions: as you do them with your hands, you first figure out what feels fun and interesting to do, and from there you start to build out the interactions. It wasn't what he was expecting; he was expecting a logical workflow, but it became much more about how the interactions make your body feel. And I can say, from actually going through a lot of these different interaction design patterns with my hands, that they really have found ways of interacting that are fun. I think there will be a trade-off between using your hands and using buttons, which always give you 100% consistency. The way I like to think about it: whenever you want an action to happen 100% of the time and you can never make a mistake, that's when you want a button. For example, if you're doing CAD drawings, you click a button so many times within some of these programs that you want a physical button that always works. The problem with hand gestures is that they don't always get picked up 100% of the time (see the sketch after this paragraph), but you sometimes want your hands to be free. So one of the unique affordances of augmented reality is that you're embodied within a physical reality: sometimes you need your hands free to interact with that reality, and you don't want controllers in your hands. So there are applications, maybe mobile computing, or places where you just don't have physical buttons, where gestures are what makes things happen. At the same time, there are all these interesting aspects of embodied cognition, where our body is an extension of our cognition, and being able to use our fingers is going to allow us to think in new and different ways once we find the user interaction patterns that work within both virtual and augmented reality. What has perhaps been overlooked about Leap Motion is this cognitive aspect of using your hands to interact with computing technology, and it being much more natural and intuitive: not having to teach people how to hold controllers and which buttons to touch, but hopefully offering much more intuitive interfaces. Now, there's a pattern I see a lot in technology companies where they use the trope of "even your grandmother could use this." I want to say that there are problems with that: why not say this is the grandfather's choice of technology? There are some problematic things with introducing gender into it, so try experimenting with saying "grandfather's device of choice" rather than "grandmother's device of choice." But the larger point is that it's just so natural and intuitive that you don't have to teach somebody how to use it; you could just give it to them and they could figure it out.
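To make the gesture-versus-button trade-off above concrete, here is a minimal sketch, in Python pseudocode rather than any real hand-tracking SDK, of one standard way to make a noisy gesture signal behave more like a reliable button: hysteresis, where the gesture must cross a high threshold to engage and drop below a lower one to release. The `pinch_strength` input and both threshold values are assumptions for illustration.

```python
# A minimal sketch of debouncing a pinch gesture with hysteresis.
# NOTE: illustrative pseudocode, not Leap Motion's actual API;
# pinch_strength is assumed to be a per-frame value in [0.0, 1.0]
# from whatever hand-tracking system is in use.

class PinchDetector:
    """Treats a noisy pinch-strength signal like a reliable button."""

    def __init__(self, engage_at: float = 0.8, release_at: float = 0.5):
        # The engage threshold sits above the release threshold, so small
        # fluctuations near the boundary don't toggle the state.
        assert engage_at > release_at
        self.engage_at = engage_at
        self.release_at = release_at
        self.is_pinching = False

    def update(self, pinch_strength: float) -> bool:
        """Feed one frame of tracking data; returns the current pinch state."""
        if not self.is_pinching and pinch_strength >= self.engage_at:
            self.is_pinching = True   # gesture "pressed"
        elif self.is_pinching and pinch_strength <= self.release_at:
            self.is_pinching = False  # gesture "released"
        return self.is_pinching


# Usage: a signal hovering around 0.75 never falsely engages, and a
# brief dip to 0.7 mid-grab never falsely releases.
detector = PinchDetector()
for strength in [0.2, 0.75, 0.85, 0.7, 0.9, 0.4, 0.1]:
    print(strength, detector.update(strength))
```

A debounce like this doesn't make the sensor perfect, but it converts "doesn't pick up 100% of the time" from random flicker into predictable engage-and-release behavior, which is most of what a physical button gives you.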
And finally, I think one of the visions that Keiichi has with some of these design experiments is thinking about how this technology can allow us to be more connected to other people, and how we're going to navigate people having different worldviews or multiple ontologies. Are there ways to represent some of this information with spatial metaphors, so that we can have an experience within augmented or virtual reality and, at the end of it, come to a better understanding of other people's ideas, perspectives, or worldviews? First of all, how do you even visually represent somebody's ideas, worldviews, or philosophies? A more bounded way of doing that might be within the context of a story: perhaps there's an immersive theater experience where you have the augmented reality glasses, and information is overlaid on top of the experience as you're experiencing it, so that you start to get more information and metadata about the characters. And maybe there will be other ways of visualizing what state information is known within the experience, as well as what someone's ideas or beliefs may be, being able to say: this is my belief about this person; this is my level of trust. I think there are a lot of things that are not quantifiable, such that you'd have to figure out a way to quantify them, but also to come up with a universal memory palace for all knowledge. And because there is no universal representation of all knowledge, it's a bit of an open question what the underlying mathematical structure of all of reality is, and creating a knowledge representation of all knowledge that doesn't have a consistent mathematical structure is problematic. Plus, there are lots of paradoxes and things that you can't necessarily represent within a technology based on a form of logic that can't handle contradictions: in classical logic, a single contradiction lets you derive anything at all. So you either have to go to something like a paraconsistent logic, or you have to figure out how to represent some of those contradictions in some way without everything exploding. I think that's actually a huge reason why, within artificial intelligence, there hasn't been a way to come up with common-sense reasoning.
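As a toy illustration of what a contradiction-tolerant representation could look like, here is a hedged sketch based on Belnap's four-valued logic, where a proposition can be true, false, both (contradictory evidence), or neither (no evidence). Conflicting assertions about one proposition are contained rather than licensing arbitrary conclusions. All names here are made up for illustration.

```python
# A toy belief store using Belnap's four truth values, sketching how a
# paraconsistent representation absorbs contradictions instead of
# "exploding" the way classical logic does. All names are illustrative.

from enum import Enum

class Truth(Enum):
    NEITHER = 0  # no evidence either way
    TRUE = 1     # evidence for
    FALSE = 2    # evidence against
    BOTH = 3     # contradictory evidence

class BeliefStore:
    def __init__(self):
        self.beliefs: dict[str, Truth] = {}

    def assert_fact(self, prop: str, value: bool) -> None:
        """Accumulate evidence; conflicting evidence yields BOTH."""
        incoming = Truth.TRUE if value else Truth.FALSE
        current = self.beliefs.get(prop, Truth.NEITHER)
        if current in (Truth.NEITHER, incoming):
            self.beliefs[prop] = incoming
        else:
            self.beliefs[prop] = Truth.BOTH  # contradiction is contained

    def query(self, prop: str) -> Truth:
        return self.beliefs.get(prop, Truth.NEITHER)


store = BeliefStore()
store.assert_fact("alice_trusts_bob", True)
store.assert_fact("alice_trusts_bob", False)  # contradictory report
print(store.query("alice_trusts_bob"))        # Truth.BOTH
print(store.query("bob_trusts_alice"))        # Truth.NEITHER
# Crucially, a contradiction about one proposition says nothing about
# any other proposition -- no explosion.
```

This is only the bookkeeping layer; a real paraconsistent reasoner would also need inference rules that respect these values, but even this much shows how contradictory evidence about one belief doesn't poison the rest of the knowledge base.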
By the same token, once you were able to figure out some methodology for representing common-sense reasoning, I think it would become easier to start using spatial metaphors to represent somebody's worldviews and ideas. This is an area that's probably ripe for research: could you give somebody an experience within augmented reality that gives you a representation of how they see the world? Now, that's challenging, because there are all of these a priori schemas within their mind, categories and metaphors that filter reality. So is there a way to use augmented reality as a step toward a synthesis, to help imagine how someone may be perceiving the world, given those a priori category schemas in their mind? Going down the deep end of some of the things he's talking about here: how can you give an immersive experience that allows you to experience somebody else's life from many different perspectives? There are all these different issues that come up in trying to actually do that, but I think in the context of storytelling there are experiments that could be done, where perhaps you go through the same immersive theater experience repeatedly and look at it through different lenses: different worldviews, different perspectives, and different data that's available as you're watching. This is something that Rose Troche has done with her series Perspective, a virtual reality experience that shoots the same scene from four different perspectives. You go through and re-watch things from each perspective, and there's information that is included and not included in each of them, so depending on which order you see them in, you have a different overall experience. I had the experience, as I watched more and more of that story, of getting more pieces of the puzzle and understanding more of what was actually happening. So I think that's likely a good model for how you'd start to mimic this process of giving somebody an augmented or virtual reality experience that allows them to experience life from a different perspective. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So, you can donate today at patreon.com slash voicesofvr. Thanks for listening.
