I did an interview with Michael Markman at Meta Connect 2025 talking about all of the latest updates to ShapesXR, the VR design and prototyping tool, and then we dive into some of his hot takes after he got a chance to try out the Meta Ray-Ban Display glasses and the associated Neural Band. He sees the Neural Band as essentially transforming your hand into a mouse that provides a simplified navigation system (probably closer to the D-pad on a TV remote), where index-finger-to-thumb serves as a functional left click and middle-finger-to-thumb serves as a functional right click, which has been enough to build the foundation of most modern HCI for computer software for the last 57 years, ever since The Mother of All Demos debuted the mouse in 1968. See more context in the rough transcript below.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my coverage of MetaConnect 2025, today's interview is with Michael Markman, who is a designer, founder, and head of design for ShapesXR. So I've been wanting to do an interview with Michael for a number of years now, and I finally published a previously unpublished interview I did with Inga as part of my coverage of Augmented World Expo back in 2021. So there's a lot that's changed within ShapesXR in that time, and I wanted to get a bit of a catch-up to see all the different things that they've been working on. They're sort of a VR design and prototyping tool that sometimes moves beyond just prototyping and into actually building out scenes that could be shipped, adding more and more fidelity and more and more features. Rather than needing a developer to do some prototyping or layouts for designers, they wanted to create a tool for designers to explore different user interface paradigms, get a rough sketch of what things might look like, and then either take that rough sketch and more fully build it out, or sometimes even deploy some of those things directly, since they have direct integration with Unity. Michael also had a chance to try out the Meta Ray-Ban Display glasses and had a lot of thoughts he wanted to flesh out, and as a designer he's super excited to see where the future of face computers is going to go. The thing that he was breaking down with the Neural Band was that you're essentially translating your hand into a mouse controller, but it's a little bit more like a TV remote: you have left, down, up, and right with more of a two-axis D-pad using your thumb and index finger, with thumb-to-index-finger as accept or move forward, and middle-finger-to-thumb as cancel or go backwards. So it's sort of like the index finger being left click and the middle finger being right click, which has basically been the core paradigm of human-computer interaction for computer software for a long, long time. Adding that ability to do left click and right click is actually something that's difficult with hand tracking alone, since it's hard to differentiate between them, but having something like the Neural Band is going to open up new possibilities for software from a design perspective. So that's a big takeaway that I got from this conversation, but also just some of his reflections on where all this is going here in the future, as somebody who is a self-described face computer enthusiast who wants to see the future of both human-computer interaction and design, but also XR devices and what they're going to be able to enable here in the future. So that's what we're covering on today's episode of the Voices of VR Podcast. So this interview with Michael happened on Thursday, September 18th, 2025 at MetaConnect at Meta's headquarters in Menlo Park, California. So with that, let's go ahead and dive right in.
[00:02:52.666] Michael Markman: Hi, my name is Michael Markman. I've been in the XR industry for over nine years now as a designer and founder. And today, I am the head of design for ShapesXR. We're an AR, VR design and prototyping tool.
[00:03:07.658] Kent Bye: Great. Maybe you could give a bit more context as to your background and your journey into the space.
[00:03:12.982] Michael Markman: I got into the space in late 2016. I founded a company where we also built a VR prototyping tool, but it was a bit early. Since then, I've worked as a designer on various apps, such as Poker VR and Arthur. And for the last four years, I've been at Shapes XR.
[00:03:30.233] Kent Bye: So my understanding of Shapes XR is that it's sort of like a rapid prototyping tool to help go in and do the type of gray boxing that would normally happen in Unity. But you could maybe do it in a collaborative environment or just a more optimized workflow for the idea generation and just start to put together different aspects of the user interface. But I'm just curious to hear how you describe what is Shapes XR.
[00:03:53.956] Michael Markman: Yeah, I think that's a really great way to put it. It's a collaborative 3D design tool specifically aimed at AR, VR. I think where ShapesXR is really unique is when you're actually designing in the medium that you're creating for. You put on a headset, and you have all your design tools literally in your hands. When you draw, you draw in 3D space. When you move around interfaces, you see different elements at the correct scale, at the correct distance from you. You're able to really feel out the design in a way that you just can't do on a flat screen or in Unity. It's at its best when you're able to reach this sort of flow state. And you know, we use Shapes to design Shapes. And so when I'm in Shapes, I'm vibing out the design. I'm just feeling like, oh, is this UI element better close to me, or is it better if it's further away? And is this UI element personal, or is it more general to the other people in this room? And you're able to literally feel the spatial relationships as you're designing. And I think that's what makes Shapes really unique.
[00:04:57.601] Kent Bye: Okay. And so, yeah, I know I had a chance to talk to Inga at Augmented World Expo, on the day before you were moving from App Lab into the official store. You had just won an Auggie. And so, yeah, that's, I guess, around four years ago now. So what's happened since then at ShapesXR?
[00:05:14.502] Michael Markman: Yeah, the team has grown. We've been lucky enough to have a lot of amazing teams as our customers. As we launched, we went into monetization. Last year, we launched Shapes 2.0. It was a major redesign where we rethought the whole app. We'd added a lot of features, and so we needed to manage that complexity. About a month or two ago, we debuted a feature that I've been very excited about, which is animation. So now you can animate between frames in Shapes. And it's really intuitive, and it's still very spatial. We're always trying to find this balance of using the medium of VR and AR, spatial computing, if you will.
[00:05:57.278] Kent Bye: And you'd mentioned that there was animation that was added. Was Tvori the original manifestation of Shapes, which seemed to be more of an animation tool for cinematic VR?
[00:06:08.063] Michael Markman: Yeah, so Tvori was an animation tool, with a dedicated timeline and keyframe-based animations. So sort of like character animations, more specifically targeting the creative field for making films. Shapes' animation system resembles Figma more. You have your individual frames, as we call them, which are the different states of your UI or your UX or whatever you're building, and then it just tweens the differences. And so it's a much simpler form of animation, specifically for people who are building anything interactive. It's more for getting a higher-fidelity feel, whereas Tvori was a full-on creative tool.
[00:06:47.106] Kent Bye: What's the genre of software that you self-identify with for ShapesXR?
[00:06:51.109] Michael Markman: I would say like design and creativity I think is the category. Like we like to say that like it's sort of the equivalent of like what Figma is for mobile and web design. We are for AR, VR, and spatial design.
[00:07:04.721] Kent Bye: OK. And what are some of the use cases where you've found a lot of traction, either in different industry verticals, or how are you seeing that customers are actually using ShapesXR?
[00:07:15.449] Michael Markman: Yeah. So I guess our beachhead use case was anyone who was creating AR, VR software. That can range from games and creative apps like TRIPP and FitXR, which are designed in Shapes, to teams at Meta working on AR OSs, teams at Google that are working on Android XR, and Microsoft working on Windows for VR. If you're building AR, VR software, you use Shapes, as well as a lot of training and simulation apps. So for anyone who's building interactive content for AR, VR, Shapes is a great way to visualize that. That's been our major vertical, but we're also seeing traction in what we call real-world design use cases. So for example, Mondelez initially was using it for training, but we've also seen their packaging teams say, instead of sending these mock-ups over the mail to each other, what if we just import the models into Shapes, and then we can just meet in VR and look at them? That's been really interesting as we've been expanding these more real-world design use cases with Mondelez, but also the Mayo Clinic. When they're thinking about operating room design, they can just step into the operating room. It's anywhere where spatial context is helpful.
[00:08:38.800] Kent Bye: When I think around collaboration in a 2D context, say like writing a document, you have comments and ability for people to highlight and do strikeouts. And so you're annotating the 2D text for a way that has a pipeline for things getting approved. And so I'm just curious how you've started to sort out the affordances of annotation and collaboration that is translating what we might think of as comments or other ways of pointing or animating or other things that may be unique to a spatial 3D context. So what are some of the different ways that you've been able to really facilitate this idea of collaborative design?
[00:09:16.963] Michael Markman: I think I have a great answer for this, actually. So specifically, we built this thing called a Holonote. And what you do is you can leave a comment, but what it does is it records your avatar, your body's movement and your voice. It's like leaving a hologram for someone else to find to explain elements of a design or leave a comment. So it's kind of like the way the hologram worked in Star Wars, where like, Obi-Wan, you're my only hope. But it's like, hey, I think this might be better if it was greener and bigger.
[00:09:45.953] Kent Bye: Nice. And so what are some of the other features that you're really proud of? Since you're thinking a lot around design and VR design and creating a tool for design, I'm just curious around what you think really stands out from a design perspective of what you're doing at Shapes XR.
[00:09:57.678] Michael Markman: Yeah, I mean, I'm really proud of our overall UI, for one thing. But in terms of specific features that we've built, we recently released a measure tool. That was really cool. You can point at one wall, and then you point at another wall, and it creates a ruler between them. We found that's really useful, particularly for these real-world design use cases. We have some cool features around designing for AR headsets. So one of the big limitations for AR devices is their field of view. And we actually have an artificial field of view simulator. So you enable it, and you can set a preset, like Magic Leap 2 or the Snapchat Spectacles, and it'll cut off content that wouldn't fit. And so you're able to design for AR glasses using a VR headset. And I think that's a real mind-bending way of doing it. Another feature I'm really, really proud of, and we've seen a lot of really cool uses with, is our really robust Figma integration. We've seen that there are some UIs that can still be two-dimensional. But again, they still exist in the spatial context. And so what we've done is we've allowed designers to bring in Figma frames, and when you update them in Figma, they'll update in Shapes. Then there's this other cool thing we did. I'm very proud of this one. This is a good answer, I think. We'll see. It could be bad. So in Figma, you can create prototypes or links. And what we've done is we allow you to bring in the Figma prototype. It's almost like a web view, basically a web page. But most UIs in VR have a translucency. So what we've done is we added a special shader. And if you make the background of your Figma prototype lime green, you can then chroma key out the background of the Figma prototype, and it'll show up as a translucent, interactive UI in Shapes. It's scrollable, and all your interactions are there.
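As a rough illustration of the artificial field-of-view simulator Michael describes, here is a minimal sketch of the underlying idea: test whether a point in head space falls inside a preset angular frustum and hide it if not. The preset names and numbers are illustrative placeholders, not exact device specs or the actual ShapesXR implementation.

```python
import math

# Illustrative presets (placeholder values, not exact device specs).
FOV_PRESETS_DEG = {
    "narrow_ar_glasses": (40.0, 30.0),   # (horizontal, vertical) in degrees
    "wide_ar_headset":   (70.0, 55.0),
}

def in_artificial_fov(point_head_space, preset="narrow_ar_glasses"):
    """Return True if a point (x, y, z in head space, z forward, metres)
    falls inside the simulated AR field of view."""
    x, y, z = point_head_space
    if z <= 0:                      # behind the viewer: never visible
        return False
    h_deg, v_deg = FOV_PRESETS_DEG[preset]
    # Angle off the forward axis, horizontally and vertically.
    yaw = math.degrees(math.atan2(abs(x), z))
    pitch = math.degrees(math.atan2(abs(y), z))
    return yaw <= h_deg / 2 and pitch <= v_deg / 2

# A UI panel 0.6 m to the side at 1 m depth is cut off on the narrow preset
# but survives on the wide one.
print(in_artificial_fov((0.6, 0.0, 1.0), "narrow_ar_glasses"))  # False
print(in_artificial_fov((0.6, 0.0, 1.0), "wide_ar_headset"))    # True
```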
[00:11:56.253] Kent Bye: Wow. So it's kind of similar to AR?
[00:11:59.455] Michael Markman: Similar, yeah. So it's for AR, VR designs, anything that's flat. Anyone who's prototyping in Figma can bring that into Shapes.
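To make the lime-green chroma-key trick concrete, here is a small sketch of the same idea done offline with NumPy and Pillow: any pixel close to the key color gets its alpha zeroed out. The key color and threshold are illustrative, and this is not ShapesXR's actual shader, which presumably runs per-pixel on the GPU.

```python
import numpy as np
from PIL import Image

def chroma_key_to_alpha(path, key=(0, 255, 0), threshold=90):
    """Turn pixels close to the key color (lime green by default) transparent,
    mimicking keying out a Figma prototype's background."""
    img = np.array(Image.open(path).convert("RGBA"), dtype=np.float32)
    # Euclidean distance from each pixel's RGB to the key color.
    dist = np.linalg.norm(img[..., :3] - np.array(key, dtype=np.float32), axis=-1)
    # Zero the alpha channel wherever the pixel is close enough to the key.
    img[..., 3] = np.where(dist < threshold, 0, img[..., 3])
    return Image.fromarray(img.astype(np.uint8), mode="RGBA")

# Hypothetical usage with a captured Figma frame:
# chroma_key_to_alpha("figma_frame.png").save("figma_frame_keyed.png")
```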
[00:12:07.240] Kent Bye: That is cool. So I guess in terms of these different types of rapid prototyping tools, there's always a question of export and different export formats. Is it like a glTF file? I know that there's a lot of talk within the Metaverse Standards Forum about making glTF more of a container for entire scene graphs. There's also USD formats. There's also FBX formats, which is what Unity usually imports. So what kind of exports are you able to do from ShapesXR?
[00:12:38.510] Michael Markman: So we support glTF and USD export. So you can export a USDZ file, or you can export a glTF file. Then we also have a Unity plug-in. Each space you create in Shapes (a space is what we call a file, because it's a spatial canvas) has a code associated with it. And then in our Unity plug-in, you enter the code, and it'll just pull in all of your geometry.
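On the export side, here is a small sketch of how you might sanity-check a glTF/GLB file exported from Shapes before wiring it into an engine, using the trimesh library to list meshes and their real-world sizes. The file name is hypothetical, and this is separate from the Unity plug-in workflow Michael describes.

```python
import trimesh

# Load a scene exported from Shapes (hypothetical file name).
scene = trimesh.load("shapes_export.glb", force="scene")

# List each mesh and its bounding-box size in metres, a quick way to confirm
# that real-world scale survived the export before pulling it into an engine.
for name, mesh in scene.geometry.items():
    size = mesh.bounds[1] - mesh.bounds[0]
    print(f"{name}: {size[0]:.2f} x {size[1]:.2f} x {size[2]:.2f} m, {len(mesh.faces)} tris")

print("scene extents:", scene.extents)
```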
[00:13:04.150] Kent Bye: Nice. So you can just start to composite everything within Unity, but do kind of the iterations and the actual changes within Shapes XR. Does it change the code every time you change it, or does it automatically pull from this link or association?
[00:13:18.695] Michael Markman: It'll pull whatever the latest geometry is. And what we found is, again, Shapes is really useful for that mid-fidelity, where you're like, here's how big things are going to be, here's their scale, here's how far away from the user it's going to be. A designer is able to really define and really tune all of that; they're able to iterate before they hand it to the developer. And then the developer just is like, okay, it's going to be here, it's going to be this big, it's going to be this color. What we saw before Shapes is that you do a lot of iteration in development. It's like imagine having to build a building to test out whether or not it was a good idea. So you build it, and now you've already written a bunch of code, and now you don't want to change everything. You try it in the headset for the first time, it's been weeks since you designed it, and you're like, okay, that's way too big or way too small, or this whole concept doesn't actually translate to VR the way I thought it would when I designed it on a screen. And in Shapes, you just immediately know if something works. It's hard to put it into words, because the whole point is that you experience it in the headset.
[00:14:17.625] Kent Bye: Well, I know that when I think of different art programs within VR, like Tilt Brush is very painterly, so you do paint strokes to create these shaders that are then overlapping and very non-optimized. And then there's Gravity Sketch, which has a little bit more of a low-poly sculpting, more of shaping the planes, but also kind of manipulating them in a way. So it's a little bit more like molding clay, as it were. I guess Medium is probably more of a sculpting tool, and maybe Gravity Sketch is a hybrid, I don't know what you would call it; it seems like more meticulous shaping of things, but you're able to have a very optimized, kind of minimalist low-poly approach to generating it out. And so those are my reference points when I think of creating spatial context. And so with Shapes, what was your approach for trying to maybe simplify it, or also create something that's optimized, but also have the ability to give it a certain shape or structure that gives it a particular character? Maybe it's like gray boxing, just to kind of lay it out and then hand it over to someone else who's going to fully flesh out that idea in something like Blender or Maya. So I'm just curious to hear a little bit around how you start to think about the tool set within ShapesXR in order to enable this type of rapid prototyping with the primitives that you need to be able to do that.
[00:15:33.658] Michael Markman: Yeah. So what we've done is our tool set is really optimized around scene building. The metaphor I like to use is that Gravity Sketch is really great for making a tree: modeling a tree, modeling its branches, how big the trunk is. Shapes is really great for laying out a forest. And so we have tools for snapping objects to the ground and manipulating them really quickly, scaling them up and down, changing colors, doing things that are really useful for scene layout. You can grab an object, then you push the joystick forward and the object flies away from you. And you can grab stuff from a distance, move it around. So it's really good for scene building. Also, the way our movement in Shapes works is you kind of grab the world and you can scale it up and down, kind of like pinching and zooming, but it's three-dimensional. But you can't rotate the horizon line. In Gravity Sketch it's similar, but you're rotating the object with your wrist, whereas in Shapes you're always staying still. You're always manipulating the scene and it's staying upright; you're never flipping or rotating the horizon. Again, everything is optimized for scene building.
[00:16:39.025] Kent Bye: Nice. And so what's the normal workflow that you see? After exporting from ShapesXR, what do people typically do? After you get all the approvals and people say it looks good, then what happens to continue to develop out the scene?
[00:16:53.359] Michael Markman: So once you've exported out of Shapes, at that point you might record some videos in Shapes, and Shapes will stay as this design artifact. For example, internally, what we do is we create Notion documents, and then you can actually embed different views from Shapes into Notion as little windows, little web views. And then you have the export that goes into Unity as kind of like, here's how big it is, the distances, and then the interactions that have been approved from the prototype in Shapes. At that point, you build it. Developers start writing code, and you're in development. The engineering part begins.
[00:17:34.938] Kent Bye: I think talking to Inga and remembering some of what she said, it was that typically when you started to do that type of gray boxing, you would need to go straight to the developers. But this is allowing more of an intuitive interface for spatial designers to come in and do a first pass that then could be handed over to the developers that then are able to translate that design work that had been done.
[00:17:54.798] Michael Markman: And it's not just the first pass. When you interviewed Inga the first time, at the time, we were like, Shapes is a really low-fidelity, wireframing, rapid, rapid, rapid ideation tool. What we've done over the last couple of years has really increased the fidelity of what you can create in Shapes. Shapes isn't just for a first pass anymore. Now you can go mid-fidelity, high-fidelity. We have new materials: you can make things shiny, you can make them reflective, you can add custom textures, like normal maps. You can really achieve a level of visual fidelity that wasn't possible in the initial release. And we've seen that because Shapes is this tool where your work is being presented not just to other designers. It's for other stakeholders. It's for product managers. It's for executives. It's for people who are signing off on budgets. And you're able to go through this whole process a lot faster because you're not writing any code. In Shapes, with animations now, with the new materials, you can create an animated, high-fidelity prototype that really outlines in detail what your vision is. That's really important because a lot of people don't have a VR brain. They can't just look at a sketch and be like, that's what it's going to be like in VR. Fidelity matters. If you're pitching someone on a polished, good-looking experience, you're not wireframing or grayboxing; you want a higher-fidelity model of a car, for example. You go, hey, it's going to be shiny, and the door's going to open, and the door's going to open really slowly. And the door opening slowly is part of the experience, and it's part of what you're designing. When you're a VR designer, you're a director. You're directing experiences for people. And Shapes is that tool. It's a way for you to express yourself.
[00:19:37.150] Kent Bye: Now, you mentioned the animation a little bit, and I'm curious around dynamic user interfaces, because I guess one way to think about it is pushing a button that's physically in the space and then having something happen, so having some sort of scripting or if this, then that type of thing, but also user interface. And so pull up a menu and maybe choose one of the menu options, but have something change. And so how are you starting to wire together this way for people to walk through the user flow for either choosing options or pushing buttons that are in the environment? And just curious how you start to do that type of either visual scripting or coding or how that actually gets translated into something that could be plugged into Unity.
[00:20:20.808] Michael Markman: Yeah, so the model we use is trigger-action. So we have: when the scene loads in, or after a certain amount of time, or when this element is hovered on, when this element is clicked, when this element is touched, when it's collided with. These are the different triggers that we've researched. We talked to our users and asked, what are the different triggers that people use in a majority of XR interactions? So we have those as triggers. And then the main action in Shapes is go to frame. So on one frame I have a button, say, a cube that I've decided is my button, and it's full-sized. And then in the next frame I squash the cube. So in the first frame I'll have "on press, go to frame two," and in frame two it's squashed. And so when I touch it on frame one, it squashes as though it's a button. And then you're like, okay, after frame two, after 0.3 seconds, go to frame three. And in frame three, a car scales up or a menu flies in. With animation, you can actually have it fly in from the left or the right or up or down, and scale up and down. These things are really important. I think these micro-interactions, these animations, are really important in VR. Because again, it's not an app, it's a place. The way it feels is much more important than the way my mobile apps feel. It affects the user's experience, and your user's experience is their entire world. It's their entire reality.
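To picture the trigger-action, go-to-frame model Michael lays out, here is a minimal sketch: frames are named states holding per-object properties, triggers jump between frames, and numeric properties are tweened toward the target frame over the animation. The class, property, and trigger names are illustrative and not the actual Shapes data model.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    # Per-object properties to tween toward, e.g. {"button": {"scale": 1.0}}.
    props: dict
    # Trigger -> target frame, e.g. {"on_press": "pressed"}.
    triggers: dict = field(default_factory=dict)

def lerp(a, b, t):
    return a + (b - a) * t

def tween(current, target, t):
    """Blend every numeric property from the current frame toward the target
    frame; t runs from 0 to 1 over the animation's duration."""
    return {
        obj: {k: lerp(v, target.props[obj][k], t) for k, v in props.items()}
        for obj, props in current.props.items()
    }

frames = {
    "idle":      Frame("idle",      {"button": {"scale": 1.0}}, {"on_press": "pressed"}),
    "pressed":   Frame("pressed",   {"button": {"scale": 0.6}}, {"after_0.3s": "menu_open"}),
    "menu_open": Frame("menu_open", {"button": {"scale": 1.0}}),
}

# Halfway through the press animation the cube is squashed to 0.8x scale.
print(tween(frames["idle"], frames["pressed"], 0.5))  # {'button': {'scale': 0.8}}
```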
[00:21:52.008] Kent Bye: Well, I've seen that ShapesXR has done some sponsorships with Ben Lang and Road to VR, and he was doing a video series for a while, and I think it's still ongoing, and periodically he'll have some deep thoughts looking at some design or user interaction that he's seen from his many thousands of hours of gaming and looking at all these different applications. And so he's cultivated some deep thoughts and opinions on design. And there seemed to be some overlap with ShapesXR and what type of things you're trying to promote in terms of XR design. So just curious to hear a bit about that partnership and some other things that you're seeing of other folks that are talking about or thinking around ideas of design within XR.
[00:22:29.198] Michael Markman: Yeah, I mean, that came about because I ran into Ben. We had the same flight going back from some conference. He's like, hey, I'm working on this series. I think Shapes would be a really good fit. Can you put me in touch with Inga? So yeah, that happened. It's just very aligned with what we're doing, and so it was a very natural fit. But yeah, for us, doing that kind of design thinking, we have a design blog at Shapes where we've written some articles about, hey, here's things to consider when you're designing for eye tracking. From what we've seen, from what we know, from our own thoughts on designing our interfaces or the way we've seen Shapes used: hey, here's how big UI elements need to be. Hey, you need to have a certain amount of padding, because eyes aren't as accurate as a laser pointer. For us, if we can invest in the community, the better people are at designing, the way we see it, the more they're going to want Shapes. If you're thinking deeply about XR design, then you're going to find more value in Shapes than if you're thinking on a surface level. It's something we see when people are transitioning to XR design from mobile or web, from a more traditional design background. They're not thinking about spatial problems yet; they're only going to run into them. And they may not know that, oh, the scale of a UI is going to be really important in a way that it's not for mobile and web. Oh, distance is going to affect how a user perceives the visibility of an app or a UI. If a UI is far away from you, you're going to perceive it as being public, whereas if a UI is attached to your hand, you perceive it as local to you. And educating people on that, or educating people on just VR design details, these little fine details that make all the difference, is very much aligned with our goal and mission at Shapes.
[00:24:15.364] Kent Bye: What are some of the biggest mistakes that you see people do within the context of XR design?
[00:24:21.005] Michael Markman: Biggest mistakes. Man, I've seen so many just huge UIs, UIs that are just way too vague. They're sometimes way too close. I think people don't always have enough feedback. I think sound is often overlooked. And again, it always feels like it's the last thing that gets polished, but it makes a huge difference in VR, because sound in mobile and web design is just not a thing. Usually people don't have their sound on when they're browsing their phone. People almost always have their sound on when they're in a VR experience, right? It's not secondary; it's a big part of the primary experience. Other stuff I've seen is UIs that don't look at you, or sometimes where you spawn in, you might spawn in too low or too high. But yeah, just not maintaining spatial relationships, I think, is something you see happen a lot, especially because the less you define your design, the less detail and specification you provide a developer, the more you're handing off design to development. And developers are not judged by the quality of the design. They're judged by how buggy and how functional it is. So they'll make decisions that might not align with you, but you as a designer weren't able to give them guidance. So if you didn't give them guidance, it's out of your hands. You can then go in and be like, oh, no, no, can you change it, can you change it? And they're like, I've already written this code, and the way I wrote it means that change is actually really difficult. And you're like, oh. So you have a lot of people with good intentions. And a lot of times, designers are like, oh, I know it's bad, but we kind of built it, and there wasn't time, and we had to prioritize all this, yada, yada, yada. And that's why I'm like, this is why the design process is so important, and it's why tools like Shapes become important. Not that I'm biased on that, but.
[00:26:07.474] Kent Bye: So what's next for Shapes XR?
[00:26:12.153] Michael Markman: Right now, we are actively working on importing larger models. We've been really excited. Oh, I guess the next big thing is we are one of the launch apps for Android XR.
[00:26:25.126] Kent Bye: Oh, wow. Congratulations.
[00:26:26.767] Michael Markman: Thank you. Thank you. We redesigned a lot of, well, we adapted Shapes to work entirely with hands. Traditionally, Shapes was an app that you used with controllers, and it was many sleepless nights of figuring that out. We prototyped a lot of different interactions that are based on hands and also eye tracking. So yeah, we'll be able to talk more about that once that goes into release.
[00:26:54.520] Kent Bye: Well, another headset that has hand tracking and eye tracking is the Apple Vision Pro, but that's a completely different, well, I don't know, if you're doing Unity-based development, then it may not be so different, but I'm just curious if you've been starting to look at designing Apple Vision Pro within the context of ShapesXR.
[00:27:12.310] Michael Markman: Yeah, so we've definitely thought a lot about how ShapesXR would look like on the Vision Pro. It's definitely on our radar. It's somewhere on our roadmap. I can't speak too much about it yet.
[00:27:24.638] Kent Bye: OK. So a little bit too far into the future, I suppose. OK, so we're here at MetaConnect. And curious to hear any of your thoughts. I hear you might have some hot takes.
[00:27:38.367] Michael Markman: Yeah, I mean, from the Connect keynote, I'm excited about the Ray-Ban Displays. I just demoed them. And, you know, I'm a real AR, VR true believer. When I say true believer, I believe that eventually this is the next computing platform, that this will replace our phones. It will replace our screens. It will replace our desktops and laptops. It will all be a combination of AR glasses and VR headsets and even hybrids in between. And so, you know, it's a monocular 2D display, but I'm really excited. Specifically, what I'm most excited about is the neural wristband. And the level of fidelity, the fact that I can swipe with my thumb. It's a podcast; I'm swiping with my thumb right now. Yeah.
[00:28:27.697] Kent Bye: Yeah, you make a fist and you kind of like on your index finger, you're kind of like going left and right or up and down. But I think you can do it maybe without the index finger, but the haptic feedback just really helps you give the sense that you're actually like triggering. I don't know if that actually makes a difference in the neural band. What Mark Zuckerberg said was that over time that the neural band improves with the machine learning so that you can more and more make minimalist gestures. And I think I was sort of having to overemphasize all the different gestures. And in fact, I happened to have had the wrong orientation. They had it for left hand instead of right hand. And so they were like, okay, swipe right. And I'm swiping right. Nothing was happening. And then when I swiped the opposite direction, I was like, oh, this is sometimes, you know, you have the choice of like your scroll bar as to whether or not it goes up or down depending on the direction. And I was like, is it that? But they literally just had the wrong handedness. And once they fixed that, it was better. But for me, I have glasses and my prescription is negative 5.5. And they didn't even offer me corrective lenses if they even had them. And so-
[00:29:29.051] Michael Markman: It's a huge difference. Yeah.
[00:29:30.312] Kent Bye: So they didn't even offer them to me. So they were like, you can't wear your glasses. OK. But then I had a very blurry experience in a way that I regret not seeing what the corrective lenses would be. But yeah.
[00:29:44.100] Michael Markman: I would definitely head back there, because they now have corrective lenses. Maybe they didn't earlier. I mean, I wear glasses as well. They were able to more or less match my prescription. So I guess my hot take is that, having seen Orion last year and knowing where they're headed, I'm like, okay, there's a lot here that's going to make it into Orion. The visual language of the UI is very similar to what they showed with Orion. So you can see a lot of the work that went into Orion has also gone into these glasses. And I have a group chat with my friends called Face Computer Aficionados, and there we talk about all things face computer. And to me, again, it's AR glasses, VR headsets, it's all the same thing. It's all spatial. It's all computers that are on your face. It's all blending, and it's computing that blends into your day-to-day life. And from that perspective, this is a huge step forward. Again, I'm a 3D guy. I want everything to be 3D. I want holograms that attach to surfaces and then are occluded by other objects in the scene. And this isn't that, but it's headed in that direction. And so, on the one hand, it's like, oh, it's not a VR headset. It's not a new headset. But it is still a step in the direction of that vision. And for Meta right now, it's obviously important. Obviously, everything's about AI these days, right? We are at peak AI, but the way they're applying AI is in ways that are going to be very relevant to spatial computing. You know, there's a map app that you use with swipes on your hand. Okay, well, in the next generation, that map app will be more three-dimensional. It'll exist in a six DOF, six degrees of freedom, tracked space. And this isn't that, right? This is more like a follow-up to Google Glass in some ways, or spiritually. But it's polished, and they're making advancements. For example, the hardware that they put into this is going to then inform the hardware that goes into whatever commercial device Orion turns into. So yeah, I'm feeling optimistic about the overall future of spatial computing, because again, you're seeing these steps. These AI glasses, this genre, where in some ways you're seeing a bunch of startups that are like, oh, we're AR glasses, AR glasses, AI glasses. We're becoming accustomed to contextual computing, where you have a device on your face that uses the real world as context, that can see the world as context, and then adapts content and interfaces to that context. And that'll only become better with waveguides and holographic displays. I see the vision. It's so tantalizingly close. It's been nine years of tantalizingly close, but again, it's a marathon, not a sprint. I've been saying, oh, we're three to five years away from mass consumer adoption (I've been saying that for almost 10 years), but I really think we're three to five years away from mass consumer adoption.
[00:32:58.952] Kent Bye: I think it's still going to be quite a while of going through different waves and iterations. And I do think that what used to be CTRL-labs is now the Meta Neural Band, I think is what they're calling it. And I think this is going to open up lots of really interesting gestures. I think if you look at something like the Apple Vision Pro, it's been very exaggerated, like an index-finger-to-thumb pinch, and it has to be visible to the camera. And so I'm kind of used to it being very clear. And then to move from that paradigm to not having to be clear, and to have what is essentially a pinch of either your index finger or your middle finger to your thumb: index finger to thumb is accept, and middle finger to thumb is reject or backwards. So index finger to thumb is forwards, and middle finger to thumb is backwards. And then to wake it, for some reason, it's the middle finger clicking twice. So the one where they're associating going from being asleep to being awake, that to me feels like a confirmation. So to have something that's more confirming on the middle finger, which then from that point on for the rest of the experience is disconfirming, it messed with my brain, because I kept doing the wrong one. So it was something simple like that, and I don't know if it was just me, but I've heard some other people that were like, oh, they were clicking the wrong ones. It's not completely intuitive once you do it, because there's no other equivalent other than web browsing. So I don't know if you have any thoughts.
[00:34:24.033] Michael Markman: Well, it's new, right? We're still figuring it out; the design team is still figuring out what works. And the first step of making something that works is making something that doesn't quite work, right? And then they'll get feedback from people like us and they'll make improvements to it. But one thing I do want to highlight is the fact that what the neural wristband has enabled is that middle finger pinch and index finger pinch are distinct gestures that you can now reliably do. Because I'll tell you, as someone who's designed gesture-based interfaces, you can't really reliably do that with camera-based hand tracking. Index finger pinch is really the only reliable gesture. Look at any of the major operating systems when they do hand-based input: it's always the index finger. You can sometimes detect a middle finger pinch, but it's not consistent. Sometimes it gets confused, like, what am I pinching here? There are so many different angles. But the neural wristband absolutely knows if you're index finger pinching or middle finger pinching. And that opens up so much necessary complexity. You need that. Just having that means you can design more complex apps. You can design more robust applications. So yeah, maybe they haven't nailed it yet. It took me a while. I thought it was left, right, swipe, and then this would be confirm, with the thumb to your index finger; I just thought tapping the thumb to the index finger, and then he's like, no, no, you do this. I'm like, okay, so it's swipe, swipe, pinch. That's what I thought it was. Then eventually I got it. I'm like, oh, okay, that makes sense intuitively. And then I guess the middle finger pinch is the home thing. But just the level of fidelity that we have on the hand, the fact that I can swipe up, down, left, right on my thumb, is crazy. That's a level of fidelity that was previously just for controllers. And the fact that we can do that with hands, to me, means the vision for Orion, the vision for actual smart glasses without controllers, feels a lot more viable than it did before the neural wristband. And that is huge, right? That's such a huge unsolved problem. Gaze and pinch was something that Apple... I mean, there are opinions on it. My opinion is that it's excellent. I much prefer it to a laser pointer coming off my hand. But combining eye tracking with something like the neural wristband, to me, feels like the killer input device. And then you have the haptics; the fact that the wristband gives you haptics is great. But again, the level of fidelity that it enables is so necessary, and I don't know if it's fully appreciated how important that is, because most people just haven't had to design a complex interface to work with fingers and pinches. Imagine not having right click and trying to make a creative tool. Just having one click. Just one click. A mouse, just one click. And you're like, great. Awesome.
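As a way to picture the point about distinct pinches, here is a small sketch mapping hypothetical wristband events onto the mouse-era vocabulary the conversation keeps returning to: index pinch as select/left click, middle pinch as back/right click, and thumb swipes as a two-axis D-pad. The event and action names are made up for illustration and are not Meta's actual SDK.

```python
from enum import Enum, auto

class WristbandEvent(Enum):
    INDEX_PINCH = auto()          # thumb to index finger
    MIDDLE_PINCH = auto()         # thumb to middle finger
    MIDDLE_DOUBLE_PINCH = auto()  # wake/home, as described in the demo
    SWIPE_LEFT = auto()
    SWIPE_RIGHT = auto()
    SWIPE_UP = auto()
    SWIPE_DOWN = auto()

# Mouse-era equivalents: two reliably distinct pinches give you the
# left-click/right-click pair that camera-only hand tracking struggles with.
ACTION_MAP = {
    WristbandEvent.INDEX_PINCH: "select / confirm (left click)",
    WristbandEvent.MIDDLE_PINCH: "back / cancel (right click)",
    WristbandEvent.MIDDLE_DOUBLE_PINCH: "wake / home",
    WristbandEvent.SWIPE_LEFT: "d-pad left",
    WristbandEvent.SWIPE_RIGHT: "d-pad right",
    WristbandEvent.SWIPE_UP: "d-pad up",
    WristbandEvent.SWIPE_DOWN: "d-pad down",
}

def handle(event: WristbandEvent) -> str:
    return ACTION_MAP.get(event, "ignored")

print(handle(WristbandEvent.INDEX_PINCH))   # select / confirm (left click)
print(handle(WristbandEvent.MIDDLE_PINCH))  # back / cancel (right click)
```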
[00:37:18.458] Kent Bye: Yeah, that's true. That's a great point. And yeah, I think the most interesting thing that's come out of Connect for me is that neural interface. And I had originally thought, oh, well, maybe they'll eventually get to two wristbands and you'll be in VR. But I really think that the form factor of one dominant-hand interaction with the headset is all you can really rely on people having.
[00:37:41.001] Michael Markman: You don't really need more as a primary input device. For games, I'm sure we'll have controllers or whatever, but you don't have two mice for your computer, right? You have just one mouse, two buttons, and we've made entire generations of computing work with that.
[00:37:59.090] Kent Bye: Yeah, so we've basically turned the thumb, the index finger, and the middle finger into a mouse.
[00:38:04.353] Michael Markman: Yeah, and then future headsets will definitely have eye tracking, and that, it's just, it'll feel like magic. It'll feel like magic. The Vision Pro, at its best, feels like magic. This device today, the demo I tried, at its best, felt like magic. You know, when I dropped my hand out of sight, but was still swiping, and then when I figured out how to pinch properly, and, like, just all clicked, and, like, the UI was reactive, and I just, you know, was focused in the right way, it just felt magic. I'm like, oh, OK. This is clearly the curve. We're heading in the correct direction. And that's the point of events like this, right? That's what you go. You go to Meta Connect, hoping that you'll walk away being like, yeah, we're heading in the right direction.
[00:38:43.289] Kent Bye: Yeah, I had a bit of a blurry experience with the monocular display. And I'm sure you've had experiences in Unity where there's Z-fighting, where if you shut one eye and then the other, it kind of changes textures, and your eyes see this kind of conflict. I had a little bit of that experience, where with a soft gaze it just felt like a Z-fighting experience, where my brain was like, okay, I know that this is here.
[00:39:09.067] Michael Markman: You got to try it with the corrective lenses. I think the corrective lenses are going to make a big difference.
[00:39:13.951] Kent Bye: Yeah, maybe that's because of the blurriness of that.
[00:39:16.673] Michael Markman: I mean, I did do the thing where I closed my left eye, and I'm like, okay, that does look better. But I think that's okay for the purpose of this device. I did get used to it towards the end of my demo. I didn't want to take the glasses off because I'm like, ah, okay, I really feel like I got it now, especially with the pinch. But yeah, I mean, monocular is definitely not the end-all be-all, right? The future is not a 600-pixel monocular display in one eye, but it'll get there, right? Early VR headsets had horrible god rays and you could barely read text, and now we take reading text in a VR headset for granted. That's crazy. That's crazy that we're just like, yeah, I can read text. Of course I can read text. What do you mean? Of course you can read text. This thing used to be just a bunch of pixels in your face.
[00:40:02.147] Kent Bye: Great. And finally, what do you think the ultimate potential of XR and the future of spatial computing might be and what it might be able to enable?
[00:40:10.313] Michael Markman: I mean, I think the simplest way to put it is that everything that we do now on computers, you know, desktops, laptops, mobile phones, will be entirely replaced by spatial computing. That's the ultimate future. And what that enables is that everyone will be able to use a computer. If you look at the history of computing, specifically the history of HCI, human-computer interaction, the first computers were huge, they were bulky, they were entire floors of buildings, and the way you would interact with them was you would write out punch cards and feed them into the computer, where they were translated into ones and zeros. That's a tiny group of people that were able to operate computers. Then, a few decades later, we have computers that you can operate with a keyboard. You can now type words to the computer, and a slightly larger cohort of people could interact with it. The level of abstraction was getting lower and lower. The next big innovation is suddenly you have a visual GUI, a keyboard, and a mouse. You have a mouse that you point and click on stuff with. You click on a folder and that opens up the files that are in the folder. It's getting less abstract and more and more like our world. Then you have touch, multi-touch: you can pinch and zoom on a page to get closer to it, you can just use your finger to tap a screen. And that expanded who could use a computer to a larger and larger cohort. And then, to me, spatial computing is the final promise, where using a computer is just the way you interact with things in real life, literally. I open a door by grabbing its handle and pulling on it, and now that's how I'll interact with computing: through physical gestures. And to me, that means anyone can use it. Anyone can drive a car, and driving a car, if you break it down as a program, is actually a hugely complex task. You see a light, you reduce the angle at which your foot is on a pedal, you reduce the angle a bit to reduce the speed, and then you're also turning this other wheel at a different angle. You drive with your butt, because it's all feel. It's all physical actions and feeling and movement, but it's really intuitive because it's physical movement, because it's connecting these complex tasks to physical gestures. Anyone can drive a car, and now anyone will be able to do complex tasks on a computer. And that to me is the promise of spatial computing: it truly democratizes computing. We thought that mobile phones were going to connect us, but how connected do you really feel when you're talking to someone and you just see a tiny little piece of their face in a Zoom call? Whereas right now I'm sitting right next to you, Kent. We're sitting here, and imagine in 10 years that it's the exact same experience we're having. I look at your eyes, you see me blinking, we have the sense of presence of each other in this room, and you could be anywhere. And it's just magic. It's magic everywhere, and it's usable. It's daily magic. And that to me is really exciting. There are two types of magic that we see in media. In The Lord of the Rings, magic is this powerful thing that Gandalf uses, and that's what computing has been for a long time. But then you have magic in Harry Potter, where everyone just uses it to make breakfast. And that to me is where we're heading. It's just daily, casual magic.
[00:43:31.946] Kent Bye: Nice. And the only caveat and pushback I'd give is that "for everyone" comes with a big asterisk, because there's still a lot of accessibility work that needs to be done so that it's truly for everybody, which I don't see as much from Meta's side as I've seen from Apple's side and their commitment to accessibility. And as someone who has glasses, there's my own experience of that: things like the Oakley Vanguards, or even my own experience of how the docents were treating someone with glasses, saying, oh sorry, you can't wear your glasses, and then not giving me even the most up-to-date corrective lenses. So that's a big caveat.
[00:44:19.480] Michael Markman: Right, we still have a lot of work left, for all of us. There's so much work still left to do to get this medium to its potential. There's a ton of work on the hardware side. We still have to conquer the laws of physics. And accessibility is obviously still going to be a huge issue. But I mean, that's why it's exciting to be in this industry, right? It's why it's exciting to be in it now, because you're not dealing with solved problems. You're not just implementing design guidelines that were written 10 years ago and have stayed relevant. We're on the frontier, figuring out how to make it happen. And to me, it's like we have to get there and unlock casual magic for everyone.

Kent Bye: Is there anything else that's left unsaid that you'd like to say to the broader immersive community? Any final thoughts?

Michael Markman: It's just been an amazing community to be part of for almost my entire professional career. I turned 30 this year, and I've been in VR now for almost 10 years. It's full of the most passionate, excited people who are really in it for the love of the game. You've been in this industry for a while. You've gone through multiple ups and downs. And you're in it because you care, and you're in it because you're excited about building the future. And it's just been amazing to be surrounded by that cohort of people.
[00:45:26.937] Kent Bye: Awesome. Well, Michael, thanks so much for joining me here on the podcast. A real pleasure to hear some of your deep thoughts and hot takes on the future of design in XR, both for what you're doing in ShapesXR, but also what we're seeing here at MetaConnect and the glimpse into the future, where it's a form factor of glasses with this new neural band and 3D user interfaces that take these gestures and turn our body into a controller, so that we can make magic happen with our own embodied actions. So thanks again for joining me here on the podcast to help break it all down.

Michael Markman: Awesome. Thank you very much, Kent. We did it. Woo!

Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.

