The HoloLens is the most impressive augmented reality HMD on the market today, and its developer kit is already being deployed into production in industries including architecture, engineering, design, sales, medicine, and education. Microsoft is taking a holistic approach with Windows Mixed Reality baked into Windows 10, meaning that developers can create a single application that can run on the HoloLens, on one of their partner VR headsets, on a Surface tablet, or as a desktop app. At Microsoft Build today, they're announcing a new OEM VR partner in HP as well as inside-out, six degree-of-freedom input controllers.
I had a chance to sit down for an hour with two representatives from Microsoft to talk about the latest HoloLens updates, their VR headsets, and their overall Mixed Reality strategy. Brandon Bray leads the Mixed Reality developer ecosystem, and Greg Sullivan is on the marketing team for the Windows & Devices Group. We cover Microsoft's high-level mixed reality strategy as well as the low-level details for developers, across a wide range of topics from AI integrations with Microsoft Cognitive Services to the technical details of their new motion-tracked controllers.
LISTEN TO THE VOICES OF VR PODCAST
Microsoft has leapfrogged the augmented reality competition through the combination of having the best AR HMD in the HoloLens, a healthy ecosystem of enterprise developers, a suite of AI-driven Cognitive Services APIs, and a forward-looking Mixed Reality strategy. They had impeccable timing in taking a leap of faith to solve a lot of really hard problems in order to create the HoloLens in the first place. Bray admits that there are still a lot of remaining problems, such as the limited field of view, but those were the tradeoffs required to create a battery-powered, tetherless, holographic computing platform that you can wear on your head and that can do inside-out positional tracking.
The HoloLens developer kits are priced at $3,000, and so they're targeting enterprise applications. But their VR strategy seems to be aiming for the low end of the market with the $399 price point for a bundled Acer VR headset with the motion-tracked controllers. I had a chance to have some hands-on time with the Acer VR headset, and I was not impressed with the LCD screen's motion-to-photon latency, the poor-quality optics, the build quality, or the user experience of putting the headset on.
These tradeoffs in comfort were made in order to bring the price down, and the overall experience feels like a small step up from a Daydream, perhaps on par with the Gear VR or possibly even worse. The high resolution of the Acer VR headset makes it one of the best VR HMDs for reading text, and the inside-out tracking works pretty well with occasional judder. But the LCD screen is not the low-persistence display that seasoned VR veterans have grown used to, and the resulting DK1- or DK2-era blurring when turning your head makes it feel worse than a Gear VR. As long as you're not quickly moving your head around, though, you'll minimize the motion sickness triggers.
The 6DoF motion controllers are tracked inside-out, and Bray said that they rely upon a sensor fusion combination of direct line of sight from the front-facing cameras on the VR headset, IMU sensors, and inverse kinematic probabilities. There weren't any prototypes available for testing, so I don't have any direct experience with how they actually work. But I do have some concerns with this approach based upon my experiences with other line-of-sight controllers such as the Leap Motion. With the Leap Motion, you have to hold your hands up so that they can be seen by the cameras on the HMD, which will likely require developers to specifically design applications that optimize for this constraint.
This limitation of the input controllers may prevent existing room-scale Vive and Rift VR experiences from being easily ported. If existing Vive or Rift applications aren't a good experience on these lower-end VR HMDs, then there's going to be a huge gap of content to drive consumer adoption. If this lower price point is going to attract more consumer-grade users, then they're going to need content. And if custom entertainment content is needed, I doubt that Microsoft's enterprise developers are going to generate a lot of compelling and entertaining content.
But it could be that Microsoft isn't concerned about having a library of entertainment for regular consumers of these VR headsets, and maybe they're more interested in creating data visualization and enterprise applications. But if that were the case, then why not create something on par with the Vive and charge enterprise prices? Most of the mobile VR content designed for a 3DoF controller hasn't been nearly as compelling as the full room-scale and 6DoF content. These Microsoft VR headsets look to be in yet another realm of quality and performance: slightly better than mobile, but a lot worse than the best high-end systems.
If Windows Mixed Reality VR headsets are going to go anywhere, then there needs to be content that's compelling and drives adoption. Will these VR systems meet the needs of whatever Microsoft has decided is their target market? If they do, then all of this discussion is moot. But if not, then we'll have another platform that fractures the developer ecosystem and is left without a critical mass of compelling content.
Overall, I'm really impressed with Microsoft's holistic approach to mixed reality. The HoloLens is the market leader for head-mounted AR right now, and it's actually being deployed into production. They are positioned to really own the enterprise and professional AR market as they create more integrations between Windows Mixed Reality, their cloud hosting, and AI-driven Cognitive Services.
There's a lot of long-term promise in tetherless VR with inside-out tracking, but the early Acer VR prototypes are disappointing and risk fracturing the VR ecosystem by potentially requiring specially designed experiences in order to really use the strengths of the platform.
Here are a number of Twitter threads with more thoughts and impressions from Microsoft Build so far:
Live tweets of first day keynote of Microsoft Build Conference
Impressed @microsoft CEO @satyanadella is extemporaneously delivering, not reading keynote.
Cited dystopian sci-fi for future we don't want pic.twitter.com/r9j8THz7WI
— Kent Bye (Voices of VR) (@kentbye) May 10, 2017
Thread with highlights from the HoloLens YouTube channel
There's some great stuff on @HoloLens' YouTube channel. This shows the potential of spatially customizing AR stories https://t.co/mUpX2SjjQV
— Kent Bye (Voices of VR) (@kentbye) May 10, 2017
Twitter Thread of Hands-On Impressions from Acer headset
Tried the Acer VR. Legible text. Edges of optics blurred like Daydream. Some jumps in inside-out-tracking when moving head quickly.#MSBuild pic.twitter.com/u1DylnIn7z
— Kent Bye (Voices of VR) (@kentbye) May 10, 2017
Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR Podcast. So I'm at the Microsoft Build Conference here in Seattle, Washington, and I've gone through the first day of the conference and had a chance to have a lot of hands-on time, both with a lot of the latest HoloLens demos, as well as the Acer VR headset. So, this podcast is going to be airing right when the second day of the keynotes is going to be starting, which is going to have a number of different announcements. One is that there are going to be a couple of new OEMs that are going to be distributing the Microsoft VR headsets, including Acer, which has already been announced, as well as HP and Lenovo. And there are also going to be inside-out motion-tracked controllers that are going to be available for these headsets. And so I had a chance to sit down with a couple of people from Microsoft to talk about some of these announcements, as well as what's been happening in the HoloLens ecosystem. So I had a chance to have an hour-long chat with Brandon Bray, who's on the Windows Mixed Reality team working on the developer ecosystem, as well as Greg Sullivan, who works in marketing for the Windows and Devices group. So this discussion is the most in-depth that I've had about the HoloLens so far, and I sort of jump between the high-level communication strategy and the low-level details of what you need to know if you're a developer and how to get started and involved in building for Windows Mixed Reality. So, that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Voices of VR Patreon campaign. The Voices of VR podcast started as a passion project, but now it's my livelihood. And so if you're enjoying the content on the Voices of VR podcast, then consider it a service to you and the wider community and send me a tip. Just a couple of dollars a month makes a huge difference, especially if everybody contributes. So donate today at patreon.com slash Voices of VR. So this interview with Brandon and Greg happened at the Microsoft Build Conference on Wednesday, May 10th, 2017 at the Washington Convention Center in Seattle, Washington. So with that, let's go ahead and dive right in.
[00:02:28.564] Brandon Bray: I'm Brandon Bray. I work on the Windows Mixed Reality team at Microsoft, and my role is developer ecosystem. It is about making developers as successful as they can be with all the products that we're building, from HoloLens, which I've been part of for a number of years, to all of our new work with immersive headsets.
[00:02:49.222] Greg Sullivan: And I am Greg Sullivan, and I'm a marketing guy working in the Windows and Devices group. And we help customers and partners and all of our consumers understand the products that we're bringing to market and help get some of that feedback back into the product teams and folks like Brandon.
[00:03:04.627] Kent Bye: Great. So we're here at Microsoft Build, and there have been a couple of announcements that have come out. The first one is about the new headsets. You had announced Acer, but now you have HP as well. Maybe you could tell me a bit about these headsets and when they might be available for people to get access to them.
[00:03:20.662] Greg Sullivan: Yeah, the exciting news is that developers can order these headsets from Acer and HP today. And this is really continuing the journey that we've been on in mixed reality. HoloLens was the first device that shipped with the capability to use Windows Mixed Reality to blend the digital and physical worlds. And over the last year we've been sharing with the whole ecosystem the idea that that's just part of it. Windows 10 is an operating system that has mixed reality built in, and so our partners were invited to come and build headsets and PCs that can deliver a mixed reality experience, and that will be built into every copy of Windows 10. It's part of the Creators Update. It'll get even better with the Windows 10 Fall Creators Update. And we're excited that these devices are now broadly available for developers to start building incredible experiences that anybody with an affordable headset and a Windows 10 PC will be able to enjoy.
[00:04:16.406] Kent Bye: And I understand that a lot of the developer ecosystem up to this point has been using Unity to be able to actually develop these applications. So when you talk about having APIs or things built in, what is that actually giving you from a developer perspective? How does that kind of work with the Windows Mixed Reality, immersive computing paradigms mixed with Unity?
[00:04:35.866] Brandon Bray: Sure, so starting with Windows: it has this notion of the Universal Windows Platform, and it's where all of the APIs that we use to create these experiences exist. The value of the Universal Windows Platform is that you can target multiple devices through one common API set and even build a single application, and that one application can then run on multiple devices. So HoloLens was the first of the devices that supported this. And it had a full ability to move around your world, to create this notion of what we call world coordinates. The technology within HoloLens is creating a coordinate system that's grounding you into the world and then using other sensors, so that you can use your gaze and different select motions to basically position a hologram in the world. When we bring this technology from HoloLens, the inside-out tracking, to these new devices, we're recreating that same ability to have world coordinates. And the speed that we've been at in creating these new devices is made possible because all that investment in HoloLens is actually coming directly to the desktop platform together. We only had to add really three new APIs to support these new devices. The first one was a binary API that said, is the display occluded or not? Is it see-through? So that makes sense. HoloLens was always see-through. You could see the world as you go. With these new immersive devices, they occlude the view of the world, and so you have a fully immersive experience. The second API that we added was this notion of boundaries, being able to set up and configure where the walls in your room are so that the experience doesn't have you accidentally slam your head into the wall. And then the third one was actually just adding motion controllers. HoloLens had the interaction model of gaze, gesture, and voice. Now we've just added motion controllers to the API set. And that's all we had to add. All the other things for rendering, stereoscopic displays, being able to persist your space or have multiple views of things, that was already built into the platform from the beginning. And so if you've been building an application for HoloLens, whether it's on a platform like Unity, those same skills come forward to this new immersive side. Obviously there's value for both. With the experience where you're in the world, you actually want to take advantage of your environment. When you're immersive, you're actually excluding the environment entirely. So this one API that lets you know whether the display is occluded or not lets you dynamically direct whether your experience should go one way or another. And we have a few examples of that in the store. HoloTour is one app that is both on HoloLens and on immersive headsets. And from the beginning, that was a very immersive experience. It pushed the boundaries of what HoloLens could do, and it's fantastic with these new headsets. The second app that's available is Galaxy Explorer, and one experience has you seeing the entire solar system in your living room, and you're actually able to walk around and follow the orbits of the planets around the sun. But when we brought it to the immersive headset, the experience needed to change a bit. Instead of being in your living room, you're now in this void of space. And so you needed to add things like a star field, so you kind of felt like you were present and the space wasn't totally empty.
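To make that occlusion check concrete, here's a minimal C# sketch of how a UWP app might branch on it. `HolographicDisplay.IsOpaque` is the property the Creators Update exposes for exactly this purpose; the two scene-setup helpers are hypothetical placeholders for app-specific logic.

```csharp
// A minimal sketch of branching an experience on the display-occlusion
// API described above. The scene-setup methods are hypothetical.
using Windows.Graphics.Holographic;

public class ExperienceBootstrap
{
    public void ConfigureScene()
    {
        HolographicDisplay display = HolographicDisplay.GetDefault();

        if (display != null && display.IsOpaque)
        {
            // Immersive headset: the real world is occluded, so supply a
            // full environment (e.g. the star field Galaxy Explorer adds).
            LoadVirtualEnvironment();
        }
        else
        {
            // See-through HoloLens: anchor holograms in the user's room
            // and skip the synthetic backdrop.
            LoadHolographicLayout();
        }
    }

    void LoadVirtualEnvironment() { /* app-specific */ }
    void LoadHolographicLayout() { /* app-specific */ }
}
```

In practice this decision can be made once at startup, choosing between a fully modeled environment and a transparent background over the real world.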
[00:08:18.151] Kent Bye: Yeah, and during the first day keynote, there was a lot of emphasis on, you know, by the year 2020, four billion people will each have up to six devices that they're using. So there are lots of different devices moving towards this serverless architecture, with lots of different ways for many devices to be able to talk to each other. So I'm just wondering if you could kind of paint a picture of this future that you see evolving, and how Microsoft, with this Windows Mixed Reality operating system and all the other things that you're doing, is trying to really foster this ecosystem of all these different devices working together.
[00:08:51.937] Greg Sullivan: Yeah, one of the big thrusts here at Build is that Windows, in addition to being a platform that can enable a continuity of experiences across a range of devices, is also the place now where developers can make their home for targeting all kinds of different devices, even those non-Windows devices. It's the place to build and test and deploy applications across a plurality of devices that includes iOS and Android devices, because we recognize that in today's world, among those six devices people have are probably several different platforms. And one of the things that we're doing in Windows is to bring together and to bridge the experiences that you have so that they can be mobile across those devices. It's really kind of the manifestation of a vision I remember Bill Gates talking about years and years ago, the notion of this disaggregation of the PC. We were used to having storage, input, and output displays all right here, and then we would add networking. But now what you're seeing is not just the disaggregation of those components of what we consider the personal computing experience, but we have this heterogeneous array of devices across which we expect our experiences to continue. And today there are a whole bunch of barriers preventing normal humans from figuring that out. How do I get, you know, I was just on this web page on my PC and now I'm on my phone on the bus and boy, where was it again? So we've taken steps over time, but part of what we're doing in Windows is creating a framework so that these experiences can really roam from device to device, to embrace a world where it matters much, much less which device your content is stored on, or which device you last had an experience or part of an experience on. So we're building intelligence into the platform to have average users be able to do things that were not possible before, but also making it the best place for developers to conceive and build and test and deploy a range of applications that enable those kinds of connected experiences in a world where people have six or more devices.
[00:10:54.464] Kent Bye: Yeah, and some of the other news that's coming out here at Build is these new trackers and controllers with six degrees of freedom, tracked inside-out. Maybe you could tell me what you're able to do with these input controls with your hands, but also whether they're for the HoloLens as well as for these virtual reality systems.
[00:11:12.955] Brandon Bray: Absolutely. So when we say six degrees of freedom, that is talking about motion. And originally, you'd start with three degrees of freedom, which are usually rotations, like pitch, yaw, and roll. When you add three more degrees, you can translate. You can move up and down, side to side, forward and back. And, you know, that actually has a lot of value. Just from head tracking, which is where this inside-out tracking starts, there's effectively two cameras or sensors built into these headsets that are looking out into the world and your environment. And they're seeing the structure of your environment so that when you move, obviously your environment moves with you. And Windows can then compute that coordinate system. For a developer, it's very simple. They start with high school geometry. You have an origin, and you can move in an x, y, and z direction, place something at a specific coordinate, and if you're animating things, you have a path that you follow. And so that inside-out tracking starts with those two sensors. Then you want to add additional input, which is where having controllers in your hands comes in. You know, we're very tactile, and when you create these immersive experiences, or even holographic experiences, you want the ability to interact more directly with things. And the value of inside-out tracking is ultimately letting you wander around an entire space, because you don't have to pre-configure a room to a specific size. You can literally walk your entire house, go to your office building, walk an entire hallway corridor, and build an experience that's that large. And if you want to add in a hand motion, then you have to have controllers. And so the motion controllers that we announced today basically show the ability to fuse with the sensors that are built into the headset to see your hands. It basically creates what we call a constellation pattern that the operating system is then using to compute the actual location of where your hands are. And so you can have precise positioning of things, and you have this ability to pick up objects, move them about, and interact with them.
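As a rough illustration of that world-coordinate model, here's what gaze-based placement typically looks like in a Unity script. This is a generic sketch, not a Microsoft sample; `hologramPrefab` is a hypothetical asset reference, and the headset's inside-out tracking is what drives the camera's world-space pose.

```csharp
// Illustrative Unity (C#) sketch: raycast along the user's gaze and
// place a hologram at the hit point in world space.
using UnityEngine;

public class GazePlacement : MonoBehaviour
{
    public GameObject hologramPrefab; // hypothetical asset

    void Update()
    {
        // Inside-out tracking drives the camera's pose, so its position
        // and forward vector are already expressed in world coordinates.
        Ray gaze = new Ray(Camera.main.transform.position,
                           Camera.main.transform.forward);

        RaycastHit hit;
        if (Input.GetButtonDown("Fire1") && Physics.Raycast(gaze, out hit, 10f))
        {
            // Place the hologram at a specific (x, y, z) world coordinate,
            // facing back toward the user.
            Instantiate(hologramPrefab, hit.point,
                        Quaternion.LookRotation(-gaze.direction));
        }
    }
}
```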
[00:13:33.740] Kent Bye: One of the things with the Leap Motion is that it takes an approach where you actually have to have a direct line of sight to see what is actually happening. If you look at something like a Vive game where you're doing archery, where you're basically shooting a bow and arrow, you have your hand behind your head, beyond the ability of the sensor to detect it. I'm just curious if you're able to do that level of tracking with these inside-out-tracked six-degree-of-freedom controllers, or if you lose tracking when it's out of the direct line of sight?
[00:14:02.012] Brandon Bray: Right. So the way that these controllers actually work is a combination of input. And so we talk about sensor input, which is really environmental, like, you know, line of sight, as you point out. And that's, you know, the first level of input. These motion controllers also have inertial measurement units. And finally, there's another piece of mathematics, which you can call inverse kinematics. And basically that is the fact that your arm can only go so far, and your arm can also only move in specific positions. And so computationally we try to figure out: how did the controller move from one position to the next position? Is that even humanly possible? It's used in robotics all the time. It's widely studied. So when you combine those three things, basically visual tracking, the inertial measurement unit, and inverse kinematics, you can create a model that's actually quite useful, so that all of those combine to give you a confidence level. You know, if I have all three inputs together, I'm very confident of the position. And then if I lose confidence in one of those, the API can start saying, like, I know that the position of the hand is here, but I'm starting to lose confidence, and at some point I've lost tracking entirely. And so those motions where you're picking up an arrow out of a quiver and then putting it into a bow to shoot, that motion goes very quickly, and you can actually build an experience based on that. The APIs in Windows cover all of those together, so as you lose confidence, the operating system will say, I believe the controller is here, and over time it will degrade. But if you build the experience together, you can completely build something like an archery game, something where you're reloading some kind of firearm or something like that.
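Bray didn't share implementation details beyond that, but the idea can be sketched roughly as follows. This is purely illustrative C# under my own assumptions, not Microsoft's implementation: optical tracking, IMU dead reckoning, and an inverse-kinematics plausibility check blended into a pose with a decaying confidence value.

```csharp
// Illustrative sketch (not Microsoft's implementation) of fusing optical
// tracking, IMU dead reckoning, and an IK plausibility check.
using UnityEngine;

public class FusedControllerPose
{
    public Vector3 Position;
    public float Confidence; // 1 = optically tracked, decays toward 0

    public void Update(Vector3? opticalPos, Vector3 imuVelocity,
                       Vector3 shoulderPos, float armReach, float dt)
    {
        if (opticalPos.HasValue)
        {
            // Headset cameras see the controller's LED constellation:
            // trust the optical fix fully.
            Position = opticalPos.Value;
            Confidence = 1f;
        }
        else
        {
            // No line of sight: dead-reckon from the IMU and decay confidence.
            Position += imuVelocity * dt;
            Confidence = Mathf.Max(0f, Confidence - 0.5f * dt);

            // Inverse-kinematics sanity check: a hand can't leave arm's
            // reach, so clamp implausible estimates back toward the body.
            Vector3 offset = Position - shoulderPos;
            if (offset.magnitude > armReach)
                Position = shoulderPos + offset.normalized * armReach;
        }
    }
}
```

The decay constant and clamping rule here are arbitrary stand-ins; the point is the shape of the model, where brief occlusions (like drawing an arrow from a quiver) are bridged by the IMU before confidence runs out.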
[00:16:04.127] Kent Bye: And so when it comes to augmented reality and mixed reality with the HoloLens, it seems like the big emphasis there is spatial computing on the one hand, for anything that you have dealing with 3D space, but also blending in the real world in a way where you want to actually be connected to what's happening in the real world. Whereas with virtual reality, you're occluding yourself from the outside world. There are certain use cases where you need to be connected to what's happening in the real world, and I think most of those use cases are in the enterprise applications where I think Microsoft is kind of uniquely fit. So I'm just curious to hear how you think about augmented reality and mixed reality, and the best use cases that you see as the strengths of how people are actually using it.
[00:16:44.910] Greg Sullivan: Yeah, I think Alex Kipman said it well in his presentation today: these phrases, you know, augmented reality, virtual reality, they're part of a continuum that we talk about under the umbrella term of mixed reality. Anytime you have part of the physical world and the digital world coexisting to some degree, you are mixing realities. And so we don't care about the distinction of where you are on that continuum. We think about it holistically, and we have the only platform, in fact, that addresses it holistically, so that devices can be created and a whole ecosystem is being kind of born around this platform. It is true that many of the use cases for HoloLens in particular have been in the enterprise and commercial space, and it's not an accident, because one of the things that HoloLens does is it gives people literal superpowers to enable them to do things that were impossible or certainly very impractical. There are many, many examples of this. I was just at Hannover at the manufacturing show there, and on our show floor alone I saw many examples of companies that are using HoloLens to do things that were literally impossible, or certainly impractical. ThyssenKrupp Elevator makes an assisted chairlift for home solutions, for people who have limited mobility and need to get up and down stairs. They have a process today where they have to go out and do a whole set of very detailed measurements of your staircases, because it turns out no two staircases are exactly alike. And so each job is a custom-fit lift. And it was a very time-consuming and costly process to do all of those measurements, to take all the digital photography, to manipulate the data, to try and envision what the final stairlift would look like, and then to manufacture it and then come back and fit it in your home and make sure it fit. Well, they've reduced that entire process, which could take weeks and weeks, to a very, very rapid process using HoloLens by doing it digitally. And so the technicians, and in fact the salespeople, will be able to put on a HoloLens and look at your staircase and very quickly create a holographic version of the lift, and even, in real time, modify it based on the customer's desires about where the chair should be parked when it's not in use and so forth, and instantaneously modify the holographic digital version of that lift, and then capture all of that information and bring it back, enabling them to cut literally weeks off of this process. There are a whole bunch of examples of HoloLens being used to literally do things that were impossible. One of my favorites is Japan Airlines using HoloLens to train both flight crews and aircraft mechanics. And one of the examples I love is with a jet engine: if you want to understand how to work on and repair a jet engine, you need to kind of see it in three dimensions, volumetrically. You need to understand how the systems work together. You almost need to get inside that thing, which is a pretty dangerous proposition if that jet engine is real and happens to be running. You only do that once. So with HoloLens, Japan Airlines is able to create a holographic jet engine with all of the systems and elements of that very complex machinery interoperating, enabling people to learn how to build it, strip away elements, and see how the subsystems interrelate, literally while they're standing inside a running holographic jet engine.
And so that's something that, again, is a great example of something that was impossible and is now an everyday reality with HoloLens.
[00:20:14.211] Kent Bye: Yeah, and if I were to highlight the three major verticals where I see some of the most compelling augmented reality applications for the HoloLens, it would be architecture, engineering, and design; medicine; and education. I'm just curious to hear your perspective on the ecosystem as it's developing, and, on each of those different verticals, the applications that you're seeing that are really compelling, as well as other things that you're also seeing.
[00:20:38.160] Brandon Bray: I think what you're going after is really that important notion of where does bringing in space actually help you solve a problem in a better way? And I study, just as a hobby, a lot of neuroscience and enjoy seeing the value of using those skills that we as humans have developed over our entire existence. And when you look at the ability to take something that historically has been a 2D problem, whether it's medicine, education, or something like that, and bring that third dimension into things, it just makes it so much easier to fully understand something. And there are plenty of studies that show this to be true in lots of places: when you exercise that part of the human mind, you're actually able to do things faster and better. And so I think that's fundamentally what's coming out of it. And you're seeing these fields like medicine, architecture, and certainly education, where we've been held back because we're focusing on what can fit on a flat page, and we're going forward. But I would go further and say just look at all the other places where it's great to be able to have space. And so just like more consumer-style things, like being able to visualize an art piece before you build it. And then certainly storytelling, being able to kind of see a story from multiple angles. I would say those are the opportunities that are just getting started. You kind of go back to the magician and saying, I want to see if he has something behind him or something like that. And we're just starting to see that coming together.
[00:22:25.158] Kent Bye: Yeah, I've been to Sundance, Tribeca, VRLA, and I haven't seen a really super compelling story in augmented reality yet. I think it's coming, but I don't think that the independent creators have figured out exactly how to use space, spatial storytelling, being able to use the environment in interesting ways, location-based things where you may be in a specific location. Part of that is because the dev kits are $3,000. And, you know, it's still very early. But also, just for me, what I find covering this space is that you get a much deeper level of immersion and being able to be transported into other realms in virtual reality. But with augmented reality, you kind of get this uncanny valley of a fidelity mismatch, with your brain telling you that this hologram in real life isn't actually real. And so, for me, I haven't been able to feel that level of emotional presence in any stories that I've seen yet. I think there's still a lot of potential that is yet to come. I'm not sure if you've seen anything that has been really super engaging in terms of the story within AR.
[00:23:28.426] Brandon Bray: It's fair to point out that the experiences that we expect and want, the imagination that got so many of us started in this field, like, started back in the movies of the 70s and 80s, whether it was Star Trek or Star Wars. And we built up this expectation. I just go back to: this is a new medium. And if you go back to the 19th century when movies were first created, and you watch some of the first movies that were really created, and what directors were creating at that time, it's clear they wanted something better than what the medium could offer. But those movies, frankly, they're terrible. By today's standards, they're so far off, even by the standards of the 1920s or the 1900s. So you look at all sides of this new medium, whether it's on the immersive side, where you have 360 videos. Some of the early 360 videos were like, let's put the person right in the middle of the action. And you are constantly turning your head, trying to figure out where to look. And you're always missing the story because you're looking in the wrong direction. And you've seen the evolution, even over a very short period of time, of creating those videos in a way that is much more user-research focused. Let's watch where someone is looking and re-edit the video so that it doesn't force someone to constantly turn their head, because it's actually not that comfortable. And I would kind of point out the same thing when you come to the more holographic side, where you can place content in the real world: we're in that journey of learning. But the value of that real world is it's your world. You know, it's your living room. It's your home. And when you can create an experience like Fragments, one of the launch experiences for HoloLens, you know, it starts off with a murder. And there is, like, a dead body in your living room. And, you know, for most people, they've never had that experience of finding a dead body in their living room. But now you can. And, you know, just start with that. And there's a story that follows that. And do we want it to be better? I think that we all do. Like, that's why we're in this field: to push the boundaries. And so if you just go from the beginning of movie storytelling in the 1800s all the way to now and see how far it's come, I'm so excited to see where this medium is going to go just over the next five or even ten years.
[00:26:04.753] Kent Bye: I think another thing that's holding back the immersion is the limited field of view. You know, you're kind of looking at the world through a window. And at Sundance this year, there was Heroes, which used the convention of a big black wall where you're looking at something in the distance, so you're far away and you kind of see this dance scene play out far away. And for me, that was one of the most immersive experiences that I had in HoloLens, but it was so far away that I didn't have that near field where I could really get up to it. And when getting up close and seeing the dimensionality of something, I sort of get this windowing effect. So just as a general question, I know that the first official consumer launch of the HoloLens has been pushed back a couple of years to 2019, as last I've heard. Can you speak a little bit about that as a technical challenge of improving the field of view? Because as I talk to people, I hear that one of the biggest complaints is that the field of view is too small to really feel that level of immersion. And so is it part of the roadmap, once you eventually launch, to have a field of view that's going to be a little bit more immersive in that way?
[00:27:03.243] Brandon Bray: So, you know, it's clearly one of those things where, like any new technology, there are things that we all want to see get better. HoloLens is really years ahead of its time in the challenges that had to be solved to make it possible, because it is a combination of things. It is a mobile device. You know, it's battery-powered. It's a full computer that you're wearing on your head, and when you're wearing it, you look really cool. So it has this design aesthetic to it as well. And, you know, you can see your world, and when you see a hologram, it's placed in the world, and when you move your head or walk around, it stays where it is. And beyond that, you have the ability to read text, so you can be productive: read your email, read webpages through it. And so it is pushing the boundaries of what was possible. And it is technology that was ripped out of the future. Of course we want it to be better. And with that investment, you ask: what should be better beyond it? Having a larger canvas to see your holograms on is certainly one of the things that makes a big difference. But as you pointed out, the experience at Sundance with Heroes really showed that artists and content creators can make use of the medium in ways that take what they can do with HoloLens now and still create a great experience. And you see that in all of the mediums, whether it's commercial applications, where you need to see something very important and you can place it and solve that. The other thing that's again very useful is the human brain. As you move around, you have memory of what things are. What we find in the conversations I have with developers all the time is that you might expect something different to begin with, but once you get started, the experience of understanding what you see in HoloLens becomes second nature, and content developers can actually use different techniques to make the field of view not that prominent, and you get something even better beyond that.
[00:29:25.045] Greg Sullivan: I think it's also true that we think about this as a continuum across the whole mixed reality spectrum, and that HoloLens is a magical device that is still, a year after it began shipping, unique in its ability to do things that nothing else can. But it's also complemented by these immersive headsets. If you are looking for a more immersive experience, and if the primary use case is, I want to feel like I'm transported to another universe, then that is an application for which a more immersive headset is probably the right answer. And I think that the key difference for us is that we are taking a platform-based approach with Windows 10 and saying there will be a range, a whole ecosystem, around this. It was never really just about HoloLens. It's enabling folks to do incredible things, but it's about an entire spectrum of devices across that whole continuum of mixed reality, from magical, untethered, holographic computers like HoloLens to fully immersive experiences like we're seeing with the headsets from our partners.
[00:30:28.863] Kent Bye: Yeah, and one of the other things that I've noticed in looking at a lot of the videos on the HoloLens YouTube channel is that there's a dimension of collaboration that I think you can have with augmented reality and the HoloLens that you can't necessarily have with virtual reality. You can certainly have a social experience within VR, but you don't have the emotional expression that you can see in other people when you're physically co-located with them. You have some distributed stuff where you get the same sort of virtual uncanny valley, where you can't necessarily really see what's happening with their face, their facial expression and everything. And so I'm just curious to hear what you're seeing already with collaboration with having multiple HoloLens devices within the same co-located space and what is being enabled with that.
[00:31:09.771] Greg Sullivan: Well, I think one great example of that is what Case Western Reserve University is doing with their holographic anatomy class. You know, for a hundred years we've been teaching medical students the same way, with, you know, Gray's Anatomy and cadavers. As Brandon mentioned, the notion of relying fully on a 2D interpretation to represent three-dimensional things requires some mental gymnastics. We've gotten used to that, and there's value that we can derive. But it's also not enough, which is why we have medical students operate on cadavers. It turns out those bodily systems are not actually working in a cadaver. You can't see the heart beating, and you're also constrained just by the physiology of the particular body that you're operating on. What Case Western Reserve University is able to do is to show students digital bodies and have them see full volumetric representations of all of the systems in the human body while that heart is still beating. So that's, I think, a great example. And all of the students in that classroom can share. The teacher can be lecturing and showing them what's going on in a particular system, and the students have a shared experience. One of the really cool things that kind of speaks to this notion of Windows 10 as the underlying platform is some of the demos that we're doing here at Build, where we can actually show these immersive headsets and HoloLens, with two different people wearing these devices and having a shared experience in mixed reality that is reliant on the notion that we have an operating system that's built to enable mixed reality. And that is something that is a key difference in why we're taking this platform-based approach. Because it's not just two people wearing a HoloLens who can have a shared experience; this platform enables it across a range of devices from different manufacturers. So having that interoperability among heterogeneous devices, enabling shared experiences in mixed reality, is a key thing that you need an operating system for.
[00:33:08.200] Kent Bye: Yeah, and one of the other things that I noticed in terms of the medical field is that it's not just for medical education, but also for people actually using the HoloLens in the operating room or to do diagnosis. Where you would maybe take a scan and look at a 2D image of it later, you could potentially look at it dynamically, live-scanning your body and looking at the mixed reality vision of it. It's almost like a sensory addition, where you're actually expanding the ability of your senses. But also I've noticed that there was one company that was adding OptiTrack or some sort of external markers to give additional precision to the mixed reality so that you could do spinal surgery. Maybe you could speak to that a little bit in terms of what you're seeing in the medical field and the level of additional tracking to give that additional precision to do all sorts of crazy things that wouldn't be possible before, but also just generally what is made possible by blending mixed reality into diagnosis with doctors.
[00:34:07.264] Brandon Bray: Sure. You see a number of different places looking at how you use this new superpower of seeing a hologram to really go after diagnosis and visualization. So what you're actually looking at in a lot of these operating room kinds of situations is, first, they're working off of some kind of scan that happened through existing medical imaging, and tomography is usually one of those, where it's basically taking a slice out of a part of the human anatomy and showing you a view of that, or kind of taking a top-down, compressed view, an x-ray, for instance. And they're trying to interpret from that 2D image. You see this in every medical television show: they're in a room, like, let's diagnose this. And the thing that you have is experts who have built their professions on just looking at these images. That's all they do, and you'll have multiple experts look at the same image and come away with different conclusions. And it's just an example of this fact that we've taken a dimension away. And so there's this kind of old saw that says: whenever you take a dimension away from a problem, you lose precision. You lose information. The easiest way to see this is taking the globe and trying to create a 2D map. You're distorting the geography. But, you know, in medicine that's impactful, because two people can have very different opinions, and to a patient that's, like, two different paths that have very different outcomes. But a lot of this medical imaging equipment, like tomography, basically takes slices, and you can then reconstruct something in 3D. And even just seeing that 3D image of something offline, even outside of the operating room, can have a remarkable impact on the outcome, just in diagnosis. But then when you're able to bring that together and see that with a patient at the same time, it gives you this sense. And going back to spatial memory, it's one of the strongest human capabilities. And, you know, it's not the case that you want a patient in surgery with the surgeon wearing the HoloLens throughout the whole surgery. That's not what's being done. What they're doing is they're visualizing it with the patient at one point, and then the surgeon has that information. Their memory knows what to look for and where to go. Compare that to what they were doing before, of just looking at an image on the side of a wall in another room and then trying to put that into space: it's kind of the same experience you would have looking at a map and then trying to navigate your way through highways. You have very different experiences, and so you're working even harder. When you're able to use spatial memory and apply that, you have an easier time, and so the surgeon's able to focus on the problem at hand in a much better way.
[00:37:24.308] Kent Bye: Yeah, and I think one of the other trends that I see is that Microsoft's taking a very holistic approach, in that there are a lot of these exponential technologies with virtual reality, augmented reality, mixed reality, but also artificial intelligence, and starting to blend the Microsoft Cognitive Services into different applications. Like when I was at GDC, I saw Human Interact's Starship Commander, and I got a chance to actually play it and have that deeper level of immersion and sense of presence by being able to actually have a conversation where I was speaking and it was triggering these pre-recorded authored branches of the story, but it felt like I was having an interaction that felt plausible, and that gave me this deeper level of immersion. But I'm curious to hear if you have other applications of HoloLens and augmented reality integrations with the Microsoft Cognitive Services, where you're starting to see this blending of artificial intelligence with mixed reality.
[00:38:15.699] Greg Sullivan: Yeah, no, it's happening in a whole bunch of dimensions. I think it's fair to say HoloLens itself is using AI and deep neural nets to do some of the amazing things that it does in order to world-lock holograms in three-dimensional space. But it's also true in the case that you mentioned. One of my favorite examples is an app that one of our developers did that taps into the Cognitive Services APIs and uses HoloLens to recognize objects, so you can ask the app out loud, what am I looking at? It's a pretty profound leap in terms of the capability. You have a device that is, again, an untethered holographic computer that understands your voice as one of the input mechanisms, but it can tap into the AI and the cognitive services in the cloud to even extend that superpower. And really, you talked about distributing things, and this idea that we have a whole bunch of new elements, including mixed reality, as a key area where we're seeing a whole bunch of innovation and people doing incredible new things. And as we said, we're seeing new approaches to storytelling and a whole bunch of other things as people learn how to take advantage of this new medium. But in parallel, we're seeing the advent of AI and cognitive services and some of these natural interfaces, for example. When you combine those trends together, that's when some really exciting stuff starts to happen. So one of the things that we continue, and Brandon is very close to this, we continue to be amazed by the creativity and the innovation of developers who get these tools in their hands. And we're constantly surprised by what we see them coming up with, because they connect things like this in new ways and take advantage of some of the distributed assets that we have, combining them in ways that solve problems. And as we said, we're really just at the very, very beginning of this. And so as each of these kind of elemental technologies evolves and becomes better, we'll make improvements in mixed reality, we'll make improvements in AI and the cognitive services, and we'll also improve the integration and the ease with which developers can use these various tools together to do some pretty cool stuff. And like I said, we're just getting started.
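As a sketch of the kind of Cognitive Services integration Sullivan describes, here's roughly what that "what am I looking at?" flow might look like against the Computer Vision analyze endpoint of that era. The region in the URL and the subscription key are placeholders you'd supply from your own Azure account, and the JSON parsing plus speech synthesis steps are elided.

```csharp
// A hedged sketch: send a camera frame to the Computer Vision "analyze"
// endpoint and read back a caption to speak aloud. Region/key are
// placeholders; response parsing and text-to-speech are omitted.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class WhatAmILookingAt
{
    const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze" +
        "?visualFeatures=Description";

    public static async Task<string> DescribeAsync(byte[] jpegFrame, string key)
    {
        using (var http = new HttpClient())
        using (var body = new ByteArrayContent(jpegFrame))
        {
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
            body.Headers.ContentType =
                new MediaTypeHeaderValue("application/octet-stream");

            HttpResponseMessage response = await http.PostAsync(Endpoint, body);
            response.EnsureSuccessStatusCode();

            // The JSON carries a natural-language caption under
            // description.captions; a real app would parse it and hand
            // the text to speech synthesis.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```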
[00:40:24.095] Kent Bye: Yeah, and one of the things that I've noticed as we move from this information age to this experiential age is the move from text input to conversations and conversational interfaces. And so during the first day keynote, there was an integration, a use case that was shown, where there were Internet of Things cameras that were hooked up into the Microsoft Cognitive Services doing object detection. And then you could interface with this master artificial intelligence through either SMS or chat, where you're basically communicating through your phone. But I could imagine in the future that's going to be you essentially just talking into something like augmented reality HoloLens glasses, where you're able to then maybe see overlays onto what's happening, giving you a certain amount of situational awareness. But if you have questions, being able to ask the AI what's actually happening. So maybe you could start to paint a picture of what you're starting to see with some of these integrations in the enterprise, blending in both the HoloLens as well as these other services.
[00:41:21.813] Brandon Bray: Yeah, so when you start looking at HoloLens, it has a number of new capabilities for environmental input, but voice input is one of its key features. You have the ability to both have a command system, which is just like, go do this, based on what in the moment is possible. Saying the word "select" causes the UI to perform a select gesture. And then there's, of course, dictation, which leads to this conversational opportunity. That's actually built into the Windows platform itself. And so it's not something that is restricted to HoloLens on its own. It's something that crosses the entire Universal Windows Platform from desktop all the way to HoloLens, and then comes to these new immersive headsets. And so you actually can have these experiences across the entire mixed reality spectrum, or even just in a traditional 2D experience on your desktop. It's the same technology that's powering the Cortana digital assistant on Windows, and you can bring that into your application now too. And, you know, it's just one example of input. As you're thinking about the opportunities to bring voice into a mixed reality experience, you also have the ability to say, what am I looking at? Whether it's immersive or holographic, you can start saying something specific to that, whether it's a command to move something or whether it's a character in a story that you can have a conversation with; you have that as another piece of input. And so when you start adding even more inputs going forward, voice just leads to this opportunity of: what's the most natural thing to do as a human? How would I interact in this scene? Rather than just focusing on how computers were designed years ago and limiting you to the possibilities of what computers could do, let's move to a more natural approach to interaction.
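In Unity, that kind of keyword-driven command system is typically wired up through the `KeywordRecognizer` that the Windows speech stack exposes. Here's a minimal sketch, with `OnSelect` standing in as a hypothetical handler for whatever the app's select gesture would trigger.

```csharp
// Minimal Unity (C#) sketch of a "select"-style voice command system
// using the Windows speech stack's KeywordRecognizer.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "select", "place", "remove" });
        recognizer.OnPhraseRecognized += OnPhrase;
        recognizer.Start();
    }

    void OnPhrase(PhraseRecognizedEventArgs args)
    {
        if (args.text == "select")
        {
            OnSelect(); // trigger the same action as the air-tap select gesture
        }
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }

    void OnSelect() { /* app-specific */ }
}
```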
[00:43:26.128] Kent Bye: Yeah, and when I look at the other major players in immersive computing, I see that there are kind of two different categories. One is the performance-based marketing companies, Facebook and Google, which are very interested in collecting user data and feeding that into artificial intelligence to do very targeted advertising. And then we have HTC and Valve, as well as Microsoft, which seem to be on the other side, maybe more interested in doing applications for the enterprise where you pay up front, but where there's not as much of an emphasis on gathering and collecting data on the users. But Microsoft could actually flip and be more on the performance-based marketing side of being able to do advertising in augmented reality experiences. So I'm just trying to get a sense of, strategically, how you are starting to think about privacy and virtual reality.
[00:44:14.003] Greg Sullivan: Yeah, and we could have a long conversation about the relative merits of different business models. One of the things that really crystallizes this for us is the mission statement of the company: Microsoft exists to empower every person and every organization on the planet to achieve more. And that's a very powerful, galvanizing, focusing commitment to why we get up every day. And part of the trends that you just described is that we don't view the advent of mixed reality, as a new way to interact with digital information, as some completely net-new thing. To us, it's an extension, it's a continuation of a journey that we've been on for decades. And in fact, the part of the company that Brandon and I work in is chartered with one task, and that's to create more personal computing. And by that we mean, let's take personal computing and make it more human. And humans live and evolve and exist in three-dimensional space. And as we were discussing, when you interact with digital information in three dimensions, you learn it more quickly, you understand it more deeply, and you retain it longer. And so this is a step on a journey that has been going on for decades. I would characterize this as on par with the transition from character-based interfaces to graphical interfaces. And now we have touch and pen and voice. And what we're doing is we're taking the digital world and we're freeing it from the flat rectangles that it's been kind of trapped in, and enabling a more human way to interact with digital information. And so that's the charter and the mission that we've been on, and it's the journey that we've been on for decades. And so the business model aspect of it, this doesn't change. This is about how we can give people the tools and empower them to achieve more. And that really ladders right up to our mission statement and why the Windows and Devices Group is here: to create more personal computing. Because we think that this notion of mixed reality is not some left turn that we're all taking to go live in some different universe. It is the continuation of personal computing. And we think that mixed reality and the future of computing is three-dimensional, certainly to some degree. And it's because it's more human, and that's why we're doing it.
[00:46:40.107] Kent Bye: Yeah, and I used to work at an IT automation company, and so I was looking at the different announcements that are happening today. I'm seeing this trend of moving from virtual machines and centralized servers to decentralized containers and serverless architecture with microservices. It's sort of a paradigm shift where, if I look at the competitors in the cloud computing space, Amazon is clearly the leader of centralized cloud computing, and then Azure is in some ways making a play to be able to decentralize things with serverless architecture. And I feel like that's a strategic move that Microsoft is uniquely positioned to make, that bold of a move towards decentralization. But I'm curious to hear more from your perspective on how this decentralization is going to kind of play into this overall ecosystem of mixed reality.
[00:47:30.249] Brandon Bray: Sure. When you actually think of decentralization, really, at the end of the day, we're trying to make life easier for the developer. And then the end user experience has to be kind of obvious. One of the examples that you'll see often is people wanting to use a HoloLens where you can go into a space, something like a factory floor, for instance, and see rows that are fairly similar, where each row has that same look. You want to be able to look at a UI and say, how is this factory line working? And then walk to the next one and see how it's working. They all look the same. So you have to actually start adding in information from the factory itself. And that's where those services come in: putting content into the environment itself, onto the factory line, that the HoloLens or other mixed reality devices can take advantage of. The lines can report signals of how things are going, and then you can use mixed reality to visualize how it's going. And so a factory manager could walk around and see, okay, this factory line is red because it started to glow, and that's information that's coming from the factory line itself. And it's that decentralization of services that's going to start making that easier and easier to do. And so I look at it simply from a developer's point of view: what's the easiest way to get the job done? And between visualizing and placing content in the right place, we're actually moving towards the environment being a big part of how mixed reality is created. Now you actually have to start putting computing into the environment as well.
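As an illustration of that factory-floor scenario, here's a hypothetical sketch, under my own assumptions rather than anything Microsoft has shown, of a hologram overlay that tints a factory line's status indicator from a telemetry signal. `TelemetryClient.GetLatestStatus` is a made-up stub standing in for whatever edge or cloud service would supply the data.

```csharp
// Hypothetical Unity (C#) sketch: tint a factory line's hologram from a
// telemetry signal supplied by a stubbed-out service.
using UnityEngine;

public class FactoryLineOverlay : MonoBehaviour
{
    public string lineId;            // which physical line this hologram tracks
    public Renderer statusIndicator; // the glowing status panel

    void Update()
    {
        bool healthy = TelemetryClient.GetLatestStatus(lineId);

        // Glow red when the line reports a fault, green otherwise.
        statusIndicator.material.color = healthy ? Color.green : Color.red;
    }
}

public static class TelemetryClient
{
    // Made-up stub; a real app would subscribe to an IoT/edge service.
    public static bool GetLatestStatus(string lineId) { return true; }
}
```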
[00:49:20.901] Kent Bye: So what do you want to experience in mixed reality?
[00:49:25.666] Greg Sullivan: I was just thinking about this, looking back at my last several weeks of airplane travel. One of the things I'm most excited about is the notion of presence: being digitally present in a meaningful way, in a three-dimensional and potentially someday even tactile way. It would enable me to spend less time on airplanes. This notion of collaborating in a mixed reality space with people who are distributed all over the world has pretty profound implications, not just in terms of my quality of life and not being away from my family as much, but also in how we can collaborate as humans: breaking down the barriers between us, getting to a friction-free collaboration space, and freeing up creativity. I love that we're focusing on creators with Windows 10 in this time frame. The Creators Update has a whole bunch of tools that enable people to do things that used to be the realm of the high priests of 3D, and now normal people can create stuff they wouldn't have dreamed possible even just a couple of years ago. So for me, the idea that I can be given the superpower to be meaningfully present across long distances, with friction-free collaboration over space and time, is a pretty powerful one. What I think is the coolest aspect of this whole space, though, is that we don't even know. In some sense, mixed reality is a bit like the laser: it's this awesome thing, and we're still figuring out all of the powerful applications that can make our lives better. I think most of those we haven't even imagined yet. That's part of what Build is all about: this community comes together, looks at all of the tools we're providing, and then lets their creativity go nuts. So to answer your question: I had one answer, but I think the real answer is probably something I haven't even contemplated yet.
[00:51:36.648] Brandon Bray: There's a season 2 episode of Star Trek: The Next Generation, "Elementary, Dear Data," where they go into the holodeck to play Sherlock Holmes and are challenged to create a mystery that even Sherlock Holmes couldn't solve. It ended up with the story continuing outside of the holodeck, back in the real world. That fusion of storytelling, traveling back in time to Baker Street in the early 1900s and then returning to the real world where the story carries on, is emblematic of what I want. That's the high watermark I want to see in my lifetime and push forward toward.
[00:52:21.890] Kent Bye: Great. And finally, what do you think is kind of the ultimate potential of mixed reality and what it might be able to enable?
[00:52:30.596] Greg Sullivan: Well, today there are a bunch of barriers between the digital world and regular people. We've seen the digitization of things have a profound impact on many, many industries, and now we're seeing just the start of how big data and AI can solve new problems. The digital world has become a very powerful part of our lives, and that's profoundly enhanced by mixing reality, so that the barriers between regular people and meaningful use of that digital world are taken away. I'm old enough to remember my first computer science classes being done with holes punched in pieces of paper. That's not a very human way to communicate, but those were the terms on which the computer communicated, so we fit our method of communication to what the computer did. Over the last several decades, we've been on a slow march of changing the frame of the conversation, our interaction with the digital world, from the terms of the computer to the terms of the human. To me, mixed reality is one of the key things that will make our interaction with the digital world more human and more natural. If I look at a digital object sitting on the table, I don't need a manual to know that I could potentially reach out, pick it up, turn it around, and look at the other side, because I'm a human and I live in three dimensions. We'll get to the point where interaction with the digital world is second nature if you're a human: we won't need a manual, and we won't need to be trained how to do things. Breaking down those barriers is one of the most exciting things that's happening.
[00:54:21.013] Brandon Bray: For me, obviously a lot of the technology is innately interesting on its own. But I look back at the history of time: we went from recording history on stone tablets to paper, and over the last century we've brought in video. Now we're actually bringing in our environment, and it's allowing us to time travel. We get to do things like record an experience. We have holographic capture as one of the technologies, where you can see something from every angle, whether it's family members we can memorialize forever or actual moments, like 360 video putting you in the middle of an epic moment in time and letting you go back to it. At the same time, we get to travel into the future with our imagination, whether through science fiction or other kinds of storytelling, to imagine what the future can be like. We have this new medium that's just so different, and I'm excited to see how it evolves. Because if you go back and read the predictions of technology, even from the 50s and 60s, of what they thought it would be like right now, it was remarkably different from where we are. So whatever we imagine things will be like 20, 50, or 100 years from now is probably just as far off the mark.
[00:56:07.425] Kent Bye: OK. Awesome. Well, thank you so much.
[00:56:09.526] Brandon Bray: Thank you. Thank you.
[00:56:11.990] Kent Bye: So that was Brandon Bray, who's on the Windows Mixed Reality team working on the developer ecosystem, as well as Greg Sullivan, who's on the marketing team for the Windows and Devices Group.

I have a number of takeaways from this interview. First of all, I just have to say that the Microsoft HoloLens is super impressive. There's no other inside-out tracking, whether in AR or VR, that's anywhere close to what the HoloLens is able to do. It's pretty magical. I was able to see about nine of the eleven major HoloLens demos that were on the floor today, and the common theme in a lot of them is collaboration, often in a sales process where you need to be able to look other people in the face. Not being included in that shared space just wouldn't be as good of an experience; it's that social dimension that makes it so much more powerful.

There were also mixed reality experiences that use the world coordinate system tracked within the HoloLens to do things like measure a staircase in order to design custom stairs (there's a short code sketch of that kind of measurement below). And probably the most visually compelling demo I saw at the Build conference today was from Finger Food, which had an entire semi truck on the expo floor. You put on the mixed reality headset, and there's no hood on the truck, yet when I look at the truck I see the hood there, and my mind just believes that there's actually a hood there. It's probably the most convincing mixed reality experience that I've had.

Overall, Microsoft has a huge developer ecosystem. There are thousands of developers here, and most of them come from enterprise applications, which is interesting because in order to do HoloLens development you actually need the skills of a game developer: you have to know Unity. Essentially, most of the apps being written for the HoloLens right now are being written in Unity. Raven Zachary of Object Theory told me that either you're a game developer getting into enterprise app development, or you're an enterprise app developer getting into the game development workflow and pipeline by using Unity.

In terms of the inside-out tracked controllers that were announced today, there were no hands-on demos available. But what I can say from the Leap Motion is that having to keep your hands within the field of view is a real design constraint. Once you lose that direct line of sight, you lose hand presence, which may be okay in augmented reality, but in VR it's a presence breaker and kind of a buzzkill. I think you can design around that, but once you start porting over applications from, say, the Vive or Oculus Rift, it's going to be pretty frustrating for people playing those games.

Now, I have an open question as to what exactly Microsoft's strategy is with their VR headsets, which they were essentially never calling VR headsets; you'll notice they would always call them immersive headsets.
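As for that staircase measurement, here's a rough illustration of why the world coordinate system matters: once the HoloLens anchors content in a stable world space, measuring real geometry reduces to vector math between user-placed points. This is a minimal, hypothetical Unity C# sketch, not code from any of the demos; the class name and the way points get captured (say, an air-tap raycast against the spatial mapping mesh) are assumptions.

```csharp
// Hypothetical sketch: measure the distance between two points the user
// places in the HoloLens world coordinate system, e.g. the bottom and top
// of a staircase. Capturing the points (gaze/air-tap raycast against the
// spatial mapping mesh) is assumed and not shown here.
using UnityEngine;

public class TwoPointMeasure : MonoBehaviour
{
    private Vector3? firstPoint;  // null until the first point is placed

    // Call this with the world-space position of each tap hit.
    public void PlacePoint(Vector3 worldPosition)
    {
        if (firstPoint == null)
        {
            firstPoint = worldPosition;
        }
        else
        {
            // Both points live in the same tracked world space, so the
            // straight-line distance is simple vector math, in meters.
            float meters = Vector3.Distance(firstPoint.Value, worldPosition);
            Debug.Log("Measured distance: " + meters.ToString("F3") + " m");
            firstPoint = null;  // reset for the next measurement
        }
    }
}
```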
So, about those immersive headsets: I had a chance to do a hands-on with the Acer headset, and unfortunately I was not impressed. A couple of things I had problems with. First, it's an LCD screen, not a low-persistence screen, so when you move your head quickly left and right you get the same type of motion blur that you got back in the DK1 or DK2 days. It feels like a regression. Second, the optics are pretty poor. The Gear VR, Oculus Rift, and HTC Vive all have really good optics, in that there's no huge distortion around the edges. The Daydream, on the other hand, has noticeably distorted optics that make me a little motion sick when I move my head around, and the Acer headset has the same kind of problem: as I look toward the edges, the image blurs, and in combination with the lack of low persistence on the LCD screen, it started to give me a bit of a headache.

The other issue is the build quality; actually putting the headset on was pretty awkward. It has a strap where you have to use both hands to push a button and pull, and then it kind of falls down onto your face. Anytime you have to go through a lot of explanation to tell somebody how to put on a headset, and they follow the instructions to a T and it still doesn't quite work the way it should, that's a problem. I think it's going to be a little clunky to get onto your head. That said, once you have it fitted and can simply slip it back on, it's probably fine. But the point is, I wasn't impressed with the build quality.

It is a low price point: they're going for around $399, and they said there will be a bundle available with the motion controllers, though I don't think the controllers are necessarily going to be mandatory. So you're paying a $400 price point for something positioned somewhere between a Gear VR and, presumably, an Oculus Rift, yet the quality of the actual VR experience is less than a Gear VR. For me, I look at what they're doing with the HoloLens, their AR headset, which is essentially a $3,000 dev kit for enterprises. If they're making this enterprise play, I don't really understand why they would create a consumer VR headset that's worse than anything else that's out there; if a lot of content gets ported to it, that's going to be a frustrating experience.

Maybe what they're thinking is that the low price point suits enterprise applications that are a little less intensive, like data visualizations, where you're not necessarily moving your head around very quickly. For some people it may not cause any motion sickness problems, but I'm particularly sensitive, especially after using high-end VR; it feels like going back to the DK1 or DK2. So I'd basically say you get what you pay for with the Acer headset, and to me it's a bit of a mismatch with Microsoft's overall strategy. Now, if you look at their ecosystem of enterprise development overall, they have a lot of other initiatives going on.
And I just want to call out Microsoft Cognitive Services and the artificial intelligence work. I think that's going to be a huge thing. I've already had a chance to try out Human Interact's Starship Commander, which I talked about back in episode 503, and I think we're going to see a lot of these conversational interfaces, interacting with artificial intelligence through a HoloLens.

If you look at what happened with the mobile market in terms of iOS and Android, Microsoft really missed that boat, so they're taking a device-agnostic approach. They're thinking about a future where we have multiple different devices and you want to take one experience and have it in many different contexts. You could write an application for virtual reality, have it on your HoloLens in augmented reality, and also have a 2D version on a Surface tablet as a portal into that world. I'll be talking to Raven Zachary of Object Theory about what they've been doing with that.

I think augmented reality is really going to be bootstrapped by the enterprise market, and when I look at the other companies, like Google, Facebook, and HTC with the Vive, in terms of their ecosystems for enterprise development, Microsoft actually has much better positioning when it comes to bootstrapping augmented reality. Here at the Build conference, I'm seeing a lot of very polished, and in some cases already deployed, AR applications on the HoloLens. A lot of it is sales, plus architecture, engineering, and design: anything working with 3D spatial data and data visualization. I don't see any specific education apps on the floor here, but that's another market that I think is going to be a huge application for the HoloLens.

So overall, I'm super impressed by the strategy Microsoft is taking. I do have specific questions about the execution of their VR headsets; I think once people get their hands on them, they might be a little disappointed. Some people had high hopes for that middle price range, but there are trade-offs, and to hit that cost you have to cut corners. Some of that, unfortunately, is going to come out of the quality of the experience.

So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. I'm here on my own behalf, traveling to all these different places and doing these really in-depth interviews. If you enjoy the podcast and the coverage that you're receiving here on the Voices of VR, then please do consider becoming a donor to my Patreon. Just a few dollars makes a huge difference. You can donate today at patreon.com/voicesofvr. Thanks for listening.