#1232: Low-Vision VR Design Innovations & XR Accessibility in Owlchemy Labs’ “Cosmonious High”

Owlchemy Labs has been on the leading edge of accessible VR game design since 2016, starting with Job Simulator and then following up with key accessibility features for Vacation Simulator and Cosmonious High. They were at the XR Access Symposium showing off the pioneering low-vision VR features that launched for Cosmonious High last March, and I had a chance to sit down with their Accessibility Product Manager Jazmin Cano and Senior Accessibility Engineer Peter Galbraith to unpack their journey into XR accessibility since 2016, some of their key innovations along the way, and a deep dive into their low-vision VR features.

Owlchemy Labs shipped a Smaller Human mode for Job Simulator on May 3, 2016, saying, “We’ve received a ton of requests from some of the less vertically-gifted players to build in a way to reach all of the areas of Job Simulator that may have been previously out of reach. We’ve delivered!” This allowed kids to play the game, but it also had a number of accessibility implications for folks who are not able to stand up. Owlchemy Labs’ senior accessibility engineer Galbraith told me in this interview, “It allowed players who were shorter or seated to reach things that were higher up. And that was sort of our first foray into this. We understood that VR is a very physical medium, so we had to make considerations for the different physicalities that people have. That was one of our first things. We also, at that time, we didn’t necessarily realize we were making some accessibility features, but to combat things like the screen door effect that was present on a lot of headsets at the time, it was important that we made our text large and clear.”

Owlchemy Labs announced a one-handed mode for Vacation Simulator on June 2, 2019, saying, “We strive to create interactions that feel natural, and we understand that not all players may be able to use two arms simultaneously. We now fully support one handed play for 100 percent of the game!”

Owlchemy Labs announced their subtitle update for Vacation Simulator on October 28, 2019 saying, “The elements of space and presence in virtual reality added a whole new dimension to the challenge that demanded we tackle things with a fresh perspective. After a boat load of research, prototyping, and user feedback, our subtitle system defines a new standard for VR… Our subtitles are both functional and playful. No matter where you look, the subtitles are always visible and can help guide you back to the speaker.  Transparency helps keep them unobtrusive and integrated into the world, while high contrast ensures they’re legible no matter what you’re looking at. Step away from the speaker and they’ll enlarge; turn your back and icons ensure you know exactly who’s saying what.”
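
The announcement describes subtitles that stay visible wherever you look, enlarge as you step away from the speaker, and remain legible against any background. As a rough illustration of just the distance-scaling part, here is a minimal Unity C# sketch; the component name, scaling curve, and positional offset are illustrative assumptions, not Owlchemy's actual implementation.

```csharp
using UnityEngine;

// Illustrative sketch only: keep a world-space subtitle panel facing the player
// and scale it up as the player steps away from the speaker.
public class DistanceScaledSubtitle : MonoBehaviour
{
    public Transform speaker;            // character currently talking
    public float baseScale = 1.0f;       // scale at referenceDistance
    public float referenceDistance = 2f; // metres at which baseScale applies
    public float maxScale = 3.0f;        // cap so distant subtitles don't explode

    void LateUpdate()
    {
        Transform head = Camera.main.transform;

        // Enlarge roughly in proportion to distance so the apparent size stays readable.
        float distance = Vector3.Distance(head.position, speaker.position);
        float scale = Mathf.Clamp(baseScale * (distance / referenceDistance), baseScale, maxScale);
        transform.localScale = Vector3.one * scale;

        // Hover the panel above the speaker and keep it facing the player.
        transform.position = speaker.position + Vector3.up * 0.4f;
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```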

Owlchemy Labs’ next title, Cosmonious High, launched on March 31, 2022, and its first accessibility update shipped on June 9, 2022, featuring One-Handed Mode, Play Seated, Enhanced Object Interactions, Tutorials, and More Icons.

On March 23, 2023, Owlchemy Labs announced their first-of-a-kind VR features for low-vision users, which included a vision assistance mode with high-contrast object highlighting, haptic feedback for object selection and teleporting, cancelable audio descriptions, grab-and-release confirmation for objects, and audio descriptions of objects, environments, areas, and tutorials. Owlchemy Labs told me that they didn’t do any user testing with blind users for these features, and in my brief testing these accessibility features felt much more optimized for low-vision users than for folks who are blind. But given how visual a medium virtual reality is, these are pioneering features that will no doubt provide inspiration for other features and implementations for blind and low-vision users.

Owlchemy Labs’ Cano and Galbraith gave an amazing presentation at the XR Access Symposium going through some of the pioneering work on low-vision accessibility features that they’ve been doing, and I had a chance to sit down with them both at the end of a long first day of the gathering to unpack these accessibility features, how Owlchemy Labs incorporates accessibility design into their development process, their journey of solving some of the hardest design problems in XR accessibility through their iterative game design approach, and how they are embodying their mission of “VR For Everyone* *No Exceptions.”

Owlchemy Labs’ pioneering work in accessibility was consistently cited by participants at the XR Access Symposium as the kind of proactive, experimental innovation that needs to happen on a much broader scale, across many different contexts and applications of XR, in order to move towards standardized guidelines for the industry. I do a deep dive into their process and philosophy with Cano and Galbraith, and it is super inspiring to see what can be done by independent developers who develop WITH folks with disabilities and have a commitment to Inclusive Design and Universal Design principles.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com. So this is episode 11 out of 15 in my series on XR accessibility. Today's episode, we have a deep dive with Owlchemy Labs with Jazmin Cano and Peter Galbraith, who talk about some of the different accessibility features that they were able to implement for Cosmonious High, specifically the features for low vision. So this is a really innovative approach for folks who have low vision. Most of the virtual reality experiences, because VR is such a visual medium, there honestly haven't been a lot of applications specifically for folks who have low vision or blindness. This is probably more towards low vision, because I tried to play through some of this without having any sight, and it's very difficult, because there are some objects and stuff that you're interacting with that I think would be pretty difficult to fully understand what's happening. I think that's a good step in the right direction, but I don't know if it's at the point where it's actually for people who are fully blind. They actually didn't test the features up to that point, but there are some really innovative low vision techniques and user design patterns that I think are starting to take a first crack at what it would look like to have a screen reader type of functionality within an immersive space, where you can hold out your hand and point at different objects and get additional information. And when you're teleporting around, you just get a little bit more context. There is this challenge of the audio overlapping across the narrative that's happening and what's happening with these audio descriptions as you move around. But yeah, I think this is the first crack at it. And I think it's actually quite innovative and something that's definitely worth a look in terms of just seeing what is done and what might be able to be done in this realm in the future. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Jazmin and Peter happened on Thursday, June 15th, 2023 at the XR Access Symposium in New York City, New York. So with that, let's go ahead and dive right in.

[00:02:12.577] Jazmin Cano: Hello, I'm Jazmin Cano. I work at Owlchemy Labs and I'm an Accessibility Product Manager and also Radical Kindness Practitioner. So what I do is design, research, user testing, and also a producer for some teams.

[00:02:31.781] Peter Galbraith: And I'm Peter Galbraith. I work at Owlchemy Labs as our senior accessibility engineer and cyber pun king. I do a lot of the programming and design work for our accessibility and gameplay features.

[00:02:48.612] Kent Bye: Great. Maybe you could each give a bit more context as to your background and your journey into the space.

[00:02:53.757] Jazmin Cano: Oh, I like this one. So I started doing stuff in VR in 2015 and I started as a 3D artist. My first VR gig was at an educational VR startup making content for kids in hospitals who couldn't go to school to take their classes. So the content was made so that they could still learn while they were, you know, away at the hospital. After that, I was doing a few freelance gigs around the Bay Area with some VR startups. And in 2016, I joined High Fidelity. And I was at High Fidelity for about six years doing open source, shared VR, social VR platform stuff. And at that company, I went from being a 3D artist to doing community management, user engagement work, and then that led me to product management. Towards the end of that time at High Fidelity, I was helping people have access to our products, making them accessible, and that's how I got my title of accessibility product manager. And after leaving High Fidelity, I joined Owlchemy, and I get to do accessibility still as a product manager. Cool.

[00:04:05.800] Peter Galbraith: And yeah, I started first messing around with VR in 2013 with the Oculus DK1. I had gone to college for computer science, but not specifically the game dev side of things. So I taught myself Unity and C# and began doing personal projects and just messing around with VR because I thought it was a really cool new medium. And as I was messing around making small projects of my own and showing them off at local meetups in the Austin, Texas area, I eventually got noticed by Owlchemy and they asked me to join in 2016, and the rest is sort of history. I've been there for six and a half years and slowly transitioned my role from Gameplay Engineer to now the Senior Accessibility Engineer.

[00:05:02.446] Kent Bye: I know that over the years, Owlchemy has been a real pioneer in trying to push forward different levels of accessibility. I know that there were a lot of younger kids who were playing things like Job Simulator, and so there were height adjustments to allow kids to play. And then there have been captions that have been implemented. Here at the XR Access Symposium 2023, you were showing a lot of the features for folks who either have low vision or blindness. And so, yeah, I'd love to hear maybe a little bit of a recap of some of the different accessibility features that Owlchemy has been involved with over the years.

[00:05:36.032] Peter Galbraith: So one of the first accessibility features we had, as you mentioned, was the smaller human mode that was added to Job Simulator very early on. It allowed players who were shorter or seated to reach things that were higher up. And that was sort of our first foray into this. We understood that VR is a very physical medium, so we had to make considerations for the different physicalities that people have. That was one of our first things. We also, at that time, we didn't necessarily realize we were making some accessibility features, but to combat things like the screen door effect that was present on a lot of headsets at the time, it was important that we made our text large and clear. And over time, we've kept with this style and design because it has turned into being a very accessible thing for our games. Additionally, you know, from there we started doing things like planning for people who only have the use of one hand. So initially there were a couple small items in Job Simulator that required a two-handed interaction, like the grape juice bottle. That's a family-friendly game, definitely not a wine bottle. But you had to, at one point, hold the bottle and remove the cork with the other hand. But as we realized that that was challenging for people who, you know, may have only had the use of one hand, or maybe even they just had one controller die, it turned out to be a benefit to just let players remove the cork right away and do things like that. So we started thinking about one-handed accessibility. And then my first real focus on accessibility was with our VR subtitles system. During Vacation Simulator I was given the opportunity to prototype what I thought was important for a couple days, and I decided to spend that time working on a VR subtitle system, which we eventually decided to turn into a full part of the game. And that allowed for, again, a lot of accessibility, it allowed for localization, and really showed the value of having accessible features in our games. And then from there my role slowly evolved, and now we've gotten to do things like the recent vision accessibility update that we did for Cosmonious High to help players that are legally blind and have low vision experience our games and challenge this notion that VR is only for people with sight. So we're trying to expand VR for everyone. VR for everyone is our goal.

[00:08:34.009] Kent Bye: And Jazmin, I'd love to hear where you came in on this process. There's been a lot of history of different projects that Peter just ran us through, and so just a long evolution, actually a lot of great context for what is possible and what can be done. And so I'd love to hear where you dropped in in the mix of coming from High Fidelity over to Owlchemy and where you started to get involved with accessibility.

[00:08:56.308] Jazmin Cano: Yeah, so right before joining, I was working on accessibility at High Fidelity for blind users, and I was able to bring that with me. When I joined Owlchemy, my manager told me day one, you're going to work on accessibility for players who are blind. And I was like, whoa, we get to do that? That's so cool, because I haven't seen that before in VR. And I was just ready. So I jumped in, worked with Peter, and then started thinking about the approaches we could take, looking at the research that the company had done before, and ready to go, prepared for it, and worked on it till the end of the year. And so I got to launch it this year in March, and that was really exciting to show people what I got to work on.

[00:09:42.411] Kent Bye: Yeah, maybe you could elaborate on that research process, because we're here at XR Access, where there's a lot of academics who are doing a lot of research for a lot of these variety of different things. And so were you able to leverage some of that existing research, or was this an area where you really had to do your own user research?

[00:09:58.715] Jazmin Cano: I would say kind of yes to both. So a lot of the research I had done previously was talking to people about XR experiences and if they hadn't experienced XR as a screen reader user, what have they done that's maybe like a game to kind of understand how they approach something that's not a traditional website. And then what else I did was Google a lot of words like video games for blind players. And there was not a lot out there that I could look at and reference. So the research that I did for that was mostly looking at what academic research studies have shared. So that would be mostly secondary research, reading what other researchers have written and collecting all that data together to help us commit to some design ideas.

[00:10:45.463] Kent Bye: Yeah, and you said that you had also talked to other consultants who were blind gamers as well. Maybe you could elaborate on that.

[00:10:51.868] Jazmin Cano: Yeah, so the team talked to Steve Saylor, also Blind Gamer Steve. I wasn't there for that, but I did hear it was a great time talking to him, these consultations. And there is a lot of documents already written up that the team had to use to help start the project. So the questions for Steve, there's a lot of questions for Steve, but it was mainly like, Steve, how do you imagine yourself using VR? And one thing I remember is that he had said he wants to have the force in VR, use his hands, place them up in space and aim his palms around and hear descriptions. And that's kind of like what we have now.

[00:11:34.453] Kent Bye: So yeah, I'm wondering, where did you begin? Maybe you could just talk about some of the different target feature lists or things that you were trying to implement.

[00:11:42.277] Peter Galbraith: Absolutely. Yeah, so from that initial meeting with Steve Saylor, he sort of gave us his dream sheet of what he'd like a VR experience that's accessible to blind people to be like. And so some of those things were having every object in the game have descriptive text, you know, having settings for the text-to-speech so that it's customizable, having descriptions of the environment and what's around, descriptions for cinematics, descriptions for navigation as you're moving around the world, and just adding sound effects to things to better identify where they are and what they are in the world. And so we started with this and we, from there, created a sort of list of high-level goals of like, okay, what are our key things that we want to see in the experience? And so one of our core goals was to create a mode so that people with vision impairments could play our game from start to finish without any sighted assistance. We wanted every object in the environment to have descriptive text that would be able to read the object's name and give a short description via text-to-speech, similar to how screen readers work for content in a 2D plane on traditional screen media. And so that's where we set our initial sights and just tried to identify what we had to work with already that we had done. You know, some things were great, like having the large text already in our games really helped give us a head start. But as well, we also had a wonderful system called World Items that provided names and descriptions of most items that could be picked up in the game already. And so once we had the text-to-speech implemented, we were able to hook right into this system and get a first prototype going to see what it would be like in the headset if we have descriptions for a number of these items.
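
To make that pipeline concrete, here is a minimal Unity C# sketch of per-object metadata feeding a text-to-speech call, in the spirit of hooking item names and descriptions into TTS. The class names and the SpeakText placeholder are illustrative assumptions, not Owlchemy's actual World Items system.

```csharp
using UnityEngine;

// Illustrative sketch: per-object metadata that a vision-assistance mode can
// read aloud, similar in spirit to attaching a name and description to every item.
public class DescribableItem : MonoBehaviour
{
    public string itemName = "Grape juice bottle";
    [TextArea] public string shortDescription = "A corked bottle sitting on the counter.";
}

public static class AssistReader
{
    // Hypothetical hook into a text-to-speech service; a real project would call
    // a platform TTS plugin here instead of logging.
    public static void SpeakText(string text)
    {
        Debug.Log($"[TTS] {text}");
    }

    // Read just the name, or the name plus description, for whatever the
    // player's hand ray is currently pointing at.
    public static void Announce(DescribableItem item, bool includeDescription)
    {
        if (item == null) return;
        SpeakText(includeDescription
            ? $"{item.itemName}. {item.shortDescription}"
            : item.itemName);
    }
}
```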

[00:14:10.504] Kent Bye: Yeah, so in these virtual worlds, you have all these objects that you're interacting with. So it's adding the metadata in some ways to be able to describe what all those objects are. And I have a couple of questions, because first I'll start with, so I was talking to another researcher named Lucy who was looking at 360 video descriptions. And so people watch a 360 video and then have different descriptions of different stuff that's happening in the scene. And one of the questions I had is, well, how do you prevent those audio descriptions from going on top of different narrative moments that are maybe happening, either speech or dialogue? And so that was a 360 video, which you could maybe have a separate audio track. It's a little bit more difficult to have those collisions that basically is happening on the authoring side when you're making the video. But in this, this is more of an interactive game. So how do you manage the different layers of audio from what's happening diegetically with both the story and the sounds of the environment versus making sure that these audio descriptions isn't somehow overlapping or conflicting with this other audio that you have built into the experience.

[00:15:15.554] Jazmin Cano: Right. So that's a great point. That's something we talked about. How do we make sure that the audio is not conflicting? What we did was we lowered the volume of characters speaking in the environment because, well, the way we had built it, we couldn't really pause the characters from speaking when the text-to-speech was happening. And it's not really a problem because we have a way for players to get the info again afterwards if they do miss something. So the volume is prioritized in a way where the descriptions are the loudest for the player to hear. That kind of follows the standard of a screen reader as well. And then, yeah, like I mentioned before, if the player wants to hear what a character said, they can wave at them and then hear it again.

[00:15:58.949] Peter Galbraith: Yeah, the audio ducking when a text-to-speech request happened allowed us to prioritize what the player was requesting to hear. So it gave the player agency in what they are hearing. So we just, whenever the text-to-speech is speaking, we just turn down the in-game audio, but we also provide a lot of ways of signaling important information like what the characters might be saying to the player, whether that's through additional speech lines or through additional things in the environment.
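
A rough sketch of that kind of audio ducking in Unity, assuming the world dialogue and ambience are routed through an AudioMixer with an exposed volume parameter (the "WorldVolume" name and the decibel levels are assumptions): turn the world mix down while a description plays, then restore it.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Audio;

// Illustrative sketch: duck the in-game mix while text-to-speech is playing so
// the description the player requested always takes priority.
public class TtsAudioDucker : MonoBehaviour
{
    public AudioMixer worldMixer;              // mixer the world/dialogue audio routes through
    public string volumeParam = "WorldVolume"; // exposed parameter name (assumed)
    public float duckedDb = -18f;              // level while TTS is speaking
    public float normalDb = 0f;
    public float fadeSeconds = 0.15f;

    public Coroutine DuckWhileSpeaking(float speechDuration)
    {
        return StartCoroutine(DuckRoutine(speechDuration));
    }

    IEnumerator DuckRoutine(float speechDuration)
    {
        yield return Fade(normalDb, duckedDb);
        yield return new WaitForSeconds(speechDuration);   // the TTS clip plays here
        yield return Fade(duckedDb, normalDb);
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            worldMixer.SetFloat(volumeParam, Mathf.Lerp(from, to, t / fadeSeconds));
            yield return null;
        }
        worldMixer.SetFloat(volumeParam, to);
    }
}
```

Tying the duck to the length of the generated clip keeps the player's requested description on top of the mix for exactly as long as it plays, which is the player-first prioritization Galbraith describes.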

[00:16:38.492] Kent Bye: Yeah, and one of the other things that you'd shown that was really quite interesting is that in a lot of VR games, you're actually locomoting and moving around a space, and so you have a teleport mechanic. And so just to have a little bit more constrained zones, because you could allow people to go anywhere, but if they can't see anything or they have low vision, then it actually is helpful to constrain these different locations that they could go to. And so, yeah, maybe you could talk about this, which I guess reminds me of the Rick and Morty VR piece that Owlchemy Labs did, where you were able to create these larger zones that you're going in between. And so here, though, you have specific locations that are tethered to another object that you may be interacting with in some way. So you have an ability to explore around a room, but get a little bit more context for what's in that room and where you might need to go to be able to solve these various puzzles or get the information that you need to progress through the game. So I'd love to hear a little bit more about this teleport system that you created.

[00:17:32.920] Peter Galbraith: Yeah, so in Rick and Morty VR and Vacation Simulator, we had specific teleport zones that the player could aim at, and they would go to that zone, which matched up roughly with their play space shape. In Cosmonious High, however, we've switched to granular teleporting, so the player can specify on a more nuanced level where exactly in the world they're going to teleport to. But we repurposed that zone tech from those games to indicate where certain objects of interest or certain locations of interest are, so that when the player is aiming with their teleport pointer around the space, they can hear the name of the space that they're trying to teleport to. And by doing that, they can sweep around with their hand as they're aiming and feel a little haptic vibration whenever they move from space to space, and then hear what the name of that space is, so that they can know that they're heading to the right space and help them build a mental map of the environment as a whole. And additionally, once they do decide to teleport to a location, we also play confirmation audio to let them know that yes, they have made it to the location that they were trying to make it to.
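
Here is a hedged Unity sketch of what announcing named teleport zones might look like: a label component on each zone's collider, a haptic pulse when the pointer crosses into a new zone, the zone name spoken, and a confirmation line after teleporting. The component names, haptic values, and the Speak stub are assumptions, not the actual zone tech described above.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch: as the teleport pointer sweeps from zone to zone, pulse
// the controller and speak that zone's name; confirm aloud after teleporting.
public class ZoneLabel : MonoBehaviour
{
    public string zoneName = "Chemistry lab sink";
}

public class TeleportAnnouncer : MonoBehaviour
{
    ZoneLabel currentZone;

    // Call every frame with the teleport arc's current hit.
    public void OnPointerHit(RaycastHit hit)
    {
        ZoneLabel zone = hit.collider.GetComponentInParent<ZoneLabel>();
        if (zone == null || zone == currentZone) return;
        currentZone = zone;

        // Short haptic pulse on the aiming hand whenever the pointer crosses
        // into a new named zone.
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        hand.SendHapticImpulse(0, 0.4f, 0.05f);

        Speak(zone.zoneName);
    }

    // Call once the teleport actually completes.
    public void OnTeleported()
    {
        if (currentZone != null)
            Speak($"Arrived at {currentZone.zoneName}");
    }

    // Stand-in for the text-to-speech call.
    void Speak(string text) { Debug.Log($"[TTS] {text}"); }
}
```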

[00:19:09.026] Kent Bye: Gotcha. So yeah, it sounds like these features are built for both people who have low vision, but also people who are blind. And as I was walking over, I said, hey, maybe I'll try to put a blindfold on and play through the game. And so you said, oh, it might be a little difficult. So it sounds like these features may be tuned for people who are low vision, but it still may be technically possible for someone who is blind to be able to play this game. Is that correct?

[00:19:31.853] Peter Galbraith: We were definitely aiming to make this work for people with total blindness, especially this feature of teleportation was to allow people with blindness to move around the space even without the feedback of the physical world, which is really one of the major challenges we encounter when designing for blind and low vision players in VR. That said, we didn't test with anyone that has total blindness, so we're not 100% sure whether or not it's possible to complete the game that way, but we'd love to hear feedback and see if you can do it.

[00:20:16.165] Kent Bye: Yeah, Jasmine, I'd love to hear a little bit elaboration on the testing process, because I know that that's something that folks who either have low vision or blindness, a lot of times they really don't have a reason to own a VR headset because they can't really use much on it. So as you're building what may be one of the first consumer VR apps that have a lot of really robust features for low vision folks, then maybe you could describe how do you overcome that, or how do you start to test some of these different features and collaborate with folks like VR Oxygen in order to do this type of user testing or at least get some feedback with whether or not you're on the right track.

[00:20:50.205] Jazmin Cano: Yeah, so VR Oxygen was a great help in finding people. They have a large community, I want to say thousands of people, who've signed up on their newsletter to be a playtester. And the way they approached this to help us was to ask, like, who among you has vision impairments? And then there's a form for people to fill out that kind of told us a bit more about their vision so we could see if they fit sort of the profile we were looking for. And through there we found people who had a wide range of vision impairments and different levels of severity when it came to, like, blurriness or anything that obscured their vision. And we had more people sign up than we could playtest with, which is exciting. At first it was a challenge finding people, and they helped us. They were also able to find people who could offer access to their Quest 2 to, like, a family member. Because you're right, there are some challenges for people to use this, because the devices still need to be more accessible. And I would say that also contributes to why I wasn't able to playtest with someone with total blindness. One piece of feedback I heard from someone who is blind was that they don't want to, because they don't think they can use it on their own, because it's not accessible to blind people. So the process there, I mentioned earlier, people fill out a form. We just set up a call with them, see what their availability is. And we'll meet over video calls, do introductions, and then have them play a private build of our latest version that we worked on. And so we would watch them play in their headset in their home. We would also watch the screen of what they're seeing, so that way we can have a side-by-side view. After the playtest, we would just have a conversation and get their immediate feedback. Like, was there anything challenging? Was there anything confusing? Was there anything that you experienced that you thought, wow, that's great? We kind of knew the answer sometimes based on how they were reacting. One of my favorite playtests was actually Steve Saylor. So we got to show him an early build, and when he heard the object descriptions in our registration scene, his reaction to the cute accessories we have was awesome. It was like, he would reach over, scan a pair of glasses, hear the description, and go, oh, that's cute. Scan another accessory, hear the description, and go, that's cute. It was just so fun. Kind of felt like I was shopping with a buddy hearing him. I would say, yeah, that's pretty much how the process went with playtesting and being on these calls with people. Afterwards, there would be a survey sent to them to ask them some follow-up questions where they could answer honestly after thinking about it. Is this something they would use? Is this feature set something that they would recommend to other people they know who have low vision? Is this something they would recommend to someone with total blindness? There's a bunch of questions like that that also helped us gauge if we were doing something right, if we were on the right track, what we could improve on, and is this something people would use? We pretty much figured out early on that it is something people would use, because the responses were mostly, yes, I would recommend this, yes, this was easy to understand, and yes, I would keep playing with these features.

[00:24:13.026] Kent Bye: Yeah, I wanted to go back to this point that was made by one of the folks who was blind, who said that they didn't want to try VR because they didn't want to have to rely on someone else. And that was a point that was made, to some extent, in one of the first talks that we saw this morning at the XR Access Symposium, where they were talking about guides, and so having someone as an embodied avatar, or maybe a parrot, helping to guide another avatar through a virtual space. And one of the comments was that the ultimate guide would be an AI, so they wouldn't have to rely upon another human being. And so that seemed to be a theme, where folks who are looking for these accessibility features want to, end to end, be able to just pick up a headset, get into the application, and be able to play without having to rely on anyone. I think probably once they get into the game, I'm sure that maybe they'd be okay. But I imagine if they just picked up a VR headset, a lot of the core platform functionality on the Meta Quest, I don't know if there would be the functionality to allow people to navigate, to even download and open up the game. So it feels like on the platform side, there'd be a lot of work to be done. But I'd love to hear any reflections on this aspiration to have this complete autonomy, both at a platform level but also in the context of your game, and to what extent do you think that these types of AI guidance might be implemented in these different types of features in order to close that gap, so that people who have total blindness may be able to achieve this total sovereignty and be able to play these immersive VR games without the assistance of other people?

[00:25:41.080] Jazmin Cano: It makes me think of the social model of accessibility. So there's the medical model, which I think is generally about the medical issue, air quotes, I'm doing air quotes, and about fixing the problem, whereas the social model of accessibility is about removing any barriers that people would face. It's not that the person is disabled; it's that the environment around them, the people around them, is causing something that is not giving them the ability to do what they want or access what they want. So when it comes to XR, I think it's important for people to think about that social model of accessibility. So instead of trying to fix something, it's that if the apps and the products and the devices were designed accessibly, then people could access them without having to turn on a bunch of settings that would work just for them. If it was customizable, someone could customize the environment to work for them. And if that means AI is being created that would help make something more accessible to someone, then that's great too.

[00:26:48.460] Peter Galbraith: Yeah. And I definitely think that there's more that we can see on the platform side. Launching an application in VR is still something that would be very challenging for people that are blind. So I'd love to see more platform accessibility there. But we did, on our end, try and make it so that if you at least can make it into the application, even if you can't see, you could still activate the low vision mode that we had created simply by holding the controller to your ear and double tapping the secondary button. It doesn't require you to see any objects in the environment. It doesn't require you to reach out and touch anything. All you have to do is know where the secondary button on the controller is, which, if you own the headset, hopefully you will, then use the proprioception of knowing where your body is to put the controller in the right place. And this was something that we definitely thought a lot about, trying to provide players with that independence so that they didn't need sighted assistance to begin with these accessibility features in our game.
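
As a rough illustration of that kind of activation gesture, here is a hedged Unity sketch using the UnityEngine.XR input API: a double tap of the secondary button while the controller is held near the head enables the mode. The distance threshold, the double-tap window, and the use of head position as a stand-in for the ear are all assumptions, not Owlchemy's implementation.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch: enable vision assistance when the player holds a
// controller near their head and double-taps its secondary button.
public class VisionAssistActivator : MonoBehaviour
{
    public float nearHeadDistance = 0.25f;  // metres from the headset (assumed)
    public float doubleTapWindow = 0.5f;    // seconds between taps (assumed)

    bool wasPressed;
    float lastTapTime = -10f;

    void Update()
    {
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);

        if (!hand.TryGetFeatureValue(CommonUsages.secondaryButton, out bool pressed) ||
            !hand.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 handPos) ||
            !head.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 headPos))
            return;

        bool nearHead = Vector3.Distance(handPos, headPos) < nearHeadDistance;
        bool tappedThisFrame = pressed && !wasPressed;
        wasPressed = pressed;

        if (!tappedThisFrame || !nearHead) return;

        if (Time.time - lastTapTime < doubleTapWindow)
        {
            Debug.Log("Vision assistance mode enabled");  // stand-in for turning on the feature set
            lastTapTime = -10f;
        }
        else
        {
            lastTapTime = Time.time;
        }
    }
}
```

The appeal of this shape is that it needs no sight and no in-world target: only proprioception and knowledge of the controller layout, which matches the independence goal described above.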

[00:28:10.280] Kent Bye: How would one learn about that location of where to put the controller and what button to push?

[00:28:16.140] Peter Galbraith: So I believe we've been trying to communicate that through a few different means. I'm honestly forgetting at this moment. Do you remember?

[00:28:27.744] Jazmin Cano: Yeah, we have some of it written on our blog. Also our patch notes for the update. Yeah.

[00:28:34.087] Kent Bye: So they kind of have to know the secret handshake in some ways to be able to activate it.

[00:28:38.162] Peter Galbraith: Unfortunately, that is still somewhat the reality of it. We would like to, again, if platforms had something innate on the platform that we could hook into and say, oh yeah, they've turned on the vision accessibility mode on their headset at the platform level, and we could read that, we could automatically turn it on for them at that point. But unfortunately, that was something that we didn't really have the ability to do in most cases. So we were trying to come up with a way where players can do this without any additional feedback. And unfortunately, it is something that they do need to go to an external source to learn at first. That said, I believe we do tutorialize it in the Vision Accessibility page we added to the backpack in the game. So there's information to be had, so if they have other friends that have tried this, or they found out about this through some other piece of media, maybe they would know. But yeah, I would love to see us be able to communicate that a little bit better to players. It's knowing where the players want to look and want to get that information from, without it impacting the experience for all players in a negative way.

[00:30:06.817] Kent Bye: I see, yeah. So you're talking about this backpack, and I just want to give a callback to Job Simulator, where when you're exiting, you eat a burrito. So there are these in-game ways of engaging with some of these different options. And so this backpack has all these different knobs and settings. So maybe you could talk a bit about what are the different toggles and settings that you can do with this backpack to control some of the low vision or blindness settings for Cosmonious High.

[00:30:33.948] Jazmin Cano: So it's just one toggle. That toggle, when it's turned on, it activates all the features that we built for the Vision Accessibility Update.

[00:30:43.372] Kent Bye: Is there any ways that they can modify it or tweak it?

[00:30:47.406] Peter Galbraith: Not on most platforms. On PlayStation, there are a few things that they can modify, like the voice that is chosen, as well as the reading speed of that voice. But that is managed actually through, again, the system settings rather than the internals of the game there. But we turned these on wholesale to make it as simple as possible for the player to get in and have access to all of these features. In the future, I would like to see us add more customization to these things so that players can access more fine-grained control to customize the experience to their own needs.

[00:31:35.100] Kent Bye: Yeah, and I think you mentioned during the presentation that you've had 1.5 million or so text-to-speech requests across each of these different object interactions in these virtual worlds, so you've had a lot of folks using this in the first 30 days. Love to hear about some of the first reactions and engagement that you've been able to get out of some of these features.

[00:31:54.763] Jazmin Cano: Hearing that number was so exciting. So that number was told to us by the Meta team, and I was shocked and amazed and so glad to hear it. Also, I'm excited to say we haven't gotten any negative emails or any sort of messaging from people saying they had issues with it. So I would like to think that by not hearing anything bad, I guess, that means people are having a nice time with it. It's working out for them.

[00:32:22.448] Peter Galbraith: No news is good news.

[00:32:25.530] Kent Bye: So I'd love to hear some of the other major points that you were trying to make here during your presentation that you were giving this morning.

[00:32:31.353] Peter Galbraith: Yeah. One of the other things that we did early on was create vision simulation tools that allowed our developers who are sighted to test the game as though they had blurred vision, or color blindness, or complete vision loss. And this allowed us to do quick iteration and knock out obvious problems so that we could better respect our playtesters' time and get the most valuable feedback from them. Because obviously these simulation tools are no substitute for actually talking to players with vision impairments, but this was a great way to respect players' time and speed up our iteration so that we could get these updates out to players faster. So that was one of the process benefits that we had along the way.
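
A hedged sketch of what such a developer-facing vision simulation toggle could look like in Unity's built-in render pipeline: blit the camera image through a project-supplied blur or color-filter material, or black it out entirely. The keyboard shortcuts and the materials are assumptions; this is not Owlchemy's actual tooling.

```csharp
using UnityEngine;

// Illustrative developer-only tool: approximate blurred vision, color blindness,
// or total vision loss so sighted devs can catch obvious issues before playtests.
[RequireComponent(typeof(Camera))]
public class VisionSimulation : MonoBehaviour
{
    public enum Mode { Off, Blur, ColorBlind, Blackout }
    public Mode mode = Mode.Off;

    // Materials using whatever blur / color-filter shaders the project supplies (assumed).
    public Material blurMaterial;
    public Material colorBlindMaterial;

    void Update()
    {
        // Quick keyboard toggles for use in the editor.
        if (Input.GetKeyDown(KeyCode.F1)) mode = Mode.Off;
        if (Input.GetKeyDown(KeyCode.F2)) mode = Mode.Blur;
        if (Input.GetKeyDown(KeyCode.F3)) mode = Mode.ColorBlind;
        if (Input.GetKeyDown(KeyCode.F4)) mode = Mode.Blackout;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        switch (mode)
        {
            case Mode.Blur:       Graphics.Blit(src, dst, blurMaterial);       break;
            case Mode.ColorBlind: Graphics.Blit(src, dst, colorBlindMaterial); break;
            case Mode.Blackout:   Graphics.Blit(Texture2D.blackTexture, dst);  break;
            default:              Graphics.Blit(src, dst);                     break;
        }
    }
}
```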

[00:33:26.384] Kent Bye: Yeah, just a quick note on that: I was talking to Joel from Booz Allen Hamilton, and what he was saying is that ideally, eventually, a lot of these features you would want to have either at the operating system level, like, say, with the Apple Vision Pro, or built into game engines like Unity or Unreal Engine, where it'd be just a matter of filling it out and implementing it so you can have these things featured. But it sounds like here at Owlchemy, you have to not only build it and implement it, but also create the dev tools to be able to do that. But do you have any ideas as to whether or not some of these different accessibility features might be open sourced or contributed back in some way, or any thoughts on how some of these different things could be shared or implemented more at a core level within a game engine like Unity or Unreal Engine in the future, or just the design patterns of that to be implemented more broadly across the XR industry?

[00:34:13.843] Peter Galbraith: Yeah, so we are absolutely happy to share our design patterns. That's why we're out here giving the talks and trying to encourage people to take on these design challenges and learn from what we have done and iterate and make it better. We know that there's still room to go. We are early in XR still, in my opinion, and even earlier in accessibility for that. So we are definitely here to encourage people to copy our designs and learn from what we've done on that side. Due to the nature of our work, I don't think there's much that we can share in the way of code specifically. It's really outside the scope of what we as a company can do and maintain long term. So we'd love to see the community learn from what we've done and incorporate it into tools for open source. We'd love to see platforms integrate more of these tools and features at their level, whether that's Unity, Unreal, whether that's through SDKs for the different headsets. We'd love to just see more of that done on their side and to see the entire community benefit from the research and development work that we've done. A rising tide raises all ships, and if we succeed, we hope the community will succeed, and if the community succeeds, that'll help us succeed. It's good for everyone.

[00:35:49.238] Jazmin Cano: I want to jump in and share that at the conference, we've had people come up to us and tell us that they've stolen, copied, and done what we've done, just copied ideas. But I don't see it that way. It's like, yes, that's so great that you're doing what we're doing, because we want to see VR for everyone. And so if what we've done has helped other teams build faster in an accessible way, that is so great. And I'm really happy to hear it.

[00:36:15.175] Kent Bye: Great. Yeah, I was wondering if there are other things you wanted to share from your talk. The slides, I have to say, were packed with so much great information, and, yeah, just a lot of really detailed stuff. Like you said, you're at this XR Access Symposium, coming back and sharing a lot of the different insights and learnings that you've had, and, you know, all the research, and this really great overview of all the different stuff and your journey of doing this. So, yeah, I'd love to give you the opportunity to share any other things that you were talking about this morning.

[00:36:40.386] Jazmin Cano: Yeah, I want to share that, again, it's really important to talk to people from the communities that we're designing for. I wouldn't even say designing for, but designing with. So when it comes to accessibility, if someone has a disability that is like their target audience's, their own experience may be different. So if someone has a disability, it may not be the same as someone else's experience with the same disability. Does that make sense? Like, everyone has their own perspective, experiences, maybe multiple disabilities at once. Hearing from people as they're using these products and then sharing their ideas is so valuable. And that's why it's really important to keep talking to people. We started in September with these playtests and we just kept going through the end of the year, because we wanted to keep hearing from people about what they thought, and also hear what they thought about accessibility when it comes to low vision in general for XR.

[00:37:40.593] Peter Galbraith: And a lot of our design centers around player agency. So, you know, we had made some choices initially of, oh, just scan your hand around the room. Don't worry about it. And every item that you pass your hand over is going to read aloud to you all of a sudden. And that ended up being a cacophonous mess, loud, hard to understand for players.

[00:38:08.242] Jazmin Cano: I'm just remembering our playtest and Peter and I would look at each other and just like make these faces like we're like, ah, the sound, it's too much.

[00:38:18.278] Peter Galbraith: Yeah, there were some moments where we realized, as the players started to act in the way that players do, erratically and unpredictably, all of a sudden it was a lot to deal with. And so we learned early on in playtests that it was important to allow players to get these text-to-speech things only when they wanted it. So we implemented the use of that assist button I mentioned, so that when they press it, they will get information about what their hand is over, or they can hold it down and scan around, and it'll automatically cancel old text with new spoken text. Understanding that player agency is key. It goes back to what we were saying on ducking the audio levels. It's important to understand why a player wants information. In some cases, we needed more than just visual information. We also needed to provide them with mechanical information about how an object works, especially in this environment where we're in an alien school and they may have an object that doesn't have a great real-life analog. So it's important to be able to dynamically describe how to interact with something, or the state that something is in, so that players know what's most valuable. And to trim the fat out of it as well: you know, initially we had some jokey descriptions attached to the informative things, but we found that, though they were fun descriptions, players really wanted the information that was most pertinent to them at any given time. So we ended up removing the jokey parts of the descriptions in favor of the most informative parts and getting that to them as quickly and succinctly as possible.

[00:40:26.440] Kent Bye: Yeah, one of the things that Christian Vogler from Gallaudet University was saying was that he was wanting to see maybe a dial to control how much information he wanted to have, like whether you have just a minimalist amount or a medium amount, or whatever amount of information overload people are able to handle. And sometimes I've heard some folks with screen readers have the text go at an incredible rate, like 300 to 400 words per minute, something that's completely indecipherable for me, but for them, it's totally fine. And so I imagine that there may be a range of different amounts of information. And having a way to dial in and control that information overload, or to really tune it in to what level people want, I'm not sure if that's something that resonated with you or something you came across as well, in terms of having different tiers of information bandwidth and allowing people to tune it, and if they want to get into the fire hose of that information overload, providing options for them to be able to do that.

[00:41:21.236] Jazmin Cano: And the way we let players approach that is by being able to press down on the Assist button and hold it; then, when they scan around the room, they can hear the names of what they're scanning until they let go.

[00:41:34.787] Peter Galbraith: And that's slightly different from what they will get if they are aiming at an object. If they are scanning around with their hand, they are only going to get the names of the objects. However, if they find an object that they want to get more information on that way, and then trigger the assist button while still aiming at that object, they will get the name and the full description. So we do allow some amount of innate customization of how much information you want to hear. Additionally, if you're still listening to those descriptions and you hit the button again, you can cancel that description and be, I'm done with this, and move on. But, you know, obviously customization is king in the accessibility world, so in the future I'd definitely love to see more options for that. But again, writing a bunch of extra jokes that are unnecessary for the thousands of items in the game is a bit of a large task, and writing thousands of jokes and having them all be funny is... We've got an amazing writing and content team, though, and they really knocked it out of the park with writing the object descriptions. And the jokes that they did write, you can still see most of them in the game by taking a picture of an object and reading the back of that picture. So it didn't go to waste.
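
Pulling those behaviors together, here is a hedged sketch of one way the assist-button logic could be structured: a quick tap cancels any speech still playing and speaks the name plus full description of whatever the hand ray is on, while holding the button sweeps names only as the ray crosses objects. It assumes the DescribableItem metadata component from the earlier sketch, maps the assist button to the controller's secondary button, and uses placeholder Speak and CancelSpeech methods standing in for a real text-to-speech plugin; none of this is Owlchemy's actual code.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch of tap-versus-hold assist behaviour for a hand ray.
public class AssistButton : MonoBehaviour
{
    public Transform handRayOrigin;     // palm or pointer transform
    public float holdThreshold = 0.3f;  // seconds before a press counts as a hold (assumed)

    bool wasPressed;
    float pressStart;
    DescribableItem lastSpoken;

    void Update()
    {
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        hand.TryGetFeatureValue(CommonUsages.secondaryButton, out bool pressed);

        DescribableItem target = null;
        if (Physics.Raycast(handRayOrigin.position, handRayOrigin.forward, out RaycastHit hit, 10f))
            target = hit.collider.GetComponentInParent<DescribableItem>();

        if (pressed && !wasPressed) pressStart = Time.time;

        bool holding = pressed && Time.time - pressStart > holdThreshold;
        if (holding && target != null && target != lastSpoken)
        {
            // Sweeping: names only, and each new request cancels the old one.
            CancelSpeech();
            Speak(target.itemName);
            lastSpoken = target;
        }

        if (!pressed && wasPressed && Time.time - pressStart <= holdThreshold)
        {
            // Quick tap: cancel anything playing; if aiming at an object, give
            // its name plus the full description.
            CancelSpeech();
            if (target != null)
                Speak($"{target.itemName}. {target.shortDescription}");
        }

        if (!pressed) lastSpoken = null;
        wasPressed = pressed;
    }

    // Stand-ins for the real text-to-speech plugin.
    void Speak(string text) { Debug.Log($"[TTS] {text}"); }
    void CancelSpeech()     { /* stop the currently playing TTS clip here */ }
}
```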

[00:43:02.669] Kent Bye: Yeah, I wanted to just comment on the discussion today at the XR Access Symposium, just how some of these different types of features that you're building in virtual reality for accessibility could potentially be translated into physical augmented reality accessibility features. And I'm not sure if that's something that you looked at in terms of taking design inspiration for making your virtual world more accessible and how that could be core foundational aspects for how that may be applied to the future of assistive technologies when it comes to augmented reality and the physical reality. So I'd love to hear any reflections you have about that.

[00:43:36.358] Peter Galbraith: You know, we definitely take inspiration from the physical world with a lot of our designs. This is spatial computing, it is a lot of physical environment design that we have to do, but we now have to do a lot of it without the benefit of the force feedback of the real world. But I could definitely see some of these design patterns being used in maybe an AR thing in the future or something from wherever. You know, you think about there are tools like Google Lens and whatever that allow you to take a picture of an object and it will identify that object and give you information. So tools like that already exist. But maybe in something with an AR or MR goggles, you could see that sort of technology merged with the design patterns that we've got. So as you're walking around the world, it scans and sees a sign and can tell you, oh, there's a sign over here. So there's definitely potential for some of these design practices to transition to an AR environment or a more real world environment and not just a virtual world environment. We just benefit from it being a virtual world where we know what discrete objects are already. We don't have to do any computer vision guesswork. The game already knows, hey, this is an apple, this is a banana. You know, no trying to figure out if it's a chihuahua or a blueberry muffin.

[00:45:21.042] Kent Bye: Great. And as we start to wrap up, I'm curious what is next for Owlchemy and assistive technologies. You're here at the XR Access Symposium, and I'm not sure if you're getting any design inspirations or any encouragement to go back and keep pushing forward the bleeding edge of what's possible. But yeah, I'd love to hear what's next on the plate when it comes to these different types of accessibility and assistive technologies.

[00:45:42.549] Jazmin Cano: Yeah, these sorts of events are always very inspiring, especially talking to other people doing this in their work, or people who are interested in doing this in their work. And we're always looking at ways to bring accessibility forward with us to our next projects.

[00:45:56.796] Peter Galbraith: Yeah, we're always learning from the people around us. So much wonderful work has been done in the accessibility space by so many people before us. We recognize that we're standing on the shoulders of giants in a lot of ways. So we're out here trying to share our knowledge of what we've got and also learn as much as we can to take that back with us so that we can continue to bring forward the work that we have done, improve on it, and maybe do new things as well.

[00:46:30.077] Kent Bye: Great. And finally, what do you each think is the ultimate potential of virtual reality, augmented reality, spatial computing with accessibility and assistive technologies in mind, and what it might be able to enable?

[00:46:45.083] Jazmin Cano: I'm thinking about the future where anyone can enjoy XR, whether it's VR or AR, regardless of disability. Just want everyone to have fun and have it be useful for them, even if it's productive for work as well. It should be available for everybody.

[00:47:03.434] Peter Galbraith: Yeah, I really believe that the ultimate goal is to remove as many barriers as possible from people who may have been dealt a less than great hand from life. And if we can provide a better experience than reality in some ways, that's really the goal is to do better than reality in every area that we can.

[00:47:35.894] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:47:41.058] Jazmin Cano: Yeah, okay. There's this quote, nothing about us without us. Anyone who's new to this work should Google that. It is basically a saying that's heard often within disability communities. So when there's accessibility work being done, always include people from those communities.

[00:48:00.087] Peter Galbraith: And VR for everyone means everyone, regardless of ability or who they are or where they come from. VR for everyone.

[00:48:14.133] Kent Bye: All right. Well, I think that says it all. And I'd highly recommend folks go check out the talk that you gave today here at the XR Access Symposium. We covered a lot of the stuff here in more of a conversational context, but there are lots of additional bullet points and references. And yeah, for folks, if they really want to see what the cutting edge of accessibility is, definitely go check out Cosmonious High and your previous experiences as well with Vacation Simulator and Job Simulator. There are just a lot of different ways that you're pushing the edge of accessibility in VR, and there were a lot of praises from the rest of the community today of folks really appreciating how you're continuing to push the edge of this. And I think it takes startup independent developers like yourselves at Owlchemy Labs to be able to do that. And I'm hoping that a lot of your pioneering work can end up in either other games or at the platform level at some point. So again, thanks again for joining me here on the podcast to help break it all down. And congratulations on all the amazing work that you've been doing. So thank you.

[00:49:07.494] Jazmin Cano: Thank you. Thank you, Kent.

[00:49:09.737] Kent Bye: So that was Jazmin Cano. She's an accessibility product manager and radical kindness practitioner, looking at design, research, and user testing, and also producing. As well as Peter Galbraith. He's a senior accessibility engineer, as well as the Cyberpunk King, who's doing lots of different programming and design work on Cosmonious High. So I've had a number of different takeaways about this interview. First of all, I just want to commend Owlchemy Labs for taking the initiative to start to figure out what some of these different features might look like for folks who are low vision and potentially blind. You know, I think it'd be challenging to play through this game if you were completely blind, but it might be possible. I don't know. I kind of gave up at a certain point, but yeah, I don't know. I think there might need to be some additional context for the different objects that you're interacting with at different points. And there is an incredible amount of objects in this world. And as you start to put your hand out and push the B or the Y button, you get information as to what's happening in the world around you. And if you pick them up, then you get even more information. And also when you're teleporting around, you can get a little bit more context as to where you're going, helping to orient you. I did have the audio accessibility features turned on, and I did play with my glasses off. It actually is a very interesting exercise. There's a lot of stuff that I can't see. I'd love to have a magnifying glass. Actually, I wanted to shout out the SeeingVR toolkit that was done by Yuhang Zhao, Ed Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek, and Andy Wilson back in 2019 and shown at the XR Access Symposium. Shiri Azenkot mentioned it within the conversation that I had with her. They have a number of different tools like a magnification lens, bifocal lens, brightness lens, contrast lens, edge enhancement, peripheral mapping, text augmentation, a text-to-speech tool, depth measurement, and also an object description tool, highlight tool, guideline tool, and recoloring tool. So yeah, lots of different types of tools. These are kind of replicating some of the different low vision 2D assistive technology tools, like on a phone, and bringing some of those different functionalities into the context of these different Unity applications. It sounds like they might have been able to inject some of these different things into different applications so that they were able to test it out a little bit. But yeah, I think just having additional tools like that is going to augment something like the screen reader type of functionality. As I was playing through it, like I said, it wasn't robust enough for people who are fully blind, but I think it's a really good start. I did notice that there sometimes was narration happening in the background while I was getting audio descriptions based upon my interactions, and sometimes I wasn't able to always stop it, or I'd miss some of the different dialogue. And if I had my glasses on, I could read the different subtitles that were there, but this whole feature is for folks who are low vision or blind, and so that wouldn't necessarily be an option.
Sometimes I wasn't able to replay some of the different story beats that would play, and it'd be difficult to have them be repeated if I had been teleporting or looking at an object or something like that. So this type of audio conflict issue, I think, is something that would need to be figured out, along with how to repeat different things. Or yeah, if this is a feature that eventually is going to potentially happen within the context of physical reality, obviously there's no rewind button in physical reality, but, you know, to what degree, as you're scanning a room or an object, at what point does that screen reader information take precedence over the other information or audio that might be coming through? So this is something that I think folks who are blind have already needed to navigate. And I didn't see any options to speed up the text. I know that folks who are familiar with screen readers sometimes go up to something that is completely indecipherable for me, like 300 or 400 words per minute. So it seemed like the voice that they're using was all at a consistent speed, and there were very few options to tweak it. So I think that as they move forward, it'd be great to have different customization options and more features. And also, just to make something like this work, you have to have an incredible amount of metadata and object descriptions. So they had to go through and write all this stuff up to describe all these different objects. It's just like adding alt text to images, but for objects in these different immersive environments: thinking about adding all those different layers of metadata into these immersive worlds and how to do all these different descriptions. So that's certainly a lot of work to be able to do all that. But like I said, this is probably one of those presentations that is definitely worth going to take a look at, because there is quite a lot of information that they're showing. And also you can just buy the game and start to play through and try it out for yourself and see how it works for you as you go through. Like I said, I don't think I would have been able to play all the way through without having any visual feedback, but yeah, it'll be curious to see if someone who is blind or low vision is able to play through the entire game with some of these different options. Also wanted to shout out this social model of disability that Jazmin was mentioning, as opposed to the medical model of disability. The social model is about identifying the different systemic barriers and trying to implement more and more options and features, rather than thinking about disability as a medical diagnosis that's diminishing someone's capabilities or that needs to be corrected in some fashion. It's thinking about how there's a variety of different ways that people move, that people sense the world, that people have different feelings and emotions, and also how they're communicating and how they're thinking. So all these variety of ways that people are thinking, communicating, feeling, sensing, and moving and acting, and just thinking about all the different variety of options that might be available.
There's this height adjustment as well as captions, but yeah, I think having more and more options is basically one of the themes that I got out of this social model of disability, which goes back to 1976, when the Union of the Physically Impaired Against Segregation came out with the Fundamental Principles of Disability, and then Mike Oliver in 1983 coined the term of the social model of disability. And yeah, 1.5 million text-to-speech requests were made within the first month of this feature, and people seem to be playing around with it and enjoying it. Also wanted to shout out this phrase of nothing about us without us, which was first invoked by the South African disability rights movement in the 1990s. And yeah, instead of designing for, you're designing with. And so, just like Jazmin was saying, they're in collaboration with these different consultants, like Blind Gamer Steve and other low-vision folks who are doing the playtesting with them, to be able to give feedback as they're developing this. And so yeah, just being in close collaboration with these different communities and starting to see how they can start to implement some of these different features, this kind of superpower where you hold out your hand and get information about the objects that are in the world. So yeah, this is something that I could see, as things move forward, being refined to make it useful for folks as they're out and about in physical reality. There are already all sorts of different tools being made that use your phone to identify these different objects. And so it's kind of taking that same idea but building it into these virtual worlds, where you don't need to do the computer vision because you actually already know what all those objects are within the virtual environments, and so you have a little bit more boundedness to push the limits of what kind of user interface design you might be able to do. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
