This is the first of fifteen episodes in a Voices of VR podcast series on XR Accessibility, based upon my coverage from the XR Access Symposium 2023 that happened in New York City on June 15th and 16th. I'm kicking off my 8 hours of coverage with an interview with XR Access co-founder Shiri Azenkot, an associate professor at Cornell Tech researching accessibility. She focuses on making augmented and virtual reality technologies accessible as well as on leveraging XR to solve accessibility problems. XR Access was started in 2019 as an opportunity to bring the community together and highlight pioneering accessibility research such as SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision.
Here is an overview of the 15 episodes in my XR Accessibility series:
- Shiri Azenkot on founding XR Access
- Christine Hemphill on defining disability through difference
- Reginé Gilbert on her book about Accessibility & XR Heuristics
- Christian Vogler on captions in VR & potential of haptics
- Six interviews from the XR Access Symposium poster session
- Dylan Fox on the journey towards XR Accessibility
- Liz Hyman on the public policy POV on XR Accessibility
- Mark Steelman on accessible XR for career exploration
- W3C’s Michael Cooper on customizable captions in XR
- Joel Ward on challenges with government contracting for accessibility and live captioning with XREAL glasses
- Jazmin Cano & Peter Galbraith on Owlchemy Labs’ pioneering low-vision features for Cosmonious High
- Liv Erickson on intersection between AI & Spatial Computing for Accessibility
- Ohan Oda on upcoming accessibility AR features in Google Maps
- Yvonne Felix on using AR HMDs as an assistive technology for blind and low-vision users
- Sean Dougherty & Jeffrey Colon on the challenges and opportunities in making XR accessible for blind & low-vision users
I'm also including rough transcripts for all episodes in this series as well as for my entire backlog of more than 1200 Voices of VR podcast interviews. I'm also in the process of adding categories to my episodes, which you can explore on this overview page showing the different categories. There's still more work to be done in order to make my website fully accessible, so feel free to reach out to accessibility@drawtheskies.com if you have any specific requests, feedback, questions, or comments.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So I'm going to be starting a 15-part series looking at XR accessibility. I had a chance to attend the XR Access Symposium that was happening in New York City at Cornell Tech on June 15th and 16th, 2023, and so I had a chance to do a lot of brief interviews with folks who are either co-organizers, researchers, or folks from industry thinking about this issue of how to make virtual and augmented reality technologies a lot more accessible than they are right now. So I'm going to start with an interview that I did with Shiri Azenkot. She's one of the co-founders of XR Access as well as an associate professor at Cornell Tech who's researching accessibility. And just a note about my process for this coverage: it was a day-and-a-half conference at Cornell Tech, and there weren't a lot of breaks, so I had to try to grab people when I could and stay after the first day to do a bunch of interviews. I ended up doing around six hours of interviews and coverage from this conference, but I also did a couple of other interviews previously that I'll be digging out of my backlog archives and putting into this series as well. Also, I've been getting transcripts available for my entire backlog of Voices of VR, and this is the first stage of launching that, so there's going to be lots of errors and lots of cleanup that still needs to be done. But in the spirit of covering different aspects of accessibility, I wanted to start launching some of the accessibility features of my own podcast with transcripts. I'm sure there's a lot more that needs to be improved on my website to come up to full compliance and make it fully accessible, and it's an ongoing process, but I did want to at least start with this series so folks can get a sense of what's happening in the research realm. There's still a lot of work that needs to happen at companies doing innovation and user research, and just talking to folks who have different disabilities or accessibility needs, whether blind or low vision, deaf or hard of hearing, mobility limitations, or cognitive impairments, and coming up with a lot of different options so that as folks go into these different immersive experiences, there are different ways to make the technology a lot more accessible. Right now a lot of that work is still happening at the research phase, and it's starting to percolate out into some pioneering work, like Owlchemy Labs' low vision features for Cosmonious High. We'll also be talking about some of the different themes that came up over the course of this conference. So we'll be diving into lots of different details as I go into each of these interviews, but I want to start off with the co-founder of XR Access, Shiri Azenkot, to get a sense of how this all came about and some of her backstory and journey into this space. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Shiri happened on Thursday, June 15th, 2023 at the XR Access Symposium in New York City, New York. So with that, let's go ahead and dive right in.
[00:03:06.067] Shiri Azenkot: I'm Shiri Azenkot. I'm an associate professor at Cornell Tech, and my research is in accessibility. And a big part of that is trying to make augmented and virtual reality technologies accessible and trying to leverage them to solve accessibility problems.
[00:03:22.026] Kent Bye: Maybe you could give a bit more context as to your background and your journey into this space.
[00:03:26.292] Shiri Azenkot: Sure, my background. So let's see, I grew up in the Bay Area, so technology was all around me. And at the same time, I have a visual impairment. So I understood the impact that technology has on our lives. But at the same time, technology was always difficult to use right out of the box. There were assistive technologies that I had growing up that people kept shoving in my face. And those technologies, I mean, I had a love-hate relationship with them, right? On the one hand, some of them were very helpful. But on the other hand, they were bulky and very expensive and needed, like, specialized trainers to teach you how to use them. And that's really not what you want when you're 15 years old and going to high school. So of course what I always wanted was for just mainstream technologies to be accessible and usable and to be able to also use those technologies to solve my accessibility problems. So fast forward, I got all my degrees in computer science. I got a PhD from the University of Washington. Yeah, and at the time, smartphones were becoming ubiquitous. So, you know, it started out when only like the tech nerds had them. And then slowly, more and more people started to use them. And then once the iPhone came out, then, you know, you probably remember, like, all of a sudden, everyone had an iPhone. And they were holding them and using them throughout the day. So on the one hand, that was a major step forward for me in terms of my independence. Because for the first time, I could take out my phone and use Google Maps or Apple Maps or whatever. And I'd know where I am. I could take the bus and not have to feel constant anxiety over not knowing when I'd reach my stop. Because I now could just pull out my phone and see. But at the same time, it was also difficult to actually use the phone, to use the interface. And I had to take out a little magnifying glass and hold it up against the screen, and that was far from ideal. And eventually, Apple caught up, and they released built-in accessibility on their phones. But it was always a retroactive thing. So that's a pattern that we've seen with technology a lot, many times, every time there's a new technology. And in 2015, 16, 17, et cetera, I could see that XR technologies were now the next major computing platform. And even though at the time, and maybe still to some degree, people are using them for gaming, and it's a little bit of a niche technology. Just like any other platform, just like smartphones, just like the web, they're going to become mainstream. We've seen this pattern before. We see the trends. But this time, we can do it differently. So I wanted to make sure that we are thinking about accessibility now, before they have taken over every aspect of our lives. And we're bringing the right minds together to think about the very difficult accessibility problems proactively. And we are connecting with the people who are working in industry who can actually take that and put it into practice. And also, of course, involving people who could benefit from these technologies. So the advocates, the end users, people with disabilities. So that's how XR Access was born.
[00:06:30.036] Kent Bye: Right. And were you a part of the founding of XR Access? And maybe you could talk about your role there now.
[00:06:34.299] Shiri Azenkot: Oh, yes. I'm the director. And I founded it. So I guess I technically co-founded XR Access with Larry Goldberg, who at the time was working at Verizon Media, then Yahoo. And now he's retired, so that's why he's no longer an active member of the team.
[00:06:51.649] Kent Bye: OK. And so we're here at the XR Access Symposium. I know that over the pandemic there were a number of different gatherings, and you actually had some other meetings before the pandemic. Is that correct?
[00:06:59.935] Shiri Azenkot: The first in-person gathering was in 2019, pre-pandemic. This building didn't even exist. We were in the building next door. We had 120 people come to join us. And that's where we launched the XR Access Initiative.
[00:07:12.691] Kent Bye: Gotcha. So what are some of the highlights thematically of this year's XR Access Symposium?
[00:07:18.935] Shiri Azenkot: So this year we highlighted some of the latest research. So the talks that we've had so far had some of that. There was the research project by two of my PhD students that was exploring how sighted guides can be incorporated into virtual reality to support accessibility for blind and low vision people. So that's cutting-edge, late-breaking research. We also saw an actual application from Owlchemy Labs, which has made its game, Cosmonious High, accessible to blind and low vision people. So that was really cool to see something that's actually in practice. Back in 2019, we didn't have anything of the sort. So that's awesome that they were able to come, speak about that, and demonstrate it. And then we have a set of posters that are going on now. So again, we have a bunch of students, and I spoke to an independent artist, and everyone is showcasing some of their latest work in all kinds of topics relating to accessibility and VR and AR.
[00:08:17.510] Kent Bye: Yeah, and a lot of people are excited about the Apple Vision Pro, especially in terms of accessibility, because I know that they've had quite robust accessibility features in iOS. But we had Christian Vogler from Gallaudet University cautioning us, saying that we shouldn't necessarily blaze forward using all these 2D patterns in the 3D spatial interface, because they may not actually work in a spatial context. I don't know if you've been in contact with Apple, or with people from the accessibility community who have been, because it's been secret for a long time. And before it comes out, obviously, we don't want to rush and put something out that's going to set a standard that we're going to be stuck with in some way. So I'd love to hear any reflections about the Apple Vision Pro and any excitement or cautionary aspects that you have about that. Yeah.
[00:08:56.516] Shiri Azenkot: Well, both is the short answer, both excitement and being optimistically cautious about what they will offer. I mean, realistically, we're going to take small steps, and as long as we are taking those steps, then I'm happy. So it does seem like Apple is thinking about accessibility. Just like Christian, I would caution against imposing a two-dimensional framework onto this completely different paradigm that's very spatial, very 3D, right? So, for example, you'll hear people talking a lot about screen readers in VR and how Apple's going to use VoiceOver in AR. And I think that to some extent we can think about that, because in AR and VR, you know, there is some system control, like you are manipulating menus to some degree and kind of these more traditional UI elements. But at the same time, those aren't the main aspects of the experience. The main aspects of an XR experience really have to do with objects and people in the 3D space, right? So you need to think more about how we make the physical world accessible and how that can be incorporated into these experiences, rather than trying to take a two-dimensional accessibility framework and trying to fit it, squash it, into this new paradigm.
[00:10:18.273] Kent Bye: I'm curious to hear if there are any virtual reality experiences that you feel meet the minimum bar of accessibility features, where you can go in and experience them, or if it's still at the point where it's a bit of a black box, especially with a lot of Unity tools where things aren't necessarily built in, and if it's not quite there yet for you to really enjoy virtual reality much at all at this point.
[00:10:40.545] Shiri Azenkot: I think there's been some great research in the area. One project I will highlight, in addition to our projects, like the one about the guide, which I just mentioned, is one by my former PhD student, Yuhang Zhao, which is called SeeingVR. And she developed a set of tools to make the VR experience accessible to people with low vision. So specifically looking at low vision, people who can use some of their remaining vision. And she had some really cool ways to enhance the environment to make it more visible. And then of course, Cosmonious High, the game we were talking about, that is a very cool example of one way to make a VR game accessible to blind and low vision people. So we're only talking about visual impairments here, and that's the population I'm most familiar with. That's a population that a lot of times is easy to think about when you're thinking about accessibility, so we do need to continue to think more broadly.
[00:11:36.557] Kent Bye: So what do you see as the next big steps as we move forward as an industry for what needs to happen in terms of both innovative creators, the platform providers, and the end users as well?
[00:11:46.459] Shiri Azenkot: Yeah, I think, so let me say what we've done so far as an overall effort. There are some current guidelines, like XR Access has been involved in some, but there are also some platform-specific ones. For example, for what's now Meta Quest, which used to be Oculus, they have a set of accessibility guidelines. They're pretty general right now. They kind of talk about general rules of thumb. I think that we can do some work to make these guidelines more specific and to think more about the 3D interactions and the interactions among people in virtual reality and also in augmented reality. So I think that a lot of what we need to do is think about these different standards. We need to do what Owlchemy Labs did and actually just go for it and try to make games accessible. Because we learn so much just from trying things out, having target populations try them, and sharing them in venues like this. So I really commend them for what they did, and I think that's what we should all be doing.
[00:12:44.135] Kent Bye: Great. And finally, what do you think the ultimate potential of virtual reality, augmented reality with accessibility in mind might be and what it might be able to enable?
[00:12:54.324] Shiri Azenkot: Well, two things. So first, we have to make the mainstream technologies fully accessible. So I do think there is potential there for full inclusion. Absolutely. I think there are some very interesting challenges, but I believe that we as a community can tackle them and we can really design a very good experience for everyone. And then there's a very exciting opportunity to take these technologies, especially augmented reality, and to use them as accessibility tools. So to look at how we can use these new platforms to solve current unsolved problems that specifically people with disabilities experience. So we've been doing some of this work, like for example, we design systems for people with low vision to help them walk up and down stairs and to help them find targets in visual search tasks. So for example, finding a product in a grocery store. So I think that's just the tip of the iceberg. There's so many other potential applications out there. It's a very exciting opportunity.
[00:13:52.316] Kent Bye: It also seems like a possibility to potentially prototype some of these different systems in virtual reality and then maybe deploy them out in physical reality with augmented reality. I don't know if that's a pattern that you see as a possibility.
[00:14:03.231] Shiri Azenkot: Absolutely. There's a lot that we can learn from the physical world and apply to VR. As I was saying earlier, there's a lot that we can learn from VR, just like you suggested, making VR accessible, and using what we learned, the techniques we developed, and transforming them to AR to make the physical world more accessible. So yeah, it's this really fun synergy.
[00:14:23.565] Kent Bye: Awesome. Anything else left unsaid you'd like to say to the broader immersive community?
[00:14:27.417] Shiri Azenkot: Um, anything else I'd like to say? I think when it comes to accessibility, we should all have fun with it. You know, accessibility is not just about these monotonous text-to-speech, you know, automatically generated voices giving objective descriptions. I think there's a lot of room for creativity. And it's just a really fun space to explore and solve very difficult but also very fascinating problems. So I think that I would just encourage people to think about it in that way because that's really what brought me into this world and why I love my job so much.
[00:15:05.998] Kent Bye: Awesome. Well, I'm really glad to see that you started XR Access and are bringing this community together. I think it's really one of the biggest open problems in the XR industry. And I think it's just really encouraging to see so many people here from the industry and academia to be able to start thinking about how to start to solve these problems. So thanks for starting this community and for taking the time to talk to me today. So thank you.
[00:15:25.168] Shiri Azenkot: Yeah, thank you. Thanks for being here and for spreading the word.
[00:15:29.310] Kent Bye: So that was Shiri Azenkot. She's a co-founder of XR Access as well as an associate professor at Cornell Tech researching accessibility. So I've had a number of takeaways from this interview. First of all, one of the things to reiterate here is that it's not just about taking a lot of these 2D accessibility tools, which have usually lived within a 2D frame, applying them to XR, and thinking that's going to be sufficient. Shiri says that you also have to think about how to make the 3D physical world more accessible, rather than just retrofitting these different 2D frameworks into the spatial context of XR. And that's a theme that comes up again and again: thinking about the completely new paradigm shift that's going to be required to make these immersive technologies accessible, whether it's haptics or different aspects of spatial sound, and finding approaches that preserve the immersive quality of XR while also giving these experiences multiple modalities and choices for a wide range of different users. I just wanted to shout out this project that she mentioned, SeeingVR, which was done by Yuhang Zhao as well as a number of different folks from Microsoft Research, and it has a number of different tools that they were able to add to different immersive experiences. I think features for blind and low vision users are probably the least covered when it comes to immersive technologies, because VR is such a visual medium. There are different affordances for captions for folks who are deaf and hard of hearing, but actually making some of these immersive experiences work with screen reader tools requires special Unity implementations that haven't really launched, or middleware tools to be able to read that content. And a lot of times you'd still have to add the metadata for all the different objects. Yeah, it's going to be a long process to figure out what the equivalent of the DOM is, the text format that a screen reader is usually reading. With these spatial environments, there currently isn't a good way to make that metadata available within the context of Unity, or a screen reader that works either at the platform level or with Unreal, Unity, or even WebGL and WebXR. It's still kind of a black box. So there's still a lot of work that needs to be done. But this project called SeeingVR has different tools like magnification lenses, bifocal lenses, brightness lenses, contrast lenses, edge enhancement, peripheral mapping, text augmentation, text-to-speech tools, and depth measurement. So these are all different things that you could add on to immersive experiences. I think it might be a toolkit that they're able to add on to different immersive experiences; I'm not sure if it's something that they're injecting into the software or not. But either way, these are the types of tools that need to happen at the platform level, whether that's on something like the Meta Quest Pro or with what Apple is going to be doing with the Apple Vision Pro, bringing a lot of the tool set from iOS and the work they've done in the 2D realm into their take on spatial computing.
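To make that missing semantic layer a little more concrete, here is a minimal sketch in TypeScript of what object-level accessibility metadata and a screen-reader-style traversal of a scene could look like. The `AccessibleNode` interface, the `walk` and `announce` functions, and the sample scene are all hypothetical illustrations, not part of any existing Unity, Unreal, or WebXR API; the only real API used here is the browser's built-in Web Speech API for text-to-speech.

```typescript
// Hypothetical shape for attaching screen-reader-style metadata to scene objects.
// Nothing like this ships in Unity, Unreal, or the WebXR spec today; it only
// illustrates the kind of DOM-like semantic layer described above.
interface AccessibleNode {
  id: string;
  label: string;          // short name, analogous to an HTML element's accessible name
  description?: string;   // longer description, analogous to alt text
  position: { x: number; y: number; z: number };
  children?: AccessibleNode[];
}

// Flatten the scene graph much like a screen reader walks the DOM.
function walk(node: AccessibleNode, out: AccessibleNode[] = []): AccessibleNode[] {
  out.push(node);
  for (const child of node.children ?? []) walk(child, out);
  return out;
}

// Announce a node using the browser's real, built-in Web Speech API.
function announce(node: AccessibleNode): void {
  const text = node.description ? `${node.label}. ${node.description}` : node.label;
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Example: a tiny annotated scene and a "read all" pass over it.
const scene: AccessibleNode = {
  id: "classroom",
  label: "Classroom",
  position: { x: 0, y: 0, z: 0 },
  children: [
    {
      id: "desk-1",
      label: "Desk",
      description: "A desk near the window",
      position: { x: 1, y: 0, z: -2 },
    },
  ],
};

for (const node of walk(scene)) announce(node);
```

In practice, the hard part is exactly what the episode points to: someone has to author or automatically infer that metadata for every object in the scene, and the platforms would need a shared convention for exposing it to assistive technology.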
There are a lot of things that work within 2D and should be ported over, but there are also things that need to be completely reimagined as we move forward. And I think that's a lot of what Shiri and other folks like Reginé Gilbert, researchers like Andrea Stevenson Won, and others speaking here at XR Access are trying to think about: what are the different new paradigms and heuristics when it comes to VR and AR technologies? It's a long journey that has just begun, and we'll be diving into much more detail as I continue on with the series. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue doing this coverage. You can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.