#1226: Poster Session Interviews from XR Access Symposium 2023

I had a chance to do six brief interviews during the poster session of XR Access Symposium 2023 that I’ve compiled into this single podcast episode. This gives a great overview of some of the emerging topics in XR accessibility, including XR accessibility heuristics, representing disabilities through avatars, audio descriptions for 360 videos, using AR glasses for live captioning, accessibility implications of augmented reality art, and the assistive technology potential of digital twins.

Below I’ll highlight each of the posters with an image, along with the representative I was able to speak to and the co-authors. Be sure to check the image alt text for more information on what each poster says.

I spoke to Spandita Sarmah about the poster titled “Formulating Inclusive and Accessible Heuristics: A Developmental Approach.” Here is a Google Doc copy of the poster.

I spoke to Ria Gualano about the poster titled “Expanding Inclusive Avatar Design: Understanding Invisible Disability Representation and Disclosure on Social VR Platforms.”

I spoke to Lucy Jiang about the poster titled “Beyond Audio Description: Exploring 360° Video Accessibility with Blind and Low-Vision Users Through Collaborative Creation.”

I spoke to Su Chen from LLVision about the poster titled “AR Subtitle Glasses: Breaking Down Communication Barriers.”

I spoke to $NP Designs’ Denise Coke about the accessibility implications of her augmented reality art.

And I spoke to WINNIIO’s Nicolas Waern about the assistive technology potential of digital twins.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So this is the fifth episode of 15 of my XR accessibility series, and today's episode is actually a compilation of different poster sessions that were happening at XR Access. Generally, at different conferences, you have the different papers that are more mature research that's being presented, and then the poster session is more emerging research, so the very early phases of just getting started. These poster sessions give a nice cross-section of what was happening in the context of the cutting edge and the bleeding edge of what's emerging when it comes to accessibility with XR. I had a number of different brief conversations with these six different folks, Spandita, Ria, Lucy, Denise, Su, and Nicolas, and it'll give a nice overview. So I'm going to go from one after another and at the end I'll have some final thoughts as well. So that's what we're covering on today's episode of the Voices of VR podcast. So these interviews with Spandita, Ria, Lucy, Denise, Su, and Nicolas happened on Thursday, June 15th, 2023 at the XR Access Symposium in New York City, New York. So with that, let's go ahead and dive right in.

[00:01:26.462] Spandita Sarmah: I'm Spandita. I'm a UX researcher. I'm a human-centered UX researcher, and the focus of my work is accessibility. The focus of my work has been art and accessibility and XR and accessibility. So this poster that I'm presenting, I work with Professor Regine Gilbert, and this is part of the Ability Project at NYU. We are trying to create a set of heuristics for AR and VR separately for designers and developers to use so that they can incorporate accessibility while building something, like an immersive experience. And there are a lot of guidelines available right now, but they're not being used correctly, for various reasons. So we're trying to bridge the gap. And for that, we're doing usability testing of products and experiences with people with disabilities. And on the other side, we're also interviewing designers and developers to sort of understand what are the gaps in their development and how to incorporate accessibility. So we are identifying pain points from the usability testing and sort of going backwards and trying to answer the question, what could have been done here in the development process or the design process that would have eliminated this pain point? So this is ongoing research, and we would love to collaborate, Henny.

[00:02:43.049] Kent Bye: Yeah, I'd love if you could maybe walk through some of these observations that you made here. You have a list of bubbles of different sizes. So yeah, I'd love to hear some of the different observations you had from this paper called Formulating Inclusive and Accessible Heuristics, a Developmental Approach.

[00:02:56.087] Spandita Sarmah: Absolutely. So the first set of insights that we had was interviewing developers and designers. And we were trying to identify what are the gaps? Why are the products inaccessible? Do they even know about accessibility? What does their design process look like? So the problems that we identified were limited time and support for developers. Lack of accessibility awareness. This is a very big one. And it's hard to implement accessibility. Everybody says that. And we've also had people say that hiring participants with disabilities for usability testing is expensive. So they don't do it or they do it in a very limited capacity. And for the observations, this is from the user's point of view: what problems do people face while trying out an immersive experience? There are no accessibility features, little to no accessibility features, and lack of visual and audio feedback. The tutorials that are there are not easily understandable, or they're not even there. Unclear navigation instructions, so if you're moving through a space, maybe the navigation instructions are not clear or not present. Lack of subtitles. This is a very big one. And I've noticed it in a lot of experiences. And I've had participants say this. There are no subtitles or captions. And even if it is there, it's not language friendly. So, I mean, if somebody is not proficient in English, they would not understand. So, difficulty with controls. This is complicated hand movement. Somebody with a physical disability cannot use controllers. Sometimes you have to grab something from far away in some games, and the person cannot move their hand beyond a certain distance. So that's a challenge. So there are a lot of obstacles and there are a lot of insights from the development and design process and we just have to keep doing it, keep getting more and more people, keep working with people actually instead of for people because there's no room for bias anymore.

[00:04:50.190] Kent Bye: So yeah, it sounds like you've done an initial audit of literature review and some usability testing and trying to define the problem. And I guess, what's next with a project like this? You're formulating these heuristics, and then what's the next step after you've identified the landscape of all the different problems and observations?

[00:05:06.856] Spandita Sarmah: Yeah, so we have an initial set of heuristics. We got that by consolidating all the already existing heuristics. There are a lot of already existing guidelines for AR, VR, MR, and we consolidated all of that, and we're in the process of refining the set of heuristics. So before that, we did a card sort, a closed and an open card sort, to sort of categorize the heuristics, to see what categories (audio, visual, interaction) the heuristics fall under. And we did usability testing on the side to sort of use those heuristics in the design process. And right now, we're in the phase of doing developer interviews and co-design on the side. So we're trying to work with people with disabilities, and try to make something using the heuristics, and trying to see how accessible and how usable the heuristics are, and what's needed. And on the other side, we're just trying to understand more and more how designers and developers actually work, and what's their process.

[00:06:04.989] Kent Bye: Great. And finally, what do you think the ultimate potential of XR and accessibility might be, and what it might be able to enable?

[00:06:13.462] Spandita Sarmah: I think, as my professor always says, the future is accessible. And this is very, very important. People are getting excluded from entire experiences. Be it art exhibitions. Be it any visual game. Communities are getting excluded from these experiences. And they have to be included. It's not an option. It's like a responsibility. And I think research, more and more research, and of course, participatory design, co-design, including people, working with people and not for people. There's, again, there's no room for bias. And I think if we start working with people, testing with people, understanding what they go through while they experience like a game or something, I think that would be the future of including everybody.

[00:07:04.374] Kent Bye: Awesome. Well, thank you so much.

[00:07:07.114] Spandita Sarmah: Thank you. Thank you so much.

[00:07:09.746] Kent Bye: So that was Spandita Sarmah, who was working with Regine Gilbert. And that poster was called Formulating Inclusive and Accessible Heuristics: A Developmental Approach. And I'll be diving into more detail at the end, but I wanted to move on to the next conversation with Ria Gualano on Expanding Inclusive Avatar Design: Understanding Invisible Disability Representation and Disclosure on Social VR Platforms.

[00:07:32.207] Ria Gualano: Hi, my name is Ria Gualano, and I am going into my second year as a PhD student in the Cornell Department of Communication, where I study disability representation and digital and virtual technology accessibility. So the poster that I'm presenting today is a project that I'm co-leading with Lucy Jiang and Kexin Zhang. And we are essentially looking at how we can expand the inclusivity of avatar design platforms to include customization options that can represent invisible disabilities, such as chronic illnesses, neurodivergency, et cetera.

[00:08:08.151] Kent Bye: How do you imagine some of these different disabilities might be represented visually within the context of an avatar?

[00:08:14.526] Ria Gualano: Right, well there's already been some preliminary data collection and it seems that participants, some of them are interested in incorporating different pride or representation like in their apparel. So one of our participants talked about how they are interested in seeing like zebra print options because that speaks to rare disease month print. But we also have other participants who talked about wanting to maybe have it float above their head in some way or to maybe even include it in the gamer tag.

[00:08:44.111] Kent Bye: A lot of these platforms don't have many customization options. And so maybe speak to what's the impact of that.

[00:08:51.053] Ria Gualano: Well, it's important that when we're building inclusive spaces, we're allowing for people to self-express. And one of the key components of self-expression is having things available for various identity groups. And it doesn't necessarily have to be just disability. There's a lot of participants who are also calling for racial diversity, body diversity, all sorts of forms. Because as we continue expanding our uses of VR and XR, people are going to continue to become embodied more and more frequently, so it's important that they can use those avatars as an accurate self-expression tool, so they can feel fully comfortable in their virtual skin.

[00:09:22.211] Kent Bye: And what were some of the findings that you've had within this expanding inclusive avatar design paper so far?

[00:09:28.216] Ria Gualano: Well, what we found is kind of mixed. So some participants were very interested, and now some of them were advocates too already, but some were very interested in representing their invisible disabilities, while other people were interested in not disclosing at all through their avatars. So a lot of times when people did want to disclose, it's either because they were already disability advocates trying to spread awareness, or they were especially interested in specifically support group spaces in VR, so they wanted to be able to spark conversations. But yeah, we found a ton of different applications, a ton of different ideas for how people want it. Again, floating above their heads, in the gamer tag, on their clothes. But it seems like this is something quite a few people could be interested in, given its broad applications across invisible disabilities.

[00:10:11.922] Kent Bye: Yeah, you have a number of different drawings here. I'm wondering if you could maybe step through some of these different things, like earbuds, and giant spoon, and the floating energy bar, and just kind of expand on what some of these different representations are calling back to in terms of these various different disabilities.

[00:10:27.889] Ria Gualano: Right. So I'm very interested in feminist disability theory and critical disability theory in general, which is part of what comes from, I guess, like my background in situating it in the literature. So we had participants complete a whiteboard task during their Zoom interview where they essentially were given 10 minutes to draw exactly how they would want to represent themselves and their disabilities in VR. So as you can see on the far left, there is an energy bar above someone's head, and that ties into P1 over here, who had a big spoon that they're carrying. Now those are both callbacks to spoon theory, which is an energy-based theory that essentially posits that everybody has a certain number of spoons that they enter their day with. And these spoons represent their energy. So people with chronic illnesses might have fewer spoons going into the day as a result of their chronic illness. So that means that everything you do, whether it's taking a shower or playing a video game or meeting up with a friend, takes away a spoon. So when you have limited spoons, you have to really determine how you can allocate them. And we need to be thinking about ways that we can incorporate that into design processes. So this energy bar could say, I'm low energy today at this point, because my energy's already been spent. Or this giant spoon could say, hey, I'm a spoonie, and I identify with this community. Then you can see P8 has earbuds in, and that participant in particular said they usually wear headphones. That was the participant who did not want to disclose at all, ever to anyone, because of perceived negative repercussions of that. So because they don't usually use earbuds, they considered that to be an extension of them concealing their disability in non-virtual worlds as well.

[00:12:12.471] Kent Bye: So the idea being, if you're wearing headphones, then you're not going to talk to that person. But that's a way of kind of masking out a social construct of not having to engage socially, I guess, in the physical world. Is that the idea, that you could wear virtual headphones and then you get a similar effect?

[00:12:28.326] Ria Gualano: Perhaps. I mean, I could definitely see that interpretation of it. But I think this participant in particular was like, I'm a headphone user. But in VR, I'm going to pretend that I'm an earbud guy. So instead, looking at how you can represent yourself differently in the virtual context and how that could be another representation for them of concealing that disability.

[00:12:51.351] Kent Bye: OK. Yeah, so I guess what's next with this project?

[00:12:54.465] Ria Gualano: Well, next we are going to gather a few more participants so that we can submit to a conference. So we're going to be writing a conference paper out of this and then hopefully presenting it next year.

[00:13:05.332] Kent Bye: Great. And finally, what do you think the ultimate potential of spatial computing and accessibility awareness might be and what it might be able to enable?

[00:13:15.907] Ria Gualano: Well, all it takes in an organization to get things moving is at least somebody who speaks up and brings these accessibility ideas to the table. So whether we're talking about educational institutions, whether we're talking about the workplace, applications of VR across all different types of consumer and professional industries, it really takes spreading that awareness that people say, hey, we need to start branches that are studying this. We need to hire researchers who are thinking about these possibilities before we even finish designing the product so we don't have to go back and make alterations later, right? So rather than including disability as an afterthought, hopefully disability will be what people are considering first and then designing.

[00:13:52.106] Kent Bye: Awesome, well thank you so much.

[00:13:54.507] Ria Gualano: Thank you.

[00:13:55.768] Kent Bye: So that was Ria Gualano and the poster that she was presenting was called Expanding Inclusive Avatar Design: Understanding Invisible Disability Representation and Disclosure on Social VR Platforms. Moving on to the next conversation, this is with Lucy Jiang, who had a poster there called Beyond Audio Description: Exploring 360° Video Accessibility with Blind and Low-Vision Users Through Collaborative Creation.

[00:14:18.132] Lucy Jiang: So hello, I'm Lucy Jiang. I just finished my first year of my PhD at Cornell University up in Ithaca, New York. And today I'm here at the XR Access Symposium to be talking about the project that I just did this past year on exploring 360 degree video accessibility for blind and low vision users and also with blind and low vision users through collaborative creation. So what we did is basically we kind of tried to understand how can we convey this idea of immersion, which is something that's really missing in all of our translations between 2D and 3D content. How can we convert that into audio descriptions, into other audio cues that can really give blind and low vision people equitable experiences in 360 degree spaces, even when they're not using their sight. So that was kind of the goal of the project, and we did this through a couple of interviews and two design workshops, actually, to basically bring together a bunch of different people with audio description expertise. We included people who are blind and low-vision audio description users. We included some sighted writers, some sighted audio description creators in the industry, and also blind and low-vision audio description creators, actually, who have this very unique intersection of expertise as both users and creators, who really brought a lot to this design session. And through that, we found out that there's a lot of different elements that work together to create this immersive experience. It's not just audio description. It's actually audio cues, sound effects, which are a little bit different than audio cues in that you can have earcons, which are basically little audio cues that kind of give you a sense of where you are in the space to help you orient yourself if you're trying to figure out what would be interesting to look at. And then also making sure that we're also considering haptics, tactile elements that could really increase the sense of immersion as they were talking about a little bit earlier today. And that's kind of an overview of the project, I suppose. Are there any specific questions?

[00:16:03.645] Kent Bye: Yeah, I guess the first question is, I know there's a lot of 360 videos on YouTube. And there's a platform for watching that. And so is this something that would go in the middle of these systems? Like you could use an application that would be able to add these things? Or is this something that a lot of these tools have to author this specific information and then bundle it together on either a custom application? Or is this something that would need to have expansive tool set for a platform like either Vimeo or YouTube to be able to integrate some of these different features that you're working on? Yeah, I'd love to just hear a little bit about the technical background for how you're starting to prototype these different accessibility features for 360 video.

[00:16:40.256] Lucy Jiang: Yeah, definitely. So that's a great question. Right now, we're in the design phase. So really just trying to understand what people want. But I think what you brought up about there could be a couple of different ways that this could be implemented, either as an in-between layer or straight on top of some of these applications. Ideally, what would be happening is that creators of 360 videos would be listening to these suggestions from blind and low vision users and creating audio description when they're creating their entire video, creating these immersive experiences that are non-visually accessible, so that it is not really an afterthought. Unfortunately, in real life, that might not always be the case. So ideally, there would also be a way for blind and low vision people to maybe crowdsource or use artificial intelligence to create that accessibility for themselves so that they can still be included in this transformation of content from 2D to 3D, even though, unfortunately, maybe not all video creators are really hopping on this accessibility trend yet. So I think that would be where we imagine that this could go as hopefully something that really guides people to pursue accessibility from the get-go, but if not, something that can be used to add a serviceable and enjoyable audio description and audio experience while there isn't anything better yet. But hopefully, again, the ideal would really be to have the best possible option created by the people who are the creative minds behind the project.

[00:17:54.287] Kent Bye: A lot of times, you'll have 360 videos that have people speaking or other things that are happening. Do you imagine that this type of system that would have people watch it maybe multiple times, maybe just to get a sense of what the context is? Because there's a lot that's happening in these spaces. And so I'm just wondering how you manage when people are speaking versus helping to set the context with all these visual cues. And if you expect people to maybe pause it and have some of this stuff happen, or if people are going to maybe just watch it once and then they get whatever they get, but there's kind of an overwhelm of all this information and yeah, just how you start to navigate this information overload dimension of all these audio cues on top of the narrative that may be explored within any one of these videos.

[00:18:34.716] Lucy Jiang: Yeah, definitely. So I think this information overload point that you're bringing up is really important. We heard about it earlier today. And I think it will continue to be a pressing issue as this technology gets closer and closer to our faces, right? But I think this is where options are really helpful. You know, maybe some people on their first pass, they'd really love to get all the information that they can. And maybe some people on their first pass, they just want to see if they're even interested at all. And then they can go back and watch it again and again and kind of get put together the pieces that they want from there. Extended audio description, as you mentioned, which is including pausing and then adding additional detail, that could also be a really great option. In this project in particular, we didn't explore it because the video itself didn't contain too much dialogue, but a lot of participants were actually pretty interested in that if they really wanted to find out more. But they also acknowledged that extended really does extend the amount of time that you have to spend consuming content, which in itself can be a little bit inequitable. But if the user chooses to do that, then that is obviously up to their choice and they want to do that. And it's great that we can make it accessible to them.
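
As an aside on how extended description works mechanically: the core of it is simply pausing the main video while a longer description clip plays, then resuming. Below is a minimal browser-side sketch of that pattern using the standard HTMLVideoElement and Audio APIs; it's an illustration of the general idea, not the system Jiang's team built, and the cue format is an invented placeholder.

```typescript
// Minimal sketch: "extended" audio description that pauses the main video
// while a longer description clip plays, then resumes playback.
// Assumes a <video id="vid360"> element and pre-authored description cues.

interface DescriptionCue {
  time: number;        // seconds into the video where the cue fires
  audioUrl: string;    // pre-recorded description clip
  extended: boolean;   // true = pause the video while it plays
}

const video = document.querySelector<HTMLVideoElement>("#vid360")!;

function attachDescriptions(cues: DescriptionCue[]): void {
  const pending = [...cues].sort((a, b) => a.time - b.time);

  video.addEventListener("timeupdate", () => {
    // Fire any cue whose timestamp we just passed.
    while (pending.length && video.currentTime >= pending[0].time) {
      playCue(pending.shift()!);
    }
  });
}

function playCue(cue: DescriptionCue): void {
  const clip = new Audio(cue.audioUrl);
  if (cue.extended) {
    video.pause();                                   // hold the scene still
    clip.addEventListener("ended", () => video.play());
  }
  clip.play();   // standard (non-extended) cues simply overlap the soundtrack
}
```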

[00:19:30.365] Kent Bye: And we heard Christian Vogler from Gallaudet University talk about how, as a deaf person, he wanted to have more haptic integrations to start to convey some of this different information from sources or tonality or intensity. And he was warning that a lot of the downfalls with even the captioning is from the legacy of low-resolution TV, and as we move into a spatialized environment, we have new affordances that are made available. So I'm wondering how, for blind and low-vision users, you're starting to use different types of haptic or tactile feedback in some of these different experiences, and how you're using that to help provide another channel of information to unpack some of this information.

[00:20:09.968] Lucy Jiang: Yeah, that's a great question. So some of our participants, we actually asked them specifically about haptics, but beyond that, also like, you know, olfactory senses, so smell and also gustatory taste. For the specific videos that we included, they were not too enthused about, you know, maybe smelling the bathroom or the tube or anything like that, but they were actually very interested in haptics in particular for indicating maybe loud noises, like Christian was saying, loud noises, especially when you had to duck them for additional description purposes. That was a really great use case. Also proximity, so maybe if you were going through the green tunnel and you wanted to kind of indicate that you were being surrounded by something, maybe you could have some sort of tactile either through the controllers or through a wearable or something that would feel a little bit more constricting just for the purposes of getting that feeling of being in a tighter space just for that moment. Especially because the juxtaposition between that tighter space and the next scene which is a wide open grassy field could really be amplified through that use of haptics. But those are some of the examples that people brought up with haptics. Of course, right now, a lot of people were thinking about either like Disney World rides or 4D theaters as the only examples of really responsive haptics that they had had. But hopefully in the future, you know, with more advanced technology, we can really make this a more everyday experience so that everyone can get not only audio feedback for blind and low vision people, but also tactile haptic stuff for deafblind users or even just sighted people who want an additional experience.
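
One concrete way to prototype the "loud noise" haptic cue described above, at least in a browser, is to watch the soundtrack's loudness and fire a vibration pulse when it spikes. Here's a hedged sketch using the Web Audio AnalyserNode and navigator.vibrate (mobile browsers only); it is not how the study's prototypes were built, and the threshold is arbitrary.

```typescript
// Minimal sketch: map loud moments in a 360 video's soundtrack to vibration,
// one possible stand-in for the "loud noise" haptic cue participants described.

const videoEl = document.querySelector<HTMLVideoElement>("#vid360")!;
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(videoEl);
const analyser = ctx.createAnalyser();
analyser.fftSize = 256;

source.connect(analyser);
analyser.connect(ctx.destination);      // keep the audio audible

const samples = new Uint8Array(analyser.frequencyBinCount);

function pollLoudness(): void {
  analyser.getByteTimeDomainData(samples);
  // Rough RMS loudness in 0..1 (128 is silence for byte time-domain data).
  const rms = Math.sqrt(
    samples.reduce((sum, s) => sum + ((s - 128) / 128) ** 2, 0) / samples.length
  );
  if (rms > 0.4 && "vibrate" in navigator) {
    navigator.vibrate(100);             // short pulse for a loud moment
  }
  requestAnimationFrame(pollLoudness);
}
pollLoudness();
```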

[00:21:31.242] Kent Bye: Yeah, one of the things that you have in YouTube is that you can turn on or off subtitles, and I know that you have an opportunity to upload, like, an ambisonic, spatialized sound for some of these videos, but oftentimes there's only one soundtrack. So I'm wondering if you think in the future we might have, like, multiple soundtracks so that people could switch on to, like, just the audio description soundtrack that would have, like, ambisonic spatialized sound, so that people could just listen to the environment. Because you would have conflicts if you're trying to mux down all the different audio descriptions with spatialized sound on top of the audio; it would get, again, this conflict or overload. So yeah, I don't know if you've started to play with some of these ambisonics to do the spatialized sound with the videos.

[00:22:11.418] Lucy Jiang: Yeah, no, that's really interesting. So, just to make sure I'm understanding your question, sorry, you mentioned there's a bunch of different options for, like, you know, turning things on and off, making sure that you're not, like, overloading yourself, because you have that option sometimes to turn off things, but...

[00:22:22.038] Kent Bye: Right now, you can only, as far as I know, upload one audio channel for a video. It would have to be on the platform side for YouTube to enable you to have multiple audio tracks. So just like you can turn on subtitles, you could turn on the audio description soundtrack, which would be a whole other experience. But that's something that would have to be at the platform level. But it's just an idea, because as you start to do audio descriptions, you have the problem of potentially conflicting with the other ambient soundscape that might be there.

[00:22:49.343] Lucy Jiang: Definitely, yeah. Actually, what's really exciting is YouTube has added, on select videos, an audio description track, primarily actually on ads is where I've seen it, which is maybe not where people really want their audio description to be because, you know, who likes watching ads? But it's really exciting that they're starting to kind of move into this. But actually, speaking of that information overload with a lot of my, at least, sighted friends: I personally prefer watching a lot of stuff with audio description just because I enjoy it. I think it's an art form. But a lot of my sighted friends find it way too overwhelming. They're like, I'm already seeing this stuff. I don't need someone to tell me that I'm seeing this stuff. It's just making me really confused. And in that sense, yeah, I think definitely having this option to turn stuff on and off, maybe even set the description verbosity or the level of detail that people want, is something that AI could certainly help with in terms of summarization or adding more detail or something like that. There's a lot of different possibilities. This is just kind of the tip of the iceberg, but we're really excited that this kind of work is really applicable to what may be the future of entertainment and everything else.
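
To make the spatialized-description idea in this exchange a bit more concrete: in a browser, a description clip can be positioned in 3D relative to the listener with a standard Web Audio PannerNode, so it seems to come from the part of the scene it describes instead of sitting flat on top of the soundtrack. This is an illustrative sketch only; a true ambisonic pipeline would need a dedicated decoder, and none of the file names here come from the research discussed.

```typescript
// Minimal sketch: place an audio-description clip in 3D so it "points"
// toward the part of the 360 scene it describes.

const audioCtx = new AudioContext();

async function playSpatialDescription(
  url: string,
  x: number, y: number, z: number   // position relative to the listener
): Promise<void> {
  const buffer = await fetch(url)
    .then(r => r.arrayBuffer())
    .then(data => audioCtx.decodeAudioData(data));

  const src = audioCtx.createBufferSource();
  src.buffer = buffer;

  const panner = new PannerNode(audioCtx, {
    panningModel: "HRTF",             // binaural rendering for headphones
    positionX: x,
    positionY: y,
    positionZ: z,
  });

  src.connect(panner).connect(audioCtx.destination);
  src.start();
}

// e.g. describe something off to the viewer's right (hypothetical clip):
// playSpatialDescription("/descriptions/green-tunnel.mp3", 2, 0, -1);
```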

[00:23:45.145] Kent Bye: So what's next with this research then?

[00:23:47.140] Lucy Jiang: Yeah, so for this research in particular, so we're looking at how context might affect how a user wants to consume a video. Specifically, again, for blind and low vision users who are using audio description, but there's a lot of different contexts that you might be watching a video in. So, for example, you could be watching a cooking video on YouTube, but there are a lot of people on YouTube who are doing cooking for kind of entertainment purposes, and then others who are doing stuff for how-to purposes. And how do you understand what audio description or what amount of detail that you want from those videos when you're watching them in different contexts? So we're really hoping to understand that a little bit better through our next study, which is on context-aware audio descriptions, video descriptions, and maybe even audio cues to improve this experience and cater to users' goals rather than just trying to find something that's one-size-fits-all and casting a wide net that is OK for everyone but not good enough for anyone.

[00:24:38.592] Kent Bye: Right. And finally, what do you think the ultimate potential of spatial computing with accessibility features might be, and what it might be able to enable?

[00:24:47.603] Lucy Jiang: That's a great question, once again. I think my current thoughts are that spatial computing and accessibility could really open up a new world for interacting with each other in a way such that it feels very natural, but you're getting all the assistance or guidance or whatever it is that you need in the most natural way as well. I think with spatial computing, with these headsets and everything like that, there really is a lot of potential for people to get discreet accessibility and access needs fulfilled in such a way that it also doesn't compete with other people's access needs or it doesn't require disclosure if people don't want to, stuff like that. I think that there's a really great possibility for spatial computing and accessibility. There's really a lot of potential to celebrate disability and the different ways that people interact and, like, view the world, interact with the world. But I think there's also a great possibility to, in celebrating that, make sure that they're getting the accessibility that they need in a way that, again, doesn't conflict with other people's access needs, like the overload or something, because it is all much more personalized to the end user. And with the spatial computing, it brings it into, yeah, this more natural mode of communication that hopefully brings people closer together as well, instead of keeping them apart, as we've kind of seen with some technologies.

[00:26:06.395] Kent Bye: Awesome. Well, thank you so much.

[00:26:07.579] Lucy Jiang: Thank you so much. It was nice to meet you.

[00:26:10.122] Kent Bye: So that was Lucy Jiang. She had a poster there called Beyond Audio Description: Exploring 360° Video Accessibility with Blind and Low-Vision Users Through Collaborative Creation. The next poster and demo that was there was from Su Chen. It was called AR Subtitle Glasses: Breaking Down Communication Barriers. This was from LLVision. And the demo actually wasn't working yet when I tried it out. So I did a very brief conversation and didn't have time to go back to see if it was working later on. But I just wanted to have this brief conversation just to show some of the different assistive technology use cases for augmented reality.

[00:26:46.462] Su Chen: Yeah, my name is Su Chen and I'm representing this company called LLVision and we are showing these AR subtitle glasses. So the AR subtitle glasses are where, when people talk, you wear these glasses and you can see what people are saying as subtitles. So the use case for this is for people with hearing loss or hearing impairment. So when they see other people in front of them trying to talk to them, they can see what they're saying. So it will be helpful in different situations like work, like everyday life, or some emergency situations.
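
For readers curious about the general shape of a live-captioning loop like this, speech in, streaming text out, rendered as subtitles, here is a minimal sketch using the browser's Web Speech API. It's purely illustrative; LLVision did not share how their glasses' speech-to-text pipeline works, and this is not their implementation.

```typescript
// Minimal sketch of a live-captioning loop: capture speech, stream interim
// transcriptions, render them as subtitles. Uses the browser's Web Speech API
// purely as an illustration of the pattern.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.continuous = true;        // keep listening across utterances
recognizer.interimResults = true;    // show words as they are recognized

const subtitleEl = document.querySelector<HTMLElement>("#subtitles")!;

recognizer.onresult = (event: any) => {
  // Concatenate everything recognized in this batch; interim results update in place.
  let text = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    text += event.results[i][0].transcript;
  }
  subtitleEl.textContent = text;
};

recognizer.onerror = (e: any) => console.warn("recognition error", e);
recognizer.start();
```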

[00:27:23.862] Kent Bye: Great. Maybe you can give a bit more context as to this company that you're representing here.

[00:27:28.485] Su Chen: So the company's name is LLVision. They are based in Beijing and they started to develop this technology I think around 10 years ago, and they launched this version in 2021. So they are trying to expand the market to North America, and they started everything in China. They had pretty good success with the first trial testing in the Chinese market, and now they want to learn about, like, what are some cultural differences that they need to consider, what are some policy guidelines around products like this, for emerging technologies to be used, especially for people with disabilities, for people who have special needs that need to be accommodated by technologies and innovations.

[00:28:10.933] Kent Bye: Great. And so what is the speech-to-text system that you're using? Is it something that's in the cloud? Is it recording the audio and then sending it up? Or is it all done locally? Maybe you could give a bit more context as to how the actual speech-to-text synthesis works.

[00:28:24.666] Su Chen: Actually, I don't know about that.

[00:28:25.587] Kent Bye: Oh, OK.

[00:28:25.827] Su Chen: We can't answer this question.

[00:28:27.949] Kent Bye: OK. And yeah, so what are the different types of stuff that you're doing here at this accessibility conference then?

[00:28:34.234] Su Chen: So for this conference, we are trying to get a reaction from the audience, from the panelists, and from the attendees who may have needs for this type of technology, but also people who are professionals in the industry, to learn about their opinions, their concerns, and if we can get inspiration for some potential use cases or collaboration opportunities, that would be awesome.

[00:28:59.343] Kent Bye: Awesome. Well, thank you so much.

[00:29:00.884] Su Chen: Thank you.

[00:29:01.926] Kent Bye: So that was Su Chen with LLVision talking about AR subtitle glasses. Then the next conversation was with Denise Coke of Snoop Designs, who's an augmented reality artist. And so just to get some of her take on how she's starting to apply accessibility for what she does with her AR art.

[00:29:17.305] Denise Coke: My name is Denise Coke. My business is called Snoop Designs and I'm an augmented reality artist. So what I'm showing today is an augmented reality art piece. So essentially what you would do is that once you download the app, and it's available to anybody with an Android or iPhone, you would hold it up to the piece and then it comes to life. So right now what we're looking at is that the static piece is standing on the easel, but what popped up is an animation of sunflowers, and it's dancing around the girl. And essentially the reason I did this is because I wanted to add like an extra dimension to the animation. Also, for people who may not be able to specifically see the piece, you can add sound to it, it can talk to you, describe to you what you're looking at. And what I was able to do last year was also do an in-person event as well as an online metaverse event where people were able to create avatars, walk around, and then also be able to interact with the augmented reality artwork as well.

[00:30:14.640] Kent Bye: OK, so yeah, we're here at the XR Access Conference. And so I'm wondering what type of specific accessibility features you've built into this application.

[00:30:23.533] Denise Coke: Yes, so what I wanted to do was screen readers. So essentially, if somebody was in the metaverse gallery, when they go to each of the different places or images that they see, when they tap on it, it'll describe to them what they're looking at. And then also I talk about accessibility as in, if you can't get to a gallery or a museum, you're able to just access this artwork from your home. So I think that that's really great, especially here in New York City. Everybody can't get on the train and get to a gallery in the city, so they can now start going to art galleries at home.
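
The "tap it and it describes itself" behavior Coke describes maps onto a familiar web accessibility pattern: give each artwork a short accessible name and push the longer description into an ARIA live region when it's activated. Here's a hedged sketch of that pattern; the artwork data is invented for illustration, and this is not the code behind her app.

```typescript
// Minimal sketch: focusable gallery items whose descriptions are announced
// by screen readers via an ARIA live region when activated.

interface Artwork {
  title: string;
  description: string;   // what a sighted visitor would see, in words
}

const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "polite");
liveRegion.className = "visually-hidden";
document.body.appendChild(liveRegion);

function addArtwork(gallery: HTMLElement, art: Artwork): void {
  const piece = document.createElement("button");
  piece.textContent = art.title;                   // short accessible name
  piece.addEventListener("click", () => {
    liveRegion.textContent = art.description;      // announced by screen readers
  });
  gallery.appendChild(piece);
}

// Hypothetical usage, loosely based on the piece described above:
// addArtwork(document.querySelector("#gallery")!, {
//   title: "Sunflower piece",
//   description: "Animated sunflowers circle and dance around a girl.",
// });
```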

[00:30:57.191] Kent Bye: And what was it that got you into doing this type of work?

[00:31:00.355] Denise Coke: Yeah, so I learned about augmented reality in 2016. Someone had shown me an example and I had looked up more information about it because I already was an animator slash motion graphic artist. And what I was able to do was start implementing it into my actual artwork because I am a digital artist who prints on canvas. And from there, I've just kind of always been doing it. And I was really passionate about getting into the accessibility space because I actually had a conversation with Regine about what she does in the space. And yeah, from there, I just hope to keep building upon my craft. And as an artist who already loves tech, I think it's a great way to kind of merge it together.

[00:31:39.345] Kent Bye: What do you think the ultimate potential of augmented reality and accessibility might be, and what it might be able to enable?

[00:31:48.134] Denise Coke: I think the potential of it is that it just gives people the opportunity to look at things from a different point of view. You're able to understand really what the artist is trying to tell as far as storytelling goes. So while you may have your own perception of what a piece may look like, you're actually able to dive a little bit deeper into it through augmented reality just because you're adding in that animated portion to it. Or you can even have the artist start talking to you and explaining to you what you're looking at. Why did they do this the way they did it? What do the colors mean? What do the different images in here mean? So I definitely see it as a way of connecting more to the artist's story and then just making it fun. You know, galleries should be fun. Museums should be fun. So I think this allows it to be fun.

[00:32:31.042] Kent Bye: Awesome. Well, thank you so much.

[00:32:32.163] Denise Coke: Of course. Thank you.

[00:32:33.937] Kent Bye: The next interview is with Nicolas Waern, who is an attendee of XR Access and was thinking about how to start to add XR accessibility to different types of digital twin applications.

[00:32:45.243] Nicolas Waern: My name is Nicolas Waern. I come from a company called WINNIIO. We specialize in digital twins. So it's not only using the metaverse to escape reality, but it's actually using this VR, AR, mixed reality to tie it to real-time data to improve reality. So we do that for buildings, smart cities, and now heading over to Orlando next week to talk about Olympics planning. So basically you prepare something, you copy the real world into a virtual environment, and that allows for extreme collaboration and communication. So basically asking the virtual city, which is a clone of the real city, how do we get to supporting all the people that need to go to Brisbane or to LA during the Olympics, and for that to happen virtually first. So it's not only using the virtual world just for virtual sake, but it's actually maybe starting in virtual reality, but then having the intent to go into augmented reality and mixed reality as well. So that's sort of what we're doing, also coupling that with real-time data. And the stuff that I'm showing today is more for smart buildings. So we have instrumented real buildings with sensors measuring energy, temperature, humidity, CO2 levels, whether people are there, and also tying it to actuating capabilities. So basically I call it reality gamification. So you play a game, like all the stuff that you see here with Oculus or MetaQuest or all these kinds of things, but you can actually improve reality. So whatever you do in the virtual world, if you say, you know, turn off the lights, turn on the lights, close valves, improve the radiators, or improve the energy efficiency of buildings in a virtual world, then if that is tied to the real world, that would actually happen in the real world as well. So the reality gamification tech space, I think, is a natural next step when it comes to not just using these kinds of things to, again, escape reality and just doing it for gaming, but you actually play a game, but you improve reality. So that's what we're doing, that's what we're focusing on, primarily in the smart building segment, but also, again, for smart cities. And another company that I'm here with, we have an XR orchestration platform. So today, all this augmented reality, virtual reality, it's just siloed and it's just one-to-one. But we aim to be like the YouTube for AR content so that you can create something in augmented reality, you can share it with someone else, they can jump into your space, edit it, and you can work on these kinds of things seamlessly without any boundaries of time and space. So I think that, you know, for me, utilizing modern tools in the right order to solve real-world problems, and especially with a focus on accessibility, that's my primary reason for coming here, so that whatever we develop and whatever I advise for, again, your Olympics planning, so that when you actually come to the Olympics in five years, in nine years, inclusivity or accessibility is not an afterthought. It has to be there natively, and that's why I'm here, to learn about these kinds of things.
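
The "reality gamification" loop Waern describes, real-time sensor data flowing into a virtual model and virtual actions flowing back out as commands to the building, can be sketched in a few lines. The endpoint URL and message shapes below are hypothetical placeholders, not WINNIIO's actual platform.

```typescript
// Minimal sketch of a digital-twin sync loop: telemetry in, commands out.

interface SensorReading {
  deviceId: string;
  kind: "temperature" | "humidity" | "co2" | "occupancy";
  value: number;
  timestamp: number;
}

class BuildingTwin {
  private state = new Map<string, SensorReading>();

  // Real -> virtual: keep the twin in sync with live telemetry.
  ingest(reading: SensorReading): void {
    this.state.set(reading.deviceId, reading);
  }

  read(deviceId: string): SensorReading | undefined {
    return this.state.get(deviceId);
  }

  // Virtual -> real: an action taken in VR becomes a command to the building.
  async actuate(deviceId: string, command: "on" | "off" | "setpoint", value?: number): Promise<void> {
    await fetch("https://example.invalid/building/commands", {  // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ deviceId, command, value }),
    });
  }
}

// e.g. the player flips a virtual light switch:
// const twin = new BuildingTwin();
// twin.actuate("lobby-lights", "off");
```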

[00:35:44.037] Kent Bye: Great. And can you speak a little bit about some of the accessibility features that you have already in development? Or are you in the exploration phase here, just kind of learning about the community and figuring out how to integrate some of these different aspects into the digital twin products that you're creating?

[00:35:58.717] Nicolas Waern: A really good question. Starting from the second part of the question, I'm here to learn as much as possible, because the data is here, but it's really, really interesting to hear how the data that exists in buildings, or that actually exists in cities, can be transformed so that it leads to an intended outcome. So, we have all the components, and we're ready for it, using so-called taxonomies, ontologies, and standards to take the data that we have and rapidly turn that into information, insight that leads to action, and an intended outcome. So it's two ways, actually. So we're ready for it, but I'm here to understand more contextually how we can take the data into something that is really, really valuable, not just for the traditional people, but for a whole wider range of communities.

[00:36:45.450] Kent Bye: Great. And finally, what do you think the ultimate potential of spatial computing and digital twins might be, and what it might be able to enable?

[00:36:53.622] Nicolas Waern: Oh wow, that's a great question. I think it's moving towards spatial data platforms, to intelligent spatial insight platforms, and even more so to outcome-driven platforms. So basically, an example is a company I work with that does 5G planning and 6G planning. So then again, you go into a city, you take the real-time data from maybe existing networks, and you package that together with satellite data. So you create a 3D virtual representation of reality that is fed with real-time information. And you ask that spatial compute platform, how do we get to 100% connectivity? How do we get to X, Y, and Z? And then the platform will natively show this. It will show here is where the cell towers are going to be, here is where you'll go towards 100% connectivity. And, you know, if you look up into the sky, you can see the connectivity, where it's now fragmented and you have gaps in it, but in the future it will be completely opaque, because you have great connectivity everywhere. And that on its own is really, really important, because this is pretty complex stuff. So, utilizing digital twins, XR, to make it understandable for a 5-year-old, a 10-year-old, for someone with disabilities, and having them not only be subjected to information, but be able to participate and have that creativity aspect, that's really what I think we can reach even today, but we're not really using it in that way. We're using it for gaming, we're using XR for entertainment, but we should be using it to solve really, really critical problems through methodologies like extreme collaboration practices. And when we've solved one problem somewhere, let's say in New York, we can copy-paste that to the entire planet and level up as a species. So that's really what I see with, you know, the future of XR, and also natively incorporating accessibility standards and making it more useful for people. That's what I see.

[00:38:53.828] Kent Bye: Awesome. Well, thank you so much.

[00:38:54.969] Nicolas Waern: Okay. Thank you so much. Thanks.

[00:38:56.922] Kent Bye: So that was Nicolas Waern, who was thinking about how to add accessibility to digital twin applications and was here at XR Access to learn more information. So I have a number of different takeaways about some of these different poster sessions. First of all, this was a quick hit of lots of different topics and research that was happening in the bleeding edge of research when it comes to XR accessibility. So with the heuristics, I did want to just list through some of the different types of early categories from Spandita, who is in collaboration with Regine Gilbert. So some of the early heuristics of looking at things from like sight disabilities of saying that there's unclear navigation instructions, limited sensory experiences, the lack of visual and audio feedback. Also lack of audio feedback for folks who have auditory disabilities, lack of subtitles, and limited sensory experiences. Also for mobility disabilities, there's complicated hand movements, difficulty with controls, freedom of interactions, and then there's cognitive disabilities and non-speech and speech impairments as other categories to add other heuristics to as they continue to do this research. And then some of the other types of concerns that were brought up were lack of accessible features, more user testing and co-design, the need for inclusive gameplay design, community support and resources, and easy tutorials are needed, and then that there's a complicated setup, which is a barrier. The poster was called Formulating Inclusive and Accessible Heuristics: A Developmental Approach. And part of the developmental approach was to look at the process of development and to talk to the developers and to see what are some of the different barriers for why they're not integrating different accessibility features and at what point in the design process would it make sense to start to integrate some of these different design features. And so generally there's limited time and support for developers, there's a lack of accessibility awareness generally, and it's hard to implement these accessibility features, especially when it costs a lot of money and is expensive and there's limited resources for folks. So that was some highlights for this poster that was called Formulating Inclusive and Accessible Heuristics: A Developmental Approach. And this is one to keep an eye on, especially because I think this is a theme that came up again and again of needing to have different guidelines. There were some existing guidelines that were put out by the XR Association in collaboration with XR Access. There's a chapter three of their developers guide around accessibility and inclusive design in immersive experiences. And they break down a whole range of different things in there. We have this whole matrix of considerations that are across these different things from sight disabilities, auditory disabilities, non-speech and speech impairments, mobility disabilities, and cognitive disabilities. And also be sure to check out the MetaQuest Virtual Reality Check guidelines. These are the VRC guidelines where, if you want to get an application on the Quest store, then you have to meet some of these minimum baselines. And again, these are recommendations, they're not necessarily always enforced. And so also, they're guidelines in the sense that there's still a lot more that needs to be done, especially for blind and low vision users.
So there's a video that I'll link to as well where you can get a little bit more information on some of those existing guidelines. And the GitHub for XR Access also has a lot of resources when it comes to some of these heuristics and guidelines. It's a hot topic and something that's needed, and I think as they move forward, in order to get standardization, you need to have a plurality of different types of approaches. Standardization is a topic that came up again and again. When I talked to Neil Trevett of the Khronos Group, he said you never want to do research and development of standards by initial implementations. The best thing to do is to have a variety of different types of implementations that are trying to do the same thing, and then once there becomes a critical mass of innovation of lots of different types of approaches, then you try to pull out the common strands and what is working for folks, and then try to standardize from that point. So I think it's still very early to try to come up with the standardization, but at least with these initial guidelines and heuristics, that's going to be the next logical step. And that's a lot of the work that Regine and her graduate students are going to be working on. But it's still very early to do that. At this point, there needs to be work at the platform level, for Unreal Engine as well as Unity, as well as for MetaQuest, coming up with some guidelines or even tools on their end. And then Apple Vision Pro is bringing in a lot of the different accessibility tools that are coming from the 2D iOS and starting to deploy it out. So that's also a thing to look towards as this continues to go from 2D into 3D, and what the new affordances and design patterns are when it comes to accessibility for XR. So Ria Gualano was talking about the different aspects of avatar representation and the diversity of that and having different ways of expressing your identity. And are you disclosing your disability or not? Some folks who are disability advocates are wanting to have different ways of representing their disability in the context of their avatar. And those features are not always available, especially when it comes to like wheelchair users, as an example, there's no default wheelchair option for folks. And yeah, just the spoon theory, representing your different energy levels or even zebra print options or just other ways that people can choose to either disclose or not disclose. One of the things said on their poster was that because of the virtual reality context, sometimes folks feel more comfortable being more open about their disability because they have this certain level of anonymity because it's not their physical identity that they have. They can feel free to explore different aspects of their identity. So there's still a lot of need to expand different options for avatar representation when it comes to different aspects of disability. The next poster, from Lucy Jiang, was looking at Beyond Audio Description: Exploring 360° Video Accessibility with Blind and Low-Vision Users Through Collaborative Creation. I think one of the big takeaways for me on this is just how 360 video as a medium is something that's generally overlooked in the context of virtual reality, but it can actually be a way to prototype and really explore the potential for pushing what's possible with doing captioning and audio cues and where to look and these different audio descriptions.
Right now it's not possible to upload multiple audio tracks in the context of YouTube, but if they do add the option of, like, an additional audio track, then it might be possible to do a whole ambisonic rendered-out audio description that has the spatialization to it, and that can start to point to different things that are happening in the context of this immersive environment. And I think the extended audio descriptions, meaning that when you pause the video, you could still have access to some of these, that's an interesting idea. I think you'd have to go into more of an application at that point, rather than some of these video distribution platforms like YouTube where you have 360 video, because once you push pause then you don't have any ability to continue to play these audio descriptions. Having these things collide over the existing audio that's in these different applications is the thing that comes up again and again, even with Owlchemy Labs' approach to audio descriptions. You know, you kind of miss sometimes different aspects of the dialogue that happens and you can go back and listen to it. And so I think it is not as equitable as it is for people who are sighted, just because you can get a lot more information that's coming through visually, and you'd have to watch it multiple times to really hear the different layers of the audio description and to also hear what's happening in the context of whatever dialogue is happening within these immersive experiences. The AR subtitle glasses, I wanted to include this very brief interview just because the assistive technology potential is going to be really big for having these glasses being able to listen and send that audio up into the cloud to be able to do translation. And then eventually at some point there might be different neural networks that are happening on your phone or even in the glasses themselves, if their processing is included in there, to be able to do real-time translation. Again, I didn't have a chance to try this out. It wasn't working when I went by there at the very beginning. It may have been working later, but by that time I was already off doing interviews for the rest of the time. And there's another conversation that I have with Joel Ward, who is digging into a lot more details for what's happening with XREAL when it comes to some of these assistive technology uses of overlaying transcripts. Denise Coke from Snoop Designs was looking at augmented reality as an artist to see how you can start to add audio descriptions in the context of this AR art, or make it a little bit more interactive and more engaging, and also to do it remote. So this hybrid idea of there's site-specific augmented reality art that may be at a gallery, but if folks can't go there, then there might be an online option for folks to experience her art as well, and so thinking about accessibility in this more hybrid approach for virtual events and conferences and experiences that she's making available with her technology.
And then Nicolas Waern, who's talking about these smart cities. He's mostly there just to learn more about what's happening with XR accessibility, so we didn't dive deep into what he's specifically doing with accessibility, more than just kind of thinking about what's going to be possible when you think about these digital twins: having a copy of what's happening with physical reality, and with this virtual digital version, how can you start to use that digital twin as an accessibility tool? Maybe have an ability to explore some of the different dimensions in a virtual environment before you actually go into these different physical environments. And what are the ways that you can prototype different accessible features in the virtual context, and then how to translate that into an augmented reality context when you're actually physically embedded into those different situations. So lots of work to be done there as well. So that was the poster session. Hopefully that gave you a nice overview of some of the different conversations and discussions. There were some breakout sessions later in the day for the avatar representation as well as the 360 video audio descriptions and brainstorming around that. And there's a whole video that dives into the takeaways of all those breakout sessions that I point folks to to get more of an overview of some of the different conversations that are happening there. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
