Liv Erickson is the ecosystem development lead at Mozilla focusing on innovation in spatial computing and AI technologies, and they are the former director of the Mozilla Hubs team. They gave a presentation at the XR Access Symposium that they summarize in their blog post on “Multidimensional computing accessibility in the age of XR and AI,” and they also facilitated a break-out session on Spatial Computing, Data, and AI. I had a chance to catch up with them at the start of the second day to talk about their talk, highlights from the break-out group, and their other blog post exploring how “In the Era of Multi-Dimensional Computing, XR is the Future of Front-End; AI is the Future of Back-End.” There is a lot of potential in the future of AI and personalized assistants for accessibility, but also in how this intersects with the increased communication capacities and collaborative sensemaking potentials of spatial computing and immersive storytelling. Erickson suggests that perhaps we should think about AI more as “Augmented Intelligence” than as “Artificial Intelligence,” a semantic shift that contextualizes AI as a human-centered technology in the spirit of the 10 principles of Mozilla’s Manifesto and its 4 addendums towards a healthy Internet. We explore all of that and more within this brief discussion.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the future of spatial computing. So this is episode 12 of 15 in my series looking at XR accessibility. In today's episode, I have a talk with Liv Erickson, who is the ecosystem development lead at Mozilla, focusing on innovation in spatial computing and AI technologies. So Liv was both facilitating some group discussions around spatial computing and AI and how those are intersecting, but also giving a talk about multidimensional computing there at the XR Access Symposium. And so, yeah, this is a hot, hot topic, the intersection between virtual reality and AI. I have at least a couple of dozen interviews that I hope to dive into within the next month or two as I start to unpack the seven different trips I've taken over the last two or three months. But there have been lots of different discussions around this intersection between artificial intelligence and virtual reality. So in this conversation, Liv is really taking their experience of being the lead of Mozilla Hubs and understanding this co-creative dimension of having a bunch of people who are in these spatial environments and sharing different text and audio and videos with each other, this collaborative sense-making that can start to happen. What are the ways you can start to use the medium of virtual reality to have these other modalities of communication with other people? How can AI start to assist with that, especially when you start to integrate different things like generative AI? But also, what about this idea of having your own AI personal assistant that is able to be tuned to your information consumption needs, almost like an intermediary that you're able to lean upon to help express yourself to other people? So that's something that Liv has been diving into, and that's what we're going to be talking about on today's episode of the Voices of VR podcast. So this interview with Liv happened on Friday, June 16th, 2023 at the XR Access Symposium in New York City, New York. So with that, let's go ahead and dive right in.
[00:02:12.749] Liv Erickson: My name is Liv Erickson. I am the ecosystem development lead at Mozilla, focusing on innovation in spatial computing and AI technologies. I'm the former director of the Mozilla Hubs team, and I have been looking at the intersection and use of artificial intelligence and spatial computing in XR for about 10 years now.
[00:02:35.139] Kent Bye: OK. And yeah, maybe you could give a bit more context as to your background and your journey into this space.
[00:02:41.073] Liv Erickson: Yeah. So my background, I'm a computer scientist by trade, but I have been focusing on creative technology and policy advocacy through product and the development of businesses built on top of open source software in the XR space. My interest is in looking at computing as a tool and mechanism for what we are trying to do as humans: connecting with each other, telling our stories, and building the world that we want to see.
[00:03:10.258] Kent Bye: And maybe you could give a bit more context for how XR and accessibility started to come onto your radar.
[00:03:16.342] Liv Erickson: Yeah. So my first experience working with XR was a demo where I got to step into the Star Wars universe, which was one of my absolute favorite universes as a kid. And I was immediately struck by the potential of this technology to embody the stories that we had been told, but also to facilitate new mechanisms for telling stories. I am autistic, and telling stories and being able to communicate and use my voice was very challenging up until I found XR technologies, because they allowed me to create worlds and experiences that were grounded in computers, which I loved, but also to create new products and experiences that didn't necessarily pick one storytelling mechanism. I could use a combination of movement throughout a space and images and videos and other people and digital agents to create a more complex and nuanced, emotion-based experience for what I was trying to convey. And through working with these technologies, I realized that they were an accessibility tool for myself in understanding how I could navigate the world and have the type of impact that previously felt very inaccessible to me, just in terms of how I had grown up experiencing the world. And so accessibility really got on my radar back in 2017-2018, as I was getting diagnosed with autism and understanding how much of myself had been shaped by a world that was not necessarily developed for me, but also by this new technology that I felt empowered by to shape my own world. And I started to get involved in XR accessibility through the W3C working groups, looking at how machine learning and open standards could be used to make 3D spaces more navigable and describable and understandable. And through that, I've been able to really dive deeper and deeper into the importance of context and sharing information in a variety of different modalities in order to give a more complete picture of who we are as people and how we're engaging with the world around us.
[00:05:31.983] Kent Bye: Yeah, and so you're here giving a presentation at XR Access Symposium, but you also led a group discussion yesterday. So I'd love to hear some of the big takeaways from that group discussion.
[00:05:41.777] Liv Erickson: Yeah, I did. So the group discussion that I led yesterday was really scratching the surface of the potential and the opportunities and the challenges of incorporating artificial intelligence into spatial computing, and the different ways that we can share context. The conversation really kept coming back to context, and how spatial computing and XR could provide convening mechanisms and transformational spaces for people to maybe communicate in one mode and have that communication be transformed through XR and through AI so that someone else could understand it in a way that feels more natural to their context and framing of the world. Through that, we talked about some of the key fundamentals: making sure that there's an understanding of where spatial information is being manipulated by different players who are part of the computing stack that we live in today; understanding the nuance between giving folks additional capabilities and context for the world around them and understanding where that may be manipulated, which is a very fine line in some cases; and also making sure that there is sufficient education and literacy developed around the use of AI technologies and spatial computing. I think there are so many different facets to that. But what really kept coming up is that both of these technologies facilitate relating to our environment and other people in different ways. And what we want to do is avoid repeating the mistakes we've made in the physical world, and how we've made that inaccessible to so many people, when we are starting to build software that aims to augment or replicate it.
[00:07:34.087] Kent Bye: And how do you see artificial intelligence and machine learning playing a part in all this?
[00:07:39.570] Liv Erickson: I have a variety of timelines that I can think of when it comes to figuring out what that's gonna look like in practice. So I think, in practice, we're in for a bumpy few years of generative AI replicating a lot of the parts of the internet right now that aren't accessible. What is going into these training sets for large language models and other types of large models may or may not be accessible. The field is moving really fast, and a lot of the content that's being developed around AI may also not be accessible. But long term, one of the things that I'm really optimistic about is the idea of these technologies enabling the development of a truly personal user agent, whether that's in the form of a particular software application or even at the operating system level, where people can use these technologies to build a computing system that actually works for them. I think something that we're seeing with the current interest in machine learning, as we saw over the last decade with spatial computing and XR, is that people are hungry for agency. They want to be able to build software and tools that allow them to express themselves and exist authentically, and interact with people authentically. And, you know, the vision that I have is maybe very optimistic and hopeful: that we can get to a point where fundamental computer science literacy is accepted and popularized, and people can really feel like they're in control of building an experience that works for them. So it's fundamentally taking accessibility from something that's added to applications or to operating systems, and enabling people to build a system that is truly accessible for that unique individual.
[00:09:30.339] Kent Bye: Yeah, and I'm wondering if you could elaborate a bit on the connections between the work that you were doing with Mozilla Hubs, your work with the W3C on AI, and maybe what you're doing now with Mozilla and some of the AI tie-ins there.
[00:09:44.402] Liv Erickson: Yeah, so one of the really interesting things about Hubs is how it helped onboard such a wide community to presenting and interacting with media and information in a different way from a 2D rectangle on the screen, even though that's often the mechanism through which people experience a 3D world for the first time. The act of co-constructing a space with other people and bringing in media and information and sharing that in a particular spatial way helps people envision the potential of spatial computing, I think, much more easily. And I recently wrote a blog post that's a little bit tongue-in-cheek, but it talks about how spatial computing and XR are going to be the next front end, and AI and machine learning are the next back end. Because it's not that these technologies exist in isolation; it's that they're shifting the way that we build software applications. And they're going to be really tightly connected. There are going to be various ways that people can specialize in those, but there's also a need to have a really broad understanding of how all of those pieces fit together. And I think that my work on Hubs has been a really good place to play in understanding the interaction paradigms of working with information in a 3D manner, even in advance of headsets being widely accessible to the vast majority of people, and helping people develop the skills to feel confident that they can build a reality. It's a very low-stakes entry into building reality and influencing the world around you. And then as you start to do that in these virtual spaces, you increasingly become more and more empowered to do that for the world around us, which I think is really, really key. And so I've joked with folks that AI has always kind of felt like the sidecar to the metaverse technology stack for me, because I have been playing around with it on the side; it just hasn't been as front and center. But now that it is, I think it's really key to take what we've been doing in the space with Hubs, building out this ecosystem around agency and accessibility and customization and personalization of 3D environments and spatial environments, and apply that to machine learning and AI, because it's so transformational, and yet it does require thinking about the world and computers in a very different way. And so my work now is really focused on how we build a coalition of creators and developers and individuals, regardless of background, who really feel empowered to use this technology and shape its future in a way that inherently gives them control over their data and their information and what's being shared with these large language models. There's a lot of continual reliance on huge cloud providers to put everything in a place where we may or may not have access or know how that information is being used to train other systems or what it's being mixed with. And I think it's really key now to continue to push forward the principles that we see in Mozilla's manifesto around making the internet and our relationship to computers very human-centered, and keeping that in mind, especially as we're talking so much about artificial intelligence, which frankly I think should be framed more as augmented intelligence, because it's ultimately still being used in service of humans.
And so my hope is that we can bring some of these creative world-building experiences to the AI space and use them to help people envision a future for the technology that challenges the status quo and breaks the defaults of what we so often experience right now when building software.
[00:13:39.906] Kent Bye: Yeah, and you're going to be giving a talk here at the XR Access Symposium 2023, so I'm curious to hear what are some of the big points that you're trying to make there.
[00:13:47.925] Liv Erickson: Yeah, the biggest one that I'm trying to make is to really help shift from thinking about accessibility as a binary, or a set of discrete things that we can do, and shift it into a continuum. It's not about figuring out how to make AR accessible or XR accessible or AI accessible; it's about how we rethink the fundamental assumptions that we make about building software and use that as a jumping-off point for truly personalized computing that is accessible by default.
[00:14:24.369] Kent Bye: Great. And what do you think the ultimate potential of this fusion between XR, AI, and accessibility might be, and what it might be able to enable?
[00:14:33.832] Liv Erickson: I think right now my latest framing for the ultimate potential, and I think it changes quite a bit just with how fast everything is changing, is using the idea of Jarvis from the Iron Man series as an example, where it's a computer system that's highly personalized to one person, to their experience. It understands how they communicate, what they're looking for, what their context is. And I really think that spatial computing, XR, and AI can all really help move us towards that vision of a truly personalized agent that can help us navigate the world. But it's really critical that that happens with the users being fully in control, because as these technologies advance, there will be so many opportunities for them to be used for harm and manipulation. And that's why it's really key to make sure that we're approaching this next frontier as locally as possible for the individual.
[00:15:30.535] Kent Bye: Awesome. Well, I look forward to hearing a little bit more in your talk today. And thanks again for your explorations in this very hot topic of this intersection between VR, AI, and the future of accessibility. So thanks for your time.
[00:15:42.897] Liv Erickson: Thank you.
[00:15:44.198] Kent Bye: So that was Liv Erickson. They're the ecosystem development lead at Mozilla, focusing on innovation in spatial computing and AI technologies, and they're the former director of the Mozilla Hubs team. So I had a number of different takeaways from this interview. First of all, I just love this idea of using spatial computing as a new mode of communication. I've certainly seen that with immersive storytelling experiences at Sundance, South by Southwest, Tribeca, Venice Immersive, and IDFA DocLab, as well as many other regional film festival scenes. So immersive storytelling, just this idea of all the different ways of using the modality of virtual reality to be able to communicate a story or an idea. And then there's this intersection with the web and the internet, and the open web in particular: pulling in all sorts of new information and data, synthesizing it, visualizing it, but also using it to be able to make an argument or to communicate. This is something that Liv has found especially helpful, as they're somebody who has been diagnosed with autism and has had this direct experience of being able to communicate with the combination of movement through a space and images and video and other people and digital agents, creating a more complex and nuanced, emotion-based experience as you're traveling your virtual embodiment through these virtual spaces, to be able to communicate all these big ideas with a lot of complexity and nuance. And so they've found it especially gratifying in their work on Mozilla Hubs. And as you start to integrate all sorts of other artificial intelligence technologies, then how can you start to add even more context, not only for the ability to communicate with other people and to provide even more context for what you're trying to say, but also to provide these different contextual domains to have contextually aware AI, which is something that Meta has been talking a lot about? But specifically in this case, it's having an AI that's very much tuned to your own needs of what you want to consume or communicate out to the world, so kind of like your liaison in a way. So I think this is something that is already starting to happen more and more as we start to see the capabilities of these different large language models and whatnot. Certainly, there's a lot of capability there, but there's also a lot of limitations: there's a lot of stuff these models can't do, a lot that's incomplete, and there are certain biases that are embedded into these models as well. So that's another point that came up: not only the power of what these systems can do, but also the ways that they could be used to spread misinformation or to cause harm as well. The ethical dimension, I think, is always kind of integrated with both XR and AI as you start to explore some of these different topics. There's a couple of blog posts that I'll link down in the show notes. One that they mentioned was “In the Era of Multi-Dimensional Computing, XR is the Future of Front-End; AI is the Future of Back-End.”
And so just like you have JavaScript on the front end and the database structures on the back end, you can use that web metaphor to think about XR as this new front end for the embodied experiences that people are going to have with the spatialization of all of this, while AI is the back end that's drawing all these different connections together. And in the context of the talk that they gave, the video will be online, so you can go check that out, along with the report back from the brainstorming session on this intersection of XR and AI. They also have a whole post on “Multidimensional Computing Accessibility in the Age of XR and AI,” which is also worth a look as they start to unpack a lot of these issues, along with the slides that they presented. So yeah, certainly this is an area that I'll be digging into much more with, like I said, a couple of dozen interviews that I've done at many different conferences, from Laval Virtual to NewImages to Augmented World Expo, plus lots of different conversations at Tribeca about artificial intelligence. And yeah, it's just a hot topic that tends to come up everywhere I go these days. So there are at least a couple of dozen interviews I'll be digging into after I finish this series and after I dig into some of the different immersive stories at Tribeca. But yeah, I'm looking forward to digging into this intersection of AI and XR, as it's certainly a ripe topic. I did a whole panel discussion at Augmented World Expo with Alvin Wang Graylin, Tony Parisi, and Amy LaMeyer. So those were some nice debates that I had with Alvin, rekindling some of the discussions that I had with him in a conversation at South by Southwest and having this dialectical debate about AI. And then I gave a whole half-hour talk on some preliminary thoughts on artificial intelligence that I'll link in this episode as well, if you want to look at some of my other thoughts about artificial intelligence and where it's going. But like I said, I'm going to be digging into a whole series hopefully here soon within the next month or so. So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.