Michael Running Wolf is a Northern Cheyenne/Lakota/Blackfeet indigenous man who grew up in Montana. He worked for Amazon, but eventually left in order to pursue his lifelong goal of building XR experiences that integrate with AI for language education and to reclaim and preserve indigenous languages. The biggest blocker is that most natural language processing approaches have a hard time dealing with the effectively infinite vocabulary of polysynthetic languages, which include many North American indigenous languages.
I had a chance to catch up with Running Wolf at Augmented World Expo, where he talked about his aspirations for researching solutions to these open problems and eventually building immersive experiences that provide a dynamic relational context that alters how indigenous languages are spoken. Also be sure to check out Running Wolf on a panel discussion about “New Technology, Old Property Laws” at the Existing Law and Extended Reality Symposium at the Stanford Cyber Policy Center along with fellow panelists including Mark Lemley, Tiffany Li, and Micaela Mantegna.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash Voices of VR. So this is episode 6 of 17, looking at the intersection between XR and artificial intelligence. And today's episode, I'm going to be talking to Michael Running Wolf, looking at an indigenous perspective on artificial intelligence. So Michael Running Wolf is a Northern Cheyenne, Lakota, and Blackfeet who grew up in Montana. He spent some time working at Amazon, but he has since gone off and is now pursuing his lifelong goal of building XR experiences using artificial intelligence for language education in order to reclaim and preserve indigenous languages. So, I first met Michael Running Wolf at the Existing Law and Extended Reality Conference at the Stanford Cyber Policy Center at the beginning of this year, where he was on a panel discussion talking about different issues of data provenance, how a lot of indigenous culture has been stolen over the years, and what the impact of that is when that type of data is being integrated into these different large language models, along with different aspects of bias and data provenance. And so that's a discussion that I'll include in the show notes that you can go watch. But in this conversation, we're going to be mostly looking at indigenous languages, what makes them unique and difficult for some of the existing artificial intelligence architectures, and his goal to have broader natural language processing support for these indigenous languages, not just to be able to preserve those languages, but also to create these immersive experiences that have these additional contextual domains that actually change the way that the language is spoken. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Michael happened on Wednesday, May 31st, 2023 at the Augmented World Expo in Santa Clara, California. So with that, let's go ahead and dive right in.
[00:01:55.298] Michael Running Wolf: I am Michael Running Wolf. I am a Northern Cheyenne, Lakota, and Blackfeet. I grew up in Montana, United States of America. And I worked in industry, formerly Amazon and other companies. And I have this lifelong goal to build XR experiences using artificial intelligence for language education. For example, I want to put on a headset and be able to speak Lakota and experience a Lakota buffalo hunt, or engage in conversations with digital avatars or chatbots in VR, as an educational experience and as a way for us to engage with technology and also reclaim and preserve our languages.
[00:02:39.822] Kent Bye: So yeah, maybe you could give a bit more context as to your background and your journey into working in this intersection of AI and XR.
[00:02:47.455] Michael Running Wolf: Yeah, so it began, honestly, with my mom. She was a Hewlett Packard engineer doing laser lithography, so she would actually make microchips. And so when I was a little kid, she taught me how to use a slide rule, all these old analog calculation tools. When I was growing up, I was very mechanically minded and I got really interested in computer science. And when I went into undergraduate for my degree, I thought, why can't we use these tools somehow for indigenous cultural knowledge? And since then, I've been pretty much pursuing this question: how do you create educational technology? Because in the United States of America, over 90% of our indigenous languages are severely endangered, where for most languages in North America only a handful of elderly speakers are fluent, which is quite unfortunate, because North America represents some of the most diverse ecology of languages, and we are at risk of losing it. So I've been focused in the past five to ten years on language technology and educational experiences. I started out with little mobile applications, and then when I discovered virtual reality with the Oculus DK1 and DK2, I did a roller coaster experience on the VR headset and I was just floored. I literally almost fell to my knees because I was just like, oh my god, I'm in a roller coaster. And I'm afraid of heights. I never thought this toy would give me that feeling of vertigo, and so I thought we could definitely use this technology for language education. Because imagine if you are participating in a buffalo hunt, riding a horse, and, you know, having to coordinate in an environment with fellow hunters in Lakota. This would be a really engaging way to teach the language, and to do that for as many tribes as possible. And the key problem was that way back then, in 2013, 2014, when the modern VR era started, the AI wasn't there. And so I kind of gave up. I did some literature reviews, I kind of dug around to see if there was any technology, but there really wasn't anything oriented toward indigenous languages. So fast forward to 2019: I was working at Amazon Alexa as a privacy engineer, doing big data work. And I randomly encountered, at a conference in Hawaii, a Maori tech team who had built their own indigenous Maori automatic speech recognition system. They were calling it the Maori Siri. And I started asking them all these different questions, like, how did you do it? You know, there are all these technical obstacles. Being familiar with how Alexa worked, I was curious how they overcame some of these problems. And they only used 300 hours of transcribed audio, with a very small text corpus. And I was astounded, because what they had done, and they didn't realize it, was a miracle with so little data and so little text. These systems normally take enormous resources: Google and Siri have a small army of people transcribing audio from the internet and audio they've collected. It's a huge undertaking; it takes hundreds of millions of dollars to build these automatic speech recognition systems. And to see a small, five-person, you know, handful, less than 10 people, build a Maori automatic speech recognition system with very high reliability was inspiring, so I pivoted my career. I took internal training at Amazon for machine learning and left Amazon.
I really enjoyed my time at Amazon, but I really wanted to pursue this idea of artificial intelligence for indigenous languages, so I pivoted to becoming a lecturer at Northeastern University, and now I'm a PhD student at McGill.
[00:06:44.174] Kent Bye: What's your PhD in?
[00:06:45.735] Michael Running Wolf: So I'm a PhD student in computer science focused on automatic speech recognition for low-resource indigenous languages, building integrated artificial intelligence into XR experiences. So there's two parts to this. Number one is building the AI, and the second part is building the XR experiences, and right now the focus is on the AI because that's where there's very limited research. There are literally only probably two, maximum three, published papers in machine learning or even in linguistics addressing how to build AI for indigenous languages. And part of that problem, and we can talk about that a little bit later, is that there's just very few practitioners of AI at all looking at this problem, or any of the problems that affect indigenous people in the AI space.
[00:07:35.038] Kent Bye: Yeah, maybe at the high level, there's obviously differences with these languages, the way that they're structured. And so what is it about the indigenous languages that makes it so hard for AI to make sense of it?
[00:07:46.122] Michael Running Wolf: Yeah, so there's a couple of key problems. Problem number one is lack of data. And we have a lack of data because there's not enough speakers of these indigenous languages. And so we're working with communities that only have 16 fluent speakers. And they're never going to create a million hours of annotated audio. So we need to come up with strategies that handle very sparse data sets. And number two, our languages in North America are fundamentally different than languages in Western Europe. And so AI like Siri or Google Assistant, they fundamentally assume languages are similar to either English or German or French. And indigenous languages are very different in that they're highly polysynthetic, in addition to being phonetically incompatible: there are sounds in our North American languages that just aren't made in Europe. But the more key problem is that the morphology of our languages is very different. Our languages in North America exhibit high polysynthesis, where words contain as much information as a phrase in English. So take "the red car." You have three morphemes there, a morpheme being the smallest unit of sound that conveys meaning, so "the red car" is three morphemes. An indigenous language in North America would turn that into just one word, and you could also embed in that word who owns that car and the relative distance, you know, like how far is the car from you? Is it within arm's distance, or is it a car across the river, way far away, and who owns that car? And so these are very information-dense languages, and the result is that there's an infinite amount of words. There are as many words in these languages as there are stars in the universe. And fundamentally, current AI assumes that there's a discrete dictionary. Like for English, the dictionary is around 50,000 words, which seems like a lot, but it's not really when compared to indigenous languages, where you have an infinite amount of potential words. And every word is constructed on the fly, given the context. Like if I'm talking with you, because of our relationality, you know, you're a friend, but you're not family, I would construct these words differently because of the context we are in. So you can't really build a neat statistical dictionary, simply because you can't. It's not possible. And so part of my PhD research is around how we deal with the polysynthesis at scale that happens in North America. And to clarify, all languages have polysynthesis. Take "the red cars": now you have four morphemes, you know, the red car plus plurality when you put the S on the end of the word. So with all languages it's a gradient; as with anything, it's not discrete. English is particularly isolating, with relatively low polysynthesis. In North America our languages are on the other end, with high polysynthesis, and there are languages in between, like German. German has compound nouns, and so they exhibit more polysynthesis than English, for example.
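To make the open-vocabulary problem concrete, here is a minimal sketch in Python of why a fixed word-level dictionary breaks down for polysynthetic morphology. The morphemes below are invented placeholders rather than real Lakota or Cheyenne, and the counts are purely illustrative; the point is simply that a handful of composable morphemes yields far more surface words than a European-style lexicon would list, while a character or subword inventory stays small and closed.

```python
# A toy illustration (hypothetical morphemes, not a real indigenous language) of how
# polysynthesis makes the set of surface words explode while the character inventory stays small.
from itertools import product

stems = ["see", "carry", "own"]                   # hypothetical verb stems
person = ["1sg-", "2sg-", "3pl-"]                 # who is acting
distance = ["-near", "-far", "-across.river"]     # spatial/relational markers
possession = ["-mine", "-yours", "-theirs"]       # who owns the object

# Every combination is a distinct surface "word" in a highly polysynthetic language.
words = {"".join(parts) for parts in product(person, stems, distance, possession)}
morpheme_count = len(stems) + len(person) + len(distance) + len(possession)
print(f"{len(words)} distinct surface words from only {morpheme_count} morphemes")

# A character-level (or subword) vocabulary stays tiny and closed, which is why
# open-vocabulary ASR models predict characters or subwords instead of whole words.
charset = sorted({ch for word in words for ch in word})
print(f"{len(charset)} characters are enough to spell every possible word")
```

Scaling the same composition up to the many affix slots a real polysynthetic verb can carry is what pushes the effective vocabulary toward the "infinite" that Running Wolf describes.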
[00:10:48.470] Kent Bye: Well, I know that in the Chinese language, there's contextual dynamics and relational dynamics for how you have to understand the full context to be able to understand the meaning. And so are there analogs or similarities of different types of either polysynthesis or relationality when it comes to, say, the Mandarin Chinese language? I'm just curious if there's similar analogs to other languages or if it's very unique with this type of polysynthesis with these indigenous languages.
[00:11:13.824] Michael Running Wolf: Yes. Short answer is yes. Like I said, all languages exhibit polysynthesis, and how they exhibit polysynthesis differs by the language. For instance, Salish-family languages in the Northwest, for example Nuu-chah-nulth, don't have nouns. They only have verbs. So everything is verbal. You don't describe a tree as a subject. What you describe as a tree is an object that's holding the ground down. And so it's like, what is the action of that item? You don't think of things as a specific object. For other languages, like Cheyenne, my mother's tongue, you describe things in terms of animacy. You're like, is something active? And it's not necessarily a concept of alive or dead, but it's more like inanimate versus animate. It imbues a different dimension to a subject, depending on the context. And it's a wide array. So when you start talking about Asian languages, which I'm not super familiar with, I imagine there's also similarity, where Asian languages have tonality. I imagine every language exhibits polysynthesis in a different dimension than every other language. And so, like with German compound nouns, you create nouns based upon different functions that you want to build into the word that you construct, if that makes sense. It's not an easy yes or no. It's a gradient, basically.
[00:12:37.853] Kent Bye: So what was the Maori innovation that they were able to have a breakthrough? I mean, did they figure out something fundamental? Or is it that you can actually do it without having to have hundreds of millions of hours or just have a sparse data set?
[00:12:50.165] Michael Running Wolf: Well, they created a highly customized technical solution. They took open source software, an open source version of Baidu's Deep Speech 2, which I believe was originally created by the Mozilla Foundation, and they completely reworked it for Maori to suit their needs and the particularities of Maori. And the key thing about Deep Speech 2, or the strategy they used, was that it was more compatible with Maori than an AI model intended for English. Because Baidu is a Chinese company, and they themselves found that AI intended for English simply does not work for Mandarin or Cantonese. And so they had to reinvent their own system. And they created very efficient functions like CTC. There's some interesting stuff in there. And that really worked well for Maori. And also, and this is just theorizing, the speculative part on my side and in conversation with them (these are actually my mentors), it's very likely that indigenous languages are easier to build technology for because they're highly regular, meaning that there's high structure and there are very clear, definite rules, versus say something like English, where there are very few rules. It's a very irregular language. For anyone who speaks Spanish or French and is trying to learn English, the rules are all over the place; there are no rules. And so it's very likely that English is the hardest language to do ASR for. And that's my hope, anyways. So the TLDR is that it's very likely there was this necessary technological breakthrough called Deep Speech 2. And number two, it's very likely that because Maori is so well structured, maybe the barrier is relatively low anyways. And I'm hoping to use that effect for indigenous languages in North America. Unfortunately, we have experimented with systems like Deep Speech 2, and they simply do not work, because the morphology is different from Chinese languages and also from Maori. Every language is unique, unfortunately, from a technical perspective. One strategy for one indigenous language won't necessarily work for another indigenous language.
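As a concrete reference point for the CTC approach Running Wolf mentions, here is a minimal sketch using PyTorch's built-in CTC loss, the training objective that Deep Speech-style systems (including Mozilla's DeepSpeech) rely on. The tensor shapes, alphabet size, and random data are placeholders for illustration, not a real Maori or Lakota pipeline; the sketch only shows how a frame-level acoustic output and a much shorter character transcript are tied together without any hand alignment.

```python
# Minimal CTC training step (illustrative shapes and random data, not a real ASR system).
import torch
import torch.nn as nn

T, N, C = 50, 4, 30  # audio frames per utterance, batch size, alphabet size (index 0 = CTC blank)

# Stand-in for the acoustic model's per-frame character scores.
acoustic_out = torch.randn(T, N, C, requires_grad=True)
log_probs = acoustic_out.log_softmax(dim=2)              # CTCLoss expects log-probabilities, shape (T, N, C)

targets = torch.randint(low=1, high=C, size=(N, 12))     # transcripts as character ids (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)    # frames per utterance
target_lengths = torch.full((N,), 12, dtype=torch.long)  # characters per transcript

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)  # sums over all frame-to-character alignments
loss.backward()                                          # gradients flow back into the acoustic model
print(float(loss))
```

Because the loss marginalizes over every possible alignment between frames and characters, training can proceed from transcribed utterances alone, without the frame-level labeling older systems needed, which is one reason low-resource ASR efforts often reach for it.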
[00:15:04.268] Kent Bye: Yeah, well, as I listen to you speak about these issues, the thing that comes to mind is that if someone is speaking an indigenous language and they want to have like a conversational interface with their technology and their technology doesn't understand them, then there's going to be an incentive for them to start to use the English language where it does work. And so you have this colonizing force of the technology that is slowly homogenizing everybody to be moving towards the erasure of some of these different languages. And so, love to hear some of your reflections on that as a decolonizing effort to create more robust technological support for these languages so that you don't have this other colonizing force of forcing people into this homogeneity.
[00:15:44.917] Michael Running Wolf: No, absolutely. I think AI and voice user interfaces are going to be fundamental for the metaverse, or whatever we call the integrated social XR. And if the dominant language is English, we're going to be imposing English upon everyone. It's not just indigenous peoples, but every other language. And right now, business is conducted in English. You know, if you go to Germany or France or Asia, business is conducted in English. And that does have an effect where diversity is being suppressed by English to a degree. Now, if you do it using XR, you have this technical barrier where you must speak English to use these technologies. And so we're either going to be left out of the modern economy or we're going to be forced to give up our languages. And so my objective, my personal objective, is to enable other language diversity, particularly for indigenous languages, in the metaverse, in the social XR space. And imagine if we were able to put on the headset, if you and I right now were wearing headsets and we were being live translated. You're speaking in English and I could be speaking in Lakota, and neither of us would know the difference. But I could exist within my reality and you can exist within your reality. You could even maybe switch it to French because you're trying to learn French or something, or Spanish. And I think there's some power there that will help us preserve our languages, or reclaim our languages, and enable our languages for new economies. And part of this goal is that I want the indigenous youth to be able to use these technologies and be comfortable and be proud of their culture in a new space. Because right now, in the modern economy, we're just now emerging from this era of boarding schools, where languages were being suppressed, where it was economically disadvantageous to speak your own language. Like, my grandmother didn't encourage her children, my parents' generation, to speak the language. And it has an effect. Like, if the preferred language is English, that's going to affect the family and the language diversity, even just at a small scale. And so if you start thinking about the metaverse, if the lingua franca is English, then this is another layer of colonization. And so I would like to change that dynamic. And part of that dynamic is also enabling the next generation of indigenous engineers and AI scientists and XR creatives.
[00:18:18.462] Kent Bye: Yeah, I'd love to hear a bit more context for how the immersive virtual reality or XR component is directly tying into this language learning. There's the relationality that you have with context-specific situations that is going to somehow change the way that language is learned or spoken, especially when it comes to an indigenous language where there is this fundamental relationality that, depending on what the context is, is going to change how something is even spoken about. So I'd love to hear about how you foresee this integration of virtual reality technologies and XR into this process of learning indigenous languages.
[00:18:55.506] Michael Running Wolf: Yeah, so I'm a former Amazonian. I like to approach things the way we did everything there as engineers, by working backwards. So the reality I envision is Lakota youth, say my nieces or nephews, putting on a headset, and they can speak in Lakota and play Fortnite or whatever game, the Roblox of the future, in XR. So how do we enable that, so that you can speak to AIs, generative AIs, in Lakota? So what we need to do, in Lakota and other indigenous languages, is obviously the XR component, and the other side is the AI component. And the AI is fundamentally greenfield research right now, and so there's no technical solution for indigenous languages in North America, and that's where my focus is. And so why do we want to do this, which kind of goes back to your question? Cognitive research is showing that education in XR is unreasonably good. I have not seen everything, there might be more, and I'm not a cognitive educator, but I've read a lot of papers, and there's a lot of research coming out that if you just do flashcards of Spanish or French education in VR, you have higher retention, meaning that if you get a lesson in French in VR using standard methodology, nothing fancy, just someone shows you a flash card and you happen to be in the VR headset, you retain more of it. You have more memorization. And so something's going on in VR where language education is really beneficial. Any kind of education generally seems to be very beneficial. There's interesting research out of Stanford where, you know, just doing 30 minutes of class in VR is very useful, even outside of, you know, math and science. It's really useful as a teaching methodology. And I would love to harness that technology for indigenous languages, because we do face a barrier where language education is very difficult, and when you're trying to learn a new language you're often very shy, and it's also hard understanding these languages that are totally different. There's a lot of cognitive science where VR is very useful. When you're learning a second language, you have all the inhibitions of the first language you're speaking. Like say it's English, and you're learning Lakota or Cheyenne: your mouth muscles are not structured for the new sounds of Lakota. There are a lot of sounds that happen in the back of your throat, and so you need to train for that. And you feel shy when you're making these new sounds. And if you make the wrong sound, you change your word. Like the word for grandmother in Lakota is very close to the word for poop. And so if you pronounce the wrong phoneme or sound at the wrong time, you accidentally will say the word poop instead of grandmother. And that's because the morpheme gets swapped out somewhere in the phonetics. And so you're shy. You're just going to be worried: I don't want to accidentally call grandma poop, you know. And so you have the personal inhibition. But if you're just talking to an AI, like putting on a headset and talking to a Google Assistant in Lakota or something, there's going to be a whole range of benefits where you can just sit there and get to training, feel comfortable, and have a neutral, patient AI judge you and say, you missed this, you should say that better. And, you know, the cool thing is that I'm not blazing a new trail. The Maori technical team, Te Hiku Media, they've actually done that. They've actually built pronunciation models, because this is something of a problem.
So I'm really just sort of following in their tracks. That's where I want to be eventually, but we still need to get to the point where we're building the ASR model and have the fundamentals down.
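For a sense of what the pronunciation-feedback piece could look like once an ASR model exists, here is a small sketch that compares a target phoneme sequence against what a recognizer heard and points at the substitution. The phoneme strings are invented placeholders, not real Lakota transcriptions, and the recognizer output is assumed to come from some upstream model.

```python
# Toy pronunciation feedback: diff the target phonemes against what the recognizer heard.
# The phoneme sequences here are invented placeholders, not real Lakota.
from difflib import SequenceMatcher

target = ["u", "n", "ch", "i"]      # phonemes of the intended word (hypothetical)
produced = ["u", "n", "k", "i"]     # phonemes the ASR model recognized (hypothetical)

matcher = SequenceMatcher(a=target, b=produced)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op == "replace":
        print(f"Expected {target[i1:i2]} but heard {produced[j1:j2]}: try that sound again")
    elif op in ("delete", "insert"):
        print(f"{op}: expected {target[i1:i2]}, heard {produced[j1:j2]}")
```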
[00:22:43.223] Kent Bye: Yeah, and so I guess what type of experiences do you want to have when it comes to this integration of language recognition for indigenous languages and XR? I mean, if you already speak the language, but are there other things that you would like to do personally and experience?
[00:22:57.914] Michael Running Wolf: I personally would love to do a social game, you know, like Pokemon Go or something, in indigenous languages. There's some work already being done like that: I have some colleagues who are doing a digitized DC, a friend who's building AR experiences that remap Washington, DC, to how it was when the original indigenous people were there. But I'm also excited to see things that I can't imagine now. And I'm kind of going to pivot here a little bit. Next month, me and some friends are putting on a Lakota AI Code Camp, and I can't wait. What we're doing there is, over the course of three weeks, we teach basic AI, basic computer science, basic data science, basic machine learning, and I introduce them to using Unity to build an AR app. And I can't wait till they're able to begin building their own XR experiences using AI. I can't imagine what they're going to do, and I'm excited to be delighted and surprised by what they're going to do with this technology. Because right now I'm focused on the fundamentals. At the end of the day, I'm just trying to build an SDK or an API for them to use. And in the future, in four to five years, we're going to start seeing Lakota engineers creating their own XR, going in directions that I can't even imagine right now, building upon the work that my friends and I are doing right now.
[00:24:29.580] Kent Bye: I did an interview with Daniel Leufer of Access Now where he was talking about the AI Act that's happening in the European Union. And one of the things he said is that oftentimes people who are trying to promote certain artificial intelligence applications will say something along the lines of, well, it works for 95% of the people. And oftentimes it's the 95% of the people that are maybe speaking English or maybe white, and you have underrepresented minorities that are experiencing a disproportionate amount of harm because it doesn't work for them. And so it sounds like you're speaking directly from that perspective. And so I wonder how you counter some of that type of utilitarian reasoning when you hear these applications that are sent out. And there's this dilemma often within AI, which is that you often have to ship things before you understand what harm is done. But sometimes you know what the harm is before you put it out. But still, there's this dynamic where progress happens when you put stuff out and you get feedback. And yet, sometimes the harm that comes from doing that means that the people in that 5% that it doesn't work for are experiencing different levels of harm. So I'd love to hear a bit of how you make sense of that dynamic, of both the fact that sometimes there does need to be that iteration to have progress, but also how do you deal with those harms that are being done?
[00:25:45.754] Michael Running Wolf: I guess I would push back on one point. You know, with AI or technology, even just technology, let's just talk about the internet. The internet only reaches a minority of humans. And when you talk about very high speed broadband, it's a very small minority of the world that even has access to that, even in North America. And so when you say 95% of people this will work for, you're really only talking about people who use the internet, who have electricity, who have the ability to afford a $1,000 iPhone or Samsung or Pixel. And you're talking about a very affluent population that probably makes up less than 20% of the global population. So when you say it only works for 95% of the people, you're talking about 90% of a minority on the global stage. And now you talk about indigenous peoples, and we're at the bottom of that. That 95% doesn't exist. That's an illusion, because they only interact with other technologists, affluent individuals. But what about rural indigenous people who are in South America or Africa or Asia or even Montana? In the United States of America, most Native Americans do not have access to broadband of any type; roughly 20 to 40 percent of us do. There's some variance in the statistics between the Indian Health Service, the census, and research the NSF has done, and they kind of disagree on the exact figure, but basically less than 40% of us for sure, and maybe as low as 20%, have access to the internet at all, like on a cell phone, through 4G, 5G or whatever. And only about 20% of us have access to a computer and the internet at the same time. Most of us experience it on a mobile phone, on a cell phone plan with a limited data plan. And so we just have a fundamental problem of disproportionate access to technology. Now you start talking about XR and AI, and it's currently the domain of the affluent. And so we're kind of in an echo chamber with high technology. And so what I'm advocating for is that we need to enable under-resourced communities to participate, so that we can begin to participate in the development of technology through things like the Lakota AI Code Camp I talked about earlier. Fundamentally, we need to participate. We need talent to build this technology and contribute our voices. And we also need to transform our economies. You know, COVID has demonstrated that we can work from home. You can work from rural South Dakota, rural Montana, and still work at a Fortune 500 company, because if you have decent internet, you can work anywhere in the world. And so we need that policy change, unfortunately. But kind of back to your question, and I'm sorry, I kind of diverted a little bit. I think the fundamental issue is that we're just not hearing from the bottom 80% of the world population, and we need to start enabling and uplifting these communities so that we can participate. Because from a scientific perspective, like with indigenous languages, it's a huge gap in the science, and most languages in the world are highly polysynthetic. It's just a small minority that dominates the technological space, which is basically the colonizers, the West. Western Europe, North America, they dominate technology. They only talk to themselves, and they don't see the rest of the population in the world. And there's this fundamental science, beyond linguistics, that's not being conducted because we're only addressing the problems of the affluent 95%. I guess I challenge that whole concept.
That's an overcount. Yeah.
[00:29:37.632] Kent Bye: Well, when I ran into you at South by Southwest, you said that you just launched a new lab. Maybe you could talk about that lab that you had started back in March.
[00:29:45.565] Michael Running Wolf: Yeah, pretty excited. So I am going to school at McGill University, and McGill, alongside UdeM, the Université de Montréal, has a research lab that was founded by Yoshua Bengio, who's a big name in AI, and others, and that's called Mila. It's a research institute in Montreal, and part of the AI for Humanity, AI for Good, I forget the exact term, sorry. It's being supported by them, and we're building the First Languages AI Reality initiative, or FLAIR. Part of this initiative is to start documenting low-resource languages in an ethical way, in accordance with indigenous knowledge, so that we can start the AI pipeline. And once we start collecting data, we're also going to start conducting foundational research for natural language processing for indigenous languages. And so we're at the very beginning. Actually, in three weeks, I'm going to have interns; we have some kids from MIT, from Stanford, and from McGill. And they're going to help me construct the very first beginnings of an entirely indigenous effort to build natural language processing technology, automatic speech recognition technology, for indigenous languages. And these interns represent communities that are Cree, Diné (Navajo), and from across North America. Basically, we have people representing communities across all of North America. And I'm pretty excited that we're going to bring in these future indigenous computer science practitioners, you know, undergrads, to contribute to foundational AI science.
[00:31:27.206] Kent Bye: Awesome. And finally, what do you think the ultimate potential of virtual reality, XR, and AI might be, and what might it be able to enable?
[00:31:37.597] Michael Running Wolf: This is like a big, big question. Is that what it is? I think it's going to transform how we live and how we conduct business. Yes, we're in the early stages and we've had a lot of setbacks, but that's true of every new technology; this technology is pretty young. How long did it take computer science to really take off? It took decades. This whole idea of a thinking machine running calculations took a long time to take off. In the coming decades, this is going to be fundamental to the Western economy, similar to how the internet is. You can't imagine a life without the internet. Can you imagine the 1990s, when you didn't have to worry about email? Or iMessage or chat, WeChat and all these things. And it's going to be similar. I believe that AR, XR, and social XR experiences are going to be fundamental to how we live our lives and conduct business. And I think it's critically important, like I mentioned before, that underrepresented communities participate in this technology, and also participate in a meaningful way where they can represent their authentic selves using their own languages, using their own cultures, and participate in a respectful way with other cultures and invite allies to build these new communities.
[00:33:01.459] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the broader immersive community?
[00:33:05.885] Michael Running Wolf: Yeah, I think it's important to be a strong ally for underrepresented communities, particularly indigenous. We are some of the least served communities in the world, indigenous communities in Asia and indigenous communities in Africa. There's just a lot of work that needs to be done. We need electricity. We need water. A lot of communities in America don't have water, like in the Southwest. We're running out of water. America is running out of water. The Colorado is being drained. And who's being most impacted? Indigenous communities who are trying to preserve their food ecology. And we're most impacted by climate change. We're most impacted by a lack of representation and technology. And so I would say just be an ally and look around where you are. If you're in the United States or in North America, you're on Indigenous land and there's a community nearby that would love your help.
[00:34:04.059] Kent Bye: Awesome. Well, thank you so much for sharing all your latest explorations and your aspirations for where this can all go in the future. And thanks for sharing all your perspectives. So thank you.
[00:34:13.686] Michael Running Wolf: Thank you very much. Thanks for the opportunity.
[00:34:15.707] Kent Bye: So that was Michael Running Wolf. He's a Northern Cheyenne, Lakota, and Blackfeet who grew up in Montana. He worked for Amazon for a bit, and he's now pursuing his lifelong goal to build XR experiences using AI for language education and to reclaim and preserve indigenous languages. So I have a number of different takeaways about this interview. First of all, I'm really happy that I had a chance to catch up with Michael Running Wolf, just because this is a conversation that I had right before going out on stage and talking at the Augmented World Expo on a panel called The Intersection of AI and the Metaverse: What's Next? So as we were in the waiting room, I just had a chance to catch up with Michael again, since I first met him at the Existing Law and Extended Reality Symposium that happened at the Stanford Cyber Policy Center in January. So yeah, I just really appreciated the insights and perspective of Michael, thinking about not only the unique polysynthetic nature of these indigenous languages, but also some of the unique machine learning architecture that's going to be required in order to more adaptively understand and discern some of these different languages, especially when there's not a fixed dictionary. There's literally an infinite number of words, and so it has to take these phonemes and find new ways of making sense of how all these things are being combined together with a very strict rule set. While he was working at Amazon, he encountered a group of indigenous Maori folks who had created a pretty robust and effective model for the Maori language, and that was an inspiration for him; at that point he decided to do a career pivot and go full-on into computer science and machine learning, looking at how to create these machine learning architectures to be able to understand indigenous languages. It's also really fascinating just to hear how there's a contextual relationality when it comes to some of these different indigenous languages, such that what is spoken depends upon your positionality, your relationships, and space and time, along with all these other relationships that are embedded into the language itself. And as you start to have these immersive virtual reality experiences, you can start to simulate some of these different unique contextual dimensions, and the language might be adaptive and changing based upon whether things are on the left or right, and the distance, and all this stuff that is embedded into the positionality and spatial awareness of some of these different indigenous languages. XR as a technology is going to be able to fuse all these unique elements together, but also to help reclaim and preserve these indigenous languages, especially as the colonizing force of English-only types of technologies pushes out all these other languages so that they go dormant. "Going dormant" was language that came from Lena Herzog, the wife of Werner Herzog, who created a piece called Last Whispers that was visualizing language extinction. I had a chance to talk to her at DocLab back in episode 851 about creating an immersive experience to visualize the erosion of these different languages. So yeah, language diversity is something that these different immersive technologies, both XR as well as artificial intelligence, have the capability to start to help preserve. However, there's still a lot of technological innovation that has to happen in order for that to take place.
So yeah, I always appreciate hearing about these indigenous perspectives on artificial intelligence. And there's often this connection to process-relational philosophy as well: as I think about indigenous philosophy, the principle of all my relations is something that runs deep. And I think the technologies of both XR and AI also have this relational nature to them. So I feel like in some ways, these emergent technologies are trying to create this paradigm shift that is moving more into this process-relational mode of thinking. And I think that as these architectures are figured out, then that's going to reinforce these different relational ontologies as well. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue making this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.