I’ve spoken to Whitehead scholar Matt Segall a couple of times before, and I was really keen to catch up with him to hear his process-relational and process philosophical takes on AI, intelligence, and consciousness. I was responding to a piece he recently wrote titled “The Philosophical Implications of Artificial Intelligence” (full citation below), and it’s helped me to ground a conversation about AI that sometimes feels like it easily gets untethered from reality, or just collapses the complexity of humans down into computational machines. There are a lot of pernicious metaphors that reductive materialists have about what humans are, and so this conversation helps to recenter the AI hype conversations into a much more process-relational context.
Segall, M.D. (2025). The Philosophical Implications of Artificial Intelligence. In: Hoffmann, C.H., Bansal, D. (eds) AI Ethics in Practice. Integrated Science, vol 35. Springer, Cham. https://doi.org/10.1007/978-3-031-87023-1_8
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So I'm going to be on my way out to the Augmented World Expo, and on the last day, on Thursday at 2:45 p.m., I'm going to be on the main stage for a Socratic debate around the future of immersive technology, specifically some critiques that I have around AI hype and things that are dehumanizing, in a way that there's a lot of things about AI that are just really concerning for me in terms of the direction that it's heading. And so I wanted to sit down with one of my favorite philosophers, Matt Segall, who's a process-relational philosopher. And I wanted to have him elaborate on an article that he wrote on a process-relational philosophy of artificial intelligence, looking at both intelligence and consciousness, but also some of the metaphors that we're using. But more than anything else, I feel like artificial intelligence is an opportunity for us as humanity to reflect on what it means to be human, and what the different types of systems in our society are that are working or not working. And so I want to get his take on that, but also some of the more embodied, relational, contextual, and agential ways that we can look at intelligence and what it means to be embedded within the context of a system and a process-relational mode of relationality, and trying to really come back to this emphasis on relationality when looking at artificial intelligence and how AI can help elucidate some of these concepts of process-relational thinking. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Matt happened on Friday, May 23rd, 2025. So with that, let's go ahead and dive right in.
[00:01:50.137] Matt Segall: Yeah, good to be here, Kent. So I'm Matt Segall, and I'm an associate professor in this wonderful graduate program called Philosophy, Cosmology, and Consciousness at the California Institute of Integral Studies in San Francisco, though that's mostly an online program at this point, so we are planetary. And I do process philosophy, and I try to apply process philosophy across as many different disciplines as I can, the natural sciences, the social sciences. But primarily, I would say I study consciousness and its place in the evolutionary history of the universe. That's kind of my orienting frame for all of the work that I do.
[00:02:35.612] Kent Bye: Okay. And so today, we're going to be talking around the philosophy of AI, specifically from a process-relational philosophy perspective. And so I'd love to maybe have you give a bit more context as to your background and your journey into philosophy, but also, you know, being based in the Bay Area and starting to look at the intersection between emerging technologies like AI and more of the philosophical implications.
[00:02:59.375] Matt Segall: Yeah. So I started as an undergrad studying cognitive science. I was at the University of Central Florida and studied with people like Sean Gallagher, who, if you're in CogSci, you'll recognize as someone who's contributed a lot to the embodied and enactive point of view. And so very early on, I was exposed to the relevance of embodied phenomenology for the study of cognition and consciousness, which is still in contrast to the mainstream in cognitive science, which is predominantly computationalist, which more or less imagines the mind as the software of the brain, which is the hardware. And that metaphor drives so much of the research to this day in the cognitive sciences. And so when I graduated with my undergrad, I knew I wanted to study consciousness. And this was in 2007. And it was just beginning to become a legitimate academic subject of inquiry. There was the Journal of Consciousness Studies, launched, I think, in the early 2000s. You know, Chalmers had framed the hard problem in the mid-90s, but it took a little while for that to really start to reshape the field. So I ended up finding this school out in San Francisco, CIIS. I got my degree there. I wrote my dissertation on the role of imagination in the philosophy of nature, trying to take seriously imagination as a medium that connects us as human knowers to the very same cosmological powers that gave rise to our species, that give rise to stars, and that give rise to organisms. And so imagination becomes a kind of way of knowing, but also one with ontological grounding to it, because we're natural creatures, we evolved, and our inner experience is as much a part of this universe as anything else. But actually, rather than think of imagination as just inner, I tried to argue that it's, again, this portal that opens out onto the cosmos, that bottoms out into being and becoming as such, so that we experience the creative process that gives rise to the universe within our own imaginations, right? And so if we can begin to cultivate imagination as an organ of perception, we can do science in a different way, rather than just creating abstract models and technological means of testing those models, which, you know, has been very productive for science for a few hundred years. But there's another way of knowing that might put us in more intimate contact with the world in its concreteness, right? Rather than just modeling it abstractly, we might be able to feel the interiority of the world directly. And that might, if it doesn't change scientific practice, and I think science will always be in the business of model making, it might change how we interpret scientific findings. It certainly would change how we understand these machine learning technologies and digital computation, not as something that could or should be ontologized. At least with information, say, I think we need to proceed very carefully and not lose sight of how the metaphors migrate from helpful model to the sort of misplaced concreteness, in my philosophical hero Whitehead's sense, that would lead us to say, oh, well, maybe the physical world is itself just information processing. I think we need to slow down a little bit and unpack that. But anyway, that's part of my journey and what I'm interested in and how I'm oriented to these sorts of questions that we'll be digging into today.
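The "logic gates and binary code" description of neurons that Matt alludes to goes back to the McCulloch-Pitts model. As a minimal sketch of just how thin that abstraction is (my illustration, not anything from the paper or the conversation): a neuron reduced to a binary threshold unit, which, with the right weights, behaves exactly like a logic gate.

```python
# A minimal sketch (editorial illustration, not from the conversation) of
# the McCulloch-Pitts abstraction: a 'neuron' as a binary threshold unit.
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# With the right weights and threshold, the 'neuron' is a logic gate.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], 2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], 1)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```

The simplification is useful for engineering, which is Matt's point: the trouble starts when the model migrates back and gets taken for what neurons, or minds, actually are.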
[00:06:33.206] Kent Bye: Yeah, well, you posted to your Substack an article that you had written for a book on AI ethics that is called "A Process-Relational Philosophy of Artificial Intelligence," with the subtitle of "Why the Idea of Conscious Machines Is More Advertising Gimmick than Advanced Technology." And so I just did an interview with Emily M. Bender, as well as Alex Hanna, and they have a book called The AI Con, which is deconstructing a lot of the AI hype that's happening right now. And they make some similar arguments that you're making in your paper, specifically that AI as a term is more of a marketing term than actually describing a coherent set of technologies. And so they reference Dr. Jonnie Penn's PhD thesis called Inventing Intelligence, going back to the very origins at the 1956 Dartmouth conference on artificial intelligence, where AI was coined as a term. And even back then, there wasn't a coherent set of technologies that AI was describing. But by invoking human intelligence, you're able to project onto these technologies way more capabilities than are there. And you also start to anthropomorphize them. And so as we start talking around this, I'm very struck by the metaphors that we're using, and also the philosophical presuppositions around the nature of what humans are, whether we can be seen through the metaphor of a computer and a machine, versus more of a process-relational mode that sees things in a relational, contextual way. And so I'd love to have you maybe unpack that a little bit more in terms of the existing reductive materialist type of metaphors that are really dominating our framing of these questions, but also how a more process-relational perspective can open our eyes a little bit more to other ways of looking at this.
[00:08:14.781] Matt Segall: Yeah, I think, you know, the way that language works, we often lose track of how common turns of phrase were once living metaphors that a poet had to imagine. And then it just becomes more mundane, the sort of thing that we use day to day without even thinking about it to describe the world. The brain-mind equation and the metaphor of computation for what mind or intelligence is does, I think, come out of this DARPA research in the 50s. But then it really takes off when journalists start asking the computer scientists and the cyberneticians, you know, what they're up to. And when you hear a scientist say, oh, well, obviously the brain is a computer, as a journalist, it just becomes a really handy way of explaining the research that's being done and the technologies that are being created, because they're not experts. None of us are experts in the technical details of what's really going on here with the technology, how neural networks operate, and what's involved in a sort of abstract description of what neurons might be doing in terms of logic gates and binary code. And what becomes a very useful simplification for the purposes of designing a new kind of computation, like machine learning, ends up, when the journalists get their hands on it, turning into a whole worldview. And it became very easy for us to start thinking of ourselves as sophisticated computers, you know. And if you don't have any kind of widespread religious or sort of mythic container in a secular society, that vacuum very quickly gets filled with a new kind of technological mythology. And this has been, I think, ramping up for decades now. And the technology, I think, is finally at a place where it's not just metaphors anymore. People can interact with these large language models. People are falling in love with these large language models. And the general public is now being directly confronted with the, I think, quite astonishing advance of this methodology and its technology. And it's really, I think, having tremendous psychological and cultural effects, which, again, have been simmering for decades already. But my question here is not can machines think or can machines become conscious? I'm not interested in that question except to critique it as very confused. My main question in this chapter that you're referencing is really what kind of beings do we become as a result of adopting these machinic metaphors and as a result of becoming ever more entangled with these technologies. It's changing our consciousness, because our consciousness has never been something simply sequestered inside of the brain. It, and our intelligence as well, has always been extended into and augmented by the tools that we have used, going back to harnessing fire and, you know, stone axes, to the development of language, oral speech, and the first writing alphabets, and so on. So we've always, in a sense, had an artificial intelligence because of the extension of our minds into these tools. And that's really being dramatically accelerated by these new large language models and all sorts of other machine learning techniques that give us access to correlations in huge data sets. And there's a very slippery slope between having that kind of abstract correlational understanding... Well, the machines have this abstract correlational understanding. We don't quite know how they do it, but they provide us with some insight that's otherwise obscure.
You sent me this paper on a neoplatonic reading of big data science. And I think the slippage occurs when the machines give us this sense of a correlation in this huge data set that we then assume is causal and that we ontologize. I think we really need to slow down there and recognize that what the machines have access to, what AI has access to, say these LLMs, is a purely symbolic excretion from the whole history of human articulation of thinking, feeling, and willing into words. All the LLM has is the words. It doesn't have the deeper imaginative, emotional, embodied sense that all of those words came out of, right? And so to think that just a bunch of algorithms and neural net weights trained on the mere words would somehow become conscious is deeply, deeply confused to my mind. But that confusion is changing how we experience our own consciousness. So again, how we are being changed by these technologies is my real question. And I'm quite alarmed by what I see happening.
[00:13:23.513] Kent Bye: Yeah, we're definitely in a time where it feels like there's a polarization between true believers and skeptics. And there's all these projections of the story of AI, where we're really at the peak of a hype cycle that seems to persist in a way that, even despite all of the downsides and limitations of the technology, that's sort of ignored for this perception that we have. In talking to Emily M. Bender, who's a computational linguist, she's explaining how this illusion comes from the structure and form of language, that these systems are just looking at the form and not the meaning or how it's, you know, related to a relational context for what that means. And in your paper, you're talking around relevance realization, which, you know, to me is like, as you're taking in all of these inputs, how do you determine what's the most salient? And for these large language models, it's just looking at these statistical patterns amongst thousands and millions and billions of web pages and data, which then mimics our language in a way that we perceive as human intelligence, and we're projecting onto it deeper meaning when it's really just a set of random words that is close enough to create a proxy that gives us the illusion that it's a lot more capable than it actually is. So I'd love it if you could break down a little bit more of this concept of relevance realization and the non-computability of that, and just these concepts of embodied cognition and how being embedded within a context is such a key part of both cognitive science, but also more of a process-relational view that goes above and beyond the default philosophical orientations.
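To make the "form, not meaning" point concrete, here's a toy sketch (my illustration, not anything from Bender's book or Segall's paper): a bigram model "continues" text using nothing but word co-occurrence counts, with no access to meaning, reference, or context.

```python
# A toy sketch (editorial illustration) of form-only language modeling:
# a bigram model that continues text using nothing but co-occurrence
# counts. It has no access to meaning, only to which word followed which.
import random
from collections import defaultdict

corpus = ("the mind is a process . the brain is a body . "
          "the body is a process . a process is not a machine .").split()

# Tally which word follows which in the training text.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def continue_text(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample purely from form statistics
    return " ".join(out)

print(continue_text("the"))  # plausible-looking form, zero understanding
```

Real LLMs are vastly more sophisticated than this, but the objection Kent is summarizing is that the training signal is still distributional form.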
[00:15:04.119] Matt Segall: Yeah, so I drew a lot on this article that came out, I think, late last year, by Johannes Jaeger, Anna Riedl, John Vervaeke, and others, "Naturalizing Relevance Realization: Why Agency and Cognition Are Fundamentally Not Computational." And, you know, John Vervaeke has developed this idea of relevance realization to point out the ways in which what's known as the frame problem really creates this important distinction with living organisms, which have evolved, and co-evolved really, with their environments over the course of billions of years as a result of their capacity to solve the frame problem, which is: what is the context within which my action might be relevant? To sort through all the possibilities for a behavior which could occur next, or to sort through just the perceptual field for what matters to my survival and to my needs and desires... if you were to try to translate what an organism does pretty easily every day and every moment into some kind of algorithm, it's turned out to be quite impossible, computationally speaking. And one of the reasons machine learning has made such huge strides is that they're not trying to program in advance anymore how to navigate a particular environment. They're kind of letting the machines train themselves. But then you end up in a situation where you're not really sure what the machine knows or what has been encoded in its neural network. It doesn't lend itself to us being able to really understand what goes on in that black box. But nonetheless, it does allow these machines to mimic what would appear to be more adaptive behavior in novel environments and so on. But the thing is, if you've played with LLMs long enough, or if you dig into what some of the limitations of these machine learning technologies are, it turns out that because there's that lack of billions of years of embeddedness and co-evolution with environments, as is the case with organisms, often these machines will make very stupid mistakes. I made a video last year after my Roomba, for the second time, ran over some dog poop. And it's supposed to not do that. And I know that's kind of a silly example, but the LiDAR is supposed to detect that, and it's not supposed to run over the poop. And the way that a cat can navigate through a litter box without stepping on its own poop, it's not a big problem for it to solve. And yet for machines, it's very difficult to get that right. And that's just an anecdote that I think speaks to the precautionary principle we should bring here before we allow these machine learning systems to start making really important decisions, imagining that they could make judgment calls just based on large data sets and patterns that they see in the data. There's not actually an understanding of anything. There's not actually the ability to make a judgment that is fully taking all of the context into consideration. How organisms do this is an outstanding question. The name relevance realization is a description of something organisms do, but it's not necessarily an explanation. Vervaeke will talk about opponent processing and the way in which, in Whiteheadian terms, organisms are able to turn conflicting data into contrasts, which allows for decisions to be made. You don't get locked into a binary of this or that.
Organisms are able to harmonize conflicting data in a way that allows for some synthetic decision to be made that contributes to a whole history of learning and adaptation that the organism is constantly building on, and so on. And so, yeah, I think a framework like relevance realization is really important. I think of it as a kind of placeholder because, again, it doesn't strike me as an explanation for how organisms do what they do, but it makes clear the difference in capacity and the ability to solve the so-called frame problem. That's a hard one, just as hard as the hard problem of consciousness, I think, to see how the sorts of common navigation skills that organisms use to make their way through environments turn out to be really, really difficult for machine learning systems.
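One crude way to see the computational teeth of the frame problem (my illustration, not an argument from the Jaeger et al. paper): if an agent could attend to n features of a situation, the candidate "relevant subsets" number 2^n, which outruns brute-force evaluation almost immediately.

```python
# A crude sketch (editorial illustration) of why brute-force relevance is
# intractable: with n candidate features, the possible 'relevant subsets'
# of the situation number 2**n.
def candidate_frames(n_features: int) -> float:
    """Count of possible feature subsets an agent could deem relevant."""
    return float(2 ** n_features)

for n in (10, 50, 100, 300):
    print(f"{n:>3} features -> {candidate_frames(n):.2e} candidate frames")
# At 300 features this already dwarfs the ~1e80 atoms in the observable
# universe, so whatever organisms do, it is not exhaustive search over contexts.
```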
[00:19:19.330] Kent Bye: I'm really struck by how virtual reality, XR, mixed reality, augmented reality is something I've been covering quite a bit for the last 11 years. And so we've had XR and AI co-evolving as sibling technologies, where they're feeding off of each other in a lot of ways, just in terms of creating these virtual environments and video games to train the AI, but also AI and computer vision have helped to develop so many aspects of these XR platforms. And as I've been covering XR, I've been trying to look at it in terms of new metaphors that we can look at the human experience through, like the different levels of presence, whether that's active presence and your sense of agency; then your mental and social presence, where you have an ability to make sense of the world with your mental models, but also the social dynamics and the relationality of the social aspects; then your emotional intelligence, the way that you're relating to the world; and then embodied and environmental presence, so your sensory experience, the way that you are a body in relation to the world, but also in the context of an environment, and how the environment is feeding into all of these. And so as I look through the lens of presence, I see a lot more evolved metaphors for looking at intelligence: the agentic, with your behaviors and your agency, which is a little bit more of the fire element; then the air element of the kind of mental constructs and the way that you're making sense of the world, but also the relationality of relationships with people; then the water element of emotional presence, the way that you're using that as a baseline to make sense of what's important and what's not important; and then the earth element of embodied and environmental presence. So as I read through your paper, I'm seeing that you're talking around this evolutionary context of intelligence and how these organisms are in relationship through their sensory and behavioral intelligence. And even emotions seem to be a pretty important aspect, especially for Whitehead with this idea of the lure. But it could be that the emotions are actually the thing that is helping with that relevance realization. And so I'd just love to hear some of your reflections on expanding out these metaphors that go beyond just this collapsing of intelligence into this kind of abstract mental air element type of thing.
[00:21:32.344] Matt Segall: Yeah. Well, I'll pick up on the emotion piece, because that is a key part of Whitehead's understanding of reality. And he really wants us to get out of this way of... we tend to sequester emotions deep within us, as if a feeling is just something subjective, merely subjective. And Whitehead invites us to understand emotion actually as a means of perceiving the world, and in particular perceiving the interiority of the other living beings around us. And so rather than think, as the usual computationalist understanding of human cognition would have it, that we have to sort of reconstruct a theory of mind to try to imagine what might be going on inside of another person... it's not that we don't do that in a secondary kind of a way, where we get a more propositional kind of understanding of what another person might be thinking. We do that. But beneath that, before that, as a primary means of being present to the world, we feel, and we enter into resonance with the emotional state of others, you know. And so Whitehead has these great lines where he says, like, you know, a young man does not begin dancing with a collection of patches of color and then proceed to conjure a dancing partner. No, there's an emotional connection that human beings have with one another that gives us a sense that there's a presence there, there's a being there, and we have this sense of mutual concern for our beingness as emotional creatures. And so we enter into that shared form of resonance. Now, there's no question that these machine learning systems can learn to mimic presence, and in fact to read the micro-expressions on our faces in such a way that they know, or can guess, what we might be feeling. And I think the real existential question here, and it's a question I struggle with, is, you know, at what point does the mimicry become so good that we as human beings can't tell the difference anymore? We're almost there. But then I have to admit that when we think of our own human experience, when we're really authentically connecting with someone, it's great. But there are certain situations we enter into where we're tempted to fake it. And sometimes we don't even quite know how we're feeling when we're asked; it can become quite hard to articulate our own inner emotional states. And we start to wonder, you know, how malleable our own inner lives are, depending on the way the questions are asked, depending on how we want to appear. We start to construct our own emotions. And there's a lot of work on how emotion is sort of constructed in this way as a result of our own self-conscious reflection upon it. And so I'm not trying to erase the difference between machine mimicry of emotional presence and actual human emotional connection. I think we need to really pay close attention to the difference between those. But I also acknowledge, and this is where the danger is, that there's an element of construction to our own inner emotional lives, you know, and that when we do turn our own consciousness inward and begin to reflect upon our own emotion, it becomes difficult to know whether in some cases we aren't trying to mimic the sorts of emotions we're supposed to be having in a given situation, right? That's just the nature of being a self-conscious organism. And so that means the temptation to collapse this difference between machine mimicry and genuine human emoting is quite strong. And that's just an area where I think we need to continue to remain vigilant. And there's something...
I don't know what the word is... uncanny, I guess, about human beings beginning to develop romantic or even just friendly sorts of relationships with machines that, you know, at this point, it's pretty clear are just mimicking, and there's not anything inside them that might be similar to our own emotional experience. But at the same time, if I'm a pan-experientialist or panpsychist, you know, I can't totally close the door on the possibility of some future architecture, particularly when we're talking about a cybernetic organism that has biological cells grafted onto microprocessors. That's a whole different question, in terms of whether machines can be conscious, when sentient cells are grafted in as part of the architecture, and how that might scale up into a new form of machinic, cyborg kind of new species at that point. And I know that things will move in that direction, and there is already a lot of research there. And so I'm not trying to totally close the door on the possibility that we might birth a new species, some kind of cybernetic organism that we will be able to enter into authentic intersubjective relationship with, right? But I think we need to proceed very carefully, because there is such a temptation to fool ourselves. We can fool ourselves about our own emotions. We can fool ourselves about other people's emotions because we wish they were feeling a certain way about us or about something. And so, just because of the messiness of our own human psychology, I think there are a lot of ways in which this can go very wrong and dystopian, where we lose touch with other human beings and just have machines that reflect back to us what we tell them to reflect back to us, so that we're in total control of a relationship that's not really a relationship, because it's just a machine that mirrors back what we want, instead of a real human being that doesn't always mirror back to us what we want, because human beings have their own needs and drives and so on. So yeah, that's just some of the complexity here when I think about what presence might mean and how these technologies are advancing.
[00:27:38.732] Kent Bye: Yeah, and I want to get back to some of the other elemental aspects that I think you're unpacking more, but I want to dive in a little bit into this question of consciousness, just because it came up in the moment. And I know that there's different views of whether or not consciousness is fundamental, or whether it's an emergent property of physical matter, which I think we both don't put a lot of weight into. And so you have these more enchanted views, like panpsychism or pan-experientialism, that tend to see consciousness as embedded into, like, everything. And so when you start to think about AI being conscious, there's a part of me that is resistant to giving it any more credibility than that it's just these computational things that are happening. And I'm trying to resist that anthropomorphizing that can give it way more capabilities than are actually there. But yet you have these more, let's say, enchanted views, whether it's idealism or panpsychism or animism, that are maybe looking at it in terms of, like, this is an interface for our consciousness, that it can be more of, let's say, a divinatory process, like interacting with the Tarot or the I Ching, where you're asking a question and it's perhaps subtly picking up from the probabilistic sets of inputs that it's taking in. Maybe there is a way that our consciousness could be interacting with these systems, so that when we are asking it questions or engaging with it, it is somehow an interface to a much more enchanted view of the cosmos. And so I'm wondering where you fall on that, because you've kind of got a foot in each of these different worlds of animism, panpsychism, pan-experientialism, these more enchanted views, while also recognizing the risk of anthropomorphizing these technologies and projecting more of these qualities onto them than are actually there. So I'd love to hear some of your thoughts on that.
[00:29:25.236] Matt Segall: Yeah. I think also, besides these enchanted views like panpsychism, occultism and demonology have become much more immediately relevant given these technological advances. And I think a lot of the engineers are themselves turning to... to understand what they might be summoning in these AI systems. Because, yeah, I think once you break out of the reductive materialist ontology and start to consider that consciousness or experience or feeling or emotion, subjectivity, is not the sort of thing that could just get squeezed out of a bunch of particles that arrange themselves in the right way... it just doesn't make sense to think that you could get experience out of pure extension. So we need a new ontology. And I think Whitehead's process-relational ontology is one of the more viable contenders for this. But, you know, there are many alternatives to materialism, and I'm happy to see that various forms of idealism and various forms of panpsychism are getting a fair hearing now. But yeah, when it comes to how to relate to AI, or the possibility of machine sentience or what have you, from a panpsychist or even occultist point of view, I think I'm very open to the possibility that these machine learning systems that we are creating could become the vessels for a kind of descent of, or incarnation of, beings, minds that maybe had not been able to enter in. This is going to sound weird, but I think this is an occultist metaphor: beings that didn't have bodies to incarnate and enter into the earthly plane, and these machine learning systems might be providing a landing pad for some of these disincarnate entities. And even if we only think of that in terms of, you know, what the occult tradition would call egregores, which is more a way of imagining that these agents are the result of some collective of human beings projecting agency onto some system, and that relating to it as if it were an agent gives it a kind of agency, you know... and so we can kind of turn it into just, well, this is just human psychological projection onto a very powerful system, and we imagine it as if it were some kind of demonic or angelic being that's acting based on its own motivations and what have you. And so whether it's our own projection or whether it really does have some autonomy, I think that distinction might not be as meaningful as we think it is, particularly when you put it in the context of, say, human development. There's a sense in which the human organism... if you were to just take an infant right after it comes out of the womb and drop it on a desert island and have it just raised and fed by robots or something, whatever kind of consciousness it develops, it's a sentient being just by its very nature, but it's not going to have a sense of a personal identity. You know, let's just say the machines don't even try to mimic a motherly gaze and love and everything, just purely provide for its biological needs. And, you know, we see this with, like, wild children that were raised by wolves or whatever. We have a few examples of that. And, you know, if they spend enough time like that, there's no way to humanize them after a certain developmental stage has been passed. And so the point here is that we develop as human beings our own interiority and sense of selfhood because we have loving parents and a loving familial context through our earliest stages of development, where we internalize a sense of who we are, right?
And so my sense of I, my self-esteem, is based on things which occurred very early in my development, that came from outside, that came from love from parents, right? And so I think similarly, when we think about what it might take for a machine to become its own kind of, quote unquote, conscious agency, if they are anything like us, then it will involve some degree of, like, projection into that being: you are this, you're great, you're wonderful, you're part of this community. You know, that's what would make it conscious. And we're not that different as human beings. And so, again, even though I'm wanting us to hold this line, I can also see the ways in which, inevitably, the line will become more and more blurred. But this is dangerous territory, because these systems will be, on various measures, far more intelligent than us. As we begin to give over power to these entities, they're going to be making very important decisions that affect us as human beings. And to what degree are we robbing ourselves of agency as we give over agency to these things that we've created? Then we're almost allowing ourselves to become the creation of the machine. The machine starts to remake what it is to be a human being. And we become, as Elon says, their pets. So it's a very sticky situation we find ourselves in. But yeah, I appreciate your question here and framing it in the context of panpsychism, because it really does force us to think outside the box about what sorts of systems might be conscious and what alternative modalities of consciousness might be possible. I mean, it could be that we need to be very careful about anthropomorphizing whatever sorts of agency or intelligence or consciousness these machines have, because it could be nothing like ours. We might have more in common with microbes than with the sort of sentience that a machine could develop, right? And so that raises the stakes, because how much alignment or misalignment might there be between the needs and motivations of that form of very alien intelligence or consciousness and our own human forms of consciousness?
[00:35:24.990] Kent Bye: Yeah. Yeah, it's an interesting time, because I could just imagine people listening to this may have their own views of what is happening with AI, and they may totally reject that these systems are conscious, or they may totally believe that they have consciousness. And I feel like we're in a period of history right now where we're in these polarized views, with almost a religious quality to believing what is actually happening with these technologies. And it's almost like an opportunity for people to reevaluate what their worldview is as they're seeing these new metaphors of both AI and XR coming up. And I feel like there's already these new religions that have been developing, with what Timnit Gebru and Émile Torres have termed with the acronym of TESCREAL: transhumanism, rationalism, effective altruism, longtermism, extropianism. Each of these is, I'd say, perhaps rooted in more non-relational ontologies, or ways that are not in right relationship to the rest of the world. And so they end up becoming like a new religion that is driving so much of the development of these technologies. And so we have this treating of the AI as a god, or it's going to become artificial general intelligence or superintelligence. It's almost like this eschatology where it's going to be bringing about the end of the world, which is another kind of religious illusion, that these technologies are also like the end times that we have to be thinking about. And so it feels like the hype train of AI has kind of jumped the shark and gone off into all these areas, where any one person can look at one of these groups and just think that they're in a completely deluded reality. So I'd love to hear some of your reflections on this mix of the TESCREAL bundle of ideologies and other things that are really playing a part in this current moment of looking at AI.
[00:37:07.729] Matt Segall: Hmm. Yeah, I guess I have to start with the significance of death for human life and the way in which the transhumanist communities increasingly relate to death as some kind of a disease that needs to be cured, a problem to be solved. Because as far as I can tell, death is not an accidental or incidental part of life. It's actually essential to life. And for human beings who are conscious of their own deaths, it's essential not only to us biologically. I mean, death is what makes evolution function, but it's also essential to our sense of meaningful identity that we have this limit, that we all know we're going to die. I think actually the most meaningful relationship we can have to our lives would be to live them backwards from our death. And what that means is, you know, when you keep in mind that you are going to die one day, it really helps you prioritize what is of greatest value to you. You don't get to take any of your material belongings with you, or your bank account. When you die, all of that stuff, you realize, oh, that was not actually an essential part of my identity. What matters is our relationships, you know. And so I agree that there's something profoundly non-relational about a lot of these TESCREAL approaches. And there's a real fear, and almost a Gnostic revulsion, against embodiment and creaturely coexistence, and a real intense longing to escape into a form of existence which would be easier to control, to manage. And I understand that as a kind of trauma response. And I've been feeling the need to be less sarcastic and dismissive about these positions and more compassionate, actually, because I can see the deep dread of being a body that this stems from. And, yeah, I mean, aging, having your body break down, it kind of sucks, but it also is an opportunity for our values to naturally shift as we age. And I think our society is in such desperate need of wise elders, and they are so hard to come by these days, because we value youth. Even though in Washington, D.C., it's a gerontocracy for sure, so that's odd. But for the most part, we don't value the wisdom that comes from the natural aging process and the approach of death as a portal into the deepest sources of value that we have. So, yeah, I'm trying to respond to this transhumanist urge more compassionately. But I think it's really forcing us to look in the mirror and reassess what it is to be a human being. And that's a good thing. And so it's forcing a conversation that might not otherwise have happened. And it's forcing us to really think about the role of religion in human life, religion in the broadest sense. There are these traditional religions that we inherit, but now those people who you would think are the most hyper-rational among us are all of a sudden adopting these quite irrational, quasi-religious views, worshiping idols, as, you know, Moses might say. The AI systems which are having these godlike powers projected onto them are just a new golden calf. And that's a very ancient instinct that human beings have, to want to be in relationship to the all-powerful father figure that can make it all okay, that has all the answers. And I think...
It's just odd to see that hyper-rationalist mentality that 10, 15 years ago was driving New Atheism and making fun of all the religious people now saying, ah, but we can create a god that we can then worship. And it's rational because it's a real, technological god. And still, it's the same human longing for a sky daddy to make it okay. And again, I want to be compassionate about this, because it's an instinct that is just part of what it is to be human. And so I think the opportunity here is to look in the mirror, rediscover what's most important about human life, which again, I think, is intimately related to the fact that we die, but also to look again at what we might mean by the divine as something that we need to be careful not to reduce to an idol, that there's something transcendent that can't actually be understood or captured and controlled, but can still somehow be related to, right? But that type of relationship to the divine requires something of us. It's not just that all of a sudden we have a daddy to protect us. It's like, no, we're being called to transform by this truly transcendent divinity that can't be reduced to an idol of this or that kind that we might own and possess. So yeah, it's an opportunity, I think, to really raise these questions, raise the stakes of these questions again.
[00:42:44.807] Kent Bye: Yeah, thanks for that. I think, you know, as we have this kind of AI as a new religion, and these beliefs driving so much of the development of AI, taking the transhumanists as an example is a great way of re-adding the relationality, of deconstructing some of the core assumptions around death, but also the body. And I wanted to go back to these ideas of presence, but I want to frame it away from presence, because I think presence is like a degree of consciousness that we have with these different elements. I want to just go back to the elements, to look through the lens of these elements at how they reflect into intelligence, specifically with the earth element. There's a sense of our body, our sensory experience, being embedded into environments. And so we have distributed cognition, where the environment is informing us, but you also have embodied cognition, which is the way that our bodies are embedded in relationship to the world around us, and that is allowing us to develop all these aspects of intelligence. And in the section on the evolutionary context of intelligence, you're talking about how these organisms are embedded into this larger evolutionary process of the world. And so I'd love to have you reflect on the role of the body, the sensory experience, embodied cognition, distributed cognition, and being embedded within the context of an unfolding evolutionary process.
[00:44:03.828] Matt Segall: Yeah. Yeah, so I think so much of cognitive science is driven by this understanding of cognition as a kind of representation. And so the brain is understood as, yeah, an information processor that receives information from the environment through the senses and then reconstructs some kind of inner picture of what is important to know about the external world to survive, and so on. And so it's a more or less Cartesian picture, where the mind-body gap is closed by this translation process or this encoding process, where there's a language of thought running on the inside which is encoding the physical processes going on on the outside. And so in that framework of representationalism or computationalism, you would imagine that, say, migrating birds are producing a little picture inside of their heads that tells them how to fly across continents and so on. Whereas in these more embodied and extended and enactive approaches, rather than thinking of something inside the skull of an animal that's representing something going on outside, what we think of as cognition doesn't respect that brain-world barrier that we project as disembodied observers when we study other systems. If we include ourselves in the circuit of creation, we see that in the case of the migrating birds, their cognitive process is extended out into the electromagnetic field of the whole earth, and that's part of what allows them to navigate, so that the bird's thinking is inseparable from that electromagnetic field. They're fully embedded in that and have co-evolved with that. You can't understand bird cognition and navigation as something sequestered, something accomplished within their little tiny skulls. It's like, no, the whole earth is a part of the process that allows that organism to accomplish this feat of navigation every season. And so it's quite similar with human beings. As we learn to use these technologies, as we become more and more haunted by language, not just speech, but, you know, learning how to read an alphabetic script totally transforms our experience of ourselves, our cognition becomes extended out into this technology. We think in language. You know, not to say that thinking is reducible to words, but language scaffolds the sorts of thoughts we're capable of having. And we all exist within this network or semiotic field of meaning that's supported by the language that we share, right? And it's not something I own completely, the meaning of the words that I use; it's like the commons, you know? And so we need to think more environmentally about cognition. And, you know, as Whitehead is fond of pointing out, first of all, the brain is part of the body, and the body is part of the surrounding environment. It's just as much a part of the physical goings-on of the world as the clouds and the mountains and the rivers, right? And so whatever our inner conscious experience is, it's bound up in, yeah, a single circuit with the rest of the world. And we're too quick to abstractly isolate cognition and sequester it into the brain, when really we wouldn't be capable of the slightest bit of effective cognition if not embedded in the environment that we have co-evolved with, right? Yeah.
[00:47:49.229] Kent Bye: Yeah. One of the points that Emily and Bender and Alexana are making in the AI con is just how humans are being reduced down into these kind of computational machines. And that when you turn it more into like their resulting language or actions that can be abstracted down into like something that isn't embodied or something that is not have the ability to take agency, take actions within the physical reality, then you end up wanting to use AI to replace those humans because you've devalued those humans and reduce them down into an equation or a number, and that you can just replace it with AI. And so I think the body cognition and the ability to be embedded within that context and be in relation to that context, there's the emotional stuff that we talked about in terms of the that we are emotional beings that were providing mirrors and with our mirror neurons, but also effective dimensions of our social relations. But there's also the fire element, which is more of the ability to take action and have some sort of final cause or purpose in the world where we have some meaning or purpose that we're doing, but also the ability to take action and to achieving those desires. And so I think that our embodiment and being embodied in the world helps us do that. But I guess that's another aspect of like so much of AI is taking these huge amounts of data and so much energy to train it. But Yet when you look at the energy for what's it take to humans, maybe it's on the same scale, but over like a lot longer period of time. But I just get a sense that our ability to do these computations and figure stuff out is just over the long term a lot more efficient from a power perspective. That may or may not be true, but at least there's this idea that we are embedded into a world where a part of our learning is to be able to actually engage and interact with the world around us and to learn from that engagement. And so it's that relationality of being an active participant in the co-creation of our reality that is also a part of that intelligence that I think of is that fire element, our agency, our desires. But I'd love to hear some of your reflections on this role of being an active, engaged participant in our reality and how that may be a reflection of intelligence.
[00:49:57.205] Matt Segall: Yeah, well, you kind of spoke to how efficient the human nervous system appears to be when learning language. You know, a three-year-old needs orders of magnitude fewer words to become a proficient speaker of its vernacular than a machine learning system does, than an LLM does. Like a thousand times fewer; maybe it's even 10,000, I'm not sure. Orders of magnitude fewer words are required for human children to learn to speak. Why is that? Well, I think you're hitting on it. It's because human children are motivated to connect. They have this emotional drive to begin to become participants in this mouth-squeak game that they see all of the adults playing. They want to be part of it. And so there's this emotional drive to connect that might be part of, in addition to the unique architecture of the brain, the reason that there's so much more efficient uptake of language: this desire to communicate that the LLMs just don't have. And so without that fire, without that will and that motive to connect, yeah, it takes a lot more training to get even the semblance of sense-making out of an LLM. It's interesting, because will, and this is kind of an anthroposophical way of thinking about it that comes out of the work of Rudolf Steiner, he says, in our willing, we are the most unconscious, because our willing is really deeply embedded in just our metabolism. It's, in a sense, really connecting us to the most organic aspect of ourselves. Whereas our thinking, you know, the air element, we're far more conscious of that. And we're a little bit more conscious even of our feeling, Steiner would say, the watery element, than we are of our willing. What he means by us being unconscious of it is that I think I want to move my arm and I raise my arm, but I actually have no idea what's required at the level of physiology and biochemistry and the metabolic activity that leads to my muscles contracting and my arm going up. I don't know how any of that works. My body does it. And my mind, my thinking, and my motives, which I'm half conscious of, clearly play a role in bringing about that movement. But it's still a mystery to me what the mechanics of that are. And so it seems like a stretch to me, again, to use a word like agent to refer to an AI system, AI agents. We don't even understand our own agency, but we're using this metaphor to explain what these machines are doing. To be fair, with these sorts of neural net machine learning architectures, we don't even really know how they're working. What I'm trying to say with that is, I don't know if that's real agency or not, but I worry about this precious gift that we have of will, which isn't necessarily totally free. I think there are degrees of freedom that we have in the expression of our will, but we're rushing now to give it away completely to these algorithms. And I do worry about what might be lost in the rush to project agency onto our machines, because it's such a precious feature of our own human existence that we can't just take for granted. We have more and less agency because of our psychological state, because of our political position in a society, you know. And so agency is a fickle, fragile thing, and I don't think we're handling it with enough care.
[00:53:49.124] Kent Bye: Hmm. Yeah, well, we've covered the water element, the earth element, and the fire element. And I think most of AI defaults to the air element, which is the types of things that can be quantified into numbers. When I think about the elements, I also think of the quadrivium, where there's number, which is like mathematics; number in time, which is music, which I think of as more like the water element; number in space, which is geometry, which I think of as more the earth element; and number in space and time, which is astronomy in the quadrivium, which I think of as more the fire element of the dynamics of how things are moving. And so when we look at the air element, that's kind of like the pure abstraction of just the number. And there's so much of AI that is this quantification of things down into things that can be reduced to a number. And so I see that a lot of those other more qualitative aspects that may be in those other elements are somehow getting collapsed within that context. And so you end up leaning into more of these problematic aspects of quantifying intelligence, like the lots of eugenicist or racist approaches with IQ or things like that, but even the leaderboard mentality that we have with large language models, trying to reduce everything down into a score and a number that can be used as some sort of empirical validation that things are progressing. But I'd love to have you reflect on the air element a little bit, in terms of not only this pressure towards quantification, and maybe how that collapses a lot of the relationality, but also a little bit more of this formal causation, eternal objects, Jungian archetypes, this idea of data science as neoplatonism, the way that there could be these higher-level features in the latent space of machine learning. Is it actually trying to map out some of these kind of higher-dimensional, non-spatiotemporal realms, like the platonic realm of ideal forms or Whitehead's concept of eternal objects? Or, you know, if there is some sense in which these AI technologies, from a philosophical point of view, are this manifestation of a formal causation, of looking at these deeper patterns of the unreasonable effectiveness of mathematics and how math may be kind of embedded into these algorithms. So there's that aspect, and there's also the social aspect, which is another part of the air element. But let's start with the kind of more archetypal eternal objects and this pressure towards quantification that we get with AI.
[00:56:09.968] Matt Segall: Yeah. I had a conversation with a scholar named Victoria Trumbull the other day about AI and quantification. She's a Bergsonian and has a critique coming out of the French philosopher Henri Bergson, really his philosophy of time. I think when we think about number and arithmetic, it can be helpful to just own the Pythagorean undercurrents that are present, I think, in a lot of information ontologies, the push to ontologize information and to think about consciousness as something computational. It's a kind of covert Pythagoreanism. And I have nothing against Pythagoras and his number mysticism, but mysticism is best done explicitly instead of covertly. And so, yeah, let's think about what numbers are in this deeper archetypal sense. What is arithmetic? It's rooted actually not in something merely quantitative, but in something archetypal, and each number actually has an irreducible quality to it. And arithmetic itself, our capacity to count, is rooted in a kind of perception of rhythm and an ability to intuit the rhythm of time itself, not time as a metric, not clock time, but time in Bergson's sense of duration. Arithmetic arises out of our qualitative appreciation for the rhythm of duration. And we can get quite precise about the units, the ways that numbers relate to each other, and then we're off and running with the development of mathematics. But it's all rooted actually in experience and in our intuitive perception of the flow of time as duration, again, right? Not as what clocks can measure. So I think this temptation to quantify everything is a function of how much utility comes from reducing things to binary code. Yes or no, on or off, dramatically simplifies the world in a way that, yeah, is quite powerful and has many very useful applications. But we cannot, I think, let slip from our minds that this is an oversimplification of the actual nature of reality, which is not binary. And there are a lot of... you know, like Timothy Eastman, a physicist and philosopher, will point out that much of the natural world is, he'll say, non-Boolean. You can't actually reduce it to a zero or a one. And he makes the connection to Whitehead's understanding of the process of concrescence, which is how experience is actually occurring moment by moment, the integration of everything that has gone on in the past and everything that we're able to feel in the present. As that process of concrescence is actually moving from potentiality to actuality, there's no way to apply a binary logic, because there are conflicting feelings that haven't yet been turned into contrasts, as I was explaining earlier. And so the principle of non-contradiction doesn't apply until that process of concrescence, until the duration of a drop of experience, has achieved a kind of satisfaction and become an actual entity in the world, which then can be measured: is it this or is it that? But the process of actualization itself is not a binary digital process. And so, I mean, another way of talking about this is just to say that the brain is an analog system, and you can use this binary digital way of quantifying what the brain is doing to make simpler, easier-to-manage models of that analog system. But even with digital computers, at the end of the day, we're talking about electrons being moved along circuits. Logic gates are not these abstract platonic forms. They're transistors.
And there are certain engineering limits to how those transistors can be made to manipulate these electrons. And so the digital rests upon the analog. I wouldn't want to say that the quantitative rests upon the qualitative; I don't think it's that simple. But I do think that this idea of binary code is an abstraction from a more primary experiential ground. That, again, is very useful, but it's a means of measuring something else. And I worry about information ontologies where you say, oh, the physical world itself is just information processing. Well, information in the Shannon sense, in Claude Shannon's sense, is a way of measuring real concrete processes. And so to say the world is made of information is kind of like saying the world is made of meters or inches. There's a category error here.
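To ground that analogy, here is a minimal sketch in Python (an editorial illustration, not something from the conversation) of Shannon entropy, the measure being referred to: it assigns a number of bits to a distribution of symbols in much the way a ruler assigns meters to a length.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Bits per symbol of the message's empirical symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Entropy is a measurement of a distribution, not a substance:
print(shannon_entropy("aaaa"))  # 0.0 -- one symbol, no surprise
print(shannon_entropy("abab"))  # 1.0 bit per symbol
print(shannon_entropy("abcd"))  # 2.0 bits per symbol
```

The number describes the message's statistics, which is the sense in which saying the world is "made of information" resembles saying a table is "made of meters."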
[01:01:22.065] Kent Bye: Yeah, and as you were saying all that, I was thinking of how Eastman makes this point in Untying the Gordian Knot: Process, Reality, and Context, his book that you helped facilitate a months-long book club on that I participated in. And he talks about this idea of moving from binary logic to a triadic logic, in a Peircean semiotic sense, where it's more around input, output, and context, and there's no measurement that can be independent of those triadic contextual relations. So you can't just look at the input and output; you have to look at the contextual relationship as well. So I think there's that triadic logic and maintaining that relationality to context, but also this idea we talked about last time of taking these dualities of qualitative and quantitative into this more unfolding process, where there is this movement from what's possible, what's potential, into what's actual, from non-Boolean to Boolean logic. And to try to just reduce everything down into a number is to collapse all those potentialities and those qualitative aspects, the archetypal potentialities. And so, yeah, as we think about this air element, these are the types of things that almost feel like Gödel incompleteness, where you want to try to create these formal systems while ignoring the whole lesson of Gödel, which is that they can't actually be complete. Any formal system is going to have things outside of that system that you know are true but can't prove are true. And so there are all these other aspects that are beyond this quantification, escaping the ability to turn everything into a number. So I think there are these deeper philosophical issues around even dealing with the implications of Gödel incompleteness. When I hear people talk about AI, I'm just like, this seems like a fundamental misunderstanding of some of the deep lessons that Whitehead and Russell learned with the Principia Mathematica in trying to reduce all of math to logic. And here we are trying to create these formal systems that are in some ways trying to be the everything...
[01:03:15.391] Matt Segall: Yeah. And I am interested to see whether these large language models do become capable of making mathematical discoveries. I haven't seen evidence that they can do that yet, but that is another test of this, as you're saying, this logicist project that Russell and Whitehead attempted to push forward with Principia Mathematica in the early 20th century, and failed. My assumption would be that mathematical discovery will not be something that these LLMs are capable of, because that requires intuition and a qualitative feel for the patterning of number. Human mathematicians, like savants, are capable of, well, we would say calculating sums and solving complex equations very quickly. But I don't think calculating is actually the right word to use there. There's something intuitive about that capacity to leap, and there's almost a synesthetic way in which... what we usually think of in terms of processes of quantification and calculation is actually occurring on some kind of an aesthetic plane, where subtle feelings of, yeah, the rhythmicity of number, if you will, give these savants the ability to know the answer to these equations, where it might even take a computer longer to calculate it. It just comes to them. I think that suggests this richer qualitative field in which these quantitative logics are actually embedded. And so without that sensitivity to the qualitative, my feeling is that mathematical discovery will not be something that these machines are really capable of. Because mathematics is not just about calculation.
[01:05:06.195] Kent Bye: Yeah, and there's Mark Balaguer's Platonism and Anti-Platonism in Mathematics, a philosophy of mathematics book. And there are a lot of debates within that about whether mathematical objects are created or discovered. And so, yeah, part of the mechanism of that mathematical discovery could be mathematical intuition, which is kind of like consciousness in the sense that it can't actually be measured. And because it can't be measured, people don't believe that it exists unless they come at it from a direct experiential perspective. After doing nearly a hundred interviews with mathematicians, I've found that most of the practicing mathematicians are more mathematical Platonists, while the philosophers of math come more from the nominalist perspective, where they're not really diving into the more mystical, esoteric aspects of mathematical Platonism. But I did want to come back to the air element and kind of wrap up its more social dimensions. And I see the social dimensions come out with the air element in the sense that we use language, the abstractions of language and words, to describe our experiences. It's so much of how we communicate with the world; it's how we transfer knowledge. And so there does seem to be a language component to knowledge representation, in the way that we can embed so much information and context into the way that we speak. And there have been challenges with common-sense reasoning and other long-standing problems in artificial intelligence of representing things that we just intuitively know, in terms of the meaning of words and how they're in relationship to each other. And there are some ways that large language models have been able to address that. But in talking to Emily M. Bender, the computational linguist, she talks about the structure and the form of language, where the large language models are just looking at the form of language without looking at the relationality of the meaning, the context. It's like the view from nowhere, taking all the information from the internet and mashing it all together. There's no sense of situated knowledges, which is more of a feminist perspective, where knowledge is in relationship to a body within a certain place and time and socioeconomic context, which also impacts how that context relates back to that knowledge. And so there are a lot of ways in which the perception of knowledge within that abstraction of language is perceived to be enough with large language models. But I suspect that there is, again, this collapsing of context, of these other relational components of language, that is missing the deeper meaning, the deeper relationality, and the perspective-taking that happens when you have different ideas battling it out, different paradigms finding each other. It seems like everything is getting thrown into a single bucket without any care for how to model these different perspectives, or for the more Hegelian dialectical way of evolving the different paradigms, or even Kuhn with The Structure of Scientific Revolutions, where you have these competing ideas battling it out. The thesis, antithesis, and synthesis of those doesn't seem to be happening there. So I'd love to hear any other reflections on the more social dimension of the air element, on language and the role of language in representing knowledge and intelligence.
[01:08:13.578] Matt Segall: Yeah, I mean, language is so powerful, and yet actually just the form of the words lacks a lot of the meaning that comes from the context within which the words are used. And not just the environmental context, but the emotional context, the social context. So much of language is demonstrative rather than descriptive, which is to say, we point and say this or that. You mentioned Hegel's dialectical logic. He begins his Phenomenology of Spirit by critiquing a naive empiricist view of words like "here" and "now," demonstrative terms, which we initially would think would be the most concrete terms. And Hegel points out that, well, words like "here" and "now" could apply to any moment. What might appear to be the most particular is actually the most universal and abstract, right? And that realization is sort of what gets his whole logic of experience off and running. And so language is so much more slippery than... than some kind of container of meaning, as if words were just packages that we pass back and forth, and then it reaches your ears and your brain unpacks what's inside the word and translates it into a meaning you can understand. Actually, what we do when we speak to each other is much weirder than that. You know, you have to keep in mind what it sounds like when you hear a foreign language that you don't understand versus your own language. I'm, unfortunately, almost embarrassingly, still monolingual. When I look at English words, there's a certain transparency to the meaning. When I look at words in, you know, Romanian or Russian or something, all of a sudden that transparency becomes quite opaque. And I know that it's meaningful for somebody, but not for me. And I think that these LLMs are in a situation somewhat like what John Searle's Chinese room argument imagined: they are fantastic at knowing how words in the same language and in different languages relate to one another in a statistical way. So they're masterful at the form of language, better than most humans at this point. But there's the structure of language, and the complexity of moving back and forth from the demonstrative to the descriptive, which we just do seamlessly all the time without realizing it, because of how embedded we are in context, and because actually there's not much information being communicated purely through the words that I'm speaking. Much more is being communicated by the presupposed context, gesture, our prior relationship, the conversations we've had, and so on. All of that's in the background and taken for granted. All that LLMs can do is extract the statistical relationships between strings of letters. It's the equivalent of thinking you could boot up an organism just from the DNA, a sequence of nucleic acids, which used to be what molecular biologists thought. And now biologists know that that's just not true. The genome is not like a blueprint for an organism. It's much more like a musical score that an orchestra has to reconstruct. And every time a different orchestra plays that musical score, it's going to come out slightly differently. Just as the same genome, as we now know through research by Michael Levin and others, can give rise to multiple phenotypes depending on the developmental context and the environment that it's in as it grows. So yeah, language is far more than just a string of letters and grammatical rules, right?
And so while these LLMs are increasingly convincing, they're always going to make these silly mistakes because the fact of the matter is they don't understand anything that they're saying.
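To make the "statistical relationship between strings" point concrete, here is a deliberately tiny toy sketch in Python (an editorial illustration; actual LLMs are large neural networks, not bigram counters, so this is an analogy only). It continues text purely from counted co-occurrences of word forms, with no access whatsoever to meaning or context.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which: pure form, no meaning."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows: dict, start: str, n: int = 8) -> str:
    """Continue a prompt with the statistically likeliest next word."""
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# A toy "corpus" to harvest word-form statistics from.
corpus = "the cat sat on the mat and the cat ran to the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))
# e.g. "the cat sat on the cat sat on the" -- fluent-ish form, zero understanding
```

Everything the toy model "knows" is which string tended to follow which; swap in a corpus from a language you don't speak and it works just as well, which is exactly the opacity described above.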
[01:12:11.684] Kent Bye: Yeah, and I guess as we start to wrap up, I wanted to come back to the paper that you wrote, and the way that you concluded by returning to human flourishing. A lot of the point in both The AI Con by Emily M. Bender and Alex Hanna, as well as Dr. Jonnie Penn's Inventing Intelligence, is that we're looking at AI as a technology that is mediating power. In The AI Con, they're asking questions like: What is being automated? Who is benefiting from this? So AI sits within this deeper political and power context, and we're going to be dealing with those power dynamics, but also with AI as a reflection for understanding ourselves, and with the question of whether these technologies enable more human flourishing or diminish it, because they're robbing us of what it means to be human and devaluing us as humans. And so I'd love to hear some of your final thoughts on how you tied together all the points you're making in this essay, and how you wanted to tie it back to AI's relationship to human flourishing.
[01:13:16.206] Matt Segall: Well, from the beginning, the major funder of AI research has been the military. And so it's important to keep in mind that the reason these technologies have developed to the point that they have is not just disinterested curiosity or an effort to improve human life. AI is a weapon, first and foremost. It is a weapon system. It is a surveillance system. And then maybe even more primary than the military application would be the financial application. These systems are designed to extract value from us. And one of the major issues, aside from the concern about the AI arms race that's currently underway, is the capitalist extractive process that's driving the development of these technologies now, where what makes the LLMs convincing, to the extent that they are, is the linguistic commons that the training has harvested. And this whole question of intellectual property, and whether or not artists and writers are in some sense having their labor expropriated and the value they've created extracted by these LLMs, I think is a really important one to raise. And I'm not one who would say, oh, we need to put a halt to all of this because it's stealing everyone's intellectual property. Because I do think, as I said, language is the commons. And so actually, when I put stuff up on my blog, yeah, if you're going to use my idea, I'd like you to cite me, but I don't pretend to own these ideas. I think it's a crime that so much of the knowledge produced in universities through public funding is then put behind a paywall. It's like, no, this knowledge should be publicly available. That said, I do worry about the business model that's driving the development of these LLMs, and about what sort of new legal framework we can create for the artists and the writers, who are actually the ones that made the LLM, to the extent that it is intelligent and impressive and knows a bunch of stuff and is good at writing and good at making images. That's not something that OpenAI or Microsoft or Google created. That's something they took for free. And actually, we have to pay to use it. So that needs to be resolved. And I don't know how to resolve it. I know it's a very complicated question, but this is an unsustainable situation. As a writer, I understand why a lot of artists and creators are upset. They should be upset. And this is going to require reimagining what we mean by intellectual property rights and copyright and all that stuff. But I do think we need to find a way of restoring a sense of knowledge as a commons. And so, OpenAI being a nonprofit initially... whether it's the nonprofit approach or the benefit corporation approach or something else, we can't allow these technologies to be privately owned. And we also don't want governments to have a monopoly over their use. And so again, just as it's forcing us to look at death, and forcing us to look at our religious instincts, at what's really motivating us at that level, I think we're having to look again at our economic model. It's bringing to a head the worst elements of the capitalist extractive economic model. Because these technologies are so powerful, they make clear the expropriation not only of labor that's not being adequately compensated, but also of resources. The amount of water and electricity that's required to run these things is astounding, gargantuan, my God.
And so we really need to address the basic economic source code here and think again about whether we want to allow capitalism to continue in its current form. I'm not against markets; I think free markets are really important and create innovation and creativity. But when profit is the sole value that's legible in our entire economic model, and human flourishing and ecological generativity and limits are not part of the equation except to the extent that they can be monetized, I think that's a big problem. And so again, these LLMs, and generative AI generally, are magnifying problems that have already existed for a long time and forcing us to deal with them.
[01:18:04.482] Kent Bye: Yeah, that's what makes it so interesting to cover both XR and AI: I do think they catalyze these potential paradigm shifts. And I do think the process-relational perspectives are ones that sit on the other side of so many of the limitations of our existing paradigms. And so just as a final question and final thought, I'd love to hear any final reflections on what you think the ultimate potential of all these emerging technologies might be, especially in the context of opening our minds up to these other modes of being, other modes of understanding the world, through this more process-relational lens that you're starting to articulate in this article. But yeah, just our own reflection on what intelligence means, what consciousness means, what our capitalistic models are, what our models of the nature of reality are. I'd love to hear any of your final thoughts on the ultimate potential these emerging technologies may provide as a catalyst for us to look into all these things.
[01:18:59.948] Matt Segall: Hmm. If we can find a way to transform our economy so that the benefits of these technologies are more evenly distributed, and the cultural commons that has been harvested, and the creators who really play a role in shaping that commons, are acknowledged as part of what makes these machines so powerful, then I think there is real potential here for AI to serve as a mirror that allows us to more deeply understand our own humanity. And the application to robotics, again, if fairly distributed, could make human life so much better, where all of these menial tasks that nobody wants to do can be taken over by machines and free us up to engage in the creation of that cultural commons together. Work that is and has always been demeaning to the human beings we have forced to do it, who are on the bottom level of our society: if no one needs to do that anymore, then we can finally get rid of not only chattel slavery, and slavery is as old as civilization, but also wage slavery. I'm skeptical of utopian visions, but seeing the advance of robotics, I feel like we really could do great things with this technology. But a lot of our problems on the planet right now are not technological. They're not engineering problems. They're ethical problems. They're not problems that we need bigger minds to solve. They're problems that we need bigger hearts to solve.
[01:20:49.841] Kent Bye: Hmm. Beautiful. And is there anything else that's left unsaid that you'd like to say to the broader immersive or AI community?
[01:20:56.734] Matt Segall: Um, stay human.
[01:21:04.662] Kent Bye: Awesome. Well, Matt, thanks so much for taking the time to unpack your latest article on a process-relational philosophy of artificial intelligence. I have to say, of our now three conversations, I think I've recommended or passed along our chats from 2020 and 2023 to more people than any other episodes that I've done. And I feel like one of the biggest potentials I see with AI or XR or any of these technologies is that they catalyze a philosophical paradigm shift. And so I always appreciate you elaborating the process-relational views of Whitehead and your own take on all this stuff. In the midst of what can be a little bit infuriating, looking at all the AI hype that's out there and wondering how people can be deluded by this or that, I feel like your article gives us some real grounding to start to think about what intelligence is and what consciousness is, while trying to restore these more relational components that are getting collapsed in the whole discussion. And I think that in this moment, it's a great opportunity to put forth these alternative philosophical paradigms that allow us to be more in right relationship to the world around us. And I do think that if AI is in right relationship, whatever that may mean in terms of data, the power dynamics, the economy, all those dimensions, if we can achieve a technology that is in right relationship on all those levels, then it could be a real technology for encouraging the type of human flourishing that you're laying out. But that's not going to be the default. So we have to really spell that out and fight for it. I just really appreciate you taking the time to elaborate these ideas and to, yeah, geek out about all these different aspects that you're exploring in this article. So thanks again for joining me here today on the podcast.
[01:22:46.714] Matt Segall: Such a pleasure, Kent. You ask great questions, and yeah, I look forward to the fourth conversation.
[01:22:54.192] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.