#1580: “Neuro-Cinema: From Synapse to Montage” Explores Bioethics Moral Dilemmas & BCI-Controlled Editing & Robotics

I spoke with Graham Sack about Neuro-Cinema: From Synapse to Montage at Onassis ONX Summer Showcase 2025. See more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.438] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my coverage of different events in and around Tribeca Immersive 2025, today's episode is with Graham Sack, who had a piece at the Onassis ONX Summer Showcase. It was a piece called Neuro-Cinema: From Synapse to Montage. So there's a lot of moving parts to this piece. First of all, there was a film that Graham had shot called Neuroplastic. Reading the description, it says: After being paralyzed in a driving accident, Emil, 18 years old, is enrolled in a clinical trial for Neuroplastic, an experimental brain-computer interface (BCI). Emil's world opens up as he trains himself to control a robotic arm, type and draw by concentrating, and compete in online sports using his mind. But the wonders of Emil's BCI implant come at a price, making him vulnerable to the whims of a for-profit company. After a scandal involving violations of neural privacy, Neuroplastic abruptly halts Emil's clinical trial, and as his implant becomes obsolete, Emil loses his newfound capabilities and, in a contemporary version of Awakenings, slowly returns to his initial state of paralysis. Now, we didn't get to see the full Neuroplastic film, so I'm hesitant to dive too much into it. But there's also this whole other part where Graham has been working with the Dracopoulos Bloomberg Bioethics Ideas Lab at Johns Hopkins University and got a grant to start to look at EEG interfaces for this kind of interactive narrative part. So he would look at a scene that had this QR-code type of thing, but it was pulsing at different frequencies. And based upon which one he was looking at, the EEG would detect what frequency was being shown, and that would trigger different parts of the movie. So it was kind of like a nonlinear exploration of this linear film. But then there'd be other things that he could do to either slow it down, make different edits, or make the music come up. So there was this brain-computer-interface-controlled modulating of the narrative that was unfolding. And then there was also this big robotic arm in the center of the exhibition space, and he would do different jaw clenches that were detected by the EEG, which then triggered different movements of this robotic arm. So the Summer Showcase is an opportunity to show these early prototypes. It's not a finished project where we're able to really fully unpack all the nuances of the narrative and story and everything else. But I thought it was important because it does point to this larger theme that I'm seeing in terms of robotics. I'm seeing a lot more robotics at Augmented World Expo, at CES, and just in general robotics is becoming more of a trend in emerging technology. And I'm expecting to see more of the intersection of XR, AI, and robotics here moving forward. A number of folks like Sex and Nima and Amanda Watson are doing this kind of robot fighting startup with Rekt. So, you know, it's just kind of interesting to see this intersection of robots, but also a brain-computer interface.
It also reminds me of the interview that I did with Nita Farahany around cognitive liberty and The Battle for Your Brain, the book that she released ahead of South by Southwest in 2023. That's a great book from a neuroethicist looking at all these different issues. And these different emerging technologies are going to continue to feed into all of these kind of brain-computer ways of detecting what's happening inside of our brains, and we're going to have ways that that's being expressed within the context of XR and AI. So, we're covering all that and more on today's episode of the Voices of VR podcast. So, this interview with Graham happened on Sunday, June 8th, 2025, at the end of the Onassis ONX Summer Showcase in New York City, New York. So, with that, let's go ahead and dive right in.

[00:03:59.824] Graham Sack: My name is Graham Sack. This is my second time on the podcast. The last time was for a virtual reality limited series called The Interpretation of Dreams. I've been doing, I guess, XR or kind of interactive work for close to the last decade. Initially, a lot of virtual and augmented reality projects. So I started off with a piece called Lincoln in the Bardo, which was the first ever adaptation of a novel into virtual reality. It was funded and distributed by The New York Times, it was adapted from George Saunders' Man Booker Prize-winning novel Lincoln in the Bardo, and it was shortlisted for an Interactive Emmy. Then I did the piece you interviewed me about, The Interpretation of Dreams, with Samsung, which is based on Freud's case studies in dream interpretation. It was a series of kind of immersive dreamscapes in VR. I did a, um, sort of feature, quote-unquote feature-length version of Hamlet in VR with Google, and then a big interactive augmented reality installation for Tribeca Immersive and with New York Theatre Workshop called Objects in Mirror Are Closer Than They Appear. And then, you know, beyond that, I've had projects with Felix & Paul Studios. And then more recently I started teaching on faculty at Johns Hopkins University, which has a program in immersive storytelling and emerging technologies, and I've been focused there. I'm also finishing up a fellowship and was just appointed as an assistant research professor with their bioethics institute. It's called the Berman Institute of Bioethics, and they have a program focused around bioethics and storytelling. And so that's what I've been focused on. Alongside of all these things, I do a lot of traditional media as well. I just wrote a television series with four-time Academy Award nominee Alexander Rodnyansky called Debriefing the President, which is about the capture and interrogation of Saddam Hussein. I've sold films, had them on the Black List, things like that. So I do a mix of traditional media and interactive media. The last couple of years at Hopkins, what I've been focused on, kind of between these two departments, interactive media and bioethics, is looking at how do we move beyond maybe just the narrow definition of XR, virtual reality, augmented reality, now more kind of AI interactivity, and look in particular towards where emerging biotechnologies are going. I think that everyone is talking about AI, and there's good reason for that, but I kind of think the next frontier of big, really transformative technology that's quickly coming at us is in the biotechnology realm. CRISPR, obviously; it's strange to me how quickly that seems to have come and gone in terms of the news cycle. But then, more recently, brain-computer interface technology. So when I kind of started in residence at the Berman Institute, and then while I've been developing research and teaching in immersive media at Hopkins, I've been zeroing in on the brain-computer interface technology space, which really has two branches. One branch of it is invasive BCI, brain-computer interface technology, like Neuralink, so it's actual implants. They're frequently used for people who are paralyzed, people who have ALS, Lou Gehrig's disease. There's a separate branch that's related to deep brain stimulation, which is used on people who often have depression that's incurable through other means. But these are all things that involve direct either implantation or, in some cases, manipulation of the electrical activity
or neural activity in the brain, or very active measurement of it. So when I started working on some of these projects at Hopkins, which now include both the film and two interactive media projects, which I'll talk about: the film is called Neuroplastic. The interactive media project that I just showed here at the ONX Showcase during Tribeca is called Neuro-Cinema. And then I have this big grant from Johns Hopkins, called the Nexus Research Grant, for a larger encompassing project called Neuro Theater. So anyway, I'm very topically interested in kind of the implantation side of things. That first started right when I was arriving in earnest at Hopkins, when Neuralink was cleared for the first human clinical trial. They hadn't yet gotten a participant. That research had been going on at Hopkins for about 25 years. They've implanted multiple people, people who are paralyzed, people with ALS. And so there were a lot of extraordinary people in the field, and I was like, I've got to interview folks and understand what the state of the technology is. And then alongside of that, I was interviewing a prominent neuroethicist who wrote a wonderful book called The Battle for Your Brain. Her name is Nita Farahany. And I was working on a film at that point, which I've since made, about a young man who's quadriplegic, who gets a BCI implant. That's this film, Neuroplastic. And Nita said to me, you know, this is exciting stuff, it's very, very important, but a lot of the action right now is in non-invasive brain-computer interface tech, including wearable BCI. So that includes things like all of these commercial EEG headsets that are starting to hit the market, the Muse, the Emotiv, Neurable, but also companies like Apple. Apple has patented a future version of the AirPods that has EEG components and will pick up neural activity around the brain. So you're going to start to see these things embedded into a lot of traditional consumer devices, and that data is going to begin to show up, probably, in a bunch of neuromarketing. But I was particularly interested in, well, how can these things begin to produce new forms of interactivity for immersive and interactive experience? So I have a few big active projects in this area. One, as I was mentioning, is this film Neuroplastic, which I just shot in May. It's based on about a year and a half of research: extensive interviews with neurosurgeons who do actual BCI implantation, we have several at Hopkins, and with people who have received these implants and lived with them for a while. There's an incredible group called BCI Pioneers, which has a number of people who've gotten implants for various reasons at different stages, gone through trials that maybe lasted months or years, in some cases kept the implant, in some cases had them taken out, and what their experience was. And all of that kind of became this. And then a lot of neuroethicists, actually, who I work with at Hopkins, who think a lot about what does it mean, you know, how does something like this change who you are, or what is its potential for, you know, use, but also abuse. So that became this film that I just made. It's a 20-minute-long film called Neuroplastic, which is speculative, but it's very, very grounded in research. So it's about a young man who's quadriplegic. He gets a BCI-style implant, or a BCI implant akin to Neuralink.
And it allows him to learn to control a robotic arm, manipulate a digital avatar on a screen, and it kind of opens up the world for him in a variety of ways. It also causes a lot of problems and questions. And really what it's focused on is: how does having an implant like this change your sense of identity, your sense of your own body? And especially, what does it mean to have a literal kind of connection to hardware and software? What does it mean to feel like hardware and software is an extension of your own body, an extension of your own brain? It's literally speaking to your motor cortex in the same way that your arm would, or in ways that are similar to how your arm or your real physical appendages would. And suddenly these things are extensions of ourselves. I think a lot about Marshall McLuhan, you know, who famously talked about technology as the extensions of man, or the extensions of the human. And I don't know if I'll get the quote exactly right, but he talks about, you know, the wheel is an extension of the foot, the radio is an extension of the ear, television is an extension of the eye. And then he says, notably, electric circuitry is an extension of the human nervous system. And when McLuhan said that originally in the 1960s or 70s, it was a little bit metaphorical. It was about kind of electric circuitry and television and cameras, and not even really computers at that point, being an extension of ourselves. But with brain-computer interface technology, this becomes very, very, very literal and very real. And among the people who are at the frontier of it, there's a lot of talk about transhumanism and cyborgism, including in the art community. There are a lot of people who do voluntary body modification, and that's all fascinating and incredible work, you know, people like Stelarc or the Cyborg Foundation. But in my opinion, the people who are really, really, really at the frontier of transhumanism and living it, and living cyborgism for real, are mostly people who have really serious spinal cord injuries or ALS or Parkinson's or other things that involve direct interfacing between technology and the brain. And they're living with it, and they're living with the complexity of it, where on the one hand it's a form of repair, or giving back certain functionality that you may have lost or never had or that may be slowly eroding. On the other hand, it really creates real forms of enhancement. You know, there are people with BCI implants who describe being faster at playing video games, and more accurate at playing video games, than normal human players, because all they have to do is think about where the cursor needs to go in a first-person shooter game. You know, it's like there should maybe be a special league for them, right? And people have talked about similar things with folks like Oscar Pistorius, you know, with the blades that he used as replacements for his own legs, which actually were kind of mechanically faster than normal human legs, the physics of it, how it bounces. And in a sense, maybe something similar is happening with certain kinds of computer interaction through BCI. And the people who are experiencing that in this very complicated way are frequently people who have dealt with really serious injuries or really serious degenerative neurological diseases. And so it creates this really complicated and, I think, fascinating space with a lot of big moral questions. And it's not easy to dismiss.
I think that, you know, a lot of times when I talk about BCI, or I mention Neuralink in passing, which I think is a fascinating company, it's also deeply problematic in a host of ways, people just kind of set it aside because of, I don't know, who owns it, right? Who's financially involved in the thing, Elon Musk, whatever. What they pass over is the really complicated space where, on the one hand, there are enormous benefits to this technology. By the way, Neuralink is not the only company at all in the space. There's a company called Blackrock Neurotech that's very supportive of the arts, that's been around for a long time, and a whole other set of folks in the space. There's an incredible company out in Switzerland that's been jumping the spinal gap to help people move again. So there are undeniable, kind of massive life benefits for people who get the implants. And then there are a host of really complicated issues about neural privacy. What does it mean to get an early prototype or an early version of a technology in your head? And then you don't know where it's going to go. You might be putting Betamax in your head and the future is VCR, and you got the wrong thing, right? Or it can be very difficult to take these implants out and replace them with the next generation. So it's almost like you bought an early iPhone and that's the only phone you're ever, ever going to have, because if they try to take the implant out, it causes neural scarring and they can't put another one back in place of it. And so there are issues around this, like what does it mean to be locked into an early version of a technology for the rest of your life, potentially, right? Or like, you're patient one, and... um, sorry, one sec, we're in the middle of a load-out on this installation right now, so I'll try to be quick. But anyway, I could go on about this. But yeah, so there's a fascinating set of kind of bioethical and neuroethical questions there. So what the film really explores is that. And I shot this in early May, after about a year and a half of research, down in Baltimore and around Johns Hopkins. We used a bunch of their surgical spaces. We have probably one of the most realistic brain implantation sequences that has ever been on film, but it is a work of fiction. The lead actor is quadriplegic. I did a national search and found a remarkable actor with disabilities named Jock Metellus, who plays the role. We have a Tony Award nominee in it, Myra Lucretia Taylor, and an actor named Seth Barish who's played doctors 50 times on television. And then for the robotics I got Legacy Effects, who did the Iron Man suit, the Shape of Water monster. Literally the puppeteers from Baby Yoda and the Xenomorph puppeted the robotic arm that's in the film. So I shot the film. It's 20 minutes long. I'll do a feature behind it. But then in addition, alongside of doing this work that's kind of cinematic and about BCI, I'm doing a lot of research that's actively using the wearable side of brain-computer interface technology and exploring how it can be used for interactive media, art, and performance. And I have a grant at Hopkins, titled Neuro Theater, with a big interdisciplinary team from media, performance, AI, neurotechnology, and neuroscience involved in it. And so I've been doing that exploration. And then I decided to converge the two projects into what I'm showing here, which is a multichannel video installation with interactive EEG input.
It's called Neuro-Cinema: From Synapse to Montage. And basically what it is, is I use a Unicorn headset from g.tec, which is a research-grade EEG headset, to basically steer the film. So it's to take and kind of remediate this material from a film about brain-computer interface technology using brain-computer interface technology. So it's really form following content. And what I'm able to do with it is to effectively edit the film live using neural activity. So I can concentrate on particular scenes and double-click into them and make them into immersive parts of the installation. I can speed up or slow down the edit pace as the multichannel film plays, based on how I concentrate or how active I am. And I can kind of dynamically score it as well, by leaning into, like, swelling the music by concentrating, things like that. There's also an interactive robotics component where I can control a robotic hand, opening and closing it, moving it around, simply by concentrating or clenching my jaw. And part of the idea behind the robotics piece of it was to give people experiencing the installation, you know, a little bit of a flavor and echo of what it would mean to be in the position of someone like the protagonist from the film, who can't move the rest of their body and is kind of reliant, shoulders up, on manipulating a robotic arm. And they're kind of in a similar place, obviously without an implant. So there's kind of two ways to experience the piece. One way is as an audience: you can drop into a comfortable seat and experience it in a more immersive, cinematic way happening around you, while I steer it a little bit and perform this kind of neuro-cinematic editing process. Or, if people want to put on a headset and get gelled up, they can experience it too. And probably in future iterations we'll use some lighter-grade headsets that are a little bit easier for more people to pop in and out of, but for this we wanted really high data resolution and quality given the complexity of what we're doing. So there's two ways of doing the piece. And it's been really interesting to work on it and actually to run it, because it is like a form of performance, or kind of neural performance, where I'm responding live to the piece, to the film, the emotion of the scenes, and then kind of leaning in by concentrating or focusing or shifting or adjusting my body, you know, how to speed it up, slow it down, amplify kind of ancillary material from the film, insert shots, things like that, that are cutting around the main story action, swelling music. And so it becomes this really responsive thing. And yeah, it's been fascinating, you know, watching audiences do the same thing. You know, it is in a sense a form of kind of telekinesis or telepathy, and we're getting to the point where that stuff is no longer magic. You know, I love this Arthur C. Clarke quote, any sufficiently advanced technology is indistinguishable from magic. One of the characters in the film says it, he has a little bit of a pitch around this device called Neuroplastic. And it's like that, it kind of feels like that, I think, now, where we are. So yeah.
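
For readers curious how this kind of mapping can work in practice, here is a minimal sketch, not the installation's actual code, of turning live EEG band power into edit-pace, music, and jaw-clench signals like the ones Graham describes. The sample rate, band edges, thresholds, and function names are all illustrative assumptions.

```python
# A rough sketch (not the installation's code) of mapping EEG features to
# playback controls, assuming an 8-channel stream at 250 Hz. Band edges,
# thresholds, and output names are illustrative guesses.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sample rate in Hz

def band_power(window, lo, hi, fs=FS):
    """Mean power in the [lo, hi] Hz band across channels for one EEG window."""
    freqs, psd = welch(window, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean()

def control_signals(window):
    """Turn a (channels, samples) EEG window into rough control values."""
    alpha = band_power(window, 8, 12)     # calm wakefulness
    beta = band_power(window, 13, 30)     # active concentration
    emg = band_power(window, 45, 100)     # high-frequency muscle artifact (jaw)
    focus = beta / (alpha + beta + 1e-9)  # crude 0..1 "concentration" proxy
    return {
        "edit_pace": 0.5 + focus,         # speed up the edit as focus rises
        "music_gain": focus,              # swell the score with concentration
        "jaw_clench": emg > 5.0,          # crude threshold for a clench event
    }

# Example: fake one second of 8-channel EEG and compute controls.
window = np.random.randn(8, FS)
print(control_signals(window))
```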

[00:20:19.116] Kent Bye: Yeah, it feels like we're in this phase where, you know, at CES there's a lot of robots and robotics, and so AI is moving into more embodied forms, but also these brain-computer interfaces have been around for a while. And in fact, there used to be a conference called NeuroGaming. Did you ever hear of the NeuroGaming conference?

[00:20:36.709] Graham Sack: No, I didn't know about it.

[00:20:37.630] Kent Bye: No. They actually closed down and stopped because they realized that agency and neurogaming actually weren't a compatible thing. Like, the type of signals you get from EEG don't give you enough agency to take action. So they actually rebranded from NeuroGaming to XTech, which was more around experiential technology, or more in the medical context. And so I guess I've done probably a dozen or close to two dozen EEG demos over the years. And so I didn't unfortunately get a chance to do this one, but I've had enough other prior experiences that I kind of know. You were there, you just didn't put the headset on. Yeah, I was there. I watched kind of your performance, but I've had enough of my own experiences with EEG. And my number one complaint is that it is difficult to see any sort of distinct agency with EEG. Now you're using clenched jaws, which is, I guess, more technically EMG, which is electromyography.

[00:21:30.837] Graham Sack: We're using a few different techniques. So with an EEG headset, what is an EEG? With EEG, electroencephalography, you have 86 billion neurons, and they're all like little drummers, or like little fireflies flashing on and off, right? Or drumming all at once. And so what you get is this big kind of aggregate measurement of millions or billions of neurons and how they're firing, right? And it creates a series of different kinds of frequencies. You know, if you're in a state of deep sleep, they're not firing very much at all; if you're very active and awake, they're firing a lot. And this can be broken down into these major waves, right: alpha, beta, delta, gamma, theta. When you're in deep sleep, delta. When you're actively awake, it's gamma, beta, alpha. Alpha is associated with this state of kind of calm wakefulness. But you get a lot of other things when you're measuring through an EEG headset, a hybrid EEG headset. You get a lot of muscular activity, right? So if you clench your jaw, raise your eyebrows, whatever, it picks up on that. There are also a lot of behavioral things that you can pull out of EEG. So, you know, if you close your eyes and you calm yourself down, right, that ups alpha. You can also do a technique called SSVEP, which we're using for scene selection. What that does is, your visual cortex is highly responsive to what you're looking at and the frequency of what you're looking at. So if you put up, for example, a flashing light or a spinning wheel or something that's at 7 Hz, 9 Hz, and 11 Hz, we can back it out by looking at the main spikes in your neural activity. If they're at 7 hertz, 9 hertz, or 11 hertz, we know which thing you're looking at, right? So you can begin to find it in the patterning. And honestly, we haven't even thrown that much machine learning at this. Even with big aggregate data, if you throw enough machine learning at it, you can get a bunch of other elements too. For example, one of the main BCI collaborators on this, this guy Griffin Milsap, who's extraordinary, one of the best BCI researchers in the country, has built this system where, you know, you can clench your fingers, close your eyes, and it can differentiate what the signature in your brain looks like when you're doing different kinds of movement, right? You can do that with a wide variety of things. So I think even with some of the more conventional EEG stuff, like the Muse, it's actually a good device, but some of the stuff that's thrown on top of it will just be, like, we're doing alpha measurement for meditation or something. But you can drill deeper on that data. What I would say beyond that is, there's a spectrum. EEG is at one end; that's the most coarse data. Then at the opposite end, you have full BCI implantation, where you're getting neuron-level firing data. What you can do with that is extraordinary. Like, you can quite literally read people's thoughts. You can read what words people are trying to think, or really, the way that works is you imagine writing some letters with your hand, and it can identify what's firing when you're mentally writing an A or mentally writing a B, and begin to decode what words you want to spell out, right? And there's a bunch of different ways to do that. But so, they're at a point with full BCI implantation where they can just read language directly out of the brain.
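
To make the SSVEP idea concrete, here is a minimal sketch of the frequency-tagging approach Graham describes: on-screen cues flicker at 7, 9, and 11 Hz, and you pick whichever frequency dominates the occipital EEG spectrum. The sample rate, channel count, and function names are assumptions for illustration, not the project's actual pipeline.

```python
# Minimal SSVEP sketch: which of several flicker frequencies dominates the
# occipital EEG? The 7/9/11 Hz targets mirror the example in the interview;
# everything else is an illustrative assumption.
import numpy as np

FS = 250                      # assumed sample rate in Hz
TARGETS = [7.0, 9.0, 11.0]    # flicker rates of the three on-screen cues

def detect_ssvep(occipital, fs=FS, targets=TARGETS, tol=0.5):
    """Return the target frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(occipital.mean(axis=0))) ** 2
    freqs = np.fft.rfftfreq(occipital.shape[-1], d=1.0 / fs)
    scores = []
    for f in targets:
        band = (freqs >= f - tol) & (freqs <= f + tol)
        scores.append(spectrum[band].sum())
    return targets[int(np.argmax(scores))]

# Example: synthesize 2 s of noisy 9 Hz response on 2 occipital channels.
t = np.arange(2 * FS) / FS
sig = np.sin(2 * np.pi * 9.0 * t) + 0.5 * np.random.randn(2, t.size)
print(detect_ssvep(sig))  # expected: 9.0
```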

[00:24:40.188] Kent Bye: So that's at the opposite end, using machine learning and sort of extrapolating. I'd be hesitant, because I don't think it's that far along. I don't think they're at that point of literally being able to read brains, because they're training models and there's errors, and it's not like reading someone's mind.

[00:24:54.360] Graham Sack: Let me rephrase. So with something like that, you're right, you're right. What you're doing is kind of finding these accessible hooks or angles or hacks that get at it, right? So with a BCI implant, you can only really get at kind of the high-level surface of the brain, even. It's hard to probe deep. They may get there. But what you can do with something like that is look at the part of the motor cortex that controls the hand or controls the mouth. With an ALS patient, this really works. This happened at Hopkins: with an ALS patient, they did an implant that was reading the part of the motor cortex that controlled the lips and the tongue, and they could figure out what letters, what phonemes, morphemes, he was trying to sound out, even though he was losing the ability to speak. They send that out to a computer model of his own voice, and this guy who lost the ability to speak can suddenly speak again, right? So, okay, in that case, what is it doing? It's not reading the language center of the brain. It's reading the part of the motor cortex that controls intended speech, similar to the handwriting thing, right? So those are hacks. It's not exactly thought.

[00:25:56.632] Kent Bye: I don't know. So a number of years ago, I was at a Canadian Institute for Advanced Research event. They had a whole intersection of neuroscience and XR that was held in New York City, and they invited me out, and I ended up doing a number of interviews there. That's where I met CTRL-labs and did an interview with them. And so someone gave a presentation there around BCIs who was, like, working with Meta at the time, and there was kind of a progression of what is easy to do versus what is hard to do. And I don't remember the exact things, but there are things like, you know, reading things from the motor cortex, so imagining your muscle movements or writing with your hands, which is a lot different than imagining your thoughts or thinking about memories or thinking about your imagination and thinking about your dreams. And so there are different sorts of cognitive functions that have, like, a tier of difficulty for what these technologies can detect, and the basic challenge is getting enough of that neural resolution to really get at enough of the neurons, either through infrared or different types of invasive versus non-invasive approaches. And so there was a lot of research that Meta was doing for a long time in invasive BCI, but then Meta themselves realized, with CTRL-labs, that they were actually way more interested in looking at EMG, which is electromyography, looking at muscle contractions. And so they could actually start to detect, what they claim, the firing of an individual motor neuron. So you'd be able to put a device on your arm, and then even thinking about moving your hand could start to fire that. So in other words, rather than actually moving, you can think about moving, and that could actually start to animate your actions within either robotics or within the context of a virtual world. It's squishy, though.

[00:27:30.833] Graham Sack: So actually, I know that research, because Meta partnered with a portion of Johns Hopkins University to do a bunch of that work. And one of the projects that was related to that is in this squishy middle ground, where what they were trying to do is kind of back out, well, if I show you an image, can I figure out what image you're looking at by looking at neural activity? Basically, the real issue is, how do I get BCI-level resolution without having to do an actual implant? And the technique there is an intermediate approach called fNIRS. Don't ask me what it stands for, but it's like an infrared...

[00:28:06.389] Kent Bye: So like functional, like functional infrared, functional near infrared.

[00:28:11.271] Graham Sack: i don't know what the s is spectography or something like that yeah i'm um so what they did in that case is fascinating so they fired basically a ton of lasers at the visual cortex in the back of the head and what you're able to do there is whereas eeg picks up like electrical activity neural spiking right and similarly with you know something like a bci implant like what BlackRock Neurotech or Neuralink or whatever does, they're picking up generally electrical activity. This is looking at actually, well, when a neuron fires, it gets a little bit bigger. So you can just spot enlargement of individual neurons. And so once you're able to do that kind of thing, you kind of know what individual neurons are doing. They're firing very small clusters of neurons. So they did this just by shooting lasers at the back of the head. And what they figured out was so it's really fascinating. It's not that you're backing out like so you show someone an image of a bear and it's not like you get back. They're looking at a bear. What you get back is they're looking at something that is alive because there's very clear different parts of your brain that activate if you're looking at something alive or something inactive. inanimate for reasons that make a lot of sense evolutionarily if you're looking at something that's like moving or not moving if you're looking at something that's like large or not large if you're looking at something that's maybe dangerous or not dangerous if you're looking at something that's fuzzy versus not fuzzy and then you put those concepts together and it's like okay you're looking at something that's like large moving fuzzy and dangerous and we're down to a fairly narrow corridor of things you might be looking at including bear And so what's so interesting about that is it's starting to be a series of like platonic, more constitutive concepts that compose like what are the ideas of the image that you're looking at and how are they related to specific neuron clusters? So it's like I totally 100% agree with you. But then there's this fuzzy middle ground of like, well, I don't know, that's kind of semantic. That is semantic content. That's like at a higher level of abstract thought. But anyway, what I don't want to be is some evangelist for this stuff. I'm not. And what I'm really interested in is the very messy middle ground where some things are transformative and phenomenal and then some things are problematic. And I'm really interested in kind of the prototype experience that real people have with these things. You know, there's a tendency with technology. And I saw this with Facebook. virtual and augmented reality for years we all did is to kind of jump ahead to some distant future like Ready Player One and it's like that's not where we are with VR and AR like whatever Meta wants to tell you that's not where we are and with whatever Neuralink or you know whatever wants to tell you BlackRock whatever wants to tell you actually I think they're pretty good about how they market but whatever they want to tell you, it's not where PCI is yet, right? It's not like going to the dentist. These are prototype devices that are going into people who have really serious and complex medical complications. And it's that messy middle ground. And at the same time, some of the programing is incredible and some of the hardware is incredible. And so it's making sense out of that, you know, in-between space. 
So yeah, but anyway, what I was trying to do there a little bit was to lay out the spectrum. So you're right about EEG, and I agree with you, even though there are more things that you can squeeze out of it than you might imagine. But you're right, it's limited, because it's very aggregated, and where the rubber really meets the road is with granularity, where you're seeing individual neurons and individual neuron clusters. So at the opposite end of the spectrum from EEG is full-on invasive BCI implantation. A step in from that, you get stuff like this laser fNIRS approach, where they're getting almost, not quite, BCI kind of data by looking intensely at one particular part of the brain and looking at laser expansion. In the middle, there's also MRI studies, right, where you're getting, like, flow of blood in the brain, which is highly spatial. You know, it's still at the level of large kind of voxels of the brain, right? So it's not anywhere near as granular as individual neurons; it's millions or maybe billions of neurons that you'll get in an MRI image. But you're getting more specificity, right? And you can work out what happens when someone's watching a film, what parts of their brain are responding. So there's a lot of play in there. fNIRS is kind of an exciting, cool thing. There's a headset that we'll probably try to integrate into the next version of this that uses portable fNIRS. There's a company called Kernel that's developed that. It's not that different than what we're using in an EEG context, kind of a swim cap type thing, but it suddenly gets you a higher resolution of spatial data. There's almost like a quantum problem a little bit with that, like you trade off time specificity against spatial specificity. But, you know, you can start to assemble these things and you can get some pretty powerful insights. But yeah, for the purposes of the piece, we're using both the neural elements of the EEG alongside the artifact. In a pure neurology context, or neuroscience context, you would try to eliminate all the artifact. In an interactive installation or media arts environment, I'm really interested in the accidental muscular artifacts: jaw clenching, moving your brow, turning your head. You know, you get accelerometer and gyroscope data out of this, which is how we were able to do things like, you know, you could move a robotic arm with the motion data of the head and then clench and unclench it using jaw clench, stuff like that. So we can start to actually leverage a lot of that, which maybe in a pure, pure neuroscience context you'd be like, well, it's not neural data, but it's a combination of information. You can also start to stack these things, right? So there's other biometric data that you can intersect with neural data: skin conduction, heart rate, other things, right? And it all starts to build a pretty interesting instrument for performance or for experience. So anyway, I think there's a lot there in the space.
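
The artifact-driven control Graham describes at the end of that answer can be sketched in a few lines: the headset's gyroscope nudges a robot arm as the head turns, and a jaw clench, visible as a burst of high-frequency EMG, toggles the gripper. The RobotArm class, thresholds, and gains below are hypothetical stand-ins, not the installation's actual protocol.

```python
# Sketch of artifact-driven control: head motion (gyro) steers a robot arm,
# a jaw clench (high EMG RMS) toggles the gripper. All names and numbers
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RobotArm:
    pan: float = 0.0        # degrees
    tilt: float = 0.0       # degrees
    gripper_closed: bool = False

    def apply(self, d_pan, d_tilt, clench):
        self.pan = max(-90.0, min(90.0, self.pan + d_pan))
        self.tilt = max(-45.0, min(45.0, self.tilt + d_tilt))
        if clench:                      # each detected clench toggles the hand
            self.gripper_closed = not self.gripper_closed

def step(arm, gyro_yaw, gyro_pitch, emg_rms, dt=0.05,
         gain=0.8, clench_threshold=30.0):
    """Map one sensor frame (gyro in deg/s, EMG RMS in microvolts) to motion."""
    arm.apply(d_pan=gain * gyro_yaw * dt,
              d_tilt=gain * gyro_pitch * dt,
              clench=emg_rms > clench_threshold)
    return arm

arm = RobotArm()
print(step(arm, gyro_yaw=40.0, gyro_pitch=-10.0, emg_rms=55.0))
```

And the "constitutive concepts" idea from the fNIRS discussion earlier in this answer, decoding a handful of attributes and intersecting them to narrow down what someone is looking at, can be illustrated in the same spirit. The attribute decoders are stubbed out here; in a real system each would be a trained classifier, and the lexicon is a made-up toy.

```python
# Toy version of attribute-composition decoding: intersect decoded binary
# attributes against a small lexicon of concepts. Purely illustrative.
LEXICON = {
    "bear":    {"alive", "moving", "large", "fuzzy", "dangerous"},
    "kitten":  {"alive", "moving", "fuzzy"},
    "boulder": {"large"},
    "car":     {"moving", "large", "dangerous"},
}

def decode_attributes(neural_features):
    """Stand-in for per-attribute classifiers over fNIRS/BCI features."""
    return {attr for attr, present in neural_features.items() if present}

def candidate_concepts(neural_features, lexicon=LEXICON):
    """Concepts whose attribute sets match the decoded attributes exactly."""
    decoded = decode_attributes(neural_features)
    return [name for name, attrs in lexicon.items() if attrs == decoded]

# Example: the decoders report alive + moving + large + fuzzy + dangerous.
features = {"alive": True, "moving": True, "large": True,
            "fuzzy": True, "dangerous": True}
print(candidate_concepts(features))  # -> ['bear']
```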

[00:34:00.230] Kent Bye: So yeah, and part of the context of my history of looking at BCI and EEG is the NeuroGaming conference, and so moving from being able to have agency that is in the moment. And what I found is that the sweet spot is in this realm of meditation, where you're looking at a period of time and whether you're reaching a certain brain wave state, but it's more of a rolling average. So you may not know immediately what's going on or when you're achieving it. So a lot of the applications I've seen are more of the biofeedback variety, where there's some way that you can maybe visualize what your brain waves are doing so that you can be even more present within the context of your body. And so I feel like with your project, you're starting to look at this within a theatrical or storytelling context. I can certainly understand the use case of an invasive BCI for someone who may be paralyzed and needs to perhaps use their brain waves to control some robotic arm to do functional tasks that they wouldn't otherwise be able to do. But in the context of a story, like a film from a cinematic tradition, you could basically abstract out all of the BCI controllers and make it into a series of buttons that you're pushing with your finger. And then at that point, it's like, to what degree do these knobs that you're dialing make a compelling interactive experience for you as the controller and experiencer, and also for the audience? Because, you know, we've had interactive films like Bandersnatch on Netflix exploring that. But it feels like, you know, the sweet spot for me, at least for BCI, is how can this start to connect me more to what's happening within the context of my body or my environment, maybe even a virtual space, rather than something that is making the edit go faster or slower when the story is already kind of set. So I'm just curious to hear some of your thoughts on that.
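
The "rolling average" biofeedback that Kent describes here is usually just a smoothed band-power meter: rather than acting on instantaneous EEG, you low-pass the alpha power so a slow meditation-style meter emerges. A minimal sketch, with an assumed smoothing constant and a hypothetical alpha_power input:

```python
# Sketch of rolling-average biofeedback: exponentially smooth per-window
# alpha power into a slow meter. Smoothing constant and inputs are assumed.
class RollingMeter:
    def __init__(self, smoothing=0.05):
        self.smoothing = smoothing   # lower = slower, steadier feedback
        self.level = 0.0

    def update(self, alpha_power):
        """Exponential moving average of per-window alpha power."""
        self.level += self.smoothing * (alpha_power - self.level)
        return self.level

meter = RollingMeter()
for alpha_power in [0.2, 0.3, 0.8, 0.9, 0.85]:   # fake per-second readings
    print(round(meter.update(alpha_power), 3))
```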

[00:35:37.312] Graham Sack: I mean, so there's a lot of other, like, I mean, this was sort of iteration one. There's a lot of different, I don't know, say features or forms of interactivity that are possible. Here I'm working with edit speed, music, scene selection, and I can do things like playback speed. In the next iteration of this, I'm interested in taking the entire film and being able to sweep through it, like using head motion, and then select into particular things using this SSVEP technique, right? So you could kind of sweep through and pull out a scene and then expand it, right? And we can start to lean into a lot of different elements like that by combining artifact with neural activity. So there are more things. I would love to continue this interview, but I'm in the middle of load-out. I'm in the middle of load-out on the installation. So I'm happy to be back on the show, but I'm going to have to run in a minute.

[00:36:19.835] Kent Bye: Okay, great. Just one final question to kind of wrap things up, because I appreciate your time and, you know, there's lots more that we could dive into. But I'm just curious to hear some of your thoughts on the ultimate potential of all these kind of new emerging technologies all coming together, robotics, BCI, XR, cinematic storytelling, theater, and what they might be able to enable.

[00:36:38.131] Graham Sack: Yeah, I mean, I do think they're going to converge, and I think it's sort of necessary to some extent that they converge. I mean, you know, it's fascinating to watch AI algorithms do things independent of us or replicate human behavior or replicate forms of human creativity. It's also extremely alarming. But then there are a lot of opportunities. And one of the interesting things about BCI is it creates these opportunities for interaction with AI. In the broader kind of Neuro Theater version of this project, I'm doing a piece that's based on Solaris by Stanislaw Lem and, of course, the famous Tarkovsky film, which is about a sentient planet that reads people's brainwaves. So the idea was to put together this iconic story about a planet that reads people's brainwaves with the actual technology around brainwave reading. And there, what we're doing is building a virtual character, a kind of synthetic intelligence representing the planetary intelligence of this thing, that's taking in BCI information, EEG information, or maybe fNIRS, from an audience and performers in aggregate, and then responding to it with dynamic AI-driven visualization that has a strong climate theme to it, music and sound. And there it's beginning to, in dialogue, we still will tell the story of the novel, but it's putting in the middle of it this virtual character that is dynamically responding to BCI information, and there we're bringing together AI with brain-computer interface tech. There are a lot of other wild elements of this. I mean, like the robotics space, you know, I was talking to, again, one of the main researchers on this project, sort of saying, well, you know, are we going to get to a point where, like, pilots are flying planes just using brain-computer interface tech? I was like, no, you would never exactly do that, because there are forms of response time, you know, steering adjustment, et cetera, that are much better done with AI. You know, they're going to be radically, radically faster. But what you could have is an interaction between them, right, where it's like, I'm triggering or I'm thinking a higher-level thought about steering, and then the system is responding with the specifics of how to implement that within the device, right? And that's an AI-BCI form of interaction. Or, you know, with home robotics or something, you might think, I want a cup of coffee, and then a Spot robot walks over and gets a cup of coffee, rather than thinking, I want to lift the arm now, I want to close the hand around the cup, right? And so there are levels of abstraction of thought or intentionality that could be enabled with that interaction between BCI and AI. And I think we'll start to see that work. I'm trying to play with it in some of these other projects.
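
The "levels of abstraction" idea at the end of that answer, where BCI only supplies a high-level intent and an AI planner fills in the low-level steps, could look roughly like the sketch below. The intents, primitives, and planner table are hypothetical stand-ins, not any real system described in the interview.

```python
# Sketch of intent-level BCI + AI control: a decoded high-level intent is
# expanded by a planner into low-level robot primitives. Purely illustrative.
PLANS = {
    "fetch_coffee": [
        "navigate(kitchen)", "locate(cup)", "grasp(cup)",
        "navigate(user)", "release(cup)",
    ],
    "open_door": ["navigate(door)", "grasp(handle)", "rotate(handle)", "pull(door)"],
}

def execute_intent(intent, plans=PLANS):
    """Expand a decoded intent into primitives and 'run' them in order."""
    for step in plans.get(intent, []):
        print(f"executing {step}")   # stand-in for real robot commands

# Example: the BCI decodes the thought "I want a cup of coffee".
execute_intent("fetch_coffee")
```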

[00:39:07.478] Kent Bye: awesome well thanks for this little peek into the future and i'll let you get back to breaking down and thanks for joining me here on the podcast to help break it all down i hope i'm back on the show a third time all right thanks a lot okay all right thanks thanks again for listening to this episode of the voices of your podcast and if you enjoy the podcast and please do spread the word tell your friends and consider becoming a member of the patreon this is a this is part of podcast and so i do rely upon donations from people like yourself in order to continue to bring this coverage so you can become a member and donate today at patreon.com slash voices of vr thanks for listening
