Breathe is a Magic Leap mixed reality experience for four people in a shared environment that uses a biometric sensor measuring chest movements to explore how our breathing connects us to each other and to the atmosphere. It's a powerful example of how mixed reality can help us visualize dynamics that were previously invisible, and how being able to see this exchange of breath can help us feel more connected to the world around us.
I had a chance to talk with director and transdisciplinary artist Diego Galafassi, and XR artist and key collaborator Stephen Mangiat, who previously worked at Magic Leap on Tónandi. We talk about the evolution of this project, the experiential design process, and why a particle-effect aesthetic encourages interaction and provides a powerful way to visualize how we exchange air with the people around us.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to The Voices of VR Podcast. So continuing on my series of looking at some of the experiences from Sundance 2020, specifically around the immersive storytelling innovations, technological innovations, but also the experiential design process. So today's episode is with Breathe, which was a Magic Leap experience for four people. You go into this whole beautiful installation that they had set up, and you're wearing the Magic Leap, and the whole story is around the air that we breathe, how we exchange breath with the atmosphere and with each other, and trying to draw a deeper connection between climate change, the change in the atmosphere, and the air that we breathe. I think it was a very powerful experience just to be able to feel like I had this connection to both the people in the experience, but also the world around me. So the director of this experience, Diego Galafassi, he's a transdisciplinary artist. He collaborated with Stephen Mangiat, who's an XR artist, used to be at Magic Leap, and now he's a key collaborator working a lot on the visuals and the particle effects within this experience. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Diego and Stephen happened on Saturday, January 25th, 2020 at the Sundance Film Festival in Park City, Utah. So with that, let's go ahead and dive right in.
[00:01:30.703] Diego Galafassi: My name is Diego Galafassi and I am a transdisciplinary artist. I've been working with different kinds of media, more recently with film documentaries and I just premiered here at Sundance my first AR piece called Breathe.
[00:01:46.713] Stephen Mangiat: My name is Stephen Mangiat. I'm an XR artist. I used to work at Magic Leap, and since I left I've been doing some freelance XR development. I met up with Diego and was a key collaborator on Breathe, working mainly on the visuals with Diego.
[00:02:02.687] Kent Bye: So yeah, maybe you could give a bit more context as to your background and your journey into immersive technologies.
[00:02:09.843] Diego Galafassi: Well, a lot of my practice evolved through interactions with communities. I've been doing a lot of work with participatory theatre, working on questions around global environmental change and climate change, and really exploring the role of the arts in these very critical moments in which we're living and the challenges we're facing globally. So I'm currently working on a documentary film called Life of Lions, and out of some of the conversations and interviews that I've been having in that documentary, the idea of creating an experience that people could come and participate in and have their own journey within that story was really appealing to me, and that's where Breathe, the project that we are presenting here, came to be created. It's really about immersing people in the story that is opened up through the documentary film.
[00:03:05.700] Stephen Mangiat: So my previous work that we talked about a few years ago was Tónandi, which was a collaboration with Sigur Rós. So I've been moving towards music and XR, and for the past few months I've been playing around with particle effects a lot. Diego saw some of those and we started chatting, and it just seemed like the right fit in terms of what I personally would like to be making now and the concept of Breathe. So yeah, in the last few months it really came together well.
[00:03:33.037] Kent Bye: Yeah, so maybe you could just walk through and describe the experience and just kind of give a sense of as people go into the experience like what is the experience and what is the deeper message you're trying to get across.
[00:03:43.064] Diego Galafassi: So in Breathe you arrive at an installation which is open, so you can always see inside. We used the Magic Leap platform and developed a biometric link between your breathing and the Magic Leap, so that we could create all the visuals and the sound design you experience from the information we acquire through that link. So as participants walk in, you'll be wearing a Magic Leap device and a breathing sensor. And you go basically through a journey of three different steps. The first one is about visualizing your breath from the outside, so you are able to see your breathing and the pattern that it creates. Then in the second step, in a multiplayer setup, you connect to the breathing of the other people that you're sharing the experience with. It's a four-player experience. And finally it opens up to a larger ecosystem around you, so you're able to visualize the currents of air and how your breathing is related to these global dynamics that we speak about all the time in terms of climate change. What we wanted to do in this piece is really to connect the personal and the planetary, so that you understand that even in the very small gesture of breathing you are in a conversation with these global dynamics that are part of our everyday life.
[00:05:08.231] Kent Bye: In the experience, you're having this visual depiction of your breath. Some of the earlier experiences that I've seen, like We Live in an Ocean of Air by Marshmallow Laser Feast, had an actual detector on the headset that was detecting the breath coming out of your mouth, so they were able to do this really high-end fluid dynamics visualization of that in a social VR experience. In this experience, it's a little bit more abstracted, in a way that may not be the perfect Navier-Stokes equations, but it at least gives you the sense. Two main things: one is that it's connected to the sensor wrapped around your chest, so you're measuring somehow the compression of your chest and then trying to match the visual depiction of that chest movement onto these particles. And you're interacting with them with the other people around you, and eventually you do a little bit more stylized spatial storytelling with the particle effects. But let's focus just on the breath right now. Maybe talk about that process of trying to connect up some of those effects with the actual breath.
[00:06:14.784] Stephen Mangiat: Yeah, so when Diego was showing me references, many of them were fluid simulations, and I was pushing back. I said, oh, we probably can't do that. And like any good director, he said, well, but that's what I want. So it was my job to sort of give that impression, even if developing a 3D fluid simulation was a little bit out of scope. So it was a lot of fun. There's tools like the VFX Graph that's built into Unity now that give you access to GPU particles. And using various tricks, you can give the impression of air, which is a fluid, but it's a compressible fluid, so it's difficult to model in 3D. But yeah, using various tricks, you can kind of give that impression. So the sensor itself gives you a volume estimate. And mainly the visuals are driven by exhale and inhale, so the derivative of that value. And then there's just a lot of tuning and making it feel like something that you would expect air to look like.
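Stephen mentions that the sensor gives a volume estimate and that the visuals are driven mainly by its derivative, i.e. the inhale/exhale signal. As a rough illustration of that idea only (a Python sketch, not the team's actual Unity/VFX Graph code; the `dt`, `alpha`, and `threshold` values here are made-up tuning parameters):

```python
# Sketch: turning a chest-sensor volume estimate into an inhale/exhale
# signal by taking its derivative, as described in the interview.
# All parameter values are illustrative, not the project's tuning.

def breath_flow(volumes, dt=0.1, alpha=0.3):
    """Finite-difference derivative of the volume signal, with simple
    exponential smoothing to tame sensor noise.
    Positive values = inhaling, negative = exhaling."""
    flow = []
    smoothed = 0.0
    prev = volumes[0]
    for v in volumes[1:]:
        raw = (v - prev) / dt                    # instantaneous rate of change
        smoothed = alpha * raw + (1 - alpha) * smoothed
        flow.append(smoothed)
        prev = v
    return flow

def phase(f, threshold=0.05):
    """Label one smoothed flow sample as inhale, exhale, or pause."""
    if f > threshold:
        return "inhale"
    if f < -threshold:
        return "exhale"
    return "pause"

# A rising-then-falling volume trace reads as inhale, then exhale:
volumes = [0.0, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2]
flow = breath_flow(volumes)
```

From this one smoothed value you can also derive the qualities Diego mentions next, such as whether a breath was slow or fast (magnitude of the flow) and shallow or deep (range of the volume trace).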
[00:07:15.160] Diego Galafassi: Yeah, we explored a lot of different kinds of sensors. There are so many different ways to measure breathing, from the heat that comes out in front of your face to other kinds of methods, and we landed on this sensor because we wanted an experience that was very tight, so that your agency within the piece was very clear. And as Stephen was saying, it gives you a value, basically a volume of air. It's an abstraction, but that inhale and exhale becomes very, very clear. And just out of that parameter, you can abstract quite a lot of things: it tells us if an exhale was slow or fast, or if any given breath was shallow or deep. All of those become really interesting parameters to play with. And when I started to look into Steve's work it was really inspirational, because one of the very early ideas in the piece was, exactly as you said, to connect to this ocean of air that we live in and to realize that we do live inside a fluid environment. If you look at a human being from the perspective of air, our bodies don't end at the level of our skin, but are completely spread across the whole world. The air you are breathing right now will eventually spread across the whole world, and in the next breath that you take there might be particles that were inside your first breath on this planet. So this idea that our bodies are much more extended was something that I really wanted to tap into, and for me that sort of fluid visual was really an essential part of it. I worked with different teams along the way and I was never quite satisfied until I came across the work that Steve has been doing, and it's been a really, really fruitful collaboration.
[00:08:53.813] Stephen Mangiat: And from early on, Diego was interested in hand interaction as well. I think that's a big part of it, because you're creating this synesthetic experience, and if you see these particles come out, there's some instinctual urge to reach out: can I affect these if I touch them? And I think having that interaction also be very air-like was important to making it feel realistic.
[00:09:17.605] Kent Bye: Yeah, in my experience, I had a couple of profound moments of just tuning into my breath and seeing my breath come in and out. And very similarly to Marshmallow Laser Feast's We Live in an Ocean of Air, being in this experience feels almost like a meditation app that allows you to visualize your breath, to take something that's invisible, connect deeply to it, and give it a visual reference. But I also had a couple of times where either the sensor lost connection, or there was a gap between what I felt like I was doing and what I was seeing. My breath maybe dropped out for a few moments, and then it came back. I wanted it to be a perfect simulation of my breath, but obviously you're doing a symbolic abstraction. So maybe talk about that process of trying to detect or quantify all the different varieties of body types, the ways to do quality assurance, and the ways to make sure that you can tune this process: what can you shift in order to adjust it?
[00:10:19.131] Diego Galafassi: Yes, the project was developed in a residency that I've been having at the Phi Centre in Montreal. There's a whole community around the Phi Centre, and they have XR exhibitions, so there's a lot of public and audiences that come through on a daily basis, and being there was really fruitful as a way to actually experiment and prototype the piece. So the sensor that we used was in a way prototyped there at the Phi Centre, and we explored different ways to wear it, what it could look and feel like for different body types, and what the onboarding process would be like. We tried to take care of all of those aspects and features, but as you said, there are a lot of things that we still need to iron out. All the sensors are connected to a server through a Raspberry Pi and Bluetooth, so the whole system is quite complex and we need to just continue improving on that. But it's been surprisingly robust so far at the festival.
[00:11:13.795] Stephen Mangiat: I just want to say, having worked with Magic Leap for many years, the amount of setup that was required to make this happen was remarkable. There was the breathing sensor, a built-in headset to really give immersive sound, the whole installation, which had fans blowing on fabric around you. It was just so much to put together. And it was a multiplayer experience. The Phi Centre really helped there; we worked with a team in France, Nova Lab, to create a multiplayer experience, which has only been done a few times on Magic Leap. So that's another thing I should say: when we were first talking about it, I was like, oh, I don't know, that's pretty difficult to do. But I'm glad Diego pushed, because I think that's one of the strongest parts of the piece, when you can breathe with other people together.
[00:11:57.120] Kent Bye: I've had a chance to do at least three or four different social VR experiences here at the Sundance New Frontier. And what I'm taking away is that sometimes the other people that you're with can shift the experience that you have. There are group dynamics that are emergent and difficult to predict, so depending on what group you have, people may have a completely different experience from what I was going through. I walked in there wanting to have this big meditative experience, and one of the people there was almost like a teenage gamer who was trying to break the system, moving around and jumping in between people. He was trying to push the limits of the technology, but I could see that he was really engaged with it and having his own experience. The part that for me was super powerful, though, was that you have your chance to connect to your own breath, but then you have this opportunity to just look at other people and share your breath with each other. Even after all that erratic energy, there was this moment of calming down, looking at each other, and sharing breaths with each other. Being able to take something invisible that's happening and give it some sort of visual depiction was for me one of the most powerful aspects of this experience: to really take home that level of interdependence and interconnectivity that we have through the breath.
[00:13:13.737] Diego Galafassi: Yeah, and we wanted to work with a very delicate aspect of visuals, to create a kind of memorable and visceral experience of these visuals, so that you can walk away and perhaps still see particles around you. And to me, working through this process, maybe because I was looking at it for too long, I really had that experience of carrying those visuals with me. This was one of the key reasons why we chose to work with the AR platform, regardless of how difficult it has been. I think it's an important part of the piece to be able to see others in your space and have those unpredictable interactions, right? And what you said also makes me think of the importance of the way we frame these experiences, the whole onboarding. The topic itself lends itself to so many different communities and conversations: it could be a piece that speaks into well-being aspects, or it can be a piece that speaks into environmental questions. And I think the way you frame it, the way you open, can really shift the kinds of interactions that can happen inside. So it is important to think about that onboarding as well. And again, the Phi Centre did a wonderful job in bringing the experience that they have in creating a very smooth transition into the piece.
[00:14:27.635] Kent Bye: I'm wondering if you could expand on the mechanics of trying to visualize the exchange of breath.
[00:14:32.920] Stephen Mangiat: So what's happening there is you're exhaling particles, but also as you approach someone else, there's an attraction towards them, and you see their breath as well, and that breath gets attracted to you. And what's cool is if you get more than two people together, you can start to see some swirling effects happen. People are only in there for a few minutes, and some people, like you're saying, are more interactive than others. But you could really start playing with it, where you can actually catch someone's breath, and if you move back it'll follow you, which is kind of fun to do. And I think that idea can be expanded even further, where people just start playing in a more open sense with their breath and having different effects happen. Yeah, it's really interesting to play with that.
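The attraction mechanic Stephen describes, exhaled particles pulled toward nearby participants, can be sketched as a simple per-particle force with inverse-square falloff. This is an illustrative Python sketch of that idea only, not the shipped VFX Graph setup; the `strength` and `drag` constants are invented for the example:

```python
# Sketch: a breath particle attracted toward nearby participants, so
# breath drifts between people and can swirl when several stand close.
# Parameters are illustrative, not the project's actual tuning.

def step_particle(pos, vel, people, dt=0.016, strength=0.5, drag=0.98):
    """One Euler integration step for a breath particle in 3D.
    pos, vel: [x, y, z] lists; people: list of (x, y, z) positions."""
    ax = ay = az = 0.0
    for px, py, pz in people:
        dx, dy, dz = px - pos[0], py - pos[1], pz - pos[2]
        d2 = dx * dx + dy * dy + dz * dz + 1e-6   # avoid division by zero
        inv = strength / d2                        # inverse-square falloff
        ax += dx * inv
        ay += dy * inv
        az += dz * inv
    vel = [(vel[0] + ax * dt) * drag,              # drag keeps motion air-like
           (vel[1] + ay * dt) * drag,
           (vel[2] + az * dt) * drag]
    pos = [pos[i] + vel[i] * dt for i in range(3)]
    return pos, vel
```

With two or more attractors, particles that overshoot one person get pulled back by another, which is one way the swirling between multiple players can emerge; stepping back moves the attractor, so your own cloud follows you, matching the "catch someone's breath" behavior described above.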
[00:15:18.642] Kent Bye: Well, I know the Phi Centre; Myriam, I see her all over the world at all these different film festivals helping to curate, and in Venice they had a whole pop-up showing a number of different immersive experiences from artists, and they brought in Punchdrunk to show one of their immersive theater 360-degree video experiences. But the other thing that was probably the most striking about this experience was the amount of work you were doing to onboard people in a way that was, I'd say, scalable: to have maybe one person there, with some self-direction for people to start putting on this equipment themselves, rather than having a docent there for each individual person. People can start the process, and if they need help they can be assisted, but you're trying to accelerate the process of onboarding and offboarding, because that's a big friction for throughput. If you can get more people through it, then it makes more economic sense, to not only lower the cost but to have a system that could be replicated. In this case in particular, you have a breath sensor that probably not a lot of people have used, and you're stepping through this, putting on all this equipment. There's this attention towards trying to create an onboarding experience that is a little bit more self-contained. So maybe we could just talk about that design process.
[00:16:32.220] Diego Galafassi: So what we have in Breathe is a four-user experience, and we designed an onboarding system that has four iPads. First you get introduced to the piece and given brief information about what you're about to experience, and then you're asked to step into one of these four stations. There you go step by step, first putting on your breath sensor, then onboarding with the Magic Leap, and finally you get to a video that you can watch until all the others are ready, because we want everybody to start in a synchronized way. There's one docent that is exclusively dedicated to monitoring the experience, so we know which Magic Leap is connected to which sensor, and whether it's well calibrated, and all of that, and another docent that goes around and helps people if they have any particular issues. As we were producing the piece, we've been thinking about user experience and onboarding and offboarding and how that could be as smooth as possible, and building in those ideas of scale so that multiple people could onboard at the same time. It's really interesting to notice, though, that even though our iPads have very clear instructions, step by step, very simple steps, I've noticed that people tend to step up to the station, immediately grab the device, and try to put it on their head. So even though it's a very new field, I think people are already developing some habits in terms of how they should start and onboard in these experiences.
So our docents have to constantly say: pay attention to the instructions, follow the steps. And the same thing at the end: as you exit the piece, you are asked to go back to your iPad, and there are instructions on how to remove the headset, but also information about the air that you're breathing right now and how it will travel across the world. People rarely stop to think about that; they just simply put the headset down and walk away. So there are interesting dynamics and ways to think about the habits that might already exist in terms of how people put those devices on.
[00:18:27.509] Stephen Mangiat: Yeah, the show running was really great on this project. There was an iPad connected to a server that let you know which device was connected to which sensor and where everyone was in the experience. And the experience was actually started by the docent on the iPad, which is really helpful, because when people go off wearing a headset, you can't see what they're seeing, and it becomes a little bit nerve-wracking. But we had that system with the iPads where you can understand where everyone is and how things are working. And actually, on the iPads, people walking by can see their breath value, which is really interesting. It lets people that aren't inside the experience participate also, which is pretty cool.
[00:19:09.950] Kent Bye: Well, at the Impact Reality Summit, Breathe was one of the award winners for the climate change category. And I just ran into somebody, and they were kind of surprised that Breathe was a climate change piece. I actually had to think for a second about how it was a climate change piece, but then I had a story. But what is the story for how this immersive experience of Breathe is related to the larger issue of climate change?
[00:19:32.628] Diego Galafassi: Well, one of the very interesting conversations I had along the way, through the research for this piece, was with a philosopher called David Abram, who has written a lot about the atmosphere and air. He asked me the question: isn't climate change perhaps the simple consequence of our forgetting that we live inside the atmosphere? We don't think of the medium in which we live as actually part of us, right? And so we treat the atmosphere almost like a dump site. We didn't want to speak into climate change from a science communication perspective; we wanted to open up that conversation in a very intimate and personal way, and breathing for me became a really interesting portal to that. It's our direct link to the atmosphere. Every moment, you are in a constant conversation with the atmosphere. So we wanted to open up the conversation in that way, to develop perhaps a sensibility towards this medium that permeates everything that is alive right now. For me, the piece is about reminding us that we live inside an atmosphere, and at the end of the piece we bring some of that through the voiceover, where we ask a question: what atmosphere will future generations breathe? It's really about the idea that the air we're breathing right now has reached a critical point, where scientists believe that we have about a decade to turn around some of the key dynamics that have been happening. And how do we start that reflection, not from a conversation about the globe, but from everyday life and our relationship to the natural world as we experience it in an everyday form?
[00:21:15.530] Stephen Mangiat: Yeah, I live in California and air quality index is something that people are becoming more familiar with. Just the quality of the air with wildfires and the effects of climate change. I think people are becoming more aware of how the air is being affected. And so being able to point towards that was also a good part of it.
[00:21:35.217] Diego Galafassi: Also, the piece is drawing on data, as Steve was saying. When there are four users inside the experience, the iPads are displaying the breathing data from the different users, so whoever is outside can see their patterns of breathing. But on the screen we are also showing information about the local environment: the direction and speed of the wind, the air quality index, and the CO2 concentrations as well. And we're looking into having those parameters drive some of the visuals that you see, so that the piece can perhaps also hold the memory of a climate that is changing. Depending on when or where you experience it, the piece will look different. So the idea is really to create a piece that is adaptive and is kind of a memory of a changing climate.
[00:22:20.543] Stephen Mangiat: We actually noticed that the air in Park City changed once Sundance started, which was pretty interesting.
[00:22:25.766] Kent Bye: How was that? You were monitoring the numbers or what changed?
[00:22:29.268] Stephen Mangiat: Yeah, we were looking at the numbers. Yeah, we don't have any sort of scientific evidence.
[00:22:35.932] Diego Galafassi: We are drawing real-time data from the sensors. Every city has air quality sensors, so you could find that information available. So our server is drawing that information and displaying it on the iPads. So before, when we were testing and doing the press preview of the piece, I think air quality was about 10 points. And when it started, it went up to 50.
[00:22:56.468] Kent Bye: So it's... There's a lot of Lyfts, a lot of Ubers, a lot of people driving around.
[00:22:59.911] Diego Galafassi: Cars going around. And so that's, yeah, again, the piece really reflects the place and the environment in which it's being shown.
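Diego mentions wanting the live environmental readings, like the air quality index values the team watched change during Sundance, to drive the visuals. As a hedged illustration of what such a mapping could look like (a hypothetical Python sketch; the breakpoint of 150 is loosely borrowed from the US AQI "unhealthy" category and is not the project's actual tuning):

```python
# Hypothetical sketch: mapping a live AQI reading to a normalized
# visual parameter (e.g. haze or particle density), so the piece
# reflects the local environment. The 150 breakpoint is illustrative.

def aqi_to_haze(aqi):
    """Linearly map an AQI reading to a 0..1 intensity,
    saturating at AQI 150 (the US 'unhealthy' threshold)."""
    return max(0.0, min(1.0, aqi / 150.0))
```

Under this mapping, the shift Diego describes from roughly 10 during the press preview to around 50 once the festival started would move the parameter from near zero to about a third of its range, a visible but not saturated change.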
[00:23:09.536] Kent Bye: Well, when you think about issues like climate change, I think of Al Gore and his An Inconvenient Truth, showing you a lot of infographics, trying to convert the science into a story, but through infographic data visualization, telling the story through numbers. I think the difference about this piece is that it's much more experiential: you get this sense of the interconnectedness with each other, and that, I think, is a lot different. The way I would answer the question of how I see it as a climate change piece is that, for me, there's this crisis of consciousness that we are not connected to the world around us. We're disconnected. It's almost like in economics: if you destroy a rainforest, then the GDP can go up. The algorithms that run our society have what economics calls an externality, so that you can destroy the earth and have higher profits. And if that isn't integrated into the feedback loop of the primary drivers of the culture, the economics and profit and corporations, then you have a situation where all these externalities create a tragedy of the commons: we're just destroying these shared resources. For me, that calls for a shift from a philosophical perspective like substance metaphysics, which says the underlying base of reality is objects that are separate from each other, to something that's much more process-oriented or relational, process philosophy or relational metaphysics, which tries to draw out the interconnectedness and interdependence, to see how we are fundamentally interconnected rather than being these separate islands.
It's like the myth of the individual that is separate. I think of Yuval Noah Harari's Sapiens, where he talks about the thing humans do that's different: this ability to collaborate and to have all this knowledge and communication, so that we have this legacy of how we make sense of the world. But yet something has gone wrong with the limits of reductive materialism, and there's this need to see how we're able to be interconnected in a more relational way. And I feel like the medium of both virtual and augmented reality is this new communications medium that allows us to tell those relational stories in a way that a more linear written text with language doesn't quite encapsulate.
[00:25:26.989] Diego Galafassi: Yeah, I love the way you framed it. I think it's exactly where this project was born. The film that I'm doing right now is about exploring ideas of process philosophy and how we can understand the world not as made of bits that come together to coalesce into the structures that we see, but actually as a world made of fluxes, where all the entities that we see are actually just momentary arrangements in that flux. And I think the atmosphere and air became a really interesting way to speak into that, but in an experiential way. The way I think about climate change is precisely as you said: climate change is not a technical problem that we could just solve with one technology or one simple thing. It's actually much more profound. It's perhaps an opening for us to really reassess and reimagine the way we relate to the rest of the living world. To me that is a much more profound and deeper kind of change, and I think there are a lot of possibilities within VR and AR to help us make that transition. It's really a cultural shift that needs to be addressed in a quite delicate way, and I think stories like this can help us move in that direction: create visuals, create metaphors that can help us perceive ourselves in a different way.
[00:26:42.214] Stephen Mangiat: Yeah, what I really like about XR, spatial computing, whatever you want to call it, is the fact that it really can highlight your senses and give you sensory superpowers and connect you to your senses again. Because I think a lot of times these days we just consume so much that our senses become dull. We don't really notice how we're feeling in particular situations, how we're responding to the environment that we're in. And so if you can use technology to help remind you of how you're interacting with the air around you, with people around you, with how you're feeling things, I think, to me, that's what's powerful about this medium.
[00:27:19.889] Kent Bye: So it sounds like you're somewhat familiar with process philosophy and Alfred North Whitehead. For me, I see Whitehead as somebody who did this epic survey back through the history of philosophy to see what a philosophy from the perspective of organism and biology would look like, and I think that's been at the roots of a lot of this ecological consciousness that has been coming up in the last hundred years. Whitehead would talk about these concrescences, or those fluxes, and for me there are a lot of similarities to Chinese philosophy and other non-Western ways of thinking about things. There's this other shift where, rather than the quantities and the numbers, it feels like more of a spectrum of qualities, an unfolding process, driven towards these experiences that are unfolding in contexts that are connected to each other. But, you know, it feels like there are these new metaphors, new spatial metaphors, that we can start to use to get a grasp of these new concepts that maybe language doesn't fully describe.
[00:28:16.957] Diego Galafassi: Well, I could say one thing about this: I think this is really at the heart of the cultural shift that we're talking about, to perhaps reassess the way we perceive the world. I mentioned before that I've been working a lot with an anthropologist called Tim Ingold, and he writes a lot about lines. The way we normally think about lines is as a container, a line that divides two different things, so we tend to see the world as made of separate entities. Arguably even physics has that approach, right? You're trying to find these elemental particles that coalesce to form the structures that we see. But there are other philosophies and traditions that have looked at the world more as lines, as lines of growth: how do things grow, how do things move, and how do they bind to one another to create larger structures? It's a whole set of new metaphors: instead of chains that are connected, to think about knots, how things are knotted together, and how the structures we perceive are kind of temporary structures that emerge and dissolve. Perceiving the world in that manner is really at the heart of these shifts that we're seeing. And again, I think there's so much potential in the technologies that we're working with to actually explore those questions in quite a micro way, and to find stories that can help us do that.
[00:29:40.516] Stephen Mangiat: Yeah, I think particles, for that reason, are a really natural fit for XR, because they're so dynamic. You can have a lot of interaction with very simple inputs. Towards the end of the experience, we form images out of the particles, so you're able to rearrange the substance that defines everything in different ways. Particles just make a lot of sense for this medium.
[00:30:07.519] Kent Bye: Yeah, that's interesting, because when you think about artificial intelligence and machine learning, you're trying to determine how each of the different entities are related to each other, and you have all sorts of higher levels of mathematics and geometry to define those relationships as features. But with these primitives, which in some ways are like atoms or molecules, it's more about the relationships between them than the individuals themselves. It becomes a story told through the aggregation of a whole swarm, the fluid dynamics of agency. You take an action and you expect what's going to happen, but it's always slightly different, because there are all these complicated ways they're interacting that you can't fully imagine in your mind. So it has this real inherent novelty as you play with it and learn the dynamics of these hidden fields that we can't see. But the thing that was really striking about this piece was starting to tell stories with these forms and shapes. As you transition from one to the next, you're using those objects to form metaphoric images. Sometimes it's completely abstract, and sometimes you go from concrete forms, showing a fingerprint for example, to something much more abstract. So you have this fluctuation between order and chaos going in and out, creating contrast, while trying to use those shapes and forms to tell a larger story.
[00:31:36.212] Stephen Mangiat: Rather than place an object in the space, it's like you're turning the light on the space itself. It's like you can see the field that's between you and the rest of your room, for example.
[00:31:48.724] Diego Galafassi: It ties really nicely into the whole concept of the piece and the idea that all of these forms are made of one single field. The same particles that you see in your breathing from the beginning of the piece, you see manifesting as different images of cities, or even crossing different scales, going from a leaf to a tree, and so on. It's this idea of having one continuous field, which I think is really helpful for people who come through to understand that all of this is part of one continuous field and not separate entities.
[00:32:21.384] Stephen Mangiat: Just one more thing on the particles: there's so much media that creates our preconceptions about what is magical, what a hologram looks like, that we're so used to from, you know, watching a Marvel movie or something like that. And I think the particles fit that idea we have about what this ethereal imagery in your space should look like. So people respond well to particles, I think, for that reason, because they come to the piece with all that other context.
[00:32:53.902] Kent Bye: I think in the early days of XR, you're building on top of game engines, and so in a lot of ways you have very concrete objects, 3D modeling, and a whole workflow with a very fixed Cartesian geometry, which carries a very specific mindset. I feel like there are going to be new frameworks, through either three.js or WebGL or openFrameworks, a little bit more of the creative coding community, that start to operate more in mathematical structures that then give these larger shapes with different particle effects. It's a little bit different than thinking in terms of concrete 3D objects. So I feel like there's starting to be a larger shift within the XR industry to play with these abstractions and forms and maybe get away from a little bit of that Cartesian mindset.
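[As a rough illustration of the particle-field approach discussed here, the following is a minimal sketch of particles defined by simple mathematics rather than fixed 3D models: each particle is repelled by a hand position as a continuous field and springs back to a rest position so the field reforms. All names, parameters, and constants are illustrative assumptions, not taken from the Breathe implementation.]

```python
import math

class Particle:
    """One particle with a rest position it slowly drifts back to."""
    def __init__(self, x, y):
        self.rest = (x, y)        # home position the particle returns to
        self.pos = [x, y]
        self.vel = [0.0, 0.0]

def step(particles, hand, dt=0.016, radius=0.5, push=4.0, spring=2.0, damping=0.9):
    """Advance the field one frame: repel particles near `hand`, spring them home."""
    hx, hy = hand
    for p in particles:
        dx, dy = p.pos[0] - hx, p.pos[1] - hy
        dist = math.hypot(dx, dy)
        if 0 < dist < radius:
            # Radial push that fades with distance, so the swipe feels soft
            f = push * (1.0 - dist / radius) / dist
            p.vel[0] += f * dx * dt
            p.vel[1] += f * dy * dt
        # Weak spring back toward the rest position reforms the field
        p.vel[0] += (p.rest[0] - p.pos[0]) * spring * dt
        p.vel[1] += (p.rest[1] - p.pos[1]) * spring * dt
        p.vel[0] *= damping
        p.vel[1] *= damping
        p.pos[0] += p.vel[0]
        p.pos[1] += p.vel[1]

# A small grid of particles with a "hand" held in the middle of the field
field = [Particle(x * 0.2, y * 0.2) for x in range(10) for y in range(10)]
for frame in range(60):
    step(field, hand=(1.0, 1.0))
```

[Particles near the hand disperse while distant ones stay put, which is the kind of interaction the conversation turns to next: a field you move through rather than an object you grab.]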
[00:33:42.348] Stephen Mangiat: Interaction is also easier with particles, because picking up objects when you don't have tactile feedback becomes difficult. But swiping your hand through a field of particles and seeing it disperse feels natural, so you don't have that limitation on the interaction.

Kent Bye: Well, what's next for Breathe?
[00:34:05.490] Diego Galafassi: So we're working with the Phi Center on a distribution plan for Breathe. We're really keen on bringing it into policy spaces, galleries, and festivals, and it will be showing in Montreal for some months in the spring. We're really curious about what kind of conversations could be created around a piece like this, whether we can join forces with other kinds of processes that happen around climate change, or conversations about air pollution and so on, and just bring that to as many communities as possible.
[00:34:36.473] Kent Bye: And finally, what do you each think is the ultimate potential of spatial computing and immersive storytelling and what it might be able to enable?
[00:34:48.299] Stephen Mangiat: Yeah, that's a big question. I touched on it a bit earlier, about connecting with your senses, giving people superpowers. I think the term spatial computing is a little bit limiting, because it's one aspect of it, but it doesn't speak to the sensory aspect. Being able to give people experiences that are much different from what they can have with their phones or their television or movies is really what I find interesting, and I think art is a way to push towards those areas. There are going to be things like web browsing, we know that's going to happen, but to me what I find exciting are the experiences that we don't know yet.
[00:35:32.630] Kent Bye: Yeah, just a quick note on that. I think the language is continuing to evolve and grow, from virtual reality, augmented reality, and XR to spatial computing and immersive computing, and we shouldn't get too fixed on the language itself. But that's a good point: spatial computing very much defines the visual component, but there's a whole other level of sensory computing, you could arguably say. So just to put that note out there: as we move forward and emphasize different things, the language around it is going to evolve as well. So thank you for that.
[00:36:07.395] Diego Galafassi: Yeah, I agree with Steve. I think there's great potential in terms of making visible what is already here, and helping us reconnect to the rest of the living world and to each other. One of the limitations, perhaps, that a friend of mine mentioned is that with AR or XR you cannot see people's eyes, and I think that's a really major barrier, in a way, for creating the kind of connections that we would love to explore. But seeing people interacting in the piece created some very interesting dynamics between them, and seeing them exiting and starting conversations, that for me was really inspirational. So as a medium that can help us understand ourselves and understand the world we live in in a different way, it's a really rich and really powerful space to be working in.
[00:36:55.847] Kent Bye: Okay, is there anything else that's left unsaid that you'd like to say to the members of the community?
[00:37:02.894] Diego Galafassi: Well, just thank you for the opportunity. This has been a really important and very thought-provoking chat. So thanks for doing the work that you do. And I just hope we can connect to the community out there and just keep sharing and learning from each one's experiences.
[00:37:18.678] Stephen Mangiat: Yeah, thank you very much.
[00:37:19.479] Kent Bye: Yeah, well, Breathe is one of my favorite experiences here at Sundance. I just think that the deeper message that it has is so important of what we need right now in the world, but also the way that it gives you this experience of that interconnectivity and interrelationship that we have to not only each other, but to the entire world. So thank you for the work that you're doing.
[00:37:38.990] Diego Galafassi: So thanks. Thank you. Thanks.
[00:37:42.135] Kent Bye: So that was Diego Galafassi, a transdisciplinary artist and the director of Breathe, as well as Stephen Mangiat, an XR artist who used to work at Magic Leap and is now a key collaborator on Breathe, working on a lot of the visuals. So I have a number of takeaways from this interview. First of all, the deeper philosophical message here is that you are not just an individual separated from the world around you; you're actually intimately interconnected not only to the world around you, but also to the people around you. This was a powerful experience in the sense that, you know, how can you give someone an immersive experience that makes them feel connected to other people? They did it with these particle effects driven by a chest sensor: as you're breathing in and your chest is expanding, it's roughly translating your breathing in and out. And you have a couple of ways of interacting with these particles. First, you can use your hand to interact with them. But when you get close enough to another person, you start to breathe in their air, which is the air that they've just breathed out. It makes the connection of how we are connected to each other, like the Marshmallow Laser Feast piece called We Live in an Ocean of Air: like fish swimming in water, we're kind of swimming in oxygen and all these other gases that we're breathing, and we don't necessarily think about that. But we are kind of like those fish, in the sense that we're breathing these different gases that have these different fluid dynamics.
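[The chest-sensor mapping described here, chest expansion roughly translated into inhales and exhales, could be sketched like this. This is purely illustrative: the function name, the sample values, and the smoothing approach are my assumptions, not the production pipeline.]

```python
def breath_phases(samples, alpha=0.3):
    """Classify each chest-stretch sample as 'inhale' or 'exhale'.

    `samples` is a hypothetical stream of raw stretch values from a
    chest strap; larger values mean a more expanded chest. An
    exponential moving average smooths sensor noise, and the sign of
    the smoothed signal's change picks the phase.
    """
    phases = []
    smoothed = samples[0]
    prev = smoothed
    for raw in samples:
        smoothed = alpha * raw + (1 - alpha) * smoothed
        phases.append("inhale" if smoothed >= prev else "exhale")
        prev = smoothed
    return phases

# Simulated readings: chest expands, then contracts
readings = [0.1, 0.3, 0.5, 0.7, 0.6, 0.4, 0.2]
print(breath_phases(readings))
```

[The smoothing introduces a small lag, which fits the "roughly correlated" feel described above: the visuals track the breath's overall rise and fall rather than every sensor jitter.]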
And so, just to connect those dots: not only are you sharing these gases with the people around you, you're also connected to the atmosphere. They tried to connect the piece to the current weather patterns and air quality and all of that. For me, when I was in the experience, I couldn't necessarily discern any connection to the outside world. I know they said that afterwards, watching the iPads, you start to get some of that additional metadata, but for me it was more of a visceral experience of feeling connected to other people, mediated through this augmented reality experience. One of the other big things of note was that the installation was absolutely beautiful and amazing, with these big, somewhat transparent white sheets. It looked like a giant jellyfish in some ways; with wind blowing down, it was kind of pulsating. It created a real spectacle that drew people in to see what was happening. A lot of the work on that by the Phi Center was just really top-notch, one of the best, most beautiful installations that I saw this year at Sundance. The onboarding and offboarding were also interesting, because of how they had things set up on an iPad. When I actually did it, I was like, yeah, yeah, yeah, I know what I'm doing, and kept hitting next. But there were certain calibration phases where I was supposed to first put the sensor around my chest, and I had no way of going backwards, so I missed some of them. So having some sort of onboarding that steps people through the proper way to put on the headset matters. You know, they had Magic Leap headsets there, and they had to get my prescription.
And in terms of location-based entertainment experiences, the fact that you have to get someone's prescription and fit the right insert slows down the throughput. Hopefully the next versions of the Magic Leap won't have to be so specific about prescriptions, because a lot of times what ends up happening is that you don't get the prescription you need, just because it's so much of a hassle: to cover everybody's prescription, an operator would have to invest in, you know, four complete sets of all the different inserts and manage all of that. So it's a bit of a logistical nightmare for anybody running these as location-based entertainment. You also have to put on the headset without tilting it in any way; they had to try to keep it horizontal, just because of the way they had set it all up. So there seemed to be workarounds for the limitations of the technology, and hopefully eventually all of this will be solved, but these are some of the things I noticed in terms of the challenges, the throughput, and having to recalibrate everything if it loses tracking. But overall, this is the type of experience where I had a really strong, direct experience with the particle effects. One of the things Stephen was saying is that particle effects actually allow you a high amount of agency, because you're able to touch the particles, and because they're these fluid-dynamics types of simulations, moving out of the way before you actually touch them, they mimic what you would expect. You don't have a lot of haptic feedback, but you can still feel like you're interacting with these air particles.
Because of that, it gives you a deeper sense of embodied presence and agency as you interact with the environment: the world around you is plausibly reacting to you, and your sense of agency gets expressed through these different objects. Whereas with static 3D objects, you don't necessarily get that same effect of moving your hand through a space and seeing dynamic results. So I thought this was a powerful experience, and hopefully it's able to have a good run out there. I know that the Phi Center out of Montreal is going to be showing it around. It also has a really important message, and a design challenge: how do you get people to feel connected to a larger story? They're trying to do that by talking about aspects of climate change, which can feel so abstract, buried in models of mathematical abstractions. How can you give people a direct, embodied experience that tells different aspects of this particular story? So anyway, if you have a chance to see Breathe at some point, I highly recommend it. Also, pulling off these multi-user types of Magic Leap experiences is something I haven't seen a lot of, so I'm sure there were a lot of technical hurdles they had to overcome. And with onboarding and offboarding, just trying to really streamline it so that experiences like this can get out there and get people good throughput. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast.
And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.