#1051: Co-Evolution of the Volumetric Performance Toolbox and “Suga” Dance Performance about the Transatlantic Slave Trade in the Caribbean

Suga’: A Live Virtual Dance Performance was a live performance in Mozilla Hubs that premiered at SIGGRAPH 2021 and also screened at Sundance New Frontier 2022. The project tells a story of rebellion, resistance, and resilience during the Transatlantic Slave Trade in the Caribbean by blending the genres of live dance performance, guided tour, and environmental storytelling. The team used OpenHeritage3D LIDAR scans of the Annaberg Sugar Mill in St. John, U.S. Virgin Islands, downconverted into point-cloud representations that can be viewed within a WebXR social VR context in Mozilla Hubs.

It is also a project that co-evolved with the development of a self-contained, low-cost volumetric capture solution called the Volumetric Performance Toolbox, which grew out of Eyebeam’s Rapid Response residency program in a collaboration between Suga’ project lead Valencia James and Glowbox’s Thomas Wester. The goal of the Volumetric Performance Toolbox was an affordable solution for performers that doesn’t require any specific hardware (such as the Windows PC that an Azure Kinect needs). It includes a Raspberry Pi 4, an Intel RealSense camera, and a microphone, all mounted on a small tripod, and it connects directly to WiFi to stream volumetric performance data over the Internet, which is then displayed within a Mozilla Hubs scene.

I had a chance to unpack the journey of creating both the Volumetric Performance Toolbox and Suga’ with James, Wester, and collaborator Simon Boas on January 21, 2022.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s a trailer for Suga’: A Live Virtual Dance Performance

Here’s James’ Sundance Artist Statement

Here’s more information about the Volumetric Performance Toolbox from the Eyebeam Rapid Response residency.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So continuing on my coverage of the Sundance New Frontier 2022, today's episode is featuring a piece called Suga’, a live virtual dance performance. So they're using this technology called the Volumetric Performance Toolbox, which is essentially a low-cost, self-contained unit that has essentially like a tripod stand, a Raspberry Pi, as well as a RealSense camera, as well as a microphone, so that you could send this out and not even require any laptop or anything for artists to be able to do a volumetric performance that could be live streamed. In this case, it's being live streamed within Mozilla Hubs. So this piece Suga’ is happening in the context of this social VR platform of Mozilla Hubs. And you're going into these different sugar mills and then translating these point cloud representations that are taken from the Caribbean, and then taking you through a little bit of a guided tour through, you know, having photos and establishing the context of the transatlantic slave trade and the establishment of the sugar industry in the Caribbean. And then in the middle of the piece, there's a dance that happens to do a bit of a healing ritual and a little bit of a recontextualization of these spaces and the historical legacy of the transatlantic slave trade in the Caribbean. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Valencia, Thomas and Simon happened on Friday, January 21st, 2022. So with that, let's go ahead and dive right in.

[00:01:39.363] Valencia James: Yes, thank you for having us, Kent. I'm Valencia James, and I come from the world of dance and improvisation and theater. And my work in XR has been quite recent and thinking about how artists can perform live from their living rooms using minimal equipment as theaters have closed or are closing due to the global pandemic.

[00:02:08.011] Thomas Wester: Hi, I'm Thomas Wester. I'm a co-founder of Glowbox, which is a spatial interaction lab. And my background is in technology, creative technology. I use AR and VR to explore learning more about humanity, what it means to be human. And I've been doing that for the last five years, doing a lot of head mounted augmented reality with the HoloLens. And now with this piece with Valencia through the pandemic, also exploring what does it mean to be live when we can't be together?

[00:02:37.505] Simon Boas: Hey, I'm Simon Boas. I'm an artist, educator, producer. I'm in Portland, Oregon. I met Valencia through Thomas. We have worked together quite a bit in the past. I've spent part of my career teaching emerging technology to college students. I do a fair amount of artwork on my own that's about scripting, using technology, and most of my work in VR, XR, has previously been in, you know, kind of a video installation context, but through working with Thomas, I've been exposed more to head-mounted, or augmented, reality, and then did this project with Valencia, you know, looking at how we can make projects like that when we can't all be in the same place together.

[00:03:18.063] Kent Bye: Great. And maybe you can each give a bit more context as to your backgrounds and your journey into working with immersive technologies.

[00:03:25.006] Valencia James: Yes, I'll start. So I think with immersive technologies, it was really the start of the global pandemic when theaters were closing and I was thinking about how can there still be live performance. I had just finished working on a previous project called AIM where I would have a duet with this digital avatar that would learn my movements and give back new movements. So it would be this co-creation. But we were using projection, so the avatar was dancing in the virtual world. And so the pandemic brought this idea of what if the physical performer can get into the virtual world. And so that's where I started with immersive technologies. And I knew a technician who knew Thomas, and so that's how we got to work together, through this research fellowship from Eyebeam called Rapid Response that we did in the summer of 2020. And we were thinking about how creators from underserved communities and Black, Indigenous, people of color, disabled artists, and people from the LGBTQ community can actually have more autonomy and access to these tools for telling their own stories compellingly through immersive tech. And so now we were able to build this into this piece that we are opening at the Sundance New Frontier exhibition, called Suga’: A Live Virtual Dance Performance.

[00:04:59.918] Thomas Wester: Yeah, my journey into interactive started doing a lot of interactive storytelling online with museums, building new ways to explore content that museums have in their collections. And from that, doing a lot of installation work. So, museum physical installations. And when AR started to come out, the first HoloLens, I had the opportunity to work with Melissa Painter. And we put the Heroes piece in New Frontier in 2017. So, that was kind of like my entry into AR. This is just when the HoloLens 1 was launching. And I've been hooked ever since, exploring how we can augment our environment and also interact with each other through that. Out of that project, I also worked with another artist, Kristin Lucas, and we did a piece called Dance with flARmingos. Everybody's wearing a headset, which turns you into a flamingo, and you dance with a set of virtual holographic flamingos around you. And that was kind of the genesis for a lot of the other work that I've been doing in immersive. And it's really this connection that we have with each other, and how can technology mediate and enhance that, or research that, I think, is really the way I'm thinking about it. And that's how Valencia and I connected on, first, the Rapid Response, which was in 2020, just as the pandemic hit. It was an opportunity to break loose from what we were thinking AR and VR was going to feel like and really get back to basics as to, like, what's it going to mean if our world is so hybrid and disconnected, when we look at the same time at something like Mozilla Hubs, which is what we used as a way to get a wider audience telling immersive stories. And I think that's what we've landed on with this project, and what I'm excited about with Suga’, is that this is not a highly polished 3D development effort.
It is something where we've taken a relatively untrained team to build an immersive experience in Hubs and at the same time explore a complex story and history and tell it in a different way. So it's been a wonderful journey going from being pretty focused on headsets and how to run installations with headsets to thinking about how can we engage a wider audience, and how can we get a larger group of people, and also marginalized groups of people, to tell our story in immersive ways. And that's, I think, what we're doing with Suga’, or kind of that's the gesture of this project, is exploring that.

[00:07:19.057] Simon Boas: I'm actually fairly new to VR and AR relative to the other two here, and my background's in socially engaged art and video, but also in education. A lot of the work I used to do was very much around having people in a gallery, reacting to things together. I've worked some with how can we bridge video and dance in real time, you know, using sensors and things like that. But it's largely through working with Valencia, and with Thomas's mentorship, especially over the past year, that I've really been thinking more about AR and VR. And like Valencia and Thomas were talking about, it's the ability to get lots of people in a space together when we can't gather, especially people who don't have headsets. My parents, for example, are never going to touch a VR headset, but they've come to almost every one of the performances that we've done, and they always ask about it and they're interested in it. So I think there's a lot of power in this project to get people interested in new ways of thinking about, you know, co-presence, even if they have no interest in this technology. So, you know, if they don't want to do the VR experience, they can still watch a live stream. They can still see the performance in some way. My other angle into this is with education. You know, with the other two here, I've talked about how we can work with people to facilitate them making their own projects and facilitate their expression. And so, you know, Valencia and I worked back in that Rapid Response phase with Eyebeam. We developed a curriculum to bring in various other workshop leaders and educators to teach on the tech side of it, but also just inspire conversation and think about the reasons why we're making this work.

[00:08:48.667] Kent Bye: Okay, well, that helps set a lot of the underlying context. With Suga’, it feels like a piece that's both connected to the technology and to the story that you're telling, but it's also being able to dance. And so maybe before we get into the content of Suga’, maybe we should flesh out a little bit more about this Volumetric Performance Toolbox and whether or not that was something that came before Suga’ existed, or if it came out of the project of Suga’, and then that was an end product. And maybe just talk about this relationship between the Volumetric Performance Toolbox and the piece that's produced here and premiering here at Sundance.

[00:09:22.802] Valencia James: Yes. Well, just to clarify, we actually premiered it at SIGGRAPH last summer, but yes, we're very happy to present it again at Sundance. So this piece, Suga’, it was actually, it's really interesting, like you're asking, which came first, the technology or the piece? And I must say it's very much intertwined, because the way that we approached thinking about volumetric live performance in immersive web space is through the lens of the artist. And so we actually delved into the creative process of Suga’ at the beginning. And that's how we figured out how we employ the tech and what is needed for there to be the live performance and also the live communal experience happening. And so we started with the ideas of how can things be accessible and affordable for both performers and audiences. And so that led us to make certain decisions, like we're using Mozilla Hubs because Mozilla's ethos and the way that they create platforms is about accessibility and internet health. And also we developed new, like, open hardware using a Raspberry Pi and the Intel RealSense depth camera, because those are the more accessible and affordable equipment out there. So we were looking at how can we really make this something that any artist can access? And so, yeah, it kind of all happened in tandem. And the focus was on the artist's needs and experience.

[00:11:02.345] Thomas Wester: Yeah, so a lot of the decision making we did came out of the concept of the Rapid Response Residency run by Eyebeam, which we ran as a residency inside a residency, where we invited a group of artists to work with this toolbox that we've built based on the design principles Valencia shared earlier. And a big thing that we were trying to work on was to move away from the idea that the one way to do representation, or to be present in the space, is through your webcam, which is just your head, or through mocap-driven avatars, which is effective and interesting. But we were actually pretty interested in visual representation of what somebody looks like and also the space that they're in. And that's really how we ended up with volumetric, which is depth, you know, basic point clouds. So it's a point cloud representation of the artist in the space of their choosing and of their making, or of their team's choosing and making. And I think that then became a platform to evolve Suga’ and to build Suga’ in. And I think it's an example. And hopefully we can work with more teams and more artists to build more of these types of experiences. So I wouldn't say it's a platform, because it's, you know, it's a loose collection of open source hardware that requires quite some testing for it to be solid. But, you know, we're trying to show, like, this is a direction that we can all move in and explore more of. And in a way, for me, it's like revisiting some of the interactive storytelling experiments that New Frontier started with when we go back to 2009, 2010, 2011, where a lot of that was web-based, web 2.0, or I don't know if that was exactly that time, but really trying to think about, you know, what is the connection on the internet, now that we're all networked, and what kind of stories we can tell. And I feel like some of the VR and XR kind of stopped our exploration into that.
We got so sidetracked a little bit, at least I speak for myself, by the tech and by the agency that it provided to create new experiences, that we lost a little bit of what we could do with interactive storytelling or what we can do with group-based online storytelling. And so that's what I think Suga’ is a good example of what can come out.

[00:13:06.535] Simon Boas: And to those points about, you know, the idea of platform and about community, I mean, it's worth pointing out that there's a crew of roughly 10 people running Suga’ every time we do a performance. Most of those people were either involved from the start or were involved in that residency within a residency we did. So a lot of the artists there have created their own works of volumetric performance. And, you know, as they continue to, and as we move on, we have a Slack channel together where we can share tips and ask each other questions. And it's also a place where, like with Suga’, if you're doing a project, you can say, hey, we would love it if some of you could help us develop this, which is what happened here.

[00:13:44.696] Kent Bye: Yeah, just to clarify, because I was familiar with Eyebeam as an entity that's based out of New York City. Maybe you could give a bit more context as to your connections to Eyebeam and who's working for Eyebeam or who was part of the residency. I was just a little confused as to who was associated with what as this project was coming together.

[00:14:03.550] Valencia James: Yes, sure. So the project was started through that research residency called Rapid Response. And I had applied with this project along with Thomas and Sorb Louie, who is another creative technologist who worked with us at the beginning. And then Simon joined through Glowbox being involved. That's Thomas's studio, or spatial interaction lab.

[00:14:28.341] Kent Bye: Okay. That helps clarify things. And before we start to dig into some of the aspects of the experience of Suga’, I wanted to have you elaborate a little bit on, you said you were using a Raspberry Pi, and, you know, I know there's things like Depthkit as a technology to be able to do volumetric capture. And so you're doing live volumetric performances here and basically being able to stream that within Mozilla Hubs. So what is the tech stack that you're using? Are you using like a RealSense and Raspberry Pi? What was a part of this Volumetric Performance Toolbox, if there's hardware components and then software components, to be able to actually facilitate this as a piece?

[00:15:06.255] Thomas Wester: Yeah, so what we built is a capture kit, is kind of what we've been calling it, but it's a small piece of hardware. I know this is audio only, but I can show. This is what it looks like. So, we built a bunch of these, and this is a Raspberry Pi right here. This is a RealSense camera and this is the mic. So, this is all you need as an artist to be able to get started. It doesn't require a PC or a Mac. It doesn't require specific hardware or specific specs. It's all in there, and I think it's a $300 package. RealSense might be a bit more expensive right now. So, we built this, and the reason why we did that was because working with the Azure Kinect requires specific hardware, and we found that a lot of the artists that we were talking to didn't have that hardware. The Azure Kinect requires a PC, not a Mac, so it just has a hard time working with a Mac in its current iteration. And then the RealSense camera is more robotics-oriented. It's not as much artist-oriented or even consumer-oriented. So we're thinking, well, if we want to create a low barrier to entry to create this representation, visual representation, not avataristic, not like an avatar, but like this is me and my space is what I look like. That's kind of the design principle that made us think, okay, let's put this together. So it's the latest Raspberry Pi 4, which is actually a pretty decent low-end PC to work from. And then the Intel RealSense camera, which is a passive stereoscopic camera, also low power because it's more oriented towards robotics. So it works. And then the software we built is written in Python and uses GStreamer. And GStreamer is pretty common, not everybody is familiar with it, but it's basically what's underlying a lot of the streaming tech that we deal with. It's an open source library that will basically stream your video anywhere you want it to go. So that's kind of where we capture and send it.
And we send it out using a service called Mux, mux.com. And what they will do is they will take your RTMP stream, which is the live stream that you're sending, and they will turn it into an HLS stream, which is a video stream that you can easily consume in Mozilla Hubs, that any browser can consume. And Mozilla Hubs is written in A-Frame, which is based on Three.js, so it's a JavaScript-based 3D library. And we built a component, or a plugin, whichever way is easiest for you to think of it, that takes the video stream and, the easiest way to think of it, reinflates it and makes it a point cloud. So it's very similar to Depthkit, actually. From a pipeline perspective, I would say it's the same process; it's a pretty common process. The difference is the quality is much lower, but the barrier to entry is also lower.
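The path Thomas lays out, Python plus GStreamer on the Pi, RTMP out to an ingest service, HLS into the browser, can be sketched as a GStreamer pipeline description. This is a hypothetical illustration: the element names are standard GStreamer plugins, but the device path, encoder settings, and ingest URL are assumptions, not the Toolbox's actual configuration.

```python
# Illustrative sketch of the kind of RTMP pipeline the Toolbox's
# Python/GStreamer code would assemble on the Raspberry Pi. The device
# path, encoder settings, and ingest URL are invented for illustration.

def build_rtmp_pipeline(device, stream_key):
    """Compose a gst-launch-1.0 style description: capture video,
    H.264-encode it with low latency, mux it into FLV, and push it over
    RTMP to an ingest service (e.g. Mux), which re-serves it as HLS."""
    elements = [
        f"v4l2src device={device}",                 # Linux V4L2 camera capture
        "videoconvert",                             # normalize pixel format
        "x264enc tune=zerolatency bitrate=2500",    # low-latency H.264 encode
        "flvmux streamable=true",                   # FLV container for RTMP
        f"rtmpsink location=rtmp://ingest.example.com/app/{stream_key}",
    ]
    return " ! ".join(elements)                     # GStreamer links with " ! "

pipeline = build_rtmp_pipeline("/dev/video0", "STREAM-KEY")
print(pipeline)
```

On the device, a description like this would be handed to GStreamer itself (for example via `Gst.parse_launch`) rather than printed, and the depth image would presumably be packed into the video frames so the receiving component can rebuild geometry from them.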

[00:17:55.262] Kent Bye: So yeah, just to describe what I saw: there was a little bit like a tripod selfie stick type of thing, where on top of it, you have a small box that looks like a little camera, but it's smaller than a phone maybe, but a little bit thicker. It's a computer with the Raspberry Pi, and on top of that, the RealSense, and on top of that, a microphone. The idea is that you'd be able to set that down and then be able to have a performance, but not require even a laptop or a computer. You just need to have a WiFi connection, and it would send up the stream, and then you could render it out in Mozilla Hubs. And as a performer, do you see any feedback? Can you see what the shot looks like? Does it have a screen of some sort on that Raspberry Pi?

[00:18:32.900] Valencia James: Yes. So that box is also a touch screen, and so I can see my stream. So I connect it with the computer I'm using right now. I have quite an old MacBook Pro. And so using the internet, I would just enter some little details and I can press start stream, and then it shows me my depth image. So basically, the closer you are to it, the more red you look, and the further you are from it, the more blue your body looks. And so I can see what is in frame. This is how, before every performance, we set up and calibrate. I'm in my living room, literally, I'm going to be performing from here, and basically I will just go through my choreography and see, okay, I'm in frame like this, okay, that's how we're going to set up. And Simon actually makes sure that my stream looks nice inside the scene in Mozilla Hubs.

[00:19:30.131] Simon Boas: What that means in this case is like trimming out the couch that you saw there because it'll pick up kind of like a cube. And so we're trying to get the part that's Valencia and as little of the floor as possible.

[00:19:41.115] Valencia James: I just wanted to say one thing that's really important, like why we chose volumetric streaming, which is, like, being in a 3D environment and not needing an avatar: I wanted to be present there as I looked. And I think it's very radical to be a Black femme, you know, as I am in my living room, in a space that's usually, you know, more of a white male space, like gaming is, you know, by default dominated by white males. So I thought, you know, this is an idea of, like, how can we have more diverse presence? And literally, you know, in the way that even I can be physically there as I am.

[00:20:22.163] Kent Bye: Hmm. Yeah. That's amazing. It sounds like you're able to then, on the other end, crop out. Is that what you said, Simon? That essentially you're receiving more information than they're actually rendering out in Mozilla Hubs, but you can control that on the side of the server, or at least on the receiving end of that stream, to be able to dictate what you're going to actually render out in the space.

[00:20:43.138] Simon Boas: Yeah, that's right. So Mozilla Spoke is like the scene builder for Hubs, and Thomas put together a component for that. That's, you know, the VPT, Volumetric Performance Toolbox, stream. And so you drop that into your scene, it becomes an object there that you can move in 3D space, and there's various parameters for, you know, what the threshold is, how you size that cube. So when we were doing that residency project with a group of artists, a lot of them, you know, there are people in Brooklyn and Manhattan who had very small apartments, and they had found a corner in their house where they could get the camera, maybe up high, they could play with the angle, but they were able to perform from very, very tight spaces as a result.
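The component Simon and Thomas describe lives in JavaScript on Three.js inside Hubs; its underlying geometry, back-projecting each depth pixel through a pinhole camera model and trimming to a crop cube, can be sketched in Python. The intrinsics and cube bounds below are made-up illustration values, not the Toolbox's:

```python
# Hypothetical sketch of "reinflating" a depth stream into a point cloud
# and trimming it to a crop cube, as the Hubs component does in JavaScript.
# Camera intrinsics (fx, fy, cx, cy) and the cube bounds are invented here.

def reinflate(depth, fx, fy, cx, cy, cube=((-1.0, -1.0, 0.0), (1.0, 1.0, 2.0))):
    """depth maps pixel (u, v) -> distance in meters; returns the 3D
    points (pinhole back-projection) that fall inside the crop cube."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = cube
    points = []
    for (u, v), z in depth.items():
        if z <= 0:
            continue                       # no depth reading at this pixel
        x = (u - cx) * z / fx              # back-project through intrinsics
        y = (v - cy) * z / fy
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
            points.append((x, y, z))       # keep only points inside the cube
    return points

# A pixel at the image center 1.5 m away survives; an off-axis pixel
# (say, a couch at the edge of frame) lands outside the cube and is trimmed.
frame = {(320, 240): 1.5, (1000, 240): 1.8, (100, 100): 0.0}
pts = reinflate(frame, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(pts)  # → [(0.0, 0.0, 1.5)]
```

Sizing the cube tightly around the performer is what makes the very-small-apartment setups workable: everything outside it simply never becomes a point.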

[00:21:18.650] Kent Bye: And Thomas, I don't know if you're going to show me something else. You've got a little demo set up here.

[00:21:22.656] Thomas Wester: Yeah, I just wanted to show it as a little heads-up. And the whole point of that is not to see yourself, but, like Valencia was saying, to see yourself in frame. Because there's no cameraman or, you know, there's nobody there helping. I think that's a big piece of a lot of performances, that there's always somebody choreographing, somebody doing the audio or somebody doing the video or doing the lighting. And in this case, there's nobody there. It's just you in your space. So we had to provide some feedback. But it's basically showing a depth version: instead of showing the RGB, it's showing how far something is from the camera.

[00:21:53.756] Kent Bye: Okay, that's really cool. What's the off-the-shelf cost if someone were to want to put together one of these Volumetric Performance Toolboxes? Like, all that gear, what's the out-of-pocket cost?

[00:22:05.535] Thomas Wester: When we put this together, I think we ended up just under $300. I don't know if that's still the case. That was in the summer of 2020, so I haven't looked at it. But it's going to be somewhere around there. It's not going to be $600. So it's not hyper-affordable, but it's relatively within reach to put it together. That's the idea. We're trying to get it as cheap as possible. It's cheaper than a phone. I mean, that's the other way to look at it. If you have the latest iPhone, which costs a thousand bucks, you could do something similar with that. You could build an app and do it, but that's the nearest alternative. The next level is to get an iPhone with LiDAR, the iPhone 12 Pro, I think it is. And then you could do something similar, but it costs much more to get to that.

[00:22:48.209] Kent Bye: Okay, well, I think I got a good sense of the technological foundations that you're working with here. And in the end product, we're in Mozilla Hubs as, like, a social VR piece, but with a lot of point cloud representations of different spaces. And in the actual experience, you're kind of stepping us through from scene to scene, almost in like a guided tour that then leads to a performance at the end. And so Valencia, maybe you could talk about this process of telling the story, how you picked these different locations to translate, and then your approach of how to take something that happened in the past, but recontextualize it and try to bring it into the present moment through this immersive piece.

[00:23:25.617] Valencia James: Yes, thanks, Kent. So Suga’ takes audiences on a journey through the historic reality of the transatlantic slave trade and the establishment of the sugar industry in the Caribbean. And it's a collective immersive experience. And so you're there with other people from all around the world, and everyone is going through the space as an avatar. Actually, we created it to be experienced on your computer, so you don't need a fancy headset. So you use your WASD keys and move around. And the journey, it's about honoring my ancestors, because we started this work in the summer of 2020, when we were still reeling from the shock of George Floyd's brutal murder. And at this time, my coping mechanism was to forge a stronger connection with my ancestors. And I started speaking with my grandmother and my uncle, and the history of the sugar industry in the Caribbean kept coming up; my uncle was working at a sugar factory and my great-grandmother was cutting sugar cane. And at the same time, I also found the actual spatial data of the Annaberg sugar mill, which was scanned by CyArk in St. John in the U.S. Virgin Islands. And so all this was coming together at this time that we were thinking about how can we make virtual live performance. And this became the foundation of Suga’. And I wanted to make it a piece where we really look unflinchingly at the history, because when we think about systems of oppression, we kind of skip over what was the actual origin of it and what were the mechanics of that. And in the Caribbean, this was where the first plantation societies were established. I'm from Barbados in the Caribbean, and this is where you had the first so-called slave codes. So that's a legal system of control that was established in Barbados in 1661, which informed the ways that plantations were run and laws were made in the U.S. And then we see the effects of that today when we look at discrimination and racial injustice.
So that was the motivation for making the work and the foundation of it. And the experience is meant to take us there and look on visually, and then the dance is about healing. So I'm thinking of how can we reclaim these spaces that are now in virtual form, but through these virtual acts, how can we create healing? I dance for my ancestors, and I also invite participants to reflect and also think of how can we create a better future together.

[00:26:10.442] Kent Bye: Yeah. And as I was going through the experience, I initially was in my Oculus Quest 2. And I think at the beginning you said you should not use a Quest 2 because it was not working quite right. But I was like, ah, it'll be fine. But what ended up happening was, when the first scene loaded in, I had to go to my computer because it wasn't reloading properly, and then the audio wasn't loading. But thankfully, there was a live stream that I was able to go back to. And so I missed the live volumetric performance of the first couple of scenes, but I was able to go into those spaces later and then catch up on the live stream of the video to watch that. But I was able to explore the different spaces that you had. And I think there were four total different scenes: the onboarding space, and then being taken to an initial island with lots of photos and point cloud representations, and then there was the actual dance and performance, and then there was like the stars and connecting, and then coming back to the initial scene. But what I found interesting was that it was sort of like a guided tour in some sense. There's a lot of photos that are trying to connect to what was happening in the Caribbean with the slave trade and the sugar industry across many different islands. I had a chance to look up and say, okay, where was this happening? What island was this on? Just to help spatially orient myself to the whole complex of the Caribbean, of all these different islands that this piece is taking place across, in terms of the references of the photos. And then with the one specific sugar tower that you're standing within and dancing, and then going into the final scene. And so for me, in terms of genre, it's almost like a guided tour that is in a spatial area that is a reference to this location, but also allowing you to tap into the past through the photos, and then the dance, and then the final scene.
So maybe you could talk about that scene with the photos, because it's almost like a museum of sorts, of photos that were referencing the experiences. And so maybe you could talk about laying out that scene and then trying to, I guess, set the broader context for everything that was happening in that region.

[00:28:13.825] Valencia James: Yes. So there's an experience guide; Sanjin Mallory is actually narrating the experience. Actually, the first scene is we go into the belly of a boat. I don't know if you got to see that. Oh yeah, I forgot that scene.

[00:28:25.342] Kent Bye: I guess there's five total scenes. So yeah, I experienced that after the fact, the live stream of that. So yeah, you're in the belly of the boat with lots of folks.

[00:28:33.388] Valencia James: Yes, yes. I'm sorry that the Quest was glitching. Yeah, but glad you could still catch up with it. So yeah, we start with the Middle Passage, which was a very, very, I don't know what words can really express the horror of it. We try to bring that feeling, but I do recommend folks to read up more about this history because there are many accounts of it. And so we start there, and then we go into the Forest of Commemoration, which is the actual spatial data of the vegetation around the Annaberg sugar mill in St. John, US Virgin Islands. So we're amazed that we can actually use this spatial data and recreate it to create a kind of a museum or some kind of exhibition that honors that past. And so there are different images from the public domain about the history of the sugar industry, from the ways it was cultivated to the human cost and the terrible methods of torture that were used to subject African descendants who were forced to work there. And most importantly, we zero in on the resistance. And we focus on the story of Breffu, who was a royal Akwamu leader from Ghana, who led the resistance in St. John in 1733. And we tell her story because, also, in history it's very rare to find stories of female resistance leaders. And so we wanted to highlight and bring all the different aspects of the history, but also recontextualize it through the lens of this leader, Breffu. And then after that, we go into the sugar mill. It's a sugar mill ruin. So you see this conical shape, and it's also the actual spatial data. Historically, it used to have these humongous wooden propellers, like 40 feet wide. It was state of the art at that time. Actually, these structures are found all over the Caribbean. So this is my connection, because there are many of them in Barbados.

[00:30:39.812] Kent Bye: Quick question on the spatial data. What was the form of that spatial data, and how did you translate it into the point cloud? Or was it already a point cloud? I'm just curious because the point cloud aesthetic, I think, worked quite well in Mozilla Hubs. And I'm curious how you were able to get to that as an endpoint, whether you found that already in the spatial data or you had to convert it into a point cloud.

[00:31:02.091] Valencia James: Yeah, that spatial data is actually open data on OpenHeritage3D. And I need to credit Holly Newlands, a technical artist who worked on this. And Thomas can talk more about how to get point clouds into Mozilla Hubs, because that's their magic.

[00:31:19.082] Thomas Wester: You know, it started with an architectural point cloud. Valencia, what's the name again of the organization? CyArk. CyArk, yeah. C-Y-A-R-K. I think they're an organization that's typically commissioned to do volumetric captures of historic landmarks for future preservation. So it's a historic, archaeological, you know, that type of a thing. And when Valencia and I were thinking about space and where this could happen, I had encouraged Valencia to go onto Sketchfab and start looking for different elements that we could bring in. So instead of going through a modeling process where we start from scratch and try to think, oh, what do we want and how are we going to model that, it was more in line with Hubs and Spoke and with accessibility. We were thinking, okay, can we use found footage, in essence, to put this together, or keep it as simple as possible? And so we found a reference to the Annaberg sugar mill. And we were able to get it because it was part of, I think, a USGS-funded project, so it's actually open source. But the problem is it's like 20 gigs of data. It's high resolution, because this is for preservation. This is for the future. 100 years from now, if for some reason a hurricane or something else would happen, we'd still be able to have that monument. And so we had to go through quite a process of simplifying it and bringing it down. Holly knows how that happened. That was Holly's magic. But basically to get it to a lower point count, to go from something that was 20 gigs to something that's just a few megs. And we were able to make that work. I mean, point clouds are fun, especially in a spatial setting like that, because your mind kind of fills in the blanks, and it's not that important to have it be fully 100% solid. Also because for Valencia, the representation of the volumetric capture is also through points. So that worked out.
But I think the point here is that you can get pretty far with stuff that's readily available. And the tooling we used, we had to kind of get in depth, or Holly had to get in depth, in using some of the academic tooling on how to deal with point clouds. So there's a whole world of academia and science that knows how to deal with point clouds: how to reduce them, how to average them, so that you can get to something that's more workable.
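The kind of reduction Thomas describes, going from a 20-gig preservation scan down to a few megabytes by lowering the point count, can be sketched with voxel-grid downsampling: snap points to a coarse 3D grid and average everything that lands in the same cell. This is a minimal illustrative sketch in plain Python, not the team's actual pipeline (they used dedicated academic point-cloud tooling, and real tools such as CloudCompare or Open3D also handle colors and normals); the function name and numbers are hypothetical.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points that fall into the same voxel_size^3 cell
    with their average, drastically reducing the point count."""
    cells = defaultdict(list)
    for x, y, z in points:
        # Integer cell coordinates identify the voxel a point falls in.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    # One averaged representative point per occupied voxel.
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out

# A dense line of 1,000 points spanning 10 units collapses to
# 10 representative points at voxel_size=1.0.
dense = [(i * 0.01, 0.0, 0.0) for i in range(1000)]
reduced = voxel_downsample(dense, voxel_size=1.0)
print(len(reduced))  # 10 cells, one per unit of extent
```

The same idea scales to billions of points: the file size drops roughly in proportion to the voxel count, while the overall shape of the architecture survives, which is why the simplified mill still reads clearly in Hubs.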

[00:33:36.847] Kent Bye: Okay, yeah. What was interesting about my experience of watching this piece, because I had to pivot away from the immersive VR into the desktop: I was in the environment moving around, and then when it ended, I was able to go back into the live stream, because there were photos where I could see the captions and then Google and find the original photos. What I was wanting to know was, where was this taken? When was this? So the where and the when were things that I wanted to know to help set this larger context of this complex of stories that you're putting together. But as I hear you recount this, it's kind of found footage, but also the open source aesthetic of Mozilla Hubs, and thinking about, you know, the role of the web in being able to convey these spatial stories, and using a medium like Mozilla Hubs, which makes it very easy to throw in photos like this, but how you're able to take that spatial context and then put the photos on top of that. The photos are out there on the internet and there are websites where you can see them, but there was something different about the experience that you created around the space. And then the other element of the theatrical performance, meaning that I was being taken on a tour and being told stories live to help contextualize all these things. It felt a little bit like taking a docent tour through an art gallery, where the docent will take you to specific paintings to tell the story, but not tell you the story of every single painting in the gallery, because that would take forever. So it was sort of like that: walking through a gallery and then moving into another spatial performance.
But that idea of taking the affordances of the existing 2D web and starting to translate that with these other concepts from museums and docents and theatrical performances and the live transmission of an oral storytelling tradition, all of those things together, it feels like it's kind of an interesting blending of these genres as I start to see this project and where it can go in the future.

[00:35:30.169] Thomas Wester: Yeah, I think it's really interesting that you're there together. I think a lot of online experiences, well, less and less as we move into whatever this metaverse is going to be, but a lot of experiences are still singular or asynchronous. You're not there with somebody at the same time. And having done a lot of museum work, I love being in a museum with other people. The fact that there are other people there is really important to my experience in the museum, even if I'm not going to talk to them. Just the fact that we're inhabiting the same space and sharing space means something, and I think that's really what Mozilla Hubs brings to this: it's not a singular experience of you alone exploring. You're there with others, and that gives it a liveness or a timeliness, and that also makes the space feel more alive. So even if you're not talking to the others, or we're not encouraging much interaction, just being there together, and then having the agency of choosing where to look, which is very different from film. We could have told this as a documentary film, and there are a lot of techniques within that. I'm not trying to diss that, but I think especially with difficult content or new content like this, being able to pause and look and move around a little bit and take your own scale of it, instead of being driven consistently by what the frame or the editor or the director has framed for you. So I think that's what I love about this and interactive storytelling in general: the fact that you have some agency to choose where you are in the story, the fact that you can be with other people but still have choice as to how much you're with them. I think there's just a whole world of variables that we can start playing with as creatives and as directors, to start thinking through what we could do with this. It's exciting.

[00:37:03.907] Kent Bye: Yeah, well, Valencia, I'd love to have you maybe elaborate on the last two scenes, where there's the dance and then you're walking into almost like directly connecting to the ancestors and the stars. But maybe let's first start with the dance, the performance where you're using the Volumetric Performance Toolbox and you're set inside of these ruins of the sugar mill, and there's almost like a slope on the hill where we're all, almost like a theater, watching this performance. And so you're doing a number of different movements, and it's in the context of just being told the story of this rebellion led by this woman, of this resistance and resilience. And then there's a dance, and maybe you could describe what the dance means for you in terms of the larger context that you've set up to that point.

[00:37:48.185] Valencia James: Yes, thank you, Kent. So, yes, the dance happens in the ruin of the sugar mill. These hollow conical limestone structures you can find everywhere in the Caribbean, and so that's what brought my connection and why I was very enthusiastic to build this whole performance around the idea of how my ancestors were taken and forced to work in these spaces, and how my dance and my acts in virtual space can bring some possible healing. I think of it like a prayer, because I do believe, in African culture, there's this very strong connection with ancestors, and death is not a finality. There is always this continued relationship. And so with these things in mind, I create this dance about, first of all, healing. And I'm very influenced by Haitian traditional spiritual dances, and I'm also involved in a dance company. So my movements are coming from dances like Yanvalou, which is about prayer, and also thinking about my gestures in space and what that means. I'm doing it in my living room, but you're seeing me in the mill. And so what does that mean? What would that mean if I was in the mill doing these gestures? I use my gestures as a way to acknowledge what happened in the past and also make it a prayer for healing and liberation. The dance has two parts. It has that more somber start, then it builds and goes into another type of music. So I'm working with a composer, Stefan Walcott from Barbados; that first music is composed by him. And then it goes into a more festive sound, coming from the 1688 Orchestra in Barbados, which is actually Barbadian folk songs reimagined as a big band orchestra. And so that part is about the liberation and the healing and thinking of Black joy. And I need to shout out Carlos Johns-Dávila, our sound designer, who reworked those two very different pieces into a beautiful, fluid kind of journey.
And so my dance goes from healing gestures to liberation and a kind of a celebration of our resilience.

[00:40:22.306] Kent Bye: Yeah, and I don't know if you want to say anything more about that last scene of the prominent figures in Black history that are in the stars, where you're referencing them directly; you're looking up at them, but they're also looking down on you. There's this connection between the ancestors. I thought that was also a very evocative scene, being in this cosmic place but also seeing all these prominent figures from Black history. This is kind of the off-boarding, in some sense, of connecting to the elders of the past and the ancestors. And so, yeah, maybe you could describe that last scene.

[00:40:54.557] Valencia James: Yes. So right after I make a gesture of, like, our ancestors rising, after this healing liberation, I hope their souls can rest, and a feeling of release happens, and the scene changes and we're in the cosmos. And yes, we made constellations from photos of prominent figures in Black history, like Marcus Garvey and Martin Luther King Jr., but there are also images of Black joy and liberation there. And this piece is thinking about exactly that connection: we have our ancestors by blood, but also the ones who we can choose and look up to. And another very important part of the scene, the idea is that we are in this kind of sound bath. And this is also an original composition by Carlos Johns-Dávila. We think of this space as a place of rest as well. And we are using frequencies of about 528 Hertz, thinking of solfeggio frequencies and the idea of healing and the regeneration of our DNA in this part. So it's again thinking of healing of our ancestors, but also ourselves, and how we can go into the future.

[00:42:13.144] Kent Bye: Yeah, that's really powerful. And a piece like this is very interesting because dance by its nature is embodied movement that is open to a lot of interpretation. And so for me, as I watch a piece like this, I hear what you're saying at the level of what words can get to, but then in the direct embodied experience, it's more of a poem that you're absorbing. As a creator, you're putting out a lot of the symbolism, and then as an audience, we kind of have to up our game in terms of being able to read and interpret that symbolism. And so it's like an iterative process that I feel like we're going through right now, where we're moving away from meaning being spelled out very literally with the written word, toward embodied movement and these spatial experiences that leave a lot to the interpretation of the vibe of the experience and the feeling that you get from it. So yeah, I feel like that's part of the challenge that we have right now: to do those iterations, to have the artists push for those symbolic communications, and then for the audience to listen and take whatever they take of it. Because there's going to be some communication loss in terms of the intent, and maybe people end up in places that you couldn't even imagine, but it feels like that's part of what's happening here: this evolution of a new language of embodiment and spatial communication and symbolism, and creating a ritualistic context to take people on this journey that goes from place to place. Part of that is us as an audience becoming more sophisticated, to be able to watch this and even put words to what we just experienced. Because as I listen to you as the creator, it's like, oh yeah, that makes total sense. But I don't know that I would have been able to articulate it in quite the same capacity for what I had just experienced.

[00:43:53.128] Valencia James: Yeah, I think that's the beauty of theatre and dance, that the interpretation is wide open. It just depends on who is watching, and there is space for that. That's totally okay. You don't need to be sophisticated or know anything to experience it and take the message that you do. And so I do hope that folks feel at ease, wherever you are, whatever context you're coming into this space from, and that you come back enriched. Every interpretation is really okay. I would like this to be a starting point, so that there is a motivation to learn more about this history and to have a heightened awareness of the Caribbean and its role in the world that we see today.

[00:44:43.011] Kent Bye: Nice. And just as we start to wrap up, I'm just curious if each of you would like to share what you think the ultimate potential of these immersive technologies and open source and immersive storytelling might be and what they might be able to enable.

[00:44:57.946] Simon Boas: I can speak a little if no one else is going to jump in. You know, we've been doing this process for over a year now, and what's always been amazing to me is that Valencia and I have never met in person. I've worked directly with Thomas; I've sat in the room that he's in quite a bit. But this is a project where there's a number of people involved across these various parts of it, and a lot of us are never in the same place. A lot of us have different communication styles. A lot of us have different skill sets entirely. And especially with the pandemic over the past, whatever, however long it's been now, it's felt like a lot of things have had to be turned off or totally pivoted or turned down. But I don't see how this project could exist in any other form at this point. We've had to overcome some obstacles in terms of distance and technology, but it's been really inspiring for me just to see that. Not just on the traditional, I guess, how we would define emerging technology parts, like depth video and things like that, but just the fact that we've done this using Mozilla Hubs, using Zoom, using various collaboration tools, that we were able to make something creative like this. So that's been a big takeaway for me: we can make things that are new and novel, you know, regardless of what the external constraints are.

[00:46:09.275] Thomas Wester: I'm most excited about just the new form of spatial design for a wider group of people. So I look at content creation, and when we think about the creator economy and the way people think about creating things, a lot of that's video. It's still flat. And I'm excited on many levels about the democratization of creating content in 3D, which has been a very specific thing. And slowly we're getting to the point with AR and VR and immersive creative tools that more of us can start creating spatially. So I'm most excited about what Hubs is doing, and other platforms like that, basically trying to build web-based tools, open tools, not walled gardens, that allow anybody out there to get in and start creating. I was at the start of the internet that we know now, before social media, and it was really exciting to see that whole cycle. And then to see this new cycle, I think, around spatialization and the spatial and immersive web, just seeing the amount of creative potential and also the ability for people to tell new stories in new ways. I think that's what I'm really excited about.

[00:47:21.970] Valencia James: Yes, I can repeat what Simon and Thomas said a thousand times. And what is most fascinating for me is just that before this project, before 2020, I didn't have any skills in immersive technologies. And I'm so grateful and thrilled that, even coming from outside, I was able to use this to tell the stories that were most important to me. And so I feel like it gave me a voice, and I'm so happy that people are listening. I'm just happy about this being accessible via the web, and that my uncle and my parents could join even though they're in different parts of the world. I'm excited that we can have this communal experience regardless of where we are. Like Simon mentioned, I've never met Thomas nor Simon in person, nor anyone that I've been working on the project with except for Sandrine. Marin, Carlos, Holly, and Terri, you know, they're in different parts of the world. We come together to make this magic, and we get to have participants from all over the world come. So that's, for me, the magic. And for me, the potential of XR: more people telling stories that are important, and us coming together even though we're so far apart geographically.

[00:48:45.808] Kent Bye: Nice. And is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:48:51.730] Valencia James: Yeah, I'm just happy to meet everyone who is doing work in this space, but also maybe a call to action is maybe thinking about how we can share our skills. Maybe, you know, we go about making those platforms or whatever small ways we can to make sure more people can also tell their own powerful stories using these tools.

[00:49:14.927] Thomas Wester: Yeah, I follow Valencia with that. This is a team effort. It's hard to do on your own and teaming up to do these things instead of staying in your own silo is a key critical piece.

[00:49:27.430] Simon Boas: With that, I think if we haven't done it along the way already, it's worth shouting out some of our other collaborators: Terri Ayanna Wright, Sandrine Mallory, Marin Vesely, Carlos Johns-Dávila, Holly Newlands. Am I forgetting anyone? There's another bigger team with Glowbox that we've had in the past too, but those are the ones working on the project right now.

[00:49:45.523] Kent Bye: Yeah, Valencia, I don't know if you wanted to give a shout out for any of the other collaborators and performers that were part of the day of the performance. I know you had a variety of different people that were helping to introduce and onboard and go through different exercises. The onboarding on this piece was interesting because you're virtually embodied in this space, mediated through a 2D screen, and yet there was a series of different grounding embodiment exercises to walk through, but also just other docents and other artists that you wanted to give a shout out to.

[00:50:13.997] Valencia James: Yes. So there's Sandrine Mallory, the experience guide, and she's the one who, you know, you meet. We're using webcam in the beginning just to get everyone comfortable, and we go through this movement exercise. There's Marin Vesely, a 3D artist who built a lot of the scenes, like the boat scene and the cosmos scene. Terri Ayanna Wright is a rehearsal director and also built some sounds and visuals. Holly Newlands is a technical artist who worked on the point clouds and the bigger scenes. And Carlos Johns-Dávila, who's an amazing sound designer and did some original compositions for the coherent sound experience. I also want to thank our supporters. There are many, but there's Eyebeam. We got support from Eyebeam and also the Frank-Ratchye Studio for Creative Inquiry at Carnegie Mellon University. Those are the main supporters for this work, as well as countless advisors and members of the community who've come out and provided feedback and encouragement. Forgive me if I'm forgetting anyone. Yeah, I'm just grateful also for the chance to perform at Sundance. So thank you to the Sundance team, to Shari Frilot and the whole team at Sundance for having us.

[00:51:33.630] Kent Bye: Yeah, I'm personally really excited to see more open web technologies with WebXR and Mozilla Hubs. I think this is the first Mozilla Hubs piece I've seen in a festival context, at least. I know last year they had some WebGL experiences, and Nonny de la Peña has done some WebXR, but that was more of a tech demo rather than a social VR type of experience. The web technologies are lagging behind where Unity or Unreal Engine are at, and so by its very nature, it's going to be degraded in terms of the fidelity and the types of experiences that you can do. But I'm just really excited to see what you were able to work with, what is possible, and the overcoming of the variety of different technical difficulties that are inevitable with a new emerging technology that's still on the bleeding edge of everything. So I'm excited to see where this continues to go. I'm glad to see more artists using and developing open source tools for other people to come in and build other stuff. And I think Suga' was able to take me to a place and introduce me to certain aspects of this history through the context of these stories and the dance performance and, yeah, the whole experiential design and the sound and everything else. Just really powerful to be able to go through that as a group experience. And so, yeah, I'm excited to see where it continues to evolve and go in the future. I'm sure we'll look back on this 20 or 30 years from now and just marvel at how far the technology has come. And so I guess with that, it's the beginning of a very long journey that we're on, but I'm glad that this team was able to make the progress that you were able to make with this piece, and as a result, also provide some new tools that came out of it as well.
So yeah, Simon, Thomas, and Valencia, thank you so much for joining me here today on the podcast to be able to unpack your own journey and this specific experience of the Suga' live dance performance in virtual reality. Great.

[00:53:20.195] Thomas Wester: Thanks for having us.

[00:53:21.695] Valencia James: Thanks for having us, Kent.

[00:53:23.155] Kent Bye: So that was Valencia James. She's a dancer who does improv and theater and is thinking a lot about how artists can perform live in their living rooms; Thomas Wester, founder of Glowbox, a spatial interaction lab based here in Portland, Oregon, who is thinking a lot about what it means to be live in performance; and Simon Boas, an artist, educator, and producer of immersive experiences. So I have a number of different takeaways about this interview. First of all, it's really cool just to hear the backstory of how this piece was born out of an innovation of the technology. It was a part of the Eyebeam Rapid Response residency that Thomas and Glowbox were a part of, and they invited Valencia to collaborate as an artist to do this translation of these embodied spatial performances and to figure out the minimum viable equipment that you need to set something up. You don't need any particular type of computer; it just needs a Wi-Fi connection: a tripod, a Raspberry Pi, this Intel RealSense camera, which is more specialized for robotics, and then a microphone on top of that. That data, the full range of it, is being sent over to a server, and then Simon is able to dictate how much of that volumetric performance gets embedded within the context of this larger Mozilla Hubs scene. They were also able to get super-high-resolution LiDAR-scanned data of the Annaberg sugar mill in the US Virgin Islands and do different translations through Blender to bring it down to a much more reasonable size, giving you a sense of the architecture of the space without actually shipping the full complexity of many gigabytes' worth of file information and textures, while still getting the feeling of these locations.
And I think that actually works quite well in terms of just giving you a rough architecture of these spaces through these point cloud representations. I've started to see a variety of different experiences use this over the last number of years; On the Morning You Wake to the End of the World especially uses that point cloud aesthetic quite effectively, I think. And in this piece, you're in more of a social VR context in Mozilla Hubs. You're not able to talk to anybody, but you're able to move around in the browser. They didn't have the Quest version optimized well enough, or there was an error when going from scene to scene, so it's best to see it on a 2D screen. But still, you're able to move around in these spatial places. And again, they're kind of using these spatial locations of different places around the U.S. Virgin Islands and the Caribbean to ground you into this location, and then to put a variety of different photos into one of the scenes. In the very first scene, you're coming in, and you're in this boat with all these slaves who are a part of this transatlantic slave trade. The sound design and being in that space and then going out sets the larger context of, you know, really the origins of a lot of this transatlantic slave trade. And then from there, you're going into a scene that's kind of like a space where you're walking around, and it's very abstract, and it's hard to distinguish any specific architectural elements, just that it was a point cloud representation of the land. But on the sides, there were these photos with captions that allowed me to look up when and where they were taken. In the context of a live performance, you're kind of getting a guided tour. And you can imagine, like, if you go into a museum and you get a guided tour from a docent, you're not looking at every single painting, but you may be able to look at stuff and get a vibe.
And so it was in the spirit of trying to give you a sense of all the different stuff that was happening within that region. But they did focus on one of these rebellions, led by Breffu, the story of a rebellion that happened in November of 1733 on St. Jan, a small island in the Danish West Indies. And then after we got a little bit more context on Breffu, we went into the next scene, which was the sugar mill, staged in the round: there was a hill leading in where you could all stand and watch Valencia as she was inside of the mill, doing this dance to connect to her ancestors and do a little bit of a healing ritual. And then the next scene was very poetic, with all these constellations of the prominent figures from Black history and narration happening to kind of tie everything together. And then they go back into an offboarding to do the Q&A and talk about the experience. So I think it actually shows the power of what you can do in something like WebXR and Mozilla Hubs. This is the first, I think, WebXR-specific experience I've seen at a festival. I mean, there have been other pieces before that were using WebGL, but there wasn't a VR component. I guess if you were to see this piece on PC VR, you would be fine being immersed within the WebXR experience, but the Quest had some issues with the way it was changing the scenes; it was not loading correctly. But just to see that they're using this Volumetric Performance Toolbox, that was such a key part of how this project even came about, pushing the limits of what's even possible with the open stack. They said that this is actually lower quality than Depthkit, and Depthkit already has a little bit of a noisy, volumetrically pixelated look and feel to it. But at the same time, you know, when you're in this larger abstraction of the point cloud aesthetic, I think it actually fits pretty well.
You did have to walk around a little bit to get a full sense that that was a volumetric scan coming through. We were kind of far away, at least I was when I was first looking at it, so I didn't immediately detect that it was a full volumetric scan. But as I started to walk around, I could definitely see that there was more of a 3D spatialization happening, and it was really quite neat to see that being live streamed within a web browser. So yeah, it's a little bit lower resolution, but it's about increasing the levels of accessibility for folks. A lot of times, they were saying, with the Azure Kinect you have to have a Windows-based PC or laptop, and the Mac doesn't work. So a lot of artists who are creating, you know, may not have the required technology as a minimum bar to be able to do this type of volumetric streaming. So yeah, definitely check out the Volumetric Performance Toolbox. They have a website, and I'm excited to see where this type of aesthetic goes in the future. It's sort of blending a number of different genres, from the performance and the dance to the guided tour aspect of walking through a place, trying to create like a photo album that is tied to the context of a specific piece of land, and taking these other media artifacts.
That's probably one of the other aspects of Mozilla Hubs: it's very easy to pull in different media. And so, thinking about ways to then create more of an immersive experience, because I felt like I would want to go back, and I was going back and watching the live stream and just really digging more into all the different stuff that was there. It's quite dense for the small amount of time that we have there, but it was enough to catalyze my interest in finding ways to expand that as an idea: to take that kind of guided tour conceit but make it nonlinear with open world exploration, or to tie it to the whole region of the Caribbean and all the different places that they were referencing in those photos. So, yeah. Anyway, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.
