#1078: Multi-perspective LIDAR Timelapse Art with ScanLAB Projects’ “FRAMERATE” to Think and Feel Different Time Scales

FRAMERATE is a timelapse LIDAR art installation by ScanLAB Projects that showed at SXSW. It encourages the audience to “think and feel in another time scale: geological time, seasonal time, tidal time.” I had a chance to talk with director & ScanLAB Projects co-founder Matthew Shaw about their volumetric capture techniques and desire to create art that helps us understand how machines perceive the world.


This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So continuing on my coverage of looking at some of the XR experiences at South by Southwest 2022, today's episode is about Framerate, which was an art project by ScanLAB Projects that was above the main floor of where all the other XR experiences were. So ScanLAB does these LiDAR volumetric time-lapse captures over different periods, whether it's over the course of a day or sometimes over the course of many days or even as long as a year, where they go each and every day and take a volumetric scan and then render these 2D frames where you're able to see how these landscapes change over time. It's looking at these different time scales to see the shifting dynamics of the geological landscapes, or other aspects of how machines are viewing the world. So you're able to put a virtual camera within these LiDAR scans from many different perspectives of the same scene. So imagine walking into a room that's completely black, with all these big TV screens that are essentially these different viewpoints, including top-down ones that are on the floor. And then you're looking at all these different angles and you kind of watch the scene unfold from multiple different perspectives. So they're using these volumetric capture technologies to create an art show of larger scan time-lapses that was being featured at South by Southwest; the piece was called Framerate. And I had a chance to talk to Matthew Shaw, who's a director and co-founder of the studio called ScanLAB Projects. So that's what we're covering on today's episode of the Voices of VR Podcast. So this interview with Matthew happened on Tuesday, March 15th, 2022. So with that, let's go ahead and dive right in. So why don't you go ahead and introduce yourself and tell me a bit about what you do in the realm of VR.

[00:01:49.835] Matthew Shaw: Yeah. So my name is Matt Shaw. Together with my co-founder, William Trossell, we run a studio called ScanLAB Projects. ScanLAB is fascinated by all forms of spatial and volumetric capture, in particular LiDAR scanning. We've been working with LiDAR and all forms of machine vision for the last 12 to 15 years. We're really fascinated by the way that machines are beginning to see the world in a really kind of spatial, three-dimensional, four-dimensional way. But we're kind of a little bit terrified, I guess, by the fact that these machines are seeing the world through the lenses of LiDAR and making decisions, and sometimes humans are not getting to see that decision-making process. So the aim of ScanLAB is to take the world that machines are seeing and bring that into a beautiful environment that humans can experience again. And we use the visuals that we create and the environments that we create to tell stories and kind of shine a light, I guess, on a bit of the future.

[00:02:41.944] Kent Bye: Okay, yeah. Maybe you could give me a bit more context as to your own background and your journey into the space.

[00:02:46.567] Matthew Shaw: Yeah, so both myself and Will studied architecture. So I think we came to 3D scanning in the first instance as a way to digitize the real world and make better interventions into it, better physical interventions into it. But when we first saw the data coming out of some of these machines, not only is it accurate, but it's just phenomenally beautiful. It has this kind of uncanny quality of: it's a photograph, it's real, like this stuff is coming from the real world and it's undeniably true. And yet it sits in this kind of slightly malleable CGI place where you can kind of suspend disbelief a little bit. And, I don't know, as a kind of storytelling environment or as an exploratory creative environment, there's something lovely about that real, but just edging off the end of real, that we're really fascinated by.

[00:03:38.173] Kent Bye: Okay, and so we're here at South by Southwest and you're showing what looks like LiDAR scans that are over a period of time, and so they've sort of got this time-lapse quality. But from what I could extrapolate from what I was seeing, it was maybe a LiDAR scan from one perspective, but then you're rendering it out from, like, many other 2D cameras onto a 2D plane. And so you're getting multiple perspectives that are on these TV screens that are spread out across this room. So is that right, that it was basically one LiDAR scan, but then from many different perspectives?

[00:04:08.580] Matthew Shaw: Yeah, exactly. So, you know, I mean, we talk about scanning as a kind of evolution of photography. But in scanning, you've got two moments of framing rather than one. So the first decision that we make is placing our machine down in the world. And the framing that's involved in that is super important, because that dictates the world that you're going to create, and the shadows that are cast by the position of the scanner are really important to us. And the way that the scanner kind of lights the scene comes out of that first moment of framing in the real world. Then we collect the data and we interrogate it, and we frame within it for a second time. So we put our virtual cameras inside that environment and can render out scenes from within them. So there's two camera-positioning exercises, I guess. And in the piece that you've just seen, Framerate, we're really showing you a whole series of simultaneous camera angles onto the same scene. But it's not as simple as just being dropped inside a world and being able to look left and see what's to your left, see what's to your right, and look down and see the floor. These views are a little bit more curated: you might look down and see a really detailed section of pebbles on a beach moving over time, and then you might look left and the entire cliff is kind of falling down in front of you. So we find that we have a little bit more kind of curatorial intervention, I guess, into the way that we're juxtaposing these scenes together, and playing with the pacing at which time is unfolding across the work as well.
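To make that second framing step concrete: once the data is captured, a virtual camera is just a position, an orientation, and a projection applied to the point cloud, so any number of views can be rendered from one scan. Here is a minimal sketch in Python with NumPy; all names and numbers are hypothetical, not ScanLAB's actual pipeline.

```python
import numpy as np

def render_view(points, cam_pos, cam_rot, focal_px, width, height):
    """Project an (N, 3) world-space point cloud through a virtual pinhole camera.

    cam_pos  : (3,) camera position in world space
    cam_rot  : (3, 3) world-to-camera rotation matrix
    focal_px : focal length in pixels
    Returns the integer pixel coordinates of the visible points.
    """
    cam = (points - cam_pos) @ cam_rot.T   # move points into camera space
    cam = cam[cam[:, 2] > 0.1]             # keep points in front of the camera
    # Perspective divide onto the image plane, origin at the frame center.
    u = focal_px * cam[:, 0] / cam[:, 2] + width / 2
    v = focal_px * cam[:, 1] / cam[:, 2] + height / 2
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return np.stack([u[ok], v[ok]], axis=1).astype(int)

# The same capture framed twice, a wide view and a close-up,
# standing in for two screens in the installation.
cloud = np.random.rand(100_000, 3) * 10    # stand-in for scan data
eye = np.eye(3)
wide = render_view(cloud, np.array([5.0, 5.0, -30.0]), eye, 800, 1920, 1080)
close = render_view(cloud, np.array([5.0, 5.0, -3.0]), eye, 2400, 1920, 1080)
```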

[00:05:32.225] Kent Bye: Yeah, as someone who's a VR journalist, of course, my first reaction is like, I want to see this immersed in VR. Is it even possible to show some of these LiDAR scan animations that you've created in an immersive experience within VR?

[00:05:43.079] Matthew Shaw: Yeah, I mean, I guess the important thing to understand about ScanLAB as well is that we've been playing with this technology for 12-plus years. And quite early on, we found all the tools that were out there pretty lacking. So simultaneously to kind of creatively exploring this, we're also building all of our own software to be able to manage these enormous datasets. Because, you know, Framerate, as you've rightly pointed out, it's like hundreds and hundreds of time-lapse scans. That's billions and billions of points in each of these datasets. So the software requirements and the hardware requirements to render this are pretty epic. You're looking at pre-rendered content inside Framerate. We make content that is absolutely enormous sometimes, you know, like 32K by 12K renders come out of our engine. When we talk about real-time, it's still very much possible. We've had whole forest sequences running in headsets already, and on mobile devices as well. You know, the current Apple chips are pretty amazing at processing some of this stuff. It's never going to be the same fidelity as pre-rendered stuff that's been churning away on our farm for a couple of weeks, but it's still pretty compelling.

[00:06:45.311] Kent Bye: Yeah, maybe you could walk through a little bit of the technology stack that you have, starting maybe with the LiDAR scanner that you're using, if it's something that's a commercial off-the-shelf or it's something that you've developed internally there at ScanLab.

[00:06:55.853] Matthew Shaw: Yeah, so for us, there's a whole range of sensors that we use, from, you know, the Kinect and some RealSense sensors, where we're getting live-action movement at the scale of a person. We've started using the LiDAR sensors on the new iPhones as well. There's some quite beautiful, quite experimental data coming out of those. A lot of our work, and in particular the Framerate project that you've just seen, uses terrestrial laser scanners. These are machines that are traditionally used in the surveying industry, really. They're firing out millions of laser pulses a second, and they're creating models that can be sometimes 40 million points in a model, sometimes 500, 600, 700 million points in a model. Yeah, so most of the hardware that we use is out there and, you know, people can get their hands on it. That's not to say we don't fiddle with the edges of it a little bit, for sure. I mean, myself and Will have always been taught to know a tool to its breaking point, and it's at that breaking point that you're really going to start to have fun. So, like, you know, what a laser scanner can't scan, a tumultuous waterfall or a mirrored surface and things like this, that's the breaking point of the tech, and it becomes really, really creative for us. All of these bits of hardware, all of these sensors, the sensors from autonomous vehicles and stuff, we're a little bit agnostic to them in a way. For us, it's all about getting the data in the rawest possible form, and that's quite often something called a point cloud; I'm sure you're familiar with the kind of point cloud aesthetic. That's the thing that we've fallen in love with, and that's the thing that all of our tools are enabling us to play with. So once we've got a point cloud out, then we're pretty much into our own software stack, which is all about, you know, there's some cleanup stuff involved sometimes with point clouds, there's a lot of just moving massive datasets around, and then, yeah, we render everything either offline or real-time. Currently our real-time stuff is a combo of work we've done, getting that into Unity, and then getting it onto devices.
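As a rough illustration of the cleanup pass Shaw mentions, here is a minimal sketch using the open-source Open3D library in Python. Treat it as an assumption-laden stand-in: ScanLAB's stack is proprietary, and the file names and thresholds here are hypothetical.

```python
import open3d as o3d

# Load one terrestrial scan (file name hypothetical).
pcd = o3d.io.read_point_cloud("beach_day_001.ply")

# Downsample onto a 2 cm grid to tame a multi-hundred-million-point scan.
pcd = pcd.voxel_down_sample(voxel_size=0.02)

# Drop stray mid-air returns (rain, dust, glancing laser hits) by removing
# points that sit unusually far from their 20 nearest neighbors.
pcd, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("beach_day_001_clean.ply", pcd)
```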

[00:08:47.101] Kent Bye: So we talked a little bit about the hardware, but in terms of the software, what are the other things that you've developed? And is that something that you're productizing and selling, or something that you're just using on a service basis, or is it just that you're an artist and you've created your own tool set that works for your own workflow?

[00:09:04.543] Matthew Shaw: Yeah, I mean, we're definitely not a software company selling our software at the minute. Software for us comes out of tools that we need for creative applications. So there's kind of three aspects to ScanLAB, I guess. One is us building those tools internally, and that tool-building process is very much: a creative problem comes up and a technical solution gets developed; or a technical idea might appear and result in something that can have a really creative test thrown at it; or a kind of technical anomaly occurs and that results in a creative idea. So that's kind of one aspect of ScanLAB. The second is creative collaborations. We've been lucky enough to work with a whole bunch of amazing people, from forensics experts, CI scientists, theater directors, dance companies. We love to partner with people who are in a completely different world and space to us, whereby we can learn a lot from each other, and we can come together around enabling these brilliant conversations where this piece of technology gets thrown in the room and everyone's experimenting with it. And then the final aspect of ScanLAB, and a place we love to operate, is making our own works, sometimes manifest as VR or XR pieces, sometimes manifest as artworks. You shouldn't really end a sentence on an um, should you? That's all I got.

[00:10:24.114] Kent Bye: So in some of these scans that are in Framerate, some are in black and white and others have colors. For the ones that are in color, is that because you've gone in there and hand-placed the colors, or is there some way that you're extracting the textures from these scenes?

[00:10:37.010] Matthew Shaw: So, yeah, you're right, there's two different aesthetics in Framerate. The black and white one is actually, for me, one of the purer ones, because we're not operating on a kind of traditional grayscale spectrum there. We're operating on a quality of reflectivity. So all of these LiDAR sensors are throwing out laser pulses into the world. In this particular piece, it's infrared laser pulses that are going out there. And different materials reflect that laser back differently. So you get this really beautiful quality of reflectivity that manifests itself in the data; in this piece, it's on a grayscale spectrum. It's quite interesting, because, like, you know, a black t-shirt might come up white because it's actually got really good reflectivity and stuff. So it's a slightly kind of off-skew version of grayscale. But for us, you know, that's the true way that the sensor is seeing the world. When you get color data, it's a slightly separate process. So the spatial data is being collected, and then we traditionally map an HDR panoramic image onto that, push the color out onto the point cloud. And that's something that we care deeply about, actually. You know, when we do that, we have a range of fantastic photographers who work with us. Because this is real, right? We're documenting a real scene. There's not so much stuff going on in post. So the lighting is as the lighting was at that time of day. And so having a photographic process on location is really important to us to get the beautiful data that you've seen.
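That panorama-to-point-cloud color mapping can be sketched in a few lines, assuming an equirectangular HDR panorama shot from the scanner position. This is a hedged illustration in Python with NumPy, with all names hypothetical; ScanLAB's actual process is more involved.

```python
import numpy as np

def colorize_points(points, panorama, scanner_pos):
    """Sample an equirectangular panorama onto a point cloud.

    points      : (N, 3) XYZ in world space, z up
    panorama    : (H, W, 3) image captured from scanner_pos
    scanner_pos : (3,) scanner position in world space
    Returns an (N, 3) array of per-point colors.
    """
    d = points - scanner_pos
    # Direction of each point as seen from the scanner.
    yaw = np.arctan2(d[:, 1], d[:, 0])                             # -pi..pi
    pitch = np.arctan2(d[:, 2], np.linalg.norm(d[:, :2], axis=1))  # -pi/2..pi/2
    # Map the angles to equirectangular pixel coordinates.
    h, w, _ = panorama.shape
    u = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - pitch) / np.pi * (h - 1)).astype(int)
    return panorama[v, u]
```

This only holds up because the panorama and the scan share a single viewpoint; color captured from anywhere else would smear onto occluded geometry.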

[00:11:59.213] Kent Bye: Yeah, there's a number of different scenes in here, and some of them were striking. Everything from, like, cows moving around, to someone who built a sand castle that was destroyed by the ocean waves coming in. And then there's one that really stuck with me, which was the decomposition of different plant life, where I don't know if that was captured over an even more extended time. It just felt like it was more of a long-term time-lapse. A lot of time-lapses cover a short time rather than a more involved or longer process, something that we may not be able to perceive at all. Like, we've seen time-lapse with 2D. I've only seen some time-lapse in VR when it comes to, like, Felix and Paul, where you see the clouds move over. But to really see something go through a real dynamic change, I think, is one of the potentials. I'm really looking forward to seeing something that's a spatialized time-lapse, because I think, intuitively, there's going to be some sort of deep visceral reaction to seeing something that we can't perceive.

[00:12:54.187] Matthew Shaw: Yeah, I mean, I think there's two different timescales going on in the piece here. Some are high-frequency scans; the scene with the cattle and the scene in the pub, for example, they're anything from 10 hours to 24 hours, scanning every five minutes. But that's the minority of the work in this piece, actually. Most of it is daily scans that have taken place over the course of a year, every single day. So it's quite an epic undertaking. For example, on the beach here, the scan's taken every single day for a year. And what you're seeing in the visuals is the tides sweeping across that beach every single day, the sediments moving around, the cliffs eroding, and there's also some human-driven processes in there. So we've studied a quarry for a year, for example, and you see the excavators carving away at the surface of the planet. I mean, the ambition of the piece, and what a lot of people are coming out saying, is exactly what you're alluding to there, which is, you know, they're seeing things that a traditional time-lapse has never been able to see. Because, yeah, you can do a traditional time-lapse from a fixed camera position, but quite often that fixed camera position is impossible to determine at the start of the process, because we can't predict what's going to happen in the world. And the beautiful thing about the process that we have going on here is that those cameras can be anywhere later on. So the quarry is quite often best observed from these kind of amazing aerial positions, which would be almost impossible to time-lapse with a drone, for example.

[00:14:15.992] Kent Bye: Okay, yeah, I might have to go pop in and check out some of those scenes more, just to see some of those more extended time-lapses. But logistically, how did you pull that off? Did you just put a LiDAR scanner there for a year and just collect the data?

[00:14:28.232] Matthew Shaw: Yeah, I mean, logistics is one way of putting it; monk-like dedication from our team is another. So we had a couple of guys, Brad and Paul, local photographers in Norfolk, who we trained up in the scanning process. And one of them, every single day, went to these beach locations, forest locations, garden locations. And then we've run the process again more recently up in Glasgow with Demelza and Kunal, again two local photographers who we've trained up in the process, and they've been out doing the daily scans. And it really is like this heroic physical and mental endurance exercise from them, because they take the scanners out with them and they set them up in exactly the same position. They do the scan, come rain or shine or snow or fog or storm, you know. And on the one hand, I kind of envy the challenge that they've had there, just going to the same place every day and observing it change over time through their own eyes, and also observing it through the eyes of the machine. But there's a level of patience that they've shown that I personally have been unable to show as well, you know.

[00:15:27.778] Kent Bye: Yeah, it's really quite amazing to see some of those images. Is it because of the LiDAR scan that you're still able to maintain the perspective, and maybe realign stuff in post-processing if it's out of alignment? Or do you also have a similar type of thing where, if it's slightly offset or moves, then it disrupts or adds additional noise to the data?

[00:15:48.000] Matthew Shaw: Yeah, I mean, you're identifying a bunch of possible problems, for sure. But I mean, we've spent a lot of time making sure those scanners go down in the same position every day. And then the data does get aligned a little bit afterwards as well. There's kind of a second pass, I guess: if the camera has moved ever so slightly, then we can align the data to itself. That's easy in the city scenes because there's lots of things that don't change, but that's pretty tricky on the beach, because literally everything changes every day.
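A standard way to do that second alignment pass is rigid registration against a reference scan, for example point-to-point ICP. Here is a hedged sketch with the open-source Open3D library; it is illustrative only, since ScanLAB's tooling is in-house and the file names are hypothetical.

```python
import numpy as np
import open3d as o3d

# A reference scan and the next day's slightly-shifted scan.
target = o3d.io.read_point_cloud("day_001.ply")
source = o3d.io.read_point_cloud("day_002.ply")

# Refine the alignment with point-to-point ICP, assuming the scanner was
# re-placed within a few centimeters of its usual position (5 cm radius).
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the recovered rigid transform so both days share one frame.
source.transform(result.transformation)
```

Note that ICP only converges when enough geometry is shared between the two scans, which matches the distinction Shaw draws between the city scenes and the beach.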

[00:16:16.577] Kent Bye: What's next for ScanLAB? Where do you go from here?

[00:16:19.418] Matthew Shaw: Well, I mean, I'd say a couple of things about what I hope is next for Framerate to start with, I guess. One thing I would say is that, you know, Framerate is not just there to ask people to kind of think about their position in the world and think on some slightly different timescales, you know, like seasonal time, tidal time, kind of geological time, and hopefully get us all to reflect on our relationship to the planet a little bit more. Framerate is also a prediction, and a lot of ScanLAB's work is a prediction, about how the Earth is going to be documented. We truly believe that the planet is going to be documented to this level of detail, that these spatial records of Earth are going to be perpetually updated every single day by the eyes of autonomous vehicles and the mobile phones that we're walking around with. So what's next for Framerate? I mean, we'd love to continue this process. We've studied a whole series of locations in the UK, but there's places that really deserve the eyes of these machines turned to them: melting glaciers, desertification, you know, the building of whole new cities around the planet. So there's definitely an ambition to take the Framerate process further forward. With this particular piece of work, I mean, we're doing a special preview here at South By, but there's work to be done on this installation, in particular around scale as well. I think in the room that you've just been in, we're able to hold the detail of the datasets really beautifully, and that's there in the images. But I do want you to walk around a corner and suddenly see these visuals towering over you, you know, kind of 15 meters high, and have them completely wash over you.

[00:17:51.499] Kent Bye: Yeah, with the amount of resolution that you have with this, you're certainly able to generate those high-resolution renders and project those out. I imagine with projection-mapping systems you'd be able to combine all those things.

[00:18:00.938] Matthew Shaw: Yeah, I mean, resolution is quite often a problem for folks, right? You know, making 8K content is quite a leap from the HD and 4K workflows. But for us, I mean, with the level of resolution in our scans, they don't break down at 8K at all. We're regularly rendering out 16K, 32K, sometimes bigger frame sizes than that, for content for big immersive spaces. And it's really exciting to us to get hardware coming along, either projection hardware or head-mounted hardware or whatever it is, that can start to handle those massive resolutions. Because, you know, LiDAR, I don't know if it needs it, but I certainly feel like it warrants it, right? Like, the detail is in there, and that starts to do justice to getting it out.

[00:18:44.133] Kent Bye: Right, and finally, what do you think the ultimate potential of virtual reality and these immersive experiences might be, and what they might be able to enable?

[00:18:53.984] Matthew Shaw: I've got to say, as an architect, I probably tend towards an augmented future, where the physical world has these beautiful moments that connect really perfectly with the digital world, and then they can expand and contract on each other. So I'm really interested in that coming-together moment where the digital really, really enhances the physical, and likewise the physical is allowed to do what it does best. You know, touching a cold bit of concrete with your hand and then having a beautiful point cloud effigy emerge from it or something is a happy place for me, probably. The other thing I'd say, around point clouds and around volumetric capture and laser scanning, is the potential for creating memories that we're all going to revisit in the future. Like, I currently 3D scan my daughters all the time when they're out playing in the park and on the swings. And I don't fully know what I'm going to do with those scans yet, but I really believe that they will know what to do with them in the future. We are reaching the end of the time when 2D images are enough, are even kind of acceptable and the norm, I think. Yeah, the idea that our memories become these spatial, time-based entities that exist in a digital world but are also tied to the physical location where they took place. So I might be able to walk along a street and unfold time as it played out on that street ten years ago. I might be able to sit down in my lounge and play out again the memories of my children, even though they're there as teenagers now, and then when they come back as adults.

[00:20:30.090] Kent Bye: Awesome. Is there anything else that's left unsaid that you'd like to say to the larger immersive community?

[00:20:36.017] Matthew Shaw: I guess I would say that we're all in this kind of quite privileged position of playing around with tools that are emerging. And one thing that we're always trying to do with machine vision technology in particular is be quite critical about how it's being used in the world: the idea that space is being perpetually documented by autonomous vehicles and by mobile devices, and ultimately by CCTV as well. There's brilliant, beautiful things that can come out of this, but there's some sinister uses coming as well. And so being critical about the tools that we use, and telling stories that show the positive but also highlight the way that these things can go wrong, and, yeah, taking stewardship of the technologies as we go forward.

[00:21:19.581] Kent Bye: Awesome. Well, thank you so much for joining me today on the podcast. It was really cool to see the different representations of time, and seeing what the computers see, and to think about that concept of the umwelt, of what the machines are seeing. And yeah, really cool to also see the multiple angles and perspectives. I mean, as somebody who's into VR, I like to be immersed from a first-person perspective, but I can kind of see the architectural top-down perspectives that are trying to give a sense of the 3D scene that's also in this piece. But I think it works as an art exhibition as well. So yeah, just excited to see the future of the technology, but also the artistic representation and what other people think about it as they experience this. So thank you. Thanks for your time.

So that was Matthew Shaw. He's a director and co-founder of the studio called ScanLAB Projects. So I have a number of different takeaways about this interview. First of all, it's pretty interesting to have this kind of umwelt experience of seeing what the machines see and what kind of insights you get out of that. Also just to see how you can do these time-lapse projects over long periods of time. And it's really quite meditative to see some of these different pieces, to see these different processes unfolding over long periods of time, something that you would never really be able to perceive just because it happens incrementally day after day, and we can't really put together what that image looks like. I am really looking forward to seeing the more immersive VR version of this, and I think there's probably going to need to be a lot of innovations in terms of taking the mass of data that is being represented here and being able to deliver that in real time. I don't know if it's shaders that are taking that point cloud information and taking horizontal slices, but it's got a very specific artistic style to it as you watch it. You're kind of looking into these different perspectives and views as you're walking around this room. Like I said, it would be really cool to see the immersive version, but there's also something quite interesting about having one volumetric capture and many different perspectives on that same thing that's happening as you're walking around a room. It has a different quality and feel that I haven't quite experienced before. You could certainly recreate something like that within VR, where you're in a room and you're having different windows into a volumetric scan, and you're able to have something that's unfolding over time. The particular subject matter of these different things that they're shooting is dynamic and unfolding at different timescales that are inherently interesting to look at. Yeah, really interesting to see how they're pushing the edge of what's even possible in rendering out these massive amounts of data, and the amount of dedication to be able to capture some of these things. And so I'm looking to see who they're collaborating with, taking these different techniques as their own artistic practice, and having different ways that they're doing it across different contexts and different collaborators. So, that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon.
This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
