#978: Democratizing Volumetric Capture with Scatter’s Depthkit: The History, Evolution, & Future of Accessible Volumetric Filmmaking

Depthkit’s volumetric capture solution is democratizing access to the tools of volumetric filmmaking. I had a chance to do a deep dive with two of the co-founders of Scatter, James George (CEO) & Alexander Porter, who have been pioneers in the volumetric capture space. We talk about the history, evolution, and future of volumetric filmmaking, but also how tools like Depthkit play an active part in shaping the storytelling affordances of the medium. We cover a number of different sensemaking frameworks to make sense of emerging technology evolution and diffusion, but also the funding that catalyzes innovation and the aspects of culture that are on the bleeding edge of digesting, disseminating, and pushing the boundaries of innovation around creative expression and immersive storytelling.

In the second half of this in-depth interview, we do a technical deep dive to look at some of the pragmatic challenges and open problems when it comes to integrating volumetric capture within an independent immersive storytelling project. We talk about Scatter’s intention of democratizing access to the tools of volumetric filmmaking and the decision to move from a scrappy open source project to a fully-fledged start-up company with funding and customers.

The team behind Scatter and Depthkit are some of the most experienced immersive industry veterans since the modern resurgence of consumer VR. Be sure to check out my previous interviews with George, Porter, & Yasmin Elayat on Zero Days VR as well as The Changing Same, which debuted at Sundance 2021. I also have a number of unpublished interviews with the Depthkit team that I hope to publish at some point in order to fill the gaps within the history and evolution of volumetric filmmaking.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So Scatter is a company that's working on a product called Depthkit, which uses depth sensor cameras connected to digital SLR cameras to be able to create volumetric capture. So they're some of the real pioneers of volumetric capture. In fact, we dive into the history in this episode today to get into the evolution of Scatter and the volumetric photography that evolved into volumetric capture as the medium of virtual reality was also coming up. And so we dig into what it means to create tools that are helping to shape and form this new communications medium. And then in the second half of this conversation, we dive into some of the more technical details in terms of how they were using it in the project that was launching at Sundance 2021 called The Changing Same. So in the previous episode, I did the interview with the creative team of The Changing Same. And this is a little bit more of a technical deep dive into not only the Depthkit technology, but the evolution of that as a medium and a communications form and where it's going in the future. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with James and Alexander happened on Saturday, January 30th, 2021. So with that, let's go ahead and dive right in.

[00:01:25.138] James George: So I'm James George, I'm co-founder and CEO at Scatter. And prior to Scatter, I worked as an XR artist, a director, a new media artist. And, you know, my background is primarily technical, trained in computer vision and graphics. So that's me. I'll pass it over to Alexander.

[00:01:42.452] Alexander Porter: I'm Alexander Porter, a co-founder at Scatter and longtime collaborator with James. And my approach, or kind of where I come to this work from, is originally from filmmaking as a discipline and from photography as a discipline. So, as a user of cameras and teller of stories, my work now is, I think, an evolution of that, where I direct some of our virtual reality experiences and also have been kind of the steward of our workflows, understanding how they need to evolve and where they're best applied. And most recently, as we are continuing to evolve our technology, I've become a person who's sort of at the tip of the spear in terms of applying our tools, which then allows us to create a lot of insight about what needs to get refined or what's working or not working.

[00:02:31.798] Kent Bye: Yeah, I bought my Oculus Rift DK1 back on like January 1st, 2014 and got it like a week later or so. And then a few weeks after that, there was the Global Game Jam, and then also the Sundance Film Festival, where Clouds was showing. And there was news that there were already virtual reality experiences showing at Sundance. And I know in our previous conversations we've dug into those early times. But I also in those early days was overambitious and actually bought the Depthkit. Actually, it was before it was Depthkit. What was it called?

[00:03:05.540] James George: It's called the RGBD Toolkit, the name of the GitHub repository where we were just pushing up our hacker experiments.

[00:03:17.840] Kent Bye: Yeah, the RGBD Toolkit, where it was like, you know, you would put this SLR camera together with a Kinect. And so I bought the Kinect. I was buying all this technology when I didn't have any project at the time, so there wasn't anything that was really catalyzing it. But I remember it was sort of like a hacker project where, if you did want to make it, I had even bought a 3D-printed mount to be able to put a Kinect camera on top of a DSLR camera that I didn't even own at that point. So from the very beginning, you've been working on the tooling for what I think of as DIY volumetric capture, this do-it-yourself approach: let's create the tool set to enable people to do their own volumetric capture. And since that time, over the last seven years since I bought my first DK1, the whole area of volumetric capture has had things like Microsoft's Mixed Reality Capture, which is very expensive and very high-end, and you have some stuff on your phone. And I see what you're doing as sort of in that middle area for the prosumer or the artists and the creators who are really trying to take the tools that are available with SLR cameras and these volumetric scanners. And over the years, you've been working on making that a better user experience with Scatter, which you eventually created. But you've also been at Sundance showing these different pieces over the years, and I've watched the evolution over time. So maybe you could catch us up as to what's happening with Depthkit now.

[00:04:45.762] James George: Sure. Yeah. I can do a quick, concise summary of the origin story that you're talking about there from our perspective, and then connect the dots to where we are today as Scatter, the company, and our product Depthkit. So back in those days, Kent, it was really about creating community and building together. And like you said, DIY. I think the concept of volumetric capture wasn't even an idiom back in 2013, 2014. It hadn't even connected to XR yet, and at that time it was really about this concept of re-photography. You know, can you take a photograph or an image or a video and then be able to have it as a data set, and then be able to almost re-materialize that photograph from a different angle using 3D graphics? And surprisingly or unsurprisingly, the technological milieu that we were in at that time also gave birth to, like you mentioned, the Oculus DK1, the ability to immerse yourself in an interactive 3D world. And all of a sudden, the value of being able to see spatially captured imagery from other angles became paramount to being able to actually create photorealistic, authentic content for immersive spaces. And that allowed folks like Alexander as a filmmaker, and myself as someone who's interested in cinema, to start to create in the medium of virtual reality without having to use the conventional synthetic graphics that are traditional within video game production, which are awesome and expressive. But there's a certain sensibility that comes with the filmmaking and photographic disciplines about capturing from our world that's a fantastic creative palette, and we wanted that capability to be available to us. And all of a sudden, immersive interfaces became the best and most exciting way to actually bear witness to these interactive spatial worlds that were captured from the world, holograms if you will. And we wanted to discover that with the community.
We wanted to release things. At that time, we were working open source. You know, Alexander and I were collaborating as designers, freelancers, and artists in our own respective creative practices and collaborative work. And that gave birth to two things. It gave birth to Clouds as a creative project, and it also gave birth to the tools that were built in order to produce that project. And that theme, of having a creative vision, wanting to create predominantly XR-based immersive stories, and then building the tools to make that possible, has stayed true as a theme for us. So fast forward to today: now we have a company, we have 10 amazing teammates at Scatter, you know, scattered throughout the world now post-COVID. And we're all working towards the same idea where we're building these experiences. In this case, at Sundance right now we have The Changing Same premiering, which is our fourth title in the vein of volumetric films. And at the same time, we have a product, Depthkit, now a robust product that you can license and still DIY. We have a thriving group of customers in our creative community who are making in a similar fashion. And we're right in the weeds with them, telling our own stories so that we can have the most compassionate understanding of, as Alexander says, how to build the tools and also how to work with our community to make XR projects. So in that same time, volumetric capture has, as you mentioned, seen lots of other solutions come into market. And we've stayed committed to this approach where we can enable folks to do it themselves anywhere in the world. So low-cost hardware that's readily available around the world, and being able to teach and train creators to do it themselves, as opposed to us serving them as a service provider, which is how Metastage and Dimension Studios work, just by example. That's the primary distinction.
We want to keep making that user experience easier and easier so that folks are empowered to tell their own stories on their terms.

[00:08:33.217] Alexander Porter: I'd love to elaborate on a few themes. It's really humbling and interesting to get called back so far in terms of our process, Kent, by you. And I love that this is a function that I think you serve in this community that's really exciting. And I didn't actually realize that you were one of the early users, or at least early acquirers, of the RGBD Toolkit. But there's a theme there that is really interesting, and people talk about maker culture or DIY practices a lot. You know, it's a concept that's now had a long history, but there's a theme there that's really important to me, and I think it was important to us then, which is this idea of tools that emerge directly out of a concrete and direct need, produced by the people that are actually experiencing that need. And so James and I were collaborating at that point, and we were building and releasing the tools that were a natural outcropping of what we felt that we needed at the time. And that is something that persists. When you look at our work today, I think that's very much still true: we try to stay grounded in that as an approach. And then the second thing is, we started this project with a practice that dealt with speculative approaches, that anticipated needs in the future as a creative practice. And so it was directly, very much inspired by speculative fiction, and thinking that, as designers and as technology creators, you have to actually be responsible for, one, the fact that you have impact and you have power in terms of the things that you create and the ideas that you start to spread, and that you also need to be responsible for trying to anticipate where things are going to go. And it's a very strange and quite a surreal process. And so this was in the days before virtual reality was, like, a not-embarrassing thing to do, you know, or less embarrassing or less odd or strange or kind of a technologically intense thing to do.
We were having these conversations about anticipating a future, a post-photographic future to some degree, a future where the viewer has autonomy and control over their own perspective inside of stories. And so we had this really interesting journey of speaking quite abstractly about this idea for quite a long time before virtual reality emerged, and then having VR as a technology emerge and actually enable the consumption of the tools that we'd been creating. We created tools that didn't yet have an interface to perceive them, and then that interface actually emerged. It occurred basically around this moment in 2014 in the context of Sundance, where suddenly the world of film and the world of technology and video game interfaces started to merge. And 2014 was very interesting because we showed Clouds for the first time there. And also at the same time, I was working on a project there called Love Child, which is a documentary film about video game culture in South Korea and the relationship between virtual spaces and real spaces, where I used Depthkit as a visual effect inside of that film, dealing with these topics of disembodied physical spaces. I think that moment is a really wonderful synthesis of these two ideas, where the film world is starting to emerge into what's now become virtual production, and also the interfaces were evolving to allow for this convergence to happen. And so that moment in 2014 was, I think, a very poignant origin point for what's now evolved into a whole creative culture and also a kind of techno-political landscape. So this is a wonderful moment to recall.

[00:11:46.830] Kent Bye: Yeah, I've been really getting into the philosophy of Alfred North Whitehead, his process philosophy. And he has this term called concrescence, where I think of it as like a culmination and birthing that is happening at the same time across different independently developing processes of emergence. And then all of a sudden, I see the project of Clouds where you're already doing that speculative fiction and design of creating this piece. And it's already got this inherent spatial element, where essentially you just implemented the SDK to be able to make it a VR project, and it got into Sundance before you had even thought about it. And it was almost like, oh, hey, we have this hardware now and we can just support this. And so I see that as this moment of concrescence, where you were already tuned into that volumetric way of taking a documentary and deconstructing it and finding new ways of allowing the viewer to express agency through exploring all the different interview clips that happen to be about creative coding. But as I've been doing the process of the Voices of VR podcast and doing all these different conversations, I've noticed that there are at least four or five distinct phases of the emergence of a communications medium. The first phase is the development of new technology that creates new affordances. And then the second phase is the artists and the creators and the makers who recognize those affordances and then get all these ideas about what they could do. So they start to make pieces that creatively explore what those affordances are. And then the third phase is the distribution. So you have to have some way in which people can actually watch this.
And so in this case, it's the virtual reality platform to be able to show these spatialized creations. But also broader within the industry, we have things like Sundance and Tribeca and South by Southwest and IDFA DocLab. And then eventually you have other distribution platforms like Steam and Oculus Home and itch.io, and WebXR is coming up as an emerging distribution platform. So you have the ways that people can at least watch the content that artists are creating with the new technology. And then the next crucial phase, I'd say, is the audience watching it. But maybe an interim phase could be the influencers or the media covering what those experiences are to help guide. But usually there's more of a direct connection. So you go to Sundance New Frontier, you see it, and there's like a buzz that happens. And so there's a conversation, but sometimes that happens as a result of people already actually watching it. So I think that happens first with people starting to watch this content. And then as they watch the content, it actually creates this feedback loop cycle where they're learning how to watch the content, but they're also giving feedback into both the technology that's being developed and the artists that are creating the content. And so you have this feedback loop cycle once you have the new technology, the artists that are making stuff, the distribution platform, as well as the audience. Eventually there's maybe an interim space, once the ecosystem gets large enough, where it's about discovering and filtering and pointing, with Road to VR, UploadVR, all the influencer economy, pointing people in the right direction to discover things that are out there within the existing ecosystem. But as I think about that distribution platform, I see that a lot of times artists will be happy with whatever the technology is there, and they will likely have to hand-roll some stuff.
But sometimes it becomes a part of artists' practice to actually be at the very beginning of that cycle, which is developing the technology that enables artists to create stuff. And that's where I see both of you have been, in that space of: you want to creatively express yourself in these new ways, but you actually have to create the technology and create a platform and create a software ecosystem and a community. It's at the very top of the stack, but at the end of it, you have all of these new ways of expressing yourself, as you create that technology, enable the artists, have the distribution platform and all the film festivals, and then have those experiences get into the hands of the audiences. And if there's any one thing in this stack that has not really taken off yet, it's that audience phase of people discovering this work and seeing the work, and then taking it beyond just what the film festival is, but really going beyond. And I think we're seeing that with other technology trends. But I'd just be curious to hear each of your reflections on what I see as the four crucial phases, and maybe five as we continue to move on and have more of a vibrant ecosystem. And yeah, how you've found yourself in this area of creating technology platforms around volumetric capture to create entirely new dimensions of what's possible with this medium.

[00:16:07.977] Alexander Porter: This is something I think a lot about, and we have actually been thinking about from the outset: specifically, what affects culture? What are the kind of natural cyclical processes that allow new ideas to disseminate, and hopefully really profound and impactful ideas to spread? So this has actually been part of our conversation, and I think it did influence our approach from the beginning. And in hindsight, we thought that we were speculating in one area, which was about the interfaces, the experiences, the evolution of photography. But perhaps we were also being speculative about the fact that what we would be creating would actually be accessible. Like, that was actually a speculative domain. We were working as if there was machinery to really receive and spread the work that we were making, which just wasn't true at the time. And also we were working as if we were in a world where it was not extremely tedious and difficult to actually make the things themselves. In hindsight, in those banal areas we were working through sheer force of intensity and will and a lot of support and community collaboration. We were working in the way that we would like to be able to work in the future, where the tool sets can actually disappear and you can have a natural experience of expression, and the interface with the world is very seamless, where you can be quote-unquote done when the project is done. You don't have to be responsible for ferrying the project almost by hand to each individual person. So in hindsight, I think about that, and we thought that we were speculating in the kind of cute and interesting areas, but actually I think what was more profound was to try to create in the way that we'd like to be able to create in the future.
And I wanted to add one thing. You shared this framework that I wasn't really familiar with before, but there's a framework that I've been thinking about, that I was introduced to in college by a theorist, a thinker, one of my professors, McKenzie Wark. They are very passionate about the Situationist International and Guy Debord and the relationship that this particular kind of avant-garde movement had to the dissemination of ideas. And there's this specific notion that has to do with the way that the avant-gardes of that time, which was very much about fine arts and political practices, the way that they create work, and then the way that the kind of natural world, and often the commercial world, ingests those ideas. And so there's a very natural process where people will choose, by whatever drives a person to create in this way, to create in a way that is avant-garde in whatever your discipline is. And then commercial structures will have to build around those and then try to ingest those ideas, which oftentimes are quite spiky and a little bit hard to actually consume. And in that process of consumption, and I think for us now it's about commercialization, a fair amount of the complexity and the nuance and sophistication of those ideas will actually have to get ground down, but those ideas then get transmitted. And so in the framework of the Situationists, the idea is that those avant-gardes are responsible for producing such strange and kind of spiky and complex ideas that those ideas are difficult to ingest for traditional commercial structures. So that whole process, and the fact that it's kind of always naturally happening over and over again, is something that's very top of mind for me. And it happens in terms of the technologies. We create really sophisticated and quite complex and obtuse technologies that then slowly get digested and simplified.
Also in the aesthetics, we saw very common trends where we would create new aesthetics, whether it was point clouds or just these surreal and odd aesthetics, and different visual cultures would digest them at different times in their natural phase. So we would create new tools or new effects. And the world of fashion would ingest them quite quickly, because they're very focused on visual novelty, and visual novelty is rewarded and celebrated. And so they would kind of ingest it, and then you would get music video directors. The world of rap, like the visuals around rap music and hip hop, would ingest the ideas as well. So we would produce new tools and new outcomes and new outputs, and these different communities would almost cycle through adoption. And in that journey, we got to work with incredible people, whether it's Kanye West or just all of these people who, in their natural practices and in their cultures, adopt novelty in different ways. And so there's this natural cycle that just occurs over and over and over again. And I think where we are today, at least in terms of the technology, is a moment in the kind of later stages of digestion, because big technology is so incredibly slow. And so now we have the kind of mega companies of our world who are slowly trying to digest these ideas 10 years later. That's kind of the way I perceive it. And in doing so, they have to soften things a little bit. They have to make it palatable to their audiences, et cetera. And in doing so, the opportunity is that these ideas get to spread far more widely. So in terms of grand unifying theories of cultural change, that's the one that I sort of think about a lot. But I think it's very similar to what you're describing, phases in different cultures. So that's my soapbox on the topic. James, I'm curious what you think.

[00:20:57.741] James George: Yeah, I love what you're saying, Alexander. And I remember these formative discussions and our relationship to that digestion process, you know, making choices about whether we want to be on the avant-garde and move on to the next technological frontier as aesthetic excavators of sorts, as artists, or do we want to be the shepherds of the digestion, so to speak. And I think we've made a choice, and it was a very important choice, that as volumetric filmmaking, as spatial capture, becomes digested, we want to be there along for that ride, as opposed to moving on to the next technological trend at the frontier, which for a lot of our peers at the time was kind of their guiding light. But I want to circle back, Kent, to a concept that you raised and present my own perspective, which is a nice counterpoint to Alexander, around the beginning of your cycle that you mentioned, the premise being that technology appears and then artists use it to discover, you know, affordances occur. A kind of formative point in my career prior to founding Scatter was that I was the first artist in residence at Microsoft Research in Redmond. And it's a really esteemed position. I was really privileged to be invited there, and I could kind of pilot that program, which has gone on to be a thriving program with many artists going through that organization. You know, Microsoft Research, just a bit of context: it contributes to academic research. Most of the folks there are writing white papers, inventing new technology, going to conferences and presenting peer-reviewed science. However, because it's housed within Microsoft, it's also one of the major engines of innovation, of trickle-down technology and intellectual property that becomes a core part of the products that the world runs on, or that run the world, depending on your perspective.
So in that context at Microsoft Research, the theme of my time there was this concept of technological determinism, this idea that once an idea has been thought of, it is inevitable, technologically inevitable, that it will become manifest and move forward and proliferate. Kevin Kelly, the founding executive editor of Wired, talks about this a lot. And I wanted to understand that process. And I felt that Microsoft Research is one of the places where those affordances occur, right? This is the genesis point or origin point. And I had a really profound experience there, just to tell a story. I was there, I was working on Clouds at the time, which was one of the reasons I was invited. They were interested in artists working in a research context. I was working with similar tools. This was with the team that had originally built the Kinect that we hacked to make the initial Depthkit. I met a team there who were looking at my work and they're like, well, what if you wanted to have the whole body? What if you had a complete hologram? Because we were working with these fragmented, single-perspective, kind of glitchy holograms at the time. And I was like, are you kidding me? That would be amazing. And one of the researchers broke the kind of NDA, the non-disclosure agreement, and, you know, I won't name names here, I don't want to incriminate anybody, snuck me into this room that was the early version of what has become the Microsoft Mixed Reality Capture stage. And so I was standing there in this green palace of cameras, you know, this architectural majesty of hundreds of cameras, seeing the very, very early holograms that they were creating, the amount of infrastructure and the assumptions that go into that.
It's like they have infinite resources at this organization, and they're focused totally on producing the highest quality results, and they'll stop at nothing in terms of being able to throw a brute-force approach at it. There's millions and millions of dollars worth of equipment to produce these holograms. And at that moment I was in awe of that, but it also became very clear to me that this context for creating these affordances that you talk about, this kind of genesis point, is one of the problems with essentially elitism or bias within the world of technology, Silicon Valley, however you want to call it, where the assumptions of the folks working at this origin point often don't consider people who are different than them, because they're not in the place where we were. You know, I'm a very privileged person, but we were working with intense resource constraints as independent artists. And so not only was I inspired to see, you know, technological determinism, like I could see this is going to be a thing, one of the world's largest companies is working on this hologram machine. But I also knew that the people that we represent, that are kind of self-similar, that are artists and independent, worldwide, are not going to have access to this. And then who is going to be, you know, we're talking about representational technology, like storytelling, which is ultimately a medium of transmission for communication and voice. Whose voice is going to be behind the hologram? Who's going to be telling these stories? Who's going to be represented if the powers that be are creating these inaccessible worlds?
And so at that point, I became very galvanized that I want to, in my own work and my work with Alexander and our company, stand for the representation of a more diverse, more accessible sector of humanity that can actually have access at any time, given affordable hardware and usable technology, which is what's readily available, to kind of play in a similar domain and actually be represented and tell their story within this world of 3D interactive new media that is inevitably emerging. So the idea that artists can be the ones generating the affordances, and have a different set of assumptions and context, and can actually come from a background in the humanities, is what I want to say, you know, something that has this more broad view. These things don't just occur there. Humans are inventing these things and making choices, and their assumptions and biases are ingrained into those affordances.

[00:26:52.443] Kent Bye: All right. My mind is exploding with all these ideas, and I'm adding and modulating them in my mind with the sense-making frameworks that I've been using. And one of the sense-making frameworks I use a lot is just this foundation of pluralism that allows me to swap out different frameworks. And I'm going to sort of pull in different frameworks, but first of all, I want to reflect on what you were saying, James, about that origin point. One of the things that I've been finding, just really digging into the history of the evolution of technology, is that it's often the storytellers and the philosophers and the science fiction writers and the speculative designers who are imagining these futures. A lot of times we look to, say, Star Trek and the holodeck, or science fiction writers like William Gibson with Neuromancer or Neal Stephenson with Snow Crash, where they're imagining these worlds in this speculative design that then inspires people like John Carmack and others who are actually like, hey, we actually have all the technology that we need to be able to build these immersive 3D interactive worlds. And then Ready Player One gets released in 2011, and after that, Sword Art Online in 2012. And that creates a cultural context where there are these stories that people are telling about these technologies, and then people go and actually build it. So it's almost like the roadmap of what is possible. And so, yeah, there's a philosophical storytelling element there. You're right in the sense that it does have this technologically deterministic sense of it just appearing, but it doesn't just appear out of nothing. It comes out of the context of these stories. The other thing that I just wanted to reflect on, from what Alexander was saying: there are theorists that I'm looking at for these different phases of evolution.
One is Simon Wardley, who talks about both technology diffusion, which is technology getting out into the world, and technology evolution: within the same iteration of technology, there are different phases that it goes through. He breaks the technology evolution up into four phases. The first phase is the genesis, the idea that you'll even be able to make it. That's what I was just talking about with the graduate students in academia who are pushing the limits of the technology. Maybe they're getting inspired by science fiction or philosophy, but they're pushing the limits of what the technology can do. There's this catalyst to create the first human-computer interaction with Sketchpad, or the first virtual reality headsets, but those are duct-taped prototypes. So that's the first phase: the proof of concept, the minimum viable product showing that this can exist and what the experiences can be. Then it moves into the second phase, the custom-built enterprise phase, where it starts to get productized to the point where you're producing it for the military or academia or enterprise. NASA Ames was doing a lot of the early innovation when it comes to virtual reality technologies. You have the Link Trainer, which led to different aspects of military training, and augmented reality being deployed for pilots since the 1960s with Tom Furness. But it's very expensive to produce these. Each of those enterprise implementations is custom, bespoke, handcrafted; you're just doing the first implementations. And you can't generalize until you have multiple implementations across different contexts, and then you start to see what's common about all of them.
Then you get into the next phase, which is the mass consumer product, where you're at a scale where it's getting into more people's hands, until the last phase of mass ubiquity. We reached that with, say, gasoline or electricity, or even cloud computing, you could say, starts to reach into that commodity phase, where there's not a lot of difference between AWS, Azure, or Google Cloud, as long as they can provide the service. It's like the basics of electricity, where you're just trying to get it into people's hands. Over time, you can trace networked computing through each of those phases: the genesis, the idea; then the custom, bespoke enterprise phase; then the mass consumer phase; up until mass ubiquity. All of technology, I think, goes through these different phases. The other framework is technology diffusion, which is technology getting out into the world, with Rogers' diffusion curve, where he talks about the different phases: the pioneers who are pushing the technology, then the early adopters who are adopting it. Then you cross the chasm from the early adopters into the mainstream. That's the phase of exponential growth, where adoption has been doubling at a consistent pace, and crossing the chasm is essentially like going viral, jumping to a whole other scale at that hockey stick of growth. So that's the early majority, then the late majority and the laggards, where it continues to diffuse through the culture. And it's very interesting to hear about the fashion and hip hop communities, or I'd say just generally the artists and creators and architects and visionaries, who are those early adopters and diffusers before something goes viral.
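[Editor's note: the doubling-then-saturating dynamic described here, consistent doubling among early adopters, a hockey stick, then leveling off through the late majority and laggards, is the classic logistic S-curve behind Rogers' diffusion model. As a rough illustrative sketch, not anything from the episode, and with parameter names chosen for illustration, the cumulative adoption curve can be generated like this:]

```python
import math

def logistic_adoption(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Cumulative adoption at time t under a logistic (S-curve) model.

    Early on, the curve grows near-exponentially (the doubling phase);
    around the midpoint -- roughly the 'chasm' into the early majority --
    growth is fastest, and it then decelerates as the market saturates.
    """
    return carrying_capacity / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Sample the curve around the midpoint: adoption is ~12% two time units
# before the midpoint, exactly 50% at the midpoint, and ~88% two after.
samples = {t: round(logistic_adoption(t), 2) for t in range(-4, 5)}
```

Fitting `growth_rate` and `midpoint` to real shipment or install-base data is how analysts estimate where a technology sits on the curve.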
The last point that I would throw out there: when I first got into VR, it was just as things were going from Kinect v1 to Kinect v2. I was able to get into the early developer release program for the Kinect 2, but it was so locked down, compared to all the ways the Kinect 1 had been opened up, that the Kinect 2 was almost a failed product before it even hit the market, because Microsoft had already decided not to ship it automatically with the Xbox. So it was a peripheral that nobody was designing anything for, sort of a failed product. But when I talked to Mark Pesce, he said the Kinect is actually probably one of the most successful technologies in all of XR; it just got morphed into the HoloLens. And now we have iPhones and LiDAR. Volumetric capture and the concept of the Kinect have continued to evolve through different phases of the technology, but it wasn't able to really get into the phase of mass consumption; it still kind of hung out in that custom, bespoke area. And the artists, once they got a hold of it, really kept it alive. So as you were talking, there's a lot there, but that's at least how I start to think about it. In a lot of ways, Depthkit is really at that point where you've been going through those different phases. And it's really interesting to think about how accessibility was such a key part of the genesis of that speculative design, because we're at the phase now where we have LiDAR cameras on iPhones and iPads.
I remember talking a number of years ago where you were saying, yeah, this technological roadmap is on a trajectory towards mass consumption, where people will just have the hardware available, but the toolsets to actually do something with it still have to be built. You've been developing the technology for a world that is only starting to exist now, where there are all these scanners, and the HoloLens 2, and Microsoft taking that same HoloLens 2 sensor technology and redistributing it again as a Kinect camera that people can use for Internet of Things devices and enterprise applications, and hopefully at some point plugging into things like Depthkit. So anyway, as you were talking there, so much came up. I just wanted to share some of those different frameworks that I use and get some of your thoughts on all that.

[00:33:44.011] Alexander Porter: I love this. I just try to compliment you as much as I can, Kent, because the fact that you maintain these really sophisticated frameworks for thinking about the way these things evolve is a service to this community. And I'm just so grateful to have conversations of this depth about this stuff, because these ideas, and the importance of being a conscientious adult and a deep thinker as you are creating technology, are extremely important. So I'm really grateful for that. There are two things I want to add, to be intellectually honest, and then I'd love to talk about the present day. One is, I think you are really right, and for me there's this interesting arc from stories to stories, which is that the origin of a lot of these ideas is truly in the minds of children. It's in the minds of the people who end up growing up to be researchers, encountering, in quite an unsophisticated, childlike state, fundamental mythologies about what the future can look like. And culturally, we need to be responsible for that, because oftentimes the mind that encounters those ideas evolves as a technical mind, but we don't always evolve the ethics or sophistication about what the impact of the technology will be in the world; it stays in this oddly nascent, childlike state. Like, Ready Player One is a really strange story, and I think what's powerful about it is that it conveys this childlike wonder, and also a volatile childishness, in the formation of a lot of these fundamental technologies. I think that's really critical to look at. The dreams of children form these massive platforms, and we also need to make sure that we grow our ethics up to match. And I think that's probably true for all of us, right?
It's like the generation we talked about were reading Isaac Asimov, and they were reading fiction from before them. So that's just one idea. And for me, it's fascinating that you have children encountering these stories and growing up to make technologies that hopefully can ultimately make profound stories again. So that's just a wonderful idea. The other thing I wanted to underscore is that in situations where technologies emerge seemingly fully formed, like an immaculate conception of a technology that seems strangely mature, it's usually because it was created secretly for war-making. We know the story about ARPANET and the origins of the internet: it was funded in a complex context of war and domination, and I think we have to recognize that. The origins of computer vision, when we contemplate why it seemed to emerge so fully formed and so robust, with communities of contribution and funding, are that it takes a lot of its origins in war-making. That's definitely true of the Kinect, clearly, and it's true of computer vision writ large. So that's the thing I want to say: the affordances don't emerge out of the ether. If they're really mature and well-funded to start with, it's oftentimes because they had ulterior purposes, and I think there's no exception. So when I think about our work, it's not about taking something from nothing and working with it; it's actually taking something that was created for purposes of control and domination and trying to adapt it, rehabilitate it, for creative and humanistic purposes. So those are my last points on that. I would love to talk about the present day. Also, James, I'm curious to hear your thoughts about the evolution of the Kinect.

[00:37:01.854] James George: Yeah, I mean, I know too much. There are things I know that I shouldn't know, so I'm going to be a little bit cautious. But just to reflect on what you're saying, Kent, because nothing you said is wrong; that's exactly how that product has evolved. There's a quote from William Gibson: the street finds its own uses for things. It comes to mind when you're describing this, because what you described was a very top-down technological rollout. And same with what Alexander is saying, this idea that these things get incubated beyond the public eye in militarized contexts for defense and then emerge in public, fairly robust, as the Kinect, a video game peripheral. And that's true; that does happen. And at the same time, there's a countervailing force of the street, the interpretation of technology through whatever the public zeitgeist is, and the hacker mentality pushing back on that in a very disorganized, bottom-up, emergent fashion. And I'm realizing for the first time in this conversation that I like to exist fully within that tension, that I don't want to be on either side. I think Scatter, as a company, is trying to navigate an existence within that tension. What I mean by that is, we are building a business. We've moved from being open source hackers, totally DIY, zero resources, disorganized, literally dis-organized, with no company as an organizing force, to running a company: having a product, focusing on a business model, raising money from venture capitalists to fund the business, and taking on the responsibility of the more top-down needs of being in that position of economic stewardship. We have responsibilities and stakeholders: customers, investors, and a creative community. And, you know, it's unresolved. It really is truly unresolved.
The question that I think the company is facing now is: how do we stay true to the ethos of our origin and our commitment to accessibility and democratization and diversity of voices, and also still build something that can be universal and ubiquitous and transformative for humanity in a positive way, and be responsible for that? As Alexander says, follow that childlike dream of an immersive metaverse of 3D connectivity and creativity, where anyone in the world can be with one another as a hologram and share space without the need for travel or transmission of infectious diseases, let's say, and in getting there, also ensure that that place is a safe, universal, inviting space for all of humanity. That's the mission and the challenge, because I think neither of the two extreme models works: incubate militaristic technology and then force it down the throats of the populace, or work inside a cyberpunk hacker ghetto, only shaking your fist at the Facebooks and the Amazons of the world, with no resources or power, at the whims of whatever they're producing. So occupying this middle ground of entrepreneurship and artistry is our daily challenge. And it's tough on a day-to-day basis. Those forces are in conflict, and they create real friction.

[00:40:08.280] Kent Bye: Hmm. Yeah, a couple of other thoughts, and then we'll move into where the technology is. As we're talking about this, it reminds me of Hegel and the dialectical process he laid out, where there's a thesis, then an antithesis, and then a synthesis between those two things. That dialectical process of being in opposition means every person has some perspective on reality, but they have to be in dialogue with other people who have other lenses on reality. There's no one true story that's universal to everybody, because everybody's going to have their own experiences within their own context. That's the foundation of that pluralism: the incompleteness of any one perspective, and the need to have these conversations. And at the heart of those different perspectives are these dialectical processes of mutually exclusive polarities. When it comes to the United States, I think of Lawrence Lessig's pathetic dot theory, where he says there are four main dials you can turn to shift culture: the technological architecture and the code, the culture, the politics and the law, and the market and the money. Those four areas tend to have some influence. As we're talking about this, the politics and law of the government say we need to have a defense, and they have tax money from the culture. They take that tax money and say, we need to develop the cutting edge of technology to prevent other countries from attacking us. So it comes out of this defense mindset, that mutually destructive context where everybody's trying to kill each other. And that's the deeper context going all the way back to Ivan Sutherland, ARPANET, and the Information Processing Techniques Office. All these things were funded by the military to push the technology forward.
It has come out of that context. Now we're in the midst of a time when the centralized big tech companies have accumulated so much wealth and power, because there's been a lack of antitrust enforcement. We have a lot of anti-competitive behaviors that have consolidated wealth and power in these companies, to the point that they have functionally become more powerful than any government in the world. And there is a countervailing movement of antitrust to break that up. You also have this revolutionary force of people using their intelligence and wits: they saw that GameStop stock had been shorted with more shares than even existed, an overleveraged position, and then you have people saying, no, we actually love GameStop, the fundamentals are strong, we're going to fight back. You have this peer-driven revolution of people using the tools of capitalism to attack hedge funds that are using capitalism to destroy things that people actually use in the economy. You have this movement of people revolting. As we move into this year, we're in this deeper context of people reaching the limits of centralization and wanting to move towards decentralized approaches. In all these ways it's like the cyberpunk future, where people are in complete control of their technology, rather than the technology being these surveillance capitalism machines that control the mass population, the worst-case mashup of Big Brother and Brave New World and bad visions of the future. And we're at a time where we're able to speculatively design a flourishing humanity, ways in which the money and resources of the culture, rather than going into defense technology, aim towards the things that will really support a culture that's vibrant and flourishing and inclusive and pluralistic, including lots of different voices.
I think your company is set in this context where the world is ready for something like this to be adopted, empowering people to explore and innovate. Because honestly, there's been a stagnation: the people who already have a lot of money and resources are creating stuff in order to make money, not to express the deep parts of their soul, their trauma, and the stories they want to share. That's the type of experience I see at these film festivals, from the creators and artists who are pushing the limits of the technology because they actually have something to say. The Changing Same is all about getting into the depths of the trauma of systemic racism, and sharing that story of racial trauma in a way that allows you to connect to it. It has this deeper intention of recontextualizing our past and our history, to say: here's this other history; what you've been getting from that settler-colonial mindset that has been abusing people for so long is not actually reflective of my direct experience. And so we're going to use this technology to tap into different stories that have not yet been told. Which is so exciting to me, because it feels like we're at the cusp of this explosion of people discovering these new mediums and the stories they want to tell, with technology like Depthkit enabling them to take whatever is out there on the market and start to tell those stories.

[00:44:48.651] Alexander Porter: I think that was really well said. As I hopefully grow a little bit, I realize, and I'm not a Hegel scholar, that these intense oppositional frameworks I very much had as a young person, thinking about direct rebellion against certain things, are so much stranger than that. If only it were as simple as some kind of fight against the man. As we've gotten to know individuals in the context of venture capital, for example, I've met theorizing minds that are profound, that think deeply about our culture and about these ideas, and that would be right at home in a conversation like this. And frankly, I was to some degree surprised by that. And then also, as we get to know the people inside, for example, the teams working on the Kinect at Microsoft, there are these beautiful, bleeding-heart people who are deeply committed and almost moved to tears when they hear about the application of their tools to tell human stories that touch them personally. And so all of my oppositional angst has, unfortunately, not exactly faded away; it's just become more complex, I'm realizing. And to your point, it's a little bit like punk rock saying, we hate disco. But then if you look at it historically, you're like, yeah, but you're all white. Especially in these intense, melodramatic oppositional structures, what's actually happening is that entire landscapes are completely ignored. And I think we're at risk of that: if we see ourselves as victims, or as outsiders against this intense infrastructure, we're missing the fact that we're actually not outsiders at all.
We're at the bottom of a gigantic, incomprehensible mountain, which is: what does it actually mean for people to have a participatory voice in the way the world is shaped, the way power flows, the way stories are told? And specifically, going back to this idea of the fundamental stories that people encounter as children that go on to shape the world: who's telling those stories? Who's writing those stories? Who's inspiring future generations? So I think it's a wonderful segue to this project we're working on right now, that we just released at Sundance, called The Changing Same, which, as you said, is this speculative story, a kind of Afrofuturist time travel story. It both says that the more things change, the more they stay the same, and that the world, especially America, tries to pretend that evolution is occurring, when actually it's just an evolution of window dressing, and the fundamentals of Jim Crow law and the fundamentals of slavery are, in some fundamental ways, still alive and well. So I think it's important, as we tell myths about evolution, to make sure we're talking about the whole world and whether it's actually growing, even if it feels like it's evolving.

[00:47:42.251] Kent Bye: And James, I don't know if you had any other thing you wanted to add.

[00:47:45.694] James George: No, that was really profound, Alexander. I think there's this acknowledgement of also being insiders, and of what owning democratization and accessibility means. And I appreciate, Kent, when you point to places where people have power against the economy, these seemingly immutable forces, like when you talk about the stock surges happening with AMC and GameStop, interestingly both entertainment distribution companies, which is poignant. I appreciate that power. For myself, as a white man in a position of power at a company, an entrepreneur, confronting and participating in this confrontation of that privilege this summer, and feeling so blessed to have it open up these conversations during the time we were developing The Changing Same, which has been four years in creative development, it provided a public context for the subject matter of The Changing Same to reach and resonate more widely now than prior to that explosion of acknowledgement of racial injustice and Black Lives Matter this summer. It's making me really understand and question, and also be delighted by, the beauty and surprise of how power dynamics can shift: how the privilege of being able to own one's power, acknowledge the privileges and biases I have perpetuated, and also learn and listen, shows that power is not a zero-sum resource, that empowerment is actually collectively gained. By giving someone else power, you're not losing it. And here I'm talking about voice and opportunity. Sharing a voice: who's telling the story? Who's being represented? What is that story, what is that myth? And then also opportunity: who has the platform to participate in the economy and in the creation of that, and make a living? And the ability to honor that privilege and understand that we can change how we're working at Scatter.
And we can be aware of the fact that we have no Black team members, that we have an all-white board, things like that. There's a lot to move forward on there now that that is acknowledged, and now that there's a public conversation around that acknowledgement, which gives resources for making change into the future, a commitment to that, and a way to own some of the shame that I think, as white people, we've been trained to hide, lying to ourselves in a lot of white circles about whether we're racist or not. So all of that is the context for bringing The Changing Same to this platform, this audience at Sundance. And the goal of the project is to also bring it to wider audiences. I'm proud of the team that told that story: collaborating with Rada Film Group, who've dedicated their entire creative careers to telling stories on race and have really put their own lives on display; having Scatter bring the capacity to create virtual reality immersive experiences to that director team; and co-directing it with them through Yasmin, the co-founder of Scatter. And the team we built to make the project was a very diverse team of artists across all levels, from the artists to the leadership. It was a really magical creative process, and I hope that resonates in the work and shows, and I hope it starts to shift things, a sign of things to come where we can be truly committed to our mission, in acknowledgement of what Alexander is saying, that there's a larger mission of democratization and accessibility to be had. We're at the base of a very large mountain, and it's a privilege to be there and see it. So that's my reflection on that.

[00:51:30.387] Kent Bye: Yeah. And maybe to swing back to Yasmin, since she is off doing a lot of the creative side of using the technology to make projects. That's been a big theme of Scatter and Depthkit: you are in the process of creation, making stuff yourself. Maybe just quickly recount how Yasmin came into this collaboration with Depthkit.

[00:51:49.135] James George: Yeah, it's a wonderful story. So to connect a few of the dots here: Alexander and I have worked together for over 10 years now, collaborating initially on what was our RGBD Toolkit as creative collaborators, and then having a design studio together. That was an earlier iteration of Scatter, where we applied Depthkit and other similar technologies in a speculative design capacity. Around early 2016, we got the bug that Depthkit could become a product. As we mentioned earlier, rather than being at the frontier of new technologies perpetually, we could actually be the stewards of this specific aspect, volumetric filmmaking, volumetric capture for creative expression, and be the stewards of bringing it to market, so to speak. That was around the same time that Yasmin entered the picture. She had recently left her position as a creative and technical director at Second Story. Yasmin has a strong entrepreneurial streak. She had formerly founded a startup in a similar vein: she had a story to tell, which was 18 Days in Egypt, and built a technology platform, a bottom-up collaborative storytelling platform, in the context of the Arab Spring, and made a company out of that technology. So Alexander, Yasmin, and I really clicked, because we just got each other from this shared background, and we also had very different sensibilities, approaches, and skills. Yasmin, although she's classically trained in computer science and has technical chops as good as any of us, has chosen the path of being primarily a storyteller, a director, and a creative, so she takes on that capacity.
She joined in 2016 and helped us galvanize the vision for Scatter that Alexander and I were formulating at the time, which was how to be a company that leads with creative storytelling as a catalyst for building products and technology. That has shaped our various roles and responsibilities to this day. It was the same time we made Zero Days VR, our first title as a company and a follow-up to Clouds, which she directed. Then Alexander and Yasmin co-directed Blackout, our second title, which followed very shortly after. And of course, Yasmin is a co-director now of The Changing Same. Having her take the forefront of the creative development process at the company and make demands of the technology really stands for this countervailing force: the story needs to be told, that's primary, and the product needs to support that. Then Alexander works closely with her to build the workflows and technical capacity for that. And for myself, I hold a kind of technical leadership position, handling product requirements and working with the engineering team alongside our other collaborator, Tim Scaffidi, who's a founding team member at Scatter. We've found this beautiful workflow, if you will, these checks and balances, where we can each stand in these relative capacities. And we've learned a lot about working together. It's been bumpy for sure; there's a lot of tension built into these competing priorities. But in the end, especially now, and The Changing Same represents this, we've evolved into this really powerful team, harmoniously working together to bring these projects to bear at the same time that they drive innovation on the technology, which ultimately serves our customers and makes the product better for them. So that's a summary of how Yasmin joined, why she joined, and how we're working as a constellation of a team today.

[00:55:14.743] Kent Bye: Okay. Yeah, that's helpful. Thank you for that. As I release this, I don't want to leave out part of the history; I actually have an interview with Yasmin and Alex from Tribeca talking about Blackout. But maybe we could start to move into where the existing camera market is today. Now we're in this phase where the sensors are everywhere: there's the next iteration of the Kinect camera, there are Intel RealSense cameras, there's the iPhone, there are other LiDAR cameras, and there are digital SLRs, which I think a lot of folks used going back to the first iterations of the RGBD Toolkit, with the depth sensor mounted on top of the SLR camera. If people want to get into this and just start to play around with it, what is your recommendation?

[00:55:53.907] James George: Yeah, I can speak to the product as it stands today. I'll talk about our product, and then I'd actually love to turn it over to Alexander, because he wonderfully occupies the ecosystem around our product and holds us accountable as a company to being meaningful within it. So I'll talk about Depthkit, and then I think Alexander can provide the listeners a lot of touch points for creative tools in this vein that aren't our own product, because we celebrate the whole ecosystem. For Depthkit specifically, the Depthkit Core product is a self-serve creative tool. It's very much built like an Adobe Creative Cloud product: you go to depthkit.tv, you sign up, and you can start using it for free in a feature-limited capacity. You download the software, plug a Microsoft Kinect or an Intel RealSense camera into your PC, and you can start capturing volumetric video immediately. That's Depthkit Pro or Depthkit Record; we have two levels there, and it's a subscription product, so it costs $40 a month for access to unlimited content creation. The trial version, the free version, allows you to do short clips but otherwise has all the functionality, so it lets you learn the workflow and get comfortable before you buy. There are two parts to the product. There's the creative tool I mentioned, desktop software with a user interface for capturing data from the Kinect and transforming it into holograms, representations of people as 3D objects. The second part is a no-code SDK that is a plugin for Unity; we also have support for the web. That allows you to put those holograms in a real-time playback context, so you can embed characters captured from the real world in Unity, as opposed to having to use motion capture, synthetic characters, or stock assets.
That's the fundamental value proposition of Depthkit: put real human performances into interactive spaces. Then one step up from that is a very new and very exciting product line that we've been working on literally for years. Since the beginning, we've had this vision for Depthkit Studio, which is to build on the platform of single-perspective capture and empower our creators to do multi-perspective capture. So right now Depthkit Studio — a single PC that retails for around $8,000 — supports up to 10 depth sensors capturing simultaneously. So you can get a full human body in 360 at an affordable price point when it comes to the hardware, and it's very portable. It follows the same workflow — you capture, and again we have a plugin in Unity — and we're currently selling it privately. People are reaching out to us, and we work very closely with the partners for Depthkit Studio because, frankly, it's very early software. The support overhead for our users to be successful right now requires us to really be in the weeds with them, so we have to roll it out incrementally. But to be clear, The Changing Same is the first time that Depthkit Studio captures have been displayed publicly, and it's really brand new. October was the first time that we ourselves saw it working. We had an early iteration in Blackout — we had full body capture in Blackout back in 2017, which is when we first prototyped the feature set — but it's actually taken us that time to get it to a point where even we ourselves can use it, because the workflow was so thorny that it just wasn't worth doing. To be frank, it just wasn't repeatable. It was a total shoestring situation back then. So we built it up from there. If you want to get started, start with Depthkit — the free version, the evaluation version. You can start making projects immediately using Depthkit Pro.
And then if you have an ambition to create a really full-body capture project — right now we're selling to businesses, creative studios that have commercial intent — get in touch with us and we can equip you there. So I'll turn it over to Alexander. I know you probably have some thoughts on the product as well, but I'd also love to hear about other starting points. You know, Kent, you're talking about phones and other readily available tools for getting into 3D capture.

[00:59:47.629] Alexander Porter: Yeah, I'd love to describe this, because the use of these tools is when it becomes alive for people. And in a sense, it really came to life for us with this project, with The Changing Same. The piece that emerged is actually an experience that approximates conventional cinematic filmmaking writ large, as a whole workflow. And my experience, having worked myself nearly to death on both of these projects, is that what's different about them is that Blackout in a lot of ways had more of a kinship with Clouds, in the sense that it's a generative environment. It's a generative social landscape where you as a viewer are really driving your own narrative experience through the story. It's different every time; the placement of the characters is different, et cetera. It's not a timeline project, it's an emergent social space. And what's new about Zero Days is that it's very much a cinematic project, a very timeline-based project. That for the first time came to life for me, and I'll say it was a profound experience — a really large circle closing for me — of being able to come back to tool sets that allow you to control volumetric assets on a timeline. And I'm going to be diligent about describing just what volumetric actually is. For anyone who's listening who doesn't really know: we are using a hybrid of cameras and depth sensors, and what that enables us to do is take effectively flat images and reconstruct them at the moment of viewing, so that you get the reintegration of a human being that's present in front of you. And it's special because it's still linear. It's still something that can be controlled on a timeline and adjusted on a timeline.
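The reconstruction Porter describes — flat color-plus-depth images reintegrated into a 3D figure at the moment of viewing — boils down, at its core, to back-projecting each depth pixel through a pinhole camera model. This is a minimal sketch, not Depthkit's actual pipeline; the intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up values for illustration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud
    using pinhole camera intrinsics (focal lengths fx, fy and
    principal point cx, cy, all in pixels)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)       # shape (h, w, 3)
    return points[depth > 0]                        # keep valid samples only

# A lone sample at the principal point maps straight down the optical axis:
depth = np.zeros((4, 4))
depth[2, 2] = 1.5                                   # 1.5 m away
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

A real pipeline would also sample the registered RGB image to color each point and fuse clouds from multiple calibrated sensors into one surface.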
And what was interesting — and I'm going to use The Changing Same as an example to describe this — is that Joe and Michele generated a script out of the creative world of the project, and we were able to deconstruct that script into meaningfully different shots and start to think about those shots in terms of the angles they'd be viewed from by the audience. Some of them are very far away, some of them are very near. And one of the things I found interesting is that in the world of volumetric capture, people bias quickly toward the more-is-more philosophy, which is just more sensors, more views, more angles. Constrained in part by a relatively tight timeline, we actually had to be really economical. So what we did is oftentimes we used slightly fewer sensors. While it is — and I will say this — a miracle that we can record 10 Azure Kinects at the same time on one single computer, a technological miracle, we actually didn't in many cases use that full capacity. We focused on the 180-degree face, or sometimes a little bit more, almost like 270, with a subset of those sensors focused on what the viewer is going to be seeing. So there's a really interesting experience of having to deconstruct a script, break it into shots, and then create top-down spatial layouts — very much like cinematography, where you plan your shots, but we're planning in terms of the viewing frustum and the viewing angles for the viewer, and then planning our shots so that we're filming the right facet of a given person based on the final layout of the scene. It was very much a really nuts spatial puzzle-solving experience. I think of it like theatrical blocking, like planning a theater piece, and then orchestrating the ways that everyone is looking at each other and the way they're moving in that space such that they don't collide.
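The sensor economy Porter describes — covering only the 180° (or 270°) face the viewer will actually see with a subset of the ring — can be sketched as a simple angular filter. This is illustrative bookkeeping only, not how Depthkit Studio actually plans a shoot; the function name and layout are hypothetical:

```python
def sensors_facing(viewer_deg, sensor_angles_deg, max_offset_deg=90.0):
    """Keep only the sensors whose position on the ring (in degrees
    around the subject) lies within max_offset_deg of the expected
    viewer angle."""
    def angular_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return [a for a in sensor_angles_deg
            if angular_diff(a, viewer_deg) <= max_offset_deg]

# Ten sensors spread evenly around the subject, viewer expected head-on at 0°:
ring = [i * 36.0 for i in range(10)]   # 0°, 36°, ..., 324°
subset = sensors_facing(0.0, ring)     # only the face turned toward the viewer
```

With these numbers, five of the ten sensors survive the filter — roughly the half of the ring facing the planned viewing frustum.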
And then breaking them out again, recording them independently from the appropriate viewing angle with a subset of the sensors. So for us, we're using cinematic cameras and we're using Azure Kinects in a studio. We did have a real breakthrough in terms of automatic background subtraction, but we were using a green screen cube, shooting into that cube, and turning lights up and down so that they're diegetically accurate to the actual end scene. And then we're recording the performances. It ended up being very akin to what you see with virtual production these days, where you composite a person into their final destination. So when we were previewing the recordings, we were compositing the person into the final destination — in terms of what the backplate and the context would look like in Unity — and then filming their performance on top of that. But they're in an empty space, right? Or if there's a chair, they're just sitting on a chair. So we did that. And then if they're performing opposite someone, they have to perform in synchronization with the audio that was created by that person prior. The actors should really be celebrated for being able to deliver such powerful performances in that experience of sensory isolation. So they're performing kind of opposite each other. And then we did our post-production workflow, segmenting them out and optimizing the assets. And then again in Unity, we're able to both place those people back into the spaces and then scrub them and move them on Unity's timeline. And there's a wonderful evolution here, which is that Unity, the game engine, has evolved a Timeline feature that allows you to create very complex and yet still linear sequences — tools that evolved out of a culture of creating elaborate cut scenes in video games.
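The green-screen keying mentioned here (before the automatic background subtraction landed) reduces, at its crudest, to a per-pixel green-dominance test. A toy sketch — real keyers work in better color spaces and handle spill and soft edges — with an arbitrary threshold:

```python
import numpy as np

def green_screen_mask(rgb, dominance=40):
    """Mark a pixel as background when its green channel exceeds both
    red and blue by `dominance` (8-bit scale). True = background."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g - np.maximum(r, b)) > dominance

# One green-screen pixel and one skin-tone-ish pixel:
frame = np.array([[[20, 200, 30], [180, 170, 160]]], dtype=np.uint8)
mask = green_screen_mask(frame)
```

Note the cast to `int` before subtracting: on raw `uint8` data the difference would wrap around instead of going negative.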
And so we were able to take these people and scrub their performances back and forth in time and also adjust their placement. It's a very special and odd and surreal pseudo-cinematic experience of reconstituting these social environments and then scrubbing the performances back and forth so that they line up, getting the delivery just right. So for example, Lamar Wheaton, who's our guide through the story, has been wrongfully imprisoned and he is extremely angry. We have to get his delivery right, so we can scrub him a little bit earlier so that he's delivering over the back of someone else's lines to convey that experience of frustration and anxiety and anger. Those kinds of nuances of a half second here or there, which actually create an entire narrative space, are suddenly possible with the combination of a timeline and these spatial tools. It's very difficult to describe, and I'm very excited for other people to really have this experience of a cinematic interface with spatial performances. It's very interesting. I literally did a two-dimensional edit of the whole project — of these composited pieces, getting all the timing right — in Premiere. Then I did basically another edit in the situational context. And it's very interesting that you discover things that are so different in situ. For example, eye lines appear, some of which are accidental and some of which are intentional. There were moments where we actually discovered unintentional and accidental eye lines and moments of reaction, which were coincidental but made the piece a lot better. So we were actually swapping out assets right at the end, which created the stuff that I love — having some of the project emerge in the edit was actually possible in a sort of spatial editing context. It's difficult to describe and very profound. So that's what I wanted to lay out, a little bit of a taste of what this experience is like. It's very special.
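The half-second nudges Porter describes — sliding one recorded performance earlier so it steps on the tail of another's line — amount to offsetting clip start times on a master timeline. A minimal model of that operation (a sketch, not Unity's Timeline API):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    start: float      # seconds on the master timeline
    duration: float

    @property
    def end(self) -> float:
        return self.start + self.duration

def overlap_onto(clip: Clip, previous: Clip, lead: float = 0.5) -> Clip:
    """Slide `clip` so it begins `lead` seconds before `previous` ends,
    letting one delivery step on the tail of the other's line."""
    clip.start = previous.end - lead
    return clip

a = Clip("line A", start=0.0, duration=4.0)
b = overlap_onto(Clip("line B", start=5.0, duration=3.0), a, lead=0.5)
```

Because the volumetric clip is a baked linear asset, only its placement moves; the performance inside it is untouched, which is exactly why these offsets are safe to make in the edit.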

[01:06:32.595] Kent Bye: Yeah, at the Global Game Jam of 2015, I was using Faceshift to do a piece, capturing people's faces. And the editing was a bit of a nightmare, because you have to record each performance individually and then create a cohesive experience. And then to do all the blend shape editing, and the ways that you could either extend or contract things — there end up being these hard edits a lot of times, where you see a new sequence start. It's essentially a single-take context where you'd want to see everybody give their performances playing off each other, but you have to record them one at a time and then reconstruct the illusion that it's a single take. I think the tools around that, and having to edit individual performances, is a huge pain point that I'm sure will give you lots of work for a long time to really get down. But the eye line thing is actually something that James and I talked about, I think at SIGGRAPH 2019, and there has been movement: at Tribeca 2020 there was a piece by Jeremy Bailenson — I think he was using the Microsoft Mixed Reality Capture. As you're in that experience with Jeremy Bailenson in his Stanford Virtual Human Interaction Lab, you can move your head around and he actually tracks you as this volumetric capture. So you have this blend of taking what's essentially a rigged avatar with all of the spatial capture projected on top of it, so that you can start to do dynamic and reactive eye lines — as you move your head around as a viewer, the actor is looking straight at you. There's probably a lot of work still yet to be done there, and generally in volumetric capture it's probably one of the biggest pain points: really feeling like this is a virtual human, and not an uncanny capture of something that happened before.
But those dynamic interactions are so key to allowing you to suspend your disbelief that this is a person who's there, rather than something that feels like a synthetic hologram with these uncanny qualities — which are going to be with us for a long time within VR, until we figure out a lot of either technology or AGI or whatever else is going to really sell it. There are probably a lot of things like saccades: the natural ways in which people look at each other, look away, and then look straight at you when they're trying to make a point. Or sometimes when you're an actor acting into the void, without an actual context you're acting against, you kind of have that dead, looking-straight-into-the-camera feel. So those are a lot of the things I noticed — at least that there weren't any dynamic eye lines yet, though that could be something that I know is technologically feasible now. Whether it's easy to pull off is another question, but if you had it, it might make it easier to make it seem like the people within these scenes are actually looking at each other, because sometimes their eye lines are a little off. That kind of breaks my social presence, where I would expect them to be looking in a very specific direction, and that could carry whole other layers of meaning. You want to get it close enough — you're obviously not going to have people look in completely different directions than they were actually looking, because that would contort their bodies in weird ways — but finding a good middle ground where you have a little more nuanced control of those eye lines. So I don't know if that's something that's on your technological roadmap.

[01:09:36.580] Alexander Porter: I'll speak maybe first to this consideration inside the project, and then we can talk about the future of it. Again, I try to be very vigilant about not assuming that things are needed before I've actually validated them experientially. However, this idea of dynamic eye lines, or dynamic rigging — it's almost like two layers of character control, which is feasible in different pockets of our industry: specifically, taking a performance which was baked and then morphing that performance, maybe having someone's head turn a little bit more so that they're looking at you or looking at each other. These are capabilities that are emergent, and sometimes, I'll be honest, it's ghastly — to see that happen, and have it happen wrong, is a very sensitive balance. The more we do this work and the more our tools gain fidelity, the more respect I have for the incredible perceptual nuance that human beings have when they perceive other humans, especially in social contexts. There are just layers upon layers upon layers. In the project, we did have to deal with this quite a lot, because initially we did one performance and we had this kind of dead-stare effect, where we had guided the actor to just kind of look over there. And what we realized is we needed them to actually look at a specific place and a specific person, at the appropriate distance. So a lot of my work — it was a little bit of a surprise — was actually about maintaining a 3D model of the virtual space that we were shooting for.
And then we had to place a person there — or, in an era of COVID, a mannequin that we'd actually put there, because we were trying to be very conscientious about the space. So literally I would have to get the direction correct and also the distance correct, because me looking at a screen is weirdly very different from me looking out a window, or looking at 20 feet or 50 feet. You can somehow see it — it's eye convergence, and also just affect. Weirdly, you can tell. So initially the direction was "look at the camera," and then it became "look through the camera, about 15 feet beyond," which is where the actual character you're going to be addressing is situated. So that was one thing we had to manage. The way I think about this piece is that it feels a little bit more like a theatrical context: when I go to see a play, I accept that there's a group of human beings who are adopting characteristics and a performance style which is maybe slightly larger than life, and also that they are ignoring me and 50 other people sitting there in the dark watching them. I think this piece feels a little bit in that heightened theatrical space, and I think it's suitable because it's a magical piece; it's a piece that adopts magical logic. Even so, really getting these eye lines and these deliveries correct, and getting them spatially correct, is something I'm really sensitive to. The thing I was most surprised by was what I was describing about perceiving distance in a performance. And then the second thing was that, interestingly, I felt the dimension of time was actually far more critical to getting a sense that two people are actually in conversational dialogue.
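The "look through the camera, about 15 feet beyond" direction can be made precise by extending the actor-to-camera ray past the camera by the partner's distance, so the actor's eye convergence lands where the virtual character will stand. A small vector sketch, assuming positions measured in feet (this is an illustration of the geometry, not a tool they describe using):

```python
import math

def gaze_target(actor, camera, beyond_ft=15.0):
    """Extend the actor-to-camera ray `beyond_ft` feet past the camera,
    so eye convergence lands where the scene partner will stand."""
    d = [c - a for a, c in zip(actor, camera)]       # actor -> camera vector
    dist = math.sqrt(sum(x * x for x in d))          # actor-camera distance
    scale = (dist + beyond_ft) / dist                # stretch past the lens
    return [a + x * scale for a, x in zip(actor, d)]

# Actor at the origin, camera 5 feet straight ahead:
target = gaze_target([0.0, 0.0, 0.0], [0.0, 5.0, 0.0], beyond_ft=15.0)
```

The actor then fixates 20 feet out along the same line of sight, rather than converging on the lens itself.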
I don't think we totally perfected it, but I think perhaps more important than looking in the right direction, or even having the right kind of physical mannerisms in response, is the odd lag that occurs between when you say something, and then I think about what to say in response, and then I deliver my response. Getting that right — I was surprised by this, but that is actually where we had to focus quite a lot, because that was the first place it broke. It was interestingly robotic when the timing was, quote unquote, correct to the script, where you're directing the actors to avoid stepping on each other's lines, which is an issue for sound quality. It means they wait kindly for the other person to finish, and then they deliver their line, and then the other person kindly waits for them to finish. And in a natural conversation, especially a heated one, that's just not natural. So all that to say: I thought that the things you're describing would be my primary concern with this project, when actually the temporal dimension was my biggest issue in trying to deliver an honest recreation of the script. James, over to you — I don't know if you knew all that.

[01:13:40.099] James George: I'm learning about this project myself — it was done in such a total breakneck way that I'm learning about the production now. We really became siloed just because there was so much to be done; we really let each other run loose. That's fascinating to hear, Alexander. It totally worked. Kent, I'll touch on this idea of Depthkit and the product roadmap around some of this — we call it retargeting; there are several different terms of art. And it's true — just to reference where it exists today, I know Microsoft's Mixed Reality Capture has some capacity for this, and I think for Arcturus this is one of the main value propositions of their HoloSuite product: the ability to do some of this retargeting and cleanup. So it's definitely a need in the storytelling vernacular to be able to blend these two different paradigms. One is the paradigm of the static capture, truly authentic to the performance as it happened in the moment — the pure volumetric capture. On the other polarity is the traditional digital avatar puppet, which is a rigged, skinned mesh that maybe is derived from a scan but is actually a totally synthetic object, where the performance may or may not have any bearing on reality as it occurred. And I think there's this middle ground that we'll find ourselves in, in the trade-offs between those two. Again, back to our other conversations: there's a thesis and antithesis and a synthesis. I love that. There's no one right answer. Similar to what Alexander is saying, I think conceptually and experientially I agree with you that these are needs — it's a problem to be solved.
And I'm so humbled also, as a person working on this product, by what it takes to even empower a creator to make what we've made — and that's being the closest to the tool, the most embedded in it, having privileged access to the actual engineering team to get the inside knowledge, or to actually drive the roadmap for the project. So we're really focused right now on simpler things, like usability in terms of calibration, and tutorialization about how to even put something on a timeline. What Alexander explained — that's not obvious. It's a black art right now that we've only learned through the pressure of production. So how do we communicate that? How do we make the product communicate on our behalf through design? I really look forward to the day when we're privileged to be able to start adding this level of feature set to Depthkit. We also have the ability to collaborate with some of these ecosystem partners to bring that capacity — we're very much in a collaborative spirit; we don't need to invent everything, whatever's best for the users. But yeah, right now, where the Depthkit product team's head is at is: how do we ensure that our customers are succeeding in the wild, to even get to the point where The Changing Same has gotten? And how do we increase the success rate against the challenge of the tool around just the fundamentals?

[01:16:21.209] Kent Bye: Yeah, I think that makes sense. And going back to Simon Wardley's model, where he has those four phases — there's the genesis of the idea, just the process of doing volumetric capture at all, proving that you could do it. And then both Microsoft Mixed Reality Capture as well as Depthkit, I think, still exist in this custom, bespoke, handcrafted phase of enterprise applications. In this case, it's a lot of filmmakers and creators who are familiar with Unity as a place to capture a high-quality version of this, dealing with the early phases of the technology, where there are always bugs and things you have to figure out. So you're optimizing that DIY volumetric capture for users who are actually in the field doing it, but it's still not at the mass-consumer scale of, say, the iPhone or the iPad as a tool set. And I'm not sure if it's on the technological roadmap, or if you're already starting to do it — say, an iOS app that does all of this natively within a self-contained iPad Pro with a LiDAR scanner, everything fully integrated into one system. I could see how the stuff you're developing now, in the context of Unity and these higher-end custom handcrafted bespoke implementations, is going to inform that evolution into a consumer product. But I don't know if that's something you've already started to develop, or if you really want to nail the existing creator community that is a little more specialized in its knowledge. Because that ask would be crossing the chasm and going into the mainstream — and is the market ready to support that? I think that's the big question in terms of the generalized literacy for what it means to create spatialized stories. Something like that would be amazing, if I could download an application on my iPad Pro and start making. But technologically, figuring all of that out is not a trivial problem to solve.

[01:18:09.808] James George: Yes, certainly. Well, just to acknowledge what you're saying, I think that Alexander and I both agree that the kind of big-PC, USB-cables-all-over approach has a shelf life. It's not, I think, a sustainable future. There probably will always be these massive rigs and professional production systems for volumetric capture, but as a company, as we've established in this conversation, we're more interested in our valences towards democratization and ubiquity. We're obviously not interested in building our own hardware; we want to partner with folks. So the idea of the Kinect versus the iPhone is a constant conversation at Scatter. And I think the drivers of technological progress for volumetric capture are two major things. As you mentioned, mobile capture: the proliferation of 3D sensors and spatial capture awareness — in addition to spatial consumption awareness, like with mobile AR — on mobile devices will continue to grow and become more robust, high fidelity, and ubiquitous. And then similarly, the capacity for synthesis. We were talking about retargeting and that kind of thing — changing a capture after it's been done — but there's also a whole field now where you don't need to encircle somebody entirely in cameras, as Microsoft does and as we do today. Using machine learning and modeling, we can start to actually synthesize the human form and infill, and by doing so drive the complexity of content creation down significantly — to the point where there's some really great research already where a single standard RGB camera can create a fully fledged human volumetric capture. Of course, there are lots of challenges with the assumptions that go into the synthesis, which is a whole world.
But those are the predominant forces moving this technique forward. And so as a company building products, we ask when is the time — it's really when, not if, for us. And right now there's this polarity again between the accessibility of capture — the best 3D scanner is the one you already have, you could say; I have it on my phone, other people have it, it's available — and the constraints of mobile capture: the fact that it's not a computer with an external peripheral. Those devices don't capture at the quality we get when we go through the effort of plugging these prosumer peripheral devices into a highly powered graphics PC. And right now, what we've established is that the primary need from our customers, and really for ourselves, is to increase the quality of the capture within the assumptions we have right now, which is software only, using retail hardware. In the move into mobile, we would find ourselves at the bottom of a new mountain — not dissimilar to where we were when the initial Kinect came out — really playing into the lo-fi, glitchy kind of thing, and then figuring out how to go from there, adding synthesis to bring it back towards photorealism. That's going to be the path of mobile, and we're really interested in it. We have built iOS prototypes of Depthkit that we've used internally and experimentally to try to understand where it sits. And we play with all the other 3D apps out there — there are tons of apps on the App Store for 3D capture, both static and moving, that are really fascinating and fun, and that very much remind me of early Depthkit vibes. But right now, I think as a company we're staying humble in this: let's get this down with what we have, so that it can actually reach audiences — enable creators, and then reach audiences.
We have to balance when we move into that — when do we put this desktop capture scenario aside and start to build towards it. And I think the major questions there are: who's the user, what's the use case, what's the business model? There's a lot of variance within that. There won't just be one mobile capture app; there's going to be a whole plethora, with different assumptions and different users in mind. And we also have to find our own North Star within that ecosystem as we move into it.

[01:22:04.806] Alexander Porter: Hmm. I would say also that, for me at least, it appears that a project like this is driven by the technology — it appears that it's driven by volumetric capture — but it's actually a crazy symphony of a bunch of different disciplines, and they have to mature, and we actually have to develop what those workflows need to be before they can be safely, quote unquote, baked down into enablement of that kind. Maybe as an example: what makes a project like this possible is obviously an incredible team of people. We have Corey Allen, whose prior work is as a DP, and he works in professional sound and television shows. He's gone through a three- or four-year journey of learning how to translate those skills into being a volumetric capture specialist, so we have him operating the, quote unquote, camera. Then we have Michele Graffietti, who comes from a background in two-dimensional graphic design — he's our designer and product designer. This was his first project using Blender, so he's learning Blender and using it to make the titles, and he has to go through that journey, right? Then we have Joris de Angana, who comes from a video game design context, and he did all the lighting and all the beautiful fog and things, and he actually had to learn how this way of creating is different from conventional AAA video games. Elliot Mitchell, who is kind of our lead technologist, had to deal with a mountain of source control concerns — these are the banal realities of this stuff. He's integrating all these artists and the ways they're working and contributing to the project in a way that doesn't blow it up, and doing just amazing technical work.
And then we have Michael Allison, who's also on our product team, an engineer who comes from a background in 3D visuals and graphics and interactive installations. He's making all of the fireflies and things like that, and he has built a lot of the shaders and implementations, and now he's having to really use them for the first time in the game engine, discovering challenges with video players and all of this stuff. Then we have Tim Scaffidi, who's our head of technology at Scatter, and he is building new workflows for automatic background segmentation that were immediately and suddenly needed in this project, because there's so much material and we couldn't pull keys. We have Maria Barreau, who's also on our team, and we had to basically evolve new calibration approaches to make it possible. Every one of these disciplines, and these people coming from these different places, had to adapt the ways they naturally work and go through new phases in order for us to be able to look back at the project and say, oh, okay, next time we actually know how this should be done. In my experience, there's no substitute for this. And when we've consolidated that evolved knowledge, only then can you make something that's really simple and humble. I literally dream about this prototype of a truly mobile volumetric capture and creation technology, but I know that there's no way around this challenge — you have to go through it. And all of these disciplines are going to have to get properly leveraged in order for us to know what those choices are, and what is that one value we need to expose that's really meaningful.

[01:25:05.370] Kent Bye: Yeah, the way I conceive of this is that there are different levels of presence, and there are different design disciplines that are optimized for the center of gravity of each kind of presence. All of these qualities of presence are happening at the same time, but there's this interdisciplinary fusion happening within VR. So as an example, you have active presence — a sense of agency and interactivity — and that's the game design discipline. You have mental and social presence, and so all the ways in which you have social media on top of knowledge capture, the way that's represented in our websites and user interactions, the user experience of these website contexts, and the fusion of all the WIMP interfaces that we have. But then there's emotional presence, which is cinematic storytelling — combining music and lighting, building and releasing tension, these consonance and dissonance cycles of storytelling, passively received by the user, where none of their agency changes the time-based medium of how the story unfolds and the director has control over that time-based medium. That's the sense of emotional immersion you can get from a film — cinematic design and storytelling. On top of that there's this new embodiment: body presence, architecture, industrial design, ergonomics, contemplative spiritual practices of what it means to be present — all the things people have to cultivate with the sense of their body, their avatar representation, their virtual body ownership illusion, their sense of place presence of being in a place with that environment around them. So you're building in architecture and theater. So yeah, you're basically combining all of these design disciplines and saying, here's the one medium we're all going to work in — what does each of the puzzle pieces do?
And what is the master experiential design framework for immersive media that can functionally take in all these different perspectives? What I hear you saying is that you have to actually make these pieces with all these different designers from these different disciplines coming into a workflow that ties it all together, and then have a design framework around that, which then informs what the app would look like for a mobile app. But we're not there yet; we're still integrating all these different design disciplines, and we're still doing these first iterations of what's even possible and what the affordances of the medium are when you do combine all those things together.

[01:27:20.340] James George: I believe that's right. That's all the things. Kent, you nailed it. That's amazing.

[01:27:25.005] Alexander Porter: I think there's also a... I'm mortified because I've forgotten so many team members. Like, I forgot the entire sound and design teams, and more. Uh, yeah, yeah.

[01:27:35.875] James George: Lily Fang made all the environments. Yep. Yeah. Okay. Uh, there's also, I just want to point out that there's a tension too. There's a divergence in the paradigm that we're borrowing from, because there's Herzog and there's Malick, and then there's TikTok and everybody's making movies, you know? So the question is, what is volumetric filmmaking? And whose volumetric filmmaking are we talking about? Because I think this idea of proliferation is that there will not just be one. There are going to be teams like what Alexander described, what Scatter is today, who are a studio, you know, an interdisciplinary group of people who are coordinating in their specialties to build these epic things, like you see with film production or AAA game production. And then you also have this more bite-sized, emotionally powerful, tantalizing content that you find with, like, you know, Instagram Stories or TikTok, that also could be volumetric. You mentioned holographic TikToks and things like that. So I think that the question really becomes about vision and where we go as a company to build within the landscape of, I think, again, inevitably these things will all exist, because humans will express themselves in all capacities. So it's like, where do we think the most... we have to come back to this kind of values-driven approach that can guide us to ask these questions of, you know, which one of these expressive mediums, even the idea of making creative expressive mediums, is coming from a sense of our core mission and values. Which one will allow us to perpetuate towards the realization of something that is accessible and democratized, to revisit that? So I agree with what you're saying, Alexander, that, you know, we do these elaborate works and then we figure out how to synthesize all these disciplines into an easy-to-use tool that has to have elements of all of them.
But then there's also the question of what is the form of the content, and who's making it, and at what time will they be enabled, and will those stories have impact and be powerful in the way that we believe in? I think all those questions are pervasive for me, and I don't think we have, and I don't think this is problematic, we don't yet have clarity on that thesis. It's not yet clear how 3D spatial capture content creation and kind of a UGC-like platform for dissemination, like you see with the image-based social apps, how that will manifest and how we'll make it a place that, I think, you know, again, is humanizing and tries to be a countervailing force against the rolling up of attention that you see with Facebook and the ownership of identity that has the problems that we've seen manifest over the last five years, you know, in a pointed way. So all of those questions are questions we deal with.

[01:30:13.858] Kent Bye: Yeah, and when you have a decentralized approach where you have complete ownership, it's sort of the equivalent of running your own website. People could either be on one of these big platforms that already has the network effects of the audience there, on YouTube or Twitter or TikTok or Snapchat or Instagram, all these platforms where people are allowing themselves to maybe have this ability to experiment, but they don't have any ownership over that. And so I feel like there's a real need for this cyberpunk future that we talked about, where people really are in complete control over this, and to be at the level where people are willing to pay to get a certain level of service, rather than be on a service from one of these big major tech corps where you're paying with your data and you're paying with the ownership of your identity, or you're paying by not having that control. So again, it goes back to that dialectic between the closed walled garden versus the decentralized open ecosystems, and we need to have people like yourself and Scatter developing these alternatives, because we don't want to live in a world where only three or four major tech corporations own all this stuff and control both the content creation and the distribution of it. It's just not a good, healthy, vibrant, resilient ecosystem when you just have, like, five people controlling everything. And I think of the type of people who are like the hashtag IndieWeb: I'm going to have my own website, I'm not going to use this corporate-hosted thing, I'm going to do it myself. And I see the beginnings of that movement now. I mean, it's been there, but there's a pendulum that's swinging, maybe in the context of, say, censorship on the right, with people hosting their own servers and everything else.
But there's a sort of political context there, and the dialectical process underlying that is centralization versus decentralization. Where I see Scatter fitting in is in creating a healthy, vibrant ecosystem of toolsets that those creators are able to plug into. And hopefully, five or ten years from now, we're in a space where people just walk in and they don't have to think about the toolset. They just go through their design process and the tools are kind of invisible, where it just enables their imagination to be manifest. And I think that's where we all want to get to, but there are a lot of ways in which you have to take it step by step. So maybe as we start to wrap up here, you could give a vision for what you want to experience in this imaginal, speculative design future that you've been dreaming of for many years. What would that feel like, to be this fully empowered immersive storyteller who is using tools that are allowing them to fuse all these different design disciplines together, and to be able to express and honestly experiment and push the limits of what the affordances of the medium are? Yeah, I'm just asking if you'd be willing to step into that speculative design future. Like, what does that feel like? What does that look like?

[01:32:56.489] James George: Okay, so where do we imagine, where do I... I'll just say I'll speak for myself. Where do I imagine this going, in a bright future of success? What really inspires me is to think about how we can create, we at Scatter as a team, can create a space, a place for people the world over to represent their story and their lives within a 3D imaginative space that is anchored in their real life. Themselves as they are, are present. And with that comes their authenticity, their dignity, the knowledge of their attention, the knowledge of their presence, because that's what our face represents. That's how you know, Kent, that I am paying attention to you right now, as you're looking at the one and only me paying attention to you and having this conversation. I'm present with you. And that coming into a virtual space, along with the belongings and the space and the artifacts and the pets and the things that make our environments personal and special and unique, can be translated into a place where those become the building blocks of a creative space for me and for a community of creators the world over to represent and reflect their lives, a space where we can then invite people over or visit others without the separation of geography, distance, culture, things like that that get in the way, that create these barriers, that create these bubbles, that create some of the biases and separation and misunderstandings in the world, and experience things together, you know, spend time with one another in a way that is as visceral and human as being with someone in person, as well as go on magical adventures together and share the stories that they're building, like The Changing Same, within this space, like going to a theater performance or a movie with someone.
And I think enabling that, and enabling it so that it's a level playing field of creation and storytelling, would be really powerful for me to experience, because I could learn things about people the world over that I didn't know, and see how people express themselves and represent themselves, and be informed, and I would have a more unified relationship to humanity. And I think there's some real social impact that could have, if it's done in a positive way. It would reduce air travel, which would help the environment. That's kind of just a practical outcome of something like that, that not needing to travel and, you know, fostering understanding across cultures is a goal and a dream.

[01:35:23.413] Alexander Porter: Nice. I find it a really complicated question. I often find it difficult to describe really complex ideas and experiences with words, and words are the thing that we're supposed to be using in order to convey ideas and experiences. So when I think about it, the question I'm trying to answer is: why do I do this work, and what am I trying to build for myself and others? A topic that I'm really passionate about is discussions around diversity that have to do with psychology, diversity of psychology and kind of psychic experiences. I was raised in a context touched by concerns around mental health, and also a very magical and really surreal context. So I was raised in an environment where ideas and stories and myths that aren't strictly considered to be grounded and real were very much part of my upbringing. And as a result, I have a whole psychic landscape that I've found to be really beautiful, that I want to be able to share, and I often struggle to do that without shared reference points with other people, or shared stories or shared myths. And you need a really high-bandwidth format, a really high-bandwidth method, to convey that kind of embodied experience. It's not just at the layer of language, it's not just at the layer of images; it actually has to do with body experiences, experiences of grandeur or feeling diminutive, things like that that are really difficult to communicate in a way that people encounter and really experience. And so the dream for me is to be able to take this landscape that I personally have and I love and also struggle with, and to be able to create stories that give people some degree of an expression of that. And as I grow, I realize that it's not just a challenge; it's beautiful, and I want to share it. This is my little microscopic experience of wanting to tell stories and convey this inner experience. And that happens to be very common.
I think that's fundamental, and I believe it's human. So I want that for myself, and I want that for a large, very diverse community of people who are not necessarily the sort of normative, typical standard of people and experiences. And so I want to create a really sophisticated landscape of storytelling that is frankly weird, as weird as people truly are inside, and complex and sophisticated, beautiful, odd, surreal. And that's difficult, I think. I find it difficult in a lot of conventional media, whether it's technically difficult, sometimes it takes a lot of visual effects, you know, that kind of thing. The only format that I'm aware of that's actually really adept at this is novels, the written word. There you have a real economy where you can express real grandeur and really surreal, special worlds, because we've diffused the technology of writing quite well by now. So when I think about the future of this, I want that kind of magic and that kind of communicative power to be available to people. And I definitely think about it in terms of myself, my own personal expression, and I hope to have that be a shared experience. So that's very much what I think about. And in those worlds, the idea is that it's not just about creation and dissemination, but it also can be co-creation, co-presence, and you can actually have an experience of the whole process, which is social, it's profound, it's interesting, it's magical. So, quote unquote, narratively, your experience of building a piece in collaboration with peers and colleagues and friends and collaborators is a profound and narrative experience, just as sharing it is, and just as being in a space with someone else as they're encountering the thing you've created. That's a hugely profound piece of interactive work. And the idea is that all those things can actually be fused, and natural and beautiful and dignified, and not, like, immensely draining or challenging or technically hard or distracting.
So a world in which you can have that whole range of the experience of creation and sharing that's fluid and that's profound. So I did my best. So that's what I'd like to create when I think about the future of these tools and these platforms.

[01:39:14.948] Kent Bye: Yeah. I kind of snuck in the ultimate potential question and rephrased it in a different way. Nice. And, uh, is there anything else that's, uh, left unsaid that you'd like to say to the immersive community?

[01:39:29.477] Alexander Porter: No, except I'd just like to say I really appreciate you, Kent, and the work that you're doing. I think you have this sort of Homeric kind of bard-like quality in the context of all these spaces that we typically go to. And history is not a thing that just emerges. It's a thing that has to be built and carefully gathered and analyzed and reanalyzed. And I see that you're doing that work. And it's a huge labor to do that. And I just want to say I really see that you're doing that. And it's a huge service. And I'm just grateful to you for doing that work.

[01:40:00.729] James George: Thank you, Kent. Agree. Plus one.

[01:40:04.331] Kent Bye: Yeah. For me, it's been quite a journey. And, uh, as you were talking about the avant-garde, I feel like my own artistic practice is the use of oral history and podcasting. I'm trying to get to these deeper philosophical questions that are driving me, and at the same time capturing all this data, and this vision that I had of a collaborative interactive documentary that I was working on, the Echo Chamber Project. When I saw Clouds, I was like, wow, they actually built that. And so my origin point into this industry is marked with pieces that you had shown at Sundance, and I was just deeply inspired by that. And so, you know, the work that you've been doing for over 10 years now, with the collaborations and your own creative explorations in the future of volumetric capture at an accessible DIY scale. And I've seen how that has been within the broader context of the art and the artists and the creators, and that it's opened up all of these experiences that likely wouldn't exist at all without having that accessibility in a way that is affordable and usable. And it's such an early phase that this is a great opportunity for people to come in and start to play around and experiment and actually push forward the medium in a way that needs to be experimented with. There's a tension between consensus and stagnation versus the avant-garde disruption that is trying to really be creative. And in that creative experimentation, it's risky, and you don't know what's going to be able to be monetized, or what's going to work and what's not going to work. And you have to be on the bleeding edge of all the technology, and nothing ever works, and you have to deal with all these bugs and get all the support. And so the cultivation of communities that are able to support that type of environment, to be able to do that type of experimentation, is vital for the evolution of the medium itself.
We can't just have the only people that can create stuff be the ones with access to million-dollar budgets. You need to have a way for the people on the street to find their own uses of things, as you referenced that William Gibson quote, James. So yeah, I'm just deeply inspired to see each of your journeys into XR, but also just the ability to have conversations like this, to be able to challenge my own assumptions about my models and to adapt them and change them. And I'm going to go back and be like, all right, this didn't just come out of nowhere. This is coming out of a deeper dialectical process that goes back into deeper layers of culture and money and economy and the military and defense and domination settler-colonial mindsets. I mean, you sort of go down to the core, but to me, I see the countervailing force to all that as storytelling and imagination, and the ways in which you can empower people to have more of a protopian vision about the world that they want to create. And I really see that these virtual spaces that are being created are going to start to allow us to step into these worlds that don't quite exist yet in the broader culture, but the fact that they're able to be captured within a story means they do exist. And now they're able to express your imagination to its full bandwidth, its full capacity, to be able to inspire a whole generation of other people to find other ways that they can translate that culture into their everyday lives. Anyway, it was a big download here of all this stuff. And I'm so glad to have had a chance to dive deep into all this stuff and to trace the evolution of Scatter and Depthkit and all the work that each of you have been doing. So thank you so much for joining me here on the podcast.

[01:43:13.014] James George: Thank you, Kent. Perfectly summarized. Yeah. Thank you.

[01:43:17.095] Kent Bye: So that was James George. He's a co-founder and CEO of Scatter, as well as Alexander Porter, who's a co-founder at Scatter as well as a filmmaker and the steward of the workflows. So I have a number of different takeaways about this interview. First of all, I'm just really super inspired by the work that Scatter has been doing, because they're really doing this speculative design where they imagined a future where this volumetric capture technology existed. James had been working with the Kinect camera, and they had created this open source toolkit called the RGBD Toolkit. Eventually, they needed to spin that up into an actual product and a company to be able to have the resources to really focus on the needs of serving a niche market of these volumetric filmmakers, who are in the process of trying to fuse together all these different design disciplines. Eventually, they want to have something that's as easy as an app on your iPhone. Right now, a lot of these workflows are very customized. They use Unity. It's that kind of handcrafted, bespoke system, where a lot of the emphasis right now is on trying to increase the level of fidelity that the volumetric capture scans have. And they have Depthkit Studio, which combines up to eight depth sensor cameras all connected to a single computer. So this for me was a really fun conversation, just because both Alexander and James have been on the forefront of helping to evolve the medium itself. I mean, part of the big practice of Scatter has been to work on these projects that are using the technology that they're creating. So they're able to express things that would not be able to be expressed without the technology and the tools that they're creating, going all the way back to Clouds, and then Zero Days VR, then Blackout, and then The Changing Same, which is their fourth major project.
So each time they're evolving the tools and also finding new ways of expressing this aesthetic of volumetric capture. There is a trade-off between something like Depthkit and other volumetric capture solutions like Microsoft's Mixed Reality Capture. Those tend to be very high fidelity, but they're also super, super expensive. And so Scatter is able to make something that's a lot more accessible, but the trade-off is that there are a few more of the digital artifacts that happen within the volumetric scans. So it's a bit of a glitchy aesthetic that may pull some people out of an experience. As time goes on, I think the scans are going to have less and less of those digital artifacts. I think that's just the nature of how the technology progresses; there's always continued improvement over time. But I think at the same time, it's a trade-off in terms of trying to get these technologies out there, to get the technology in the hands of people, to be able to tell the stories that are really worth telling. For me, I just think it's amazing that something like this is out there to help push the overall medium forward, because I think there are just a lot of innovations that come from The Changing Same and some of these other previous projects as well. The first half ended up getting into a lot of these overarching sensemaking frameworks that I've been using to make sense of what's happening within the immersive industry. Depthkit has really been on the frontier of living that out, in terms of being pioneers and helping to disperse these technologies into different layers of the culture, whether it's fashion and hip-hop and these other creative artists. And now they're trying to productize it and create these tools to be used by volumetric filmmakers within Unity, to further explore different innovations of immersive storytelling.
One of the things that Alexander said is that these technologies don't just come out of nowhere. Usually they're born out of the military-industrial complex or some sort of defense application. Computer vision is an example: it's a technology that on the surface feels like it just came out of nowhere, fully formed, but there's been a lot of money over many years invested by the government to develop a lot of these technologies, because they have some specific military uses. But eventually the technology comes out, and as James shared, you know, the William Gibson quote of the street finding its own uses for things. So then there's the hacker, creator, maker, storyteller ethos that gets a hold of this technology and starts to use it for creative expression and storytelling, and so there's a little bit more of a pro-social use of some of the technology. But sometimes it's hard to know where things began, whether it's from the military technology, or whether it's from the Sensorama, as an example of a machine that was trying to fully immerse people into a cinematic experience, which is something that came before Ivan Sutherland's ultimate display paper, as well as before his virtual reality system. So it's hard to know the origin point, whether it's starting with the stories or starting from what the technology is able to do. But Alexander was really tapping into that mind of a child, to see what kind of experience of awe and wonder you're able to cultivate. And, you know, these storytellers and speculative designers and science fiction writers are able to imagine this future, to see how the technology is going to be used and how it will change the way that we express ourselves as humans.
So the other big thing is just that Scatter and Depthkit are really focusing primarily on this accessibility, trying to make these tools democratized and available for more and more people to use to express whatever story they want to tell. So in the second half, we started to dig into some of the challenges, everything from the eye lines to the timing of dialogue. One of the things Alexander was saying was that when you're trying to have a conversation that has an angry tenor, and you're shooting each performer individually so that they're not stepping on top of each other, then you end up in a situation where it's kind of hard to really get the timing right, where you don't want to have these awkward pauses. And with something like volumetric capture, it can be very difficult to morph things in. It's basically like a single take, like a theatrical production. So you have to deal with how you're cutting these different scenes together, but also, you know, you have to stage it spatially to know what angle you're going to be capturing from, because you don't necessarily need that 360-degree coverage. And because of that, you have to do a little bit more planning of how to spatially align everything. So it was just interesting to hear a little bit more from Alexander about what types of things he needs to do to actually plan and implement some of these volumetric captures within the context of this Unity project. And for the viewer immersed within that, what may look okay on a 2D screen, when you start to get in there, then the depth of field of someone's focus, as well as their outline and everything else, needs a lot of massaging. Eventually, I'm sure you're going to want to have everybody just do a single take, more like a theatrical production. But right now, we're not at that point.
I mean, there was the Intel Capture Studio, but that is now defunct. That was a way of potentially having lots of people in the same space at the same time, which kind of solves the issue of needing to sync all these individual takes together and reconstruct that sense of spatial presence. I think as time goes on, especially if you start to have these rigs where you can have multiple rigs capturing at the same time, then maybe it'll be a little bit easier to start to do these captures with multiple people at once. And something like volumetric capture is starting in the film festival context, but eventually I think it will come to apps like Snapchat, TikTok, or Instagram. So these volumetric filters, and how they can be used to do full-on volumetric captures, we're already starting to see some of that volumetric capture hologram advertising or performances. But there's this trade-off in the ways that you give up some of your own creative freedom, or full control over your identity, or your privacy. As you use these platforms, there are certain ways that you're losing certain aspects, or you're kind of mortgaging your privacy or paying with your data. But there's also the network of the audience that's there. So there's this existential trade-off. But from what I can hear, I think that eventually Scatter is going to want to move into more of this mobile app realm. But because of all the difficulties in trying to figure out what the affordances of immersive storytelling actually are, and how to fuse together all these different design disciplines, that's the problem that they're working on first and foremost. They're doing these creative projects, eating their own dog food in that sense, and trying to figure out these workflows for how to bring all these different design disciplines together.
Yeah, and there's just this vision that James George has of creating a space and a place for people in the world to represent their own story and their lives, anchored within their own specialized context. And for Alexander, he wants to be able to express the different nuances of his own inner psychic life, to have a full expression of the many aspects of our experiences that range from the weird to the complex, sophisticated, beautiful, odd, or surreal. So really just this full expression of human stories, and really focusing on creating accessible tools to democratize access to volumetric filmmaking. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash Voices of VR. Thanks for listening.