Back in 2014, the CLOUDS interactive documentary premiered at Sundance New Frontier, where it debuted a VR interface for navigating over 40 oral history interviews with creative coding pioneers. Movies have typically been pretty linear, so how could a documentary become more interactive? Just as hypertext links enabled web content to interactively link to other related resources with many inbound and outbound links, the CLOUDS creators James George and Jonathan Minard used a similar concept to hand-craft multiple inbound and outbound connections for every segmented interview clip. This created an interconnected web of media that can be navigated within VR, which ends up being more like a tagged media database than a linear film.
I had a chance to talk to one of the creators of CLOUDS, James George, who is the co-founder & CEO of Simile Systems and a founding member of the production company Scatter. James is a very innovative thinker about the future of interactive media, and he has many deep thoughts on the topic. He talks about the 4-year evolution of CLOUDS, documenting the creative coder movement, how they implemented their interactive documentary, and the future of cracking the narrative code of the VR medium by defining each of its different genres.
LISTEN TO THE VOICES OF VR PODCAST
CLOUDS has been out for a few years now, but I think it's still ahead of its time in terms of what it's doing with interactive documentary. James is currently in the process of productizing the Depth Kit to transform a Kinect into a computational photography camera, and he is in production on a VR narrative experience called Blackout VR with Alexander Porter.
Here’s a 4-minute overview of CLOUDS
Donate to the Voices of VR Podcast Patreon
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye and welcome to The Voices of VR Podcast. So I'm going to be continuing on this week with my focus on storytelling in VR by looking at this very ambitious and innovative interactive documentary that premiered at Sundance back in 2014 called Clouds. It's by James George and Jonathan Minard, and what they did was to take this huge repository of like 40 interviews and 10 hours of footage and, rather than just cutting down a linear narrative experience of that, they created this database of content that you could navigate through this system where you can interactively go from point to point. And the topic of focus was creative coding. So creative coders are artists who express themselves through code, through writing code and creating these 3D interactive media. So in the process of documenting this community, they really started to ask, like, what does a self-portrait look like of a digital artist? And so they started to use the Kinect as a computational photography system to do this depth capture of these artists talking about creative code, and then eventually put them into virtual reality so they could have an immersive experience of their own art. So there's a lot of really innovative things that I think James George and Jonathan Minard were doing in the Clouds documentary in terms of creating this Wikipedia of interactive media. So we'll be talking about Clouds and some of the ideas about storytelling as well as genres within VR on today's episode of the Voices of VR podcast. But first, a quick word from our sponsors. Today's episode is brought to you by the Virtual Reality Company. VRC is creating a lot of premier storytelling experiences and exploring this cross-section between art, story, and interactivity.
They were responsible for creating the Martian VR experience, which was really the hottest ticket at Sundance, and a really smart balance between narrative and interactive. So if you'd like to watch a premier VR experience, then check out thevrcompany.com. Today's episode is also brought to you by The VR Society, which is a new organization made up of major Hollywood studios. The intention is to do consumer research, content production seminars, as well as give away awards to VR professionals. They're going to be hosting a big conference in the fall in Los Angeles to share ideas, experiences, and challenges with other VR professionals. To get more information, check out thevrsociety.com. So this interview with James George happened in New York City on Saturday, July 16th, right after an AR/VR meetup. So with that, let's go ahead and dive right in.
[00:02:53.547] James George: My name is James George, and I am an artist, filmmaker, entrepreneur. I currently am the CEO of Simile Systems, and we make a piece of software called the Depth Kit. I'm also a co-founder of a production company called Scatter, and we work on VR content projects. How I got into VR, you know, my background is in filmmaking and computer science, and it always felt like these were two separate worlds that I was trying to mix. I was very interested in the way that code and computation could interact with the moving image to make new forms of media and push the future of film. And I got into working a lot with camera systems, interactivity, large-scale projections, doing interactive installations, and working with artists really to do site-specific activations with projectors and cameras, really excited about that kind of stuff. And I worked with several artists learning how to make these kinds of installations, working as a technician. So I worked with a woman named Karolina Sobecka, implementing a lot of her work and making interactive installations, and then I worked with Chris Milk as a technical director for his project The Treachery of Sanctuary, which was a large-scale interactive installation, and on its tour of the world. And then when I had my first VR experience, I thought of it really as a way to take these large-scale architectural experiences that I was building, which could only be seen in one place in the world, for a limited amount of time, by a limited number of people, and actually distribute these interactive installations all over the world. So take the spaces you're designing and put other people in them. So I saw the potential of that. And I was also really thinking about, you know, back to film, the way that we interact with images and understand things through images, and capture and translate our world.
If there could be a new form of filmmaking that takes place in a virtual world, one that goes beyond wide-angle videography but really is putting you into a new interactive space that you can move through like a holodeck experience, yet still resembles and reflects our lived experience. So all of the latest work now is really focused on building that medium, that genre of live-action, immersive, room-scale experiences.
[00:05:09.495] Kent Bye: Yeah, so I have a similar background in the sense that I have an electrical engineering degree. And then I sort of left working on the F-22 Raptor within the military-industrial complex for five years, and then started making documentary films. And then sort of that led into doing technical training videos and podcasting. But I always kind of felt that I was trying to find my medium. After doing like 80 hours of interviews focusing on the media's performance leading up to the war in Iraq, I started doing this kind of interactive documentary project, trying to not just replicate the problems of the media by creating an oversimplified 90-minute summary of an 80-hour experience. And it set me on this path of trying to create this interactive experience where people can kind of navigate through this corpus of media, which ultimately didn't work out. And so then I realized that the thing that I really liked to do was just interview people, and I thought, well, I should just do a podcast, and so I started doing like Drupal podcasts and technical podcasts. Then eventually, when VR came along, for me it was kind of like, oh, well, maybe my medium's actually VR, where I could take my filmmaking and artistic expression, use my technical background, and actually kind of fuse the two. And so back on January 1st, 2014, that's when I decided to buy my Oculus Rift. It came a week later, and I had this DK1, and right around that time Sundance was coming up, and I saw that in their New Frontier section they were showing Clouds, and they were showing a virtual reality version of it. And I was just, like, inspired. Like, oh my god, they have done it. They actually created that vision that I had with this interactive media that I wasn't able to technically pull off, but you're actually able to take these video clips shot on a 3D depth-sensor camera and put them into a whole interactive experience.
So maybe you could talk a bit about Clouds and how it came about and what you were able to do.
[00:07:05.170] James George: Yeah, I mean, the Clouds project really spanned four years of time and went through several iterations. So it's difficult to summarize, but the impetus of the project started in a very humble way. My collaborator on the project, Jonathan Minard, has a background in media as an anthropologist, documentary filmmaker, and now fiction filmmaker and screenwriter. I met him at a hackathon at Carnegie Mellon University, where the head of the Studio for Creative Inquiry, which is sort of an art and technology lab there that focuses on interdisciplinary hybrid projects between science, art, and technology, had created a conference. At that time, this was 2011, it was around the explosion of 3D sensing and visualization that was made possible with things like the Kinect and the Asus PrimeSense. And really, it was the first time that gestural interactivity and 3D scanning were something that individuals could get their hands on. And that studio was focused on bringing artists together to help contextualize that technology: what could this utilitarian technology be used for in an expressive, creative capacity? So I was an invited artist, and Jonathan was working at the studio at the time. My interest at that point was in using this technology not for interacting with the computer, but as a new form of photography. And I was collaborating at the time with Alexander Porter, doing a series of photographs that involved spatial imaging. So we were going to the subway and taking pictures of people there, like candid public photography, to talk about and expose the possibility of what it's like to look through an artificial intelligence surveillance system or things like that, really getting at the types of empathy and fear that technology evokes, and using that form to evoke that in photography. And I was thinking at that time, when I went to the studio in 2011, could we use this as a new form of filmmaking?
Could these three-dimensional cameras actually open up a world where you could take photographs or video of people and then look at them elsewhere after the fact? So this was a long, sort of futurist vision. And then, if we're looking towards that future, what do we do with this really nascent technology now? It was sort of glitchy; it felt like early cinema, like peering in, you know, watching the film develop. Here's all these point clouds and lines, and it felt very much like looking into a new dimension. And in this community, there were essentially hackers and artists that were making these tools available. The community was called the Open Frameworks community. They create C++ libraries for people to do interesting things with code. And I was in that community for the interactive work that I was telling you about earlier. We were doing large-scale projection and camera tracking and really experimenting with the creative possibilities of new technology. And so the natural thing to do when I was there hacking, and got this camera system up and running where I could actually use the Kinect as a video camera, was to do interviews with the people in the hackathon, all of the people around that community. And with Jonathan being an expert interviewer, much like yourself, and being infinitely curious, you know, both of us being infinitely curious about what motivates people to work in this space, to create open source tools, to make artwork using computers, we began interviewing people. And we used this camera system. It was just obvious to us that even though there was this long-term future of true volumetric filmmaking, where you have the holodeck-like experience, this community, these explorers, these pioneers were actually laying the groundwork and the thought work for what that would become.
And it was a moment, in like 2012, in the creative coding community where a lot of the work that had been done in these kind of academic tide pools was spilling into popular culture and being used for music videos or feature films or being used by the advertising industry, and we really felt like that creative artistic discipline that had been somewhat rarefied was now part of the public consciousness, so we wanted to capture that moment. And we found that this visualization technique, this sort of nascent, glitchy point cloud aesthetic, was a beautiful form-and-content connection with artists that work in that space. Because, you know, if you're an artist, you could paint a self-portrait of yourself. But if you work with code, how does a code artist make a self-portrait of themselves? Well, you capture yourself using computational photography and then you represent yourself in a virtual space. That sounds quite involved, but it was actually quite elegant, you know. It's sort of this, you know, like Max Headroom throwback cyberpunk future, but also quite relevant and quite now. So we continued to take interviews. And, you know, much like you were describing with your Voices of AI podcast, you started with a lot of the people you knew or had access to and then kind of built up confidence and understanding, and we kind of just sort of traced the network to the luminaries in this field and interviewed people like Bruce Sterling, coiner of the term cyberpunk along with William Gibson; Paola Antonelli, design curator at the MoMA; and Kevin Slavin, who's a media theorist and game designer. So beyond just the practitioners, the programmers, and on to the theorists and people who really define and contextualize the medium, to capture the conversation. We had a similar challenge where we were filming and filming and filming and didn't know exactly how it would all stitch together.
And we started, we made a quick, you could call it an export even, like we just stitched something together and made some visualizations and put it online. We made like an eight-minute version and a 20-minute version, and then we had these other topics that were already sequenced. And because the conversation wasn't super directed, we didn't have an agenda with our interviewing, it was really more curiosity-driven, we had a lot of different roads that it could take, a lot of different paths. And, you know, one sequence that was interesting to one person may not be interesting to the next. So we actually started, in the interview process, asking the artists and the designers and the programmers, like, what would you do with this corpus of information? How would you visualize this? And of course, invariably, the answer came back in some form of: well, why make a movie? Why not make a database? And these are artists that work with interactivity. All of their work is software-driven and real-time, so it involves the interaction of the viewer to work. So why not make a film that involves the interactivity of the viewer? So we devised this thing that we call the story engine, which is a software system that has all of the footage that we'd captured, annotated, and linked together, essentially, and allows a viewer, using their curiosity, by choosing among questions that the system poses to them, to jump from one topic to the next and essentially explore this web of ideas; you know, this thing reminds me of this other thing, much like a conversation, a stream of consciousness, really. And going a step further, because we were working in a spatial paradigm (we were using volumetric film, we had 3D graphics, that was our format), we actually took that spatial metaphor and applied it to the content. So it created this thing that we call the cluster map, which is this web of ideas.
And every topic in that web is closely related to adjacent topics based on how people talk. It was organic. We didn't actually set out to say this topic will be here and that topic will be there. We actually responded to what people were saying. And then the experience of Clouds becomes like a traversal through this web. And so every time you enter it, you choose different questions, and you'll find a new path. And you may find yourself back on a topic you already listened to; you'll hear new voices, or it'll be informed by where you had just come from. In its entirety, there's over 10 hours of edited footage in Clouds from over 40 different people. And we were left actually with this challenge of, how do we actually build this system? But again, this sort of form-and-content meta project turned in on itself. We were working with these open source tools for visualization and graphics, so we built the whole thing using Open Frameworks. And so it really was a portrait of a community. And then we went into a second phase, once we had all the interviews and were working with the way of traversing and watching. It wasn't enough to just be able to see the artists talking and hear about the work; what they were talking about was extremely abstract. So if this was at all going to bring new audiences into this work, then we needed to actually show what they were talking about. So we went back in another phase, commissioned many of the artists, built visualizations ourselves, and brought what essentially amounted to a virtual exhibition of their artwork into the application. So as you're watching them talk about an idea, say, "I like to simulate nature, and in nature you have birds flocking or fish schooling," you would actually then see that artist's work and see a flocking simulation that they had built. So you could get a visceral sense of that. And that kind of acts like the B-roll in a traditional documentary, although it's all generative and interactive.
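The story engine and cluster map described above can be thought of as a graph of annotated clips: topics link to adjacent topics, the system offers the viewer choices drawn from those links, and each viewing traces a different path. Here is a minimal sketch of that idea, with invented topic and clip names; the real story engine was built in C++ on Open Frameworks and is far richer than this:

```python
# Topics link to adjacent topics in the "cluster map" (invented names).
TOPIC_LINKS = {
    "simulation": ["nature", "games"],
    "nature": ["simulation", "data art"],
    "games": ["simulation"],
    "data art": ["nature"],
}

# Each topic has annotated interview clips attached to it (invented names).
CLIPS = {
    "simulation": ["clip_sim_1", "clip_sim_2"],
    "nature": ["clip_nat_1"],
    "games": ["clip_game_1"],
    "data art": ["clip_data_1"],
}

def next_options(topic):
    """Choices offered to the viewer: stay on this topic, or move next door."""
    return [topic] + TOPIC_LINKS[topic]

def traverse(start, choices):
    """Play one path through the web, given the viewer's topic choices."""
    played, topic = [], start
    for choice in choices:
        assert choice in next_options(topic), "can only move to adjacent topics"
        topic = choice
        played.extend(CLIPS[topic])
    return played
```

Starting from "simulation" and choosing "nature" then "data art" plays that branch's clips; a different sequence of choices yields a different film, which is the nonlinear traversal James describes.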
This was all being done in 2011, 2012. VR was kind of rising at that time, and it wasn't really clear where it was going. We really focused on the form of this and actually had not a clear idea of how it would be published and in what format. You know, I was coming at it as an installation artist. I was imagining ways of projecting it around you, you know, building exhibitions for it. But every time, we were thinking maybe it could be like a video game, you know, that we want people to be able to explore it on their own time. It wasn't totally clear. Really, we were working kind of from the inside out, which I don't recommend, actually, for a project. You probably should have a better plan going into it. But for us, it was an exploration. And, actually, the project was invited to premiere at Sundance. This is when you encountered it. So, this is fall of 2013. And the curator was saying, you know, we're bringing this company here, this Oculus company. They have a new virtual reality headset, and it looks like your system could actually work in VR. Like, are you using 3D graphics? And I actually, you know, had become aware of the Oculus. I hadn't tried it yet. And I said, you know, that's not crazy. Like, maybe that could work. We were fortunate enough that we had built the Clouds system in C++, and Open Frameworks is a great aggregator, able to interface with hardware and cameras; it's really built for this. So in really a matter of maybe three or four days, we were able to take one of the Oculus headsets and get a VR viewer set up for Clouds. And at that point, we were totally floored. You know, David O'Reilly, the famous animator, says you can't get high on your own supply. If you're an artist, like artists, and myself included, it's hard to get super excited about your own work. Sometimes you're just always nitpicking it. It rarely surprises you. You're always focused on the details, and that's what makes it great.
But this was one of the rare moments in contrast to that, where we had this team of 13 developers making visual systems, and they're all grinding away trying to make it look better. And we would put them into their own visualization with VR, and they were just like, wow, this is beautiful. This is amazing. Did I make this? And for these artists, it's almost like they had just been looking through a tiny crack at these really deep simulations they were making. And the VR interface was such a fluid, physiological way of experiencing these abstractions that they were building, these three-dimensional visualizations, that it was actually breathtaking to them, because it's like, this thing that's in my head that's reflected through the screen can actually be all around me. So for us, it was this beautiful moment where the concept of the project, like artists coexisting with their code, was actually enacted. And we could actually put the people that were working on the project into VR. So we decided at that point that we had to push forward with a virtual reality version of Clouds. But we didn't abandon the screen-based version, because in so many scenarios you have to show it in theaters, you want to be able to show it on a laptop. It was a lot of work, but we continued to maintain two versions of it. And even today, there are these two different versions that are really two different views into this expanse of ideas and code and creativity.
[00:18:45.975] Kent Bye: Yeah, and to me, the thing that's really striking about actually experiencing it was to see some of those 3D visualizations and to be immersed in that art. Because I think a lot of people within VR are creating these more concrete experiences and scenes, and this is really getting into this really abstract art generated from code, so code as art. And so maybe you could talk about this community of artists who create art through code.
[00:19:13.495] James George: Yeah, I mean, that's the subject of the film. And they're an interesting... I mean, I am part of this. So in a lot of ways, for Jonathan, as an anthropologist, I was sort of his guide through the Amazon. Because I've been working in the Open Frameworks community, I'm implicated, I'm one of that community. So it allowed me to kind of see the way that things connect. So I'll start with the history of this community, because it's fascinating. A lot of it can be traced back to the MIT Media Lab Aesthetics and Computation Group under the guidance of John Maeda. And John Maeda and his lab, they were one of the first, and this is in the late 80s, early 90s, to focus on the application of design and aesthetic thinking, design thinking, to technology. We're all used to the iPhone and thinking about high design and technology, but it was actually fairly novel at that point to think of design aesthetics and graphic design as something that can be generated with code. So John built this system for his students called Design by Numbers. And it was an intentionally limited 128 by 128 pixel window that you had a code window next to, and you could write graphics commands. So you could draw dots. You could draw lines. You could make it white, make it black; it was actually black and white. Or you could do things like create a class that represented a fish, and then you could create 100 fish, and then you create rules that make those fish interact differently when they're moving next to each other. And then all of a sudden you have, playing out in this tiny little window, something that resembles a school of fish. And so this idea of using simulation, using your intellectual and logical understanding of something to simulate it but then create an expressive result, is a through line throughout this community.
And the thing that makes this community strong is that it's very difficult to work with high technology and vision and graphics. It takes a lot of people years and years of training to get to the point where they can start building simulations, you know, if they're in the game industry or in the military simulation industry. But in this community, the backgrounds are often actually not technical. They're designers, artists, sometimes even sociologists, people coming from a lot of different angles. And the philosophy of the Open Frameworks and Processing communities is that it shouldn't be hard to get started. And that if we're constantly sharing what we build and publishing examples, and continuing to talk and remembering what it's like to learn, we'll be able to constantly have an influx of very interesting ideas and a diversity of thought in this community. So that prioritization of open processes, sharing, and a focus on teaching is really what makes this community so accessible to so many people, which then results in so much interesting and diverse work. So that's what galvanizes a lot of this. And because it's all codified on GitHub and online, there's actually a social graph that's visible and traceable: who contributed what code, who built this simulation. We could actually in some ways trace those connections, almost like a wagon wheel, where the code is the hub and the spokes are the individual content creators and individual artists. It allowed us to find the sort of nodes of the community and interview them and bring their work into the film. So in a lot of ways this community defies the traditional notion of what an artist is, because we have this sort of Renaissance concept of the individual genius, the artist working in a garret, their style being one person's genius.
And this really focuses on the strength in community, and that through sharing, we all can make each other better and grow a community out of that.
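The "wagon wheel" tracing James describes, with shared code as the hub and artists as the spokes, can be approximated by counting contributions across a repo-to-contributors graph. This is a hypothetical sketch with invented repo and artist names, not the actual method or data the CLOUDS team used:

```python
# Shared code repositories as hubs, contributors as spokes (invented names).
CONTRIBUTIONS = {
    "shared-graphics-lib": ["alice", "bob", "carol", "dave"],
    "depth-camera-addon": ["bob", "carol"],
    "shader-examples": ["carol", "erin"],
}

def centrality(contributions):
    """Rank artists by how many community repos they contribute to,
    breaking ties alphabetically."""
    counts = {}
    for contributors in contributions.values():
        for person in contributors:
            counts[person] = counts.get(person, 0) + 1
    return sorted(counts, key=lambda p: (-counts[p], p))

hubs = centrality(CONTRIBUTIONS)
```

Artists who appear across many shared repositories rank first, giving a rough way to find the well-connected "nodes of the community" worth interviewing; a real analysis would use a proper graph-centrality measure rather than this simple count.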
[00:22:54.630] Kent Bye: Interesting. And I'm curious, from your own perspective, what are your thoughts on actually going through and viewing a documentary in the format of a database, and what your personal experience of that is like?
[00:23:10.302] James George: You mean when I watch Clouds? Well, I mean, again, you can't get high on your own supply, so I don't. Beyond, like, scrutinizing it to see if it's working, I have since stopped just diving in and enjoying it. But there's a certain kind of chaos to it that I really enjoy, that one thing will remind you of another thing. It's not set up in a traditional documentary form, where you're kind of introduced to the problem or the thesis or the perspective, and then that kind of gets unpacked in a linear, 90-minute way where you have a realization or a payoff at the end that shows, you know, kind of a new perspective. This is much more like attending a conference or browsing the internet, where you're kind of presented with a multiplicity of different ways to go at each step, and you say, okay, this caught my interest, or that caught my interest, or, whoa, let's go back to the beginning and start again. And only through multiple viewings, only through seeing it a lot of different times and coming back and visiting and leaving again, do you start to build an image of what the entire corpus of the conversation is. And I think in some ways that reflects a lot more our lived experience of understanding. And presenting a documentary as a system is fascinating to me, allowing the viewer to explore and to feel like they just made a discovery, versus being shown a certain perspective. And I'm not saying that's a replacement or an evolution of the form or whatever. I think it's just a way of telling a story that was conducive to this community and fit for this format, and I think it was a powerful choice that we made, and a risky choice, to abandon some of those traditional documentary conventions to move it into this space. And it's a double-edged sword, you know. I think with only a cursory experience of it, you might not be left knowing that much more.
But for those who give it time and dive into it, I think they're coming out with something much greater than if we had distilled it to something linear.
[00:25:05.588] Kent Bye: Yeah, the two major thoughts that I had about it are that, first of all, there's comparing this to watching a regular documentary that has, like, B-roll and is kind of structured in a certain linear narrative and arc and has some sort of story that it's telling. Then there's comparing it to some sort of knowledge representation of a nodal graph and being able to dive into it. And so in terms of the story, I kind of felt like sometimes it was hard for me to track the thread. And going to the knowledge representation, if you had the nodal connections and I was able to actually jump around more non-linearly, more like cruising the web, I felt like that would have perhaps been a better way for my mind to contextualize and encode the information that was being shared: to be able to kind of more interactively dive in, like, oh, I can see this node has like three sound clips, or, oh, this one has like 25, it's a really rich topic. And the question then becomes, how do you label those themes? You have tags, but how do you visualize how these different major themes are overlapping? And in terms of the narrative aspect, I just kind of thought, sometimes if there wasn't any visual feedback, or when there was a visualization, I got so caught up in the visualization that it was hard to track these abstract concepts. And so it was almost like, okay, if I really wanted to dive into this, my mind may actually be able to listen to it better as an individual podcast, to be able to track the context of an individual person and hear the question and kind of be able to encode it in my mind that way. And so I've faced a similar question or issue of having this huge repository of data.
And, you know, what I do with the podcast is give the full context. I introduce it, and I sort of conceptually think of it as weaving knowledge together, but through these discrete interviews that may be anywhere from like 15 to 40 minutes long, you know, and people listen to it and kind of have to do that in chunks. I think the challenge is, what if people don't want to listen to the full 10 or 20 hours of footage, but really only want to listen to the best 42 minutes for them? How do they get to those topics, to sort of a slice of it, because not everybody has time to dive into things? So having this way that it's split up, you know, I think about, okay, how do you give people access in the framework to be able to not only navigate it, but have a mental model in their mind that allows them to set the context, so that they can actually receive all the information and be able to really fully integrate it? And the challenge here is that you're basically taking something that's within a full context, editing it, putting it into a new context, and then recontextualizing it within that. And whenever you're editing it, something's lost, but something's gained. And so there are these trade-offs that happen. So that's sort of, as I experienced it, these are all the types of things that I think about.
[00:28:02.104] James George: Yeah, I mean, I wish I could show you again; the screen-based version has exactly what you're asking for, to an extent. We have this idea of zooming in and zooming out. So when you're in the experience, you can take portals to different content areas, and they're always one step further; they're adjacent topics at every point. So you can stay on the current topic or jump to the one next door through the portal. And this is like a spatial navigation. But at any time, you can stop it, zoom out, look at where you are in the entire web of ideas, and choose a new topic and dive in. Or conversely, move over and say, I want to know more about that person; I could listen to their entire interview. And then the story engine will stitch together a coherent discussion from that person, based on what you were just listening to and then dovetailing into other things that they'll talk about. But we strike this balance: we don't want it to be like a protracted playlist system. You know, it really needs to be the experience of being immersed in something, trying to give you, the viewer, a vicarious understanding of what it's like to navigate these spatial ideas and float in a world of abstraction, which is really the world that these artists live in. So there's a balance between being sort of didactic, saying, you know, listen to these three clips and you'll have the best information, and presenting an interface that is about exploration, because in a lot of ways the interface and the experience of that interface is the film, and creating a mystery and curiosity and a little bit of confusion, those emotions are important to what you experience. So I would say it's a balance, and we ask ourselves those questions, but yeah, it's a valid critique.
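The "follow one person" behavior described above, where the engine stitches a coherent per-speaker sequence starting from the topic you were just on, could be sketched as sorting that speaker's clips by topical distance in the cluster map. The speaker name, topics, and distance table here are all invented for illustration, not the CLOUDS implementation:

```python
# One speaker's clips, each tagged with a topic (invented names).
PERSON_CLIPS = {
    "sam": [("generative art", "sam_on_genart"),
            ("teaching", "sam_on_teaching"),
            ("open source", "sam_on_oss")],
}

# How far apart topics sit in the cluster map; smaller = more closely related.
TOPIC_DISTANCE = {
    ("teaching", "teaching"): 0,
    ("teaching", "open source"): 1,
    ("teaching", "generative art"): 2,
}

def person_playlist(person, current_topic):
    """Order one speaker's clips so the current topic leads, then dovetail
    into progressively less related topics."""
    clips = PERSON_CLIPS[person]
    ranked = sorted(clips, key=lambda tc: TOPIC_DISTANCE[(current_topic, tc[0])])
    return [clip for _, clip in ranked]
```

Asking for "sam" while on the "teaching" topic yields that clip first, then the adjacent topics in order of closeness, approximating the dovetailing James describes.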
[00:29:35.490] Kent Bye: Well, I think that the mainstream media tends toward a gross oversimplification of things. Part of doing the Echo Chamber Project, what I was trying to do was say, okay, if you really want to dive into it, you have to give people the full context and access to all the footage that ends up on the cutting room floor. Because isn't it a shame that when you produce a 90-minute film, the ratio can be horrible, maybe a hundred to one, so for every hour that makes it in, there are a hundred hours of footage that never did? And there's just so much knowledge in there. Perhaps that's why I focused on podcasting, to be able to preserve that. But I think there's a lot of room for innovation in really getting this type of idea and interface locked down: giving people access to rich media that is serial and sequential, where you can't skim it and it's hard to jump around. How do you take something that's inherently linear and give someone a nonlinear experience of it, with enough information to navigate, whether that's a GIF or a thumbnail? The web has figured out a lot of good ways to do this for text, but how do you take those principles of the web and create an immersive experience of rich media, one where you really feel immersed in the interactivity and are able to get where you want and find the information you want, even with only partial information from labeling, where you don't really know until you actually hear it and experience it?
[00:31:01.016] James George: Yeah, that's the challenge. And I think that every interview database, every topic, every subject matter will require a design that fits it. We made this project focusing on this particular subculture of artists and programmers, and these spatial metaphors and the way it was laid out worked for us in accomplishing that. We hope that it inspires people. The project's code is open source, so you can read the story engine, see the visualizations, and see how these things were stitched together. We hope that, at the most basic level, it inspires people to be ambitious and open-minded in the way they approach creating things like this in the future. So there's no one-size-fits-all solution. It's something that I really don't think can be productized, or at least I don't want to spend that time building a general spatial navigation system for interviews, but I think there is a lot to be explored. I talk to filmmakers, and a lot of people are inspired by this story engine, this concept of saying, I want to get from this topic to that topic, and I'll weave my way there. So more than anything, Clouds going forward should inspire people not to take those conventions as a given, and to know that there are new forms possible.
[00:32:13.305] Kent Bye: Yeah, when I did the Echo Chamber Project and was trying to splice up the soundbites, that was a big challenge right there: the in and out points of a soundbite, and whether adding one extra sentence adds a little more meaning but also a lot of time. That's something that took a lot of manual, brute-force work, to actually get the data prepared for something like this. I can imagine a future where you have artificial intelligence and natural language processing, mixed with automatic detection from the semantic meaning, and maybe AI could figure out the in and out points of those soundbites. It's an art. You can know when you hear a soundbite: yes, they nailed it, they spoke it, they transmitted the idea, it's clear. There's a lot of contextual information, a lot of intonation in how they speak the words, but also just an intuition that editors have to have. So way out in the future, once we have more sophisticated AI, I can imagine setting it upon a data set to automatically chop it up and make this more feasible, because as great a conceptual idea as this is, it doesn't really scale to the volume of interviews and video being produced every day.
[00:33:29.258] James George: Yeah, I think so, and this is a theme in Clouds and a belief I hold deeply about the future path of technology. There's a dichotomy between whether you think AI is going to replace the artist, or whether AI extends the hand of the artist. I deeply believe in the latter: that we're able to make more complex tools that expand our capabilities and make us faster, more nuanced, and more thoughtful in our work as artists or creators. For Clouds, we contemplated these same kinds of things, automated editing, semantic processing, transcription, and we really dialed it back to: let's just create an editing system that's non-linear, that goes beyond the linear timeline. You can think of a timeline in Final Cut as a structure where every clip has one in, which is the clip before it, and one out, which is the clip after it. What if we blow that away, so that many clips can lead into one, and one can lead in many directions? Just that simple change, even if you put it back into the hands of an editor who has to curate all of those connections, in the same way our mind curates our neurons through conditioning. That editor is then the conditioner: they watch and watch, they prune and trim and make new connections. And in the same way that a masterful editor on a film can stitch together the perfect sculpture of the film, given this new tool, Jonathan was able to create and condition the network of Clouds in a way that became meaningful. So I think that if you leave too much up to the computer, you will end up with something that doesn't feel like it was created by a person, and we will alienate people through that. But computers are great at generating possibilities, great at making different combinations of things that the artist can then look at and say, I like that one, that one, and that one.
And I don't like the 10,000 other ones. So there's this feedback loop of creativity: being shown possibilities that AI or generative graphics can show you, but always making sure the artist is in the loop, that the creator is able to curate those possibilities. This is a theme that's actually discussed deeply in Clouds, because a lot of these artists work that way. They generate a multiplicity of possibilities and then publish three of them. They're tweaking the parameters rather than setting the pixels.
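The non-linear edit structure James describes, clips with many inbound and many outbound connections that an editor hand-curates and prunes, is essentially a directed graph. Here is a minimal hypothetical sketch in Python; the class and method names are invented for illustration and are not from the actual CLOUDS codebase.

```python
# Hypothetical sketch of a many-in/many-out clip graph: unlike a timeline,
# each clip can be cut to from many clips and can cut to many clips, and the
# editor "conditions" the network by adding and pruning links by hand.
from collections import defaultdict


class ClipGraph:
    def __init__(self):
        self.outgoing = defaultdict(set)  # clip -> clips it can cut to
        self.incoming = defaultdict(set)  # clip -> clips that cut to it

    def link(self, src, dst):
        """Editor adds a connection: src can lead into dst."""
        self.outgoing[src].add(dst)
        self.incoming[dst].add(src)

    def prune(self, src, dst):
        """Editor trims a connection that doesn't play well."""
        self.outgoing[src].discard(dst)
        self.incoming[dst].discard(src)

    def successors(self, clip):
        """All clips playback could jump to from here."""
        return sorted(self.outgoing[clip])


g = ClipGraph()
g.link("intro", "on-code")
g.link("intro", "on-art")
g.link("on-art", "on-code")   # many clips can flow into "on-code"
g.prune("intro", "on-art")    # editor trims a weak transition
print(g.successors("intro"))  # ['on-code']
```

The timeline James contrasts this with is the degenerate case where every clip has exactly one inbound and one outbound link; allowing sets on both sides is the "simple change" that turns an edit into a navigable network.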
[00:35:51.435] Kent Bye: So I'm really curious to hear some of your thoughts on balancing this sense of cultivating presence and embodiment within a VR experience with narrative, this tension between interactivity and narrative. You started to dive into that a little bit with Clouds and into your future work, so I'm curious to hear your thoughts on that.
[00:36:13.217] James George: Yeah, I think there is a creative tension there. In an early stage of a medium, which is where we are right now with virtual reality, we have difficulty separating what's going to be decided at the level of the genre from what's going to be decided at the level of the medium. This genre framework is important to me because we don't have established genres in VR yet. We're starting to draw some circles around similar projects or communities, like 360 video documentary or mundane task simulation, things that are popular and great and differentiated, but we don't really know yet. The language, grammar, and conventions are what ultimately make a genre, and a collection of genres makes a medium. When it comes to narrative, I think that cracking the narrative code is difficult to do on a project-by-project basis, and it's obviously impossible to do for the whole medium at once, but this in-between is where we can find a happy medium. So when you make a project, maybe think: what genre would this imply? It implies the negative space of projects that have not yet been created. Thinking in those terms, a lot of the semantics of where you want the viewer's attention to be in virtual reality, and what you draw attention to, become how you control what they think about. If you represent your body, you are a character in VR, you're taking a point of view, and therefore the motivation of the protagonist, which is implicitly yourself, becomes where the narrative is. You're focused on moving that character through the world to accomplish its goals, which in traditional narrative and cinema is the only thing that drives stories: the protagonist's goals and how the world acts against those goals. Every traditional narrative can essentially be boiled down to that.
But then in things like Clouds, where your body's not implied, you as a viewer are genuinely yourself in your own body, not a story-level version of your body, and you're left to think about the world, the people in it, what they're saying, and how they connect. You don't become self-conscious in that situation, beyond the self-consciousness you have watching TV. So for me, a lot of the narrative tension of interactivity and making choices comes down to: whose desires are you fulfilling? Are you playing the role of an individual in the story world, or are you playing yourself and trying to discover something? That's one of the core subtleties in how we push this forward. And then there's the mechanic of slipping between modes: your actions motivating the story, goal-oriented behavior like in games, versus your vicarious perception, your association with the characters, passive interactivity, exploration through viewing. Those stand in contrast, and there are very effective games that balance the two. I think of Quantic Dream's work, where it's very narrative, very cinematic, and only at certain moments are you prompted to act. Then you act, and you feel ownership over the progression of the story. But it's not a game where you're trying to win points or get as far as possible or win the race. So thinking about the mechanics of success and failure and what drives them matters. In the end, I think it's super instructive to read screenwriting and think about the way story worked: what underlying mechanics made story function in a cinematic framework, in a cinematic grammar? And then to discover new signifiers for creating those same sorts of reversals of energy in VR, but maybe at the level of the environment or the room rather than the level of the person.
So this is what we're exploring in Blackout, actually, the narrative environments. So I'm excited to see if that succeeds.
[00:39:58.399] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:40:06.645] James George: Ultimate? Yeah, it's a difficult question. Speaking of genres or experiences, I think there is something that sits somewhere between the most powerful narrative film you've ever seen in cinema and the most gripping, engrossing Facebook game you've ever played with your friends. There's somehow a way to take these worlds that are constrained by the rules of a creator, put lots of different people into them socially, and let the chaos ensue, where the people in that experience decipher and discover the rules of the world they're in. I think those will be some of the most exciting experiences. So there's some new form that is not a game and not cinema, but involves a tightly constrained world that people interact in, with full immersion, and that will play out. It'll be really interesting to see. I think of it like immersive theater, but globally distributed. And we're working towards that. There are a lot of hyperbolic ultimates you could talk about, but I'm more interested in the things that are a little bit closer, that can be achieved now.
[00:41:21.923] Kent Bye: Yeah. Cool. Is there anything else that's left unsaid that you'd like to say?
[00:41:28.027] James George: Maybe dovetailing off of what I was just saying about hyperbolic ultimates: I think there's this conception that virtual reality will become an ultimate mirror of actual reality, that there's some kind of strange nirvana that can be reached where we add a layer of inception. I actually think that's a false goal. As a creative community, there's a lot of work to be done to define what exactly this world can be, and what new forms and new genres, not hyperbolic and not all-encompassing, but actually working with the constraints of VR, can create something truly new and transcendent that people will love for generations to come. Maybe it isn't the Matrix, where we're all plugged in and interacting with each other in some alternate reality. So I would encourage the creative community to find your niche, find the nuance, and work with the constraints of the medium right now. That will create the most beautiful experiences, and it will probably be what fuels the future, versus taking huge leaps towards an ultimate desire that's too abstract for right now. So, yeah.
[00:42:39.693] Kent Bye: Awesome. Well, thanks so much James.
[00:42:41.817] James George: Yeah, thank you for the interview. It was great to talk with you.
[00:42:45.362] Kent Bye: So that was James George. He is the co-creator of Clouds with Jonathan Minard, and he's also working on the DepthKit 3D sensor camera with Alexander Porter. So I have a number of different takeaways from this interview. First of all, I think that James is one of the deepest thinkers when it comes to the future of interactive media and the challenges that come with it. I really want to highlight a couple of things he was saying about genres, because from his conceptualization, VR is this new medium, but in order to define what's universal about the medium, you have to look at each of the different genres that are emerging. Right now, the boundaries of those genres are still emerging and evolving. I think of those boundaries a little bit like the 12 domains of human experience that I talked about in episode 355, which is a preliminary look at some of the genres emerging from the different industry verticals and the differences in each of them. But even within gaming and entertainment, there are going to be all sorts of sub-genres of experiences, just as there are many different genres of film. So those genres are accumulating into what the medium of VR is going to end up being, and before you can crack the narrative code of the VR medium, you have to crack the narrative code of each of these different subgenres. I think that's an interesting way of breaking down how to think about storytelling and narrative in VR.
I also really appreciated what James said about going back to the Renaissance: the conceptualization of the Renaissance artist was of someone working in isolation, this individual creative genius, and yet the Renaissance artists of the 21st century are using computer technologies to be connected to each other. And I love that metaphor of the wheel: in order to find the people who were really a big part of this community, they looked at the GitHub commit log. The center of that wheel was the openFrameworks community, and the different Git commits were like the spokes leading out to all the different people who were vital to creating this entire wheel of creative coders and innovation, these 21st-century collaborative Renaissance artists using code as art. The other thing I really wanted to highlight is that there is something very specific and valuable in taking a lot of data and summarizing it into a film that people can understand and synthesize; it's like a summary of all the highlights. Yet it takes a lot of time, effort, and energy, and there can be so much loss, especially when you've gone through the process of interviewing 40 different people and dozens and dozens of hours of content are just left on the cutting room floor. That's something that really motivated me to move away from documentary filmmaking and into podcasting, because I can give people full access to the podcast, and they can have the full experience that I had in learning the information. And so, as listeners of the Voices of VR podcast, you can empathize with how I've done over 400 interviews that have been published so far.
It's nearly six days of content. That's 144 hours. There's not a lot of people who have been able to kind of keep up with consuming all that knowledge and information. It's there if you want to go through it in a linear fashion, but I think this type of process that James and Jonathan have created with clouds gives us possibility of coming up with some way to kind of break it up and for you to really focus in and dive into specific areas that you're interested in. Now the challenge is that you lose context and sometimes something's lost when you're just taking an edited version or you're jumping into the middle of something that people are saying without having the full background to contextualize it for why they're saying it. And so there's something important to doing the linear, but there's also something where it's much more practical to think about how to dive into that specific information. You can kind of think about what has happened with encyclopedias of where some of this immersive technology is going to be headed. So Encyclopedia Britannica is sort of like this perfect example of a specific type of approach of curating knowledge in this linear fashion, this book. Well, something like Wikipedia comes along and it's like this decentralized way of aggregating all these interconnected linked information that's generated from people using this web technology. But the process of actually reading an article in Wikipedia becomes much more interactive in terms of your clicking from one concept to the next to the next and it's almost like much more like our brains actually work of this associative nature of creating this modal graph and you're able to much more easily chase your curiosity through that information and just the way that we cruise the web I don't think we've really come up with a way to do the analog within rich media but I think that's kind of what the clouds documentary represents is this opportunity to start to do that. 
So I've just been really inspired by a lot of the concepts and ideas, and by what it took to actually pull off and carry through the Clouds documentary, and that's why I wanted to focus on it. Even though it's a couple of years old and James and Jonathan have moved on to other projects, I think they may still be releasing an official version of it, so you can check it out and dive into this content. Overall, there are still a lot of juicy open questions about what this interactive media is going to look like, and by looking at the documentary context, we can start to see how it might feed into the future of interactive narratives. So that's all that I have for today. I want to thank you for listening, and if you enjoy the podcast, then please do consider spreading the word and telling your friends. And if you'd like to donate to the podcast, then please consider going to patreon.com slash Voices of VR.